Commit 022ba1fa authored by yhesper

pathtracer doc 1st draft

parent e2ed4635
---
layout: default
title: "(Task 3) Bounding Volume Hierarchy"
permalink: /pathtracer/bounding_volume_hierarchy
---
# (Task 3) Bounding Volume Hierarchy
In this task you will implement a bounding volume hierarchy that accelerates ray-scene intersection. Most of this work will be in `student/bvh.cpp`.
First, take a look at the definition of our `BVH` in `rays/bvh.h`. We represent the BVH as a vector of `Node`s, `nodes`, stored as an implicit tree data structure in the same fashion as the array-based heaps you have likely seen in other courses. A `Node` has the following fields:
* `BBox bbox`: the bounding box of the node (bounds all primitives in the subtree rooted by this node)
* `size_t start`: start index of primitives in the `BVH`'s primitive array
* `size_t size`: the number of primitives in the node's range of the primitive array (i.e., in the subtree rooted at this node)
* `size_t l`: the index of the left child node
* `size_t r`: the index of the right child node
The BVH class also maintains a vector of all primitives in the BVH. The fields `start` and `size` in a BVH `Node` refer to the node's range of contained primitives in this array. The primitives in this array are not initially in any particular order, and you will need to _rearrange the order_ as you build the BVH so that it accurately represents the spatial hierarchy.
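For orientation, the layout described above corresponds to something like the sketch below. This is a paraphrase of the fields just listed, not a substitute for reading `rays/bvh.h`:

```cpp
// Paraphrased sketch of the Node layout described above -- consult
// rays/bvh.h for the authoritative definition.
struct Node {
    BBox bbox;    // bounds of all primitives in this subtree
    size_t start; // first index into the BVH's primitive array
    size_t size;  // number of primitives in this subtree
    size_t l, r;  // indices of the two children in the `nodes` vector
};
```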
The starter code constructs a valid BVH, but it is a trivial BVH with a single node containing all scene primitives.
## Step 0: Bounding Box Calculation
Implement `BBox::hit` in `student/bbox.cpp`.
Also, if you haven't already, implement `Triangle::bbox` in `student/tri_mesh.cpp`.
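The standard approach to `BBox::hit` is the slab test: clip the ray's valid `t` interval against the pair of axis-aligned planes on each axis, and report a hit only if the interval stays non-empty. Below is a minimal sketch; it assumes `BBox` stores `min`/`max` corners and that the routine takes a `Vec2` time interval to clip, so match the names and signature to the actual declarations in `lib/bbox.h` and `lib/ray.h`:

```cpp
#include <algorithm> // std::swap, std::min, std::max

// Slab-test sketch: `times` holds the ray's valid interval and is clipped
// to its overlap with the box. Member and parameter names are assumptions.
bool BBox::hit(const Ray& ray, Vec2& times) const {
    float tmin = times.x, tmax = times.y;
    for(int axis = 0; axis < 3; axis++) {
        float inv_d = 1.0f / ray.dir[axis]; // IEEE infinities handle dir == 0
        float t0 = (min[axis] - ray.point[axis]) * inv_d;
        float t1 = (max[axis] - ray.point[axis]) * inv_d;
        if(inv_d < 0.0f) std::swap(t0, t1); // keep t0 as the near plane
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if(tmax < tmin) return false; // slab intervals no longer overlap
    }
    times = Vec2(tmin, tmax); // report the clipped interval
    return true;
}
```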
## Step 1: BVH Construction
Your job is to construct a `BVH` using the [Surface Area Heuristic](http://15462.courses.cs.cmu.edu/fall2017/lecture/acceleratingqueries/slide_025) discussed in class. Tree construction should happen when the `BVH` object is constructed.
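In outline: recursively split each range of primitives, scoring candidate partitions with the SAH cost (proportional to `S_L * N_L + S_R * N_R`, each child's bounding-box surface area times its primitive count) and rearranging the primitives in place. One possible shape is sketched below; the helpers (`build_rec`, `new_node`) and the `BBox` utilities it leans on (`enclose`, `center`, `surface_area`) are assumptions to be adapted to the actual starter interfaces:

```cpp
#include <algorithm> // std::partition, std::nth_element, std::clamp
#include <cfloat>    // FLT_MAX

// Hypothetical recursive SAH builder: rearranges primitives[start, start+size)
// in place and returns the index of the node created for that range.
template<typename Primitive>
size_t BVH<Primitive>::build_rec(size_t start, size_t size, size_t max_leaf_size) {

    BBox box; // bound of every primitive in this range
    for(size_t i = start; i < start + size; i++) box.enclose(primitives[i].bbox());

    size_t node = new_node(box, start, size, 0, 0);
    if(size <= max_leaf_size) return node; // small ranges stay leaves

    constexpr int B = 8; // centroid buckets per axis
    float best_cost = FLT_MAX, best_plane = 0.0f;
    int best_axis = -1;

    for(int axis = 0; axis < 3; axis++) {
        float lo = box.min[axis], hi = box.max[axis];
        if(hi - lo < 1e-6f) continue; // box is flat along this axis

        BBox bucket_box[B];
        size_t bucket_n[B] = {};
        for(size_t i = start; i < start + size; i++) {
            float c = primitives[i].bbox().center()[axis];
            int b = std::clamp(int(B * (c - lo) / (hi - lo)), 0, B - 1);
            bucket_box[b].enclose(primitives[i].bbox());
            bucket_n[b]++;
        }

        // Sweep the B-1 planes between buckets, scoring each with the
        // SAH cost (constant factors dropped).
        for(int p = 1; p < B; p++) {
            BBox lb, rb;
            size_t ln = 0, rn = 0;
            for(int b = 0; b < p; b++) { lb.enclose(bucket_box[b]); ln += bucket_n[b]; }
            for(int b = p; b < B; b++) { rb.enclose(bucket_box[b]); rn += bucket_n[b]; }
            if(ln == 0 || rn == 0) continue;
            float cost = lb.surface_area() * ln + rb.surface_area() * rn;
            if(cost < best_cost) {
                best_cost = cost;
                best_axis = axis;
                best_plane = lo + (hi - lo) * float(p) / B;
            }
        }
    }
    if(best_axis == -1) return node; // no useful split: keep this leaf

    // Rearrange the range so the left child's primitives come first.
    auto begin = primitives.begin() + start;
    auto mid = std::partition(begin, begin + size, [&](const Primitive& p) {
        return p.bbox().center()[best_axis] < best_plane;
    });
    size_t lsize = size_t(mid - begin);
    if(lsize == 0 || lsize == size) {
        lsize = size / 2; // fallback: median split if the plane separated nothing
        std::nth_element(begin, begin + lsize, begin + size,
                         [&](const Primitive& a, const Primitive& b) {
                             return a.bbox().center()[best_axis] <
                                    b.bbox().center()[best_axis];
                         });
    }
    // Take node indices, not references -- `nodes` may reallocate during
    // the recursive calls below.
    size_t l = build_rec(start, lsize, max_leaf_size);
    size_t r = build_rec(start + lsize, size - lsize, max_leaf_size);
    nodes[node].l = l;
    nodes[node].r = r;
    return node;
}
```

A bucketing scheme like this evaluates only a few candidate planes per axis, which keeps construction fast while capturing most of the SAH's benefit.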
## Step 2: Ray-BVH Intersection
Implement the ray-BVH intersection routine `Trace BVH<Primitive>::hit(const Ray& ray)`. You may wish to consider the node visit order optimizations we discussed in class. Once complete, your renderer should be able to render all of the test scenes in a reasonable amount of time. [Visualization of normals](visualization_of_normals.md) may help with debugging.
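One possible shape for the traversal is the recursive sketch below. The front-to-back optimization visits the child whose bounding box the ray enters first, and skips the farther child entirely when its box cannot contain anything closer than the best hit found so far. The helper name `hit_rec` and members like `dist_bounds`, `Trace::min`, and `Node::is_leaf` are assumptions; check `lib/ray.h` and `rays/trace.h` for the real interfaces:

```cpp
#include <algorithm> // std::swap

// Hypothetical recursive helper, called from BVH<Primitive>::hit with the
// root node index and a default (no-hit) Trace.
template<typename Primitive>
void BVH<Primitive>::hit_rec(const Ray& ray, size_t n, Trace& closest) const {

    const Node& node = nodes[n];
    if(node.is_leaf()) {
        // Leaf: test every contained primitive, keeping the nearest hit.
        for(size_t i = node.start; i < node.start + node.size; i++)
            closest = Trace::min(closest, primitives[i].hit(ray));
        return;
    }

    // Clip the ray against each child's box; t_l/t_r receive entry/exit times.
    Vec2 t_l = ray.dist_bounds, t_r = ray.dist_bounds;
    bool hit_l = nodes[node.l].bbox.hit(ray, t_l);
    bool hit_r = nodes[node.r].bbox.hit(ray, t_r);

    // Order the children near-to-far along the ray.
    size_t near = node.l, far = node.r;
    bool hit_near = hit_l, hit_far = hit_r;
    Vec2 t_near = t_l, t_far = t_r;
    if(t_r.x < t_l.x) {
        std::swap(near, far);
        std::swap(hit_near, hit_far);
        std::swap(t_near, t_far);
    }

    if(hit_near) hit_rec(ray, near, closest);
    // Descend into the far child only if its box could still hold a
    // closer intersection than the one we already have.
    if(hit_far && (!closest.hit || t_far.x < closest.distance))
        hit_rec(ray, far, closest);
}
```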
---
layout: default
title: "(Task 1) Generating Camera Rays"
permalink: /pathtracer/camera_rays
---
# (Task 1) Generating Camera Rays
"Camera rays" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.)
Take a look at `Pathtracer::trace_pixel` in `student/pathtracer.cpp`. The job of this function is to compute the amount of energy arriving at this pixel of the image. Conveniently, we've given you a function `Pathtracer::trace_ray(r)` that provides a measurement of incoming scene radiance along the direction given by ray `r`. See `lib/ray.h` for the interface of `Ray`.
When the number of samples per pixel is 1, you should sample incoming radiance at the center of each pixel by constructing a ray `r` that begins at this sensor location and travels through the camera's pinhole. Once you have computed this ray, call `Pathtracer::trace_ray(r)` to get the energy deposited in the pixel. The expected behavior when supersampling is enabled is described below in Step 3.
Here are some [rough notes](https://drive.google.com/file/d/0B4d7cujZGEBqVnUtaEsxOUI4dTMtUUItOFR1alQ4bmVBbnU0/view) giving more detail on how to generate camera rays.
This tutorial from [Scratchapixel](https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-generating-camera-rays/generating-camera-rays) also provides a detailed walkthrough of what you need to do. (Note that the coordinate convention Scratchapixel adopts is different from ours; always use the coordinate system from the [rough notes](https://drive.google.com/file/d/0B4d7cujZGEBqVnUtaEsxOUI4dTMtUUItOFR1alQ4bmVBbnU0/view).)
**Step 1:** Given the width and height of the screen and a point in screen space, compute the corresponding coordinates of the point in normalized ([0-1]x[0-1]) screen space in `Pathtracer::trace_pixel`. Pass these coordinates to the camera via `Camera::generate_ray` in `camera.cpp`.
**Step 2:** Implement `Camera::generate_ray`. This function should return a ray **in world space** that reaches the given sensor sample point. We recommend that you compute this ray in camera space (where the camera pinhole is at the origin, the camera looks down the -Z axis, and +Y is at the top of the screen). In `util/camera.h`, the `Camera` class stores `vert_fov` and `aspect_ratio`, indicating the vertical field of view of the camera (in degrees, not radians) and the aspect ratio. Note that the camera maintains a camera-space-to-world-space transform matrix `iview` that will come in handy.
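Concretely, the math reduces to mapping the normalized sensor point onto an image plane at `z = -1` in camera space, then moving the resulting ray into world space with `iview`. A sketch under those assumptions (helper names like `Radians`, `Ray::transform`, and `Vec3::unit` follow the starter math library, but verify them):

```cpp
#include <cmath> // std::tan

// Sketch of Camera::generate_ray under the conventions described above.
Ray Camera::generate_ray(Vec2 screen_coord) const {

    // Half-extents of the sensor plane placed at z = -1 in camera space.
    float half_h = std::tan(Radians(vert_fov) * 0.5f);
    float half_w = aspect_ratio * half_h;

    // Map [0,1]^2 screen coordinates onto that plane; (0.5, 0.5) maps to
    // the screen center, i.e. straight down the -Z axis.
    Vec3 dir((2.0f * screen_coord.x - 1.0f) * half_w,
             (2.0f * screen_coord.y - 1.0f) * half_h,
             -1.0f);

    // Build the ray in camera space, then transform it to world space.
    Ray r(Vec3(), dir.unit());
    r.transform(iview);
    return r;
}
```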
**Step 3:** Your implementation of `Pathtracer::trace_pixel` must support supersampling (more than one sample per pixel). The member `Pathtracer::ns_aa` in the raytracer class gives the number of samples of scene radiance your ray tracer should take per pixel (a.k.a. the number of camera rays per pixel). You should implement `Rect::Uniform::sample` (see `src/student/samplers.cpp`) so that it provides uniformly distributed random 2D points in the [0-1]^2 box. Supersampling should then be implemented by calling `Rect::Uniform::sample` to obtain randomly chosen points within the pixel.
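Putting the three steps together, the per-pixel loop might look like the sketch below. The member names (`out_w`, `out_h`, `RNG::unit`) and the exact `sample` signature are assumptions; follow the declarations in the starter code (your sampler may also need to report a pdf):

```cpp
// Illustrative supersampled trace_pixel plus a matching uniform sampler.
Vec2 Samplers::Rect::Uniform::sample() {
    // RNG::unit() is assumed to return a uniform float in [0,1).
    return Vec2(RNG::unit() * size.x, RNG::unit() * size.y);
}

Spectrum Pathtracer::trace_pixel(size_t x, size_t y) {

    Samplers::Rect::Uniform sampler; // uniform over the unit square
    Spectrum radiance;

    for(size_t i = 0; i < ns_aa; i++) {

        // Use the pixel center for a single sample per pixel; otherwise
        // jitter uniformly within the pixel.
        Vec2 offset = (ns_aa == 1) ? Vec2(0.5f, 0.5f) : sampler.sample();

        // Normalized [0,1]^2 screen-space coordinate of this sample.
        Vec2 xy((float(x) + offset.x) / float(out_w),
                (float(y) + offset.y) / float(out_h));

        radiance += trace_ray(camera.generate_ray(xy));
    }
    return radiance * (1.0f / float(ns_aa)); // average the samples
}
```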
Once you have implemented `Pathtracer::trace_pixel`, `Rect::Uniform::sample` and `Camera::generate_ray`, you should have a working camera.
**Tips:**
* Since it'll be hard to know if your camera rays are correct until you implement primitive intersection, we recommend debugging your camera rays by checking what your implementation of `Camera::generate_ray` does with rays at the center of the screen (0.5, 0.5) and at the corners of the image.
* The code can log the results of raytracing for visualization and debugging. To do so, simply call `Pathtracer::log_ray` in your `Pathtracer::trace_pixel`. `Pathtracer::log_ray` takes three arguments: the ray that you want to log, a float that specifies the time/distance to log the ray up to, and a color. You don't need to worry about the color, as it is set by default.
After running the ray tracer, rays will be shown as lines in the visualizer. Press `v` to switch to the visualizer, and press `s` to toggle showing rays. A yellow ray indicates that it intersected some primitive (which you will implement soon). Be sure to wait for rendering to complete so you see all rays while visualizing; a message is printed to standard output on completion. **Ray logging is NOT thread-safe, so only use it with a single thread (the default `-t 1` setting), and remember to disable it when you render using multiple threads.**
**Extra credit ideas:**
* Modify the implementation of the camera to simulate a camera with a finite aperture (rather than a pinhole camera). This will allow your ray tracer to simulate the effect of defocus blur; see the thin-lens sketch after this list.
* Write your own sampler that generates samples with improved distribution. Some examples include:
* Jittered Sampling
* Multi-jittered sampling
* N-Rooks (Latin Hypercube) sampling
* Sobol sequence sampling
* Halton sequence sampling
* Hammersley sequence sampling
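For the finite-aperture idea above, one standard model is the thin lens: jitter the ray origin across a disk-shaped aperture and aim every ray through the point where the original pinhole ray crosses the focal plane, so that plane stays in focus. A hedged sketch follows; `aperture` and `focal_dist` are hypothetical new camera members, and `RNG::unit`/`PI_F` are assumed math-library helpers:

```cpp
#include <cmath> // std::tan, std::sqrt, std::cos, std::sin

// Hypothetical thin-lens variant of generate_ray for defocus blur.
Ray Camera::generate_ray_thin_lens(Vec2 screen_coord) const {

    // Pinhole direction, exactly as in the basic implementation.
    float half_h = std::tan(Radians(vert_fov) * 0.5f);
    float half_w = aspect_ratio * half_h;
    Vec3 dir((2.0f * screen_coord.x - 1.0f) * half_w,
             (2.0f * screen_coord.y - 1.0f) * half_h,
             -1.0f);

    // Point where this pinhole ray pierces the focal plane (z = -focal_dist).
    Vec3 focus = dir * focal_dist; // dir.z == -1, so focus.z == -focal_dist

    // Sample a uniformly random origin on the disk-shaped aperture.
    float r = aperture * std::sqrt(RNG::unit());
    float theta = 2.0f * PI_F * RNG::unit();
    Vec3 origin(r * std::cos(theta), r * std::sin(theta), 0.0f);

    // All rays through the same focal point converge there, so geometry
    // on the focal plane stays sharp while everything else blurs.
    Ray ray(origin, (focus - origin).unit());
    ray.transform(iview);
    return ray;
}
```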