In this task you will implement a bounding volume hierarchy that accelerates ray-scene intersection. Most of this work will be in `student/bvh.inl`. Note that this file has an unusual extension (`.inl` = inline) because it is an implementation file for a template class. This means `bvh.h` must `#include` it, so all code that sees `bvh.h` will also see `bvh.inl`.
...
...
The starter code constructs a valid BVH, but it is a trivial BVH with a single node containing all scene primitives.
Finally, note that the BVH visualizer will start drawing from `BVH::root_idx`, so be sure to set this to the proper index (probably 0 or `nodes.size() - 1`, depending on your implementation) when you build the BVH.
Implement `BBox::hit` in `student/bbox.cpp`. Also, if you haven't already from Task 2, implement `Triangle::bbox` in `student/tri_mesh.cpp` (`Triangle::bbox` should be fairly straightforward). We recommend checking out this [Scratchapixel article](https://www.scratchapixel.com/lessons/3d-basic-rendering/minimal-ray-tracer-rendering-simple-shapes/ray-box-intersection) for implementing bounding box intersections.
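For reference, the standard "slab" test might look like the following sketch (this assumes `BBox` stores `min`/`max` corners and that `Ray` exposes `point` and `dir`; check `rays/bbox.h` for the real signature):

```
// Sketch only: intersect the ray against the box and report the hit interval.
// A fully robust version should handle ray.dir[axis] == 0 explicitly.
bool BBox::hit(const Ray& ray, Vec2& times) const {
    float tmin = times.x, tmax = times.y;
    for(int axis = 0; axis < 3; axis++) {
        // Intersect the ray with the two parallel planes ("slab") of this axis.
        float inv_d = 1.0f / ray.dir[axis];
        float t0 = (min[axis] - ray.point[axis]) * inv_d;
        float t1 = (max[axis] - ray.point[axis]) * inv_d;
        if(inv_d < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if(tmax < tmin) return false; // the slabs do not overlap: no hit
    }
    // Report the intersected interval back to the caller.
    times = Vec2(tmin, tmax);
    return true;
}
```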
## Step 1: BVH Construction
Your job is to construct a `BVH` in `void BVH<Primitive>::build` in `student/bvh.inl` using the [Surface Area Heuristic](http://15462.courses.cs.cmu.edu/fall2017/lecture/acceleratingqueries/slide_025) discussed in class. Tree construction occurs when the BVH object is constructed. Below is the pseudocode that your BVH construction procedure should generally follow (copied from lecture slides).
**Note:** You may find that this task is one of the most time consuming parts of A3, especially since this part of the documentation is intentionally sparse.
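For illustration, here is a rough sketch of how a bucketed SAH split could be evaluated for one node along one axis (this is a sketch, not the lecture pseudocode, and it assumes `BBox` helpers such as `center()`, `enclose()`, and `surface_area()`; adapt the names to the starter code):

```
// Inside BVH<Primitive>::build, for a node covering primitives[start, start + size)
// whose bounds are node_box. Repeat over all three axes and keep the cheapest split.
constexpr int NUM_BUCKETS = 16;
struct Bucket { BBox box; size_t count = 0; };
Bucket buckets[NUM_BUCKETS];

float extent = node_box.max[axis] - node_box.min[axis];   // assume extent > 0

// 1. Bin each primitive into a bucket by the centroid of its bounding box.
for(size_t i = start; i < start + size; i++) {
    BBox b = primitives[i].bbox();
    int idx = std::min(NUM_BUCKETS - 1,
                       (int)(NUM_BUCKETS * (b.center()[axis] - node_box.min[axis]) / extent));
    buckets[idx].box.enclose(b);
    buckets[idx].count++;
}

// 2. Evaluate the SAH cost of each of the NUM_BUCKETS - 1 candidate partitions:
//    cost = (S_left / S_node) * N_left + (S_right / S_node) * N_right.
float best_cost = std::numeric_limits<float>::infinity();
int best_split = -1;
for(int split = 1; split < NUM_BUCKETS; split++) {
    BBox left, right;
    size_t n_left = 0, n_right = 0;
    for(int b = 0; b < split; b++)            { left.enclose(buckets[b].box);  n_left += buckets[b].count; }
    for(int b = split; b < NUM_BUCKETS; b++)  { right.enclose(buckets[b].box); n_right += buckets[b].count; }
    float cost = (left.surface_area() * n_left + right.surface_area() * n_right) / node_box.surface_area();
    if(n_left > 0 && n_right > 0 && cost < best_cost) { best_cost = cost; best_split = split; }
}

// 3. Partition the primitive range at the overall best (axis, split), create the two
//    child nodes, and recurse; make a leaf once size <= max_leaf_size.
```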
Implement the ray-BVH intersection routine `Trace BVH<Primitive>::hit(const Ray& ray)` in `student/bvh.inl`. You may wish to consider the node visit order optimizations we discussed in class. Once complete, your renderer should be able to render all of the test scenes in a reasonable amount of time. [Visualization of normals](visualization_of_normals.md) may help with debugging.
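One possible shape for the traversal, as a sketch (the `Node` fields, `Trace::min`, and the recursive helper are assumptions; any helper you add would need to be declared in the BVH header):

```
// Sketch only: front-to-back traversal that skips subtrees which cannot
// contain a closer hit than the one already found.
template<typename Primitive>
Trace BVH<Primitive>::hit(const Ray& ray) {
    Trace ret;
    if(!nodes.empty()) find_closest_hit(ray, root_idx, ret);
    return ret;
}

template<typename Primitive>
void BVH<Primitive>::find_closest_hit(const Ray& ray, size_t node_idx, Trace& closest) {

    const Node& node = nodes[node_idx];

    if(node.is_leaf()) {
        // Test every primitive in the leaf and keep the nearest hit.
        for(size_t i = node.start; i < node.start + node.size; i++)
            closest = Trace::min(closest, primitives[i].hit(ray));
        return;
    }

    // Intersect the ray against both children's bounding boxes.
    Vec2 t_left = ray.dist_bounds, t_right = ray.dist_bounds;
    bool hit_left = nodes[node.l].bbox.hit(ray, t_left);
    bool hit_right = nodes[node.r].bbox.hit(ray, t_right);

    // Node visit order optimization: descend into the nearer child first, and
    // only visit the farther child if its box starts closer than the best hit.
    bool left_first = !hit_right || (hit_left && t_left.x <= t_right.x);
    size_t near_idx = left_first ? node.l : node.r;
    size_t far_idx  = left_first ? node.r : node.l;
    bool hit_near   = left_first ? hit_left : hit_right;
    bool hit_far    = left_first ? hit_right : hit_left;
    float t_far     = left_first ? t_right.x : t_left.x;

    if(hit_near) find_closest_hit(ray, near_idx, closest);
    if(hit_far && (!closest.hit || t_far < closest.distance))
        find_closest_hit(ray, far_idx, closest);
}
```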
"Camera rays" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.)
"Camera rays" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.) Your job is to generate these rays, which is the first step in the raytracing procedure.
## Step 1: `Pathtracer::trace_pixel`
Take a look at `Pathtracer::trace_pixel` in `student/pathtracer.cpp`. The job of this function is to compute the amount of energy arriving at this pixel of the image. Conveniently, we've given you a function `Pathtracer::trace_ray(r)` that provides a measurement of incoming scene radiance along the direction given by ray `r`. See `lib/ray.h` for the interface of the `Ray` class.
Here are some [rough notes](https://drive.google.com/file/d/0B4d7cujZGEBqVnUtaEsxOUI4dTMtUUItOFR1alQ4bmVBbnU0/view) giving more detail on how to generate camera rays.
Given the width and height of the screen, and a point's _screen space_ coordinates (`size_t x, size_t y`), compute the point's _normalized_ ([0-1] x [0-1]) screen space coordinates in `Pathtracer::trace_pixel`. Pass these coordinates to the camera via `Camera::generate_ray` in `camera.cpp` (note that `Camera::generate_ray` accepts a `Vec2` object as its input argument).
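As a rough sketch (the `trace_pixel` signature, the `out_w`/`out_h` image-size members, the `camera` member, and the sampler usage are assumptions; check the starter code for the real names):

```
Spectrum Pathtracer::trace_pixel(size_t x, size_t y) {

    // Random offset inside the pixel (the subject of Step 3 below); the real
    // sample() may also report a pdf.
    Samplers::Rect::Uniform sampler(Vec2(1.0f, 1.0f));
    Vec2 offset = sampler.sample();

    // Normalize to [0-1] x [0-1] screen space.
    Vec2 xy((x + offset.x) / out_w, (y + offset.y) / out_h);

    // Trace the corresponding camera ray into the scene.
    Ray out = camera.generate_ray(xy);
    if(RNG::coin_flip(0.0005f)) log_ray(out, 10.0f);   // optional debug logging
    return trace_ray(out);
}
```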
## Step 2: `Camera::generate_ray`
Implement `Camera::generate_ray`. This function should return a ray **in world space** that reaches the given sensor sample point, i.e. the input argument. We recommend that you compute this ray in camera space (where the camera pinhole is at the origin, the camera is looking down the -Z axis, and +Y is at the top of the screen). In `util/camera.h`, the `Camera` class stores `vert_fov` and `aspect_ratio` indicating the vertical field of view of the camera (in degrees, not radians) as well as the aspect ratio. Note that the camera maintains a camera-space-to-world-space transform matrix `iview` that will come in handy.
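A possible camera-space construction, as a sketch only (the `Ray` constructor, `PI_F`, and the exact way `iview` is applied are assumptions about the starter code):

```
Ray Camera::generate_ray(Vec2 screen_coord) const {

    // Half-size of the sensor plane at z = -1, derived from the vertical field
    // of view (vert_fov is in degrees) and the aspect ratio.
    float half_h = std::tan(0.5f * vert_fov * PI_F / 180.0f);
    float half_w = half_h * aspect_ratio;

    // Map normalized [0-1]^2 screen coordinates onto that plane; (0.5, 0.5)
    // maps to the center of the screen, straight down -Z.
    Vec3 sensor((screen_coord.x - 0.5f) * 2.0f * half_w,
                (screen_coord.y - 0.5f) * 2.0f * half_h,
                -1.0f);

    // Transform the pinhole (camera-space origin) and the sensor point to world
    // space with iview, then build the world-space ray between them.
    Vec3 world_origin = iview * Vec3(0.0f, 0.0f, 0.0f);
    Vec3 world_sensor = iview * sensor;
    return Ray(world_origin, (world_sensor - world_origin).unit());
}
```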
## Step 3: Super-sampling

Your implementation of `Pathtracer::trace_pixel` must support super-sampling. The member `Pathtracer::n_samples` specifies the number of samples of scene radiance to evaluate per pixel. The starter code will call `Pathtracer::trace_pixel` once for each sample, so your implementation of `Pathtracer::trace_pixel` should choose a new location within the pixel each time.
To choose a sample within the pixel, you should implement `Rect::Uniform::sample` (see `src/student/samplers.cpp`), such that it provides (random) uniformly distributed 2D points within the rectangular region specified by the origin and the member `Rect::Uniform::size`. You may then create a `Rect::Uniform` sampler with a one-by-one region and call `sample()` to obtain randomly chosen offsets within the pixel.
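A minimal sketch of that sampler (the constructor and the exact `sample()` signature, which may also report a pdf, are assumptions):

```
Vec2 Rect::Uniform::sample() const {
    // RNG::unit() is assumed to return a uniform float in [0,1).
    return Vec2(RNG::unit() * size.x, RNG::unit() * size.y);
}
```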
Once you have implemented `Pathtracer::trace_pixel`, `Rect::Uniform::sample`, and `Camera::generate_ray`, you should have a working camera (see the **Raytracing Visualization** section below to confirm that your camera is indeed working).
## Step 4: `Camera::generate_ray` → Defocus Blur and Bokeh
`Camera` also includes the members `aperture` and `focal_dist`. These parameters are used to simulate the effects of de-focus blur and bokeh found in real cameras. Focal distance represents the distance between the camera aperture and the plane that is perfectly in focus. To use it, you must simply scale up the sensor position from Step 2 (and hence the ray direction) by `focal_dist` instead of leaving it on the `z = -1` plane. You might notice that this doesn't actually change anything about your result, since this is just scaling up a vector that is later normalized. However, now aperture comes in: by default, all rays start at a single point, representing a pinhole camera. But when `aperture > 0`, we want to randomly choose the ray origin from an `aperture`x`aperture` square centered at the origin and facing the camera direction (-Z). Then, we use this point as the starting point of the ray while keeping its sensor position fixed (consider how that changes the ray direction). Now it's as if the same image was taken from slightly off origin. This simulates real cameras with non-pinhole apertures: the final photo is equivalent to averaging images taken by pinhole cameras placed at every point in the aperture.
Finally, we can see that a non-zero aperture makes focal distance matter: objects on the focal plane are unaffected, since where the ray hits on the sensor is the same regardless of the ray's origin. However, rays that hit objects closer or farther than the focal distance will be able to "see" slightly different parts of the object based on the ray origin. Averaging over many rays within a pixel, this results in collecting colors from a region slightly larger than that pixel would cover given zero aperture, causing the object to become blurry. We are using a square aperture, so bokeh effects will reflect this.
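In code, this could be a small change to the `generate_ray` sketch from Step 2 (a sketch only; `aperture`, `focal_dist`, and `RNG::unit()` returning a uniform float in [0,1) are the assumptions here):

```
// Point on the plane of perfect focus that this sensor sample "sees".
Vec3 focus_point = sensor * focal_dist;   // `sensor` from the Step 2 sketch (z = -1 plane)

// Jitter the ray origin across an aperture x aperture square at z = 0
// (this degenerates back to a pinhole when aperture == 0).
Vec3 origin((RNG::unit() - 0.5f) * aperture,
            (RNG::unit() - 0.5f) * aperture,
            0.0f);

// The ray starts at the aperture point but still passes through focus_point,
// so objects on the focal plane stay sharp while others blur.
Vec3 world_origin = iview * origin;
Vec3 world_focus = iview * focus_point;
return Ray(world_origin, (world_focus - world_origin).unit());
```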
You can test aperture/focal distance by adjusting `aperture` and `focal_dist` using the camera UI and examining the logged rays. Once you have implemented primitive intersections and path tracing (Tasks 3/5), you will be able to properly render `dof.dae`:

![depth of field test](new_results/dof.png)
## Raytracing Visualization & Tips
**Tip 1:** This tutorial from [Scratchapixel](https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-generating-camera-rays/generating-camera-rays) also provides a detailed walkthrough of generating camera rays. Note that the coordinate convention Scratchapixel adopts is different from the one we use; stick to the coordinate system from the [rough notes](https://drive.google.com/file/d/0B4d7cujZGEBqVnUtaEsxOUI4dTMtUUItOFR1alQ4bmVBbnU0/view) throughout.
**Tip 2:** Since you won't know if your camera rays are correct until you implement primitive intersections, we recommend debugging camera rays by checking what your implementation of `Camera::generate_ray` does with rays at the center of the screen (0.5, 0.5) and at the corners of the image.
**Raytracing Visualization**
The code can log the results of raytracing for visualization and debugging. To do so, simply call function `Pathtracer::log_ray` in your `Pathtracer::trace_pixel`. Function `Pathtracer::log_ray` takes in 3 arguments: the ray that you want to log, a float that specifies the distance to log that ray up to, and a color for the ray. If you don't pass a color, it will default to white.
You should log only a portion of the generated rays, or else the result will be hard to interpret. To do so, you can add `if(RNG::coin_flip(0.0005f)) log_ray(out, 10.0f);` to log 0.05% of camera rays.
Finally, you can visualize the logged rays by checking the box for Logged rays under Visualize and then **starting the render** (Open Render Window -> Start Render). After running the path tracer, rays will be shown as lines in the visualizer. Be sure to wait for rendering to complete so you see all rays while visualizing.
![logged_rays](new_results/log_rays.png)
### Extra credit ideas:
* Write your own camera pixel sampler (replacing Rect::Uniform) that generates samples with improved distribution. Some examples include:
The final task of this assignment will be to implement a new type of light source: an infinite environment light. An environment light is a light that supplies incident radiance (really, the light intensity dPhi/dOmega) from all directions on the sphere. Rather than using a predefined collection of explicit lights, an environment light is a capture of the actual incoming light from some real-world scene; rendering using environment lighting can be quite striking.
The intensity of incoming light from each direction is defined by a texture map parameterized by phi and theta, as shown below.
...
...
In this task you need to implement the `Env_Map::sample` and `Env_Map::sample_direction` methods in `student/env_light.cpp`. You'll start with uniform direction sampling to get things working, and then move to a more advanced implementation that uses **importance sampling** to significantly reduce variance in rendered images.
## Step 1: Uniformly sampling the environment map
To get things working, your first implementation of `Env_Map::sample` will be quite simple. You should generate a random direction on the sphere (**with uniform (1/4pi) probability with respect to solid angle**), convert this direction to coordinates (phi, theta) and then look up the appropriate radiance value in the texture map using **bilinear interpolation** (note: we recommend you begin with bilinear interpolation to keep things simple.)
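For instance, the direction-to-texel conversion might look like this hypothetical helper (a sketch; `PI_F` and the image dimensions `w`/`h` are assumptions about the starter code):

```
// Sketch only: map a unit direction to continuous texel coordinates in a
// w-by-h environment map parameterized by (phi, theta).
Vec2 dir_to_texel(Vec3 dir, size_t w, size_t h) {
    // theta in [0, pi] measured from +Y; phi in [0, 2*pi) around the Y axis.
    float theta = std::acos(std::clamp(dir.y, -1.0f, 1.0f));
    float phi = std::atan2(dir.z, dir.x);
    if(phi < 0.0f) phi += 2.0f * PI_F;
    // (phi, theta) -> (u, v); bilinearly blend the four texels surrounding (u, v).
    return Vec2(phi / (2.0f * PI_F) * (float)w, theta / PI_F * (float)h);
}
```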
...
...
**Summing the fluxes for all pixels, then normalizing the values so that they sum to one, yields a discrete probability distribution for picking a pixel based on flux through its corresponding solid angle on the sphere.**
To sample this 2D discrete probability distribution, we recommend treating the image as a single vector (row-major), where
the CDF of a pixel is the sum of the PDFs of the pixels before it. You can then use inversion sampling with this vector to sample a pixel from this 2D discrete probability distribution.
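A sketch of that approach (the per-pixel `pdf` vector, `width`, and `RNG::unit()` returning a uniform float in [0,1) are assumptions; in practice you would precompute the CDF once when the environment map is loaded):

```
// Sketch only: build a row-major CDF over all pixels, then invert it with a
// binary search.
std::vector<float> cdf(pdf.size());
float running = 0.0f;
for(size_t i = 0; i < pdf.size(); i++) {
    running += pdf[i];
    cdf[i] = running;          // CDF = sum of PDFs up to and including pixel i
}

// Inversion: find the first pixel whose CDF value reaches a uniform random number.
float xi = RNG::unit();
size_t idx = std::lower_bound(cdf.begin(), cdf.end(), xi) - cdf.begin();
idx = std::min(idx, cdf.size() - 1);   // guard against float round-off
size_t px = idx % width;               // column of the sampled pixel
size_t py = idx / width;               // row of the sampled pixel
```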
Now that your ray tracer generates camera rays, we need to be able to answer the core query in ray tracing: "does this ray hit this object?" Here, you will start by implementing ray-object intersection routines against the two types of objects in the starter code: **triangles** and **spheres**. Later, we will use a BVH to accelerate these queries, but for now we consider an intersection test against a single object.
First, take a look at `rays/object.h` for the interface of the `Object` class. An `Object` can be **either** a `Tri_Mesh`, a `Shape`, a BVH (which you will implement in Task 3), or a list of `Objects`. Right now, we are only dealing with the `Tri_Mesh` and `Shape` cases, and their interfaces are in `rays/tri_mesh.h` and `rays/shapes.h`, respectively. `Tri_Mesh` contains a BVH of `Triangle`, and in this task you will be working with the `Triangle` class. For `Shape`, you are going to work with `Sphere`s, which is the major type of `Shape` in Scotty 3D.
Now, you need to implement the `hit` routine for both `Triangle` and `Sphere`. `hit` takes in a ray and returns a `Trace` structure, which contains information on whether the ray hits the object and, if it does, information describing the surface at the point of the hit. See `rays/trace.h` for the definition of `Trace`.
...
...
One important detail of the ray structure is that `dist_bounds` is a mutable field. This means it can be modified even on a `const` ray, which lets intersection routines record the closest hit distance found so far.
---
## Step 1: `Triangle::hit`
The first intersection routine is the `hit` routine for the triangle mesh, in `student/tri_mesh.cpp`.
...
...
Once you've successfully implemented triangle intersection, you will be able to render many of the scenes in the media directory. However, your ray tracer will be very slow!
**Tip:** While you are working with `student/tri_mesh.cpp`, you can choose to implement `Triangle::bbox` as well (pretty straightforward to do), which is needed for Task 3.
## Step 2: `Sphere::hit`
You also need to implement the `hit` routine for the `Sphere` class in `student/shapes.cpp`. Remember that your intersection tests should respect the ray's `dist_bound`. Because spheres always represent closed surfaces, you should not flip back-facing normals as you did with triangles.
**Tip 1:** Take care **not** to use the `Vec3::normalize()` method when computing your normal vector. You should instead use `Vec3::unit()`, since `Vec3::normalize()` will actually change the `Vec3` calling object rather than returning a normalized version.
**Tip 2:** A common mistake is to forget to check the case where the first intersection time `t1` is out of bounds but the second intersection time `t2` is in bounds (in which case you should return `t2`).
---
[Visualization of normals](visualization_of_normals.md) might be very helpful with debugging.
Now that you have implemented the ability to sample more complex light paths, it's finally time to add support for more types of materials (other than the fully Lambertian material that you have implemented in Task 5). In this task you will add support for two types of materials: a perfect mirror and glass (a material featuring both specular reflection and transmittance) in `student/bsdf.cpp`.
**Note:** In the BSDF diagrams and documentation below, both the `out_dir` and the returned in-direction are pointing away from the intersection point of the ray and the surface, as illustrated in this picture below. This is so that it is easy to define the angles with respect to the surface normal.
Also, remember that in pathtracing we trace rays _backwards_ (from the camera into the scene, against the flow of light from the scene to the camera), which is why the outputs of these BSDF diagrams (and hence of your BSDF functions) correspond to the input rays of the pathtracing procedure.
To get started, take a look at the BSDF interface in `rays/bsdf.h`. There are a number of key methods you should understand in the `BSDF` class:

* `Spectrum evaluate(Vec3 out_dir, Vec3 in_dir)`: evaluates the distribution function for a given pair of directions.
...
...
There are also two helper functions in the BSDF class in `student/bsdf.cpp` that you will need to implement:
* `Vec3 reflect(Vec3 dir)` returns a direction that is the **perfect specular reflection** direction corresponding to `dir` (reflection of `dir` about the normal, which in the surface coordinate space is [0,1,0]). More detail about specular reflection is [here](http://15462.courses.cs.cmu.edu/fall2015/lecture/reflection/slide_028).
* `Vec3 refract(Vec3 out_dir, float index_of_refraction, bool& was_internal)` returns the ray that results from refracting the ray in `out_dir` about the surface according to [Snell's Law](http://15462.courses.cs.cmu.edu/fall2015/lecture/reflection/slide_032). The surface's index of refraction is given by the argument `index_of_refraction`. Your implementation should assume that if the ray in `out_dir` **is entering the surface** (that is, if `cos(out_dir, N=[0,1,0]) > 0`) then the ray is currently in vacuum (index of refraction = 1.0). If `cos(out_dir, N=[0,1,0]) < 0` then your code should assume the ray is leaving the surface and entering vacuum. Remember to **flip the sign of the x and z components** of the output ray direction from the refract method (see the sketch after this list).
* Note that in `reflect` and `refract`, both the `out_dir` and the returned in-direction are pointing away from the intersection point of the ray and the surface, as illustrated in this picture below.
* There is a special case to account for, known as **total internal reflection**. This happens for certain angles when the incoming ray is in the material with the higher refractive index. These are angles greater than the _critical angle_, i.e. the incident angle \theta_i for which the refracted angle \theta_t would be >= 90 degrees, so that there is no real solution to Snell's law.
**In the case of total internal reflection, you should set `*was_internal` to `true`**.
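Here is a sketch of both helpers under the conventions above (a sketch only; treat the exact handling of the returned direction in the total-internal-reflection case as an assumption):

```
// Sketch only: both functions work in the surface's local coordinate space,
// where the normal is (0, 1, 0) and out_dir points away from the surface.
Vec3 reflect(Vec3 dir) {
    // Mirror about the normal: the tangential (x, z) part flips sign.
    return Vec3(-dir.x, dir.y, -dir.z);
}

Vec3 refract(Vec3 out_dir, float index_of_refraction, bool& was_internal) {
    was_internal = false;

    // Entering the surface (out_dir above it) means the ray is coming from vacuum.
    bool entering = out_dir.y > 0.0f;
    float ni = entering ? 1.0f : index_of_refraction;
    float nt = entering ? index_of_refraction : 1.0f;
    float ratio = ni / nt;

    float cos_i = std::abs(out_dir.y);
    float sin2_t = ratio * ratio * (1.0f - cos_i * cos_i); // Snell's law

    if(sin2_t > 1.0f) {
        // Total internal reflection: no transmitted ray exists.
        was_internal = true;
        return reflect(out_dir);
    }

    float cos_t = std::sqrt(1.0f - sin2_t);
    // Flip x and z (the ray continues through the surface) and place the
    // transmitted direction on the opposite side of the surface from out_dir.
    return Vec3(-ratio * out_dir.x,
                entering ? -cos_t : cos_t,
                -ratio * out_dir.z);
}
```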
## Step 1: `BSDF_Mirror`

Implement the class `BSDF_Mirror`, which represents a material with perfect specular reflection (a perfect mirror). You should implement `BSDF_Mirror::sample`, `BSDF_Mirror::evaluate`, and `reflect`.
**Hint:** the mirror BSDF is a Dirac Delta. What does this mean for the pdf and the evaluate function?
## Step 2: `BSDF_Glass`
Implement the class `BSDF_Glass`, a glass-like material that both reflects and transmits light. As discussed in class, the fraction of light that is reflected and transmitted through glass is given by the dielectric Fresnel equations.
...
...
Alternatively, you may compute <img src="dielectric_eq8.png" width="18"> using Schlick's approximation.
### Distribution Function for Transmitted Light
We described the BRDF for perfect specular reflection in class; however, we did not discuss the distribution function for transmitted light. Since refraction "spreads" or "condenses" a beam, unlike perfect reflection, the radiance along the ray changes due to a refraction event. In your assignment you should use Snell's Law to compute the direction of refraction rays, and use the following distribution function to compute the radiance of transmitted rays. We refer you to Pharr, Jakob, and Humphreys's book [Physically Based Rendering](http://www.pbr-book.org/3ed-2018/Reflection_Models/Specular_Reflection_and_Transmission.html) for a derivation based on Snell's Law and the relation <img src="dielectric_eq10.png" width="150">. (But you are more than welcome to attempt a derivation on your own!)
When you are done, you will be able to render images like this one, the Cornell Box with a metal and glass sphere (`cbox.dae`):
Up to this point, your renderer simulates light which begins at a source, bounces off a surface, and hits a camera. However, light can take much more complicated paths, bouncing off many surfaces before eventually reaching the camera. Simulating this multi-bounce light is referred to as _indirect illumination_, and it is critical to producing realistic images, especially when specular surfaces are present.
In this task you will modify your renderer to simulate multi-bounce light, adding support for indirect illumination in addition to the direct illumination implemented for you in the starter code.
## Step 1: `Pathtracer::trace_ray`
You must modify `Pathtracer::trace_ray` to simulate multiple bounces using the [Russian Roulette](http://15462.courses.cs.cmu.edu/spring2020/lecture/montecarloraytracing/slide_044) algorithm discussed in class.
The basic structure will be as follows (a rough sketch of this structure appears after the list):
* (1) Randomly select a new ray direction using `bsdf.sample` (which you will implement in Step 2)
* (2) Potentially terminate the path (using Russian Roulette)
* (3) Recursively trace the ray to evaluate the weighted reflectance contribution due to light from this direction. Remember to respect the maximum number of bounces from `max_depth` (which is a member of class `Pathtracer`). Don't forget to add in the BSDF emissive component to account for light materials!
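A rough sketch of that structure (the `BSDF_Sample` fields, `Ray::depth`, the local-to-world rotation `object_to_world`, and the return type of `trace_ray` are assumptions about the starter code; adapt to `student/pathtracer.cpp`):

```
// Sketch only: the indirect-illumination portion of Pathtracer::trace_ray,
// after direct lighting has been accumulated into `radiance`.

// (1) Sample a new direction from the BSDF (in the surface's local space).
BSDF_Sample sample = bsdf.sample(out_dir);
radiance += sample.emissive;   // emissive materials (e.g. area lights)

// Rendering-equation factors for this bounce.
float cos_theta = std::abs(sample.direction.y);
Spectrum throughput = ray.throughput * sample.attenuation * (cos_theta / sample.pdf);

// (2) Russian Roulette: terminate based on the (clamped) path throughput,
// and always stop once max_depth bounces have been taken.
float p_terminate = 1.0f - std::min(1.0f, throughput.luma());
if(ray.depth >= max_depth || RNG::coin_flip(p_terminate)) return radiance;

// (3) Recurse along the sampled direction and weight the result. The bounce
// ray starts at the hit point, carries the updated throughput, and its
// contribution is divided by the survival probability.
Vec3 world_dir = object_to_world.rotate(sample.direction); // assumed local-to-world helper
Ray bounce(hit.position, world_dir);
bounce.depth = ray.depth + 1;
bounce.throughput = throughput;
bounce.dist_bounds.x = EPS_F;   // avoid re-hitting the same surface
radiance += trace_ray(bounce) * sample.attenuation
            * (cos_theta / (sample.pdf * (1.0f - p_terminate)));
return radiance;
```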
## Step 2: `BSDF_Lambertian::sample`
Implement `BSDF_Lambertian::sample` for diffuse reflections, which randomly samples a direction from a uniform hemisphere distribution and returns a `BSDF_Sample`. Note that the interface is in `rays/bsdf.h`. Task 6 contains further discussion of sampling BSDFs; reading ahead may help your understanding. The implementation of `BSDF_Lambertian::evaluate` is already provided to you.
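A minimal sketch (the `BSDF_Sample` field names and the exact `sample()` signature of the hemisphere sampler, which may also report a pdf, are assumptions):

```
BSDF_Sample BSDF_Lambertian::sample(Vec3 out_dir) const {
    BSDF_Sample ret;
    ret.direction = sampler.sample();              // uniform hemisphere direction
    ret.pdf = 1.0f / (2.0f * PI_F);                // uniform hemisphere pdf
    ret.attenuation = evaluate(out_dir, ret.direction);
    ret.emissive = Spectrum();                     // Lambertian surfaces don't emit
    return ret;
}
```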
...
...
Note:

* To account for emissive materials (e.g. the area light in the Cornell Box scene, `cbox.dae`), simply add the BSDF sample's emissive term to your total radiance, i.e. `L += sample.emissive`.
* The `Sampler` classes in `student/samplers.cpp` contain helper functions for random sampling, which you will use for sampling. Our starter code uses uniform hemisphere sampling, `Samplers::Hemisphere::Uniform sampler` (see `rays/bsdf.h` and `student/samplers.cpp`), which is already implemented for you.
* If you want to implement Cosine-Weighted Hemisphere sampling for extra credit, fill in `Hemisphere::Cosine::sample` in `student/samplers.cpp` and then, in `rays/bsdf.h`, change `Samplers::Hemisphere::Uniform sampler` to `Samplers::Hemisphere::Cosine sampler`.
---
After correctly implementing path tracing, your renderer should be able to make a beautifully lit picture of the Cornell Box with Lambertian spheres (`cbox_lambertian.dae`). Below is a render using 1024 samples per pixel (spp):
![cornell_lambertian](new_results/lambertian.png)
...
...
Note the time-quality tradeoff here.
Also note that if you have enabled Russian Roulette, your result may seem noisier, but should complete faster. The point of Russian roulette is not to increase sample quality, but to allow the computation of more samples in the same amount of time, resulting in a higher quality result.
## Tips
* The path termination probability should be computed based on the [overall throughput](http://15462.courses.cs.cmu.edu/fall2015/lecture/globalillum/slide_044) of the path. The throughput of the ray is recorded in its `throughput` member, which represents the multiplicative factor the current radiance will be affected by before contributing to the final pixel color. Hence, you should both use and update this field. To update it, simply multiply in the rendering equation factors: BSDF attenuation and `cos(theta)`. Remember to apply the coefficients from the current step before deriving the termination probability. Finally, note that the updated throughput should be copied to the recursive ray for later steps.
* Keep in mind that delta function BSDFs can take on values greater than one, so clamping termination probabilities derived from BSDF values to 1 is wise.
* To convert a Spectrum to a termination probability, we recommend you use the luminance (overall brightness) of the Spectrum, which is available via `Spectrum::luma`.
Your job is to implement the logic needed to compute whether the hit point is in shadow:
* In the starter code, when we call `light.sample(hit.position)`, it returns a `Light_Sample` at the hit point. (You might want to take a look at `rays/light.h` for the definition of `struct Light_Sample` and `class Light`.) A `Light_Sample` contains fields `radiance`, `pdf`, `direction`, and `distance`. In particular, `sample.direction` is the direction from the hit point to the light source, and `sample.distance` is the distance from the hit point to the light source.
* A common ray tracing pitfall is for the "shadow ray" shot into the scene to accidentally hit the same object as the original ray. That is, the surface is erroneously determined to be occluded because the shadow ray hits itself (this is also known as _shadow acne_)! To fix this, you can set the minimum valid intersection distance (`dist_bound.x`) to a small positive value, for example `EPS_F`, which is defined in `lib/mathlib.h` for this purpose (see the sketch after this list).
* Another common pitfall is forgetting that it doesn't matter if the shadow ray hits any scene geometry after reaching the light. Note that the light's distance from the hit point is given by `sample.distance`, and you can again limit the distance we check for intersections with `dist_bound`. Also consider the fact that using the _exact_ distance bound can have the same issues as shadow self-intersections.
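A sketch of the shadow-ray construction described in these bullets (the `Ray` constructor, its member names, and the `scene.hit` call are assumptions about the starter code):

```
// Sketch only: shadow test for one light sample inside the direct lighting loop.
Light_Sample sample = light.sample(hit.position);

// Start the ray a bit off the surface so it cannot re-hit the same point
// (avoids shadow acne), and stop just short of the light itself.
Ray shadow(hit.position, sample.direction);
shadow.dist_bounds = Vec2(EPS_F, sample.distance - EPS_F);

// The point is lit only if nothing blocks the ray before it reaches the light.
Trace occlusion = scene.hit(shadow);
if(!occlusion.hit) {
    // ...accumulate sample.radiance weighted by the BSDF, cos(theta), and pdf.
}
```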