@@ -18,7 +18,11 @@ First, take a look at the definition for our `BVH` in `rays/bvh.h`. We represent
The BVH class also maintains a vector of all primitives in the BVH. The fields `start` and `range` in the BVH `Node` refer to the range of contained primitives in this array. The primitives in this array are not initially in any particular order, and you will need to _rearrange the order_ as you build the BVH so that your BVH can accurately represent the spatial hierarchy.
The starter code constructs a valid BVH, but it is a trivial BVH with a single node containing all scene primitives. Once you are done with this task, you can check the box for BVH in the left bar under "Visualize" to visualize your BVH and adjust the levels.
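As a concrete illustration, the in-place reordering can be done with `std::partition` over a node's `[start, start + range)` slice of the primitive array. The sketch below uses stand-in names (`Prim`, `partition_prims`), not the starter code's actual types:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct Prim { float cx, cy, cz; };  // stand-in primitive with a centroid

// Partition prims[start, start + range) about a split point along one
// axis, reordering the array in place and returning the size of the
// left group. A parent node can then describe its children with
// (start, left_count) and (start + left_count, range - left_count).
size_t partition_prims(std::vector<Prim>& prims, size_t start, size_t range,
                       int axis, float split) {
    auto begin = prims.begin() + start;
    auto end = begin + range;
    auto mid = std::partition(begin, end, [&](const Prim& p) {
        float c = (axis == 0) ? p.cx : (axis == 1) ? p.cy : p.cz;
        return c < split;
    });
    return static_cast<size_t>(mid - begin);
}
```

Because the reordering is in place, every node's primitives stay contiguous in the shared array, which is exactly what the `start`/`range` representation requires.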
@@ -24,6 +24,10 @@ This tutorial from [Scratchapixel](https://www.scratchapixel.com/lessons/3d-basi
Once you have implemented `Pathtracer::trace_pixel`, `Rect::Uniform::sample` and `Camera::generate_ray`, you should have a working camera.
You can visualize the result of the generated rays by checking the box for Logged rays under Visualize.
![logged_rays](new_results/log_ray.png)
**Tips:**
* Since it will be hard to know whether your camera rays are correct until you implement primitive intersection, we recommend debugging your camera rays by checking what your implementation of `Camera::generate_ray` does with rays at the center of the screen (0.5, 0.5) and at the corners of the image.
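To make that sanity check concrete, here is a minimal pinhole-camera direction computation you can compare against. This is a sketch with illustrative names (`ray_dir`, a camera looking down -z), not the starter code's exact API; at (0.5, 0.5) the direction should come out along the camera's forward axis:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical pinhole mapping: screen coords (sx, sy) in [0,1]^2,
// camera looking down -z. vfov_deg is the vertical field of view and
// aspect = width / height. The center of the screen maps to (0,0,-1);
// corners map to directions tilted toward the image corners.
Vec3 ray_dir(float sx, float sy, float vfov_deg, float aspect) {
    float h = std::tan(vfov_deg * 3.14159265f / 180.0f * 0.5f);
    Vec3 d{(2.0f * sx - 1.0f) * h * aspect,
           (2.0f * sy - 1.0f) * h,
           -1.0f};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return Vec3{d.x / len, d.y / len, d.z / len};
}
```

If your `generate_ray` produces a center ray that is not parallel to the camera's look direction, or corner rays that are not symmetric about it, something is off before intersection code ever runs.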
PathTracer is (as the name suggests) a simple path tracer that can render pictures with global illumination effects. The first part of the assignment will focus on providing an efficient implementation of **ray-scene geometry queries**. In the second half of the assignment you will **add the ability to simulate how light bounces around the scene**, which will allow your renderer to synthesize much higher-quality images. Much like in MeshEdit, input scenes are defined in COLLADA files, so you can create your own scenes to render using free software like [Blender](https://www.blender.org/).
![CBsphere](new_results/32k_large.png)
Implementing the functionality of PathTracer is split into 7 tasks, and here are the instructions for each of them:
- [(Task 1) Generating Camera Rays](camera_rays.md)
After correctly implementing path tracing, your renderer should be able to make a beautifully lit picture of the Cornell Box. Below is the result rendered at 1024 samples per pixel.
![cornell_lambertian](new_results/lambertian.png)
Note the time-quality tradeoff here. With these commandline arguments, your path tracer will be running with 8 worker threads at a sample rate of 1024 camera rays per pixel, with a max ray depth of 4. This will produce an image with relatively high quality but will take quite some time to render. Rendering a high quality image will take a very long time as indicated by the image sequence below, so start testing your path tracer early!
Note the time-quality tradeoff here. With these command-line arguments, your path tracer will run with 8 worker threads at a sample rate of 1024 camera rays per pixel, with a max ray depth of 4. This will produce a relatively high-quality image but will take quite some time to render. Rendering a high-quality image will take a very long time, as indicated by the image sequence below, so start testing your path tracer early! Below are the results and runtimes of rendering the Cornell Box at different sample-per-pixel counts at 640 by 430 resolution on a MacBook Pro (3.1 GHz Dual-Core Intel Core i5).
![spheres](new_results/timing.png)
Also note that if you have enabled Russian Roulette, your results may appear noisier at the same sample count.
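The extra noise comes from Russian Roulette randomly terminating paths and reweighting the survivors. A minimal sketch of unbiased Russian Roulette termination looks like the following (illustrative names and clamp bounds, not the starter code's exact scheme):

```cpp
#include <algorithm>
#include <random>

// Russian Roulette: terminate the path with probability (1 - p) and
// divide the surviving path's throughput by p, so the estimator's
// expected value is unchanged. `throughput` stands in for the path's
// current contribution weight.
bool russian_roulette(float& throughput, std::mt19937& rng) {
    // Continuation probability, clamped so bright paths always continue
    // and dim paths still have a small chance to survive.
    float p = std::clamp(throughput, 0.05f, 1.0f);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    if (u(rng) > p) return false;  // terminate the path here
    throughput /= p;               // reweight survivors to stay unbiased
    return true;
}
```

Terminating early saves time per sample but increases variance, which is why an image at a fixed sample count can look noisier with Russian Roulette enabled.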
@@ -26,7 +26,6 @@ Your job is to implement the logic needed to compute whether hit point is in sha
* You will find it useful to debug your shadow code using the `DirectionalLight` since it produces hard shadows that are easy to reason about.
* You will want to comment out the line `Spectrum radiance_out = Spectrum(0.5f);` and initialize `radiance_out` to a more reasonable value. Hint: should there be any light at all before we even start considering each light sample?
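A common pitfall when computing whether a hit point is in shadow is self-intersection: a shadow ray cast exactly from the surface can immediately re-hit the surface it left. One standard remedy, sketched below with illustrative names (`ShadowRay`, `make_shadow_ray` are not the starter code's API), is to offset the ray origin by a small epsilon along the ray direction and to cap the ray's maximum distance just short of the light:

```cpp
#include <cassert>

// A shadow ray with an origin, a direction, and a valid t-interval.
struct ShadowRay { float ox, oy, oz; float dx, dy, dz; float t_min, t_max; };

// Build a shadow ray from hit point (hx, hy, hz) toward a light that is
// dist_to_light away along unit direction (dx, dy, dz). The epsilon
// offset avoids re-hitting the originating surface; capping t_max just
// short of the light avoids counting the light (or geometry behind it)
// as an occluder.
ShadowRay make_shadow_ray(float hx, float hy, float hz,
                          float dx, float dy, float dz,
                          float dist_to_light) {
    const float eps = 1e-4f;  // small bias; scene-scale dependent
    return ShadowRay{hx + eps * dx, hy + eps * dy, hz + eps * dz,
                     dx, dy, dz, 0.0f, dist_to_light - eps};
}
```

If any primitive intersects this ray within `[t_min, t_max]`, the light sample is occluded and contributes nothing.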
At this point you should be able to render very striking images. For example, here is the head of Peter Schröder rendered with an area light from above.