Commit ae2b9dc5 authored by TheNumbat

Release new version

Features:
    - Particle systems can now specify a maximum dt per step
    - Animation key-framing & timing system now supports objects with simulation
    - Mixture/multiple importance sampling for correct low-variance direct lighting
        - New BSDF, point light, and environment light APIs that separate sampling, evaluation, and pdf
        - Area light sampling infrastructure
        - Removed rectangle area lights; all area lights are now emissive meshes
        - Reworked PathTracer tasks 4-6, adjusted/improved instructions for the other tasks

Bug fixes:
    - Use full rgb/srgb conversion equation instead of approximation
    - Material albedo now specified in srgb (matching the displayed color)
    - ImGui input fields becoming inactive no longer apply to a newly selected object
    - Rendering animations with path tracing correctly steps simulations each frame
    - Rasterization based renderer no longer inherits projection matrix from window
    - Scene file format no longe...
parent afa3f68f
@@ -191,9 +191,9 @@ set_target_properties(Scotty3D PROPERTIES
                       CXX_EXTENSIONS OFF)
 if(MSVC)
-    target_compile_options(Scotty3D PRIVATE /MP /W4 /WX /wd4201 /wd4840 /wd4100 /fp:fast)
+    target_compile_options(Scotty3D PRIVATE /MP /W4 /WX /wd4201 /wd4840 /wd4100 /wd4505 /fp:fast)
 else()
-    target_compile_options(Scotty3D PRIVATE -Wall -Wextra -Werror -Wno-reorder -Wno-unused-parameter)
+    target_compile_options(Scotty3D PRIVATE -Wall -Wextra -Werror -Wno-reorder -Wno-unused-function -Wno-unused-parameter)
 endif()
 if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
......
@@ -573,6 +573,10 @@ void ColladaExporter::WriteAmbienttLight(const aiLight *const light) {
         mOutput << startstr << "<pps>"
                 << colord.r << "</pps>" << endstr;
     }
+    if(colord.g > 0.0f) {
+        mOutput << startstr << "<timestep>"
+                << colord.g << "</timestep>" << endstr;
+    }
     mOutput << startstr << "<constant_attenuation>"
             << light->mAttenuationConstant
             << "</constant_attenuation>" << endstr;
......
@@ -185,6 +185,7 @@ struct Light {
     //! Common light intensity
     ai_real mIntensity;
     ai_real mPPS;
+    ai_real mdt;
     aiString env_map;
 };
......
@@ -389,6 +389,7 @@ void ColladaLoader::BuildLightsForNode(const ColladaParser &pParser, const Colla
     if (out->mType == aiLightSource_AMBIENT) {
         out->mColorDiffuse = out->mColorSpecular = aiColor3D(0, 0, 0);
         out->mColorDiffuse.r = srcLight->mPPS;
+        out->mColorDiffuse.g = srcLight->mdt;
         out->mColorAmbient = srcLight->mColor * srcLight->mIntensity;
     } else {
         // collada doesn't differentiate between these color types
......
@@ -1274,6 +1274,9 @@ void ColladaParser::ReadLight(Collada::Light &pLight) {
     } else if (IsElement("pps")) {
         pLight.mPPS = ReadFloatFromTextContent();
         TestClosing("pps");
+    } else if (IsElement("timestep")) {
+        pLight.mdt = ReadFloatFromTextContent();
+        TestClosing("timestep");
     } else if (IsElement("falloff_exponent")) {
         pLight.mFalloffExponent = ReadFloatFromTextContent();
         TestClosing("falloff_exponent");
......
_site
Gemfile.lock
.jekyll-cache
\ No newline at end of file
 gem 'jekyll-sitemap'
-gem "just-the-docs"
-#source "https://rubygems.org"
-#gem "just-the-docs", group: :jekyll_plugins
-#gem "github-pages", group: :jekyll_plugins
+gem 'jekyll-remote-theme'
+gem 'just-the-docs'
+source "https://rubygems.org"
@@ -2,4 +2,5 @@ remote_theme: pmarsceill/just-the-docs
 baseurl: /Scotty3D
 plugins:
   - jekyll-sitemap
-logo: "/assets/spot.png"
+  - jekyll-remote-theme
+logo: '/assets/spot.png'
@@ -20,7 +20,7 @@ However, if the particle can have arbitrary forces applied to it, we no longer k
<center><img src="task4_media/fma.png"></center>
There are many different techniques for integrating our equations of motion, including forward, backward, and symplectic Euler, Verlet, Runge-Kutta, and Leapfrog. Each strategy comes with a slightly different way of computing how much to update our velocity and position across a single time-step. In this case, we will use the simplest - forward Euler - as we aren't too concerned with stability or energy conservation. Forward Euler simply steps our position forward by our velocity, and then velocity by our acceleration:
<center><img src="task4_media/euler.png"></center>
@@ -28,18 +28,31 @@ In `Particle::update`, use this integrator to step the current particle forward
## Collisions
The more substantial part of this task is colliding each particle with the rest of our scene geometry. Thankfully, you've already done most of the work required here in A3: we can use Scotty3D's ray-tracing capabilities to find collisions.
During each timestep, we know that in the absence of a collision, our particle will travel in the direction of velocity for distance `||velocity|| * dt`. Hence, we can create a ray from the particle's position and velocity to look for collisions during the time-step. If the ray intersects with the scene, we can compute exactly when the particle would experience a collision.
When a collision is found, we could just place the particle at the collision point and be done, but we don't want our particles to simply stick to the surface! Instead, we will assume all particles collide elastically (and massless-ly): the magnitude of a particle's velocity should be the same before and after a collision, but its direction should be reflected about the normal of the collided surface.
Once we have the reflected velocity, we can compute how much of the time step remains after the collision. But what if the particle collided with the scene _again_ before the end of the time-step? If we are using very small time-steps, it might be acceptable to ignore this possibility, but we want to resolve all collisions. So, we can repeat the ray-casting procedure in a loop until we have used up the entire time-step, up to some epsilon. Remember to only use the remaining portion of the time-step each iteration, and to step forward both the velocity and position at each sub-step.
### Spherical Particles
The above procedure is perfectly realistic for point particles, but we would like to draw our particles with some non-zero size and still have the simulation appear natural.
We will hence represent particles as small spheres. This means you must take `radius` into account when finding intersections: if the collision ray intersects a surface, at what distance does the closest point on the sphere incur the collision? (Hint - it depends on the angle of intersection.)
Simply finding closer intersections based on `radius` will not, of course, resolve all sphere-scene intersections: the ray can miss all geometry while the edge of the sphere would see a collision. However, this will produce visually acceptable results, greatly reducing overlap and never letting particles 'leak' through geometry.
<center><img src="task4_media/collision.png"></center> <center><img src="task4_media/collision.png"></center>
Once you have got collisions working, you should be able to open `particles.dae` and see a randomized collision-fueled waterfall. Try rendering the scene! Once you have got collisions working, you should be able to open `particles.dae` and see a randomized collision-fueled waterfall. Try rendering the scene!
Tips:
- **Don't** use `abs()`. In GCC, this is the integer-only absolute value function. To get the float version, use `std::abs()` or `fabsf()`.
- When accounting for radius, you don't know how far along the ray a collision might occur. Look for a collision at any distance, and if it occurs after the end of the current timestep, ignore it.
- When accounting for radius, consider in what situation(s) you could find a collision at a negative time. In this case, you should not move the particle - just reflect the velocity.
## Lifetime
Finally, note that `Particle::update` is supposed to return a boolean representing whether or not the particle should be removed from the simulation. Each particle has an `age` member that represents the remaining time it has to live. Each time-step, you should subtract `dt` from `age` and return whether `age` is still greater than zero.
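Putting the integrator, the collision loop, and the lifetime check together, one possible shape for `Particle::update` is sketched below. This is a rough illustration only, assuming Scotty3D-style `Vec3`, `Ray`, and `Trace` types; `scene.hit`, `acceleration`, and the exact member names are taken from the description above, not the starter code's precise interface.

```cpp
// Sketch only, not the reference solution. Assumes Scotty3D-style Vec3/Ray/Trace;
// scene.hit(), acceleration, pos, velocity, age, and radius are assumed names.
#include <algorithm>
#include <cmath>

bool Particle::update(const Scene& scene, float dt, float radius) {

    float t_left = dt;
    const float eps = 1e-4f;

    while (t_left > eps) {

        float speed = velocity.norm();
        float t_step = t_left;
        bool collided = false;
        Vec3 hit_normal;

        if (speed > eps) {
            Trace hit = scene.hit(Ray(pos, velocity / speed));
            if (hit.hit) {
                // The sphere touches the surface before its center does; how much
                // earlier depends on the angle between the path and the normal.
                float cos_theta = std::abs(dot(velocity / speed, hit.normal));
                float offset = radius / std::max(cos_theta, eps);
                float t_hit = (hit.distance - offset) / speed;
                if (t_hit < t_left) {
                    t_step = std::max(t_hit, 0.0f); // negative => already touching
                    collided = true;
                    hit_normal = hit.normal;
                }
            }
        }

        // Forward Euler over this sub-step: position by velocity, velocity by acceleration.
        pos += velocity * t_step;
        velocity += acceleration * t_step;

        // Elastic, massless collision: reflect the velocity about the surface normal.
        if (collided) velocity -= 2.0f * dot(velocity, hit_normal) * hit_normal;

        t_left -= t_step;
    }

    // Lifetime: the particle survives only while age stays positive.
    age -= dt;
    return age > 0.0f;
}
```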
......
@@ -24,8 +24,6 @@ Your implementation should have the following basic steps for each vertex:
Below we have an equation representation. The ith vertex v is the new vertex position. The weight w is the weight metric, computed as the inverse of the distance between the ith vertex and the closest point on joint j. We multiply this term with the position of the ith vertex v with respect to joint j after the joint's transformations have been applied.
<center><img src="task3_media/skinning_eqn1.png" style="height:100px">
<img src="task3_media/skinning_eqn2.png" style="height:120px"></center>
......
@@ -27,6 +27,6 @@ For example, the `particles.dae` test scene:
<video src="{{ site.baseurl }}/guide/simulate_mode/guide-simulate-1.mp4" controls preload muted loop style="max-width: 100%; margin: 0 auto;"></video>
Finally, note that you can render particles just like any other scene object. Rendering `particles.dae` with depth of field:
![particles render](simulate_mode/render.png)
@@ -7,9 +7,10 @@ parent: "A3: Pathtracer"
# (Task 3) Bounding Volume Hierarchy
## Walkthrough
<video width="750" height="500" controls>
<source src="videos/Task3_BVH.mp4" type="video/mp4">
</video>
In this task you will implement a bounding volume hierarchy that accelerates ray-scene intersection. Most of this work will be in `student/bvh.inl`. Note that this file has an unusual extension (`.inl` = inline) because it is an implementation file for a template class. This means `bvh.h` must `#include` it, so all code that sees `bvh.h` will also see `bvh.inl`.
@@ -17,7 +18,7 @@ First, take a look at the definition for our `BVH` in `rays/bvh.h`. We represent
* `BBox bbox`: the bounding box of the node (bounds all primitives in the subtree rooted by this node)
* `size_t start`: start index of primitives in the `BVH`'s primitive array
* `size_t size`: range of indices in the primitive list (# of primitives in the subtree rooted by the node)
* `size_t l`: the index of the left child node
* `size_t r`: the index of the right child node
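For reference, the layout those fields describe amounts to something like the sketch below. The real definition lives in `rays/bvh.h`; treat this as an illustration of the list above rather than the exact declaration.

```cpp
// Flat array-of-nodes BVH: children are referenced by index into `nodes`,
// and each node covers a contiguous range of the primitive array.
struct Node {
    BBox bbox;     // bounds all primitives in the subtree rooted at this node
    size_t start;  // first primitive index covered by this node
    size_t size;   // number of primitives in the subtree
    size_t l, r;   // indices of the left and right children
};
```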
@@ -27,6 +28,7 @@ The starter code constructs a valid BVH, but it is a trivial BVH with a single n
Finally, note that the BVH visualizer will start drawing from `BVH::root_idx`, so be sure to set this to the proper index (probably 0 or `nodes.size() - 1`, depending on your implementation) when you build the BVH.
---
## Step 0: Bounding Box Calculation & Intersection
@@ -38,34 +40,42 @@ We recommend checking out this [Scratchapixel article](https://www.scratchapixel
Your job is to construct a `BVH` in `void BVH<Primitive>::build` in `student/bvh.inl` using the [Surface Area Heuristic](http://15462.courses.cs.cmu.edu/fall2017/lecture/acceleratingqueries/slide_025) discussed in class. Tree construction should occur when the BVH object is constructed. Below is the pseudocode that your BVH construction procedure should generally follow:
<center><img src="figures/BVH_construction_pseudocode.png"></center>
**Tip:** A helpful C++ function to use for partitioning primitives is
[std::partition](https://en.cppreference.com/w/cpp/algorithm/partition). This function divides the original group of elements
into two sub-groups: the first group contains the elements for which the given predicate
returns true, and the second group contains the elements for which it returns false.
Note that the elements are **not sorted** within the subgroups themselves.
**Note:** You may find that this task is one of the most time consuming parts of A3, especially since this part of the documentation is intentionally sparse.
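As a concrete (and simplified) illustration of one split step, the sketch below evaluates the SAH cost of a candidate split and uses `std::partition` to reorder the primitive range in place. `Primitive::bbox()`, `BBox::surface_area()`, and `BBox::center()` are assumed helpers; bucketing over multiple candidate splits and the recursion itself are left out.

```cpp
#include <algorithm>
#include <vector>

// SAH cost of splitting a parent box into (left, right) with the given counts:
// each child's surface area, relative to the parent's, weights how many
// primitives that child holds.
inline float sah_cost(const BBox& parent, const BBox& left, size_t n_left,
                      const BBox& right, size_t n_right) {
    float sp = parent.surface_area();
    return (left.surface_area() / sp) * float(n_left) +
           (right.surface_area() / sp) * float(n_right);
}

// Reorder primitives[start, start+size) so that everything whose bounding-box
// center lies below `split` along `axis` comes first; returns the index where
// the right-hand partition begins. Elements are not sorted within each side.
template<typename Primitive>
size_t partition_primitives(std::vector<Primitive>& primitives, size_t start,
                            size_t size, int axis, float split) {
    auto mid = std::partition(primitives.begin() + start,
                              primitives.begin() + start + size,
                              [axis, split](const Primitive& p) {
                                  return p.bbox().center()[axis] < split;
                              });
    return size_t(mid - primitives.begin());
}
```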
## Step 2: Ray-BVH Intersection
Implement the ray-BVH intersection routine `Trace BVH<Primitive>::hit(const Ray& ray)` in `student/bvh.inl`. You may wish to consider the node visit order optimizations we discussed in class. Once complete, your renderer should be able to render all of the test scenes in a reasonable amount of time.
<center><img src="figures/ray_bvh_pseudocode.png"></center>
<center><img src="ray_bvh_pseudocode.png"></center> ---
## Reference Results
In Render mode, simply check the box for "BVH", and you will be able to see the BVH you generated in task 3 when you **start rendering**. You can click on the horizontal bar to see each level of your BVH.
<center><img src="images/bvh_button.png" style="height:120px"></center>
## Sample BVHs
The BVH constructed for Spot the Cow on the 10th level.
<center><img src="images/bvh.png" style="height:320px"></center>
The BVH constructed for a scene composed of several cubes and spheres on the 0th and 1st levels.
<center><img src="images/l0.png" style="height:220px"><img src="images/l2.png" style="height:220px"></center>
The BVH constructed for the Stanford Bunny on the 10th level.
<center><img src="images/bvh_bunny_10.png" style="height:320px"></center>
@@ -7,57 +7,76 @@ permalink: /pathtracer/camera_rays
# (Task 1) Generating Camera Rays
## Walkthrough
<video width="750" height="500" controls>
<source src="videos/Task1_CameraRays.mp4" type="video/mp4">
</video>
"Camera rays" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.) Your job is to generate these rays, which is the first step in the raytracing procedure. "Camera rays" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.) Your job is to generate these rays, which is the first step in the raytracing procedure.
---
## Step 1: `Pathtracer::trace_pixel`
The job of this function is to compute the amount of energy arriving at this pixel of the image. Take a look at `Pathtracer::trace_pixel` in `student/pathtracer.cpp`. Conveniently, we've given you a function `Pathtracer::trace_ray(r)` that provides a measurement of incoming scene radiance along the direction given by ray `r`. See `lib/ray.h` for the interface of `Ray`.
Given the width and height of the screen, and a point's _screen space_ coordinates (`size_t x, size_t y`), compute the point's _normalized_ ([0-1] x [0-1]) screen space coordinates in `Pathtracer::trace_pixel`. Pass these coordinates to the camera via `Camera::generate_ray` in `camera.cpp` (note that `Camera::generate_ray` accepts a `Vec2` object as its input argument).
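A minimal sketch of that mapping is below; `out_w`/`out_h` stand for the output image dimensions and, like the `Spectrum` return type, are assumptions about the starter code rather than guaranteed names.

```cpp
// Sketch: map pixel (x, y) to normalized [0,1]^2 screen space and trace a ray.
Spectrum Pathtracer::trace_pixel(size_t x, size_t y) {

    // Pixel center for now; Step 3 replaces the 0.5f offsets with a random
    // offset from Rect::Uniform::sample for super-sampling.
    Vec2 xy((float(x) + 0.5f) / float(out_w),
            (float(y) + 0.5f) / float(out_h));

    Ray out = camera.generate_ray(xy);
    return trace_ray(out);
}
```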
## Step 2: `Camera::generate_ray`
Implement `Camera::generate_ray`. This function should return a ray **in world space** that reaches the given sensor sample point, i.e. the input argument. Compute this ray in **camera space** (where the camera pinhole is at the origin, the camera is looking down the -Z axis, and +Y is at the top of the screen). In `util/camera.h`, the `Camera` class stores `vert_fov` and `aspect_ratio`, indicating the vertical field of view of the camera (in degrees, not radians) as well as the aspect ratio.
<center><img src="images/camera_coordinate_system.png" ></center>
Note that the camera maintains a camera-space-to-world-space transform matrix `iview` that you will need to use in order to get the new ray back into **world space**.
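A possible camera-space construction, followed by the `iview` transform, is sketched below. `Radians()` and `Ray::transform` are assumed utilities; the important part is the sensor-plane geometry derived from `vert_fov` and `aspect_ratio`.

```cpp
#include <cmath>

// Sketch: build the ray in camera space (pinhole at the origin, looking down -Z),
// then move it into world space with iview. Utility names are assumptions.
Ray Camera::generate_ray(Vec2 screen_coord) const {

    // The sensor plane sits at z = -1; its half-extents follow from the
    // vertical field of view (in degrees) and the aspect ratio.
    float half_h = std::tan(0.5f * Radians(vert_fov));
    float half_w = aspect_ratio * half_h;

    // Map [0,1]^2 screen coordinates onto the sensor plane.
    Vec3 sensor_point((2.0f * screen_coord.x - 1.0f) * half_w,
                      (2.0f * screen_coord.y - 1.0f) * half_h,
                      -1.0f);

    // Camera-space ray from the pinhole through the sensor point, then
    // transformed into world space.
    Ray ray(Vec3(0.0f), sensor_point.unit());
    ray.transform(iview);
    return ray;
}
```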
Once you have implemented `Pathtracer::trace_pixel`, `Rect::Uniform::sample`, and `Camera::generate_ray`, you should have a camera that can shoot rays into the scene! See the
**Raytracing Visualization** section below to confirm this.
## Step 3: `Pathtracer::trace_pixel` &#8594; Super-sampling
Your implementation of `Pathtracer::trace_pixel` must support super-sampling. The starter code will hence call `Pathtracer::trace_pixel` once for each sample (the number of samples is specified by `Pathtracer::n_samples`), so your implementation of `Pathtracer::trace_pixel` should choose a **single** new location within the pixel each time.
To choose a sample within the pixel, you should implement `Rect::Uniform::sample` (see `src/student/samplers.cpp`), such that it provides (random) uniformly distributed 2D points within the rectangular region specified by the origin and the member `Rect::Uniform::size`. You may then create a `Rect::Uniform` sampler with a one-by-one region and call `sample()` to obtain randomly chosen offsets within the pixel.
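A sketch of the sampler and its use inside `Pathtracer::trace_pixel` follows. `RNG::unit()` (a uniform random float in [0,1)) is assumed to be the starter code's random-number helper; substitute whatever utility it actually provides.

```cpp
// Sketch: uniform random points inside an origin-anchored rectangle of the
// given size. RNG::unit() is an assumed helper returning a float in [0,1).
Vec2 Rect::Uniform::sample() const {
    return Vec2(RNG::unit() * size.x, RNG::unit() * size.y);
}

// In Pathtracer::trace_pixel, jitter the sample location within pixel (x, y):
Rect::Uniform pixel_sampler(Vec2(1.0f, 1.0f));
Vec2 offset = pixel_sampler.sample();
Vec2 xy((float(x) + offset.x) / float(out_w),
        (float(y) + offset.y) / float(out_h));
```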
---
### Tips
- Since you won't be sure your camera rays are correct until you implement primitive intersections, we recommend debugging camera rays by checking what your implementation of `Camera::generate_ray` does with rays at the center of the screen (0.5, 0.5) and at the corners of the image.
### Raytracing Visualization
Your code can also log the results of ray computations for visualization and debugging. To do so, simply call the function `Pathtracer::log_ray` in your `Pathtracer::trace_pixel`. `Pathtracer::log_ray` takes three arguments: the ray that you want to log, a float that specifies the distance to log that ray up to, and a color for the ray. If you don't pass a color, it will default to white. We encourage you to make use of this feature for debugging both camera rays and the rays used for sampling direct & indirect lighting.
You should log only a small fraction of the generated rays, or else the result will be hard to interpret. To do so, you can add `if(RNG::coin_flip(0.0005f)) log_ray(out, 10.0f);` to log 0.05% of camera rays.
Finally, you can visualize the logged rays by checking the box for Logged rays under Visualize and then **starting the render** (Open Render Window -> Start Render). After running the path tracer, rays will be shown as lines in the visualizer. Be sure to wait for rendering to complete so you see all rays while visualizing.
![logged_rays](images/ray_log.png)
---
## Extra Credit
### Defocus Blur and Bokeh
`Camera` also includes the members `aperture` and `focal_dist`. **Aperture** is the opening in the lens by which light enters the camera. **Focal distance** represents the distance between the camera aperture and the plane that is perfectly in focus. These parameters can be used to simulate the effects of defocus blur and bokeh found in real cameras.
To use the focal distance parameter, you simply scale up the sensor position from step 2 (and hence ray direction) by `focal_dist` instead of leaving it on the `z = -1` plane. You might notice that this doesn't actually change anything about your result, since this is just scaling up a vector that is later normalized. However, now aperture comes in.
By default, all rays start at a single point, representing a pinhole camera. But when `aperture` > 0, we want to randomly choose the ray origin from an `aperture`x`aperture` square centered at the origin and facing the camera direction (-Z). Note that the aperture of a real camera is typically roughly circular, but a square suffices for our purposes.
Then, we use this random point as the origin of the generated ray while keeping its sensor position fixed (consider how that changes the ray direction). Now it's as if the same image was taken from a point slightly off the origin. This simulates real cameras with non-pinhole apertures: the final photo is equivalent to averaging images taken by pinhole cameras placed at every point in the aperture.
Finally, we can see that a non-zero aperture makes focal distance matter: objects on the focal plane are unaffected, since where the ray hits on the sensor is the same regardless of the ray's origin. However, rays that hit objects closer or farther than the focal distance will be able to "see" slightly different parts of the object based on the ray origin. Averaging over many rays within a pixel, this results in collecting colors from a region slightly larger than that pixel would cover given zero aperture, causing the object to become blurry. We are using a square aperture, so bokeh effects will reflect this.
You can test aperture/focal distance by adjusting `aperture` and `focal_dist` using the camera UI and examining the logged rays. Once you have implemented primitive intersections and path tracing (tasks 3/5), you will be able to properly render `dof.dae`:
<center><img src="images/dof.png" width="400"></center>
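One way this thin-lens extension could look inside the ray-generation code (camera space, before applying `iview`) is sketched below. As before, `Radians()`, `RNG::unit()`, and `Ray::transform` are assumed helpers, and the function name `generate_ray_dof` is hypothetical; it mirrors the text above rather than the exact starter interface.

```cpp
#include <cmath>

// Sketch of ray generation with aperture and focal distance (camera space).
Ray Camera::generate_ray_dof(Vec2 screen_coord) const {

    float half_h = std::tan(0.5f * Radians(vert_fov));
    float half_w = aspect_ratio * half_h;

    // Scale the sensor point out to the plane of perfect focus (z = -focal_dist).
    Vec3 focus_point(focal_dist * (2.0f * screen_coord.x - 1.0f) * half_w,
                     focal_dist * (2.0f * screen_coord.y - 1.0f) * half_h,
                     -focal_dist);

    // Start the ray from a random point on the aperture square instead of the
    // pinhole; points on the focal plane stay sharp, everything else blurs.
    Vec3 origin(aperture * (RNG::unit() - 0.5f),
                aperture * (RNG::unit() - 0.5f),
                0.0f);

    Ray ray(origin, (focus_point - origin).unit());
    ray.transform(iview);
    return ray;
}
```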
### Low-discrepancy Sampling
Write your own pixel sampler (replacing `Rect`) that generates samples with a more advanced distribution. Refer to [Physically Based Rendering](http://www.pbr-book.org/3ed-2018/) chapter 7. Some examples include:
- Jittered Sampling (sketched below)
- Multi-jittered sampling
- N-Rooks (Latin Hypercube) sampling
- Sobol sequence sampling
- Halton sequence sampling
- Hammersley sequence sampling
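For instance, a jittered (stratified) pixel sampler could look roughly like the following; it reuses the assumed `RNG::unit()` helper from earlier and is not tied to any particular starter-code interface.

```cpp
#include <vector>

// Sketch: stratified ("jittered") sampling of the unit square. The pixel is
// divided into an n x n grid and one random sample is placed in each cell,
// which avoids the clumping that plain uniform sampling can produce.
std::vector<Vec2> jittered_samples(size_t n) {
    std::vector<Vec2> samples;
    samples.reserve(n * n);
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < n; j++) {
            samples.push_back(Vec2((float(i) + RNG::unit()) / float(n),
                                   (float(j) + RNG::unit()) / float(n)));
        }
    }
    return samples; // points in [0,1)^2, one per stratum
}
```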