Commit 77a6b9ea authored by allai5

new docs dump

parent cbbc4fa0
......@@ -2,6 +2,8 @@
layout: default
title: "Loop Subdivision"
permalink: /meshedit/global/loop/
parent: Global Operations
grand_parent: "A2: MeshEdit"
---
# Loop Subdivision
......@@ -49,16 +51,16 @@ Although this routine looks straightforward, it can very easily crash! The reaso
    int n = mesh.n_edges();
    EdgeRef e = mesh.edges_begin();
    for (int i = 0; i < n; i++) {

        // get the next edge NOW!
        EdgeRef nextEdge = e;
        nextEdge++;

        // now, even if splitting the edge deletes it...
        if (some condition is met) {
            mesh.split_edge(e);
        }

        // ...we still have a valid reference to the next edge.
        e = nextEdge;
    }
......
---
layout: default
title: "MeshEdit Overview"
title: "A2: MeshEdit"
permalink: /meshedit/
nav_order: 5
has_children: true
has_toc: false
---
# MeshEdit Overview
......@@ -14,8 +17,8 @@ The following sections contain guidelines for implementing the functionality of
- [Halfedge Mesh](halfedge)
- [Local Mesh Operations](local)
- [Tutorial: Edge Flip](local/edge_flip)
- [Beveling](local/bevel)
- [Global Mesh Operations](global)
- [Triangulation](global/triangulate)
- [Linear Subdivision](global/linear)
......
......@@ -2,6 +2,8 @@
layout: default
title: "Isotropic Remeshing"
permalink: /meshedit/global/remesh/
parent: Global Operations
grand_parent: "A2: MeshEdit"
---
# Isotropic Remeshing
......@@ -17,29 +19,29 @@ Scotty3D also supports remeshing, an operation that keeps the number of samples
Repeating this simple process several times typically produces a mesh with fairly uniform triangle areas, angles, and vertex degrees. However, each of the steps deserves slightly more explanation.
### Edge Splitting / Collapsing
Ultimately we want all of our triangles to be about the same size, which means we want edges to all have roughly the same length. As suggested in the paper by Botsch and Kobbelt, we will aim to keep our edges no longer than 4/3rds of the **mean** edge length _L_ in the input mesh, and no shorter than 4/5ths of _L_. In other words, if an edge is longer than 4L/3, split it; if it is shorter than 4L/5, collapse it. We recommend performing all of the splits first, then doing all of the collapses (though as usual, you should be careful to think about when and how mesh elements are being allocated/deallocated).
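As a sketch, the two passes might be structured like this, reusing the "grab the next element before modifying the mesh" pattern from the loop subdivision snippet; the `edge_length` helper and the split/collapse method names here are assumptions, so check the actual `Halfedge_Mesh` interface:

    // Sketch only: edge_length(), split_edge(), and collapse_edge() stand in for
    // whatever the actual Halfedge_Mesh interface provides.
    float L = 0.0f;
    for(EdgeRef e = mesh.edges_begin(); e != mesh.edges_end(); e++) L += edge_length(e);
    L /= (float)mesh.n_edges();

    // Split pass: any edge longer than 4L/3 gets split.
    for(EdgeRef e = mesh.edges_begin(); e != mesh.edges_end();) {
        EdgeRef next = e;
        next++; // grab the next edge before modifying the mesh
        if(edge_length(e) > 4.0f * L / 3.0f) mesh.split_edge(e);
        e = next;
    }

    // Collapse pass: any edge shorter than 4L/5 gets collapsed. Note that a collapse
    // can delete *neighboring* edges too, so "next" may itself be invalidated -- one
    // safe option is to collect candidate edges first and skip any that no longer exist.
    for(EdgeRef e = mesh.edges_begin(); e != mesh.edges_end();) {
        EdgeRef next = e;
        next++;
        if(edge_length(e) < 4.0f * L / 5.0f) mesh.collapse_edge(e);
        e = next;
    }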
### Edge Flipping
We want to flip an edge any time it reduces the total deviation from regular degree (degree 6). In particular, let _a1_, _a2_ be the degrees of an edge that we're thinking about flipping, and let _b1_, _b2_ be the degrees of the two vertices across from this edge. The total deviation in the initial configuration is `|a1-6| + |a2-6| + |b1-6| + |b2-6|`. You should be able to easily compute the deviation after the edge flip **without actually performing the edge flip**; if this number decreases, then the edge flip should be performed. We recommend flipping all edges in a single pass, after the edge collapse step.
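As a sketch, the bookkeeping for one edge might look like the following; the `degree()` helper, the accessors for the four relevant vertices, and the `flip_edge` name are assumptions about the interface:

    // a1, a2: degrees of the edge's two endpoints; b1, b2: degrees of the two
    // vertices across from the edge.
    int a1 = v0->degree(), a2 = v1->degree();
    int b1 = w0->degree(), b2 = w1->degree();

    int deviation_before = std::abs(a1 - 6) + std::abs(a2 - 6) +
                           std::abs(b1 - 6) + std::abs(b2 - 6);

    // Flipping removes one edge from each endpoint and adds one to each opposite
    // vertex, so the post-flip degrees are (a1-1, a2-1, b1+1, b2+1).
    int deviation_after = std::abs(a1 - 7) + std::abs(a2 - 7) +
                          std::abs(b1 - 5) + std::abs(b2 - 5);

    if(deviation_after < deviation_before) mesh.flip_edge(e);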
### Vertex Averaging
Finally, we also want to optimize the geometry of the vertices. A very simple heuristic is that a mesh will have reasonably well-shaped elements if each vertex is located at the center of its neighbors. To keep your code clean and simple, we recommend using the method `Vertex::neighborhood_center()`, which computes the average position of the vertex's neighbors. Note that you should not use this to immediately replace the current position: we don't want to be taking averages of vertices that have already been averaged. Doing so can yield some bizarre behavior that depends on the order in which vertices are traversed (if you're interested in learning more about this issue, Google around for the terms "Jacobi iterations" and "Gauss-Seidel"). So, the code should (i) first compute the new positions (stored in `Vertex::new_pos`) for all vertices using their neighborhood centroids, and (ii) _then_ update the vertices with new positions (copy `new_pos` to `pos`).
<center><img src="laplacian_smoothing.png" style="height:200px"></center>
How exactly should the positions be updated? One idea is to simply replace each vertex position with its centroid. We can make the algorithm slightly more stable by moving _gently_ toward the centroid, rather than immediately snapping the vertex to the center. For instance, if _p_ is the original vertex position and _c_ is the centroid, we might compute the new vertex position as _q_ = _p_ + _w_(_c_ - _p_) where _w_ is some weighting factor between 0 and 1 (we use 1/5 in the examples below). In other words, we start out at _p_ and move a little bit in the update direction _v_ = _c_ - _p_.
Another important issue arises when the update direction _v_ has a large _normal_ component: in that case we'll end up pushing the surface in or out, rather than just sliding our sample points around on the surface. As a result, the shape of the surface will change much more than we'd like (try it!). To ameliorate this issue, we will move the vertex only in the _tangent_ direction, which we can do by projecting out the normal component, i.e., by replacing _v_ with _v_ - dot(_N_,_v_)_N_, where _N_ is the unit normal at the vertex. To get this normal, you will implement the method `Vertex::normal()`, which computes the vertex normal as the area-weighted average of the incident triangle normals. In other words, at a vertex i the normal points in the direction
<center><img src="vert_normal_eq.png" style="height:80px"></center>
where A_ijk is the area of triangle ijk, and N_ijk is its unit normal; this quantity can be computed directly by just taking the cross product of two of the triangle's edge vectors (properly oriented).
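A sketch of both pieces, using `Vertex::neighborhood_center()`, `Vertex::normal()`, and the `new_pos`/`pos` fields described above; the vertex/halfedge traversal helpers are assumptions, and boundary handling is omitted:

    // Area-weighted vertex normal: the cross product of two edge vectors already has
    // length proportional to the triangle's area, so summing raw cross products over
    // the incident triangles and normalizing at the end gives the area-weighted average.
    Vec3 Vertex::normal() const {
        Vec3 n(0.0f, 0.0f, 0.0f);
        HalfedgeRef h = halfedge();
        do {
            Vec3 p1 = h->next()->vertex()->pos;
            Vec3 p2 = h->next()->next()->vertex()->pos;
            n += cross(p1 - pos, p2 - pos);
            h = h->twin()->next(); // next outgoing halfedge around this vertex
        } while(h != halfedge());
        return n.unit();
    }

    // One smoothing pass: compute all new positions first, then apply them.
    float w = 0.2f; // weighting factor (1/5 in the examples below)
    for(VertexRef v = mesh.vertices_begin(); v != mesh.vertices_end(); v++) {
        Vec3 c = v->neighborhood_center();
        Vec3 N = v->normal();
        Vec3 dir = c - v->pos;  // update direction toward the centroid
        dir -= dot(N, dir) * N; // keep only the tangential component
        v->new_pos = v->pos + w * dir;
    }
    for(VertexRef v = mesh.vertices_begin(); v != mesh.vertices_end(); v++) v->pos = v->new_pos;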
### Implementation
The final implementation requires very little information beyond the description above; the basic recipe is:
......@@ -52,4 +54,4 @@ The final implementation requires very little information beyond the description
Repeating this procedure about 5 or 6 times should yield results like the ones seen below; you may want to repeat the smoothing step 10-20 times for each "outer" iteration.
![Isotropic remeshing examples](remesh_example.png)
<center><img src="remesh_example.png" style="height:420px"></center>
......@@ -2,6 +2,8 @@
layout: default
title: "Simplification"
permalink: /meshedit/global/simplify/
parent: Global Operations
grand_parent: "A2: MeshEdit"
---
# Simplification
......@@ -16,7 +18,7 @@ The basic idea is to iteratively collapse edges until we reach the desired numbe
More precisely, we can write the distance d of a point _x_ to a plane with normal _N_ passing through a point _p_ as dist(_x_) = dot(_N_, _x_ - _p_)
<center><img src="plane_normal.png" style="height:360px"></center>
In other words, we measure the extent of the vector from _p_ to _x_ along the normal direction. This quantity gives us a value that is either _positive_ (above the plane), or _negative_ (below the plane). Suppose that _x_ has coordinates (_x_,_y_,_z_), _N_ has coordinates (_a_,_b_,_c_), and let _d_ = -dot(_N_, _p_); then in _homogeneous_ coordinates, the distance to the plane is just
......@@ -24,7 +26,7 @@ dot(_u_, _v_)
where _u_ = (_x_,_y_,_z_,_1_) and _v_ = (_a_,_b_,_c_,_d_). When we're measuring the quality of an approximation, we don't care whether we're above or below the surface; just how _far away_ we are from the original surface. Therefore, we're going to consider the _square_ of the distance, which we can write in homogeneous coordinates as
<center><img src="homogeneous_coord.png" style="height:40px"></center>
where T denotes the transpose of a vector. The term _vv_^T is an [outer product](https://en.wikipedia.org/wiki/Outer_product) of the vector _v_ with itself, which gives us a symmetric matrix _K_ = _vv_^T. In components, this matrix would look like
......@@ -37,13 +39,12 @@ but in Scotty3D it can be constructed by simply calling the method `outer( Vec4,
The matrix _K_ tells us something about the distance to a plane. We can also get some idea of how far we are from a _vertex_ by considering the sum of the squared distances to the planes passing through all triangles that touch that vertex. In other words, we will say that the distance to a small neighborhood around the vertex i can be approximated by the sum of the quadrics on the incident faces ijk:
<center><img src="K_sum.png" style="height:100px"></center>

<center><img src="vert_normals.png" style="height:360px"></center>
Likewise, the distance to an _edge_ ij will be approximated by the sum of the quadrics at its two endpoints:
<center><img src="edge_k_sum.png" style="height:50px"></center>
The sums above should then be easy to compute -- you can just add up the `Mat4` objects around a vertex or along an edge using the usual "+" operator. You do not need to write an explicit loop over the 16 entries of the matrix.
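For example, building and accumulating the quadrics might look roughly like this; `outer()` and the `Mat4` "+" operator are as described above, while the face-normal helper, the zero-matrix constant, and the halfedge traversal are assumptions:

    // Quadric for one face: v = (a, b, c, d), where N = (a, b, c) is the face's unit
    // normal, p is any point on the face, and d = -dot(N, p).
    Vec3 N = f->normal();                  // assumed helper
    Vec3 p = f->halfedge()->vertex()->pos;
    Vec4 v(N.x, N.y, N.z, -dot(N, p));
    face_quadrics[f] = outer(v, v);

    // Quadric for a vertex: sum of the quadrics of all incident faces.
    Mat4 K = Mat4::Zero;                   // assumed zero-matrix constant
    HalfedgeRef h = vert->halfedge();
    do {
        K += face_quadrics[h->face()];
        h = h->twin()->next();             // next outgoing halfedge around the vertex
    } while(h != vert->halfedge());
    vertex_quadrics[vert] = K;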
......@@ -59,11 +60,11 @@ _Ax_ = _b_
where A is the upper-left 3x3 block of K, and b is _minus_ the upper-right 3x1 column. In other words, the entries of A are just
<center><img src="K_A_block.png" style="height:100px"></center>
and the entries of b are
<center><img src="b_vec.png" style="height:100px"></center>
The cost associated with this solution can be found by plugging _x_ back into our original expression, i.e., the cost is just
......@@ -75,7 +76,7 @@ where _K_ is the quadric associated with the edge. Fortunately, _you do not need
Vec3 b; // computed by extracting minus the upper-right 3x1 column from the same matrix
Vec3 x = A.inverse() * b; // solve Ax = b for x by hitting both sides with the inverse of A
However, A might not always be invertible: consider the case where the mesh is composed of points that all lie on the same plane. In this case, you need to select an optimal point along the original edge. Please read Section 3.5 (page 62) of [Garland's paper](http://reports-archive.adm.cs.cmu.edu/anon/1999/CMU-CS-99-105.pdf) for more details.
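A sketch of that fallback; the quadric evaluation is just the homogeneous-coordinates formula from earlier, and whether you sample the segment finely or only test the endpoints and midpoint is up to you:

    // Cost of placing the new vertex at p under the edge quadric K: v^T K v with
    // v = (p.x, p.y, p.z, 1). Assumes Mat4 * Vec4 and a 4D dot product exist.
    float quadric_cost(const Mat4& K, Vec3 p) {
        Vec4 v(p.x, p.y, p.z, 1.0f);
        return dot(v, K * v);
    }

    // Fallback when A is singular: search along the original edge (endpoints p0, p1)
    // for the cheapest point instead of solving Ax = b.
    Vec3 fallback_point(const Mat4& K, Vec3 p0, Vec3 p1, int steps = 16) {
        Vec3 best = p0;
        float best_cost = quadric_cost(K, p0);
        for(int i = 1; i <= steps; i++) {
            Vec3 p = p0 + (p1 - p0) * (float(i) / float(steps));
            float c = quadric_cost(K, p);
            if(c < best_cost) { best_cost = c; best = p; }
        }
        return best;
    }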
If you're a bit lost at this point, don't worry! There are a lot of details to go through, and we'll summarize everything again in the implementation section. The main idea to keep in mind right now is:
......@@ -91,22 +92,22 @@ The one final thing we want to think about is performance. At each iteration, we
    class Edge_Record {
    public:
        Edge_Record() {}
        Edge_Record(std::unordered_map<Halfedge_Mesh::VertexRef, Mat4>& vertex_quadrics,
                    Halfedge_Mesh::EdgeRef e) : edge(e) {

            // The second constructor takes a dictionary mapping vertices
            // to quadric error matrices and an edge reference. It then
            // computes the sum of the quadrics at the two endpoints
            // and solves for the optimal midpoint position as measured
            // by this quadric. It also stores the value of this quadric
            // as the "score" used by the priority queue.
        }

        EdgeRef edge;  // the edge referred to by this record

        Vec3 optimal;  // the optimal point, if we were
                       // to collapse this edge next

        float cost;    // the cost associated with collapsing this edge,
                       // which is very (very!) roughly something like
                       // the distance we'll deviate from the original
......@@ -134,8 +135,8 @@ More documentation is provided inline in `student/meshedit.cpp`.
Though conceptually sophisticated, quadric error simplification is actually not too hard to implement. It basically boils down to two methods:
Edge_Record::Edge_Record(std::unordered_map<Halfedge_Mesh::VertexRef, Mat4>& vertex_quadrics, EdgeIter e);
Halfedge_Mesh::simplify();
As discussed above, the edge record initializer should:
......@@ -147,7 +148,7 @@ As discussed above, the edge record initializer should:
The downsampling routine can then be implemented by following this basic recipe:
1. Compute quadrics for each face by simply writing the plane equation for that face in homogeneous coordinates, and building the corresponding quadric matrix using `outer()`. This matrix should be stored in the yet-unmentioned dictionary `face_quadrics`.
2. Compute an initial quadric for each vertex by adding up the quadrics at all the faces touching that vertex. This matrix should be stored in `vertex_quadrics`. (Note that these quadrics must be updated as edges are collapsed.)
3. For each edge, create an `Edge_Record`, insert it into the `edge_records` dictionary, and add it to one global `PQueue<Edge_Record>` queue.
4. Until a target number of triangles is reached, collapse the best/cheapest edge (as determined by the priority queue) and set the quadric at the new vertex to the sum of the quadrics at the endpoints of the original edge. You will also have to update the cost of any edge connected to this vertex.
......@@ -168,4 +169,5 @@ Steps 4 and 7 are highlighted because it is easy to get these steps wrong. For i
A working implementation should look something like the examples below. You may find it easiest to implement this algorithm in stages. For instance, _first_ get the edge collapses working, using just the edge midpoint rather than the optimal point, _then_ worry about solving for the point that minimizes quadric error.
<!--![Quadric error simplification examples](quad_example.png)-->
<center><img src="quad_example.png" style="height:480px"></center>
......@@ -2,19 +2,21 @@
layout: default
title: "Triangulation"
permalink: /meshedit/global/triangulate/
parent: Global Operations
grand_parent: "A2: MeshEdit"
---
# Triangulation
For an in-practice example, see the [User Guide](/Scotty3D/guide/model).
A variety of geometry processing algorithms become easier to implement (or are only well defined) when the input consists purely of triangles. The method `Halfedge_Mesh::triangulate` converts any polygon mesh into a triangle mesh by splitting each polygon into triangles.
This transformation is performed in-place, i.e., the original mesh data is replaced with the new, triangulated data (rather than making a copy). The implementation of this method will look much like the implementation of the local mesh operations, with the addition of looping over every face in the mesh.
There is more than one way to split a polygon into triangles. Two common patterns are to connect every vertex to a single vertex, or to "zig-zag" the triangulation across the polygon:
<center><img src="triangulate.png" style="height:300px"></center>
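The actual method manipulates halfedge connectivity in place, but the first ("fan") pattern is easy to state in terms of vertex indices; the standalone snippet below only illustrates the connectivity being produced, not the halfedge implementation:

    #include <array>
    #include <vector>

    // Fan-triangulate a polygon given as the index loop v0, v1, ..., v_{n-1}:
    // connect every vertex to v0, producing triangles (v0, v_i, v_{i+1}).
    std::vector<std::array<int, 3>> fan_triangulate(const std::vector<int>& poly) {
        std::vector<std::array<int, 3>> tris;
        for(size_t i = 1; i + 1 < poly.size(); i++)
            tris.push_back({poly[0], poly[i], poly[i + 1]});
        return tris;
    }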
The `triangulate` routine is not required to produce any particular triangulation so long as:
......@@ -24,4 +26,4 @@ The `triangulate` routine is not required to produce any particular triangulatio
Note that triangulation of nonconvex or nonplanar polygons may lead to geometry that is unattractive or difficult to interpret. However, the purpose of this method is simply to produce triangular _connectivity_ for a given polygon mesh, and correct halfedge connectivity is the only invariant that must be preserved by the implementation. The _geometric_ quality of the triangulation can later be improved by running other global algorithms (e.g., isotropic remeshing); ambitious developers may also wish to consult the following reference:
* Zou et al, ["An Algorithm for Triangulating Multiple 3D Polygons"](http://www.cs.wustl.edu/~taoju/research/triangulate_final.pdf)
---
layout: default
title: "(Task 3) Bounding Volume Hierarchy"
title: (Task 3) BVH
permalink: /pathtracer/bounding_volume_hierarchy
parent: "A3: Pathtracer"
---
# (Task 3) Bounding Volume Hierarchy
In this task you will implement a bounding volume hierarchy that accelerates ray-scene intersection. Most of this work will be in `student/bvh.inl`. Note that this file has an unusual extension (`.inl` = inline) because it is an implementation file for a template class. This means `bvh.h` must `#include` it, so all code that sees `bvh.h` will also see `bvh.inl`.
First, take a look at the definition for our `BVH` in `rays/bvh.h`. We represent our BVH using a vector of `Node`s, `nodes`, as an implicit tree data structure in the same fashion as heaps that you probably have seen in some other courses. A `Node` has the following fields:
......@@ -26,34 +28,36 @@ Finally, note that the BVH visualizer will start drawing from `BVH::root_idx`, s
## Step 0: Bounding Box Calculation
Implement `BBox::hit` in `student/bbox.cpp`.
Also, if you haven't already, implement `Triangle::bbox` in `student/tri_mesh.cpp` (`Triangle::bbox` should be fairly straightforward). For the ray-box test in `BBox::hit`, we recommend checking out this [Scratchapixel article](https://www.scratchapixel.com/lessons/3d-basic-rendering/minimal-ray-tracer-rendering-simple-shapes/ray-box-intersection).
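For reference, the "slab" test described in that article looks roughly like the following standalone sketch; the actual `BBox::hit` signature and the ray's distance-bound fields may differ, and `Vec3` component indexing is assumed:

    #include <algorithm>

    // Ray vs. axis-aligned box via the slab method. Narrows [t_min, t_max] to the
    // overlap interval and returns true if the ray passes through the box.
    bool ray_box_hit(Vec3 o, Vec3 d, Vec3 box_min, Vec3 box_max, float& t_min, float& t_max) {
        for(int axis = 0; axis < 3; axis++) {
            float inv = 1.0f / d[axis]; // +/-infinity when the ray is parallel to this slab
            float t0 = (box_min[axis] - o[axis]) * inv;
            float t1 = (box_max[axis] - o[axis]) * inv;
            if(t0 > t1) std::swap(t0, t1);
            t_min = std::max(t_min, t0);
            t_max = std::min(t_max, t1);
            if(t_min > t_max) return false; // the slab intervals don't overlap
        }
        return true; // (a fully robust version also guards the d[axis] == 0 edge case)
    }

Initialize `[t_min, t_max]` from the ray's distance bounds so that hits outside those bounds are rejected for free.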
## Step 1: BVH Construction
Your job is to construct a `BVH` using the [Surface Area Heuristic](http://15462.courses.cs.cmu.edu/fall2017/lecture/acceleratingqueries/slide_025) discussed in class. Tree construction should occur when the BVH object is constructed. Below is the pseudocode that your BVH construction procedure should generally follow (copied from the lecture slides).
<center><img src="BVH_construction_pseudocode.png"></center>
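The quantity being minimized is simple. Here is the cost of one candidate partition written as plain math, with tunable traversal/intersection cost constants:

    // Estimated cost (per the Surface Area Heuristic) of splitting a node whose box
    // has surface area sa_parent into children with surface areas sa_left / sa_right
    // containing n_left / n_right primitives.
    float sah_cost(float sa_parent, float sa_left, float sa_right, int n_left, int n_right,
                   float cost_traversal = 1.0f, float cost_intersect = 1.0f) {
        return cost_traversal +
               (sa_left / sa_parent) * n_left * cost_intersect +
               (sa_right / sa_parent) * n_right * cost_intersect;
    }

Evaluate this for each candidate partition (for example, a fixed number of buckets along each axis), keep the cheapest, and make the node a leaf when no split beats intersecting its primitives directly.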
## Step 2: Ray-BVH Intersection
Implement the ray-BVH intersection routine `Trace BVH<Primitive>::hit(const Ray& ray)`. You may wish to consider the node visit order optimizations we discussed in class. Once complete, your renderer should be able to render all of the test scenes in a reasonable amount of time. [Visualization of normals](visualization_of_normals.md) may help with debugging.
<center><img src="ray_bvh_pseudocode.png"></center>
## Visualization
In Render mode, simply check the box for "BVH", and you will be able to see the BVH you generated in Task 3 when you **start rendering**. You can click on the horizontal bar to see each level of your BVH.
<center><img src="new_results/bvh_button.png" style="height:120px"></center>

## Sample BVHs

The BVH constructed for Spot the Cow on the 10th level.

<center><img src="new_results/bvh.png" style="height:320px"></center>

The BVH constructed for a scene composed of several cubes and spheres on the 0th and 1st levels.

<center><img src="new_results/l0.png" style="height:220px"><img src="new_results/l2.png" style="height:220px"></center>

The BVH constructed for the Stanford Bunny on the 10th level.

<center><img src="new_results/bvh_bunny_10.png" style="height:320px"></center>
---
layout: default
title: "(Task 1) Generating Camera Rays"
title: (Task 1) Camera Rays
parent: "A3: Pathtracer"
permalink: /pathtracer/camera_rays
---
# (Task 1) Generating Camera Rays
"Camera rays" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.)
### Walkthrough Video
<iframe width="750" height="500" src="Task1_Camera_Rays_1.mp4" frameborder="0" allowfullscreen></iframe>
Take a look at `Pathtracer::trace_pixel` in `student/pathtracer.cpp`. The job of this function is to compute the amount of energy arriving at this pixel of the image. Conveniently, we've given you a function `Pathtracer::trace_ray(r)` that provides a measurement of incoming scene radiance along the direction given by ray `r`. See `lib/ray.h` for the interface of ray.

When the number of samples per pixel is 1, you should sample incoming radiance at the center of each pixel by constructing a ray `r` that begins at this sensor location and travels through the camera's pinhole. Once you have computed this ray, call `Pathtracer::trace_ray(r)` to get the energy deposited in the pixel. When supersampling is enabled, the expected behavior of the program is described below.
Here are some [rough notes](https://drive.google.com/file/d/0B4d7cujZGEBqVnUtaEsxOUI4dTMtUUItOFR1alQ4bmVBbnU0/view) giving more detail on how to generate camera rays.
......@@ -18,27 +20,33 @@ This tutorial from [Scratchapixel](https://www.scratchapixel.com/lessons/3d-basi
**Step 1:** Given the width and height of the screen, and point in screen space, compute the corresponding coordinates of the point in normalized ([0-1]x[0-1]) screen space in `Pathtracer::trace_pixel`. Pass these coordinates to the camera via `Camera::generate_ray` in `camera.cpp`.
**Step 2:** Implement `Camera::generate_ray`. This function should return a ray **in world space** that reaches the given sensor sample point. We recommend that you compute this ray in camera space (where the camera pinhole is at the origin, the camera is looking down the -Z axis, and +Y is at the top of the screen). In `util/camera.h`, the `Camera` class stores `vert_fov` and `aspect_ratio` indicating the vertical field of view of the camera (in degrees, not radians) as well as the aspect ratio. Note that the camera maintains a camera-space-to-world-space transform matrix `iview` that will come in handy.
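A sketch of the camera-space computation inside `Camera::generate_ray`; apart from `vert_fov`, `aspect_ratio`, and `iview`, the names and the exact way the matrix and `Ray` are applied/constructed below are assumptions about the starter code:

    // screen_coord is the normalized sensor position in [0,1]^2, with (0.5, 0.5) at the center.
    float fov_rad = vert_fov * 3.14159265f / 180.0f; // vert_fov is given in degrees
    float half_h = std::tan(0.5f * fov_rad);         // half the sensor height on the z = -1 plane
    float half_w = aspect_ratio * half_h;

    // Camera space: pinhole at the origin, looking down -Z, +Y up.
    Vec3 sensor_point((2.0f * screen_coord.x - 1.0f) * half_w,
                      (2.0f * screen_coord.y - 1.0f) * half_h, -1.0f);

    // iview maps camera space to world space: transform the pinhole and the sensor
    // point, then build the ray between them.
    Vec3 world_origin = iview * Vec3(0.0f, 0.0f, 0.0f);
    Vec3 world_point = iview * sensor_point;
    return Ray(world_origin, (world_point - world_origin).unit());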
**Step 3:** Your implementation of `Pathtracer::trace_pixel` must support super-sampling. The member `Pathtracer::n_samples` specifies the number of samples of scene radiance to evaluate per pixel. The starter code will therefore call `Pathtracer::trace_pixel` once for each sample, so your implementation of `Pathtracer::trace_pixel` should choose a new location within the pixel each time.
To choose a sample within the pixel, you should implement `Rect::Uniform::sample` (see `src/student/samplers.cpp`), such that it provides (random) uniformly distributed 2D points within the rectangular region specified by the origin and the member `Rect::Uniform::size`. You may then create a `Rect::Uniform` sampler with a one-by-one region and call `sample()` to obtain randomly chosen offsets within the pixel.
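A sketch of that sampler body; the uniform random helper (written here as `RNG::unit()`, returning a float in [0, 1)) is an assumption, so use whatever the starter code's RNG actually provides:

    // Inside Rect::Uniform::sample: a 2D point uniformly distributed over the
    // rectangle spanning (0, 0) to size.
    return Vec2(RNG::unit() * size.x, RNG::unit() * size.y);

In `Pathtracer::trace_pixel`, adding such a sample from a one-by-one sampler to the pixel's integer corner and dividing by the image width and height then gives the jittered normalized screen coordinate.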
Once you have implemented `Pathtracer::trace_pixel`, `Rect::Uniform::sample` and `Camera::generate_ray`, you should have a working camera.
**Tip:** Since it'll be hard to know if your camera rays are correct until you implement primitive intersection, we recommend debugging your camera rays by checking what your implementation of `Camera::generate_ray` does with rays at the center of the screen (0.5, 0.5) and at the corners of the image.
The code can log the results of raytracing for visualization and debugging. To do so, simply call the function `Pathtracer::log_ray` in your `Pathtracer::trace_pixel`. `Pathtracer::log_ray` takes in 3 arguments: the ray that you want to log, a float that specifies the distance to log that ray up to, and a color for the ray. If you don't pass a color, it will default to white. You should log only a portion of the generated rays, or else the result will be hard to interpret. To do so, you can add `if(RNG::coin_flip(0.0005f)) log_ray(out, 10.0f);` to log 0.05% of camera rays.
Finally, you can visualize the logged rays by checking the box for Logged rays under Visualize and then **starting the render** (Open Render Window -> Start Render). After running the path tracer, rays will be shown as lines in the visualizer. Be sure to wait for rendering to complete so you see all rays while visualizing.
![logged_rays](new_results/log_rays.png)
**Step 4:** `Camera` also includes the members `aperture` and `focal_dist`. These parameters are used to simulate the effects of de-focus blur and bokeh found in real cameras. Focal distance represents the distance between the camera aperture and the plane that is perfectly in focus. To use it, simply scale up the sensor position from Step 2 (and hence the ray direction) by `focal_dist` instead of leaving it on the `z = -1` plane. You might notice that this doesn't actually change anything about your result, since this is just scaling up a vector that is later normalized. However, now aperture comes in: by default, all rays start at a single point, representing a pinhole camera. But when `aperture > 0`, we want to randomly choose the ray origin from an `aperture`x`aperture` square centered at the origin and facing the camera direction (-Z). Then, we use this point as the starting point of the ray while keeping its sensor position fixed (consider how that changes the ray direction). Now it's as if the same image was taken from slightly off origin. This simulates real cameras with non-pinhole apertures: the final photo is equivalent to averaging images taken by pinhole cameras placed at every point in the aperture.
Finally, we can see that a non-zero aperture makes focal distance matter: objects on the focal plane are unaffected, since where the ray hits on the sensor is the same regardless of the ray's origin. However, rays that hit objects closer or farther than the focal distance will be able to "see" slightly different parts of the object based on the ray origin. Averaging over many rays within a pixel, this results in collecting colors from a region slightly larger than the one that pixel would cover given zero aperture, causing the object to become blurry. We are using a square aperture, so bokeh effects will reflect this.
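In camera space the change from Step 2 is small; a sketch, again treating the random helper and exact member names as assumptions:

    // Depth of field: push the sensor point out to the plane of perfect focus, then
    // shoot the ray from a random point on the square aperture instead of the pinhole.
    Vec3 focal_point = sensor_point * focal_dist;     // sensor_point from the z = -1 plane in Step 2

    Vec3 lens_origin(aperture * (RNG::unit() - 0.5f), // uniform over an aperture x aperture square
                     aperture * (RNG::unit() - 0.5f),
                     0.0f);                           // the aperture lies in the camera's z = 0 plane

    Vec3 dir = (focal_point - lens_origin).unit();    // the direction now depends on the lens sample
    // ...then transform lens_origin and dir to world space exactly as in Step 2.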
You can test aperture/focal distance by adjusting `aperture` and `focal_dist` using the camera UI and examining logging rays. Once you have implemented primitive intersections and path tracing (tasks 3/5), you will be able to properly render `dof.dae`:
![depth of field test](new_results/dof.png)
**Extra credit ideas:**
* Modify the implementation of the camera to simulate a camera with a finite aperture (rather than a pinhole camera). This will allow your ray tracer to simulate the effect of defocus blur.
* Write your own camera pixel sampler (replacing `Rect::Uniform`) that generates samples with improved distribution. Some examples include:
* Jittered Sampling
* Multi-jittered sampling
* N-Rooks (Latin Hypercube) sampling
......
---
layout: default
title: "Dielectrics and Transmission"
title: Dielectrics and Transmission
parent: (Task 6) Materials
grand_parent: "A3: Pathtracer"
permalink: /pathtracer/dielectrics_and_transmission
---
......
---
layout: default
title: "(Task 7) Environment Lighting"
title: (Task 7) Environment Lighting
parent: "A3: Pathtracer"
permalink: /pathtracer/environment_lighting
has_children: true
has_toc: false
---
# (Task 7) Environment Lighting
......@@ -35,7 +38,7 @@ For more HDRIs for creative environment maps, check out [HDRIHAVEN](https://hdri
Much like light in the real world, most of the energy provided by an environment light source is concentrated in the directions toward bright light sources. **Therefore, it makes sense to bias selection of sampled directions towards the directions for which incoming radiance is the greatest.** In this final task you will implement an importance sampling scheme for environment lights. For environment lights with large variation in incoming light intensities, good importance sampling will significantly improve the quality of renderings.
The basic idea is that you will assign a probability to each pixel in the environment map based on the total flux passing through the solid angle it represents.
A pixel with coordinate <img src="environment_eq1.png" width ="45"> subtends an area <img src="environment_eq2.png" width = "80"> on the unit sphere (where <img src="environment_eq3.png" width = "20"> and <img src="environment_eq4.png" width = "20"> are the angles subtended by each pixel -- as determined by the resolution of the texture). Thus, the flux through a pixel is proportional to <img src="environment_eq5.png" width = "45">. (We only care about the relative flux through each pixel to create a distribution.)
......@@ -64,12 +67,12 @@ Given the marginal distribution for <img src="environment_eq9.png" width ="10">
ennis.exr with 32 spp
![ennis](new_results/ennis32importance.png)
uffiz.exr with 32 spp
![uffiz](new_results/uffiz32importance.png)
field.exr with 1024 spp
![ennis](new_results/field1024importance.png)
---
layout: default
title: "Environment Light Importance Sampling"
title: Environment Light Importance Sampling
grand_parent: "A3: Pathtracer"
parent: (Task 7) Environment Lighting
permalink: /pathtracer/importance_sampling
---
# Environment Light Importance Sampling
A pixel with coordinate <img src="environment_eq1.png" style="height:18px"> subtends an area <img src="environment_eq2.png" style="height:18px"> on the unit sphere (where <img src="environment_eq3.png" style="height:16px"> and <img src="environment_eq4.png" style="height:18px"> are the angles subtended by each pixel -- as determined by the resolution of the texture). Thus, the flux through a pixel is proportional to <img src="environment_eq5.png" style="height:14px">. (We only care about the relative flux through each pixel to create a distribution.)
**Summing the fluxes for all pixels, then normalizing the values so that they sum to one, yields a discrete probability distribution for picking a pixel based on flux through its corresponding solid angle on the sphere.**
The question is now how to sample from this 2D discrete probability distribution. We recommend the following process which reduces the problem to drawing samples from two 1D distributions, each time using the inversion method discussed in class:
* Given <img src="environment_eq6.png" style="height:18px">, the probability distribution for all pixels, compute the marginal probability distribution <img src="environment_eq7.png" style="height:32px"> for selecting a value from each row of pixels.
* Then, for any pixel, compute the conditional probability <img src="environment_eq8.png" style="height:40px">.
Given the marginal distribution for <img src="environment_eq9.png" style="height:14px"> and the conditional distributions <img src="environment_eq10.png" style="height:18px"> for environment map rows, it is easy to select a pixel as follows:
1. Use the inversion method to first select a "row" of the environment map according to <img src="environment_eq11.png" style="height:18px">.
2. Given this row, use the inversion method to select a pixel in the row according to <img src="environment_eq12.png" style="height:18px">.
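A self-contained sketch of both steps, with plain arrays standing in for the environment map; `pdf[y][x]` is the normalized per-pixel flux described above:

    #include <algorithm>
    #include <cstddef>
    #include <utility>
    #include <vector>

    struct EnvDist {
        std::vector<float> row_cdf;              // marginal CDF over rows, length h
        std::vector<std::vector<float>> col_cdf; // conditional CDF within each row, h x w
    };

    // Build the marginal and conditional CDFs from a normalized probability table pdf[y][x].
    EnvDist build_dist(const std::vector<std::vector<float>>& pdf) {
        size_t h = pdf.size(), w = pdf[0].size();
        EnvDist d;
        d.row_cdf.resize(h);
        d.col_cdf.assign(h, std::vector<float>(w));
        float total = 0.0f;
        for(size_t y = 0; y < h; y++) {
            float row_sum = 0.0f;
            for(size_t x = 0; x < w; x++) {
                row_sum += pdf[y][x];
                d.col_cdf[y][x] = row_sum;
            }
            // conditional CDF for row y (a real implementation should guard all-zero rows)
            for(size_t x = 0; x < w; x++) d.col_cdf[y][x] /= row_sum;
            total += row_sum;
            d.row_cdf[y] = total;
        }
        for(size_t y = 0; y < h; y++) d.row_cdf[y] /= total; // normalize the marginal CDF
        return d;
    }

    // Inversion method: two uniform random numbers u1, u2 in [0,1) pick a row, then a column.
    std::pair<size_t, size_t> sample_pixel(const EnvDist& d, float u1, float u2) {
        size_t y = std::lower_bound(d.row_cdf.begin(), d.row_cdf.end(), u1) - d.row_cdf.begin();
        y = std::min(y, d.row_cdf.size() - 1);
        size_t x = std::lower_bound(d.col_cdf[y].begin(), d.col_cdf[y].end(), u2) - d.col_cdf[y].begin();
        x = std::min(x, d.col_cdf[y].size() - 1);
        return {y, x};
    }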
---
layout: default
title: "(Task 2) Intersecting Objects"
title: (Task 2) Intersections
permalink: /pathtracer/intersecting_objects
parent: "A3: Pathtracer"
has_children: true
has_toc: false
---
# (Task 2) Intersecting Objects
Now that your ray tracer generates camera rays, we need to be able to answer the core query in ray tracing: "does this ray hit this object?" Here, you will start by implementing ray-object intersection routines against the two types of objects in the starter code: triangles and spheres. Later, we will use a BVH to accelerate these queries, but for now we consider an intersection test against a single object.
First, take a look at `rays/object.h` for the interface of the `Object` class. An `Object` can be **either** a `Tri_Mesh`, a `Shape`, a BVH (which you will implement in Task 3), or a list of `Objects`. Right now, we are only dealing with the `Tri_Mesh` and `Shape` cases, and their interfaces are in `rays/tri_mesh.h` and `rays/shapes.h`, respectively. `Tri_Mesh` contains a BVH of `Triangle`, and in this task you will be working with the `Triangle` class. For `Shape`, you are going to work with `Sphere`s, which is the major type of `Shape` in Scotty3D.
Now, you need to implement the `hit` routine for both `Triangle` and `Sphere`. `hit` takes in a ray and returns a `Trace` structure, which contains information on whether the ray hits the object and, if it does, information describing the surface at the point of the hit. See `rays/trace.h` for the definition of `Trace`.
......@@ -31,7 +34,7 @@ While faster implementations are possible, we recommend you implement ray-triang
There are two important details you should be aware of about intersection:
* When finding the first-hit intersection with a triangle, you need to fill in the `Trace` structure with details of the hit. The structure should be initialized with:
* `hit`: a boolean representing if there is a hit or not
* `time`: the ray's _t_-value of the hit point
* `position`: the exact position of the hit point. This can easily be computed from the `time` above together with the ray's `point` and `dir`.
......@@ -48,6 +51,11 @@ While you are working with `student/tri_mesh.cpp`, you should implement `Triangl
You also need to implement the `hit` routine for the `Sphere` class in `student/shapes.cpp`. Remember that your intersection tests should respect the ray's `time_bound`. Because spheres always represent closed surfaces, you should not flip back-facing normals as you did with triangles.
Note: take care **not** to use the `Vec3::normalize()` method when computing your
normal vector. You should instead use `Vec3::unit()`, since `Vec3::normalize()`
will actually change the `Vec3` object passed in rather than returning a
normalized version.
---
[Visualization of normals](visualization_of_normals.md) might be very helpful with debugging.
---
layout: default
title: "(Task 6) Materials"
title: (Task 6) Materials
permalink: /pathtracer/materials
parent: "A3: Pathtracer"
has_children: true
has_toc: false
---
# (Task 6) Materials
<center><img src="bsdf_diagrams.png" style="height:200px"></center>
Now that you have implemented the ability to sample more complex light paths, it's finally time to add support for more types of materials (other than the fully Lambertian material that you have implemented in Task 5). In this task you will add support for two types of materials: a perfect mirror and glass (a material featuring both specular reflection and transmittance) in `student/bsdf.cpp`.
To get started, take a look at the BSDF interface in `rays/bsdf.h`. There are a number of key methods you should understand in the `BSDF` class:
......@@ -20,8 +25,7 @@ There are also two helper functions in the BSDF class in `student/bsdf.cpp` that
* `Vec3 refract(Vec3 out_dir, float index_of_refraction, bool& was_internal)` returns the ray that results from refracting the ray in `out_dir` about the surface according to [Snell's Law](http://15462.courses.cs.cmu.edu/fall2015/lecture/reflection/slide_032). The surface's index of refraction is given by the argument `index_of_refraction`. Your implementation should assume that if the ray in `out_dir` **is entering the surface** (that is, if `cos(out_dir, N=[0,1,0]) > 0`) then the ray is currently in vacuum (index of refraction = 1.0). If `cos(out_dir, N=[0,1,0]) < 0` then your code should assume the ray is leaving the surface and entering vacuum. **In the case of total internal reflection, you should set `was_internal` to `true`.**
* Note that in `reflect` and `refract`, both the `out_dir` and the returned in-direction are pointing away from the intersection point of the ray and the surface, as illustrated in this picture below.
<center><img src="rays_dir.png" style="height:420px"></center>
## Step 1
Implement the class `BSDF_Mirror`, which represents a material with perfect specular reflection (a perfect mirror). You should implement `BSDF_Mirror::sample`, `BSDF_Mirror::evaluate`, and `reflect`. **(Hint: what should the pdf sampled by `BSDF_Mirror::sample` be? What should the reflectance function `BSDF_Mirror::evaluate` be?)**
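Since the BSDF works in a local frame where the surface normal is (0, 1, 0) (the same convention `refract` uses above), `reflect` reduces to a sign flip. A sketch, with the exact signature left to `student/bsdf.cpp`:

    // Reflect dir about the local surface normal N = (0, 1, 0): the tangential (x, z)
    // components flip sign and the normal (y) component is kept. Both dir and the
    // returned direction point away from the surface.
    Vec3 reflect(Vec3 dir) {
        return Vec3(-dir.x, dir.y, -dir.z);
    }

For `BSDF_Mirror::sample` the outgoing direction is then deterministic, so think about what pdf makes the Monte Carlo estimate come out right (the hint above).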
......@@ -65,5 +69,5 @@ We described the BRDF for perfect specular reflection in class, however we did n
When you are done, you will be able to render images like these:
<center><img src="new_results/32k_large.png"></center>
---
layout: default
title: "PathTracer Overview"
title: "A3: Pathtracer"
permalink: /pathtracer/
nav_order: 6
has_children: true
has_toc: false
---
# PathTracer Overview
PathTracer is (as the name suggests) a simple path tracer that can render scenes with global illumination. The first part of the assignment will focus on providing an efficient implementation of **ray-scene geometry queries**. In the second half of the assignment you will **add the ability to simulate how light bounces around the scene**, which will allow your renderer to synthesize much higher-quality images. Much like in MeshEdit, input scenes are defined in COLLADA files, so you can create your own scenes to render using Scotty3D or other free software like [Blender](https://www.blender.org/).
![CBsphere](new_results/32k_large.png)
<center><img src="raytracing_diagram.png" style="height:240px"></center>
Implementing the functionality of PathTracer is split into 7 tasks, and here are the instructions for each of them:
- [(Task 1) Generating Camera Rays](camera_rays)
- [(Task 2) Intersecting Objects](intersecting_objects)
- [(Task 3) Bounding Volume Hierarchy](bounding_volume_hierarchy)
- [(Task 4) Shadow Rays](shadow_rays)
- [(Task 5) Path Tracing](path_tracing)
- [(Task 6) Materials](materials)
- [(Task 7) Environment Lighting](environment_lighting)
The files that you will work with for PathTracer are all under the `src/student` directory. Some of the particularly important ones are outlined below. Methods that we expect you to implement are marked with "TODO (PathTracer)", which you may search for.
......
---
layout: default
title: "(Task 5) Path Tracing"
title: (Task 5) Path Tracing
permalink: /pathtracer/path_tracing
parent: "A3: Pathtracer"
---
# (Task 5) Path Tracing
......@@ -15,15 +16,21 @@ The basic structure will be as follows:
* (1) Randomly select a new ray direction using `bsdf.sample` (which you will implement in Step 2)
* (2) Potentially terminate the path (using Russian roulette)
* (3) Recursively trace the ray to evaluate the weighted reflectance contribution due to light arriving from this direction. Remember to respect the maximum number of bounces from `max_depth` (which is a member of class `Pathtracer`). Don't forget to add in the BSDF emissive component!
## Step 2
Now, implement `BSDF_Lambertian::sample` for diffuse reflections, which randomly samples a direction from a uniform hemisphere distribution and returns a `BSDF_Sample`. Note that the interface is in `rays/bsdf.h`. Task 6 contains further discussion of sampling BSDFs, so reading ahead may help your understanding. The implementation of `BSDF_Lambertian::evaluate` is already provided to you.
Note:
* When adding the recursive term to the total radiance, you will need to account
for emissive materials, like the ceiling light in the Cornell Box
(cbox.dae). To do this, simply add the BSDF sample's emissive term to your
total radiance, i.e. `L += sample.emissive`.
* The `Sampler` classes in `student/samplers.cpp` contain helper functions for random sampling, which you will use for sampling. Our starter code uses uniform hemisphere sampling `Samplers::Hemisphere::Uniform sampler` (see `rays/bsdf.h` and `student/samplers.cpp`), which is already implemented. You are welcome to implement cosine-weighted hemisphere sampling for extra credit, but it is not required. If you want to implement cosine-weighted hemisphere sampling, fill in `Hemisphere::Cosine::sample` in `student/samplers.cpp` and then change `Samplers::Hemisphere::Uniform sampler` to `Samplers::Hemisphere::Cosine sampler` in `rays/bsdf.h`.
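Putting the pieces together, the sampling routine might look like the sketch below; the `BSDF_Sample` field names and the hemisphere sampler's exact call signature are assumptions, so check `rays/bsdf.h` for the real interface:

    // Sketch of BSDF_Lambertian::sample: pick an incoming direction uniformly over the
    // hemisphere, record its pdf (1 / (2*pi) for uniform hemisphere sampling), and
    // report the Lambertian reflectance via the provided evaluate().
    BSDF_Sample BSDF_Lambertian::sample(Vec3 out_dir) const {
        BSDF_Sample ret;
        ret.direction = sampler.sample(ret.pdf);            // uniform hemisphere sampler (already implemented)
        ret.attenuation = evaluate(out_dir, ret.direction); // provided Lambertian evaluate()
        ret.emissive = Spectrum();                          // a plain diffuse surface emits nothing
        return ret;
    }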
---
......@@ -36,11 +43,13 @@ Note the time-quality tradeoff here. With these commandline arguments, your path
![spheres](new_results/timing.png)
Also note that if you have enabled Russian Roulette, your result may seem noisier, but it should complete faster. The point of Russian roulette is not to increase sample quality, but to allow the computation of more samples in the same amount of time, resulting in a higher-quality result.
Here are a few tips:
* The path termination probability should be computed based on the [overall throughput](http://15462.courses.cs.cmu.edu/fall2015/lecture/globalillum/slide_044) of the path. The throughput of the ray is recorded in its `throughput` member, which represents the multiplicative factor the current radiance will be affected by before contributing to the final pixel color. Hence, you should both use and update this field. To update it, simply multiply in the rendering equation factors: BSDF attenuation, `cos(theta)`, and (inverse) BSDF PDF. Remember to apply the coefficients from the current step before deriving the termination probability. Finally, note that the updated throughput should be copied to the recursive ray for later steps.
Keep in mind that delta function BSDFs can take on values greater than one, so clamping termination probabilities derived from BSDF values to 1 is wise.
* To convert a Spectrum to a termination probability, we recommend you use the luminance (overall brightness) of the Spectrum, which is available via `Spectrum::luma`
......
---
layout: default
title: Ray Sphere Intersection
permalink: /pathtracer/ray_sphere_intersection
grand_parent: "A3: Pathtracer"
parent: (Task 2) Intersections
---
# Ray Sphere Intersection
<center><img src="sphere_intersect_diagram.png" style="height:320px"></center>
<center><img src="sphere_intersect_eqns.png" style="height:400px"></center>
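The equations above boil down to a quadratic in the ray parameter t. A standalone sketch, assuming a unit-length ray direction and a sphere centered at the origin of its local space:

    #include <cmath>
    #include <utility>

    // Solve |o + t*d|^2 = r^2 for t, with d unit length. Returns (t_near, t_far),
    // or (-1, -1) when the ray misses the sphere entirely.
    std::pair<float, float> ray_sphere(Vec3 o, Vec3 d, float radius) {
        float b = dot(o, d);                   // quadratic is t^2 + 2*b*t + c = 0 because dot(d, d) = 1
        float c = dot(o, o) - radius * radius;
        float disc = b * b - c;
        if(disc < 0.0f) return {-1.0f, -1.0f}; // no real roots
        float s = std::sqrt(disc);
        return {-b - s, -b + s};               // the caller still checks these against the ray's time bounds
    }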