Commit c0f8c29a authored by allai5

port updates from dev, fix video autoplay issue

parent afa3f68f
@@ -20,7 +20,7 @@ However, if the particle can have arbitrary forces applied to it, we no longer k
<center><img src="task4_media/fma.png"></center>
There are many different techniques for integrating our equations of motion, including forward, backward, and symplectic Euler, Verlet, Runge-Kutta, and Leapfrog. Each strategy comes with a slightly different way of computing how much to update our velocity and position across a single time-step. In this case, we will use the simplest - forward Euler - as we aren't too concerned with stability or energy conservation. Forward Euler simply steps our position forward by our velocity, and then velocity by our acceleration:
<center><img src="task4_media/euler.png"></center>
@@ -28,18 +28,31 @@ In `Particle::update`, use this integrator to step the current particle forward in time
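To make the update concrete, here is a minimal sketch of a single forward Euler step. The `Vec3` and `Particle` types below are bare-bones stand-ins, not Scotty3D's actual classes, and the entry point exists only so the sketch compiles on its own:

```cpp
// Minimal sketch of one forward Euler step: position first (using the current
// velocity), then velocity (using the acceleration). Illustrative types only,
// not Scotty3D's actual Particle class.
#include <cstdio>

struct Vec3 {
    float x = 0.0f, y = 0.0f, z = 0.0f;
};
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Particle {
    Vec3 pos, velocity;
};

void euler_step(Particle& p, Vec3 acceleration, float dt) {
    p.pos = p.pos + p.velocity * dt;             // x <- x + v * dt
    p.velocity = p.velocity + acceleration * dt; // v <- v + a * dt
}

int main() {
    Particle p{{0.0f, 10.0f, 0.0f}, {1.0f, 0.0f, 0.0f}};
    euler_step(p, {0.0f, -9.8f, 0.0f}, 0.01f);
    std::printf("pos = (%.3f, %.3f, %.3f)\n", p.pos.x, p.pos.y, p.pos.z);
    return 0;
}
```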
## Collisions
The more substantial part of this task is colliding each particle with the rest of our scene geometry. Thankfully, you've already done most of the work required here in A3: we can use Scotty3D's ray-tracing capabilities to find collisions.
During each timestep, we know that in the absence of a collision, our particle will travel in the direction of velocity for distance `||velocity|| * dt`. Hence, we can create a ray from the particle's position and velocity to look for collisions during the time-step. If the ray intersects with the scene, we can compute exactly when the particle would experience a collision.
When a collision is found, we could just place the particle at the collision point and be done, but we don't want our particles to simply stick to the surface! Instead, we will assume all particles collide elastically (and massless-ly): the magnitude of a particle's velocity should be the same before and after a collision, but its direction should be reflected about the normal of the collided surface.
Once we have the reflected velocity, we can compute how much of the time-step remains after the collision and step the particle forward by that amount. But what if the particle collided with the scene _again_ before the end of the time-step? If we are using very small time-steps, it might be acceptable to ignore this possibility, but we want to resolve all collisions. So, we can repeat the ray-casting procedure in a loop until we have used up the entire time-step, up to some epsilon. Remember to only use the remaining portion of the time-step each iteration, and to step forward both the velocity and position at each sub-step.
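To tie the pieces together, below is a rough structural sketch of this sub-stepping loop for point particles (radius handling is discussed in the next section). The `trace` query, the `Hit` record, and the vector helpers are placeholders rather than Scotty3D's actual scene interface:

```cpp
// Structural sketch of the sub-stepping collision loop for point particles.
// `trace` stands in for a real ray-vs-scene query; all types are placeholders.
#include <cmath>
#include <optional>

struct Vec3 {
    float x = 0.0f, y = 0.0f, z = 0.0f;
};
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Hit {
    float distance; // distance along the (unit-direction) ray to the surface
    Vec3 normal;    // surface normal at the hit point
};

// Placeholder: a real implementation would cast a ray through the scene.
std::optional<Hit> trace(Vec3 /*origin*/, Vec3 /*unit_dir*/) { return std::nullopt; }

void step_with_collisions(Vec3& pos, Vec3& velocity, Vec3 accel, float dt) {
    const float eps = 1e-4f;
    float remaining = dt;
    while (remaining > eps) {
        float speed = std::sqrt(dot(velocity, velocity));
        if (speed <= 0.0f) { velocity = velocity + accel * remaining; break; }

        Vec3 dir = velocity * (1.0f / speed);
        std::optional<Hit> hit = trace(pos, dir);

        // Only collisions that happen within this sub-step matter.
        if (hit && hit->distance <= speed * remaining) {
            float t_hit = hit->distance / speed; // time until the collision
            pos = pos + velocity * t_hit;
            // Elastic bounce: reflect the velocity about the surface normal.
            velocity = velocity - hit->normal * (2.0f * dot(velocity, hit->normal));
            velocity = velocity + accel * t_hit;
            remaining -= t_hit;
        } else {
            // No collision in this sub-step: plain forward Euler for the rest.
            pos = pos + velocity * remaining;
            velocity = velocity + accel * remaining;
            remaining = 0.0f;
        }
    }
}

int main() {
    Vec3 pos{0.0f, 1.0f, 0.0f}, vel{0.0f, -1.0f, 0.0f};
    step_with_collisions(pos, vel, {0.0f, -9.8f, 0.0f}, 0.05f);
    return 0;
}
```

Tracking the remaining time explicitly, rather than recursing, keeps the loop easy to terminate once the epsilon threshold is reached.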
### Spherical Particles
The above procedure is perfectly realistic for point particles, but we would like to draw our particles with some non-zero size and still have the simulation appear natural.
We will hence represent particles as small spheres. This means you must take `radius` into account when finding intersections: if the collision ray intersects a surface, at what distance does the closest point on the sphere incur the collision? (Hint - it depends on the angle of intersection.)
Of course, simply finding closer intersections based on `radius` will not resolve all sphere-scene intersections: the ray itself can miss all geometry even when the edge of the sphere would collide. However, this approximation produces visually acceptable results, greatly reducing overlap and never letting particles 'leak' through geometry.
<center><img src="task4_media/collision.png"></center> <center><img src="task4_media/collision.png"></center>
Once you have got collisions working, you should be able to open `particles.dae` and see a randomized collision-fueled waterfall. Try rendering the scene! Once you have got collisions working, you should be able to open `particles.dae` and see a randomized collision-fueled waterfall. Try rendering the scene!
Tips:
- **Don't** use `abs()`. In GCC, this is the integer-only absolute value function. To get the float version, use `std::abs()` or `fabsf()`.
- When accounting for radius, you don't know how far along the ray a collision might occur. Look for a collision at any distance, and if it occurs after the end of the current timestep, ignore it.
- When accounting for radius, consider in what situation(s) you could find a collision at a negative time. In this case, you should not move the particle - just reflect the velocity.
## Lifetime
Finally, note that `Particle::update` is supposed to return a boolean representing whether or not the particle should be removed from the simulation. Each particle has an `age` member that represents the remaining time it has to live. Each time-step, you should subtract `dt` from `age` and return whether `age` is still greater than zero.
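For reference, a minimal sketch of that bookkeeping, with the surrounding class purely illustrative:

```cpp
// Sketch of the lifetime update described above: age counts down by dt, and
// the return value says whether the particle survives. Illustrative class only.
struct Particle {
    float age = 0.0f; // remaining lifetime, in the same units as dt

    bool update_lifetime(float dt) {
        age -= dt;
        return age > 0.0f;
    }
};
```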
......
@@ -24,8 +24,6 @@ Your implementation should have the following basic steps for each vertex:
Below we have an equation representation. The ith vertex v is the new vertex position. The weight w is computed as the inverse of the distance between the ith vertex and the closest point on joint j. We multiply this weight by the position of the ith vertex v with respect to joint j after the joint's transformations have been applied. A plain-text restatement follows the images.
<center><img src="task3_media/skinning_eqn1.png" style="height:100px"> <center><img src="task3_media/skinning_eqn1.png" style="height:100px">
<img src="task3_media/skinning_eqn2.png" style="height:120px"></center> <img src="task3_media/skinning_eqn2.png" style="height:120px"></center>
......
@@ -8,7 +8,9 @@ parent: "A3: Pathtracer"
# (Task 3) Bounding Volume Hierarchy
### Walkthrough Video
<video width="750" height="500" controls>
<source src="Task3_BVH.mp4" type="video/mp4">
</video>
In this task you will implement a bounding volume hierarchy that accelerates ray-scene intersection. Most of this work will be in `student/bvh.inl`. Note that this file has an unusual extension (`.inl` = inline) because it is an implementation file for a template class. This means `bvh.h` must `#include` it, so all code that sees `bvh.h` will also see `bvh.inl`.
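As a minimal illustration of why the header pulls in the implementation file, consider the sketch below; the class, members, and file layout are stand-ins, not Scotty3D's real declarations:

```cpp
// Why bvh.h must #include bvh.inl: template member definitions have to be
// visible in every translation unit that instantiates them. The class and
// file layout below are illustrative, not Scotty3D's actual interface.
#include <cstddef>

// --- what would live in bvh.h ----------------------------------------------
template<typename Primitive>
struct BVH {
    void build(std::size_t max_leaf_size);
};
// (bvh.h would end with `#include "bvh.inl"`)

// --- what would live in bvh.inl ----------------------------------------------
template<typename Primitive>
void BVH<Primitive>::build(std::size_t max_leaf_size) {
    (void)max_leaf_size; // real code partitions primitives and builds nodes here
}

// --- any file that includes bvh.h ---------------------------------------------
struct Triangle {};

int main() {
    BVH<Triangle> bvh;
    bvh.build(4); // instantiation works because the definition was visible
    return 0;
}
```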
......
@@ -8,7 +8,9 @@ permalink: /pathtracer/camera_rays
# (Task 1) Generating Camera Rays
### Walkthrough Video
<video width="750" height="500" controls>
<source src="Task1_CameraRays.mp4" type="video/mp4">
</video>
"Camera rays" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.) Your job is to generate these rays, which is the first step in the raytracing procedure. "Camera rays" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.) Your job is to generate these rays, which is the first step in the raytracing procedure.
......
@@ -10,7 +10,9 @@ has_toc: false
# (Task 7) Environment Lighting
### Walkthrough Video
<video width="750" height="500" controls>
<source src="Task7_EnvMap.mp4" type="video/mp4">
</video>
The final task of this assignment will be to implement a new type of light source: an infinite environment light. An environment light is a light that supplies incident radiance (really, the light intensity dPhi/dOmega) from all directions on the sphere. Rather than using a predefined collection of explicit lights, an environment light is a capture of the actual incoming light from some real-world scene; rendering using environment lighting can be quite striking.
@@ -66,12 +68,8 @@ ennis.exr with 32 spp
uffiz.exr with 32 spp
![uffiz](new_results/uffiz32importance.png)
field.exr with 32 spp
![field](new_results/field32importance.png)
field.exr with 1024 spp
![field](new_results/field1024importance.png)
@@ -9,10 +9,10 @@ has_toc: false
# (Task 6) Materials
### Walkthrough Video
<video width="750" height="500" controls>
<source src="Task6_Materials.mp4" type="video/mp4">
</video>
Now that you have implemented the ability to sample more complex light paths, it's finally time to add support for more types of materials (other than the fully Lambertian material that you have implemented in Task 5). In this task you will add support for two types of materials: a perfect mirror and glass (a material featuring both specular reflection and transmittance) in `student/bsdf.cpp`.
@@ -32,11 +32,9 @@ There are also two helper functions in the BSDF class in `student/bsdf.cpp` that
* `Vec3 refract(Vec3 out_dir, float index_of_refraction, bool& was_internal)` returns the ray that results from refracting the ray in `out_dir` about the surface according to [Snell's Law](http://15462.courses.cs.cmu.edu/fall2015/lecture/reflection/slide_032). The surface's index of refraction is given by the argument `index_of_refraction`. Your implementation should assume that if the ray in `out_dir` **is entering the surface** (that is, if `cos(out_dir, N=[0,1,0]) > 0`) then the ray is currently in vacuum (index of refraction = 1.0). If `cos(out_dir, N=[0,1,0]) < 0` then your code should assume the ray is leaving the surface and entering vacuum. Remember to **flip the sign of the x and z components** of the output ray direction from the refract method.
* There is a special case to account for: **total internal reflection**. This happens when the incoming ray is in the material with the higher refractive index and its angle of incidence exceeds the _critical angle_ - the incident angle \theta_i for which the refracted angle \theta_t reaches 90 degrees. For such angles there is no real solution to Snell's Law. See the equations below.
<center><img src="tir_eqns.png" width="200"></center> <center><img src="tir_eqns.png" width="200"></center>
......
@@ -8,7 +8,9 @@ parent: "A3: Pathtracer"
# (Task 5) Path Tracing
### Walkthrough Video
<video width="750" height="500" controls>
<source src="Task5_PathTracing.mp4" type="video/mp4">
</video>
Up to this point, your renderer simulates light which begins at a source, bounces off a surface, and hits a camera. However, light can take much more complicated paths, bouncing off many surfaces before eventually reaching the camera. Simulating this multi-bounce light is referred to as _indirect illumination_, and it is critical to producing realistic images, especially when specular surfaces are present.
@@ -40,8 +42,6 @@ Note:
* The `Sampler` class in `student/sampler.cpp` contains helper functions for random sampling, which you will use to sample directions. Our starter code uses uniform hemisphere sampling, `Samplers::Hemisphere::Uniform sampler` (see `rays/bsdf.h` and `student/sampler.cpp`), which is already implemented for you.
* If you want to implement Cosine-Weighted Hemisphere sampling for extra credit, fill in `Hemisphere::Cosine::sample` in `student/samplers.cpp` and then in `rays/bsdf.h`change `Samplers::Hemisphere::Uniform sampler` to `Samplers::Hemisphere::Cosine sampler`.
---
After correctly implementing path tracing, your renderer should be able to make a beautifully lit picture of the Cornell Box with Lambertian spheres (`cbox_lambertian.dae`). Below is a render using 1024 samples per pixel (spp):
@@ -56,10 +56,12 @@ Also note that if you have enabled Russian Roulette, your result may seem noisie
## Tips
* The path termination probability should be computed based on the [overall throughput](http://15462.courses.cs.cmu.edu/fall2015/lecture/globalillum/slide_044) of the path. The throughput of the ray is recorded in its `throughput` member, which represents the multiplicative factor the current radiance will be affected by before contributing to the final pixel color. Hence, you should both use and update this field. To update it, simply multiply in the rendering equation factors (BSDF attenuation and `cos(theta)`) and divide by the PDF. Remember to apply the coefficients from the current step before deriving the termination probability. Finally, note that the updated throughput should be copied to the recursive ray for later steps (a rough sketch follows these tips).
* Keep in mind that the throughput can take on values greater than one, so clamping termination probabilities derived from BSDF values to `[0,1]` is wise. Remember that PDF values are _not_ probabilities, so they should _not_ be clamped to 1.
* To convert a Spectrum to a termination probability, we recommend you use the luminance (overall brightness) of the Spectrum, which is available via `Spectrum::luma`
* We've given you some [pretty good notes](http://15462.courses.cs.cmu.edu/fall2015/lecture/globalillum/slide_047) on how to do this part of the assignment, but it can still be tricky to get correct.
* **Don't** use `abs()`. In GCC, this is the integer-only absolute value function. To get the float version, use `std::abs()` or `fabsf()`.
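Tying the first two tips together, here is a minimal self-contained sketch of the throughput update and the Russian roulette test. The `Spectrum` stand-in and the helper names are illustrative, not Scotty3D's actual classes:

```cpp
// Minimal sketch of the throughput update and Russian roulette test described
// in the tips above. The Spectrum type and helpers are stand-ins, not
// Scotty3D's actual classes.
#include <algorithm>
#include <cstdlib>

struct Spectrum {
    float r = 0.0f, g = 0.0f, b = 0.0f;
    float luma() const { return 0.2126f * r + 0.7152f * g + 0.0722f * b; }
    Spectrum operator*(float s) const { return {r * s, g * s, b * s}; }
    Spectrum operator*(Spectrum o) const { return {r * o.r, g * o.g, b * o.b}; }
};

float random_float() { return float(std::rand()) / float(RAND_MAX); }

// Returns true if the path should continue; updates the throughput in place.
bool russian_roulette(Spectrum& throughput, Spectrum bsdf_attenuation,
                      float cos_theta, float pdf) {
    // Apply this bounce's factors first, then derive the termination test.
    throughput = throughput * bsdf_attenuation * (cos_theta / pdf);

    // Clamp the luminance of the throughput to [0,1] as the survival chance.
    float p_continue = std::min(1.0f, std::max(0.0f, throughput.luma()));
    if (random_float() >= p_continue) return false;

    // Reweight survivors so the estimator stays unbiased.
    throughput = throughput * (1.0f / p_continue);
    return true;
}

int main() {
    Spectrum throughput{1.0f, 1.0f, 1.0f};
    bool keep_going = russian_roulette(throughput, {0.5f, 0.5f, 0.5f}, 0.7f, 0.318f);
    (void)keep_going;
    return 0;
}
```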
@@ -17,5 +17,5 @@ intersection point, for ray-triangle intersection.
<center><img src="triangle_intersect_eqns.png" style="height:400px"></center>
A few final notes and thoughts:
- If the denominator _dot((e1 x d), e2)_ is zero, what does that mean about the relationship of the ray and the triangle? Can a triangle with this area be hit by a ray? Given _u_ and _v_, how do you know if the ray hits the triangle? Don't forget that the intersection point on the ray should be within the ray's `dist_bounds`.
- **Don't** use `abs()`. In GCC, this is the integer-only absolute value function. To get the float version, use `std::abs()` or `fabsf()`.
\ No newline at end of file