{"0": { "doc": "Animate", "title": "Animate", "content": "# Animate When you select the Animate tab, a timeline window will show up at the bottom of your screen. Animation is performed by creating **keyframes** and **interpolating** between them. ### Keyframes A keyframe associates an object's pose and properties with a specific frame in the timeline. Keyframes can be associated with all objects (including the camera and individual joints associated with an object) in the scene. To create a keyframe for an object: - Select a frame location along that objects timeline by clicking on the timeline. Remember to do this *before* editing the pose. - Change the pose or properties of the selected object and press `Set`. To set a keyframe for every object in the scene, use `Set All`. Note that only the poses (rotations) of joints can be animated, not their extents. To remove a keyframe in the timeline, click on it and press `Clear` to remove it. Press `Clear All` to clear the current keyframe of every object in the scene. ![animating-cow](../../animation/task1_media/animate_cow.gif) To see your animation, press `Play [space]` . Once you've implemented **spline interpolation**, intermediate frames are generated by interpolating object poses between keyframes. Check `Draw Splines` to visualize the spline along which objects are animated. ![view-spline](../animate_mode/guide-animate-spline.png) `Add Frames` inserts 90 empty frames into the timeline. `Crop End` deletes frames from the selected location to the end of the timeline. ### Posing Once you have [rigged](../rig) an object with a skeleton, it can now be posed by selecting a joint and changing its pose i.e., rotating the joint. This is called Forward Kinematics. Joint poses can also be indirectly changed by using the IK (Inverse Kinematics) handles to provide target positions. Note that IK handles need to be explicitly enabled using the checkbox. Once you've implemented **forward kinematics**, **inverse kinematics** and **skinning**, as you change the pose, the mesh will deform. Different poses can be set as keyframes to animate the object. ", "url": "/guide/animate_mode/", "relUrl": "/guide/animate_mode/" },"1": { "doc": "Bevelling", "title": "Bevelling", "content": "# Beveling Here we provide some additional detail about the bevel operations and their implementation in Scotty3D. Each bevel operation has two components: 1. a method that modifies the _connectivity_ of the mesh, creating new beveled elements, and 2. a method the updates the _geometry_ of the mesh, insetting and offseting the new vertices according to user input. The methods that update the connectivity are `HalfedgeMesh::bevel_vertex`, `halfedgeMesh::bevel_edge`, and `HalfedgeMesh::bevel_face`. The methods that update geometry are `HalfedgeMesh::bevel_vertex_positions`, `HalfedgeMesh::bevel_edge_positions`, and `HalfedgeMesh::bevel_face_positions`. The methods for updating connectivity can be implemented following the general strategy outlined in [edge flip tutorial](edge_flip). **Note that the methods that update geometry will be called repeatedly for the same bevel, in order to adjust positions according to user mouse input. See the gif in the [User Guide](../guide/model).** To update the _geometry_ of a beveled element, you are provided with the following data: * `start_positions` - These are the original vertex positions of the beveled mesh element, without any insetting or offsetting. * `face` - This is a reference to the face currently being beveled. 
This was returned by one of the connectivity functions. * `tangent_offset` - The amount by which the new face should be inset (i.e., \"shrunk\" or \"expanded\") * `normal_offset` - (faces only) The amount by which the new face should be offset in the normal direction. Also note that we provide code to gather the halfedges contained in the beveled face, creating the array `new_halfedges`. You should only have to update the position (`Vertex::pos`) of the vertices associated with this list of halfedges. The basic recipe for updating these positions is: * Iterate over the list of halfedges (`new_halfedges`) * Grab the vertex coordinates that are needed to compute the new, updated vertex coordinates (this could be a mix of values from `start_positions`, or the members `Vertex::pos`) * Compute the updated vertex positions using the current values of `tangent_offset` (and possibly `normal_offset`) * Store the new vertex positions in `Vertex::pos` _for the vertices of the new, beveled polygon only_ (i.e., the vertices associated with each of `new_halfedges`). The reason for storing `new_halfedges` and `start_positions` in an array is that it makes it easy to access positions \"to the left\" and \"to the right\" of a given vertex. For instance, suppose we want to figure out the offset from the corner of a polygon. We might want to compute some geometric quantity involving the three vertex positions `start_positions[i-1]`, `start_positions[i]`, and `start_positions[i+1]` (as well as `tangent_offset`), then set the new vertex position `new_halfedges[i]->vertex()->pos` to this new value: A useful trick here is _modular arithmetic_: since we really have a \"loop\" of vertices, we want to make sure that indexing the next element (+1) and the previous element (-1) properly \"wraps around.\" This can be achieved via code like // Get the number of vertices in the new polygon int N = (int)new_halfedges.size(); // Assuming we're looking at vertex i, compute the indices // of the next and previous elements in the list using // modular arithmetic---note that to get the previous index, // we can't just subtract 1 because the mod operation in C++ // doesn't behave quite how you might expect for negative // values! int a = (i+N-1) % N; int b = i; int c = (i+1) % N; // Get the actual 3D vertex coordinates at these vertices Vec3 pa = start_positions[a]; Vec3 pb = start_positions[b]; Vec3 pc = start_positions[c]; From here, you will need to compute new coordinates for vertex `i`, which can be accessed from `new_halfedges[i]->vertex()->pos`. As a \"dummy\" example (i.e., this is NOT what you should actually do!!) this code will set the position of the new vertex to the average of the vertices above: new_halfedges[i]->vertex()->pos = ( pa + pb + pc ) / 3.; // replace with something that actually makes sense! The only question remaining is: where _should_ you put the beveled vertex? **We will leave this decision up to you.** This question is one where you will have to think a little bit about what a good design would be. Questions to ask yourself: * How do I compute a point that is inset from the original geometry? * For faces, how do I shift the geometry in the normal direction? (You may wish to use the method `Face::normal()` here.) * What should I do as the offset geometry starts to look degenerate, e.g., shrinks to a point, or goes outside some reasonable bounds? * What should I do when the geometry is nonplanar? * Etc. 
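As a concrete starting point, here is a minimal, purely illustrative sketch (not the required design!) of one way the face-bevel geometry update could look: it insets each new vertex toward the centroid of the original positions and offsets the whole loop along the face normal. Exact sign conventions, clamping, and degenerate-case handling are left to you, and the snippet assumes `Vec3` zero-initializes and supports the usual arithmetic operators. ```
// Illustrative only -- one possible geometry update for a beveled face.
// Average the original corner positions to get a centroid to inset toward.
Vec3 centroid; // assumed to default-initialize to (0, 0, 0)
for(size_t i = 0; i < start_positions.size(); i++) {
    centroid += start_positions[i];
}
centroid /= (float)start_positions.size();

Vec3 n = face->normal(); // unit normal of the beveled face

for(size_t i = 0; i < new_halfedges.size(); i++) {
    // Direction from the original corner toward the centroid (in-plane inset).
    Vec3 toward_center = centroid - start_positions[i];
    // Inset by tangent_offset and push along the normal by normal_offset.
    new_halfedges[i]->vertex()->pos =
        start_positions[i] + toward_center * tangent_offset + n * normal_offset;
}
``` Whether you inset by an absolute distance, by a fraction of the distance to the centroid (as above), or by something else entirely is exactly the kind of design decision raised by the questions above. 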
The best way to get a feel for whether you have a good design is _to try it out!_ Can you successfully and easily use this tool to edit your mesh? Or is it a total pain, producing bizarre results? You be the judge! ", "url": "/meshedit/local/bevel/", "relUrl": "/meshedit/local/bevel/" },"2": { "doc": "(Task 3) BVH", "title": "(Task 3) BVH", "content": "# (Task 3) Bounding Volume Hierarchy In this task you will implement a bounding volume hierarchy that accelerates ray-scene intersection. Most of this work will be in `student/bvh.inl`. Note that this file has an unusual extension (`.inl` = inline) because it is an implementation file for a template class. This means `bvh.h` must `#include` it, so all code that sees `bvh.h` will also see `bvh.inl`. First, take a look at the definition for our `BVH` in `rays/bvh.h`. We represent our BVH using a vector of `Node`s, `nodes`, as an implicit tree data structure in the same fashion as heaps that you probably have seen in some other courses. A `Node` has the following fields: * `BBox bbox`: the bounding box of the node (bounds all primitives in the subtree rooted by this node) * `size_t start`: start index of primitives in the `BVH`'s primitive array * `size_t size`: range of indices in the primitive list (number of primitives in the subtree rooted by the node) * `size_t l`: the index of the left child node * `size_t r`: the index of the right child node The BVH class also maintains a vector of all primitives in the BVH. The fields `start` and `size` in the BVH `Node` refer to the range of contained primitives in this array. The primitives in this array are not initially in any particular order, and you will need to _rearrange the order_ as you build the BVH so that your BVH can accurately represent the spatial hierarchy. The starter code constructs a valid BVH, but it is a trivial BVH with a single node containing all scene primitives. Once you are done with this task, you can check the box for BVH in the left bar under \"Visualize\" when you start rendering to visualize your BVH and see each of its levels. Finally, note that the BVH visualizer will start drawing from `BVH::root_idx`, so be sure to set this to the proper index (probably 0 or `nodes.size() - 1`, depending on your implementation) when you build the BVH. ## Step 0: Bounding Box Calculation Implement `BBox::hit` in `student/bbox.cpp`. Also if you haven't already, implement `Triangle::bbox` in `student/tri_mesh.cpp` (`Triangle::bbox` should be fairly straightforward). We recommend checking out this [Scratchapixel article](https://www.scratchapixel.com/lessons/3d-basic-rendering/minimal-ray-tracer-rendering-simple-shapes/ray-box-intersection). ## Step 1: BVH Construction Your job is to construct a `BVH` using the [Surface Area Heuristic](http://15462.courses.cs.cmu.edu/fall2017/lecture/acceleratingqueries/slide_025) discussed in class. Tree construction should occur when the BVH object is constructed. Below is the pseudocode that your BVH construction procedure should generally follow (copied from lecture slides). ## Step 2: Ray-BVH Intersection Implement the ray-BVH intersection routine `Trace BVH::hit(const Ray& ray)`. You may wish to consider the node visit order optimizations we discussed in class. Once complete, your renderer should be able to render all of the test scenes in a reasonable amount of time. [Visualization of normals](visualization_of_normals.md) may help with debugging. 
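To make the node-visit-order idea concrete, here is a rough sketch of a closest-first traversal. It is only an illustration: helper and member names such as `find_closest_hit`, `primitives`, `Node::is_leaf`, `Trace::min`, and the exact out-parameter form of `BBox::hit` below are assumptions, so adapt them to the actual interfaces declared in `rays/bvh.h`, `rays/bbox.h`, and `rays/trace.h`. ```
// Illustrative recursive helper: visit the nearer child first, and skip any
// child whose bounding box the ray misses entirely.
void find_closest_hit(const Ray& ray, size_t idx, Trace& closest) const {
    const Node& node = nodes[idx];
    if(node.is_leaf()) {
        // Leaf: test every primitive in [start, start + size) and keep the nearest hit.
        for(size_t i = node.start; i < node.start + node.size; i++) {
            Trace hit = primitives[i].hit(ray);
            closest = Trace::min(closest, hit);
        }
        return;
    }
    // Internal node: intersect the ray against both child boxes.
    Vec2 t_left, t_right; // assumed entry/exit times written by BBox::hit
    bool hit_left = nodes[node.l].bbox.hit(ray, t_left);
    bool hit_right = nodes[node.r].bbox.hit(ray, t_right);
    // Visit the child whose box is entered first before the other one.
    size_t first_child = node.l, second_child = node.r;
    bool hit_first = hit_left, hit_second = hit_right;
    if(hit_right && (!hit_left || t_right.x < t_left.x)) {
        std::swap(first_child, second_child);
        std::swap(hit_first, hit_second);
    }
    if(hit_first) find_closest_hit(ray, first_child, closest);
    // Further optimization: skip the second child when its box entry time is
    // already beyond the distance of the closest hit found so far.
    if(hit_second) find_closest_hit(ray, second_child, closest);
}
``` `BVH::hit` itself can then simply initialize an empty `Trace` and call this helper starting from `root_idx`. 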
## Visualization In Render mode, simply check the box for \"BVH\", and you would be able to see the BVH you generated in task 3 when you **start rendering**. You can click on the horizontal bar to see each level of your BVH. ## Sample BVHs The BVH constructed for Spot the Cow on the 10th level. The BVH constructed for a scene composed of several cubes and spheres on the 0th and 1st levels. The BVH constructed for the Stanford Bunny on the 10th level. ", "url": "/pathtracer/bounding_volume_hierarchy", "relUrl": "/pathtracer/bounding_volume_hierarchy" },"3": { "doc": "Building Scotty3D", "title": "Building Scotty3D", "content": "# Building Scotty3D ![Ubuntu Build Status](https://github.com/CMU-Graphics/Scotty3D/workflows/Ubuntu/badge.svg) ![MacOS Build Status](https://github.com/CMU-Graphics/Scotty3D/workflows/MacOS/badge.svg) ![Windows Build Status](https://github.com/CMU-Graphics/Scotty3D/workflows/Windows/badge.svg) To get a copy of the codebase, see [Git Setup](git). Note: the first build on any platform will be very slow, as it must compile most dependencies. Subsequent builds will only need to re-compile your edited Scotty3D code. ### Linux The following packages (ubuntu/debian) are required, as well as CMake and either gcc or clang: ``` sudo apt install pkg-config libgtk-3-dev libsdl2-dev ``` The version of CMake packaged with apt may be too old (we are using the latest version). If this is the case, you can install the latest version through pip: ``` pip install cmake export PATH=$PATH:/usr/local/bin ``` Finally, to build the project: ``` mkdir build cd build cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo .. make -j4 ``` The same process should also work modified for your distro/package manager of choice. Note that if you are using Wayland, you may need to set the environment variable ``SDL_VIDEODRIVER=wayland`` when running ``Scotty3D`` for acceptable performance. Notes: - You can instead use ``cmake -DCMAKE_BUILD_TYPE=Debug ..`` to build in debug mode, which, while far slower, makes the debugging experience much more intuitive. - You can replace ``4`` with the number of build processes to run in parallel (set to the number of cores in your machine for maximum utilization). - If you have both gcc and clang installed and want to build with clang, you should run ``CC=clang CXX=clang++ cmake ..`` instead. ### Windows The windows build is easiest to set up using the Visual Studio compiler (for now). To get the compiler, download and install the Visual Studio 2019 Build Tools [here](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019). If you want to instead use the full Visual Studio IDE, you can download Visual Studio Community 2019 [here](https://visualstudio.microsoft.com/downloads/). Be sure to install the \"Desktop development with C++\" component. You can download CMake for windows [here](https://cmake.org/download/). Once the Visual Studio compiler (MSVC) is installed, you can access it by running \"Developer Command Prompt for VS 2019,\" which opens a terminal with the utilities in scope. The compiler is called ``cl``. You can also import these utilities in any terminal session by running the script installed at ``C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Auxiliary\\Build\\vcvars64.bat``. We also provide a simple script, ``build_win.bat``, that will automatically import the compiler and build the project. You should be able to simply run it in the project root to build. 
``Scotty3D.exe`` will be generated under ``build/RelWithDebInfo/``. If you want to build manually, the steps (assuming MSVC is in scope) are: ``` mkdir build cd build cmake .. cmake --build . --config RelWithDebInfo ``` You can also use ``--config Debug`` to build in debug mode, which, while far slower, makes the debugging experience much more intuitive. If you swap this, be sure to make a new build directory for it. Finally, also note that ``cmake ..`` generates a Visual Studio solution file in the current directory. You can open this solution (``Scotty3D.sln``) in Visual Studio itself and use its interface to build, run, and debug the project. (Using the Visual Studio debugger or the provided VSCode launch options for debugging is highly recommended.) ### MacOS The following packages are required, as well as CMake and clang. You can install them with [homebrew](https://brew.sh/): ``` brew install pkg-config sdl2 ``` To build the project: ``` mkdir build cd build cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo .. make -j4 ``` Notes: - You can instead use ``cmake -DCMAKE_BUILD_TYPE=Debug ..`` to build in debug mode, which, while far slower, makes the debugging experience much more intuitive. - You can replace ``4`` with the number of build processes to run in parallel (set to the number of cores in your machine for maximum utilization). ", "url": "/build/", "relUrl": "/build/" },"4": { "doc": "(Task 1) Camera Rays", "title": "(Task 1) Camera Rays", "content": "# (Task 1) Generating Camera Rays ### Walkthrough Video \"Camera rays\" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.) Take a look at `Pathtracer::trace_pixel` in `student/pathtracer.cpp`. The job of this function is to compute the amount of energy arriving at this pixel of the image. Conveniently, we've given you a function `Pathtracer::trace_ray(r)` that provides a measurement of incoming scene radiance along the direction given by ray `r`. See `lib/ray.h` for the interface of `Ray`. Here are some [rough notes](https://drive.google.com/file/d/0B4d7cujZGEBqVnUtaEsxOUI4dTMtUUItOFR1alQ4bmVBbnU0/view) giving more detail on how to generate camera rays. This tutorial from [Scratchapixel](https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-generating-camera-rays/generating-camera-rays) also provides a detailed walkthrough of what you need to do. (Note that the coordinate convention that Scratchapixel adopted is different from the one we use, and you should stick to the coordinate system from the [rough notes](https://drive.google.com/file/d/0B4d7cujZGEBqVnUtaEsxOUI4dTMtUUItOFR1alQ4bmVBbnU0/view) all the time.) **Step 1:** Given the width and height of the screen, and a point in screen space, compute the corresponding coordinates of the point in normalized ([0-1]x[0-1]) screen space in `Pathtracer::trace_pixel`. Pass these coordinates to the camera via `Camera::generate_ray` in `camera.cpp`. **Step 2:** Implement `Camera::generate_ray`. This function should return a ray **in world space** that reaches the given sensor sample point. We recommend that you compute this ray in camera space (where the camera pinhole is at the origin, the camera is looking down the -Z axis, and +Y is at the top of the screen). 
In `util/camera.h`, the `Camera` class stores `vert_fov` and `aspect_ratio` indicating the vertical field of view of the camera (in degrees, not radians) as well as the aspect ratio. Note that the camera maintains a camera-space-to-world-space transform matrix `iview` that will come in handy. **Step 3:** Your implementation of `Pathtracer::trace_pixel` must support super-sampling. The member `Pathtracer::n_samples` specifies the number of samples of scene radiance to evaluate per pixel. The starter code will hence call `Pathtracer::trace_pixel` one time for each sample, so your implementation of `Pathtracer::trace_pixel` should choose a new location within the pixel each time. To choose a sample within the pixel, you should implement `Rect::Uniform::sample` (see `src/student/samplers.cpp`), such that it provides (random) uniformly distributed 2D points within the rectangular region specified by the origin and the member `Rect::Uniform::size`. You may then create a `Rect::Uniform` sampler with a one-by-one region and call `sample()` to obtain randomly chosen offsets within the pixel. Once you have implemented `Pathtracer::trace_pixel`, `Rect::Uniform::sample`, and `Camera::generate_ray`, you should have a working camera. **Tip:** Since it'll be hard to know if your camera rays are correct until you implement primitive intersection, we recommend debugging your camera rays by checking what your implementation of `Camera::generate_ray` does with rays at the center of the screen (0.5, 0.5) and at the corners of the image. The code can log the results of raytracing for visualization and debugging. To do so, simply call the function `Pathtracer::log_ray` in your `Pathtracer::trace_pixel`. Function `Pathtracer::log_ray` takes in 3 arguments: the ray that you want to log, a float that specifies the distance to log that ray up to, and a color for the ray. If you don't pass a color, it will default to white. You should log only a portion of the generated rays, or else the result will be hard to interpret. To do so, you can add `if(RNG::coin_flip(0.0005f)) log_ray(out, 10.0f);` to log 0.05% of camera rays. Finally, you can visualize the logged rays by checking the box for Logged rays under Visualize and then **starting the render** (Open Render Window -> Start Render). After running the path tracer, rays will be shown as lines in the visualizer. Be sure to wait for rendering to complete so you see all rays while visualizing. ![logged_rays](new_results/log_rays.png) **Step 4:** `Camera` also includes the members `aperture` and `focal_dist`. These parameters are used to simulate the effects of de-focus blur and bokeh found in real cameras. Focal distance represents the distance between the camera aperture and the plane that is perfectly in focus. To use it, you must simply scale up the sensor position from step 2 (and hence ray direction) by `focal_dist` instead of leaving it on the `z = -1` plane. You might notice that this doesn't actually change anything about your result, since this is just scaling up a vector that is later normalized. However, now aperture comes in: by default, all rays start at a single point, representing a pinhole camera. But when `aperture > 0`, we want to randomly choose the ray origin from an `aperture`x`aperture` square centered at the origin and facing the camera direction (-Z). Then, we use this point as the starting point of the ray while keeping its sensor position fixed (consider how that changes the ray direction). 
Now it's as if the same image was taken from slightly off origin. This simulates real cameras with non-pinhole apertures: the final photo is equivalent to averaging images taken by pinhole cameras placed at every point in the aperture. Finally, we can see that non-zero aperture makes focal distance matter: objects on the focal plane are unaffected, since where the ray hits on the sensor is the same regardless of the ray's origin. However, rays that hit objects closer or farther than the focal distance will be able to \"see\" slightly different parts of the object based on the ray origin. Averaging over many rays within a pixel, this results in collecting colors from a region slightly larger than that pixel would cover given zero aperture, causing the object to become blurry. We are using a square aperture, so bokeh effects will reflect this. You can test aperture/focal distance by adjusting `aperture` and `focal_dist` using the camera UI and examining the logged rays. Once you have implemented primitive intersections and path tracing (tasks 3/5), you will be able to properly render `dof.dae`: ![depth of field test](new_results/dof.png) **Extra credit ideas:** * Write your own camera pixel sampler (replacing Rect::Uniform) that generates samples with improved distribution. Some examples include: * Jittered Sampling * Multi-jittered sampling * N-Rooks (Latin Hypercube) sampling * Sobol sequence sampling * Halton sequence sampling * Hammersley sequence sampling ", "url": "/pathtracer/camera_rays", "relUrl": "/pathtracer/camera_rays" },"5": { "doc": "Catmull-Clark Subdivision", "title": "Catmull-Clark Subdivision", "content": "# Catmull-Clark Subdivision For an in-practice example, see the [User Guide](/Scotty3D/guide/model). The only difference between Catmull-Clark and [linear](../linear) subdivision is the choice of positions for new vertices. Whereas linear subdivision simply takes a uniform average of the old vertex positions, Catmull-Clark uses a very carefully-designed _weighted_ average to ensure that the surface converges to a nice, round surface as the number of subdivision steps increases. The original scheme is described in the paper _\"Recursively generated B-spline surfaces on arbitrary topological meshes\"_ by (Pixar co-founder) Ed Catmull and James Clark. Since then, the scheme has been thoroughly discussed, extended, and analyzed; more modern descriptions of the algorithm may be easier to read, including those from the [Wikipedia](https://en.wikipedia.org/wiki/Catmull-Clark_subdivision_surface) and [this webpage](http://www.rorydriscoll.com/2008/08/01/catmull-clark-subdivision-the-basics/). In short, the new vertex positions can be calculated by: 1. setting the new vertex position at each face f to the average of all its original vertices (exactly as in linear subdivision), 2. setting the new vertex position at each edge e to the average of the new face positions (from step 1) and the original endpoint positions, and 3. setting the new vertex position at each vertex v to the weighted sum (Q + 2R + (n - 3)S) / n, where _n_ is the degree of vertex _v_ (i.e., the number of faces containing _v_), and * _Q_ is the average of all new face positions for faces containing _v_, * _R_ is the average of all original edge midpoints for edges containing _v_, and * _S_ is the original vertex position for vertex _v_. 
In other words, the new vertex positions are an \"average of averages.\" (Note that you _will_ need to divide by _n_ _both_ when computing _Q_ and _R_, _and_ when computing the final, weighted value---this is not a typo!) Apart from changing the way vertex positions are computed, there should be no difference in your implementation of linear and Catmull-Clark subdivision. This step should be implemented in the method `HalfedgeMesh::catmullclark_subdivide_positions` in `student/meshedit.cpp`. This subdivision rule **is not** required to support meshes with boundary, unless the implementer wishes to go above and beyond. ", "url": "/meshedit/global/catmull/", "relUrl": "/meshedit/global/catmull/" },"6": { "doc": "Dielectrics and Transmission", "title": "Dielectrics and Transmission", "content": "# Dielectrics and Transmission ## Fresnel Equations for a Dielectric The [Fresnel Equations](https://en.wikipedia.org/wiki/Fresnel_equations) (another [link](http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/freseq.html) here) describe the amount of reflection from a surface. The description below is an approximation for dielectric materials (materials that don't conduct electricity). In this assignment you're asked to implement a glass material, which is a dielectric. In the description below, n_i and theta_i refer to the index of refraction of the medium containing the incoming ray and the zenith angle of that ray to the surface of the new medium, while n_t and theta_t refer to the index of refraction of the new medium and the angle to the surface normal of the transmitted ray. The Fresnel equations state that reflection from a surface is a function of the surface's index of refraction, as well as the polarity of the incoming light. Since our renderer doesn't account for polarity, we'll apply a common approximation of averaging the reflectance of perpendicular- and parallel-polarized light: F_r = (1/2)(r_parallel^2 + r_perpendicular^2). The parallel and perpendicular terms are given by: r_parallel = (n_t cos(theta_i) - n_i cos(theta_t)) / (n_t cos(theta_i) + n_i cos(theta_t)) and r_perpendicular = (n_i cos(theta_i) - n_t cos(theta_t)) / (n_i cos(theta_i) + n_t cos(theta_t)). Therefore, for a dielectric material, the fraction of reflected light will be given by F_r, and the amount of transmitted light will be given by 1 - F_r. Alternatively, you may compute F_r using [Schlick's approximation](https://en.wikipedia.org/wiki/Schlick%27s_approximation). ## Distribution Function for Transmitted Light We described the BRDF for perfect specular reflection in class, however we did not discuss the distribution function for transmitted light. Since refraction \"spreads\" or \"condenses\" a beam, unlike perfect reflection, the radiance along the ray changes due to a refraction event. In your assignment you should use Snell's Law to compute the direction of refraction rays, and use the following distribution function to compute the radiance of transmitted rays. We refer you guys to Pharr, Jakob, and Humphreys's book [Physically Based Rendering](http://www.pbr-book.org/) for a derivation based on Snell's Law and the relation between incident and transmitted radiance. (But you are more than welcome to attempt a derivation on your own!) ", "url": "/pathtracer/dielectrics_and_transmission", "relUrl": "/pathtracer/dielectrics_and_transmission" },"7": { "doc": "Edge Flip Tutorial", "title": "Edge Flip Tutorial", "content": "# Edge Flip Tutorial Here we provide a step-by-step guide to implementing a simplified version of the _EdgeFlip_ operation for a pair of triangles---the final version, however, must be implemented for general polygons (i.e., any _n_-gon). The basic strategy for implementing the other local operations is quite similar to the procedure outlined below. 
**Note:** if you're not familiar with C++, you should definitely take a moment to learn about the [standard library class](http://en.cppreference.com/w/cpp/container/vector) `std::vector`, especially the method `push_back()`, which will make it easy to accumulate a list of pointers as you walk around a polygon, vertex, etc. We now consider the case of a triangle-triangle edge flip. ### PHASE 0: Draw a Diagram Suppose we have a pair of triangles (a,b,c) and (c,b,d). After flipping the edge (b,c), we should now have triangles (a,d,c) and (a,b,d). A good first step for implementing any local mesh operation is to draw a diagram that clearly labels all elements affected by the operation: Here we have drawn a diagram of the region around the edge both before and after the edge operation (in this case, \"flip\"), labeling each type of element (halfedge, vertex, edge, and face) from zero to the number of elements. It is important to include every element affected by the operation, thinking very carefully about which elements will be affected. If elements are omitted during this phase, everything will break---even if the code written in the two phases is correct! In this example, for instance, we need to remember to include the halfedges \"outside\" the neighborhood, since their \"twin\" pointers will be affected. ### PHASE I: Collect elements Once you've drawn your diagram, simply collect all the elements from the \"before\" picture. Give them the same names as in your diagram, so that you can debug your code by comparing with the picture. // HALFEDGES HalfedgeRef h0 = e0->halfedge(); HalfedgeRef h1 = h0->next(); HalfedgeRef h2 = h1->next(); HalfedgeRef h3 = h0->twin(); HalfedgeRef h4 = h3->next(); HalfedgeRef h5 = h4->next(); HalfedgeRef h6 = h1->twin(); HalfedgeRef h7 = h2->twin(); HalfedgeRef h8 = h4->twin(); HalfedgeRef h9 = h5->twin(); // VERTICES VertexRef v0 = h0->vertex(); VertexRef v1 = h3->vertex(); // ...you fill in the rest!... // EDGES EdgeRef e1 = h5->edge(); EdgeRef e2 = h4->edge(); // ...you fill in the rest!... // FACES FaceRef f0 = h0->face(); // ...you fill in the rest!... ### PHASE II: Allocate new elements If your edge operation requires new elements, now is the time to allocate them. For the edge flip, we don't need any new elements; but suppose that for some reason we needed a new vertex v4\\. At this point we would allocate the new vertex via VertexRef v4 = mesh.new_vertex(); (The name used for this new vertex should correspond to the label you give it in your \"after\" picture.) Likewise, new edges, halfedges, and faces can be allocated via the methods `mesh.new_edge()`, `mesh.new_halfedge()`, and `mesh.new_face()`. ### PHASE III: Reassign Elements Next, update the pointers for all the mesh elements that are affected by the edge operation. Be exhaustive! In other words, go ahead and specify every pointer for every element, even if it did not change. Once things are working correctly, you can always optimize by removing unnecessary assignments. But get it working correctly first! Correctness is more important than efficiency. // HALFEDGES h0->next() = h1; h0->twin() = h3; h0->vertex() = v2; h0->edge() = e0; h0->face() = f0; h1->next() = h2; h1->twin() = h7; h1->vertex() = v3; h1->edge() = e3; h1->face() = f0; // ...you fill in the rest!... // ...and don't forget about the \"outside\" elements!... h9->next() = h9->next(); // didn't change, but set it anyway! h9->twin() = h4; h9->vertex() = v1; h9->edge() = e1; h9->face() = h9->face(); // didn't change, but set it anyway! 
// VERTICES v0->halfedge() = h2; v1->halfedge() = h5; v2->halfedge() = h4; v3->halfedge() = h3; // EDGES e0->halfedge() = h0; //...you fill in the rest!... // FACES f0->halfedge() = h0; //...you fill in the rest!... ### PHASE IV: Delete unused elements If your edge operation eliminates elements, now is the best time to deallocate them: at this point, you can be sure that they are no longer needed. For instance, since we do not need the vertex allocated in PHASE II, we could write mesh.erase(v4); You should be careful that this mesh element is not referenced by any other element in the mesh. But if your \"before\" and \"after\" diagrams are correct, that should not be an issue! ### Design considerations The basic algorithm outlined above will handle most edge flips, but you should also think carefully about possible corner-cases. You should also think about other design issues, like \"how much should this operation cost?\" For instance, for this simple triangle-triangle edge flip it might be reasonable to: * Ignore requests to flip boundary edges (i.e., just return immediately if either neighboring face is a boundary loop). * Ignore requests to perform any edge flip that would make the surface non-manifold or otherwise invalidate the mesh. * Not add or delete any elements. Since there are the same number of mesh elements before and after the flip, you should only need to reassign pointers. * Perform only a constant amount of work -- the cost of flipping a single edge should **not** be proportional to the size of the mesh! Formally proving that your code is correct in all cases is challenging, but at least try to think about what could go wrong in degenerate cases (e.g., vertices of low degree, or very small meshes like a tetrahedron). The biggest challenge in properly implementing this type of local operation is making sure that all the pointers still point to the right place in the modified mesh, and will likely be the cause of most of your crashes! To help mitigate this, Scotty3D will automatically attempt to ``validate`` your mesh after each operation, and will warn you if it detects abnormalities. Note that it will still crash if you leave references to deleted mesh elements! ", "url": "/meshedit/local/edge_flip", "relUrl": "/meshedit/local/edge_flip" },"8": { "doc": "(Task 7) Environment Lighting", "title": "(Task 7) Environment Lighting", "content": "# (Task 7) Environment Lighting The final task of this assignment will be to implement a new type of light source: an infinite environment light. An environment light is a light that supplies incident radiance (really, the light intensity dPhi/dOmega) from all directions on the sphere. Rather than using a predefined collection of explicit lights, an environment light is a capture of the actual incoming light from some real-world scene; rendering using environment lighting can be quite striking. The intensity of incoming light from each direction is defined by a texture map parameterized by phi and theta, as shown below. ![envmap_figure](envmap_figure.jpg) In this task you need to implement the `Env_Map::sample` and `Env_Map::sample_direction` method in `student/env_light.cpp`. You'll start with uniform direction sampling to get things working, and then move to a more advanced implementation that uses **importance sampling** to significantly reduce variance in rendered images. ## Step 1: Uniform sampling To get things working, your first implementation of `Env_Map::sample` will be quite simple. 
You should generate a random direction on the sphere (**with uniform (1/4pi) probability with respect to solid angle**), convert this direction to coordinates (phi, theta) and then look up the appropriate radiance value in the texture map using **bilinear interpolation** (note: we recommend you begin with bilinear interpolation to keep things simple.) Since high dynamic range environment maps can be large files, we have not included them in the starter code repo. You can download a set of environment maps from this [link](http://15462.courses.cs.cmu.edu/fall2015content/misc/asst3_images/asst3_exr_archive.zip). You can designate rendering to use a particular environment map from the GUI: go to `layout` -> `new light` -> `environment map` -> `add`, and then select one of the environment maps that you have just downloaded. ![envmap_gui](envmap_gui.png) For more HDRIs for creative environment maps, check out [HDRIHAVEN](https://hdrihaven.com/) **Tips:** * You must write your own code to uniformly sample the sphere. * Check out the interface of `Env_Map` in `rays/env_light.h`. For `Env_Map`, the `image` field is the actual map being represented as a `HDR_Image`, which contains the pixels of the environment map and size of the environment texture. The interface for `HDR_Image` is in `util/hdr_image.h`. ## Step 2: Importance sampling the environment map Much like light in the real world, most of the energy provided by an environment light source is concentrated in the directions toward bright light sources. **Therefore, it makes sense to bias selection of sampled directions towards the directions for which incoming radiance is the greatest.** In this final task you will implement an importance sampling scheme for environment lights. For environment lights with large variation in incoming light intensities, good importance sampling will significantly improve the quality of renderings. The basic idea is that you will assign a probability to each pixel in the environment map based on the total flux passing through the solid angle it represents. A pixel with coordinate (phi, theta) subtends an area sin(theta) dtheta dphi on the unit sphere (where dtheta and dphi are the angles subtended by each pixel -- as determined by the resolution of the texture). Thus, the flux through a pixel is proportional to its radiance times sin(theta). (We only care about the relative flux through each pixel to create a distribution.) **Summing the fluxes for all pixels, then normalizing the values so that they sum to one, yields a discrete probability distribution for picking a pixel based on flux through its corresponding solid angle on the sphere.** The question is now how to sample from this 2D discrete probability distribution. We recommend the following process which reduces the problem to drawing samples from two 1D distributions, each time using the inversion method discussed in class: * Given the probability distribution p(phi, theta) for all pixels, compute the marginal probability distribution p(theta) for selecting a value theta from each row of pixels. * Given p(phi, theta) for any pixel, compute the conditional probability p(phi | theta). Given the marginal distribution for theta and the conditional distributions p(phi | theta) for environment map rows, it is easy to select a pixel as follows: 1. Use the inversion method to first select a \"row\" of the environment map according to p(theta). 2. Given this row, use the inversion method to select a pixel in the row according to p(phi | theta). **Here are a few tips:** * When computing areas corresponding to a pixel, use the value of theta at the pixel centers. 
* We recommend precomputing the joint distributions p(phi, theta) and marginal distributions p(theta) in the constructor of `Sampler::Sphere::Image` and storing the resulting values in fields `pdf`. See `rays/sampler.h`. * `Spectrum::luma()` returns the luminance (brightness) of a Spectrum. The probability of a pixel should be proportional to the product of its luminance and the solid angle it subtends. * `std::lower_bound` is your friend. Documentation is [here](https://en.cppreference.com/w/cpp/algorithm/lower_bound). ## Sample results for importance sampling: ennis.exr with 32 spp ![ennis](new_results/ennis32importance.png) uffiz.exr with 32 spp ![uffiz](new_results/uffiz32importance.png) field.exr with 1024 spp ![ennis](new_results/field1024importance.png) ", "url": "/pathtracer/environment_lighting", "relUrl": "/pathtracer/environment_lighting" },"9": { "doc": "GitHub Setup", "title": "GitHub Setup", "content": "# Github Setup Please do not use a public github fork of this repository! We do not want solutions to be public. You should work in your own private repo. We recommended creating a mirrored private repository with multiple remotes. The following steps go over how to achieve this. The easiest (but not recommended) way is to download a zip from GitHub and make a private repository from that. The main disadvantage with this is that whenever there is an update to the base code, you will have to re-download the zip and manually merge the differences into your code. This is a pain, and you already have a lot to do in 15462/662, so instead, let `git` take care of this cumbersome \"merging-updates\" task: 1. Clone Scotty3D normally - `git clone https://github.com/CMU-Graphics/Scotty3D.git` 2. Create a new private repository (e.g. `MyScotty3D`) - Do not initialize this repository - keep it completely empty. - Let's say your repository is now hosted here: `https://github.com/your_id/MyScotty3D.git` 3. Ensure that you understand the concept of `remote`s in git. - When you clone a git repository, the default remote is named 'origin' and set to the URL you cloned from. - We will set the `origin` of our local clone to point to `MyScotty3D.git`, but also have a remote called `sourcerepo` for the public `Scotty3D` repository. 4. Now go back to your clone of Scotty3D. This is how we add the private remote: - Since we cloned from the `CMU-Graphics/Scotty3D.git` repository, the current value of `origin` should be `https://github.com/CMU-Graphics/Scotty3D.git` - You can check this using `git remote -v`, which should show: ``` origin https://github.com/CMU-Graphics/Scotty3D.git (fetch) origin https://github.com/CMU-Graphics/Scotty3D.git (push) ``` - Rename `origin` to `sourcerepo`: - `git remote rename origin sourcerepo` - Add a new remote called `origin`: - `git remote add origin https://github.com/your_id/MyScotty3D.git` - We can now push the starter code to our private copy: - `git push origin -u master` 5. Congratulations! you have successfully _mirrored_ a git repository with all past commits intact. Let's see a case where this becomes very useful: we start doing an assignment and commit regularly to our private repo (our `origin`). Then the 15-462 staff push some new changes to the Scotty3D skeleton code. We now want to pull the changes from our `sourcerepo`. But, we don't want to mess up the changes we've added to our private copy. 
Here's where git comes to the rescue: - First commit all current changes to your `origin` - Run `git pull sourcerepo master` - this pulls all the changes from `sourcerepo` into your local folder - If there are files that differ in your `origin` and in the `sourcerepo`, git will attempt to automatically merge the changes. Git may create a \"merge\" commit for this. - Unfortunately, there may be merge conflicts. Git will handle as many merges as it can, and then will then tell you which files have conflicts that need manual resolution. You can resolve those conflicts in your text editor and create a new commit to complete the `merge` process. - After you have completed the merge, you now have all the updates locally. Push to your private origin to include the changes there too: - `git push origin master` ", "url": "/git/", "relUrl": "/git/" },"10": { "doc": "Global Operations", "title": "Global Operations", "content": "# Global Mesh Operations In addition to local operations on mesh connectivity, Scotty3D provides several global remeshing operations (as outlined in the [User Guide](/Scotty3D/guide/model)). Two different mechanisms are used to implement global operations: * _Repeated application of local operations._ Some mesh operations are most easily expressed by applying local operations (edge flips, etc.) to a sequence of mesh elements until the target output is achieved. A good example is [mesh simplification](simplify), which is a greedy algorithm that collapses one edge at a time. * _Global replacement of the mesh._ Other mesh operations are better expressed by temporarily storing new mesh elements in a simpler mesh data structure (e.g., an indexed list of faces) and completely re-building the halfedge data structure from this data. A good example is [Catmull-Clark subdivision](catmull), where every polygon must be simultaneously split into quadrilaterals. Note that in general there are no inter-dependencies among global remeshing operations (except that some of them require a triangle mesh as input, which can be achieved via the method `Halfedge_Mesh::triangulate`). ## Subdivision In image processing, we often have a low resolution image that we want to display at a higher resolution. Since we only have a few samples of the original signal, we need to somehow interpolate or _upsample_ the image. One idea would be to simply cut each pixel into four, leaving the color values unchanged, but this leads to a blocky appearance. Instead we might try a more sophisticated scheme (like bilinear or trilinear interpolation) that yields a smoother appearance. In geometry processing, one encounters the same situation: we may have a low-resolution polygon mesh that we wish to upsample for display, simulation, etc. Simply splitting each polygon into smaller pieces doesn't help, because it does nothing to alleviate blocky silhouettes or chunky features. Instead, we need an upsampling scheme that nicely interpolates or approximates the original data. Polygon meshes are quite a bit trickier than images, however, since our sample points are generally at _irregular_ locations, i.e., they are no longer found at regular intervals on a grid. Three subdivision schemes are supported by Scotty3D: [Linear](linear), [Catmull-Clark](catmull), and [Loop](loop). The first two can be used on any polygon mesh without boundary, and should be implemented via the global replacement strategy described above. Loop subdivision can be implemented using repeated application of local operations. 
For further details, see the linked pages. ## Performance All subdivision operations, as well as re-meshing and simplification, should complete almost instantaneously (no more than a second) on meshes of a few hundred polygons or fewer. If performance is worse than this, ensure that implementations are not repeatedly iterating over more elements than needed, or allocating/deallocating more memory than necessary. A useful debugging technique is to print out (or otherwise keep track of, e.g., via an integer counter or a profiler) the number of times basic methods like `Halfedge::next()` or `Halfedge_Mesh::new_vertex()` are called during a single execution of one of the methods; for most methods this number should be some reasonably small constant (no more than, say, 1000!) times the number of elements in the mesh. ", "url": "/meshedit/global/", "relUrl": "/meshedit/global/" },"11": { "doc": "User Guide", "title": "User Guide", "content": "# User Guide ## Modes and Actions The basic paradigm in Scotty3D is that there are six different _modes_, each of which lets you perform certain class of actions. For instance, in `Model` mode, you can perform actions associated with modeling, such as moving mesh elements and performing global mesh operations. When in `Animate` mode, you can perform actions associated with animation. Etc. Within a given mode, you can switch between actions by hitting the appropriate key; keyboard commands are listed below for each mode. Note that the input scheme may change depending on the mode. For instance, key commands in `Model` mode may result in different actions in `Render` mode. The current mode is displayed as the \"pressed\" button in the menu bar, and available actions are are detailed in the left sidebar. Note that some actions are only available when a model/element/etc. is selected. ## Global Navigation In all modes, you can move the camera around and select scene elements. Information about your selection will be shown in the left sidebar. The camera can be manipulated in three ways: - Rotate: holding shift, left-clicking, and dragging will orbit the camera about the scene. Holding middle click and dragging has the same effect. - Zoom: using the scroll wheel or scrolling on your trackpad will move the camera towards or away from its center. - Translate: right-clicking (or using multi-touch on a trackpad, e.g., two-finger click-and-drag) and dragging will move the camera around the scene. ## Global Preferences You can open the preferences window from the edit option in the menu bar. - Multisampling: this controls how many samples are used for MSAA when rendering scene objects in the Scotty3D interface. If your computer struggles to render complex scenes, try changing this to `1`. ## Global Undo As is typical, all operations on scene objects, meshes, etc. are un and re-doable using Control/Command-Z to undo and Control/Command-Y to redo. These actions are also available from the `Edit` option in the menu bar. ", "url": "/guide/", "relUrl": "/guide/" },"12": { "doc": "Halfedge Mesh", "title": "Halfedge Mesh", "content": "# Halfedge Mesh ## Geometric Data Structures Scotty3D uses a variety of geometric data structures, depending on the task. Some operations (e.g., ray tracing) use a simple list of triangles that can be compactly encoded and efficiently cached. For more sophisticated geometric tasks like mesh editing and sampling, a simple triangle list is no longer sufficient (or leads to unnecessarily poor asymptotic performance). 
Most actions in MeshEdit mode therefore use a topological data structure called a _halfedge mesh_ (also known as a _doubly-connected_ edge list), which provides a good tradeoff between simplicity and sophistication. ### The Halfedge Data Structure The basic idea behind the halfedge data structure is that, in addition to the usual vertices, edges, and faces that make up a polygonal mesh, we also have an entity called a _halfedge_ that acts like \"glue\" connecting the different elements. It is this glue that allows us to easily \"navigate\" the mesh, i.e., easily access mesh elements adjacent to a given element. In particular, there are two halfedges associated with each edge (see picture above). For an edge connecting two vertices i and j, one of its halfedges points from i to j; the other one points from j to i. In other words, we say that the two halfedges are _oppositely oriented_. One of the halfedges is associated with the face to the \"left\" of the edge; the other is associated with the face to the \"right.\" Each halfedge knows about the opposite halfedge, which we call its _twin_. It also knows about the _next_ halfedge around its face, as well as its associated edge, face, and vertex. In contrast, the standard mesh elements (vertices, edges, and faces) know only about _one_ of their halfedges. In particular: * a vertex knows about one of its \"outgoing\" halfedges, * an edge knows about one of its two halfedges, and * a face knows about one of the many halfedges circulating around its interior. In summary, we have the following relationships: | Mesh Element | Pointers | ------------ | ------------------------------ | Vertex | halfedge (just one) | Edge | halfedge (just one) | Face | halfedge (just one) | Halfedge | next, twin, vertex, edge, face | This list emphasizes that it is really the **halfedges** that connect everything up. An easy example is if we want to visit all the vertices of a given face. We can start at the face's halfedge, and follow the \"next\" pointer until we're back at the beginning. A more interesting example is visiting all the vertices adjacent to a given vertex v. We can start by getting its outgoing halfedge, then its twin, then its next halfedge; this final halfedge will also point out of vertex v, but it will point **toward** a different vertex than the first halfedge. By repeating this process, we can visit all the neighboring vertices: In some sense, a halfedge mesh is kind of like a supercharged linked list. For instance, the halfedges around a given face (connected by `next` pointers) form a sort of \"cyclic\" linked list, where the tail points back to the head. A nice consequence of the halfedge representation is that any valid halfedge mesh **must** be manifold and orientable. Scotty3D will therefore only produce manifold, oriented meshes as output (and will complain if the input does not satisfy these criteria). ### The `Halfedge_Mesh` Class The Scotty3D skeleton code already provides a fairly sophisticated implementation of the halfedge data structure, in the `Halfedge_Mesh` class (see `geometry/halfedge.h` and `geometry/halfedge.cpp`). Although the detailed implementation may appear a bit complicated, the basic interface is not much different from the abstract description given above. For instance, suppose we have a face f and want to print out the positions of all its vertices. 
We would write a routine like this: void printVertexPositions(FaceRef f) { HalfEdgeRef h = f->halfedge(); // get the first halfedge of the face do { VertexRef v = h->vertex(); // get the vertex of the current halfedge cout << v->pos << endl; // print the vertex position h = h->next(); // move to the next halfedge around the face } while (h != f->halfedge()); // keep going until we're back at the beginning } Notice that we refer to a face as a `FaceRef` rather than just a `Face`. You can think of a `Ref` as a kind of _pointer_. Note that members of an iterator are accessed with an arrow `->` rather than a dot `.`, just as with pointers. (A more in-depth explanation of some of these details can be found in the inline documentation.) Similarly, to print out the positions of all the neighbors of a given vertex we could write a routine like this: void printNeighborPositions(VertexRef v) { HalfEdgeRef h = v->halfedge(); // get one of the outgoing halfedges of the vertex do { HalfEdgeRef h_twin = h->twin(); // get the twin of the current halfedge VertexRef vN = h_twin->vertex(); // vertex is 'source' of the half edge. // so h->vertex() is v, // whereas h_twin->vertex() is the neighbor vertex. cout << vN->pos << endl; // print the neighbor vertex position h = h_twin->next(); // move to the next outgoing halfedge of the vertex. } while(h != v->halfedge()); // keep going until we're back at the beginning } To iterate over **all** the vertices in a halfedge mesh, we could write a loop like this: for(VertexRef v = mesh.vertices_begin(); v != mesh.vertices_end(); v++) { printNeighborPositions(v); // do something interesting here } Internally, the lists of vertices, edges, faces, and halfedges are stored as **linked lists**, which allows us to easily add or delete elements to our mesh. For instance, to add a new vertex we can write VertexRef v = mesh.new_vertex(); Likewise, to delete a vertex we can write mesh.erase(v); Note, however, that one should be **very, very careful** when adding or deleting mesh elements. New mesh elements must be properly linked to the mesh -- for instance, this new vertex must point to one of its associated halfedges by writing something like v->halfedge() = h; Likewise, if we delete a mesh element, we must be certain that no existing elements still point to it; the halfedge data structure does not take care of these relationships for you automatically. In fact, that is exactly the point of this assignment: to get some practice directly manipulating the halfedge data structure. Being able to perform these low-level manipulations will enable you to write useful and interesting mesh code far beyond the basic operations in this assignment. The `Halfedge_Mesh` class provides a helper function called `validate` that checks whether the mesh iterators are valid. You might find it worthwhile calling this function to debug your implementation (please note that `validate` only checks that your mesh is valid - passing it does not imply that your specific operation is correct). Finally, the **boundary** of the surface (e.g., the ankles and waist of a pair of pants) requires special care in our halfedge implementation. At first glance, it would seem that the routine `printNeighborPositions()` above might break if the vertex `v` is on the boundary, because at some point we worry that we have no `twin()` element to visit. Fortunately, our implementation has been designed to avoid this kind of catastrophe. In particular, rather than having an actual hole in the mesh, we create a \"virtual\" boundary face whose edges are all the edges of the boundary loop. 
This way, we can iterate over boundary elements just like any other mesh element. If we ever need to check whether an element is on the boundary, we have the methods: Vertex::on_boundary() Edge::on_boundary() Face::is_boundary() Halfedge::is_boundary() These methods return true if and only if the element is contained in the domain boundary. Additionally, we store an explicit list of boundary faces, which we can iterate over like any other type of mesh element: for(FaceRef b = mesh.boundaries_begin(); b != mesh.boundaries_end(); b++) { // do something interesting with this boundary loop } These virtual faces are not stored in the usual face list, i.e., they will not show up when iterating over faces. The figure below should help to further explain the behavior of `Halfedge_Mesh` for surfaces with boundary: Dark blue regions indicate interior faces, whereas light blue regions indicate virtual boundary faces. Note that for vertices and edges, ``on_boundary()`` will return true if the element is attached to a boundary face, but ``is_boundary()`` for halfedges is only true if the halfedge is 'inside' the boundary face. For example, in the figure above the region ``b`` is a virtual boundary face, which means that vertex ``v'``, edge ``e'``, and halfedge ``h'`` are all part of the boundary; their methods will return true. In contrast, vertex ``v``, edge ``e``, face `f`, and halfedge `h` are not part of the boundary, and their methods will return false. Notice also that the boundary face b is a polygon with 12 edges. _Note:_ _the edge degree and face degree of a boundary vertex are not the same!_ Notice, for instance, that vertex `v'` is contained in three edges but only two interior faces. By convention, `Vertex::degree()` returns the face degree, not the edge degree. The edge degree can be computed by finding the face degree, and adding 1 if the vertex is a boundary vertex. Please refer to the inline comments (e.g. of `geometry/halfedge.h`) for further details about the `Halfedge_Mesh` data structure. ", "url": "/meshedit/halfedge", "relUrl": "/meshedit/halfedge" },"13": { "doc": "Environment Light Importance Sampling", "title": "Environment Light Importance Sampling", "content": "# Environment Light Importance Sampling A pixel with coordinate (phi, theta) subtends an area sin(theta) dtheta dphi on the unit sphere (where dtheta and dphi are the angles subtended by each pixel -- as determined by the resolution of the texture). Thus, the flux through a pixel is proportional to its radiance times sin(theta). (We only care about the relative flux through each pixel to create a distribution.) **Summing the fluxes for all pixels, then normalizing the values so that they sum to one, yields a discrete probability distribution for picking a pixel based on flux through its corresponding solid angle on the sphere.** The question is now how to sample from this 2D discrete probability distribution. We recommend the following process which reduces the problem to drawing samples from two 1D distributions, each time using the inversion method discussed in class: * Given the probability distribution p(phi, theta) for all pixels, compute the marginal probability distribution p(theta) for selecting a value theta from each row of pixels. * Given p(phi, theta) for any pixel, compute the conditional probability p(phi | theta). Given the marginal distribution for theta and the conditional distributions p(phi | theta) for environment map rows, it is easy to select a pixel as follows: 1. Use the inversion method to first select a \"row\" of the environment map according to p(theta). 2. Given this row, use the inversion method to select a pixel in the row according to p(phi | theta). 
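The following standalone sketch shows one way to organize this precomputation and the two inversions with `std::lower_bound`. The struct name `EnvDist`, the `weight` callback, and the flattened-CDF layout are illustrative assumptions rather than a required interface; in Scotty3D the same logic would typically live in the environment map sampler described on the environment lighting task page. ```
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative 2D distribution over environment map pixels.
// weight(x, y) should return luminance * sin(theta) for pixel (x, y),
// and every row is assumed to have a nonzero total weight.
struct EnvDist {
    size_t w, h;
    std::vector<float> cond_cdf;     // h rows of w entries: CDF of p(phi | theta)
    std::vector<float> marginal_cdf; // h entries: CDF of p(theta)

    template<typename F>
    EnvDist(size_t w_, size_t h_, F weight) : w(w_), h(h_), cond_cdf(w_ * h_), marginal_cdf(h_) {
        float total = 0.0f;
        for(size_t y = 0; y < h; y++) {
            float row_sum = 0.0f;
            for(size_t x = 0; x < w; x++) {
                row_sum += weight(x, y);
                cond_cdf[y * w + x] = row_sum;       // running (unnormalized) row CDF
            }
            for(size_t x = 0; x < w; x++) cond_cdf[y * w + x] /= row_sum;
            total += row_sum;
            marginal_cdf[y] = total;                 // running (unnormalized) marginal CDF
        }
        for(size_t y = 0; y < h; y++) marginal_cdf[y] /= total;
    }

    // u1, u2 are independent uniform random numbers in [0, 1).
    std::pair<size_t, size_t> sample(float u1, float u2) const {
        // Inversion 1: pick a row according to p(theta).
        size_t y = (size_t)(std::lower_bound(marginal_cdf.begin(), marginal_cdf.end(), u1) - marginal_cdf.begin());
        if(y >= h) y = h - 1; // guard against floating point round-off
        // Inversion 2: pick a column within that row according to p(phi | theta).
        auto row_begin = cond_cdf.begin() + y * w;
        size_t x = (size_t)(std::lower_bound(row_begin, row_begin + w, u2) - row_begin);
        if(x >= w) x = w - 1;
        return {x, y};
    }
};
``` To use the sample, convert the chosen pixel back to a (phi, theta) direction, evaluating theta at the pixel center. 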
", "url": "/pathtracer/importance_sampling", "relUrl": "/pathtracer/importance_sampling" },"14": { "doc": "Home", "title": "Home", "content": "![15-462 F20 Renders](results/me_f20_crop.png) # Scotty3D Welcome to Scotty3D! This 3D graphics software package includes components for interactive mesh editing, realistic path tracing, and dynamic animation. Implementing functionality in each of these areas constitutes the majority of the coursework for 15-462/662 (Computer Graphics) at Carnegie Mellon University These pages describe how to set up and use Scotty3D. Start here! - [Git Setup](git): create a private git mirror that can pull changes from Scotty3D. - [Building Scotty3D](build): build and run Scotty3D on various platforms. - [User Guide](guide): learn the intended functionality for end users. The developer manual describes what you must implement to complete Scotty3D. It is organized under the three main components of the software: - [MeshEdit](meshedit) - [PathTracer](pathtracer) - [Animation](animation) ## Project Philosophy Welcome to your first day of work at Scotty Industries! Over the next few months you will implement core features in Scotty Industries' flagship product Scotty3D, which is a modern package for 3D modeling, rendering, and animation. In terms of basic structure, this package doesn't look much different from \"real\" 3D tools like Maya, Blender, modo, or Houdini. Your overarching goal is to use the developer manual to implement a package that works as described in the [User Guide](guide), much as you would at a real software company (more details below). Note that the User Guide is **not** an Assignment Writeup. The User Guide contains only instructions on how to use the software, and serves as a high-level specification of _what the software should do_. The Developer Guide contains information about the internals of the code, i.e., _how the software works_. This division is quite common in software development: there is a **design specification** or \"design spec\", and an **implementation** that implements that spec. Also, as in the real world, the design spec does _not_ necessarily specify every tiny detail of how the software should behave! Some behaviors may be undefined, and some of these details are left up to the party who implements the specification. A good example you have already seen is OpenGL, which defines some important rules about how rasterization should behave, but is not a \"pixel-exact\" specification. In other words, two different OpenGL implementations from two different vendors (Intel and NVIDIA, say) may produce images that differ by a number of pixels. Likewise, in this assignment, your implementation may differ from the implementation of your classmates in terms of the exact values it produces, or the particular collection of corner-cases it handles. However, as a developer you should strive to provide a solution that meets a few fundamental criteria: * [Failing gracefully](https://en.wikipedia.org/wiki/Fault_tolerance) is preferable to failing utterly---for instance, if a rare corner case is difficult to handle, it is far better to simply refuse to perform the operation than to let the program crash! * Your implementation should follow the [principle of least surprise](https://en.wikipedia.org/wiki/Principle_of_least_astonishment). A user should be able to expect that things behave more or less as they are described in the User Guide. 
* You should not use an algorithm whose performance is [asymptotically worse](https://en.wikipedia.org/wiki/Asymptotic_computational_complexity) just because it makes your code easier to write (for instance, using [bubble sort](https://en.wikipedia.org/wiki/Bubble_sort) rather than [merge sort](https://en.wikipedia.org/wiki/Merge_sort) on large data sets). * That being said, when it comes to performance, [premature optimization is the root of all evil!](https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize) The only way to know whether an optimization matters is to [measure performance](https://en.wikipedia.org/wiki/Profiling_(computer_programming)), and understand [bottlenecks](https://en.wikipedia.org/wiki/Program_optimization#Bottlenecks). * Finally, you should take pride in your craft. Beautiful things just tend to work better. Just to reiterate the main point above: **As in real-world software development, we will not specify every little detail about how methods in this assignment should work!** If you encounter a tough corner case (e.g., \"how should edge flip behave for a tetrahedron\"), we want you to _think about what a good **design choice** might be_, and implement it to the best of your ability. This activity is part of becoming a world-class developer. However, we are more than happy to discuss good design choices with you, and you should also feel free to discuss these choices with your classmates. Practically speaking, it is ok for routines to simply show an error if they encounter a rare and difficult corner case---as long as it does not interfere with successful operation of the program (i.e., if it does not crash or yield bizarre behavior). Your main goal here above all else should be to develop an _effective tool for modeling, rendering, and animation_. ", "url": "/", "relUrl": "/" },"15": { "doc": "(Task 2) Intersections", "title": "(Task 2) Intersections", "content": "# (Task 2) Intersecting Objects Now that your ray tracer generates camera rays, we need to be able to answer the core query in ray tracing: \"does this ray hit this object?\" Here, you will start by implementing ray-object intersection routines against the two types of objects in the starter code: triangles and spheres. Later, we will use a BVH to accelerate these queries, but for now we consider an intersection test against a single object. First, take a look at `rays/object.h` for the interface of the `Object` class. An `Object` can be **either** a `Tri_Mesh`, a `Shape`, a BVH (which you will implement in Task 3), or a list of `Objects`. Right now, we are only dealing with `Tri_Mesh`'s case and `Shape`'s case, and their interfaces are in `rays/tri_mesh.h` and `rays/shapes.h`, respectively. `Tri_Mesh` contains a BVH of `Triangle`, and in this task you will be working with the `Triangle` class. For `Shape`, you are going to work with `Sphere`s, which is the major type of `Shape` in Scotty3D. Now, you need to implement the `hit` routine for both `Triangle` and `Sphere`. `hit` takes in a ray, and returns a `Trace` structure, which contains information on whether the ray hits the object and, if it hits, information describing the surface at the point of the hit. See `rays/trace.h` for the definition of `Trace`. In order to correctly implement `hit` you need to understand some of the fields in the Ray structure defined in `lib/ray.h`. 
* `point`: represents the 3D point of origin of the ray * `dir`: represents the 3D direction of the ray (this direction will be normalized) * `time_bounds`: corresponds to the minimum and maximum points on the ray, with its x-component as the lower bound and y-component as the upper bound. That is, intersections that lie outside the [`ray.time_bounds.x`, `ray.time_bounds.y`] range should not be considered valid intersections with the primitive. One important detail of the Ray structure is that `time_bounds` is a mutable field of the Ray. This means that this field can be modified by constant member functions such as `Triangle::hit`. When finding the first intersection of a ray and the scene, you almost certainly want to update the ray's `time_bounds` value after finding each hit with scene geometry. By bounding the ray as tightly as possible, your ray tracer will be able to avoid unnecessary tests with scene geometry that is known to not be able to result in a closest hit, resulting in higher performance. --- ### **Step 1: Intersecting Triangles** The first intersect routine is the `hit` routine for the triangle mesh in `student/tri_mesh.cpp`. While faster implementations are possible, we recommend you implement ray-triangle intersection using the method described in the [lecture slides](http://15462.courses.cs.cmu.edu/fall2017/lecture/acceleratingqueries). Further details of implementing this method efficiently are given in [these notes](ray_triangle_intersection.md). There are two important details you should be aware of about intersection: * When finding the first-hit intersection with a triangle, you need to fill in the `Trace` structure with details of the hit. The structure should be initialized with: * `hit`: a boolean representing if there is a hit or not * `time`: the ray's _t_-value of the hit point * `position`: the exact position of the hit point. This can be easily computed from the `time` above together with the ray's `point` and `dir`. * `normal`: the normal of the surface at the hit point. This normal should be the interpolated normal (obtained via interpolation of the per-vertex normals according to the barycentric coordinates of the hit point) Once you've successfully implemented triangle intersection, you will be able to render many of the scenes in the media directory. However, your ray tracer will be very slow! While you are working with `student/tri_mesh.cpp`, you should implement `Triangle::bbox` as well, which is important for Task 3. ### **Step 2: Intersecting Spheres** You also need to implement the `hit` routine for the `Sphere` class in `student/shapes.cpp`. Remember that your intersection tests should respect the ray's `time_bounds`. Because spheres always represent closed surfaces, you should not flip back-facing normals as you did with triangles. Note: take care **not** to use the `Vec3::normalize()` method when computing your normal vector. You should instead use `Vec3::unit()`, since `Vec3::normalize()` will actually modify the `Vec3` object it is called on rather than returning a normalized version. --- [Visualization of normals](visualization_of_normals.md) might be very helpful with debugging. ", "url": "/pathtracer/intersecting_objects", "relUrl": "/pathtracer/intersecting_objects" },"16": { "doc": "Layout", "title": "Layout", "content": "# Layout This is the main scene editing mode in Scotty3D, and does not contain tasks for the student to implement. 
This mode allows you to load full scenes from disk, create or load new objects, export your scene (COLLADA format), and edit transformations that place each object into your scene. ## Creating Objects There are three ways to add objects to your scene: - `Import New Scene`: clears the current scene (!) and replaces it with objects loaded from a file on disk. - `Import Objects`: loads objects from a file, adding them to the current scene. - `New Object`: creates a new object from a choice of various platonic solids. To save your scene to disk (including all meshes and their transformations) use the `Export Scene` option. Scotty3D supports loading objects from the following file formats: - dae (COLLADA) - obj - fbx - gltf / glb - 3ds - stl - blend - ply Scotty3D only supports exporting scenes to COLLADA. ## Managing Objects Left clicking on or enabling the check box of your object under `Select an Object` will select it. Information about that object's transformation will appear under `Edit Object` beneath the \"Select an Object\" options. Under `Edit Object`, you may directly edit the values of the object's position, rotation (X->Y->Z Euler angles), and scale. Note that clicking and dragging on the values will smoothly scale them, and Control/Command-clicking on the value will let you edit it as text. You can also edit the transformation using the `Move`, `Rotate`, and `Scale` tools. One of these options is always active. This determines the transformation widgets that appear at the origin of the object model. - `Move`: click and drag on the red (X), green (Y), or blue (Z) arrow to move the object along the X/Y/Z axis. Click and drag on the red (YZ), green (XZ), or blue (XY) squares to move the object in the YZ/XZ/XY plane. - `Rotate`: click and drag on the red (X), green (Y), or blue (Z) loop to rotate the object about the X/Y/Z axis. Note that these rotations are applied relative to the current pose, so they do not necessarily correspond to smooth transformations of the X/Y/Z Euler angles. - `Scale`: click and drag on the red (X), green (Y), or blue(Z) block to scale the object about the X/Y/Z axis. Again note that this scale is applied relative to the current pose. Finally, you may remove the object from the scene by pressing `Delete` or hitting the Delete key. You may swap to `Model` mode with this mesh selected by pressing `Edit Mesh`. Note that if this mesh is non-manifold, this option will not appear. ## Key Bindings | Key | Command | :-------------------: | :--------------------------------------------: | `m` | Use the `Move` tool. | `r` | Use the `Rotate` tool. | `s` | Use the `Scale` tool. | `delete` | Delete the currently selected object. | ## Demo ", "url": "/guide/layout_mode/", "relUrl": "/guide/layout_mode/" },"17": { "doc": "Linear Subdivision", "title": "Linear Subdivision", "content": "# Linear Subdivision For an in-practice example, see the [User Guide](/Scotty3D/guide/model). Unlike most other global remeshing operations, linear (and Catmull-Clark) subdivision will proceed by completely replacing the original halfedge mesh with a new one. The high-level procedure is: 1. Generate a list of vertex positions for the new mesh. 2. Generate a list of polygons for the new mesh, as a list of indices into the new vertex list (a la \"polygon soup\"). 3. Using these two lists, rebuild the halfedge connectivity from scratch. 
Given these lists, `Halfedge_Mesh::from_poly` will take care of allocating halfedges, setting up `next` and `twin` pointers, etc., based on the list of polygons generated in step 2---this routine is already implemented in the Scotty3D skeleton code. Both linear and Catmull-Clark subdivision schemes will handle general _n_-gons (i.e., polygons with _n_ sides) rather than, say, quads only or triangles only. Each _n_-gon (including but not limited to quadrilaterals) will be split into _n_ quadrilaterals according to the following template: The high-level procedure is outlined in greater detail in `student/meshedit.cpp`. ### Vertex Positions For global linear or Catmull-Clark subdivision, the strategy for assigning new vertex positions may at first appear a bit strange: in addition to updating positions at vertices, we will also calculate vertex positions associated with the _edges_ and _faces_ of the original mesh. Storing new vertex positions on edges and faces will make it extremely convenient to generate the polygons in our new mesh, since we can still use the halfedge data structure to decide which four positions get connected up to form a quadrilateral. In particular, each quad in the new mesh will consist of: * one new vertex associated with a face from the original mesh, * two new vertices associated with edges from the original mesh, and * one vertex from the original mesh. For linear subdivision, the rules for computing new vertex positions are very simple: * New vertices at original faces are assigned the average coordinates of all corners of that face (i.e., the arithmetic mean). * New vertices at original edges are assigned the average coordinates of the two edge endpoints. * New vertices at original vertices are assigned the same coordinates as in the original mesh. These values should be assigned to the members `Face::new_pos`, `Edge::new_pos`, and `Vertex::new_pos`, respectively. For instance, `f->new_pos = Vec3( x, y, z );` will assign the coordinates (x,y,z) to the new vertex associated with face `f`. The general strategy for assigning these new positions is to iterate over all vertices, then all edges, then all faces, assigning appropriate values to `new_pos`. **Note:** you _must_ copy the original vertex position `Vertex::pos` to the new vertex position `Vertex::new_pos`; these values will not be used automatically. This step should be implemented in the method `Halfedge_Mesh::linear_subdivide_positions` in `student/meshedit.cpp`. Steps 2 and 3 are already implemented by `Halfedge_Mesh::subdivide` in `geometry/halfedge.cpp`. For your understanding, an explanation of how these are implemented is provided below: ### Polygons Recall that in linear and Catmull-Clark subdivision _all polygons are subdivided simultaneously_. In other words, if we focus on the whole mesh (rather than a single polygon), then we are globally * creating one new vertex for each edge, * creating one new vertex for each face, and * keeping all the vertices of the original mesh. These vertices are then connected up to form quadrilaterals (_n_ quadrilaterals for each _n_-gon in the input mesh). Rather than directly modifying the halfedge connectivity, these new quads will be collected in a much simpler mesh data structure: a list of polygons. Note that with this subdivision scheme, _every_ polygon in the output mesh will be a quadrilateral, even if the input contains triangles, pentagons, etc. 
In Scotty3D, a list of polygons can be declared as std::vector<std::vector<Index>> quads; where `std::vector` is a [class from the C++ standard template library](http://en.cppreference.com/w/cpp/container/vector), representing a dynamically-sized array. An `Index` is just another name for a `size_t`, which is the standard C++ type for integers that specify an element of an array. Polygons can be created by allocating a list of appropriate size, then specifying the indices of each vertex in the polygon. For example: std::vector<Index> quad( 4 ); // allocate an array with four elements // Build a quad with vertices specified by integers (a,b,c,d), starting at zero. // These indices should correspond to the indices computed when assigning vertex // positions, as described above. quad[0] = a; quad[1] = b; quad[2] = c; quad[3] = d; Once a quad has been created, it can be added to the list of quads by using the method `vector::push_back`, which appends an item to a vector: std::vector<std::vector<Index>> newPolygons; newPolygons.push_back( quad ); The full array of new polygons will then be passed to the method `Halfedge_Mesh::from_poly`, together with the new vertex positions. ", "url": "/meshedit/global/linear/", "relUrl": "/meshedit/global/linear/" },"18": { "doc": "Local Operations", "title": "Local Operations", "content": "# Local Mesh Operations Many of the actions that need to be implemented in the MeshEdit mode are local mesh operations (like edge collapse, face bevel, etc.). A good recipe for ensuring that all pointers are still valid after a local remeshing operation is: 1. Draw a picture of all the elements (vertices, edges, faces, halfedges) that will be needed from the original mesh, and all the elements that should appear in the modified mesh. 2. Allocate any new elements that are needed in the modified mesh, but do not appear in the original mesh. 3. For every element in the \"modified\" picture, set **all** of its pointers -- even if they didn't change. For instance, for each halfedge, make sure to set `next`, `twin`, `vertex`, `edge`, and `face` to the correct values in the new (modified) picture. For each vertex, make sure to set its `halfedge` pointer. Etc. A convenience method `Halfedge::set_neighbors()` has been created for this purpose. 4. Deallocate any elements that are no longer used in the modified mesh, which can be done by calling `Halfedge_Mesh::erase()`. The reason for setting all the pointers (and not just the ones that changed) is that it is very easy to miss a pointer, causing your code to crash. ### Interface with global mesh operations To facilitate user interaction, as well as global mesh processing operations (described below), local mesh operations should return the following values when possible. However, should it happen that the specified values are not available, or that the operation should not work on the given input, we need a way to signify the failure case. To do so, each local operation actually returns a ``std::optional`` value parameterized on the type of element it returns. For example, ``Halfedge_Mesh::erase_vertex`` returns a ``std::optional<Halfedge_Mesh::FaceRef>``, since on success it yields the face that replaces the erased vertex. An ``optional`` can hold a value of the specified type, or, similarly to a pointer, a null value (``std::nullopt``). See ``student/meshedit.cpp`` for specific examples. Also, remember that in any case, _the program should not crash!_ So for instance, you should never return a pointer to an element that was deleted. See the [User Guide](/Scotty3D/guide/model) for demonstrations of each local operation. 
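To make the failure case concrete, a local operation might signal it like this (a minimal sketch, not the actual starter-code implementation; the boundary check is just one possible policy):

```cpp
std::optional<Halfedge_Mesh::FaceRef> Halfedge_Mesh::erase_edge(Halfedge_Mesh::EdgeRef e) {
    // Refuse to operate on boundary edges rather than producing a broken mesh.
    if (e->on_boundary()) return std::nullopt;

    Halfedge_Mesh::FaceRef f = e->halfedge()->face(); // the face we will keep

    // ... reconnect halfedges into f, re-point vertices and edges, and erase e,
    //     its two halfedges, and the other face here ...

    return f; // success: hand back the merged face
}
```

Callers can then check the returned optional and simply skip the operation (rather than crashing) when it is `std::nullopt`.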
* `Halfedge_Mesh::flip_edge` - should return the edge that was flipped ![](flip_edge.svg) * `Halfedge_Mesh::split_edge` - should return the inserted vertex ![](split_edge.svg) * `Halfedge_Mesh::collapse_edge` - should return the new vertex, corresponding to the collapsed edge ![](collapse_edge.svg) * `Halfedge_Mesh::collapse_face` - should return the new vertex, corresponding to the collapsed face ![](collapse_face.svg) * `Halfedge_Mesh::erase_vertex` - should return the new face, corresponding to the faces originally containing the vertex ![](erase_vertex.svg) * `Halfedge_Mesh::erase_edge` - should return the new face, corresponding to the faces originally containing the edge ![](erase_edge.svg) * `Halfedge_Mesh::bevel_vertex` - should return the new face, corresponding to the beveled vertex ![](bevel_vertex.svg) * `Halfedge_Mesh::bevel_edge` - should return the new face, corresponding to the beveled edge ![](bevel_edge.svg) * `Halfedge_Mesh::bevel_face` - should return the new, inset face ![](bevel_face.svg) ", "url": "/meshedit/local/", "relUrl": "/meshedit/local/" },"19": { "doc": "Loop Subdivision", "title": "Loop Subdivision", "content": "# Loop Subdivision For an in-practice example, see the [User Guide](/Scotty3D/guide/model). Loop subdivision (named after [Charles Loop](http://charlesloop.com/)) is a standard approximating subdivision scheme for triangle meshes. At a high level, it consists of two basic steps: 1. Split each triangle into four by connecting edge midpoints (sometimes called \"4-1 subdivision\"). 2. Update vertex positions as a particular weighted average of neighboring positions. The 4-1 subdivision looks like this: ![4-1 Subdivision](loop_41.png) And the following picture illustrates the weighted average: ![Loop subdivision weights](loop_weights.png) In words, the new position of an old vertex is (1 - n*u) times the old position, plus u times the sum of the positions of all of its neighbors, where n is the number of neighboring vertices and u is the weight shown in the figure (u = 3/16 when n = 3, and u = 3/(8n) otherwise). The new position for a newly created vertex v that splits Edge AB and is flanked by opposite vertices C and D across the two faces connected to AB in the original mesh will be 3/8 * (A + B) + 1/8 * (C + D). If we repeatedly apply these two steps, we will converge to a fairly smooth approximation of our original mesh. We will implement Loop subdivision as the `Halfedge_Mesh::loop_subdivide()` method. In contrast to linear and Catmull-Clark subdivision, Loop subdivision **must** be implemented using the local mesh operations described above (simply because it provides an alternative perspective on subdivision implementation, which can be useful in different scenarios). In particular, 4-1 subdivision can be achieved by applying the following strategy: 1. Split every edge of the mesh _in any order whatsoever_. 2. Flip any new edge that touches a new vertex and an old vertex. The following pictures (courtesy Denis Zorin) illustrate this idea: ![Subdivision via flipping](loop_flipping.png) Notice that only blue (and not black) edges are flipped in this procedure; as described above, edges in the split mesh should be flipped if and only if they touch both an original vertex _and_ a new vertex (i.e., a midpoint of an original edge). When working with dynamic mesh data structures (like a halfedge mesh), one must think **very carefully** about the order in which mesh elements are processed---it is quite easy to delete an element at one point in the code, then try to access it later (typically resulting in a crash!). 
For instance, suppose we write a loop like this: // iterate over all edges in the mesh for (EdgeRef e = mesh.edges_begin(); e != mesh.edges_end(); e++) { if (some condition is met) { mesh.split_edge(e); } } Although this routine looks straightforward, it can very easily crash! The reason is fairly subtle: we are iterating over edges in the mesh by incrementing the iterator `e` (via the expression `e++`). But since `split_edge()` is allowed to create and delete mesh elements, it might deallocate the edge pointed to by `e` before we increment it! To be safe, one should instead write a loop like this: // iterate over all edges in the mesh int n = mesh.n_edges(); EdgeRef e = mesh.edges_begin(); for (int i = 0; i < n; i++) { // get the next edge NOW! EdgeRef nextEdge = e; nextEdge++; // now, even if splitting the edge deletes it... if (some condition is met) { mesh.split_edge(e); } // ...we still have a valid reference to the next edge. e = nextEdge; } Note that this loop is just a representative example, the implementer must consider which elements might be affected by a local mesh operation when writing such loops. We recommend ensuring that your atomic edge operations provide certain guarantees. For instance, if the implementation of `Halfedge_Mesh::flip_edge()` guarantees that no edges will be created or destroyed (as it should), then you can safely do edge flips inside a loop without worrying about these kinds of side effects. For Loop subdivision, there are some additional data members that will make it easy to keep track of the data needed to update the connectivity and vertex positions. In particular: * `Vertex::new_pos` can be used as temporary storage for the new position (computed via the weighted average above). Note that one should _not_ change the value of `Vertex::pos` until _all_ the new vertex positions have been computed -- otherwise, subsequent computation will take averages of values that have already been averaged! * Likewise, `Edge::new_pos` can be used to store the position of the vertices that will ultimately be inserted at edge midpoints. Again, these values should be computed from the original values (before subdivision), and applied to the new vertices only at the very end. The `Edge::new_pos` value will be used for the position of the vertex that will appear along the old edge after the edge is split. We precompute the position of the new vertex before splitting the edges and allocating the new vertices because it is easier to traverse the simpler original mesh to find the positions for the weighted average that determines the positions of the new vertices. * `Vertex::is_new` can be used to flag whether a vertex was part of the original mesh, or is a vertex newly inserted by subdivision (at an edge midpoint). * `Edge::is_new` likewise flags whether an edge is a piece of an edge in the original mesh, or is an entirely new edge created during the subdivision step. Given this setup, we strongly suggest that it will be easiest to implement subdivision according to the following \"recipe\" (though the implementer is of course welcome to try doing things a different way!). The basic strategy is to _first_ compute the new vertex positions (storing the results in the `new_pos` members of both vertices and edges), and only _then_ update the connectivity. Doing it this way will be much easier, since traversal of the original (coarse) connectivity is much simpler than traversing the new (fine) connectivity. In more detail: 1. 
Mark all vertices as belonging to the original mesh by setting `Vertex::is_new` to `false` for all vertices in the mesh. 2. Compute updated positions for all vertices in the original mesh using the vertex subdivision rule, and store them in `Vertex::new_pos`. 3. Compute new positions associated with the vertices that will be inserted at edge midpoints, and store them in `Edge::new_pos`. 4. Split every edge in the mesh, being careful about how the loop is written. In particular, you should make sure to iterate only over edges of the original mesh. Otherwise, the loop will keep splitting edges that you just created! 5. Flip any new edge that connects an old and new vertex. 6. Finally, copy the new vertex positions (`Vertex::new_pos`) into the usual vertex positions (`Vertex::pos`). It may be useful to ensure `Halfedge_Mesh::split_edge()` will now return an iterator to the newly inserted vertex, and particularly that the halfedge of this vertex will point along the edge of the original mesh. This iterator is useful because it can be used to (i) flag the vertex returned by the split operation as a new vertex, and (ii) flag each outgoing edge as either being new or part of the original mesh. (In other words, Step 4 is a great time to set the members `is_new` for vertices and edges created by the split. It is also a good time to copy the `new_pos` field from the edge being split into the `new_pos` field of the newly inserted vertex.) We recommend implementing this algorithm in stages, e.g., _first_ see if you can correctly update the connectivity, _then_ worry about getting the vertex positions right. Some examples below illustrate the correct behavior of the algorithm. This subdivision rule **is not** required to support meshes with boundary, unless the implementer wishes to go above and beyond. ", "url": "/meshedit/global/loop/", "relUrl": "/meshedit/global/loop/" },"20": { "doc": "(Task 6) Materials", "title": "(Task 6) Materials", "content": "# (Task 6) Materials Now that you have implemented the ability to sample more complex light paths, it's finally time to add support for more types of materials (other than the fully Lambertian material that you have implemented in Task 5). In this task you will add support for two types of materials: a perfect mirror and glass (a material featuring both specular reflection and transmittance) in `student/bsdf.cpp`. To get started, take a look at the BSDF interface in `rays/bsdf.h`. There are a number of key methods you should understand in the `BSDF` class: * `Spectrum evaluate(Vec3 out_dir, Vec3 in_dir)`: evaluates the distribution function for a given pair of directions. * `BSDF_Sample sample(Vec3 out_dir)`: given the `out_dir`, generates a random sample of the in-direction (which may be a reflection direction or a refracted transmitted light direction). It returns a `BSDF_Sample`, which contains the in-direction (`direction`), its probability (`pdf`), as well as the `attenuation` for this pair of directions. (You do not need to worry about the `emissive` for the materials that we are asking you to implement, since those materials do not emit light.) There are also two helper functions in the BSDF class in `student/bsdf.cpp` that you will need to implement: * `Vec3 reflect(Vec3 dir)` returns a direction that is the **perfect specular reflection** direction corresponding to `dir` (reflection of `dir` about the normal, which in the surface coordinate space is [0,1,0]). 
More detail about specular reflection is [here](http://15462.courses.cs.cmu.edu/fall2015/lecture/reflection/slide_028). * `Vec3 refract(Vec3 out_dir, float index_of_refraction, bool& was_internal)` returns the ray that results from refracting the ray in `out_dir` about the surface according to [Snell's Law](http://15462.courses.cs.cmu.edu/fall2015/lecture/reflection/slide_032). The surface's index of refraction is given by the argument `index_of_refraction`. Your implementation should assume that if the ray in `out_dir` **is entering the surface** (that is, if `cos(out_dir, N=[0,1,0]) > 0`) then the ray is currently in vacuum (index of refraction = 1.0). If `cos(out_dir, N=[0,1,0]) < 0`, then the ray is exiting the surface, and your code should assume it is traveling from inside the material (with the given `index_of_refraction`) back into vacuum. The `was_internal` flag should be set to true when total internal reflection occurs, in which case there is no refracted ray. ## Step 1 Implement the class `BSDF_Mirror` which represents a material with perfect specular reflection (a perfect mirror). You should implement `BSDF_Mirror::sample`, `BSDF_Mirror::evaluate`, and `reflect`. **(Hint: what should the pdf sampled by `BSDF_Mirror::sample` be? What should the reflectance function `BSDF_Mirror::evaluate` be?)** ## Step 2 Implement the class `BSDF_Glass` which is a glass-like material that both reflects and transmits light. As discussed in class, the fraction of light that is reflected and transmitted through glass is given by the dielectric Fresnel equations. Specifically, your implementation should: * Implement `refract` to add support for refracted ray paths. * Implement `BSDF_Refract::sample` as well as `BSDF_Glass::sample`. Your implementation should use the Fresnel equations to compute the fraction of reflected light and the fraction of transmitted light. The returned ray sample should be either a reflection ray or a refracted ray, with the probability of which type of ray to use for the current path proportional to the Fresnel reflectance. (e.g., if the Fresnel reflectance is 0.9, then you should generate a reflection ray 90% of the time. What should the pdf be in this case?) Note that you can also use [Schlick's approximation](https://en.wikipedia.org/wiki/Schlick's_approximation) instead. * You should read the notes below on the Fresnel equations as well as on how to compute a transmittance BSDF. ### Dielectrics and Transmission ### Fresnel Equations for Dielectric The [Fresnel Equations](https://en.wikipedia.org/wiki/Fresnel_equations) (another [link](http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/freseq.html) here) describe the amount of reflection from a surface. The description below is an approximation for dielectric materials (materials that don't conduct electricity). In this assignment you're asked to implement a glass material, which is a dielectric. In the description below, n_i and θ_i refer to the index of refraction of the medium containing the incoming ray and the zenith angle of that ray to the surface of the new medium; n_t and θ_t refer to the index of refraction of the new medium and the angle of the transmitted ray to the surface normal. The Fresnel equations state that reflection from a surface is a function of the surface's index of refraction, as well as the polarity of the incoming light. Since our renderer doesn't account for polarity, we'll apply a common approximation of averaging the reflectance of perpendicular and parallel polarized light: F_r = 1/2 * (r_parallel^2 + r_perpendicular^2). The parallel and perpendicular terms are given by: r_parallel = (n_t * cos(θ_i) - n_i * cos(θ_t)) / (n_t * cos(θ_i) + n_i * cos(θ_t)) and r_perpendicular = (n_i * cos(θ_i) - n_t * cos(θ_t)) / (n_i * cos(θ_i) + n_t * cos(θ_t)). Therefore, for a dielectric material, the fraction of reflected light will be given by F_r, and the amount of transmitted light will be given by 1 - F_r. 
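For reference, here is a minimal sketch of that reflectance computation as a standalone helper (the name `fresnel_dielectric` and its signature are hypothetical, not part of the starter code):

```cpp
// Average of the squared parallel and perpendicular Fresnel terms for a dielectric.
// cos_i and cos_t are the (non-negative) cosines of the incident and transmitted angles;
// n_i and n_t are the indices of refraction of the incident and transmitted media.
float fresnel_dielectric(float cos_i, float cos_t, float n_i, float n_t) {
    float r_par  = (n_t * cos_i - n_i * cos_t) / (n_t * cos_i + n_i * cos_t);
    float r_perp = (n_i * cos_i - n_t * cos_t) / (n_i * cos_i + n_t * cos_t);
    return 0.5f * (r_par * r_par + r_perp * r_perp);
}
```

A caller would still need to handle total internal reflection separately, since in that case no valid transmitted angle exists and all light is reflected.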
Alternatively, you may compute F_r using [Schlick's approximation](https://en.wikipedia.org/wiki/Schlick%27s_approximation). ### Distribution Function for Transmitted Light We described the BRDF for perfect specular reflection in class; however, we did not discuss the distribution function for transmitted light. Since refraction \"spreads\" or \"condenses\" a beam, unlike perfect reflection, the radiance along the ray changes due to a refraction event. In your assignment you should use Snell's Law to compute the direction of refraction rays, and use a distribution function for transmitted rays that accounts for this change in radiance (in addition to the transmitted fraction 1 - F_r, the transmitted radiance is scaled by the squared ratio of the two indices of refraction). We refer you to Pharr, Jakob, and Humphreys's book [Physically Based Rendering](http://www.pbr-book.org/) for a derivation based on Snell's Law and the relation between radiance and the index of refraction. (But you are more than welcome to attempt a derivation on your own!) When you are done, you will be able to render images like these: ", "url": "/pathtracer/materials", "relUrl": "/pathtracer/materials" },"21": { "doc": "Model", "title": "Model", "content": "# Model When in `Model` mode, Scotty3D provides a polygon-based 3D modeler with basic subdivision capabilities. The central modeling paradigm is \"box modeling\", i.e., starting with a simple cube, you can add progressively more detail to produce interesting 3D shapes. You can also use _subdivision_ to get smooth approximations of these shapes. MeshEdit supports four basic actions on mesh elements (move, rotate, scale, and bevel), plus a collection of local and global mesh editing commands. Note that MeshEdit (and more broadly, Scotty3D) will only operate on meshes that are _manifold_ (i.e., the union of faces containing any given vertex _v_ is a topological disk). Likewise, all mesh operations in Scotty3D will preserve the manifold property, i.e., manifold input will always get mapped to manifold output. This property is key for ensuring that many algorithms in Scotty3D are \"well-behaved\", and that it always produces nice output for other programs to use. If you load a mesh that is non-manifold, you can still use it in your scene and render with it, but editing will not be supported. ### Editing Mesh Elements In `Model` mode you can inspect mesh elements by left-clicking on vertices, edges, faces, and halfedges. Information about these elements will be shown in the left sidebar. In this mode you can change the geometry (i.e., the shape) of the mesh by transforming mesh elements in the same way you can transform scene objects. Note that the transformation widget again has three modes of operation, which you can toggle through by pressing the `r` key. - `Move`: click and drag on the red (X), green (Y), or blue (Z) arrow to move the object along the X/Y/Z axis. Click and drag on the red (YZ), green (XZ), or blue (XY) squares to move the object in the YZ/XZ/XY plane. - `Rotate`: click and drag on the red (X), green (Y), or blue (Z) loop to rotate the object about the X/Y/Z axis. Note that these rotations are applied relative to the current pose, so they do not necessarily correspond to smooth transformations of the X/Y/Z Euler angles. - `Scale`: click and drag on the red (X), green (Y), or blue (Z) block to scale the object about the X/Y/Z axis. Again note that this scale is applied relative to the current pose. ![selecting an edge](model_select.png) ### Beveling The bevel action creates a new copy of the selected element that is inset and offset from the original element. 
Clicking and dragging on an element will perform a bevel; the horizontal motion of the cursor controls the amount by which the new element shrinks or expands relative to the original element, and the vertical motion of the cursor controls the amount by which the new element is offset (in the normal direction) from the original element. It is important to note that a new element will be created upon click _even if no inset or offset is applied_. Therefore, if you're not careful you may end up with duplicate elements that are not immediately visible. (To check, you can drag one of the vertices in `Move` mode.) There are three possible types of bevels: - Vertex Bevel: The selected vertex _v_ is replaced by a face _f_ whose vertices are connected to the edges originally incident on _v_. The new face is inset (i.e., shrunken or expanded) by a user-controllable amount. - Edge Bevel: The selected edge _e_ is replaced by a face _f_ whose vertices are connected to the edges originally incident on the endpoints of _e_. The new face is inset and offset by some user-controllable amount, as with the vertex bevel. - Face Bevel: The selected face _f_ is replaced by a new face _g_, as well as a ring of faces around _g_, such that the vertices of _g_ connect to the original vertices of _f_. The new face is inset and offset by some user-controllable amount. ### Local Connectivity Editing In addition to beveling, a variety of commands can be used to alter the connectivity of the mesh (for instance, splitting or collapsing edges). These commands are applied by selecting a mesh element (in any mode) and pressing the appropriate key, as listed below. Local mesh editing operations include: - Erase Vertex: The selected vertex _v_ together with all incident edges and faces will be replaced with a single face _f_ that is the union of all faces originally incident on _v_. - Erase Edge: The selected edge _e_ will be replaced with the union of the faces containing it, producing a new face (if _e_ is a boundary edge, nothing happens). - Edge Collapse: The selected edge _e_ is replaced by a single vertex _v_. This vertex is connected by edges to all vertices previously connected to either endpoint of _e_. Moreover, if either of the polygons containing _e_ was a triangle, it will be replaced by an edge (rather than a degenerate polygon with only two edges). - Face Collapse: The selected face _f_ is replaced by a single vertex _v_. All edges previously connected to vertices of _f_ are now connected directly to _v_. - Edge Flip: The selected edge _e_ is \"rotated\" around the face, in the sense that each endpoint moves to the next vertex (in counter-clockwise order) along the boundary of the two polygons containing _e_. - Edge Split: [Note: this method is for triangle meshes only!] The selected edge _e_ is split at its midpoint, and the new vertex _v_ is connected to the two opposite vertices (or one in the case of a surface with boundary). ### Global Mesh Processing A number of commands can be used to create a more global change in the mesh (e.g., subdivision or simplification). These commands can be applied by pressing the appropriate sidebar button with a mesh selected. Note that in scenes with multiple meshes (e.g., those used by the path tracer), this command will be applied only to the selected mesh. - Triangulate: Each polygon is split into triangles. - Linear Subdivision: Each polygon in the selected mesh is split into quadrilaterals by inserting a vertex at the center of the polygon and connecting it to the midpoints of all its edges. 
New vertices are placed at the average of old vertices so that, e.g., flat faces stay flat, and old vertices remain where they were. - Catmull-Clark Subdivision: _[Note: this method is for meshes without boundary only!]_ Just as with linear subdivision, each polygon is split into quadrilaterals, but this time the vertex positions are updated according to the [Catmull-Clark subdivision rules](https://en.wikipedia.org/wiki/Catmull_Clark_subdivision_surface), ultimately generating a nice rounded surface. - Loop Subdivision: _[Note: this method is for triangle meshes without boundary only!]_ Each triangle is split into four by connecting the edge midpoints. Vertex positions are updated according to the [Loop subdivision rules](https://en.wikipedia.org/wiki/Loop_subdivision_surface). - Isotropic Remeshing: _[Note: this method is for triangle meshes only!]_ The mesh is resampled so that triangles all have roughly the same size and shape, and vertex valence is close to regular (i.e., about six edges incident on every vertex). - Simplification _[Note: this method is for triangle meshes only!]_ The number of triangles in the mesh is reduced by a factor of about four, aiming to preserve the appearance of the original mesh as closely as possible. ### Key Bindings | Key | Command | :-------------------: | :--------------------------------------------: | `c` | Center the camera on the current element. | `m` | Use the `Move` tool. | `r` | Use the `Rotate` tool. | `s` | Use the `Scale` tool. | `b` | Use the `Bevel` tool. | `v` | Select the current halfedge's vertex | `e` | Select the current halfedge's edge | `f` | Select the current halfedge's face | `t` | Select the current halfedge's twin | `n` | Select the current halfedge's next | `h` | Select the current element's halfedge | `delete` | Erase the currently selected vertex or edge. | ", "url": "/guide/model_mode/", "relUrl": "/guide/model_mode/" },"22": { "doc": "A4: Animation", "title": "A4: Animation", "content": "# Animation Overview There are four primary components that must be implemented to support Animation functionality. **A4.0** - [(Task 1) Spline Interpolation](splines) - [(Task 2) Skeleton Kinematics](skeleton_kinematics) **A4.5** - [(Task 3) Linear Blend Skinning](skinning) - [(Task 4) Particle Simulation](particles) Each task is described at the linked page. ## Converting Frames to Video Additionally, we will ask you to create your own animation. Once you've rendered out each frame of your animation, you can combine them into a video by using: `ffmpeg -r 30 -f image2 -s 640x360 -pix_fmt yuv420p -i ./%4d.png -vcodec libx264 out.mp4` You may want to change the default `30` and `640x360` to the frame rate and resolution you chose to render at. If you don't have ffmpeg installed on your system, you can get it through most package managers, or you can [download it directly](https://ffmpeg.org/download.html). Alternatively, you may use your preferred video editing tool. ", "url": "/animation/", "relUrl": "/animation/" },"23": { "doc": "A2: MeshEdit", "title": "A2: MeshEdit", "content": "# MeshEdit Overview MeshEdit is the first major component of Scotty3D, which performs 3D modeling, subdivision, and mesh processing. When implementation of this tool is completed, it will enable the user to transform a simple cube model into beautiful, organic 3D surfaces described by high-quality polygon meshes. 
This tool can import, modify, and export industry-standard COLLADA files, allowing Scotty3D to interact with the broader ecosystem of computer graphics software. The `media/` subdirectory of the project contains a variety of meshes and scenes on which the implementation may be tested. The simple `cube.dae` input should be treated as the primary test case -- when properly implemented MeshEdit contains all of the modeling tools to transform this starting mesh into a variety of functional and beautiful geometries. For further testing, a collection of other models is also included in this directory, but it is not necessarily reasonable to expect every algorithm to be effective on every input. The implementer must use judgement in selecting meaningful test inputs for the algorithms in MeshEdit. The following sections contain guidelines for implementing the functionality of MeshEdit: - [Halfedge Mesh](halfedge) - [Local Mesh Operations](local) - [Tutorial: Edge Flip](local/edge_flip) - [Beveling](local/bevel) - [Global Mesh Operations](global) - [Triangulation](global/triangulate) - [Linear Subdivision](global/linear) - [Catmull-Clark Subdivision](global/catmull) - [Loop Subdivision](global/loop) - [Isotropic Remeshing](global/remesh) - [Simplification](global/simplify) As always, be mindful of the [project philosophy](..). ", "url": "/meshedit/", "relUrl": "/meshedit/" },"24": { "doc": "A3: Pathtracer", "title": "A3: Pathtracer", "content": "# PathTracer Overview PathTracer is (as the name suggests) a simple path tracer that can render scenes with global illumination. The first part of the assignment will focus on providing an efficient implementation of **ray-scene geometry queries**. In the second half of the assignment you will **add the ability to simulate how light bounces around the scene**, which will allow your renderer to synthesize much higher-quality images. Much like in MeshEdit, input scenes are defined in COLLADA files, so you can create your own scenes to render using Scotty3D or other free software like [Blender](https://www.blender.org/). Implementing the functionality of PathTracer is split into 7 tasks, and here are the instructions for each of them: - [(Task 1) Generating Camera Rays](camera_rays) - [(Task 2) Intersecting Objects](intersecting_objects) - [(Task 3) Bounding Volume Hierarchy](bounding_volume_hierarchy) - [(Task 4) Shadow Rays](shadow_rays) - [(Task 5) Path Tracing](path_tracing) - [(Task 6) Materials](materials) - [(Task 7) Environment Lighting](environment_lighting) The files that you will work with for PathTracer are all under the `src/student` directory. Some of the particularly important ones are outlined below. Methods that we expect you to implement are marked with \"TODO (PathTracer)\", which you may search for. You are also provided with some very useful debugging tools in `src/student/debug.h` and `src/student/debug.cpp`. Please read the comments in those two files to learn how to use them effectively. | File(s) | Purpose | Need to modify? |----------|-------------------|------------------| `student/pathtracer.cpp` | This is the main workhorse class. Inside the ray tracer class everything begins with the method `Pathtracer::trace_pixel` in pathtracer.cpp. This method computes the value of the specified pixel in the output image. | Yes | `student/camera.cpp` | You will need to modify `Camera::generate_ray` in Part 1 of the assignment to generate the camera rays that are sent out into the scene. 
| Yes | `student/tri_mesh.cpp`, `student/shapes.cpp` | Scene objects (e.g., triangles and spheres) are instances of the `Object` class interface defined in `rays/object.h`. You will need to implement the `bbox` and intersect routine `hit` for both triangles and spheres. | Yes |`student/bvh.inl`|A major portion of the first half of the assignment concerns implementing a bounding volume hierarchy (BVH) that accelerates ray-scene intersection queries. Note that a BVH is also an instance of the Object interface (A BVH is a scene object that itself contains other primitives.)|Yes|`rays/light.h`|Describes lights in the scene. The initial starter code has working implementations of directional lights and constant hemispherical lights.|No|`lib/spectrum.h`|Light energy is represented by instances of the Spectrum class. While it's tempting, we encourage you to avoid thinking of spectrums as colors -- think of them as a measurement of energy over many wavelengths. Although our current implementation only represents spectrums by red, green, and blue components (much like the RGB representations of color you've used previously in this class), this abstraction makes it possible to consider other implementations of spectrum in the future. Spectrums can be converted into a vector using the `Spectrum::to_vec` method.| No|`student/bsdf.cpp`|Contains implementations of several BSDFs (diffuse, mirror, glass). For each, you will define the distribution of the BSDF and write a method to sample from that distribution.|Yes|`student/samplers.cpp`|When implementing raytracing and environment light, we often want to sample randomly from a hemisphere, uniform grid, or sphere. This file contains various functions that simulate such random sampling.|Yes| ", "url": "/pathtracer/", "relUrl": "/pathtracer/" },"25": { "doc": "Particles", "title": "Particles", "content": "# Particle Simulation And now for something completely different: physics simulation for particles. ## Ray traced physics ", "url": "/animation/particles", "relUrl": "/animation/particles" },"26": { "doc": "(Task 5) Path Tracing", "title": "(Task 5) Path Tracing", "content": "# (Task 5) Path Tracing Up to this point, your renderer simulates light which begins at a source, bounces off a surface, and hits a camera. However, in the real world, light can take much more complicated paths, bouncing off many surfaces before eventually reaching the camera. Simulating this multi-bounce light is referred to as _indirect illumination_, and it is critical to producing realistic images, especially when specular surfaces are present. In this task you will modify your ray tracer to simulate multi-bounce light, adding support for indirect illumination. You must modify `Pathtracer::trace_ray` to simulate multiple bounces. We recommend using the [Russian Roulette](http://15462.courses.cs.cmu.edu/spring2020/lecture/montecarloraytracing/slide_044) algorithm discussed in class. The basic structure will be as follows: * (1) Randomly select a new ray direction using `bsdf.sample` (which you will implement in Step 2) * (2) Potentially terminate the path (using Russian roulette) * (3) Recursively trace the ray to evaluate the weighted reflectance contribution due to light from this direction. Remember to respect the maximum number of bounces from `max_depth` (which is a member of class `Pathtracer`). Don't forget to add in the BSDF emissive component! 
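To make that structure concrete, here is a heavily simplified, schematic sketch of the recursive step (not complete code: `direct_lighting`, `hit_point`, `random_float`, and `make_ray` are placeholders rather than the actual starter-code API, and shadow rays, coordinate-space transforms, and other details are omitted):

```cpp
// Schematic fragment of the body of a trace_ray-style function.
Spectrum radiance = direct_lighting;                  // from Task 4

BSDF_Sample s = bsdf.sample(out_dir);                 // (1) pick a new direction
radiance += s.emissive;                               // emissive term of the surface

float cos_theta = std::abs(s.direction.y);            // normal is [0,1,0] in surface space
Spectrum weight = s.attenuation * (cos_theta / s.pdf);

Spectrum new_throughput = ray.throughput * weight;    // apply this step's factors first...
float keep = std::min(1.0f, new_throughput.luma());   // ...then derive the keep probability

if (ray.depth < max_depth && random_float() < keep) { // (2) Russian roulette
    Ray bounce = make_ray(hit_point, s.direction);    // hypothetical ray construction
    bounce.throughput = new_throughput * (1.0f / keep);
    radiance += weight * trace_ray(bounce) * (1.0f / keep);  // (3) recurse, reweighted
}
return radiance;
```

Dividing the recursive contribution by the keep probability is what keeps the Russian roulette estimator unbiased even though some paths are terminated early.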
## Step 2 Implement `BSDF_Lambertian::sample` for diffuse reflections, which randomly samples a direction from a uniform hemisphere distribution and returns a `BSDF_Sample`. Note that the interface is in `rays/bsdf.h`. Task 6 contains further discussion of sampling BSDFs; reading ahead may help your understanding. The implementation of `BSDF_Lambertian::evaluate` is already provided to you. Note: * When adding the recursive term to the total radiance, you will need to account for emissive materials, like the ceiling light in the Cornell Box (cbox.dae). To do this, simply add the BSDF sample's emissive term to your total radiance, i.e. `L += sample.emissive`. * Functions in `student/samplers.cpp` from class `Sampler` contain helper functions for random sampling, which you will use for sampling. Our starter code uses uniform hemisphere sampling `Samplers::Hemisphere::Uniform sampler` (see `rays/bsdf.h` and `student/samplers.cpp`), which is already implemented. You are welcome to implement Cosine-Weighted Hemisphere sampling for extra credit, but it is not required. If you want to implement Cosine-Weighted Hemisphere sampling, fill in `Hemisphere::Cosine::sample` in `student/samplers.cpp` and then change `Samplers::Hemisphere::Uniform sampler` to `Samplers::Hemisphere::Cosine sampler` in `rays/bsdf.h`. --- After correctly implementing path tracing, your renderer should be able to make a beautifully lit picture of the Cornell Box. Below is the rendering result at 1024 samples per pixel. ![cornell_lambertian](new_results/lambertian.png) Note the time-quality tradeoff here. With these commandline arguments, your path tracer will be running with 8 worker threads at a sample rate of 1024 camera rays per pixel, with a max ray depth of 4. This will produce an image with relatively high quality but will take quite some time to render. Rendering a high quality image will take a very long time as indicated by the image sequence below, so start testing your path tracer early! Below are the results and runtimes of rendering the Cornell Box with different numbers of samples per pixel at 640 by 430 on a MacBook Pro (3.1 GHz Dual-Core Intel Core i5). ![spheres](new_results/timing.png) Also note that if you have enabled Russian Roulette, your result may seem noisier, but should complete faster. The point of Russian roulette is not to increase sample quality, but to allow the computation of more samples in the same amount of time, resulting in a higher quality result. Here are a few tips: * The path termination probability should be computed based on the [overall throughput](http://15462.courses.cs.cmu.edu/fall2015/lecture/globalillum/slide_044) of the path. The throughput of the ray is recorded in its `throughput` member, which represents the multiplicative factor the current radiance will be affected by before contributing to the final pixel color. Hence, you should both use and update this field. To update it, simply multiply in the rendering equation factors: BSDF attenuation, `cos(theta)`, and (inverse) BSDF PDF. Remember to apply the coefficients from the current step before deriving the termination probability. Finally, note that the updated throughput should be copied to the recursive ray for later steps. Keep in mind that delta function BSDFs can take on values greater than one, so clamping termination probabilities derived from BSDF values to 1 is wise. 
* To convert a Spectrum to a termination probability, we recommend you use the luminance (overall brightness) of the Spectrum, which is available via `Spectrum::luma` * We've given you some [pretty good notes](http://15462.courses.cs.cmu.edu/fall2015/lecture/globalillum/slide_047) on how to do this part of the assignment, but it can still be tricky to get correct. ", "url": "/pathtracer/path_tracing", "relUrl": "/pathtracer/path_tracing" },"27": { "doc": "Ray Sphere Intersection", "title": "Ray Sphere Intersection", "content": "# Ray Sphere Intersection ", "url": "/pathtracer/ray_sphere_intersection", "relUrl": "/pathtracer/ray_sphere_intersection" },"28": { "doc": "Ray Triangle Intersection", "title": "Ray Triangle Intersection", "content": "# Ray Triangle Intersection We recommend that you implement the *Moller-Trumbore algorithm*, a fast algorithm that takes advantage of a barycentric coordinates parameterization of the intersection point, for ray-triangle intersection. A few final notes and thoughts: If the denominator _dot((e1 x d), e2)_ is zero, what does that mean about the relationship of the ray and the triangle? Can a triangle with this area be hit by a ray? Given _u_ and _v_, how do you know if the ray hits the triangle? Don't forget that the intersection point on the ray should be within the ray's `time_bound`. ", "url": "/pathtracer/ray_triangle_intersection", "relUrl": "/pathtracer/ray_triangle_intersection" },"29": { "doc": "Isotropic Remeshing", "title": "Isotropic Remeshing", "content": "# Isotropic Remeshing For an in-practice example, see the [User Guide](/Scotty3D/guide/model). Scotty3D also supports remeshing, an operation that keeps the number of samples roughly the same while improving the shape of individual triangles. The isotropic remeshing algorithm tries to make the mesh as \"uniform\" as possible, i.e., triangles as close as possible to equilateral triangles of equal size, and vertex degrees as close as possible to 6 (note: this algorithm is for **triangle meshes only**). The algorithm to be implemented is based on the paper [Botsch and Kobbelt, \"A Remeshing Approach to Multiresolution Modeling\"](http://graphics.uni-bielefeld.de/publications/disclaimer.php?dlurl=sgp04.pdf) (Section 4), and can be summarized in just a few simple steps: 1. If an edge is too long, split it. 2. If an edge is too short, collapse it. 3. If flipping an edge improves the degree of neighboring vertices, flip it. 4. Move vertices toward the average of their neighbors. Repeating this simple process several times typically produces a mesh with fairly uniform triangle areas, angles, and vertex degrees. However, each of the steps deserves slightly more explanation. ### Edge Splitting / Collapsing Ultimately we want all of our triangles to be about the same size, which means we want edges to all have roughly the same length. As suggested in the paper by Botsch and Kobbelt, we will aim to keep our edges no longer than 4/3rds of the **mean** edge length _L_ in the input mesh, and no shorter than 4/5ths of _L_. In other words, if an edge is longer than 4L/3, split it; if it is shorter than 4L/5, collapse it. We recommend performing all of the splits first, then doing all of the collapses (though as usual, you should be careful to think about when and how mesh elements are being allocated/deallocated). ### Edge Flipping We want to flip an edge any time it reduces the total deviation from regular degree (degree 6). 
In particular, let _a1_, _a2_ be the degrees of the two endpoints of an edge that we're thinking about flipping, and let _b1_, _b2_ be the degrees of the two vertices across from this edge. The total deviation in the initial configuration is `|a1-6| + |a2-6| + |b1-6| + |b2-6|`. You should be able to easily compute the deviation after the edge flip **without actually performing the edge flip**; if this number decreases, then the edge flip should be performed. We recommend flipping all edges in a single pass, after the edge collapse step. ### Vertex Averaging Finally, we also want to optimize the geometry of the vertices. A very simple heuristic is that a mesh will have reasonably well-shaped elements if each vertex is located at the center of its neighbors. To keep your code clean and simple, we recommend using the method `Vertex::neighborhood_center()`, which computes the average position of the vertex's neighbors. Note that you should not use this to immediately replace the current position: we don't want to be taking averages of vertices that have already been averaged. Doing so can yield some bizarre behavior that depends on the order in which vertices are traversed (if you're interested in learning more about this issue, Google around for the terms \"Jacobi iterations\" and \"Gauss-Seidel\"). So, the code should (i) first compute the new positions (stored in `Vertex::new_pos`) for all vertices using their neighborhood centroids, and (ii) _then_ update the vertices with new positions (copy `new_pos` to `pos`). How exactly should the positions be updated? One idea is to simply replace each vertex position with its centroid. We can make the algorithm slightly more stable by moving _gently_ toward the centroid, rather than immediately snapping the vertex to the center. For instance, if _p_ is the original vertex position and _c_ is the centroid, we might compute the new vertex position as _q_ = _p_ + _w_(_c_ - _p_) where _w_ is some weighting factor between 0 and 1 (we use 1/5 in the examples below). In other words, we start out at _p_ and move a little bit in the update direction _v_ = _c_ - _p_. Another important issue arises when the update direction _v_ has a large _normal_ component: in that case we'll end up pushing the surface in or out, rather than just sliding our sample points around on the surface. As a result, the shape of the surface will change much more than we'd like (try it!). To ameliorate this issue, we will move the vertex only in the _tangent_ direction, which we can do by projecting out the normal component, i.e., by replacing _v_ with _v_ - dot(_N_,_v_)_N_, where _N_ is the unit normal at the vertex. To get this normal, you will implement the method `Vertex::normal()`, which computes the vertex normal as the area-weighted average of the incident triangle normals. In other words, at a vertex i the normal points in the direction of the sum of A_ijk N_ijk over all incident triangles ijk, where A_ijk is the area of triangle ijk, and N_ijk is its unit normal; this quantity can be computed directly by just taking the cross product of two of the triangle's edge vectors (properly oriented). ### Implementation The final implementation requires very little information beyond the description above; the basic recipe is: 1. Compute the mean edge length _L_ of the input. 2. Split all edges that are longer than 4L/3. 3. Collapse all edges that are shorter than 4L/5. 4. Flip all edges that decrease the total deviation from degree 6. 5. Compute the centroids for all the vertices. 6. Move each vertex in the tangent direction toward its centroid. 
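To make steps 5 and 6 concrete, here is a minimal sketch of one smoothing pass. It assumes the halfedge-mesh vertex iterators (`vertices_begin()`/`vertices_end()`) and the `Vertex` members mentioned above; the weight `w` and the loop structure are just one reasonable choice, not the required implementation:
// (i) compute, but do not yet apply, the smoothed positions
float w = 0.2f; // gentle step toward the centroid (1/5, as in the examples)
for(VertexRef v = vertices_begin(); v != vertices_end(); v++) {
    Vec3 p = v->pos;
    Vec3 c = v->neighborhood_center(); // average position of the neighbors
    Vec3 N = v->normal();              // area-weighted vertex normal
    Vec3 dir = c - p;                  // update direction
    dir -= dot(N, dir) * N;            // keep only the tangential component
    v->new_pos = p + w * dir;
}
// (ii) only now copy the new positions over, so no vertex sees a half-updated neighbor
for(VertexRef v = vertices_begin(); v != vertices_end(); v++) {
    v->pos = v->new_pos;
}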
Repeating this procedure about 5 or 6 times should yield results like the ones seen below; you may want to repeat the smoothing step 10-20 times for each \"outer\" iteration. ", "url": "/meshedit/global/remesh/", "relUrl": "/meshedit/global/remesh/" },"30": { "doc": "Render", "title": "Render", "content": "# Render Welcome! This is Scotty3D's realistic, globally illuminated renderer, capable of creating images of complex scenes using path tracing. ## Render Window In render mode, click on \"Open Render Window\", and you will be able to set the parameters to render your model. Enjoy the excitement of seeing the images becoming clearer and clearer ;-) ![light](window.png) ## Moving Camera The render mode comes with its own camera, representing the position, view direction, field of view, and aspect ratio with which to render the scene. These parameters are visually represented by the camera control cage, which shows up as a black wire-frame pyramid that traces out the unit-distance view plane. Note that changing camera settings (e.g. field of view) will adjust the geometry of the camera cage. To move the render camera to the current view, click \"Move to View.\" This will reposition the camera so that it has exactly what you have on screen in view. To freely move the camera without updating its field of view/aspect ratio to match the current viewport, click \"Free Move,\" and use the normal 3D navigation tools to position the camera. When you are done, click \"Confirm Move\" or \"Cancel Move\" to apply or cancel the updated camera position. Feel free to play around with the field of view (FOV) and aspect ratio (AR) sliders while in the free move mode - they will adjust your current view to use these values, so you can see exactly how they affect the visible region. ## Create light To create lighting for your scene, simply go to the menu on the left side, click \"New Light\", and you will be able to choose from a variety of light objects and environmental lights. (You will implement support for environmental lights in Task 7; see the corresponding documentation for more guidance.) ![light](light.png) ## Enable Ray Logging for Debugging In Render mode, simply check the box for \"Logged Rays\", and you will be able to see the camera rays that you generated in Task 1 when you start rendering. ![ray](log_ray.png) ## Visualize BVH In Render mode, simply check the box for \"BVH\", and you will be able to see the BVH you generated in Task 3 when you start rendering. You can click on the horizontal bar to see each level of your BVH. ![ray](bvh.png) ## Materials and Other Object Options You can change the material and other properties of your mesh by selecting the object and choosing \"Edit Pose\", \"Edit Mesh\", or \"Edit Material\". For example, you can make a colored cow via \"Edit Material\" -> \"Diffuse light\", and pick a color that you like. ![material](material.png) ", "url": "/guide/render_mode/", "relUrl": "/guide/render_mode/" },"31": { "doc": "Rig", "title": "Rig", "content": "# Rig ### Rigging Setup Select the `Rig` tab to create a skeletal rig for an object. You can create a new bone by first selecting a parent joint and pressing `New Bone`, then clicking anywhere else on the object to place the bone. From there, you can repeat this process to create a chain of bones connected along the selected joint. If you want to branch off at a joint, simply click on the joint to branch off of, then start another chain by adding a new bone from there. 
To view a rigged example, see the `media/human.dae` example and select the object in the Rig tab to view its joints. Once you've implemented forward kinematics, the skeleton should be set up like so: ![rigged-human](guide-rigging-human.png) ### Editing Skinning Weight Threshold Radius Each joint has an associated `Radius` which controls the part of the mesh influenced by the selected bone during animation. The radius is visualized by the blue capsule around each bone and can be edited using the menu. The position of the joint can also be edited using the `Extent` values in the menu. Note that rigging only uses the extents of the bones for skeleton setup; joint poses do not influence the skeleton. Once rigging is done, the object can be posed by changing joint rotations in the [animate](../animate_mode) mode. ## Inverse Kinematics Instead of computing the positions of the bones from the joint poses (forward kinematics), in inverse kinematics, joint positions are computed from target positions. To associate a target position with a joint, select `Add IK` and edit the target position. Multiple target positions can be associated with the same joint, but targets need to be explicitly enabled using the checkbox. In the [animate](../animate_mode) mode, once inverse kinematics is implemented, joint rotations (poses) are updated based on the enabled IK handles. ", "url": "/guide/rigging_mode/", "relUrl": "/guide/rigging_mode/" },"32": { "doc": "(Task 4) Shadow Rays", "title": "(Task 4) Shadow Rays", "content": "# (Task 4) Shadow Rays In this task you will modify `Pathtracer::trace_ray` to implement accurate shadows. Currently `Pathtracer::trace_ray` computes the following: * It computes the intersection of ray `r` with the scene. * It computes the amount of light arriving at the hit point `hit.position` (the irradiance at the hit point) by integrating radiance from all scene light sources. * It computes the radiance reflected from the hit point in the direction of -`r`. (The amount of reflected light is based on the BSDF of the surface at the hit point.) Shadows occur when another scene object blocks light emitted from scene light sources towards the hit point. Fortunately, determining whether or not a ray of light from a light source to the hit point is occluded by another object is easy given a working ray tracer (which you have at this point!). **You simply want to know whether a ray originating from the hit point (`hit.position`), and traveling towards the light source (direction to light) hits any scene geometry before reaching the light.** (Note that the light's distance from the hit point also matters; more on this in the notes below.) Your job is to implement the logic needed to compute whether the hit point is in shadow with respect to the current light source sample. Below are a few notes: * In the starter code, when we call `light.sample(hit.position)`, it returns a `Light_sample sample` at the hit point. (You might want to take a look at `rays/light.h` for the definition of `struct Light_sample` and `class light`.) A `Light_sample` contains fields `radiance`, `pdf`, `direction`, and `distance`. In particular, `sample.direction` is the direction from the hit point to the light source, and `sample.distance` is the distance from the hit point to the light source. * A common ray tracing pitfall is for the \"shadow ray\" shot into the scene to accidentally hit the same object as `r` (the surface is erroneously determined to be occluded because the shadow ray is determined to hit the surface!). 
We recommend that you make sure the origin of the shadow ray is offset from the surface to avoid these erroneous \"self-intersections\". For example, consider setting the origin of the shadow ray to be `hit.position + epsilon * sample.direction` instead of simply `hit.position`. `EPS_F` is defined in `lib/mathlib.h` for this purpose. * Another common pitfall is forgetting that it doesn't matter if the shadow ray hits any scene geometry after reaching the light. Note that the light's distance from the hit point is given by `sample.distance`. Also note that `Ray` has a member called `time_bound`... * You will find it useful to debug your shadow code using the `DirectionalLight` since it produces hard shadows that are easy to reason about. * You will want to comment out the line `Spectrum radiance_out = Spectrum(0.5f);` and initialize `radiance_out` to a more reasonable value. Hint: should there be any light at the hit point before we even start considering each light sample? At this point you should be able to render very striking images. ## Sample results: At this point, you can add all kinds of lights among the options you have when you create \"New Light\" in Layout mode, except for Sphere Light and Environment Map, which you will implement in Task 7 (Note that you can still fill in `Sphere::Uniform::sample` in `student/samplers.cpp` now to view the result of a mesh under Sphere Light). The head of Peter Schröder rendered with hemisphere lighting. A sphere and a cube with hemisphere lighting. Hex and cube under directional lighting. Bunny on a plane under point light. Spot on a sphere under directional lighting. Spot on a sphere under hemisphere lighting. ", "url": "/pathtracer/shadow_rays", "relUrl": "/pathtracer/shadow_rays" },"33": { "doc": "Simplification", "title": "Simplification", "content": "# Simplification ![Surface simplification via quadric error metric](quad_simplify.png) For an in-practice example, see the [User Guide](/Scotty3D/guide/model). Just as with images, meshes often have far more samples than we really need. The simplification method in Scotty3D simplifies a given triangle mesh by applying _quadric error simplification_ (note that this method is for **triangle meshes only**!). This method was originally developed at CMU by Michael Garland and Paul Heckbert, in their paper [Surface Simplification Using Quadric Error Metrics](http://www.cs.cmu.edu/~./garland/quadrics/quadrics.html). (Looking at this paper -- or the many slides and presentations online that reference it -- may be very helpful in understanding and implementing this part of the assignment!) The basic idea is to iteratively collapse edges until we reach the desired number of triangles. The more edges we collapse, the simpler the mesh becomes. The only question is: which edges should we collapse? And where should we put the new vertex when we collapse an edge? Finding the sequence of edge collapses (and vertex positions) that give an _optimal_ approximation of the surface would be very difficult -- likely impossible! Garland and Heckbert instead proposed a simple, greedy scheme that works quite well in practice, and is the basis of many mesh simplification tools today. Roughly speaking, we're going to write down a function that measures the distance to a given triangle, and then \"accumulate\" this function as many triangles get merged together. 
More precisely, we can write the distance d of a point _x_ to a plane with normal _N_ passing through a point _p_ as dist(_x_) = dot(_N_, _x_ - _p_) In other words, we measure the extent of the vector from _p_ to _x_ along the normal direction. This quantity gives us a value that is either _positive_ (above the plane), or _negative_ (below the plane). Suppose that _x_ has coordinates (_x_,_y_,_z_), _N_ has coordinates (_a_,_b_,_c_), and let _d_(_x_) = -dot(_N_, _p_), then in _homogeneous_ coordinates, the distance to the plane is just dot(_u_, _v_) where _u_ = (_x_,_y_,_z_,_1_) and _v_ = (_a_,_b_,_c_,_d_). When we're measuring the quality of an approximation, we don't care whether we're above or below the surface; just how _far away_ we are from the original surface. Therefore, we're going to consider the _square_ of the distance, which we can write in homogeneous coordinates as dot(_u_, _v_)^2 = _u_^T (_vv_^T) _u_, where ^T denotes the transpose of a vector. The term _vv_^T is an [outer product](https://en.wikipedia.org/wiki/Outer_product) of the vector _v_ with itself, which gives us a symmetric matrix _K_ = _vv_^T. In components, this matrix would look like
a^2 ab  ac  ad
ab  b^2 bc  bd
ac  bc  c^2 cd
ad  bd  cd  d^2
but in Scotty3D it can be constructed by simply calling the method `outer( Vec4, Vec4 )` in `lib/mat4.h` that takes a pair of vectors in homogeneous coordinates and returns the outer product as a 4x4 matrix. We will refer to this matrix as a \"quadric,\" because it also describes a [quadric surface](https://en.wikipedia.org/wiki/Quadric). The matrix _K_ tells us something about the distance to a plane. We can also get some idea of how far we are from a _vertex_ by considering the sum of the squared distances to the planes passing through all triangles that touch that vertex. In other words, we will say that the distance to a small neighborhood around the vertex i can be approximated by the sum of the quadrics on the incident faces ijk: K_i = sum of K_ijk over all incident faces ijk. Likewise, the distance to an _edge_ ij will be approximated by the sum of the quadrics at its two endpoints: K_ij = K_i + K_j. The sums above should then be easy to compute -- you can just add up the `Mat4` objects around a vertex or along an edge using the usual \"+\" operator. You do not need to write an explicit loop over the 16 entries of the matrix. Once you have a quadric _K_ associated with an edge _ij_, you can ask the following question: if we collapse the edge to a point _x_, where should we put the new point so that it minimizes the (approximate) distance to the original surface? In other words, where should it go so that it minimizes the quantity _x_^T _K x_? Just like any other function, we can look for the minimum by taking the derivative with respect to _x_ and setting it equal to zero. (By the way, in this case we're always going to get a _minimum_ and not a _maximum_ because the matrices K are all [positive-definite](https://en.wikipedia.org/wiki/Positive-definite_matrix).) In other words, we want to solve the small (4x4) linear system _K u_ = _0_ for the optimal position _u_, expressed in homogeneous coordinates. We can simplify this situation a bit by remembering that the homogeneous coordinate for a point in 3-space is just 1. After a few simple manipulations, then, we can rewrite this same system as an even smaller 3x3 linear system _Ax_ = _b_ where A is the upper-left 3x3 block of K, and b is _minus_ the upper-right 3x1 column. 
In other words, the entries of A are just A_ij = K_ij for i, j in {1, 2, 3}, and the entries of b are b_i = -K_i4, i.e., minus the first three entries of the fourth column of K. The cost associated with this solution can be found by plugging _x_ back into our original expression, i.e., the cost is just _x_^T _K_ _x_ where _K_ is the quadric associated with the edge. Fortunately, _you do not need to write any code to solve this linear system_. It can be solved using the method `Mat4::inverse()` which computes the inverse of a 4x4 matrix. Note that while we really want to work with a 3x3 matrix here, using the upper left 3x3 block of a 4x4 matrix is equivalent, given that the 4th row/column remain as in the identity matrix. In particular, you can write something like this: Mat4 A; // computed by accumulating quadrics and then extracting the upper-left 3x3 block Vec3 b; // computed by extracting minus the upper-right 3x1 column from the same matrix Vec3 x = A.inverse() * b; // solve Ax = b for x by hitting both sides with the inverse of A However, A might not always be invertible: consider the case where the mesh is composed of points all on the same plane. In this case, you need to select an optimal point along the original edge. Please read [Garland's paper](http://reports-archive.adm.cs.cmu.edu/anon/1999/CMU-CS-99-105.pdf) on page 62 section 3.5 for more details. If you're a bit lost at this point, don't worry! There are a lot of details to go through, and we'll summarize everything again in the implementation section. The main idea to keep in mind right now is: * we're storing a matrix at every vertex that encodes (roughly) the distance to the surface, and * for each edge, we want to find the point that is (roughly) as close as possible to the surface, according to the matrices at its endpoints. As we collapse edges, the matrices at endpoints will be combined by just adding them together. So, as we perform more and more edge collapses, these matrices will try to capture the distance to a larger and larger region of the original surface. The one final thing we want to think about is performance. At each iteration, we want to collapse the edge that results in the _least_ deviation from our original surface. But testing every edge, every single iteration sounds pretty expensive! (Something like O(n^2).) Instead, we're going to put all our edges into a [priority queue](https://en.wikipedia.org/wiki/Priority_queue) that efficiently keeps track of the \"best\" edge for us, even as we add and remove edges from our mesh. In the code framework, we actually introduce a new class called an `Edge_Record` that encodes all the essential information about our edge: // An edge record keeps track of all the information about edges // that we need while applying our mesh simplification algorithm. class Edge_Record { public: Edge_Record() {} Edge_Record(std::unordered_map& vertex_quadrics, Halfedge_Mesh::EdgeRef e) : edge(e) { // The second constructor takes a dictionary mapping vertices // to quadric error matrices and an edge reference. It then // computes the sum of the quadrics at the two endpoints // and solves for the optimal midpoint position as measured // by this quadric. It also stores the value of this quadric // as the \"score\" used by the priority queue. } EdgeRef edge; // the edge referred to by this record Vec3 optimal; // the optimal point, if we were // to collapse this edge next float cost; // the cost associated with collapsing this edge, // which is very (very!) 
roughly something like // the distance we'll deviate from the original // surface if this edge is collapsed }; Within `Halfedge_Mesh::simplify`, you will create a dictionary `vertex_quadrics` mapping vertices to quadric error matrices. We will use a `std::unordered_map` for this purpose, which is the hash map provided by the STL. Its usage is detailed in the [C++ documentation](https://en.cppreference.com/w/cpp/container/unordered_map). To initialize the record for a given edge `e`, you can use this dictionary to write Edge_Record record(vertex_quadrics, e); Similarly to how we created a dictionary mapping vertices to quadric matrices, we will also want to associate this record with its edge using the `edge_records` dictionary: edge_records[e] = record; Further, we will want to add the record to a priority queue, which is always sorted according to the cost of collapsing each edge. The starter code also provides the helper class `PQueue` for this purpose. For example: PQueue queue; queue.insert(record); If we ever want to know what the best edge is to collapse, we can just look at the top of the priority queue: Edge_Record bestEdge = queue.top(); More documentation is provided inline in `student/meshedit.cpp`. Though conceptually sophisticated, quadric error simplification is actually not too hard to implement. It basically boils down to two methods: Edge_Record::Edge_Record(std::unordered_map& vertex_quadrics, EdgeIter e); Halfedge_Mesh::simplify(); As discussed above, the edge record initializer should: 1. Compute a quadric for the edge as the sum of the quadrics at endpoints. 2. Build a 3x3 linear system for the optimal collapsed point, as described above. 3. Solve this system and store the optimal point in `Edge_Record::optimal`. 4. Compute the corresponding error value and store it in `Edge_Record::cost`. 5. Store the edge in `Edge_Record::edge`. The downsampling routine can then be implemented by following this basic recipe: 1. Compute quadrics for each face by simply writing the plane equation for that face in homogeneous coordinates, and building the corresponding quadric matrix using `outer()`. This matrix should be stored in the yet-unmentioned dictionary `face_quadrics`. 2. Compute an initial quadric for each vertex by adding up the quadrics at all the faces touching that vertex. This matrix should be stored in `vertex_quadrics`. (Note that these quadrics must be updated as edges are collapsed.) 3. For each edge, create an `Edge_Record`, insert it into the `edge_records` dictionary, and add it to one global `PQueue` queue. 4. Until a target number of triangles is reached, collapse the best/cheapest edge (as determined by the priority queue) and set the quadric at the new vertex to the sum of the quadrics at the endpoints of the original edge. You will also have to update the cost of any edge connected to this vertex. The algorithm should terminate when a target number of triangles is reached -- for the purpose of this assignment, you should set this number to 1/4th the number of triangles in the input (since subdivision will give you a factor of 4 in the opposite direction). Note that to _get_ the best element from the queue you call `PQueue::top()`, whereas to _remove_ the best element from the top you must call `PQueue::pop()` (the separation of these two tasks is fairly standard in STL-like data structures). As with subdivision, it is critical that you carefully reason about which mesh elements get added/deleted in what order -- particularly in Step 4\\. 
A good way to implement Step 4 would be: 1. Get the cheapest edge from the queue. 2. **Remove the cheapest edge from the queue by calling `pop()`.** 3. Compute the new quadric by summing the quadrics at its two endpoints. 4. **Remove any edge touching either of its endpoints from the queue.** 5. Collapse the edge. 6. Set the quadric of the new vertex to the quadric computed in Step 3. 7. **Insert any edge touching the new vertex into the queue, creating new edge records for each of them.** Steps 4 and 7 are highlighted because it is easy to get these steps wrong. For instance, if you collapse the edge first, you may no longer be able to access the edges that need to be removed from the queue. A working implementation should look something like the examples below. You may find it easiest to implement this algorithm in stages. For instance, _first_ get the edge collapses working, using just the edge midpoint rather than the optimal point, _then_ worry about solving for the point that minimizes quadric error. ", "url": "/meshedit/global/simplify/", "relUrl": "/meshedit/global/simplify/" },"34": { "doc": "Simulate", "title": "Simulate", "content": "# Simulate The simulation view provides a way to create and manage particle emitters. To add an emitter, open the dropdown menu, adjust desired parameters, and press `Add`. ![add emitter](simulate_mode/add_emitter.png) - Color: color with which to render the particles. - Angle: angle of cone within which particles are generated (pointing in the emitter object's direction). - Scale: the scale factor to apply to the particle mesh when rendering particles. - Lifetime: how long (in seconds) each particle should live before it is deleted. - Particles/Sec: how many particles should be generated per second. The total amount of live particles is hence `lifetime * particles_per_second`. - Particle: choose the shape of each particle. If mesh objects are present in the scene, they will also show up here, allowing the creation of particles with custom shapes. - Enabled: whether to immediately enable the emitter Once an enabled emitter is added to the scene (and animation task 4: particle simulation is implemented), particles will start generating and following trajectories based on the emitter parameters. Particles should collide with scene objects. When moving existing objects that particles interact with, the simulation will not be updated until the movement is completed. For example, the `particles.dae` test scene: Finally, note that you can render particles just like any other scene objects. In the path tracer, each particle is also a point light source! Rendering `particles.dae` with depth of field: ![particles render](simulate_mode/render.png) ", "url": "/guide/simulate_mode/", "relUrl": "/guide/simulate_mode/" },"35": { "doc": "Skeleton Kinematics", "title": "Skeleton Kinematics", "content": "# Skeleton Kinematics A `Skeleton`(defined in `scene/skeleton.h`) is what we use to drive our animation. You can think of them like the set of bones we have in our own bodies and joints that connect these bones. For convenience, we have merged the bones and joints into the `Joint` class which holds the orientation of the joint relative to its parent as euler angle in its `pose`, and `extent` representing the direction and length of the bone with respect to its parent `Joint`. Each `Mesh` has an associated `Skeleton` class which holds a rooted tree of `Joint`s, where each `Joint` can have an arbitrary number of children. 
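As a rough, simplified sketch of the fields just described (the actual `Joint` class in `scene/skeleton.h` has additional members and methods, so treat this only as a mental model, not the real declaration):
struct Joint {
    Vec3 pose;                    // rotation relative to the parent joint, stored as Euler angles
    Vec3 extent;                  // direction and length of this bone, relative to its parent
    Joint* parent = nullptr;      // null for the root joint
    std::vector<Joint*> children; // a joint may have an arbitrary number of children
};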
All of our joints are ball `Joint`s which have a set of 3 rotations around the x, y, and z axes, called _Euler angles_. Whenever you deal with angles in this way, a fixed order of operations must be enforced, otherwise the same set of angles will not represent the same rotation. In order to get the full rotational transformation matrix R, we can create individual rotation matrices around the x, y, and z axes, which we call R_x, R_y, and R_z respectively, and compose them. The particular order in which these matrices are composed is fixed for this assignment, so be sure to apply the same convention everywhere. ### Forward Kinematics _Note: These diagrams are in 2D for visual clarity, but we will work with a 3D kinematic skeleton._ When a joint's parent is rotated, that transformation should be propagated down to all of its children. In the diagram below, the joints form a chain: the first joint is the parent of the second, and the second is the parent of the third. When a translation and rotation are applied to the first joint, all of its descendants are affected by this transformation as well. Then, when the second joint is rotated, the rotation affects itself and the third joint. Finally, when a rotation is applied to the last joint, it only affects itself because it has no children. You need to implement these routines in `student/skeleton.cpp` for forward kinematics. * `Joint::joint_to_bind` Return a matrix transforming points in the space of this joint to points in mesh space in bind position up to the base of this joint (end of its parent joint). You should traverse upwards from this joint's parent all the way up to the root joint and accumulate their transformations. * `Joint::joint_to_posed` Return a matrix transforming points in the space of this joint to points in mesh space, taking into account joint poses. Again, you should traverse upwards from this joint's parent to the root joint. * `Skeleton::end_of` Returns the end position of the joint in the world coordinate frame, and you should take into account the base position of the skeleton (`Skeleton::base_pos`). * `Skeleton::posed_end_of` Returns the end position of the joint in the world coordinate frame with poses, and you should take into account `Skeleton::base_pos`. * `Skeleton::joint_to_bind` Return a matrix transforming points in the space of this joint to points in mesh space in bind position, but with the base position of the skeleton taken into account. Hint: use some function that you have implemented wisely! * `Skeleton::joint_to_posed` Return a matrix transforming points in the space of this joint to points in mesh space, taking into account joint poses, but with the base position of the skeleton taken into account. Hint: use some function that you have implemented wisely! Once you have implemented these basic kinematics, you should be able to define skeletons, set their positions at a collection of keyframes, and watch the skeleton smoothly interpolate the motion (see the [user guide](../guide/animate.md) for an explanation of the interface). The gif below shows a very hasty demo defining a few joints and interpolating their motion. Note that the skeleton does not yet influence the geometry of the cube in this scene -- that will come in Task 3! ### Task 2b - Inverse Kinematics ### Single Target IK Now that we have a logical way to move joints around, we can implement Inverse Kinematics, which will move the joints around in order to reach a target point. There are a few different ways we can do this, but for this assignment we'll implement an iterative method called gradient descent in order to find the minimum of a function. For a function f(x), we'll have the update scheme x := x - tau * grad f(x), where tau is a small timestep. 
For this task, we'll be using gradient descent to find the minimum of the cost function c = 1/2 * |p - q|^2, where p is the position in world space of the (end of the) target joint, and q is the position in world space of the target point. More specifically, we'll be using a technique called Jacobian Transpose, which relies on the assumption that the change in joint angles can be taken as delta_theta = alpha * J^T * (q - p), where: * delta_theta (n x 1) is the vector of changes to the joint angles, where each entry is the angle of a joint around one axis of rotation * alpha is a constant * J (3 x n) is the Jacobian of p with respect to the joint angles. Note that n here refers to the number of joints in the skeleton. Although in reality this can be reduced to just the number of joints between the target joint and the root, inclusive, because all joints not on that path should stay where they are, so their columns in J will be 0. So n can just be the number of joints between the target and the root, inclusive. Additionally note that since this will get multiplied by the timestep anyway, you can ignore the value of alpha and just fold it into the timestep. Now we just need a way to calculate the Jacobian of p. For this, we can use the fact that each column is given by J_i = r_i x p_i, where: * J_i is the i-th column of J * r_i is the axis of rotation of joint i * p_i is the vector from the base of joint i to the end point of the target joint For a more in-depth derivation of Jacobian transpose (and a look into other inverse kinematics algorithms), please check out [this presentation](https://web.archive.org/web/20190501035728/https://autorob.org/lectures/autorob_11_ik_jacobian.pdf). (Pages 45-56 in particular) Now, all of this will work for updating the angle along a single axis, but we have 3 axes to deal with. Luckily, extending it to 3 dimensions isn't very difficult: we just need to update the angle along each axis independently. ### Multi-Target We'll extend this so we can have multiple targets; the function to minimize is then simply the sum of the single-target cost functions, one term per target, which is actually a simple extension. Since each term is independent and added together, we can get the gradient of this new cost function just by summing the gradients of each of the constituent cost functions! You should implement multi-target IK, which will take a `vector` of `IK_Handle*`s called `active_handles`, each of which stores a target point for a joint. See `scene/skeleton.h` for the definition of the `IK_Handle` structure. In order to implement this, you should update `Joint::compute_gradient` and `Skeleton::step_ik`. `Joint::compute_gradient` should calculate the gradient of the cost function in the x, y, and z directions, and add them to `Joint::angle_gradient` for all relevant joints. `Skeleton::step_ik` should actually do the gradient descent calculations and update the `pose` of each joint. In this function, you should probably use a very small timestep, but do several iterations (say, 10s to 100s) of gradient descent in order to speed things up. For even faster and better results, you can also implement a variable timestep instead of just using a fixed one. Note also that the root joint should never be updated. A key thing for this part is to _remember what coordinate frame you're in_, because if you calculate the gradients in the wrong coordinate frame or use the axis of rotation in the wrong coordinate frame your answers will come out very wrong! ### Using your IK! Once you have IK implemented, you should be able to create a series of joints, and get a particular joint to move to the desired final position you have selected. 
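For concreteness, here is a minimal, hypothetical sketch of the per-joint gradient of the single-target cost described above. The function name and the way its inputs are obtained are illustrative only -- the real implementation lives in `Joint::compute_gradient` and `Skeleton::step_ik`, and must bring all quantities into a consistent coordinate frame using the forward-kinematics transforms you already wrote:
// Gradient of 0.5 * |p - q|^2 with respect to one joint's three Euler angles.
// r_x, r_y, r_z are that joint's rotation axes, `base` is the joint's base position,
// p is the current end-effector position, and q is the target point -- all assumed
// to be expressed in the same coordinate frame.
Vec3 ik_angle_gradient(Vec3 r_x, Vec3 r_y, Vec3 r_z, Vec3 base, Vec3 p, Vec3 q) {
    Vec3 to_end = p - base; // vector from the joint base to the end effector
    Vec3 err = p - q;       // derivative of the cost with respect to p
    // Each column of the Jacobian is cross(axis, to_end); by the chain rule the
    // gradient entry for the corresponding angle is dot(that column, err).
    return Vec3(dot(cross(r_x, to_end), err),
                dot(cross(r_y, to_end), err),
                dot(cross(r_z, to_end), err));
}
A descent step then subtracts `timestep * angle_gradient` from each relevant joint's `pose` (skipping the root) and repeats for a few dozen iterations.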
", "url": "/animation/skeleton_kinematics", "relUrl": "/animation/skeleton_kinematics" },"36": { "doc": "Skinning", "title": "Skinning", "content": "# Linear Blend Skinning Now that we have a skeleton set up, we need to link the skeleton to the mesh in order to get the mesh to follow the movements of the skeleton. We will implement linear blend skinning using the following functions: `Skeleton::skin()`, `Skeleton::find_joints()`, and `closest_on_line_segment`. The easiest way to do this is to update each of mesh vertices' positions in relation to the bones (Joints) in the skeleton. There are 3 types of coordinate spaces: bind, joint, and pose. Bind is the initial coordinate frame of the vertices of where they are bound to relative to the mesh. Joint is the position of the vertex relative to a given joint. Pose is the world-space position after the joint transforms have been applied. You'll want to compute transforms that take vertices in bind space and convert them to posed space (Hint: `joint_to_bind`, `joint_to_posed`, and `inverse()` will come in handy.) Your implementation should have the following basic steps for each vertex: - Compute the vertex's position with respect to each joint j in the skeleton in j's coordinate frame when no transformations have been applied to the skeleton (bind pose, vertex bind position). - Find where this vertex would end up (in world coordinates) if it were transformed along with bone j. - Find the closest point on joint j's bone segment (axis) and compute the distance to this closest point (Hint: `closest_on_line_segment` might come in handy). - Diagram of `closest_on_line_segment`: - Compute the resulting position of the vertex by doing a weighted average of the bind-to-posed transforms from each bone and applying it to the vertex. The weights for the weighted average should be the inverse distance to the joint, so closer bones have a stronger influence. Below we have an equation representation. The ith vertex v is the new vertex position. The weight w is the weight metric computed as the inverse of distance between the ith vertex and the closest point on joint j. We multiply this term with the position of the ith vertex v with respect to joint j after joint's transformations has been applied. In Scotty3D, the `Skeleton::skin()` function gets called on every frame draw iteration, recomputing all skinning related quantities. In this function, you should read vertices from `input.verts()` and indices from `input.indices()`, and write the resulting positions and norms to `v.pos` and `v.norm` for every vertex in the input vertices list. You will be implementing a Capsule-Radius Linear Blend Skin method, which only moves vertices with a joint if they lie in the joint's radius. The `Skeleton::skin()` function also takes in a `map` of vertex index to relevant joints that you must compute the above distance/transformation metrics on. You are also responsible for creating this `map`, which is done so in `Skeleton::find_joints()`. Don't worry about calling this function, it is called automatically before skin is called, populating the `map` field and sending it over to the `skin()` function. Your `Skeleton::find_joints()` implementation should iterate over all the vertices and add joint j to vertex index i in the map if the distance between the vertex and joint is less than `j->radius` (remember make sure they're both in the same coordinate frame.) 
", "url": "/animation/skinning", "relUrl": "/animation/skinning" },"37": { "doc": "Splines", "title": "Splines", "content": "# Spline Interpolation As we discussed in class, data points in time can be interpolated by constructing an approximating piecewise polynomial or spline. In this assignment you will implement a particular kind of spline, called a Catmull-Rom spline. A Catmull-Rom spline is a piecewise cubic spline defined purely in terms of the points it interpolates. It is a popular choice in real animation systems, because the animator does not need to define additional data like tangents, etc. (However, your code may still need to numerically evaluate these tangents after the fact; more on this point later.) All of the methods relevant to spline interpolation can be found in `spline.h` with implementations in `spline.inl`. ### Task 1a - Hermite Curve over the Unit Interval Recall that a cubic polynomial is a function of the form: where , and are fixed coefficients. However, there are many different ways of specifying a cubic polynomial. In particular, rather than specifying the coefficients directly, we can specify the endpoints and tangents we wish to interpolate. This construction is called the \"Hermite form\" of the polynomial. In particular, the Hermite form is given by where are endpoint positions, are endpoint tangents, and are the Hermite bases Your first task is to implement the method `Spline::cubic_unit_spline()`, which evaluates a spline defined over the time interval given a pair of endpoints and tangents at endpoints. Your basic strategy for implementing this routine should be: * Evaluate the time, its square, and its cube (for readability, you may want to make a local copy). * Using these values, as well as the position and tangent values, compute the four basis functions of a cubic polynomial in Hermite form. * Finally, combine the endpoint and tangent data using the evaluated bases, and return the result. Notice that this function is templated on a type T. In C++, a templated class can operate on data of a variable type. In the case of a spline, for instance, we want to be able to interpolate all sorts of data: angles, vectors, colors, etc. So it wouldn't make sense to rewrite our spline class once for each of these types; instead, we use templates. In terms of implementation, your code will look no different than if you were operating on a basic type (e.g., doubles). However, the compiler will complain if you try to interpolate a type for which interpolation doesn't make sense! For instance, if you tried to interpolate `Skeleton` objects, the compiler would likely complain that there is no definition for the sum of two skeletons (via a + operator). In general, our spline interpolation will only make sense for data that comes from a vector space, since we need to add T values and take scalar multiples. ### Task 1B: Evaluation of a Catmull-Rom spline The routine from part 1A just defines the interpolated spline between two points, but in general we will want smooth splines between a long sequence of points. You will now use your solution from part 1A to implement the method `Spline::at()` which evaluates a general Catmull-Romspline at the specified time in a sequence of points (called \"knots\"). Since we now know how to interpolate a pair of endpoints and tangents, the only task remaining is to find the interval closest to the query time, and evaluate its endpoints and tangents. 
The basic idea behind Catmull-Rom is that for a given time t, we first find the four closest knots, at times t0 < t1 <= t < t2 < t3, with values p0, p1, p2, p3. We then use t1 and t2 as the endpoints of our cubic \"piece,\" and for tangents we use the values m1 = (p2 - p0) / (t2 - t0) and m2 = (p3 - p1) / (t3 - t1). In other words, a reasonable guess for the tangent is given by the difference between neighboring points. (See the Wikipedia and our course slides for more details.) This scheme works great if we have two well-defined knots on either side of the query time t. But what happens if we get a query time near the beginning or end of the spline? Or what if the spline contains fewer than four knots? We still have to somehow come up with a reasonable definition for the positions and tangents of the curve at these times. For this assignment, your Catmull-Rom spline interpolation should satisfy the following properties: * If there are no knots at all in the spline, interpolation should return the default value for the interpolated type. This value can be computed by simply calling the constructor for the type: T(). For instance, if the spline is interpolating Vector3D objects, then the default value will be the zero vector. * If there is only one knot in the spline, interpolation should always return the value of that knot (independent of the time). In other words, we simply have a constant interpolant. * If the query time is less than or equal to the initial knot, return the initial knot's value. * If the query time is greater than or equal to the final knot, return the final knot's value. Once we have two or more knots, interpolation can be handled using general-purpose code. In particular, we can adopt the following \"mirroring\" strategy to obtain the four knots used in our computation: * Any query time between the first and last knot will have at least one knot \"to the left\" (k1) and one \"to the right\" (k2). * Suppose we don't have a knot \"two to the left\" (k0). Then we will define a \"virtual\" knot k0 = k1 - (k2 - k1). In other words, we will \"mirror\" the difference we observe between k1 and k2 to the other side of k1. * Likewise, if we don't have a knot \"two to the right\" (k3), then we will \"mirror\" the difference to get a \"virtual\" knot k3 = k2 + (k2 - k1). * At this point, we have four valid knot values (whether \"real\" or \"virtual\"), and can compute our tangents and positions as usual. * These values are then handed off to our subroutine that computes cubic interpolation over the unit interval. An important thing to keep in mind is that `Spline::cubic_unit_spline()` assumes that the time value t is between 0 and 1, whereas the distance between two knots on our Catmull-Rom spline can be arbitrary. Therefore, when calling this subroutine you will have to normalize t such that it is between 0 and 1, i.e., you will have to divide by the length of the current interval over which you are interpolating. You should think very carefully about how this normalization affects the value computed by the subroutine, in comparison to the values we want to return. A corresponding transformation is also necessary for the tangents that you feed in to specify the unit spline. Internally, a Spline object stores its data in an STL map that maps knot times to knot values. A nice thing about an STL map is that it automatically keeps knots in sorted order. Therefore, we can quickly access the knot closest to a given time using the method `map::upper_bound()`, which returns an iterator to the knot with the smallest time greater than the given query time (you can find out more about this method via online documentation for the Standard Template Library). 
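For reference, a minimal sketch of the unit-interval Hermite evaluation from Task 1a is shown below, using the standard cubic Hermite basis polynomials. It mirrors the shape of `Spline::cubic_unit_spline`, but the body is just one straightforward way to write it, not the required implementation; remember that when calling it from `Spline::at()` you still have to normalize the query time to [0,1] and transform the tangents accordingly, as discussed above:
template<typename T>
T hermite_unit_interval(float t, const T& p0, const T& p1, const T& m0, const T& m1) {
    // powers of t, for readability
    float t2 = t * t;
    float t3 = t2 * t;
    // standard cubic Hermite basis functions on [0,1]
    float h00 =  2.0f * t3 - 3.0f * t2 + 1.0f; // weights the start position p0
    float h10 =         t3 - 2.0f * t2 + t;    // weights the start tangent m0
    float h01 = -2.0f * t3 + 3.0f * t2;        // weights the end position p1
    float h11 =         t3 -        t2;        // weights the end tangent m1
    // combine endpoint and tangent data using the evaluated bases
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1;
}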
### Using the splines Once you have implemented the functions in `spline.cpp`, you should be able to make simple animations by translating, rotating or scaling the mesh in the scene. The main idea is to: * create an initial keyframe by clicking at a point on the white timeline at the bottom of the screen * specify the initial location/orientation/scale of your mesh using the controls provided * create more keyframes with different mesh locations/orientations/scales and watch the splines smoothly interpolate the movement of your mesh! ", "url": "/animation/splines", "relUrl": "/animation/splines" },"38": { "doc": "Triangulation", "title": "Triangulation", "content": "# Triangulation For an in-practice example, see the [User Guide](/Scotty3D/guide/model). A variety of geometry processing algorithms become easier to implement (or are only well defined) when the input consists purely of triangles. The method `Halfedge_Mesh::triangulate` converts any polygon mesh into a triangle mesh by splitting each polygon into triangles. This transformation is performed in-place, i.e., the original mesh data is replaced with the new, triangulated data (rather than making a copy). The implementation of this method will look much like the implementation of the local mesh operations, with the addition of looping over every face in the mesh. There is more than one way to split a polygon into triangles. Two common patterns are to connect every vertex to a single vertex, or to \"zig-zag\" the triangulation across the polygon: The `triangulate` routine is not required to produce any particular triangulation so long as: * all polygons in the output are triangles, * the vertex positions remain unchanged, and * the output is a valid, manifold halfedge mesh. Note that triangulation of nonconvex or nonplanar polygons may lead to geometry that is unattractive or difficult to interpret. However, the purpose of this method is simply to produce triangular _connectivity_ for a given polygon mesh, and correct halfedge connectivity is the only invariant that must be preserved by the implementation. The _geometric_ quality of the triangulation can later be improved by running other global algorithms (e.g., isotropic remeshing); ambitious developers may also wish to consult the following reference: * Zou et al, [\"An Algorithm for Triangulating Multiple 3D Polygons\"](http://www.cs.wustl.edu/~taoju/research/triangulate_final.pdf) ", "url": "/meshedit/global/triangulate/", "relUrl": "/meshedit/global/triangulate/" },"39": { "doc": "Visualization of normals", "title": "Visualization of normals", "content": "# Visualization of normals For debugging purposes: You can set the `bool normal_colors` to true in `student/debug.h` to check if the normals that you have computed at the hit point are correct or not for debugging purposes. Here are some reference results: ", "url": "/pathtracer/visualization_of_normals", "relUrl": "/pathtracer/visualization_of_normals" } }