Commit 2f5b85ee authored by Hui Wang's avatar Hui Wang

use relative path in md

parent c29a3a27
......@@ -13,12 +13,12 @@ There are four primary components that must be implemented to support Animation
**A4.0**
- [(Task 1) Spline Interpolation](splines)
- [(Task 2) Skeleton Kinematics](skeleton_kinematics)
- [(Task 1) Spline Interpolation](./splines)
- [(Task 2) Skeleton Kinematics](./skeleton_kinematics)
**A4.5**
- [(Task 3) Linear Blend Skinning](skinning)
- [(Task 4) Particle Simulation](particles)
- [(Task 3) Linear Blend Skinning](./skinning)
- [(Task 4) Particle Simulation](./particles)
Each task is described on the linked page.
......
[[Home]](index) [[Mesh Edit]](meshedit/overview) [[Path Tracer]](pathtracer/overview) [[Animation]](animation/overview)
[[Home]](index) [[Mesh Edit]](./meshedit/overview) [[Path Tracer]](./pathtracer/overview) [[Animation]](./animation/overview)
---
......@@ -6,7 +6,7 @@
<!-- ![Ubuntu Build Status](https://github.com/CMU-Graphics/Scotty3D/workflows/Ubuntu/badge.svg) ![MacOS Build Status](https://github.com/CMU-Graphics/Scotty3D/workflows/MacOS/badge.svg) ![Windows Build Status](https://github.com/CMU-Graphics/Scotty3D/workflows/Windows/badge.svg) -->
To get a copy of the codebase, see [Git Setup](git).
To get a copy of the codebase, see [Git Setup](./git).
Note: the first build on any platform will be very slow, as it must compile most dependencies. Subsequent builds will only need to re-compile your edited Scotty3D code.
......
[[Home]](index) [[Mesh Edit]](meshedit/overview) [[Path Tracer]](pathtracer/overview) [[Animation]](animation/overview)
[[Home]](index) [[Mesh Edit]](./meshedit/overview) [[Path Tracer]](./pathtracer/overview) [[Animation]](./animation/overview)
---
......
......@@ -27,17 +27,17 @@ To see your animation, press `Play [space]` . Once you've implemented **spline i
Check `Draw Splines` to visualize the spline along which objects are animated.
![view-spline](animate_mode/guide-animate-spline.png)
![view-spline](./animate_mode/guide-animate-spline.png)
`Add Frames` inserts 90 empty frames into the timeline. `Crop End` deletes frames from the selected location to the end of the timeline.
### Posing
Once you have [rigged](rig) an object with a skeleton, it can now be posed by selecting a joint and changing its pose i.e., rotating the joint. This is called Forward Kinematics.
Once you have [rigged](./rig) an object with a skeleton, it can be posed by selecting a joint and changing its pose, i.e., rotating the joint. This is called Forward Kinematics.
Joint poses can also be indirectly changed by using the IK (Inverse Kinematics) handles to provide target positions.
Note that IK handles need to be explicitly enabled using the checkbox.
Once you've implemented **forward kinematics**, **inverse kinematics** and **skinning**, as you change the pose, the mesh will deform.
Different poses can be set as keyframes to animate the object.
<video src="{{ site.baseurl }}/guide/animate_mode/guide-posing-rig.mp4" controls preload muted loop style="max-width: 100%; margin: 0 auto;"></video>
<video src="./animate_mode/guide-posing-rig.mp4" controls preload muted loop style="max-width: 100%; margin: 0 auto;"></video>
......@@ -38,7 +38,7 @@ toggle through by pressing the `r` key.
- `Rotate`: click and drag on the red (X), green (Y), or blue (Z) loop to rotate the object about the X/Y/Z axis. Note that these rotations are applied relative to the current pose, so they do not necessarily correspond to smooth transformations of the X/Y/Z Euler angles.
- `Scale`: click and drag on the red (X), green (Y), or blue (Z) block to scale the object about the X/Y/Z axis. Again note that this scale is applied relative to the current pose.
![selecting an edge](model_mode/model_select.png)
![selecting an edge](./model_mode/model_select.png)
### Beveling
......
......@@ -13,7 +13,7 @@ Welcome! This is Scotty3D's realistic, globally illuminated renderer, capable of
In render mode, click on "Open Render Window", and you will be able to set the parameters to render your model. Enjoy the excitement of watching the image become clearer and clearer ;-)
![light](render_mode/window.png)
![light](./render_mode/window.png)
## Moving Camera
......@@ -27,7 +27,7 @@ To freely move the camera without updating its field of view/aspect ratio to mat
To add lighting to the scene, simply go to the menu on the left side, click "New Light", and you will be able to choose from a variety of point objects and infinite environment lights. (To implement support for environment lights, see PathTracer task 7.)
![light](render_mode/light.png)
![light](./render_mode/light.png)
Additionally, any object can be made into an emissive area light by changing its material to `Diffuse Light`. Mesh-based area lights can produce much more realistic lighting conditions.
......@@ -35,16 +35,16 @@ Additionally, any object can be made into an emissive area light by changing its
In Render mode, simply check the box for "Logged Rays", and you will be able to see the camera rays that you generated in task 1 when you start rendering.
![ray](render_mode/ray_log.png)
![ray](./render_mode/ray_log.png)
## Visualize BVH
In Render mode, simply check the box for "BVH", and you will be able to see the BVH you generated in task 3 when you start rendering. You can click on the horizontal bar to see each level of your BVH.
![ray](render_mode/bvh.png)
![ray](./render_mode/bvh.png)
## Materials and Other Object Options
You can change the material and other properties of your mesh by selecting the object and choosing "Edit Pose", "Edit Mesh", or "Edit Material". For example, you can make a colored cow via "Edit Material" -> "Diffuse Light", and pick a color that you like.
![material](render_mode/material.png)
![material](./render_mode/material.png)
......@@ -19,7 +19,7 @@ If you want to branch off at a joint, simply click on the joint to branch off of
To view a rigged example, open `media/human.dae` and select the object in the Rig tab to see its joints.
Once you've implemented forward kinematics, the skeleton should be set up like so:
![rigged-human](rigging_mode/guide-rigging-human.png)
![rigged-human](./rigging_mode/guide-rigging-human.png)
......@@ -29,14 +29,14 @@ Each joint has an associated `Radius` which controls the part of the mesh influ
<video src="rigging_mode/guide-rigging-2.mov" controls preload muted loop style="max-width: 100%; margin: 0 auto;"></video>
Note that rigging only uses extents of the bone for skeleton setup, joint pose does not influence the skeleton. Once rigging is done, the object can be posed by changing joint rotations in the [animate](animate) mode.
Note that rigging uses only the extents of the bones for skeleton setup; joint pose does not influence the skeleton. Once rigging is done, the object can be posed by changing joint rotations in the [animate](./animate) mode.
## Inverse Kinematics
Instead of computing the positions of the bones from the joint poses (forward kinematics), inverse kinematics computes the joint poses from target positions.
To associate a target position with a joint, select `Add IK` and edit the target position. Multiple target positions can be associated with the same joint, but targets need to be explicitly enabled using the checkbox.
In the [animate](animate) mode, once inverse kinematics is implemented, joint rotation(pose) is updated based on the enabled IK handles.
In the [animate](./animate) mode, once inverse kinematics is implemented, joint rotation (pose) is updated based on the enabled IK handles.
<video src="rigging_mode/guide-ik.mp4" controls preload muted loop style="max-width: 100%; margin: 0 auto;"></video>
......
......@@ -11,7 +11,7 @@ The simulation view provides a way to create and manage particle emitters.
To add an emitter, open the dropdown menu, adjust desired parameters, and press `Add`.
![add emitter](simulate_mode/add_emitter.png)
![add emitter](./simulate_mode/add_emitter.png)
- Color: color with which to render the particles.
- Angle: angle of cone within which particles are generated (pointing in the emitter object's direction).
......@@ -25,8 +25,8 @@ Once an enabled emitter is added to the scene (and animation task 4: particle si
For example, the `particles.dae` test scene:
<video src="simulate_mode/guide-simulate-1.mp4" controls preload muted loop style="max-width: 100%; margin: 0 auto;"></video>
<video src="./simulate_mode/guide-simulate-1.mp4" controls preload muted loop style="max-width: 100%; margin: 0 auto;"></video>
Finally, note that you can render particles just like any other scene object. Rendering `particles.dae` with depth of field:
![particles render](simulate_mode/render.png)
![particles render](./simulate_mode/render.png)
[[Home]](index) [[Mesh Edit]](meshedit/overview) [[Path Tracer]](pathtracer/overview) [[Animation]](animation/overview)
[[Home]](index) [[Mesh Edit]](./meshedit/overview) [[Path Tracer]](./pathtracer/overview) [[Animation]](./animation/overview)
---
......@@ -11,14 +11,14 @@ constitutes the majority of the coursework for CS403 (Computer Graphics) at Shan
These pages describe how to set up and use Scotty3D. Start here!
- [Git Setup](git): create a private git mirror that can pull changes from Scotty3D.
- [Building Scotty3D](building): build and run Scotty3D on various platforms.
- [User Guide](guide/guide): learn the intended functionality for end users.
- [Git Setup](./git): create a private git mirror that can pull changes from Scotty3D.
- [Building Scotty3D](./building): build and run Scotty3D on various platforms.
- [User Guide](./guide/guide): learn the intended functionality for end users.
The developer manual describes what you must implement to complete Scotty3D. It is organized under the three main components of the software:
- [MeshEdit](meshedit/overview)
- [PathTracer](pathtracer/overview)
- [Animation](animation/overview)
- [MeshEdit](./meshedit/overview)
- [PathTracer](./pathtracer/overview)
- [Animation](./animation/overview)
## Project Philosophy
......@@ -28,7 +28,7 @@ Scotty3D, which is a modern package for 3D modeling, rendering, and animation.
In terms of basic structure, this package doesn't look much different from
"real" 3D tools like Maya, Blender, modo, or Houdini. Your overarching goal is
to use the developer manual to implement a package that
works as described in the [User Guide](guide/guide), much as you would at a real
works as described in the [User Guide](./guide/guide), much as you would at a real
software company (more details below).
Note that the User Guide is **not** an Assignment Writeup. The User Guide
......
......@@ -13,7 +13,7 @@ The methods that update the connectivity are `HalfedgeMesh::bevel_vertex`, `half
`HalfedgeMesh::extrude_vertex` will update both connectivity and geometry, as it should first perform a flat bevel on the vertex, and then insert a vertex into the new face. TODO: not used in stanford
The methods for updating connectivity can be implemented following the general strategy outlined in [edge flip tutorial](edge_flip). **Note that the methods that update geometry will be called repeatedly for the same bevel, in order to adjust positions according to user mouse input. See the gif in the [User Guide](../guide/model).**
The methods for updating connectivity can be implemented following the general strategy outlined in the [edge flip tutorial](./edge_flip). **Note that the methods that update geometry will be called repeatedly for the same bevel, in order to adjust positions according to user mouse input. See the gif in the [User Guide](../guide/model).**
To update the _geometry_ of a beveled element, you are provided with the following data:
......
......@@ -7,7 +7,7 @@
For an in-practice example, see the [User Guide](../guide/model).
The only difference between Catmull-Clark and [linear](linear) subdivision is the choice of positions for new vertices. Whereas linear subdivision simply takes a uniform average of the old vertex positions, Catmull-Clark uses a very carefully-designed _weighted_ average to ensure that the surface converges to a nice, round surface as the number of subdivision steps increases. The original scheme is described in the paper _"Recursively generated B-spline surfaces on arbitrary topological meshes"_ by (Pixar co-founder) Ed Catmull and James Clark. Since then, the scheme has been thoroughly discussed, extended, and analyzed; more modern descriptions of the algorithm may be easier to read, including those from the [Wikipedia](https://en.wikipedia.org/wiki/Catmull-Clark_subdivision_surface) and [this webpage](http://www.rorydriscoll.com/2008/08/01/catmull-clark-subdivision-the-basics/). In short, the new vertex positions can be calculated by:
The only difference between Catmull-Clark and [linear](./linear) subdivision is the choice of positions for new vertices. Whereas linear subdivision simply takes a uniform average of the old vertex positions, Catmull-Clark uses a very carefully designed _weighted_ average to ensure that the surface converges to a nice, round surface as the number of subdivision steps increases. The original scheme is described in the paper _"Recursively generated B-spline surfaces on arbitrary topological meshes"_ by (Pixar co-founder) Ed Catmull and James Clark. Since then, the scheme has been thoroughly discussed, extended, and analyzed; more modern descriptions of the algorithm may be easier to read, including those from [Wikipedia](https://en.wikipedia.org/wiki/Catmull-Clark_subdivision_surface) and [this webpage](http://www.rorydriscoll.com/2008/08/01/catmull-clark-subdivision-the-basics/). In short, the new vertex positions can be calculated by:
1. setting the new vertex position at each face f to the average of all its original vertices (exactly as in linear subdivision),
2. setting the new vertex position at each edge e to the average of the new face positions (from step 1) and the original endpoint positions, and
......
......@@ -7,8 +7,8 @@
In addition to local operations on mesh connectivity, Scotty3D provides several global remeshing operations (as outlined in the [User Guide](../guide/model)). Two different mechanisms are used to implement global operations:
* _Repeated application of local operations._ Some mesh operations are most easily expressed by applying local operations (edge flips, etc.) to a sequence of mesh elements until the target output is achieved. A good example is [mesh simplification](simplify), which is a greedy algorithm that collapses one edge at a time.
* _Global replacement of the mesh._ Other mesh operations are better expressed by temporarily storing new mesh elements in a simpler mesh data structure (e.g., an indexed list of faces) and completely re-building the halfedge data structure from this data. A good example is [Catmull-Clark subdivision](catmull), where every polygon must be simultaneously split into quadrilaterals.
* _Repeated application of local operations._ Some mesh operations are most easily expressed by applying local operations (edge flips, etc.) to a sequence of mesh elements until the target output is achieved. A good example is [mesh simplification](./simplify), which is a greedy algorithm that collapses one edge at a time.
* _Global replacement of the mesh._ Other mesh operations are better expressed by temporarily storing new mesh elements in a simpler mesh data structure (e.g., an indexed list of faces) and completely re-building the halfedge data structure from this data. A good example is [Catmull-Clark subdivision](./catmull), where every polygon must be simultaneously split into quadrilaterals.
Note that in general there are no inter-dependencies among global remeshing operations (except that some of them require a triangle mesh as input, which can be achieved via the method `Halfedge_Mesh::triangulate`).
......@@ -18,7 +18,7 @@ In image processing, we often have a low resolution image that we want to displa
In geometry processing, one encounters the same situation: we may have a low-resolution polygon mesh that we wish to upsample for display, simulation, etc. Simply splitting each polygon into smaller pieces doesn't help, because it does nothing to alleviate blocky silhouettes or chunky features. Instead, we need an upsampling scheme that nicely interpolates or approximates the original data. Polygon meshes are quite a bit trickier than images, however, since our sample points are generally at _irregular_ locations, i.e., they are no longer found at regular intervals on a grid.
Three subdivision schemes are supported by Scotty3D: [Linear](linear), [Catmull-Clark](catmull), and [Loop](loop). The first two can be used on any polygon mesh without boundary, and should be implemented via the global replacement strategy described above. Loop subdivision can be implemented using repeated application of local operations. For further details, see the linked pages.
Three subdivision schemes are supported by Scotty3D: [Linear](./linear), [Catmull-Clark](./catmull), and [Loop](./loop). The first two can be used on any polygon mesh without boundary, and should be implemented via the global replacement strategy described above. Loop subdivision can be implemented using repeated application of local operations. For further details, see the linked pages.
## Performance
......
......@@ -26,48 +26,48 @@ See the [User Guide](../guide/model) for demonstrations of each local operation.
* `Halfedge_Mesh::flip_edge` - should return the edge that was flipped
![](local/flip_edge.svg)
![](./local/flip_edge.svg)
* `Halfedge_Mesh::split_edge` - should return the inserted vertex
![](local/split_edge.svg)
![](./local/split_edge.svg)
* `Halfedge_Mesh::bisect_edge` - should bisect the edge and return the inserted vertex
![](local/bisect_edge.svg)
![](./local/bisect_edge.svg)
* `Halfedge_Mesh::collapse_edge` - should return the new vertex, corresponding to the collapsed edge
![](local/collapse_edge.svg)
![](./local/collapse_edge.svg)
* `Halfedge_Mesh::collapse_face` - should return the new vertex, corresponding to the collapsed face
![](local/collapse_face.svg)
![](./local/collapse_face.svg)
* `Halfedge_Mesh::inset_vertex` - should return the newly inserted vertex
![](local/inset_vertex.svg)
![](./local/inset_vertex.svg)
* `Halfedge_Mesh::erase_vertex` - should return the new face, corresponding to the faces originally containing the vertex
![](local/erase_vertex.svg)
![](./local/erase_vertex.svg)
* `Halfedge_Mesh::erase_edge` - should return the new face, corresponding to the faces originally containing the edge
![](local/erase_edge.svg)
![](./local/erase_edge.svg)
* `Halfedge_Mesh::bevel_vertex` - should return the new face, corresponding to the beveled vertex
![](local/bevel_vertex.svg)
![](./local/bevel_vertex.svg)
* `Halfedge_Mesh::bevel_edge` - should return the new face, corresponding to the beveled edge
![](local/bevel_edge.svg)
![](./local/bevel_edge.svg)
* `Halfedge_Mesh::bevel_face` / `Halfedge_Mesh::extrude_face` / `Halfedge_Mesh::inset_face` - should return the new, inset face
![](local/bevel_face.svg)
![](./local/bevel_face.svg)
* `Halfedge_Mesh::extrude_vertex` - should return the new vertex
![](local/extrude_vertex.svg)
\ No newline at end of file
![](./local/extrude_vertex.svg)
\ No newline at end of file
......@@ -14,11 +14,11 @@ Loop subdivision (named after [Charles Loop](http://charlesloop.com/)) is a stan
The 4-1 subdivision looks like this:
![4-1 Subdivision](global/loop/loop_41.png)
![4-1 Subdivision](./global/loop/loop_41.png)
And the following picture illustrates the weighted average for the newly created vertex on the edge (left) and for the old vertex (right):
![Loop subdivision weights](global/loop/loop_weights.png)
![Loop subdivision weights](./global/loop/loop_weights.png)
In words, the new position of an old vertex is (1 - n*u) times the old position, plus u times the sum of the positions of all of its neighbors, where n is the number of neighboring vertices and u is the weight shown above. The new position for a newly created vertex v that splits an edge AB and is flanked by opposite vertices C and D across the two faces connected to AB in the original mesh will be 3/8 * (A + B) + 1/8 * (C + D). If we repeatedly apply these two steps, we will converge to a fairly smooth approximation of our original mesh.
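The two update rules above can be sketched directly in code. The snippet below is a minimal sketch using illustrative types and names (they are not Scotty3D's actual API), assuming the weight u from the figure is 3/16 for degree-3 vertices and 3/(8n) otherwise:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative types/names only -- not Scotty3D's actual API.
struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

// Weight u from the figure: 3/16 when the vertex degree n is 3, else 3/(8n).
static double loop_u(std::size_t n) { return n == 3 ? 3.0 / 16.0 : 3.0 / (8.0 * n); }

// New position of an *old* vertex: (1 - n*u) * old + u * (sum of neighbors).
static Vec3 smooth_old_vertex(Vec3 old_pos, const std::vector<Vec3>& neighbors) {
    std::size_t n = neighbors.size();
    double u = loop_u(n);
    Vec3 sum{0.0, 0.0, 0.0};
    for (Vec3 p : neighbors) sum = add(sum, p);
    return add(mul(1.0 - double(n) * u, old_pos), mul(u, sum));
}

// New position of the vertex created on edge AB, flanked by C and D:
// 3/8 * (A + B) + 1/8 * (C + D).
static Vec3 new_edge_vertex(Vec3 A, Vec3 B, Vec3 C, Vec3 D) {
    return add(mul(3.0 / 8.0, add(A, B)), mul(1.0 / 8.0, add(C, D)));
}
```

Note that a vertex whose neighbors already average to its own position is left unchanged by `smooth_old_vertex`, matching the intuition that flat regions stay flat.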
......@@ -29,7 +29,7 @@ We will implement Loop subdivision as the `Halfedge_Mesh::loop_subdivide()` meth
The following pictures (courtesy Denis Zorin) illustrate this idea:
![Subdivision via flipping](global/loop/loop_flipping.png)
![Subdivision via flipping](./global/loop/loop_flipping.png)
Notice that only blue (and not black) edges are flipped in this procedure; as described above, edges in the split mesh should be flipped if and only if they touch both an original vertex _and_ a new vertex (i.e., a midpoint of an original edge).
......
......@@ -11,16 +11,16 @@ The `media/` subdirectory of the project contains a variety of meshes and scenes
The following sections contain guidelines for implementing the functionality of MeshEdit:
- [Halfedge Mesh](halfedge)
- [Local Mesh Operations](local)
- [Tutorial: Edge Flip](edge_flip)
- [Beveling](bevel)
- [Global Mesh Operations](global)
- [Triangulation](triangulate)
- [Linear Subdivision](linear)
- [Catmull-Clark Subdivision](catmull)
- [Loop Subdivision](loop)
- [Isotropic Remeshing](remesh)
- [Simplification](simplify)
As always, be mindful of the [project philosophy](..).
- [Halfedge Mesh](./halfedge)
- [Local Mesh Operations](./local)
- [Tutorial: Edge Flip](./edge_flip)
- [Beveling](./bevel)
- [Global Mesh Operations](./global)
- [Triangulation](./triangulate)
- [Linear Subdivision](./linear)
- [Catmull-Clark Subdivision](./catmull)
- [Loop Subdivision](./loop)
- [Isotropic Remeshing](./remesh)
- [Simplification](./simplify)
As always, be mindful of the [project philosophy](../index).
......@@ -5,7 +5,7 @@
# Simplification
![Surface simplification via quadric error metric](global/simplify/quad_simplify.png)
![Surface simplification via quadric error metric](./global/simplify/quad_simplify.png)
For an in-practice example, see the [User Guide](../guide/model).
......@@ -165,4 +165,4 @@ Steps 4 and 7 are highlighted because it is easy to get these steps wrong. For i
A working implementation should look something like the examples below. You may find it easiest to implement this algorithm in stages. For instance, _first_ get the edge collapses working, using just the edge midpoint rather than the optimal point, _then_ worry about solving for the point that minimizes quadric error.
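The "optimal point" stage amounts to solving a small linear system: writing the quadric as Q(x) = x^T A x + 2 b^T x + c, the minimizer satisfies A x = -b. Below is a minimal sketch (the names are illustrative, not the assignment's required interface) that solves the 3x3 system with Cramer's rule and falls back to the edge midpoint when A is near-singular, matching the staged approach described above:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;
using V3   = std::array<double, 3>;

static double det3(const Mat3& m) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// Minimizer of Q(x) = x^T A x + 2 b^T x + c solves A x = -b.
// Cramer's rule is adequate for a 3x3 system; when A is (near-)singular,
// fall back to the provided edge midpoint.
static V3 optimal_point(const Mat3& A, const V3& b, const V3& midpoint) {
    double d = det3(A);
    if (std::abs(d) < 1e-12) return midpoint; // degenerate quadric
    V3 rhs = {-b[0], -b[1], -b[2]};
    V3 x{};
    for (int col = 0; col < 3; ++col) {
        Mat3 m = A;                                  // copy A, then replace
        for (int r = 0; r < 3; ++r) m[r][col] = rhs[r]; // column `col` with -b
        x[col] = det3(m) / d;
    }
    return x;
}
```

Getting the midpoint fallback right first, as suggested above, makes it easy to verify the connectivity changes before debugging the linear solve.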
<!--![Quadric error simplification examples](quad_example.png)-->
<center><img src="global/simplify/quad_example.png" style="height:480px"></center>
<center><img src="./global/simplify/quad_example.png" style="height:480px"></center>
......@@ -50,7 +50,7 @@ You should log only a small fraction of the generated rays, or else the res
Finally, you can visualize the logged rays by checking the box for Logged Rays under Visualize and then **starting the render** (Open Render Window -> Start Render). After running the path tracer, rays will be shown as lines in the visualizer. Be sure to wait for rendering to complete so that you see all rays while visualizing.
![logged_rays](images/ray_log.png)
![logged_rays](./images/ray_log.png)
---
......
......@@ -18,7 +18,7 @@ The final task of this assignment will be to implement a new type of light sourc
The intensity of incoming light from each direction is defined by a texture map parameterized by phi and theta, as shown below.
![envmap_figure](figures/envmap_figure.jpg)
![envmap_figure](./figures/envmap_figure.jpg)
In this task you will implement `Env_Map::sample`, `Env_Map::pdf`, and `Env_Map::evaluate` in `student/env_light.cpp`. You'll start with uniform sampling to get things working, and then move on to a more advanced implementation that uses **importance sampling** to significantly reduce variance in rendered images.
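As a concrete starting point for the uniform stage, a standard approach maps two uniform random numbers to a direction distributed uniformly over the sphere, with a constant pdf of 1/(4*pi) per unit solid angle. This is a generic sketch and does not use Scotty3D's actual `Env_Map` types:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

struct Dir { double x, y, z; };

// Map (u1, u2) in [0,1)^2 to a uniformly distributed unit direction:
// cos(theta) = 1 - 2*u1 is uniform in [-1, 1], phi = 2*pi*u2.
Dir uniform_sphere_sample(double u1, double u2) {
    double z = 1.0 - 2.0 * u1;
    double r = std::sqrt(std::max(0.0, 1.0 - z * z));
    double phi = 2.0 * kPi * u2;
    return {r * std::cos(phi), r * std::sin(phi), z};
}

// Uniform sphere pdf: constant 1 / (4*pi) per unit solid angle.
double uniform_sphere_pdf() { return 1.0 / (4.0 * kPi); }
```

Once this produces correct (if noisy) renders, the importance-sampled version replaces the constant pdf with one proportional to the luminance of each texel.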
......@@ -36,7 +36,7 @@ Since high dynamic range environment maps can be large files, we have not includ
To use a particular environment map with your scene, select `layout` -> `new light` -> `environment map` -> `add`, and select your file. For more creative environment maps, check out [Poly Haven](https://polyhaven.com/).
![envmap_gui](images/envmap_gui.png)
![envmap_gui](./images/envmap_gui.png)
## Step 2: Importance sampling the environment map
......@@ -94,6 +94,6 @@ Altogether, the final Jacobian is (wh / 2pi^2 sin(\theta)).
## Reference Results
![ennis](images/ennis.png)
![uffiz](images/uffiz.png)
![grace](images/grace.png)
![ennis](./images/ennis.png)
![uffiz](./images/uffiz.png)
![grace](./images/grace.png)
......@@ -31,7 +31,7 @@ One important detail of the ray structure is that `dist_bounds` is a mutable fie
You should now be able to render all of the example scenes colored based on surface normals. Note that scenes with high geometric complexity will be extremely slow until you implement task 3. Here are `dodecahedron.dae`, `cbox.dae`, and `cow.dae`:
![dodecahedron](images/dodecahedron_normals.png)
![cbox](images/cbox_normals.png)
![cow](images/cow_normals.png)
![dodecahedron](./images/dodecahedron_normals.png)
![cbox](./images/cbox_normals.png)
![cow](./images/cow_normals.png)
......@@ -47,13 +47,13 @@ Note: separately sampling direct lighting might seem silly, as we could have jus
After correctly implementing task 4, your renderer should be able to make a beautifully lit picture of the Cornell Box with Lambertian spheres (`cbox_lambertian.dae`). Below is a render using 1024 samples per pixel (spp):
![cbox_lambertian](images/cbox_lambertian.png)
![cbox_lambertian](./images/cbox_lambertian.png)
Note the time-quality tradeoff here. This image was rendered with a sample rate of 1024 camera rays per pixel and a max ray depth of 8. This will produce a relatively high-quality result, but will take quite some time to render. Rendering a fully converged image may take even longer, so start testing your path tracer early!
Thankfully, runtime will scale (roughly) linearly with the number of samples. Below are the results and runtimes of rendering the Lambertian Cornell box at 720p on a Ryzen 5950x (max ray depth 8):
![cbox_lambertian_timing](images/cbox_lambertian_timing.png)
![cbox_lambertian_timing](./images/cbox_lambertian_timing.png)
---
......