* <img src="task2_media/0075.png" style="height:24px"> is the axis of rotation
* <img src="task2_media/0076.png" style="height:24px"> is the vector from the base of joint <img src="task2_media/0077.png" style="height:24px"> to the end point of the target joint

Note that <img src="task2_media/0076.png" style="height:24px"> and <img src="task2_media/0054.png" style="height:24px"> (from above) are not the same!
For a more in-depth derivation of Jacobian transpose (and a look at other inverse kinematics algorithms), check out [this presentation](https://web.archive.org/web/20190501035728/https://autorob.org/lectures/autorob_11_ik_jacobian.pdf), pages 45-56 in particular.
Now, all of this will work for updating the angle along a single axis, but we have 3 axes to deal with. Luckily, extending it to 3 dimensions isn't very difficult: we just need to update the angle along each axis independently.
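As a rough illustration, here is a minimal sketch of one Jacobian-transpose step for a single joint, updating the angle about each of its three axes independently. All names here (`axes`, `end_effector_pos`, `joint_base_pos`, `target_pos`, `pose`, `timestep`) are illustrative placeholders rather than the actual Scotty3D interface, and the sign of the update depends on your angle conventions.

```cpp
// Hypothetical sketch of one Jacobian-transpose IK step for one joint.
// axes[i]: world-space rotation axis i of this joint
// p:       vector from the joint's base to the end effector (world space)
// err:     vector from the end effector to the IK target
Vec3 p   = end_effector_pos - joint_base_pos;
Vec3 err = target_pos - end_effector_pos;
float timestep = 0.01f; // small step size for the gradient update

for (int i = 0; i < 3; i++) {
    Vec3 J_i = cross(axes[i], p);       // column of the Jacobian for axis i
    float dtheta = dot(J_i, err);       // Jacobian-transpose update for axis i
    joint->pose[i] += timestep * dtheta; // sign depends on your convention
}
```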
There are three possible types of bevels:
- Vertex Bevel: The selected vertex _v_ is replaced by a face _f_ whose vertices are connected to the edges originally incident on _v_. The new face is inset (i.e., shrunken or expanded) by a user-controllable amount.
- Vertex Extrude: The selected vertex _v_ is beveled by a flat amount (we use 1/3 the length of the edge from the original vertex _v_ to an adjacent vertex endpoint as the tangential offset), a new vertex _v'_ is inserted into the resulting face, and _v'_ is offset in its normal direction by a user-controllable amount.
- Edge Bevel: The selected edge _e_ is replaced by a face _f_ whose
...
...
- Face Bevel: The selected face _f_ is replaced by a new face _g_, as well as a ring of faces around _g_, such that the vertices of _g_ connect to the original vertices of _f_. The new face is inset and offset by some user-controllable amount.
- Face Extrude: The selected face _f_ is replaced by a new face _g_ as in Face Bevel, and _g_ is offset only in the normal direction by some user-controllable amount.
- Face Inset: The selected face _f_ is replaced by a new face _g_ as in Face Bevel, but its vertices are only offset in the tangent direction by a constant factor (we use 1/3 in the example).
- Edge Bisect: The selected edge _e_ is split at its midpoint, and the new vertex _v_ at the split is returned. Note that the new vertex _v_ is not connected to the two opposite vertices as in Edge Split.
### Global Mesh Processing
A number of commands can be used to create a more global change in the mesh (e.g., subdivision or simplification).
Here we provide some additional detail about the bevel operations and their implementation in Scotty3D. Each bevel operation has two components:
1. a method that modifies the _connectivity_ of the mesh, creating new beveled elements, and
2. a method that updates the _geometry_ of the mesh, insetting and offsetting the new vertices according to user input.
The methods that update the connectivity are `HalfedgeMesh::bevel_vertex`, `HalfedgeMesh::bevel_edge`, and `HalfedgeMesh::bevel_face`. The methods that update geometry are `HalfedgeMesh::bevel_vertex_positions`, `HalfedgeMesh::extrude_vertex_position`, `HalfedgeMesh::bevel_edge_positions`, and `HalfedgeMesh::bevel_face_positions`.
`HalfedgeMesh::extrude_vertex` will update both connectivity and geometry, as it should first perform a flat bevel on the vertex, and then insert a vertex into the new face.
The methods for updating connectivity can be implemented following the general strategy outlined in [edge flip tutorial](edge_flip). **Note that the methods that update geometry will be called repeatedly for the same bevel, in order to adjust positions according to user mouse input. See the gif in the [User Guide](/Scotty3D/guide/model_mode).**
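To make the division of labor concrete, here is a rough sketch of what a geometry-update method might look like, using face bevel as the example. The signature shown (original `start_positions`, the new `face`, and `tangent_offset`/`normal_offset` driven by user input) is an assumption based on the description above, and `face->normal()` stands in for whatever normal helper the codebase actually provides.

```cpp
// A sketch only: inset each new vertex toward the face centroid and offset it
// along the face normal. Signatures and helpers are assumptions, not the
// verbatim Scotty3D interface.
void HalfedgeMesh::bevel_face_positions(const std::vector<Vec3>& start_positions,
                                        FaceRef face, float tangent_offset,
                                        float normal_offset) {
    // Centroid of the original (pre-bevel) face.
    Vec3 centroid(0.0f, 0.0f, 0.0f);
    for (const Vec3& p : start_positions) centroid += p;
    centroid /= (float)start_positions.size();

    Vec3 n = face->normal(); // assumed normal helper

    // Walk the new face's vertex ring; start_positions[i] is assumed to
    // correspond to the i-th halfedge's vertex by construction.
    HalfedgeRef h = face->halfedge();
    for (size_t i = 0; i < start_positions.size(); i++) {
        Vec3 toward_center = (centroid - start_positions[i]).unit();
        h->vertex()->pos = start_positions[i]
                         + tangent_offset * toward_center
                         + normal_offset * n;
        h = h->next();
    }
}
```

Since this method is called repeatedly as the user drags, it should always recompute positions from `start_positions` rather than accumulate offsets onto the current positions.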
Take a look at `Pathtracer::trace_pixel` in `student/pathtracer.cpp`. The job of this function is to compute the amount of energy arriving at this pixel of the image. Conveniently, we've given you a function `Pathtracer::trace(r)` that provides a measurement of incoming scene radiance along the direction given by ray `r`, split into emissive (direct) and reflected (indirect) components. See `lib/ray.h` for the interface of `Ray`.
Given the width and height of the screen, and a point's _screen space_ coordinates (`size_t x, size_t y`), compute the point's _normalized_ ([0-1] x [0-1]) screen space coordinates in `Pathtracer::trace_pixel`. Pass these coordinates to the camera via `Camera::generate_ray` in `camera.cpp` (note that `Camera::generate_ray` accepts a `Vec2` object as its input argument).
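For instance, the normalization step might look something like the sketch below; `out_w` and `out_h` are placeholder names for the output image dimensions, and sampling at the pixel center is just the non-supersampled starting point.

```cpp
// Illustrative sketch inside Pathtracer::trace_pixel; out_w/out_h are
// placeholder names for the output image dimensions.
float u = ((float)x + 0.5f) / (float)out_w; // normalized [0,1] screen x
float v = ((float)y + 0.5f) / (float)out_h; // normalized [0,1] screen y
Ray r = camera.generate_ray(Vec2(u, v));    // hand off to the camera
```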
## Step 2: `Camera::generate_ray`
Implement `Camera::generate_ray`. This function should return a ray **in world space** that reaches the given sensor sample point, i.e. the input argument. We recommend that you compute this ray in camera space (where the camera pinhole is at the origin, the camera is looking down the -Z axis, and +Y is at the top of the screen). In `util/camera.h`, the `Camera` class stores `vert_fov` and `aspect_ratio`, indicating the vertical field of view of the camera (in degrees, not radians) as well as the aspect ratio. Note that the camera maintains a camera-space-to-world-space transform matrix `iview` that you will need to use in order to get the new ray back into **world space**. Since `iview` is a transform matrix, it contains translation, rotation, and scale factors; be careful how you use it on specific objects, and take a look at `lib/ray.h` and `lib/mat4.h` to see what functions are available for the `Ray` and `Mat4` objects.
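Under those assumptions, a sketch of the construction might look like the following. The fields `vert_fov`, `aspect_ratio`, and `iview` come from the text above; the use of `Ray::transform` and the exact trigonometry are our assumptions about one reasonable implementation, not the definitive one.

```cpp
#include <cmath>

// A minimal sketch, not the official solution. Assumes Ray::transform(Mat4)
// applies a matrix to the ray's origin and direction (check lib/ray.h).
Ray Camera::generate_ray(Vec2 screen_coord) const {
    // Map normalized [0,1]^2 sensor coordinates to [-1,1]^2, centered at 0.
    float sx = 2.0f * screen_coord.x - 1.0f;
    float sy = 2.0f * screen_coord.y - 1.0f;

    // Half-extents of the sensor plane placed at distance 1 along -Z.
    float half_h = std::tan(0.5f * vert_fov * (3.14159265f / 180.0f));
    float half_w = aspect_ratio * half_h;

    // Camera-space ray from the pinhole (origin) through the sample point.
    Ray r(Vec3(0.0f, 0.0f, 0.0f),
          Vec3(sx * half_w, sy * half_h, -1.0f).unit());

    // iview takes camera space to world space; transform the whole ray.
    r.transform(iview);
    return r;
}
```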
Once you have implemented `Pathtracer::trace_pixel`, `Rect::Uniform::sample`, and `Camera::generate_ray`, you should now have a camera that can shoot rays into the scene! See the **Raytracing Visualization** below to confirm this.
Your implementation of `Pathtracer::trace_pixel` must support super-sampling.
To choose a sample within the pixel, you should implement `Rect::Uniform::sample` (see `src/student/samplers.cpp`), such that it provides (random) uniformly distributed 2D points within the rectangular region specified by the origin and the member `Rect::size`. You may then create a `Rect` sampler with a one-by-one region and call `sample()` to obtain randomly chosen offsets within the pixel.
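As a sketch, the sampler body can be as small as two random draws scaled by the rectangle's size. `RNG::unit()` is our assumption for a uniform [0,1) helper; substitute whatever random utility the codebase provides, and note that the real signature in `src/student/samplers.cpp` may differ (e.g., it may also report a pdf).

```cpp
// Minimal sketch: uniform sampling of a size.x-by-size.y rectangle anchored
// at the origin. RNG::unit() is an assumed [0,1) uniform helper.
Vec2 Rect::Uniform::sample() const {
    return Vec2(RNG::unit() * size.x, RNG::unit() * size.y);
}
```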