Commit d9f1f3aa authored by miyehn's avatar miyehn Committed by cmu462

Add clarifications, rename variables, etc. based on last semester's documentation piazza post (#8)

* wip of adding clarifications

* more clarification changes

* added windows clion build instructions

* files overview & a few more ta advice
parent 8c1a146f
......@@ -42,6 +42,19 @@ If you plan on using Visual Studio to debug your program, you can change `drawsv
If you feel that your program is running slowly, you can also change the build mode to `Release` from `Debug` by clicking the Solution Configurations drop down menu on the top menu bar. Note that you will have to set `Command Arguments` again if you change the build mode.
#### Windows build instructions using CLion
(tested on CLion 2018.3)
Open CLion, then do `File -> Import Project..`.
In the pop-up window, find and select the project folder `...\DrawSVG`, click OK, click Open Existing Project, then select New Window.
Make sure the drop-down menu at the top right has drawsvg selected (it should say `drawsvg | Debug`). Then open the drop-down menu again and go to Edit Configurations..
Fill in Program arguments, e.g. `./svg/basic`, then click Apply and close the popup.
Now you should be able to click the green run button at the top right to run the project.
### Using the Mini-SVG Viewer App
......@@ -91,6 +104,13 @@ The assignment is divided into nine major tasks, which are described below in th
Before you start, here is some basic information on the structure of the starter code.
All the source code files are contained in the `src` directory. You're welcome to browse through and/or edit any file, but the following ones and their headers are probably the most relevant:
- `hardware/hardware_renderer` (task 1)
- `software_renderer` (most tasks)
- `viewport` (task 5)
- `texture` (tasks 6, 7)
Most of your work will be constrained to implementing part of the class `SoftwareRendererImp` in `software_renderer.cpp`. The most important method is `draw_svg` which (not surprisingly) accepts an SVG object to draw. An SVG file defines its canvas (which defines a 2D coordinate space), and specifies a list of shape elements (such as points, lines, triangles, and images) that should be drawn on that canvas. Each shape element has a number of style parameters (e.g., color) as well as a modeling transform used to determine the element's position on the canvas. You can find the definition of the SVG class (and all the associated `SVGElements`) in `svg.h`. Notice that one type of `SVGElement` is a group that itself contains child elements. Therefore, you should think of an SVG file as defining a tree of shape elements. (Interior nodes of the tree are groups, and leaves are shapes.)
Another important method on the `SoftwareRendererImp` class is `set_render_target()`, which provides your code a buffer corresponding to the output image (it also provides width and height of the buffer in pixels, which are stored locally as `target_w` and `target_h`). This buffer is often called the "render target" in many applications, since it is the "target" of rendering commands. **We use the term pixel here on purpose because the values in this buffer are the values that will be displayed on screen.** Pixel values are stored in row-major format, and each pixel is an 8-bit RGBA value (32 bits in total). Your implementation needs to fill in the contents of this buffer when it is asked to draw an SVG file.
......@@ -142,7 +162,7 @@ At this time the starter code does not correctly handle transparent points. We'l
#### Task 1: Hardware Renderer
In this task, you will finish implementing parts of the hardware renderer by using the knowledge from the OpenGL tutorial session. In particular, you will be responsible for implementing `rasterize_point()`, `rasterize_line()`, and `rasterize_triangle()` in `hardware_renderer.cpp`. All other OpenGL context has been set up for you outside of these methods, so you only need to use `glBegin()`, `glEnd()`, and appropriate function calls in between those two functions.
In this task, you will finish implementing parts of the hardware renderer by using the knowledge from the OpenGL tutorial session. In particular, you will be responsible for implementing `rasterize_point()`, `rasterize_line()`, and `rasterize_triangle()` in `hardware/hardware_renderer.cpp`. All other OpenGL context has been set up for you outside of these methods, so you only need to use `glBegin()`, `glEnd()`, and appropriate function calls in between those two functions.
#### Task 2 : Warm Up: Drawing Lines
......@@ -169,7 +189,7 @@ In this task, you will implement `rasterize_triangle()` in `software_renderer.cp
Your implementation should:
- Sample triangle coverage using the methods discussed in Lecture 4. While in Task 2 you were given choice in how you defined the outputs of line drawing, there is an exact solution to the problem of sampling triangle coverage. The position of screen sample points--at half-integer coordinates in screen space--was described above.
- Sample triangle coverage using the methods discussed in Lecture _Drawing a Triangle_. While in Task 2 you were given choice in how you defined the outputs of line drawing, there is an exact solution to the problem of sampling triangle coverage. The position of screen sample points--at half-integer coordinates in screen space--was described above.
- To receive full credit in Task 3 your implementation should assume that a sample point on a triangle edge is covered by the triangle. Your implementation **DOES NOT** need to respect the triangle "edge rules" to avoid "double counting" as discussed in class. (but we encourage you to try!)
- Your implementation should use an algorithm that is more work efficient than simply testing all samples on screen. To receive full credit it should at least constrain coverage tests to samples that lie within a screen-space bounding box of the triangle. However, we encourage exploration of even more efficient implementations, such as ones that employ "early out" optimizations discussed in lecture.
- When a triangle covers a sample, you should write the triangle's color to the location corresponding to this sample in `render_target`.
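The bounding-box requirement above can be sketched as follows. This is one possible approach, not the required one: `edge` and `covered_samples` are illustrative names, samples sit at half-integer coordinates, only samples inside the triangle's screen-space bounding box are tested, and samples exactly on an edge count as covered.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Signed area term for edge ab and point p; its sign tells which side of
// the edge the point lies on.
float edge(float ax, float ay, float bx, float by, float px, float py) {
  return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Return the screen samples (at half-integer coordinates) covered by the
// triangle, testing only samples inside the triangle's bounding box.
std::vector<std::pair<int,int>> covered_samples(
    float x0, float y0, float x1, float y1, float x2, float y2) {
  int xmin = (int)std::floor(std::min({x0, x1, x2}));
  int xmax = (int)std::ceil (std::max({x0, x1, x2}));
  int ymin = (int)std::floor(std::min({y0, y1, y2}));
  int ymax = (int)std::ceil (std::max({y0, y1, y2}));
  // Orient the triangle consistently so all three edge tests use one sign.
  if (edge(x0, y0, x1, y1, x2, y2) < 0) {
    std::swap(x1, x2); std::swap(y1, y2);
  }
  std::vector<std::pair<int,int>> hits;
  for (int y = ymin; y < ymax; y++)
    for (int x = xmin; x < xmax; x++) {
      float sx = x + 0.5f, sy = y + 0.5f; // sample at half-integer coords
      if (edge(x0, y0, x1, y1, sx, sy) >= 0 &&   // >= 0: edge samples
          edge(x1, y1, x2, y2, sx, sy) >= 0 &&   // count as covered
          edge(x2, y2, x0, y0, sx, sy) >= 0)
        hits.emplace_back(x, y);
    }
  return hits;
}
```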
......@@ -180,15 +200,17 @@ When you are done, you should be able to draw `basic/test3.svg`, `basic/test4.sv
#### Task 4: Anti-Aliasing Using Supersampling
**This part of the assignment requires only knowledge of concepts from Lectures 1, 4, and 5.**
**This part of the assignment requires only knowledge of concepts from Lectures _Course Introduction_ and _Drawing a Triangle_.**
In this task, you will extend your rasterizer to anti-alias triangle edges via supersampling. In response to the user changing the screen sampling rate (the = and - keys), the application will call `set_sample_rate()` . The parameter `sample_rate` defines the sampling rate in each dimension, so a value of 2 would correspond to a sample density of 4 samples per pixel. In this case, the samples lying within the top-left pixel of the screen would be located at locations (0.25, 0.25), (0.75, 0.25), (0.25, 0.75), and (0.75, 0.75).
![Sample locations](misc/coord_4spp.png?raw=true)
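The sample positions above follow a simple closed form; a sketch (the helper name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// For sampling rate n, samples within a pixel are placed on an n x n grid;
// the k-th coordinate in each dimension (k = 0..n-1) is
// pixel + (2k + 1) / (2n), which reproduces the (0.25, 0.75) locations
// shown above for n = 2.
float sample_coord(int pixel, int k, int sample_rate) {
  return pixel + (2 * k + 1) / (2.0f * sample_rate);
}
```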
It's reasonable to think of supersampled rendering as rendering an image that is `sample_rate` times larger than the actual output image in each dimension, then resampling the larger rendered output down to the screen sampling rate after rendering is complete. To help you out, here is a sketch of an implementation. **Note: If you implemented your triangle rasterizer in terms of sampling coverage in screen-space coordinates (and not in terms of pixels), then the code changes to support supersampling should be fairly simple for triangles:**
It's reasonable to think of supersampled rendering as rendering an image that is `sample_rate` times larger than the actual output image in each dimension, then resampling the larger rendered output down to the screen sampling rate after rendering is complete. **Note: If you implemented your triangle rasterizer in terms of sampling coverage in screen-space coordinates (and not in terms of pixels), then the code changes to support supersampling should be fairly simple for triangles.**
To help you out, here is a sketch of an implementation:
- When rasterizing primitives such as triangles, rather than directly updating `render_target`, your rasterization should update the contents of a larger buffer (perhaps call it `supersample_target`) that holds the per-super-sample results. Yes, you will have to allocate/free this buffer yourself. Question: when is the right time to perform this allocation in the code?
- The image being rendered is stored in `render_target`, an array that stores each pixel's color components as a `uint8_t` in repeating `rgba` order. Refer to `rasterize_point` to see how it can be modified. When rasterizing primitives such as triangles, rather than directly updating `render_target`, your rasterization should update the contents of a larger buffer (perhaps call it `supersample_target`) that holds the per-super-sample results. Yes, you will have to allocate/free this buffer yourself. Question: when is the right time to perform this allocation in the code?
- After rendering is complete, your implementation must resample the supersampled results buffer to obtain sample values for the render target. This is often called "resolving" the supersample buffer into the render target. Please implement resampling using a simple unit-area box filter.
Note that the function `SoftwareRendererImp::resolve()` is called by `draw_svg()` after the SVG file has been drawn. Thus it's a very convenient place to perform resampling.
......@@ -206,9 +228,9 @@ Also observe that after enabling supersampled rendering, something might have go
##### Part 1: Modeling Transforms
**This part of the assignment assumes knowledge of concepts in Lecture 5.**
**This part of the assignment assumes knowledge of concepts in Lecture _Transformations_.**
In Lecture 3 and Lecture 4 we discussed how it is common (and often very useful) to describe objects and shapes in their own local coordinate spaces and then build up more complicated objects by positioning many individual components in a single coordinate space. In this task you will extend the renderer to properly interpret the hierarchy of modeling transforms expressed in SVG files.
In previous lectures we discussed how it is common (and often very useful) to describe objects and shapes in their own local coordinate spaces and then build up more complicated objects by positioning many individual components in a single coordinate space. In this task you will extend the renderer to properly interpret the hierarchy of modeling transforms expressed in SVG files.
Recall that an SVG object consists of a hierarchy of shape elements. Each element in an SVG is associated with a modeling transform (see `SVGElement.transform` in `svg.h`) that defines the relationship between the object's local coordinate space and the parent element's coordinate space. At present, the implementation of `draw_element()` ignores these modeling transforms, so the only SVG objects your renderer has been able to correctly draw were objects that contained only identity modeling transforms.
......@@ -222,15 +244,17 @@ When you are done, you should be able to draw `basic/test6.svg`.
Notice the staff reference solution supports image pan and zoom behavior (drag the mouse to pan, use the scroll wheel to zoom). To implement this functionality in your solution, you will need to implement `ViewportImp::set_viewbox()` in `viewport.cpp`.
A viewport defines a region of the SVG canvas that is visible in the app. When the application initially launches, the entire canvas is in view. For example, if the SVG canvas is of size 400x300, then the viewport will initially be centered on the center of the canvas, and have a vertical field of view that spans the entire canvas. Specifically, the member values of the `Viewport` class will be: `x=200, y=150, span=150`.
A viewport defines a region of the SVG canvas that is visible in the app. The 3 properties we care about here are `centerX`, `centerY` and `vspan` (vertical span). They are all defined in normalized svg coordinate space.
When user actions require the viewport be changed, the application will call `update_viewbox()` with the appropriate parameters. Given this change in view parameters, you should implement `set_viewbox()` to compute a transform `canvas_to_norm` based on the new view parameters. This transform should map the SVG canvas coordinate space to a normalized space where the top left of the viewport region maps to (0,0) and the bottom right maps to (1, 1). For example, for the values `x=200,y=150, span=10`, then SVG canvas coordinate (190, 140) transforms to normalized coordinate (0, 0) and canvas coordinate (210, 160) transforms to (1, 1).
When the application initially launches, the entire canvas is in view. For example, when it loads an SVG canvas, the viewport is initially centered on the canvas and has a vertical field of view that spans the entire canvas. Specifically, the member values of the `Viewport` class will be: `centerX=0.5, centerY=0.5`, and `vspan` will be some value >= 0.5.
Once you have correctly implemented `set_viewbox()`, your solution will respond to mouse controls in the same way as the reference implementation.
When user actions require the viewport be changed, the application will call `update_viewbox()` with the appropriate parameters. Given this change in view parameters, you should implement `set_viewbox()` to compute and set the transform `svg_2_norm` (by calling `set_svg_2_norm`) based on the new view parameters. This transform should map the normalized SVG canvas coordinate space to a normalized device coordinate space, where the top left of the viewport region maps to (0,0) and the bottom right maps to (1, 1). For example, for the values `centerX=0, centerY=0, vspan=1`, the normalized SVG canvas coordinate `(0, 0)` (top left corner) transforms to normalized coordinate `(0.5, 0.5)` (center of screen) and normalized SVG canvas coordinate `(0, 1)` (bottom left corner) transforms to `(0.5, 1)` (bottom center).
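This transform can be written as a single 3x3 matrix. The sketch below assumes the same `vspan` scale applies to both axes (consistent with the example above), i.e. `p' = (p - center) / (2 * vspan) + 0.5`; the `Mat3` struct is a stand-in for the starter code's `Matrix3x3`.

```cpp
#include <cassert>
#include <cmath>

// Minimal 3x3 matrix in row-major order, applied to 2D points in
// homogeneous coordinates.
struct Mat3 {
  float m[9];
  void apply(float x, float y, float& ox, float& oy) const {
    ox = m[0] * x + m[1] * y + m[2];
    oy = m[3] * x + m[4] * y + m[5];
  }
};

// Build the viewbox transform: scale by 1/(2*vspan) about the view center,
// then translate so the center lands at (0.5, 0.5).
Mat3 make_svg_2_norm(float centerX, float centerY, float vspan) {
  float s = 1.0f / (2.0f * vspan);
  return Mat3{{ s, 0, 0.5f - s * centerX,
                0, s, 0.5f - s * centerY,
                0, 0, 1 }};
}
```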
Once you have correctly implemented `set_viewbox()`, your solution will respond to mouse pan and zoom in the same way as the reference implementation.
#### Task 6: Drawing Scaled Images
**This part of the assignment requires knowledge of concepts in Lecture 6.**
**This part of the assignment requires knowledge of concepts in Lecture _Perspective Projection and Texture Mapping_.**
In this task, you will implement `rasterize_image()` in `software_renderer.cpp`.
......@@ -238,15 +262,15 @@ To keep things very simple, we are going to constrain this problem to rasterizin
- The image element should cover all screen samples inside the specified rectangle.
- For each image, texture space spans a [0-1]^2 domain as described in class. That is, given the example above, the mapping from screen-space to texture-space is as follows: `(x0, y0)` in screen space maps to image texture coordinate `(0, 0)` and `(x1, y1)` maps to `(1, 1)`.
- You may wish to look at the implementation of input texture images in `texture.h/.cpp`. The class `Sampler2D` provides skeleton of methods for nearest-neighbor (`sampler_nearest()`), bilinear (`sampler_bilinear()`), and trilinear filtering (`sample_trilinear()`). In this task, for each covered sample, the color of the image at the specified sample location should be computed using **bilinear filtering** of the input texture. Therefore you should implement `Sampler2D::sampler_bilinear()` in `texture.cpp` and call it from `rasterize_image()`. (However, we recommend first implementing `Sampler2D::sampler_nearest()` -- as nearest neighbor filtering is simpler and will be given partial credit.)
- You may wish to look at the implementation of input texture images in `texture.h/.cpp`. The class `Sampler2D` provides skeletons of the methods for nearest-neighbor (`sample_nearest()`), bilinear (`sample_bilinear()`), and trilinear filtering (`sample_trilinear()`). In this task, for each covered sample, the color of the image at the specified sample location should be computed using **bilinear filtering** of the input texture. Therefore you should implement `Sampler2D::sample_bilinear()` in `texture.cpp` and call it from `rasterize_image()`. (However, we recommend first implementing `Sampler2D::sample_nearest()` -- as nearest neighbor filtering is simpler and will be given partial credit.)
- As discussed in class, please assume that image pixels correspond to samples at half-integer coordinates in texture space.
- The `Texture` struct stored in the `Sampler2D` class maintains multiple image buffers corresponding to a mipmap hierarchy. In this task, you will sample from level 0 of the hierarchy: `Texture::mipmap[0]`.
- The `Texture` struct stored in the `Sampler2D` class maintains multiple image buffers corresponding to a mipmap hierarchy. In this task, you will sample from level 0 of the hierarchy: `Texture::mipmap[0]`. In other words, if you call one of the samplers above, you should pass in `0` for the level parameter.
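The half-integer convention above is the subtle part of bilinear filtering, so here is a single-channel sketch. The `MipLevel` struct, the edge-clamping behavior, and the assumption that `u, v` are already scaled into texel units are all simplifications for illustration, not the starter-code interface.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Minimal stand-in for one mip level: row-major RGBA8 texels.
struct MipLevel { size_t width, height; std::vector<uint8_t> texels; };

// Bilinear sampling of one channel, assuming texel centers sit at
// half-integer texture coordinates. u, v are in texel units; out-of-range
// neighbors are clamped to the edge.
float sample_bilinear_channel(const MipLevel& tex, float u, float v, int c) {
  float x = u - 0.5f, y = v - 0.5f; // shift so texel centers are integers
  int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
  float fx = x - x0, fy = y - y0;
  auto texel = [&](int xi, int yi) {
    xi = std::min(std::max(xi, 0), (int)tex.width  - 1); // clamp to edge
    yi = std::min(std::max(yi, 0), (int)tex.height - 1);
    return (float)tex.texels[4 * (yi * tex.width + xi) + c];
  };
  // Lerp horizontally on the two neighboring rows, then vertically.
  float top = texel(x0, y0)     * (1 - fx) + texel(x0 + 1, y0)     * fx;
  float bot = texel(x0, y0 + 1) * (1 - fx) + texel(x0 + 1, y0 + 1) * fx;
  return top * (1 - fy) + bot * fy;
}
```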
When you are done, you should be able to draw `basic/test7.svg`.
#### Task 7: Anti-Aliasing Image Elements Using Trilinear Filtering
**This part of the assignment requires knowledge of concepts in Lecture 6.**
**This part of the assignment requires knowledge of concepts in Lecture _Perspective Projection and Texture Mapping_.**
In this task you will improve your anti-aliasing of image elements by adding trilinear filtering. This will involve generating mipmaps for image elements at SVG load time and then modifying your sampling code from Task 6 to implement trilinear filtering using the mipmap. Your implementation is only required to work for images that have power-of-two dimensions in each direction.
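Trilinear filtering blends bilinear samples from two adjacent mip levels. Here is a sketch of the level selection, under the (stated) assumption that `footprint` measures roughly how many texels map to one screen sample in each dimension; the exact footprint estimate is up to your implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Continuous level of detail is log2(footprint), clamped to the available
// hierarchy. Trilinear filtering takes bilinear samples at the two adjacent
// integer levels (lo, hi) and blends them by the fractional part t:
// result = (1 - t) * bilinear(lo) + t * bilinear(hi).
void trilinear_levels(float footprint, int num_levels,
                      int& lo, int& hi, float& t) {
  float lod = std::log2(std::max(footprint, 1.0f));
  lod = std::min(lod, (float)(num_levels - 1));
  lo = (int)std::floor(lod);
  hi = std::min(lo + 1, num_levels - 1);
  t  = lod - lo;
}
```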
......@@ -261,6 +285,8 @@ At this point, zooming in and out of your image should produce nicely filtered r
Up until this point your renderer was not able to properly draw semi-transparent elements. Therefore, your last programming task in this assignment is to modify your code to implement [Simple Alpha Blending](http://www.w3.org/TR/SVGTiny12/painting.html#CompositingSimpleAlpha) in the SVG specification.
Note that in the above link, all the element and canvas color values assume **premultiplied alpha**. Refer to lecture _Depth and Transparency_ and [this blog post](https://developer.nvidia.com/content/alpha-blending-pre-or-not-pre) for the differences between premultiplied alpha and non-premultiplied alpha.
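The "over" operator from that spec can be sketched in premultiplied form as follows. `Color`, `premultiply`, and `blend_over` are illustrative names working in floats in [0, 1]; your implementation operates on the 8-bit values in the render target.

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b, a; };

// Convert a non-premultiplied color to premultiplied alpha.
Color premultiply(Color c) { return {c.r * c.a, c.g * c.a, c.b * c.a, c.a}; }

// Simple alpha compositing ("over") in premultiplied form, per the spec
// linked above: C' = Cs + (1 - As) * Cd, where both colors are
// premultiplied by their alpha.
Color blend_over(Color src, Color dst) {
  float k = 1.0f - src.a;
  return { src.r + k * dst.r, src.g + k * dst.g,
           src.b + k * dst.b, src.a + k * dst.a };
}
```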
While the application will always clear the render target buffer to the canvas color, opaque white ((255,255,255,255) in RGBA), at the beginning of a frame before drawing any SVG element, your transparency implementation should make no assumptions about the state of the target at the beginning of a frame.
When you are done, you should be able to correctly draw the tests in `/alpha`.
......@@ -275,15 +301,19 @@ Now that you have implemented a few basic features of the SVG format, it is time
We have provided you with a couple of examples of subdividing complex, smooth shapes into much simpler triangles in `/subdiv`. Subdivision is something you will dig into in great detail in the next assignment. You can see subdivision in action as you step though the test files we provided.
In addition to what you have implemented already, the [SVG Basic Shapes](http://www.w3.org/TR/SVG/shapes.html) also include circles and ellipses. We may support these features by converting them to triangulated polygons. But if we zoom in on the edges, there will be a point at which the approximation breaks down and the image no longer will look like a smooth curve. Triangulating more finely can be costly as a large number of triangles may be necessary to get a good approximation. Is there a better way to sample these shapes? For example, implement `drawEllipse` in `drawsvg.cpp` (2 pts).
In addition to what you have implemented already, the [SVG Basic Shapes](http://www.w3.org/TR/SVG/shapes.html) also include circles and ellipses. We may support these features by converting them to triangulated polygons. But if we zoom in on the edges, there will be a point at which the approximation breaks down and the image no longer will look like a smooth curve. Triangulating more finely can be costly as a large number of triangles may be necessary to get a good approximation. Is there a better way to sample these shapes? For example, implement `draw_ellipse` in `software_renderer.cpp` and `hardware/hardware_renderer.cpp` (2 pts).
### Friendly Advice from your TAs
- As always, start early. There is a lot to implement in this assignment, and no official checkpoint, so don't fall behind!
- Open `.../DrawSVG/CMU462/docs/html/index.html` with a browser to see documentation of many utility classes, especially the ones related to vectors and matrices.
- Be careful with memory allocation, as too many or too frequent heap allocations will severely degrade performance.
- Make sure you have a submission directory that you can write to as soon as possible. Notify course staff if this is not the case.
- While C has many pitfalls, C++ introduces even more wonderful ways to shoot yourself in the foot. It is generally wise to stay away from as many features as possible, and make sure you fully understand the features you do use.
- The reference solution is **for reference only**, and we compare your result to the reference solution only qualitatively. It contains bugs, too: for example, sometimes lines are not drawn when their endpoints are offscreen, and lines get thinner when supersampling. So don't panic if the number of pixels differing from the reference seems large. Still, looking at the diff image can be a good sanity check.
- We also mostly run your code with svg files in the `basic` folder, so don't worry too much about the hardcore ones.
- Currently, DrawSVG does not support rendering `<circle>` svg elements (which is different from `<ellipse>`).
### Resources and Notes
......
......@@ -73,8 +73,8 @@ void DrawSVG::init() {
// auto adjust
auto_adjust(i);
// set initial canvas_to_norm for imp using ref
viewport_imp[i]->set_canvas_to_norm(viewport_ref[i]->get_canvas_to_norm());
// set initial svg_2_norm for imp using ref
viewport_imp[i]->set_svg_2_norm(viewport_ref[i]->get_svg_2_norm());
// generate mipmaps
regenerate_mipmap(i);
......@@ -436,12 +436,12 @@ void DrawSVG::redraw() {
clear();
// set canvas_to_screen transformation
Matrix3x3 m_imp = norm_to_screen * viewport_imp[current_tab]->get_canvas_to_norm();
Matrix3x3 m_ref = norm_to_screen * viewport_ref[current_tab]->get_canvas_to_norm();
software_renderer_imp->set_canvas_to_screen( m_imp );
software_renderer_ref->set_canvas_to_screen( m_ref );
hardware_renderer->set_canvas_to_screen( m_ref );
// set svg_2_screen transformation
Matrix3x3 m_imp = norm_to_screen * viewport_imp[current_tab]->get_svg_2_norm();
Matrix3x3 m_ref = norm_to_screen * viewport_ref[current_tab]->get_svg_2_norm();
software_renderer_imp->set_svg_2_screen( m_imp );
software_renderer_ref->set_svg_2_screen( m_ref );
hardware_renderer->set_svg_2_screen( m_ref );
switch (method) {
......
......@@ -50,7 +50,7 @@ void HardwareRenderer::draw_svg( SVG& svg ) {
begin2DDrawing();
// set top level transformation
transformation = canvas_to_screen;
transformation = svg_2_screen;
// draw all elements
for ( size_t i = 0; i < svg.elements.size(); ++i ) {
......
......@@ -33,8 +33,8 @@ class HardwareRenderer : public SVGRenderer {
}
// Set svg to screen transformation
inline void set_canvas_to_screen( Matrix3x3 canvas_to_screen ) {
this->canvas_to_screen = canvas_to_screen;
inline void set_svg_2_screen( Matrix3x3 svg_2_screen ) {
this->svg_2_screen = svg_2_screen;
}
private:
......@@ -96,7 +96,7 @@ class HardwareRenderer : public SVGRenderer {
size_t context_w; size_t context_h;
// SVG coordinates to screen space coordinates
Matrix3x3 canvas_to_screen;
Matrix3x3 svg_2_screen;
}; // class HardwareRenderer
......
......@@ -17,7 +17,7 @@ namespace CMU462 {
void SoftwareRendererImp::draw_svg( SVG& svg ) {
// set top level transformation
transformation = canvas_to_screen;
transformation = svg_2_screen;
// draw all elements
for ( size_t i = 0; i < svg.elements.size(); ++i ) {
......
......@@ -39,8 +39,8 @@ class SoftwareRenderer : public SVGRenderer {
}
// Set svg to screen transformation
inline void set_canvas_to_screen( Matrix3x3 canvas_to_screen ) {
this->canvas_to_screen = canvas_to_screen;
inline void set_svg_2_screen( Matrix3x3 svg_2_screen ) {
this->svg_2_screen = svg_2_screen;
}
protected:
......@@ -58,7 +58,7 @@ class SoftwareRenderer : public SVGRenderer {
Sampler2D* sampler;
// SVG coordinates to screen space coordinates
Matrix3x3 canvas_to_screen;
Matrix3x3 svg_2_screen;
}; // class SoftwareRenderer
......
......@@ -4,23 +4,23 @@
namespace CMU462 {
void ViewportImp::set_viewbox( float x, float y, float span ) {
void ViewportImp::set_viewbox( float centerX, float centerY, float vspan ) {
// Task 5 (part 2):
// Set svg to normalized device coordinate transformation. Your input
// arguments are defined as SVG canvans coordinates.
this->x = x;
this->y = y;
this->span = span;
// Set normalized svg to normalized device coordinate transformation. Your input
// arguments are defined as normalized SVG canvas coordinates.
this->centerX = centerX;
this->centerY = centerY;
this->vspan = vspan;
}
void ViewportImp::update_viewbox( float dx, float dy, float scale ) {
this->x -= dx;
this->y -= dy;
this->span *= scale;
set_viewbox( x, y, span );
this->centerX -= dx;
this->centerY -= dy;
this->vspan *= scale;
set_viewbox( centerX, centerY, vspan );
}
} // namespace CMU462
......@@ -11,27 +11,26 @@ class Viewport {
Viewport( ) : svg_2_norm( Matrix3x3::identity() ) { }
inline Matrix3x3 get_canvas_to_norm() {
inline Matrix3x3 get_svg_2_norm() {
return svg_2_norm;
}
inline void set_canvas_to_norm( Matrix3x3 m ) {
inline void set_svg_2_norm( Matrix3x3 m ) {
svg_2_norm = m;
}
// set viewbox to look at (x,y) in svg coordinate space. Span defineds
// the view radius of the viewbox in number of pixels (the amout of pixels
// included in the viewbox in both x and y direction).
virtual void set_viewbox( float x, float y, float span ) = 0;
// set viewbox to look at (centerX, centerY) in normalized svg coordinate space. vspan defines
// the vertical view radius of the viewbox (i.e. vspan >= 0.5 means the entire svg canvas is in view)
virtual void set_viewbox( float centerX, float centerY, float vspan ) = 0;
// Move the viewbox by (dx,dy) in svg coordinate space. Scale the the view
// Move the viewbox by (dx,dy) in normalized svg coordinate space. Scale the view
// range by scale.
virtual void update_viewbox( float dx, float dy, float scale ) = 0;
protected:
// current viewbox properties
float x, y, span;
float centerX, centerY, vspan;
// SVG coordinate to normalized display coordinates
Matrix3x3 svg_2_norm;
......@@ -42,7 +41,7 @@ class Viewport {
class ViewportImp : public Viewport {
public:
virtual void set_viewbox( float x, float y, float size );
virtual void set_viewbox( float centerX, float centerY, float size );
virtual void update_viewbox( float dx, float dy, float scale );
}; // class ViewportImp
......@@ -51,7 +50,7 @@ class ViewportImp : public Viewport {
class ViewportRef : public Viewport {
public:
virtual void set_viewbox( float x, float y, float size );
virtual void set_viewbox( float centerX, float centerY, float size );
virtual void update_viewbox( float dx, float dy, float scale );
}; // class ViewportRef
......