Teaser

Due Date

Mon 04/04, 11:59pm

Overview

In this project you will implement a simple path tracer that can render pictures with global illumination effects. The first part of the assignment will focus on providing an efficient implementation of ray-scene geometry queries. In the second half of the assignment you will add the ability to simulate how light bounces around the scene, which will allow your renderer to synthesize much higher-quality images. Much like in Assignment 2, input scenes are defined in COLLADA files, so you can create your own scenes to render using free software like Blender.

Getting started

We will be distributing assignments with git. You can find the repository for this assignment at http://462cmu.github.io/asst3_pathtracer/. If you are unfamiliar with git, here is what you need to do to get the starter code:

$ git clone https://github.com/462cmu/asst3_pathtracer.git

This will create an asst3_pathtracer folder with all the source files.

Build Instructions

In order to ease the process of running on different platforms, we will be using CMake for our assignments. You will need a CMake installation of version 2.8+ to build the code for this assignment. The GHC 5xxx cluster machines have all the packages required to build the project. It should also be relatively easy to build the assignment and work locally on OS X or Linux. Building on Windows is currently not supported.

If you are working on OS X and do not have CMake installed, we recommend installing it through MacPorts:

sudo port install cmake

Or Homebrew:

brew install cmake

To build your code for this assignment:

$ cd asst3_pathtracer && mkdir build && cd build
$ cmake ..
$ make
$ make install

Using the Path Tracer app

When you have successfully built your code, you will get an executable named pathtracer. The pathtracer executable takes one required argument from the command line: the path of a COLLADA file describing the scene. For example, to load the Keenan cow scene dae/meshEdit/cow.dae from your build directory:

./pathtracer ../dae/meshEdit/cow.dae

The following command line options to the pathtracer app are provided for convenience and debugging:

Command line option    Description
-t <INT> Number of threads used for rendering (default=1)
-s <INT> Set the number of camera rays per pixel (default=1) (should be a power of two)
-l <INT> Number of samples to integrate light from area light sources (default=1, higher numbers decrease noise but increase rendering time)
-m <INT> Maximum ray "depth" (the number of bounces on a ray path before the path is terminated)
-h Print command line help

Mesh Editor Mode

When you first run the application, you will see an interactive wireframe view of the scene that should be familiar to you from Assignment 2. You can rotate the camera by left-clicking and dragging, zoom in/out using the scroll wheel (or multi-touch scrolling on a trackpad), and translate (dolly) the camera using right-click drag. Hitting the spacebar will reset the view.

As with assignment 2, you'll notice that mesh elements (faces, edges, and vertices) under the cursor are highlighted. Clicking on these mesh elements will display information about the element and its associated data. The UI has all the same mesh editing controls as the MeshEdit app from Assignment 2 (listed below). If you want, you can copy your implementation of these operators from Assignment 2 into your Assignment 3 codebase, then you will be able to edit scene geometry using the app.

PathTracer GUI

Rendered Output Mode and BVH Visualization Mode

In addition to the mesh editing UI, the app features two other UI modes. Pressing the R key toggles display to the rendered output of your ray tracer. If you press R in the starter code, you will see a black screen (you have not implemented your ray tracer yet!). However, a correct implementation of the assignment will produce pictures of the cow that look like the one below.

PathTracer GUI

Pressing E returns to the mesh editor view. Pressing V displays the BVH visualizer mode, which will be a helpful visualization tool for debugging the bounding volume hierarchy you will need to implement for this assignment. (More on this later.)

Summary of Viewer Controls

A table of all the keyboard controls in the interactive mesh viewer part of the pathtracer application is provided below.

Command Key
Flip the selected edge F
Split the selected edge S
Collapse the selected edge C
Upsample the current mesh U
Downsample the current mesh D
Resample the current mesh M
Toggle information overlay H
Return to mesh edit mode E
Show BVH visualizer mode V
Show ray traced output R
Decrease area light samples (RT mode) -
Increase area light samples (RT mode) +
Decrease samples (camera rays) per pixel [
Increase samples (camera rays) per pixel ]
Descend to left child (BVH viz mode) LEFT
Descend to right child (BVH viz mode) RIGHT
Move to parent node (BVH viz mode) UP
Reset camera to default position SPACE
Edit a vertex position (left-click and drag on vertex)
Rotate camera (left-click and drag on background)
Zoom camera (mouse wheel)
Dolly (translate) camera (right-click and drag on background)

Getting Acquainted with the Starter Code

Following the design of modern ray tracing systems, we have chosen to implement the ray tracing components of the Assignment 3 starter code in a very modular fashion. Therefore, unlike previous assignments, your implementation will touch a number of files in the starter code. The main structure of the code base is:

Please refer to the inline comments (or the Doxygen documentation) for further details.

Task 1: Generating Camera Rays

"Camera rays" emanate from the camera and measure the amount of scene radiance that reaches a point on the camera's sensor plane. (Given a point on the virtual sensor plane, there is a corresponding camera ray that is traced into the scene.)

Take a look at Pathtracer::raytrace_pixel() in pathtracer.cpp. The job of this function is to compute the amount of energy arriving at this pixel of the image. Conveniently, we've given you a function Pathtracer::trace_ray(r) that provides a measurement of incoming scene radiance along the direction given by ray r.

When the number of samples per pixel is 1, you should sample incoming radiance at the center of each pixel by constructing a ray r that begins at this sensor location and travels through the camera's pinhole. Once you have computed this ray, then call Pathtracer::trace_ray(r) to get the energy deposited in the pixel.

Step 1: Given the width and height of the screen, and a point in screen space, compute the corresponding coordinates of the point in normalized ([0,1] x [0,1]) screen space in Pathtracer::raytrace_pixel(). Pass these coordinates to the camera via Camera::generate_ray() in camera.cpp.

Step 2: Implement Camera::generate_ray(). This function should return a ray in world space that reaches the given sensor sample point. We recommend that you compute this ray in camera space (where the camera pinhole is at the origin, the camera is looking down the -Z axis, and +Y is at the top of the screen). Note that the camera maintains a camera-space-to-world-space transform c2w that will be handy.

Step 3: Your implementation of Pathtracer::raytrace_pixel() must support supersampling (more than one sample per pixel). The member Pathtracer::ns_aa gives the number of samples of scene radiance your ray tracer should take per pixel (a.k.a. the number of camera rays per pixel). Note that Pathtracer::gridSampler->get_sample() provides uniformly distributed random 2D points in the [0,1]^2 box (see the implementation in sampler.cpp).
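
Putting the three steps together, here is a minimal sketch of both functions. It assumes the starter code's Vector3D, Vector2D, Ray, and Spectrum types; names like sampleBuffer.w/h, hFov, vFov, and radians() follow the starter code, but double-check them against your copy:

Spectrum Pathtracer::raytrace_pixel(size_t x, size_t y) {
  size_t num_samples = ns_aa;  // camera rays per pixel
  Spectrum total = Spectrum();
  for (size_t i = 0; i < num_samples; i++) {
    // One sample: use the pixel center. Otherwise: jitter within the pixel.
    Vector2D p = (num_samples == 1) ? Vector2D(0.5, 0.5)
                                    : gridSampler->get_sample();
    double u = (x + p.x) / sampleBuffer.w;  // normalized [0,1] screen coords
    double v = (y + p.y) / sampleBuffer.h;
    total += trace_ray(camera->generate_ray(u, v));
  }
  return total * (1.0 / num_samples);  // average of all samples
}

Ray Camera::generate_ray(double u, double v) const {
  // Map (u,v) in [0,1]^2 onto a virtual sensor plane at z = -1 in camera
  // space; hFov/vFov are the fields of view in degrees.
  double tan_h = tan(0.5 * radians(hFov));
  double tan_v = tan(0.5 * radians(vFov));
  Vector3D d_camera((2 * u - 1) * tan_h, (2 * v - 1) * tan_v, -1);
  return Ray(pos, (c2w * d_camera).unit());  // rotate into world space
}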

Tips:

Extra credit ideas:

Task 2: Intersecting Triangles and Spheres

Now that your ray tracer generates camera rays, you need to implement ray-primitive intersection routines for the two primitives in the starter code: triangles and spheres. This handout will discuss the requirements of intersecting primitives in terms of triangles.

The Primitive interface contains two types of intersection routines:

You will need to implement both of these routines. Correctly doing so requires you to understand the fields in the Ray structure defined in ray.h.

There are also two additional fields in the Ray structure that can be helpful in accelerating your intersection computations with bounding boxes (see the BBox class in bbox.h). You may or may not find these precomputed values helpful in your computations.

One important detail of the Ray structure is that min_t and max_t are mutable fields. This means they can be modified even by const member functions such as Triangle::intersect(). When finding the first intersection of a ray and the scene, you almost certainly want to update the ray's max_t value after finding each hit with scene geometry. By bounding the ray as tightly as possible, your ray tracer can skip tests against scene geometry that cannot possibly yield a closer hit, resulting in higher performance.

Step 1: Intersecting Triangles

While faster implementations are possible, we recommend you implement ray-triangle intersection using the method described in the lecture slides. Further details of implementing this method efficiently are given in these notes.

There are two important details you should be aware of about intersection:

Once you've successfully implemented triangle intersection, you will be able to render many of the scenes in the scene directory (dae/). However, your ray tracer will be very slow!
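
While you should follow the lecture notes, here is a hedged sketch of one standard formulation (Moller-Trumbore, which is equivalent to the cross-product method from the slides). The local names p0/p1/p2 and n0/n1/n2 stand for the triangle's vertex positions and normals, however your Triangle class stores them, and the Intersection field names are assumptions to check against the starter code:

bool Triangle::intersect(const Ray& r, Intersection* isect) const {
  Vector3D e1 = p1 - p0, e2 = p2 - p0, s = r.o - p0;
  Vector3D s1 = cross(r.d, e2), s2 = cross(s, e1);
  double denom = dot(s1, e1);
  if (denom == 0) return false;                    // ray parallel to triangle
  double inv = 1.0 / denom;
  double u = dot(s1, s) * inv;                     // barycentric coordinates
  double v = dot(s2, r.d) * inv;
  double t = dot(s2, e2) * inv;
  if (u < 0 || v < 0 || u + v > 1) return false;   // hit point outside triangle
  if (t < r.min_t || t > r.max_t) return false;    // outside valid t range
  r.max_t = t;                                     // tighten the ray (mutable field)
  isect->t = t;
  isect->n = (1 - u - v) * n0 + u * n1 + v * n2;   // interpolate vertex normals
  // ...fill in the remaining Intersection fields (primitive, bsdf)...
  return true;
}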

Step 2: Intersecting Spheres

Please also implement the intersection routines for the Sphere class in sphere.cpp. Remember that your intersection tests should respect the ray's min_t and max_t values.
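
As a sketch, the sphere test reduces to solving a quadratic in t. This version assumes the sphere's center and radius are stored in members o and r, and that a helper like test() exists (check sphere.h; adapt the names if yours differ):

bool Sphere::test(const Ray& ray, double& t1, double& t2) const {
  Vector3D oc = ray.o - o;
  double a = dot(ray.d, ray.d);
  double b = 2.0 * dot(oc, ray.d);
  double c = dot(oc, oc) - r * r;
  double disc = b * b - 4.0 * a * c;
  if (disc < 0) return false;          // ray misses the sphere entirely
  double sq = sqrt(disc);
  t1 = (-b - sq) / (2.0 * a);          // near root
  t2 = (-b + sq) / (2.0 * a);          // far root
  // The ray may start inside the sphere: t1 can fall below min_t while t2
  // is still a valid hit, so check both roots against [min_t, max_t].
  return (t1 <= ray.max_t && t2 >= ray.min_t);
}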

Task 3: Implementing a Bounding Volume Hierarchy (BVH)

In this task you will implement a bounding volume hierarchy that accelerates ray-scene intersection. All of this work will be in the BVHAccel class in bvh.cpp.

The starter code constructs a valid BVH, but it is a trivial BVH with a single node containing all scene primitives. A BVHNode has the following fields:

The BVHAccel class maintains an array of all primitives in the BVH (primitives). The fields start and range in a BVHNode refer to the range of primitives in this array that the node contains.

Step 1: Your job is to construct a BVH using the Surface Area Heuristic discussed in class. Tree construction should occur when the BVHAccel object is constructed.
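
To make the heuristic concrete, here is a hedged sketch of evaluating bucketed SAH costs along one axis. It assumes BBox provides expand(), centroid(), and surface_area() (check bbox.h), and it omits the actual partitioning step. The constant division by the node's surface area is dropped since it does not change which split is cheapest:

static const int NUM_BUCKETS = 16;
struct Bucket { BBox bb; int count = 0; };

// Returns the cheapest bucket boundary along `axis`, or -1 if no useful
// split exists (i.e., make this node a leaf).
int best_split(std::vector<Primitive*>& prims, size_t start, size_t range,
               const BBox& node_bb, int axis) {
  Bucket buckets[NUM_BUCKETS];
  double lo = node_bb.min[axis], extent = node_bb.extent[axis];
  if (extent == 0) return -1;                       // degenerate axis
  for (size_t i = start; i < start + range; i++) {  // bin primitive centroids
    BBox pb = prims[i]->get_bbox();
    int b = (int)(NUM_BUCKETS * (pb.centroid()[axis] - lo) / extent);
    b = std::min(b, NUM_BUCKETS - 1);
    buckets[b].count++;
    buckets[b].bb.expand(pb);
  }
  int best = -1;
  double best_cost = INF_D;   // the starter's infinity constant (misc.h)
  for (int p = 1; p < NUM_BUCKETS; p++) {           // candidate split planes
    BBox lb, rb; int nl = 0, nr = 0;
    for (int b = 0; b < p; b++)           { lb.expand(buckets[b].bb); nl += buckets[b].count; }
    for (int b = p; b < NUM_BUCKETS; b++) { rb.expand(buckets[b].bb); nr += buckets[b].count; }
    // SAH: cost ~ SA(left) * N_left + SA(right) * N_right
    double cost = lb.surface_area() * nl + rb.surface_area() * nr;
    if (nl > 0 && nr > 0 && cost < best_cost) { best_cost = cost; best = p; }
  }
  return best;
}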

We have implemented a number of tools to help you debug the BVH. Press the V key to enter BVH visualization mode. This mode allows you to directly visualize a BVH as shown below. The current BVH node is highlighted in red. Primitives in the left and right subtrees of the current BVH node are rendered in different colors. Press the LEFT or RIGHT keys to descend to the child nodes of the tree. Press UP to move to the parent of the current node.

BVH Vis

Another view showing the contents of a lower node in the BVH:

BVH Vis

Step 2: Implement the ray-BVH intersection routines required by the Primitive interface. You may wish to consider the node visit order optimizations we discussed in class. Once complete, your renderer should be able to render all of the test scenes in a reasonable amount of time.
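
Here is a hedged sketch of front-to-back traversal as a recursive helper. The helper name is hypothetical, and the BBox::intersect(ray, t0, t1) signature should be checked against bbox.h:

bool BVHAccel::intersect_node(BVHNode* node, const Ray& ray, Intersection* isect) const {
  if (node->isLeaf()) {
    bool hit = false;
    for (size_t i = node->start; i < node->start + node->range; i++)
      if (primitives[i]->intersect(ray, isect)) hit = true;  // tightens ray.max_t
    return hit;
  }
  double lt0 = ray.min_t, lt1 = ray.max_t;
  double rt0 = ray.min_t, rt1 = ray.max_t;
  bool hitL = node->l->bb.intersect(ray, lt0, lt1);
  bool hitR = node->r->bb.intersect(ray, rt0, rt1);
  // Visit the nearer child first: hits found there shrink ray.max_t and
  // can let us skip the farther child entirely.
  BVHNode* near_child = node->l; BVHNode* far_child = node->r;
  bool near_hit = hitL, far_hit = hitR;
  double far_t = rt0;
  if (hitR && (!hitL || rt0 < lt0)) {
    std::swap(near_child, far_child);
    near_hit = hitR; far_hit = hitL; far_t = lt0;
  }
  bool hit = false;
  if (near_hit) hit |= intersect_node(near_child, ray, isect);
  if (far_hit && far_t <= ray.max_t) hit |= intersect_node(far_child, ray, isect);
  return hit;
}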

Task 4: Implementing Shadow Rays

In this task you will modify Pathtracer::trace_ray() to implement accurate shadows.

Currently trace_ray computes the following:

Shadows occur when another scene object blocks light emitted from scene light sources towards the hit point (hit_p). Fortunately, determining whether or not a ray of light from a light source to the hit point is occluded by another object is easy given a working ray tracer (which you have at this point!). You simply want to know whether a ray originating from the hit point (hit_p), and traveling towards the light source (dir_to_light), hits any scene geometry before reaching the light (note: the light's distance from the hit point is given by dist_to_light).
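
As a hedged sketch (EPS_D is the starter's small epsilon constant; substitute your own if needed), the occlusion test looks like this. Offsetting the shadow ray's origin, and stopping it just short of the light, prevents it from re-hitting the surface it leaves ("shadow acne") or hitting the light itself:

Ray shadow(hit_p + EPS_D * dir_to_light, dir_to_light);
shadow.max_t = dist_to_light - EPS_D;   // stop just short of the light
if (!bvh->intersect(shadow)) {
  // not occluded: accumulate this light sample's contribution
}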

Your job is to implement the logic needed to compute whether hit_p is in shadow with respect to the current light source sample. Below are a few tips:

At this point you should be able to render very striking images. For example, here is the Stanford Dragon model rendered with both a directional light and a hemispherical light.

Shadow directional

Shadow hemispherical

Task 5: Adding Path Tracing

A few notes before getting started:

The new release of the starter code for tasks 5-7 makes a few changes and improvements to the original starter code of the assignment:

You should change your implementation of the reflectance estimate due to direct lighting in Pathtracer::trace_ray() to iterate over the list of scene light sources using the following code:

for (SceneLight* light : scene->lights) {
    /// do work here...
}

In this task you will modify your ray tracer to add support for indirect illumination. We wish for you to implement the path tracing algorithm that terminates ray paths using Russian roulette, as discussed in class. We recommend that you restructure the code in Pathtracer::trace_ray() as follows:

Pathtracer::trace_ray() {
  if (surface hit) {
       //
       // compute reflectance due to direct lighting only
       //
       for each light:
          accumulate reflectance contribution due to light

       //
       // add reflectance due to indirect illumination
       //
       randomly select a new ray direction (it may be a
       reflection or transmittance ray depending on the
       surface type -- see BSDF::sample_f())

       potentially kill path (using Russian roulette)

       evaluate weighted reflectance contribution due
       to light from this direction
  }
}
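
To make the "potentially kill path" and weighting steps concrete, here is a hedged sketch of the indirect bounce. It assumes the bounce is computed in the BSDF's local shading space (normal along +z), that w_out is the outgoing direction already transformed into that space, that o2w maps local directions back to world space, and that random_uniform() is available from the starter's random utilities:

// Russian roulette: terminate the path with probability terminate_p, and
// divide surviving contributions by (1 - terminate_p) so the estimator
// remains unbiased.
double terminate_p = 0.35;                       // example choice
if (random_uniform() < terminate_p) return L_out;

Vector3D w_in;                                   // sampled direction (local space)
float pdf;
Spectrum f = isect.bsdf->sample_f(w_out, &w_in, &pdf);
if (pdf > 0) {
  Vector3D d_world = o2w * w_in;                 // back to world space
  Ray bounce(hit_p + EPS_D * d_world, d_world);
  double cos_theta = fabs(w_in.z);               // normal is +z locally
  L_out += trace_ray(bounce) * f *
           (cos_theta / (pdf * (1.0 - terminate_p)));
}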

As a warmup for the next task, implement BSDF::sample_f for diffuse surfaces (DiffuseBSDF::sample_f); a sketch is given after the image below. The implementation of DiffuseBSDF::f is already provided to you. After correctly implementing the diffuse BSDF and path tracing, your renderer should be able to make a beautifully lit picture of the Cornell Box with:

    ./pathtracer -s 1024 -m 2 -t 8 ../dae/sky/CBspheres_lambertian.dae

Cornell Box Lambertian
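
For the diffuse warmup, here is a minimal sketch of DiffuseBSDF::sample_f using cosine-weighted hemisphere sampling in local shading space (uniform hemisphere sampling also works if you set pdf = 1/(2*pi) instead; random_uniform() is assumed from the starter's random utilities):

Spectrum DiffuseBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  // Cosine-weighted direction: p(w) = cos(theta) / pi.
  double u1 = random_uniform(), u2 = random_uniform();
  double r = sqrt(u1), phi = 2.0 * PI * u2;
  *wi = Vector3D(r * cos(phi), r * sin(phi), sqrt(std::max(0.0, 1.0 - u1)));
  *pdf = (float)(wi->z / PI);
  return f(wo, *wi);            // Lambertian: albedo / pi (already provided)
}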

Note the time-quality tradeoff here. With these command line arguments, your path tracer will be running with 8 worker threads at a sample rate of 1024 camera rays per pixel, with a max ray depth of 2. This will produce an image with relatively high quality but will take quite some time to render. Rendering a high quality image will take a very long time as indicated by the image sequence below, so start testing your path tracer early!

Time-Quality Tradeoff

Here are a few tips:

Task 6: Adding New Materials

Now that you have implemented the ability to sample more complex light paths, it's finally time to add support for more types of materials (other than the fully Lambertian material provided to you in the starter code). In this task you will add support for two types of materials: a perfect mirror and glass (a material featuring both specular reflection and transmittance).

To get started take a look at the BSDF interface in bsdf.cpp. There are a number of key methods you should understand:

There are also two helper functions in the BSDF class that you will need to implement:

What you need to do:

  1. Implement the class MirrorBSDF which represents a material with perfect specular reflection (a perfect mirror). You should implement MirrorBSDF::f(), MirrorBSDF::sample_f(), and BSDF::reflect(). (Hint: what should the pdf computed by MirrorBSDF::sample_f() be? What should the reflectance function f() be?)

  2. Implement the class GlassBSDF which is a glass-like material that both reflects and transmits light. As discussed in class, the fraction of light that is reflected and transmitted through glass is given by the dielectric Fresnel equations, which are documented in detail here. Specifically, your implementation should:

    • Implement BSDF::refract() to add support for refracted ray paths.
    • Implement GlassBSDF::sample_f(). Your implementation should use the Fresnel equations to compute the fraction of reflected light and the fraction of transmitted light. The returned ray sample should be either a reflection ray or a refracted ray, with the probability of generating each type of ray proportional to the Fresnel reflectance. (e.g., if the Fresnel reflectance is 0.9, then you should generate a reflection ray 90% of the time. What should the pdf be in this case?) A sketch follows this list.
    • You should read the provided notes on the Fresnel equations as well as on how to compute a transmittance BRDF.
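
Here is a hedged sketch of the helpers and the glass logic, assuming the starter's local shading space (normal along +z, with wo pointing away from the surface) and members named reflectance, transmittance, and ior. Note that it uses Schlick's approximation as a stand-in for the full dielectric Fresnel equations described in the notes, which are what you should actually implement:

void BSDF::reflect(const Vector3D& wo, Vector3D* wi) {
  *wi = Vector3D(-wo.x, -wo.y, wo.z);            // mirror about the +z normal
}

bool BSDF::refract(const Vector3D& wo, Vector3D* wi, float ior) {
  bool entering = wo.z > 0;                      // which side of the surface?
  double eta = entering ? 1.0 / ior : ior;       // n_incident / n_transmitted
  double cos2t = 1.0 - eta * eta * (1.0 - wo.z * wo.z);   // Snell's law
  if (cos2t < 0) return false;                   // total internal reflection
  double z = sqrt(cos2t) * (entering ? -1.0 : 1.0);
  *wi = Vector3D(-eta * wo.x, -eta * wo.y, z);
  return true;
}

Spectrum MirrorBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  reflect(wo, wi);
  *pdf = 1.0f;                                   // delta distribution
  return reflectance * (1.0 / fabs(wi->z));      // cancels the cos factor
}

Spectrum GlassBSDF::sample_f(const Vector3D& wo, Vector3D* wi, float* pdf) {
  if (!refract(wo, wi, ior)) {                   // total internal reflection:
    reflect(wo, wi);                             // always reflect
    *pdf = 1.0f;
    return reflectance * (1.0 / fabs(wi->z));
  }
  // Schlick's approximation to the Fresnel reflectance Fr.
  double r0 = pow((1.0 - ior) / (1.0 + ior), 2.0);
  double Fr = r0 + (1.0 - r0) * pow(1.0 - fabs(wo.z), 5.0);
  if (random_uniform() < Fr) {                   // reflect with probability Fr
    reflect(wo, wi);
    *pdf = (float)Fr;
    return reflectance * (Fr / fabs(wi->z));
  } else {                                       // refract otherwise
    double eta = (wo.z > 0) ? 1.0 / ior : ior;
    *pdf = (float)(1.0 - Fr);
    // eta^2 accounts for radiance compression across the boundary (see notes).
    return transmittance * ((1.0 - Fr) * eta * eta / fabs(wi->z));
  }
}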

When you are done, you will be able to render images like these:

Cornell box spheres 256

Task 7: Infinite Environment Lighting

The final task of this assignment will be to implement a new type of light source: an infinite environment light. An environment light is a light that supplies incident radiance (really, the light intensity dPhi/dOmega) from all directions on the sphere. The source is thought to be "infinitely far away", and is representative of realistic lighting environments in the real world: as a result, rendering using environment lighting can be quite striking.

The intensity of incoming light from each direction is defined by a texture map parameterized by phi and theta, as shown below.

Environment map

In this task you need to implement the EnvironmentLight::sample_L() method in static_scene/environment_light.cpp. You'll start with uniform direction sampling to get things working, and then move to a more advanced implementation that uses importance sampling to significantly reduce variance in rendered images.

Step one: uniform sampling

To get things working, your first implementation of EnvironmentLight::sample_L() will be quite simple. You should generate a random direction on the sphere (with uniform 1/(4*pi) probability with respect to solid angle), convert this direction to coordinates (phi, theta), and then look up the appropriate radiance value in the texture map using bilinear interpolation to keep things simple.
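
A hedged sketch of this first version follows (the sample_L signature follows the starter's SceneLight interface; envMap and the bilerp() helper are assumptions, and you should match the (theta, phi) axis convention of the texture parameterization shown above):

Spectrum EnvironmentLight::sample_L(const Vector3D& p, Vector3D* wi,
                                    float* distToLight, float* pdf) const {
  // Uniform direction on the sphere: z = cos(theta) uniform in [-1,1].
  double u1 = random_uniform(), u2 = random_uniform();
  double z = 1.0 - 2.0 * u1;
  double rxy = sqrt(std::max(0.0, 1.0 - z * z));
  double phi = 2.0 * PI * u2;
  *wi = Vector3D(rxy * cos(phi), rxy * sin(phi), z);
  *distToLight = INF_D;                 // the light is infinitely far away
  *pdf = (float)(1.0 / (4.0 * PI));     // uniform over solid angle
  // Convert the direction to (theta, phi), then to continuous texel coords.
  double theta = acos(std::max(-1.0, std::min(1.0, z)));
  double tx = phi / (2.0 * PI) * envMap->w;
  double ty = theta / PI * envMap->h;
  return bilerp(tx, ty);                // hypothetical bilinear lookup helper
}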

You can direct rendering to use a particular environment map with the -e command line parameter (e.g., -e ../exr/grace.exr).

Since high dynamic range environment maps can be large files, we have not included them in the starter code repo. You can download a set of environment maps from this link.

Tips:

Step two: importance sampling the environment map

Much like light in the real world, most of the energy provided by an environment light source is concentrated in the directions toward bright light sources. Therefore, it makes sense to bias selection of sampled directions towards the directions for which incoming radiance is the greatest. In this final task you will implement an importance sampling scheme for environment lights. For environment lights with large variation in incoming light intensities, good importance sampling will significantly improve the quality of renderings.

The basic idea is that you will assign a probability to each pixel in the environment map based on the total flux passing through the solid angle it represents. We've written up a detailed set of notes for you here (see "Task 7 notes").
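
As a hedged sketch of the precomputation (illum() as the per-pixel luminance and the pdf_/cdf_ vector members are assumptions; see the Task 7 notes for the full scheme, including the marginal/conditional factorization):

// Build a discrete distribution over pixels: each pixel is weighted by its
// luminance times sin(theta), since rows near the poles subtend less solid
// angle per texel.
void EnvironmentLight::init_pdf() {
  size_t w = envMap->w, h = envMap->h;
  pdf_.resize(w * h);
  cdf_.resize(w * h);
  double sum = 0;
  for (size_t y = 0; y < h; y++) {
    double sin_theta = sin(PI * (y + 0.5) / h);
    for (size_t x = 0; x < w; x++) {
      double wgt = envMap->data[y * w + x].illum() * sin_theta;
      pdf_[y * w + x] = wgt;
      sum += wgt;
    }
  }
  double accum = 0;
  for (size_t i = 0; i < w * h; i++) {
    pdf_[i] /= sum;                     // probability of choosing pixel i
    accum += pdf_[i];
    cdf_[i] = accum;                    // running CDF for inversion sampling
  }
}

At sample time, draw u uniformly in [0,1), locate the first index with cdf_[i] >= u (std::lower_bound), map that pixel back to (theta, phi) and a world direction, and convert the per-pixel probability to a solid-angle density: p(w) = p(x, y) * (w * h) / (2 * pi^2 * sin(theta)).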

Here are a few tips:

Grading

Your code must run on the GHC 5xxx cluster machines as we will grade on those machines. Do not wait until the submission deadline to test your code on the cluster machines. Keep in mind that there is no perfect way to run on arbitrary platforms. If you experience trouble building on your own computer, the staff may be able to help, but the GHC 5xxx machines will always work and we recommend you work on them.

The assignment consists of a total of 100 pts. The point breakdown is as follows:

Handin Instructions

Your handin directory is on AFS under:

/afs/cs/academic/class/15462-s16-users/ANDREWID/asst3/, or
/afs/cs/academic/class/15662-s16-users/ANDREWID/asst3/

You will need to create the asst3 directory yourself. All your files should be placed there. Please make sure you have a directory and are able to write to it well before the deadline; we are not responsible if you wait until 10 minutes before the deadline and run into trouble. Also, you may need to run aklog cs.cmu.edu after you log in in order to read from/write to your submission directory.

You should submit all files needed to build your project. This includes:

Note: You can save your rendered images from the application by pressing S when your path tracer is done rendering. The screenshots you submit should be rendered at relatively high-quality settings. Feel free to include additional images you have rendered with your path tracer, especially ones that demonstrate the extra credit features you have implemented.

You should also include, in your README file, the pathtracer configuration you used to render the images you are submitting. If you have implemented any of the extra credit features, clearly indicate which extra credit features you have implemented. You should also briefly state anything that you think the grader should be aware of.

Please do not include:

Do not add levels of indirection when submitting, and please use the same arrangement as the handout. We will enter your handin directory and run:

mkdir build && cd build && cmake .. && make

and your code should build correctly. The code must compile and run on the GHC 5xxx cluster machines. Be sure to double-check that you have submitted all files and that your code builds correctly.

Friendly advice from your TAs

Resources and Notes