**Advanced Computer Graphics Project Report**

Motivation
================

My goal was to create a small-scale scene featuring a number of terrariums, as shown below. The main elements would therefore be plants and metals of various roughness. To relate my image to the theme of this year's Rendering Competition, I wanted to build the scene as if the terrarium creator were in the middle of his construction process: a crowded, messy, dynamic workspace that indicated recent use, with a single plant -- the "Last One" -- left outside the terrariums, ready to be properly situated as soon as the creator returned.

Simple Features
================

Texture Mapping
---------------

Textures are a crucial aspect of rendering and can be used in a number of different ways, ranging from storing albedo information for a diffuse BRDF model to environment maps. I implemented a nested texture model in __nori/texture.h__ which currently has four implementations:

- Constant
- Gradient: a bilinear interpolation of 4 corners
- Scale: a linear interpolation of two nested textures, given a factor from a third texture
- Image: loads a bitmap and supports different wrap modes (Clamp, Mirror, Repeat)

Below is an example of my Gradient implementation, generated with a full-screen quad mesh, and my procedurally generated gradient texture:
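The three wrap modes listed above can be sketched as follows. This is a minimal, self-contained sketch with illustrative names, not the actual __nori/texture.h__ interface:

```cpp
#include <cmath>

// Map an arbitrary texture coordinate into [0, 1) according to the
// wrap mode (illustrative sketch of the Clamp/Mirror/Repeat behavior).
enum class WrapMode { Clamp, Mirror, Repeat };

double wrap(double t, WrapMode mode) {
    switch (mode) {
        case WrapMode::Clamp:
            return t < 0.0 ? 0.0 : (t > 1.0 ? 1.0 : t);
        case WrapMode::Repeat:
            return t - std::floor(t);                 // keep fractional part
        case WrapMode::Mirror: {
            double f = t - 2.0 * std::floor(t / 2.0); // period-2 sawtooth
            return f <= 1.0 ? f : 2.0 - f;            // reflect second half
        }
    }
    return t;
}
```

The same function is applied independently to the u and v coordinates before the bitmap lookup.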
Following are two simple texture examples. The left image is a simple bitmap texture, while the right one is constructed from the expression $(\text{gradient} \cdot \text{checkerboard})^2$. I also implemented two basic filtering modes for texture evaluation. Both images were generated by forcing Nori to perform one sample per pixel at the pixel center, without subsampling the pixel.

Thinlens Camera
---------------

To increase the realism of a scene, and to focus the viewer on specific parts of the image, one can use the effect of depth of field. This is not supported by the standard perspective camera implemented in Nori, so I implemented the classical thin-lens model in __src/thinlens.cpp__. The following images show the effect of increasing the aperture from image to image, while keeping the focus plane constant at the second teapot.
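The classical thin-lens model can be sketched in camera space as follows. This is a hedged sketch with my own function names, not Nori's camera interface: a uniform sample on the lens disk shifts the ray origin, and the new direction is aimed at the point where the original pinhole ray meets the plane of focus.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Ray origin on the lens: a uniform unit-disk sample scaled by the radius
Vec3 thinLensOrigin(double lensU, double lensV, double lensRadius) {
    return { lensU * lensRadius, lensV * lensRadius, 0.0 };
}

// New ray direction: from the sampled lens position towards the point
// on the focus plane that the pinhole ray would have hit
Vec3 thinLensDirection(const Vec3 &pinholeDir, const Vec3 &lensOrigin,
                       double focalDistance) {
    double t = focalDistance / pinholeDir.z;
    Vec3 pFocus = { pinholeDir.x * t, pinholeDir.y * t, focalDistance };
    Vec3 d = { pFocus.x - lensOrigin.x, pFocus.y - lensOrigin.y,
               pFocus.z - lensOrigin.z };
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}
```

With a lens radius of zero this degenerates to the standard perspective camera, which is a convenient sanity check.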
Bump Mapping
------------

Often, it is difficult to digitally reproduce the small defects that are an integral aspect of real-world surfaces -- either the geometry used in rendering is not of high enough resolution to model these defects, or the performance impact of such high-resolution geometry is undesirable. Bump mapping is a common technique to overcome this problem by storing information about the surface in a texture. The first variant is normal mapping (in __src/normalmap.cpp__), where the texture stores tangent-space normals. These are transformed into world space and used as the new shading normal (by constructing a new orthogonal frame from the old one and the new normal).
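Constructing the new orthogonal frame amounts to one Gram-Schmidt step: the old tangent is made orthogonal to the mapped normal, and the binormal follows from the cross product. A minimal sketch (illustrative names, not the __src/normalmap.cpp__ interface):

```cpp
#include <cmath>

struct V3 { double x, y, z; };
V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
V3 scale(V3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
V3 cross(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
V3 normalize(V3 a) { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

struct Frame { V3 t, b; };

// Rebuild an orthonormal frame around the world-space mapped normal n:
// project the old tangent off n (Gram-Schmidt), then take the cross product
Frame buildFrame(V3 n, V3 oldTangent) {
    V3 t = normalize(sub(oldTangent, scale(n, dot(n, oldTangent))));
    return { t, cross(n, t) };
}
```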
The second variant is height mapping (__src/heightmap.cpp__), where the texture stores height values. The partial derivatives of the texture at a position then give the adjusted local frame. A user-defined parameter scales the partials to vary the strength of the height mapping effect. The new local frame is computed by tilting the tangent and binormal towards the normal by the value of the respective partial.
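The frame adjustment described above can be sketched as follows (a hedged sketch under my naming assumptions, not the actual __src/heightmap.cpp__ code): each of the tangent and binormal is tilted towards the normal by the scaled height partial, and the perturbed normal is recovered as their cross product.

```cpp
#include <cmath>

struct V3 { double x, y, z; };
V3 add(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
V3 scale(V3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
V3 cross(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
V3 normalize(V3 a) { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

struct Frame3 { V3 t, b, n; };

// Tilt tangent/binormal towards the normal by the scaled height partials,
// then rebuild the normal from the perturbed pair
Frame3 perturbFrame(V3 n, V3 t, V3 b,
                    double dhdu, double dhdv, double strength) {
    V3 tOut = add(t, scale(n, strength * dhdu));
    V3 bOut = add(b, scale(n, strength * dhdv));
    return { tOut, bOut, normalize(cross(tOut, bOut)) };
}
```

A flat height map (both partials zero) leaves the frame unchanged, which is a useful sanity check.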
Lastly, I implemented Parallax Occlusion Mapping, a technique mostly used in real-time graphics. Here, a form of ray tracing is used to traverse the height map. This traversal yields only a new texture coordinate: the position at which the incoming ray should actually have intersected the surface. This new texture coordinate is then used instead of the old one, and the same procedure as in height mapping is applied. This creates very obvious artifacts at the sides of objects, but it is more realistic than normal and simple height mapping because it does not create artifacts at shallow viewing angles.
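The height-map traversal is typically a fixed-step linear march in texture space. The following is a rough sketch under my own sign and parameter conventions (texture-space view direction, height in [0, 1]), not the code from my implementation:

```cpp
#include <functional>

struct UV { double u, v; };

// March the texture coordinate along the projected view direction until
// the ray depth drops below the height field; return the first sample
// below the surface as the adjusted coordinate.
UV parallaxOffset(UV uv, double viewU, double viewV, double viewZ,
                  double heightScale, int numSteps,
                  const std::function<double(double, double)> &height) {
    double stepDepth = 1.0 / numSteps;
    double du = -viewU / viewZ * heightScale * stepDepth;
    double dv = -viewV / viewZ * heightScale * stepDepth;

    double rayDepth = 1.0;                  // start at the top of the layer
    double h = height(uv.u, uv.v);
    while (rayDepth > h && rayDepth > 0.0) {
        uv.u += du; uv.v += dv;
        rayDepth -= stepDepth;
        h = height(uv.u, uv.v);
    }
    return uv;
}
```

Real-time variants usually refine the result with a final interpolation or binary search between the last two samples; the plain march above already conveys the idea.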
I implemented all versions of bump mapping as wrapper BRDFs. These take another BRDF as a child, and their only function is to compute a modified local tangent frame before calling the respective method of the child BRDF.

(Rough) Conductor
---------

For my scene, I needed a BRDF model for metal surfaces. Such materials are called conductors because of their electrical properties (in contrast to dielectrics). In the smooth limit, they are described by a delta BRDF. I used the 'spd' files provided with pbrt as the complex-valued index of refraction for a conductor model. The accompanying Fresnel equations are implemented in __src/common.cpp__, while the conductor BSDF itself is implemented in __src/conductor.cpp__. Here is a validation against Mitsuba's conductor models:
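For reference, the unpolarized Fresnel reflectance for a complex index of refraction $\eta + ik$ has the following closed form (this is the standard exact formula, written per wavelength; the function name is mine, not the __src/common.cpp__ interface):

```cpp
#include <algorithm>
#include <cmath>

// Exact unpolarized Fresnel reflectance for a conductor with complex
// index of refraction eta + i*k, averaged over the two polarizations.
double fresnelConductor(double cosTheta, double eta, double k) {
    double cos2 = cosTheta * cosTheta;
    double sin2 = 1.0 - cos2;
    double t0   = eta * eta - k * k - sin2;
    double a2b2 = std::sqrt(t0 * t0 + 4.0 * eta * eta * k * k);
    double t1   = a2b2 + cos2;
    double a    = std::sqrt(std::max(0.0, 0.5 * (a2b2 + t0)));
    double t2   = 2.0 * a * cosTheta;
    double Rs   = (t1 - t2) / (t1 + t2);   // s-polarized reflectance
    double t3   = cos2 * a2b2 + sin2 * sin2;
    double t4   = t2 * sin2;
    double Rp   = Rs * (t3 - t4) / (t3 + t4); // p-polarized reflectance
    return 0.5 * (Rs + Rp);
}
```

With $k = 0$ this reduces to the familiar dielectric reflectance, e.g. $R = \left(\frac{\eta-1}{\eta+1}\right)^2 = 0.04$ at normal incidence for $\eta = 1.5$.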
I then used the Beckmann distribution (originally implemented for the microfacet model) to implement a rough conductor in __src/roughconductor.cpp__. The process is very similar to the microfacet model, but instead of the real-valued Fresnel equation we employ the complex one, and there is no diffuse term. Since my Beckmann distribution had already been validated by the warptest during the homework assignments, I instead used Mitsuba to render the same scene.
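Concretely, the evaluation follows the standard Torrance-Sparrow microfacet form, stated here for reference (the only change relative to the dielectric microfacet model is the complex-valued Fresnel term $F$):

$$f_r(\omega_i, \omega_o) = \frac{D(\omega_h)\, F(\omega_h \cdot \omega_i)\, G(\omega_i, \omega_o, \omega_h)}{4\, \cos\theta_i\, \cos\theta_o}, \qquad \omega_h = \frac{\omega_i + \omega_o}{\lVert \omega_i + \omega_o \rVert}$$

where $D$ is the Beckmann distribution and $G$ the shadowing-masking term.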
I also verified that my procedure for loading the spd file computes correct RGB values from the spectral data. The difference from the Mitsuba-loaded values is always marginal, and can be explained by the much better integration scheme used in Mitsuba.

Anisotropic Rough Conductor
------------

Metals are often brushed as part of their manufacturing process, which creates surface imperfections in circular patterns. To render this, the isotropic rough conductor is not sufficient; an anisotropic version is needed, which reflects light differently depending on the viewing angle. I extended the isotropic Beckmann distribution to an anisotropic one in __src/warp.cpp__. Initially, I had numerical issues for a subset of $\alpha$ combinations. I solved this by not using the equation verbatim, but instead comparing with Mitsuba and adopting the modified exponent formula by Walter that is used there. I validated the anisotropic Beckmann distribution in the warptest by testing a large number of combinations:
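For reference, the anisotropic Beckmann distribution being tested has the standard form given by Walter et al., with separate roughness values $\alpha_u, \alpha_v$ along the two tangent directions:

$$D(\omega_h) = \frac{1}{\pi\, \alpha_u\, \alpha_v\, \cos^4\theta_h} \exp\!\left(-\tan^2\theta_h \left(\frac{\cos^2\phi_h}{\alpha_u^2} + \frac{\sin^2\phi_h}{\alpha_v^2}\right)\right)$$

For $\alpha_u = \alpha_v$ this collapses to the isotropic Beckmann distribution already validated in the homework assignments.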
For the combination of $\alpha_u = 0.1, \alpha_v = 0.2$ I had to increase the number of samples to pass the test. The isotropic rough conductor has to be extended in only one place, namely to compute a combined alpha value for the BRDF evaluation. The 2014 precompiled version of Mitsuba 0.5 does not provide an anisotropic version of the Beckmann distribution, so no comparisons could be made. But since the isotropic version was compared, and the only difference lies in the separately verified Beckmann distribution, this is deemed sufficient validation. One key prerequisite for anisotropic BRDFs is a continuous tangent frame. These frames were initially not provided in Nori, but I added support for smooth vertex normal computation and for tangent frames aligned with the UV parameterization (more on this in the Advanced Texture Filtering section).

Intermediate Features
================

Image Based Lighting
------------

My project includes image based lighting, which surrounds the entire scene with an environment map that is then treated as an emitter. This is a widely used technique for simulating natural-looking illumination and distant visual surroundings. My implementation in __src/environment.cpp__ does not use the hierarchical sampling method that I implemented for an earlier homework assignment. Instead, it is based on a set of discrete $1D$ PDFs: the first $1D$ distribution represents the distribution of light over the rows of the image (obtained by summing the pixel values in each row). This is used to choose a row of the image, and then another discrete $1D$ distribution (different for each row) samples a column based on the relative intensity of each pixel. I used two strategies to validate correctness. First, I verified that there is no difference between standard path tracing (which does not sample the emitter) and path tracing with NEE (which samples the emitter):
This visually verifies that the sampling density matches the corresponding PDF computed for it. Furthermore, I used the warptest for statistical validation. Sadly, this was not as straightforward. The test passed for small environment map sizes (here, 64x32 pixels):
However, this was no longer the case for large sizes (here, 2048x1024). This is especially striking because the histogram and the integrated density look extremely similar in the visual comparison.
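The two-level scheme of discrete $1D$ distributions described above can be sketched as follows (a self-contained sketch with my own names, not the __src/environment.cpp__ interface): a marginal distribution over row sums picks a row, and per-row conditional distributions of the same type would then pick a column.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// A discrete 1D distribution: normalized inclusive prefix sums of the
// weights, sampled by inverting the CDF with a binary search.
struct Discrete1D {
    std::vector<double> cdf;
    explicit Discrete1D(const std::vector<double> &weights) {
        cdf.resize(weights.size());
        std::partial_sum(weights.begin(), weights.end(), cdf.begin());
        double total = cdf.back();
        for (double &c : cdf) c /= total;
    }
    // Map a uniform sample u in [0,1) to an index
    int sample(double u) const {
        return int(std::lower_bound(cdf.begin(), cdf.end(), u) - cdf.begin());
    }
};

// Pick a row of a luminance image (rows of pixel values) proportionally
// to the summed luminance of each row; column sampling works the same way
// with one Discrete1D per row.
int sampleRow(const std::vector<std::vector<double>> &lum, double u) {
    std::vector<double> rowSums;
    for (const auto &row : lum)
        rowSums.push_back(std::accumulate(row.begin(), row.end(), 0.0));
    return Discrete1D(rowSums).sample(u);
}
```

The PDF of a sampled pixel is then the product of the marginal row probability and the conditional column probability, converted to a solid-angle density with the usual $\sin\theta$ factor.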
Here, the aperture is given by its diameter in millimeters. I tried a specification via the commonly used f-stop, but this is more difficult here because the f-stop can only be related to the focal length and the effective aperture, which is the projection/tracing of the aperture through the lens towards the scene. This proved troublesome to compute correctly. There are a number of components one can validate in this system. In particular, I chose the following three tests:

- Focusing the lens: I validated this by adding an arbitrary distance from the last lens element to the film plane in the lens description file. As expected, this resulted in a focusing distance that was exactly that much smaller.
- The exit pupils used for sampling ray directions can be tested by checking that enlarging them does not change the image but only slows convergence (because of the discarded rays).
- I validated my unified aperture test (whether a ray hits the aperture) by checking that if the number of n-gon points is large, and the ellipse axes have the same length, the result matches the standard circular aperture.

Advanced Texture Filtering
------------

To improve over standard bilinear interpolation for textures, I also implemented more advanced filtering methods (even though I did not use them for my final rendering, since it was not necessary). The first step is to compute partial derivatives of the vertex positions, $\frac{\partial \textbf{p}}{\partial u}$ and $\frac{\partial \textbf{p}}{\partial v}$, while loading the mesh file. We must also trace not a single ray from the camera, but a tuple of 3 rays -- typically called a ray differential. These are the standard primary ray plus offset rays in each pixel direction.
Given these, for a particular hit point, one can then (together with the partial derivatives of the hit point) compute the texture differentials with respect to pixel positions, $\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial v}{\partial x}, \frac{\partial v}{\partial y}$, where $x, y$ are pixel coordinates (__src/mesh.cpp__). This is the crucial information needed for better texture filtering, because it describes the area that one pixel covers in texture space. There are two common filtering methods based on this insight:

- Trilinear Filtering assumes that the pixel covers an isotropic circle in texture space, and interpolates between two mipmap levels (__src/mipmap.cpp__). This creates artifacts due to over-blurring (as seen in the images).
- Elliptical Weighted Averages lift this assumption and compute the ellipse which the pixel covers in texture space. While I did implement this method, I would like to state that the code for computing this elliptical sampling is (while not copied from PBRT) very similar to the implementation found there; I therefore am not seeking credit for implementing this method.
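To show how the texture differentials drive trilinear filtering, here is a sketch of selecting the fractional mipmap level from the pixel footprint (a hedged sketch under my naming assumptions, not the __src/mipmap.cpp__ code). The isotropic conservative bound taken here is exactly the assumption that causes the over-blurring mentioned above:

```cpp
#include <algorithm>
#include <cmath>

// Fractional mipmap level from the texture differentials: take the longer
// of the two footprint axes in texel units and return its log2, clamped
// to the available mip range. The two adjacent integer levels are then
// blended with the fractional part (trilinear filtering).
double mipLevel(double dudx, double dudy, double dvdx, double dvdy,
                int baseResolution) {
    double width = std::max(std::hypot(dudx, dvdx),
                            std::hypot(dudy, dvdy));
    double level = std::log2(std::max(width * baseResolution, 1e-8));
    return std::clamp(level, 0.0, std::log2((double)baseResolution));
}
```

A footprint of one texel selects level 0 (the full-resolution image); doubling the footprint moves up one level.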
Homogeneous Absorbing Medium
----------

To simulate a simple liquid in a glass, I used a homogeneous absorbing medium without any in-scattering. This is parameterized only by the absorption coefficient $\sigma_a$. I extended the BRDF base class with another method that returns whether the object contains a medium inside (for the given pair of $\omega_i, \omega_o$) and, if so, the coefficient of that medium. All BRDFs but the dielectric in __src/dielectric.cpp__ return no medium. The path tracer calls this method after each BRDF sampling event, and after the next ray trace it uses (if necessary) the last $\sigma_a$ coefficient to compute the transmittance $T = e^{-\sigma_a d}$ over the traversed distance $d$ (Beer-Lambert law).

Final Image
================

Below is the final rendering which I submitted to the 2018 Rendering Competition: