Posts

Showing posts from May, 2017

Refraction for rays in transparent material

Yesterday we realized we haven't implemented refraction yet, so I decided to try it out. Refraction is when a wave hits a denser or lighter material and changes direction ever so slightly. An example is when you look through a glass ball and the object behind the glass ball appears distorted. After some research into how refraction works, I implemented it using Snell's law, which is the formula for calculating how waves refract when crossing into another material. The refraction is not perfect, and I probably don't have the time to fix it either. But for now it looks pretty close to the real-world equivalent for glass.

Glass pawn in front of a KTH emblem.

Glass ball in front of a KTH emblem.
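For reference, the vector form of Snell's law can be sketched roughly like this (a minimal standalone version; the Vec3 helpers and function names are illustrative, not our actual code):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(const Vec3& v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Refract a unit incident direction I about a unit normal N using Snell's
// law. eta is the ratio n1/n2 of the refractive indices. Returns false on
// total internal reflection, in which case no refracted ray exists.
bool refract(const Vec3& I, const Vec3& N, double eta, Vec3& out) {
    double cosi = -dot(I, N);                          // cosine of angle of incidence
    double k = 1.0 - eta * eta * (1.0 - cosi * cosi);  // squared cosine of refracted angle
    if (k < 0.0) return false;                         // total internal reflection
    out = add(scale(I, eta), scale(N, eta * cosi - std::sqrt(k)));
    return true;
}
```

At normal incidence the direction passes through unchanged, and past the critical angle (e.g. leaving glass at a steep angle) the function reports total internal reflection.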

Per-channel reflectivity and opacity plus metallic reflection mode

Going against the project spec ever so slightly, I decided to make the material properties for reflectivity and opacity into vec3s. This gives the material system more flexibility in that reflectivity and opacity can be controlled for each color component (RGB) individually. Having completed the change, I tried to use the new flexibility to create a metallic material, but had trouble getting the reflections tinted the way I wanted. As a test, I added a "metallic" flag to the material structure and had the raytracer give such materials special treatment by converting the color gathered from reflection to grayscale (by RGB average) and multiplying it with the diffuse color of the material. This is probably wildly inaccurate in a physical sense, but it actually seems to work pretty well, so I decided to leave it in.

Without the "metallic" flag.

Using the "metallic" flag.

Ordering of lighting, reflection and opacity

I realized the ordering of lighting, reflections and opacity was pretty wacky. Previously, color gathered by reflection actually went into the Phong ambient/diffuse/specular factors of the reflective material and proceeded to be lit as if the reflection had somehow transferred an actual texture onto the reflective material. This also meant that reflections vanished if the surface was not lit, an obvious bug that had gone unnoticed for a while. The new order is quite a bit more physically reasonable, with reflected light simply being added to whatever light the material itself is reflecting (off the light source[s]).

Texture mapping

Texture mapping for triangles and spheres has been implemented. This involved adding a new abstract (interface) Texture class along with a concrete implementation in the form of an SDLTexture class, which loads image files using SDL_image. Barycentric coordinates are used to compute UV texture coordinates for ray-triangle intersections. For sphere UV I eventually found an elegant method on Wikipedia. I first tried to come up with the sphere UV maths myself and actually came somewhat close to the method described on Wikipedia. My method did manage to map the entire surface of the sphere to the desired ranges x,y in [0,1], but resulted in incorrect stretching centered along the horizontal equator. As an unintended bonus, it turned out that the code had no problem texturing the inside of the sphere. Handy to know should I ever need to implement a "photo sphere" viewer. I'm thinking of adding a property to the sphere object for indicating whether the sphere is hollo...
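The standard spherical mapping is short enough to sketch here (a minimal version; the axis convention is an assumption, our code may differ): given the unit vector from the sphere's center to the hit point, atan2 gives the longitude and asin the latitude, both rescaled into [0,1].

```cpp
#include <cmath>

// Spherical UV mapping: (dx, dy, dz) is the unit vector from the sphere's
// center to the surface point; u, v come out in [0, 1]. Assumes y is "up".
void sphereUV(double dx, double dy, double dz, double& u, double& v) {
    const double pi = std::acos(-1.0);
    u = 0.5 + std::atan2(dz, dx) / (2.0 * pi);  // longitude
    v = 0.5 - std::asin(dy) / pi;               // latitude
}
```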

Wireframe for spheres

I had originally planned to implement really nice sphere wireframes by drawing curved lines, but as time is running out I went with the easier backup plan of simply generating points interconnected by straight line segments. The code still contains the framework support for the original idea. The wireframe renderer is fed a list of "WireframeElement" objects, each containing a set of points, normals for each of the points, and an indicator for which kind of "connector" function is to be used for drawing the connecting lines. At this time the only available connector is "Linear", which, as the name implies, performs linear interpolation between points and thus draws straight lines.

Specular reflection and Phong shading/lighting

Specular reflection has been implemented according to plan, recursively calling the raycasting function with new rays that have been given new directions according to the law of reflection.

A highly reflective sphere on the floor of the Cornell box.

Phong shading and lighting have been added as well, presenting somewhat more of a challenge. Barycentric coordinates are computed and used to interpolate normals across triangles. Switching to actually reading and using individual vertex normals (instead of the edge-cross-product surface normal of each triangle) caused the backface culling code to stop working, as it relied on the first vertex of each triangle having its normal vector set to the surface normal. This was corrected by adding a new property to our Triangle structure to hold the geometric surface normal, which is then used for backface culling. The wireframe mode has been enhanced with the ability to draw both vertex and surface normals. In the following ...
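The normal interpolation step can be sketched as follows (a minimal standalone version; names are illustrative): the barycentric weights of the hit point blend the three vertex normals, and the result is renormalized since a blend of unit vectors is generally not unit length.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Interpolate vertex normals n0, n1, n2 at a triangle hit with barycentric
// weights (w0, w1, w2), which sum to 1, then renormalize the result.
Vec3 interpolateNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                       double w0, double w1, double w2) {
    Vec3 n = { w0 * n0.x + w1 * n1.x + w2 * n2.x,
               w0 * n0.y + w1 * n1.y + w2 * n2.y,
               w0 * n0.z + w1 * n1.z + w2 * n2.z };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```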

First attempt at opacity

Except for the inaccurate color blending, handling opacity in the raytracer seems fairly straightforward. When a semitransparent material is hit, a new ray is cast from the point of intersection in the same direction to gather color from behind the transparent object. You can see in the video how rendering slows down when I move closer to the transparent triangle, having it fill more of the screen and thus cast a lot of these extra rays. I did run into the familiar problem of rays instantly intersecting the triangle they are "sent from", and just like in lab 2 solved it by moving the origin of the ray ever so slightly along the ray direction vector. As a bonus, back-face culling can be seen in the video in the form of walls disappearing at certain angles.

Progress so far

A couple of days into the project we were up and running with SDL2. The switch from the SDL 1.2 used in the labs wasn't really worth the effort, though. The program was now able to switch between wireframe (albeit with no wires) and raytracing to render a lonely test triangle. Camera controls were in as well. Next we added loading of .OBJ files and created an .OBJ for the exact Cornell box used in the labs. Flat colors are pretty boring, so we copied in the point lighting code from lab 2 (to be replaced later with the Phong lighting model). The triangles are contained in a Mesh object and each triangle contains a Material instance, at this point in time only supporting the single property "color". Not having actual wires in wireframe mode was pretty silly, so we added some. The next addition was spheres. These are instances of the class Sphere, inheriting from the abstract class (interface) Object, which Mesh also inherits from. These Object inst...
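The object hierarchy described above can be sketched roughly like this (class names follow the post; the intersect() signature, members and the ray-sphere math are assumptions about the shape of the code, not the code itself):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Ray { Vec3 origin, dir; };

// Abstract base: anything the raytracer can hit. Mesh inherits from this too.
class Object {
public:
    virtual ~Object() = default;
    // Distance t along the ray to the nearest hit, or a negative value on miss.
    virtual double intersect(const Ray& ray) const = 0;
};

class Sphere : public Object {
public:
    Vec3 center;
    double radius;
    Sphere(Vec3 c, double r) : center(c), radius(r) {}

    // Solve the quadratic |origin + t*dir - center|^2 = radius^2 for t.
    double intersect(const Ray& ray) const override {
        Vec3 oc = { ray.origin.x - center.x,
                    ray.origin.y - center.y,
                    ray.origin.z - center.z };
        double a = ray.dir.x * ray.dir.x + ray.dir.y * ray.dir.y + ray.dir.z * ray.dir.z;
        double b = 2.0 * (oc.x * ray.dir.x + oc.y * ray.dir.y + oc.z * ray.dir.z);
        double c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return -1.0;                  // ray misses the sphere
        return (-b - std::sqrt(disc)) / (2.0 * a);    // nearest root
    }
};
```

Keeping Sphere and Mesh behind a common interface means the raytracer's main loop just iterates over a list of Object pointers.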

Project specification

Idea

The main idea of the project is to explore more features (more than were explored in lab 2) common in ray tracing rendering in an effort to gain an increased understanding of the techniques, strengths and weaknesses of conventional ray tracing in general. Starting from the feature set established by the ray tracer implemented in lab 2 (but not necessarily directly building upon that code), we will be adding a number of features to be implemented in a single executable application.

Project plan

The following section contains two sets of implementation goals. The first set ("Baseline") is what we consider to be the minimum for project completion, while the second set ("Bonus") contains ideas that are not essential but could improve the project should we find the time to implement them.

Baseline features

Fast wireframe view

The application should default to a fast wireframe renderer for positioning the camera before initiating the potentially time-consuming high-quality ren...

Blog intro

Hello, World! This blog is used to document the progress of a student project in the course Computer Graphics and Interaction (DH2323, Spring 2017) at the Royal Institute of Technology (KTH) in Stockholm, Sweden. The project team consists of Mikael Forsberg and Robin Gunning, both studying computer science at KTH. We actually more or less forgot the requirement to blog about our progress, so the dates of the first couple of posts will not be accurate indicators of when we did the work they describe: we are initially trying to retroactively describe the progress made until now (May 18th).