**Ray Tracing in One Weekend**
[Peter Shirley][]
edited by [Steve Hollasch][] and [Trevor David Black][]
translated to Rust by [Daniel Busch][]
Version 3.2.3, 2020-12-07
Copyright 2018-2020 Peter Shirley. All rights reserved.
Overview
====================================================================================================
I’ve taught many graphics classes over the years. Often I do them in ray tracing, because you are
forced to write all the code, but you can still get cool images with no API. I decided to adapt my
course notes into a how-to, to get you to a cool program as quickly as possible. It will not be a
full-featured ray tracer, but it does have the indirect lighting which has made ray tracing a staple
in movies. Follow these steps, and the architecture of the ray tracer you produce will be good for
extending to a more extensive ray tracer if you get excited and want to pursue that.
When somebody says “ray tracing” it could mean many things. What I am going to describe is
technically a path tracer, and a fairly general one. While the code will be pretty simple (let the
computer do the work!) I think you’ll be very happy with the images you can make.
I’ll take you through writing a ray tracer in the order I do it, along with some debugging tips. By
the end, you will have a ray tracer that produces some great images. You should be able to do this
in a weekend. If you take longer, don’t worry about it. I use C++ as the driving language, but you
don’t need to. However, I suggest you do, because it’s fast, portable, and most production movie and
video game renderers are written in C++. Note that I avoid most “modern features” of C++, but
inheritance and operator overloading are too useful for ray tracers to pass on. I do not provide the
code online, but the code is real and I show all of it except for a few straightforward operators in
the `Vec3` class. I am a big believer in typing in code to learn it, but when code is available I
use it, so I only practice what I preach when the code is not available. So don’t ask!
I have left that last part in because it is funny what a 180 I have done. Several readers ended up
with subtle errors that were helped when we compared code. So please do type in the code, but if you
want to look at mine it is at:
https://github.com/RayTracing/raytracing.github.io/
I assume a little bit of familiarity with vectors (like dot product and vector addition). If you
don’t know that, do a little review. If you need that review, or to learn it for the first time,
check out Marschner’s and my graphics text, Foley, Van Dam, _et al._, or McGuire’s graphics codex.
If you run into trouble, or do something cool you’d like to show somebody, send me some email at
ptrshrl@gmail.com.
I’ll be maintaining a site related to the book, including further reading and links to resources, at
the blog https://in1weekend.blogspot.com/.
Thanks to everyone who lent a hand on this project. You can find them in the acknowledgments section
at the end of this book.
Let’s get on with it!
Prologue to the Rust version of this book
====================================================================================================
When learning a new programming language, finding a suitable project that is fun, challenging, and
still an achievable goal is often difficult, and that difficulty may even keep folks from starting
at all. I personally think a ray tracer is such a project, because of the visual improvements you
see after each step forward and the beautiful result waiting at the end.
This translated version should guide you through some basics of the Rust programming language by
implementing a ray tracer. For now, mainly the code sections have been adapted to Rust, but I am
planning to add more pedagogical explanations and introductions to the Rust-specific content in the
main text, and I encourage you to help with this, for example with pull requests!
And now let's really get on with it!
Output an Image
====================================================================================================
The PPM Image Format
---------------------
Whenever you start a renderer, you need a way to see an image. The most straightforward way is to
write it to a file. The catch is, there are so many formats. Many of those are complex. I always
start with a plain text ppm file. Here’s a nice description from Wikipedia:
![Figure [ppm]: PPM Example](images/fig-1.01-ppm.jpg)
Let’s make some Rust code to output such a thing:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn main() {
    const IMAGE_WIDTH: u64 = 256;
    const IMAGE_HEIGHT: u64 = 256;

    println!("P3");
    println!("{} {}", IMAGE_WIDTH, IMAGE_HEIGHT);
    println!("255");

    for j in (0..IMAGE_HEIGHT).rev() {
        for i in 0..IMAGE_WIDTH {
            let r = (i as f64) / ((IMAGE_WIDTH - 1) as f64);
            let g = (j as f64) / ((IMAGE_HEIGHT - 1) as f64);
            let b = 0.25;

            let ir = (255.999 * r) as u64;
            let ig = (255.999 * g) as u64;
            let ib = (255.999 * b) as u64;

            println!("{} {} {}", ir, ig, ib);
        }
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-initial]: [main.rs] Creating your first image]
There are some things to note in that code:
1. The pixels are written out in rows with pixels left to right.
2. The rows are written out from top to bottom.
3. By convention, each of the red/green/blue components ranges from 0.0 to 1.0. We will relax that
later when we internally use high dynamic range, but before output we will tone map to the zero
to one range, so this code won’t change.
4. Red goes from fully off (black) to fully on (bright red) from left to right, and green goes
from black at the bottom to fully on at the top. Red and green together make yellow so we
should expect the upper right corner to be yellow.
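The scaling in points 3 and 4 is worth a closer look. A small sketch (the helper name `to_byte` is mine, not from the listing) shows why the factor is 255.999 rather than 256.0 or 255.0:

```rust
/// Convert a color component in [0.0, 1.0] to an integer in 0..=255.
/// The 255.999 factor gives every integer an (almost) equal-sized slice
/// of the input range while still truncating 1.0 down to 255 instead of
/// overflowing to 256.
pub fn to_byte(component: f64) -> u64 {
    (255.999 * component) as u64
}
```

With a factor of 256.0, a component of exactly 1.0 would map to 256, which is outside the 0–255 range declared in the PPM header.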
Creating an Image File
-----------------------
Because the file is written to the program output, you'll need to redirect it to an image file.
Typically this is done from the command-line by using the `>` redirection operator, like so:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cargo run > image.ppm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Opening the output file (in `ToyViewer` on my Mac, but try it in your favorite viewer and Google
“ppm viewer” if your viewer doesn’t support it) shows this result:
![Image 1: First PPM image](images/img-1.01-first-ppm-image.png class=pixel)
Hooray! This is the graphics “hello world”. If your image doesn’t look like that, open the output
file in a text editor and see what it looks like. It should start something like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
P3
256 256
255
0 255 63
1 255 63
2 255 63
3 255 63
4 255 63
5 255 63
6 255 63
7 255 63
8 255 63
9 255 63
..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [first-img]: First image output]
If it doesn’t, then you probably just have some newlines or something similar that is confusing the
image reader.
Adding a Progress Indicator
----------------------------
Before we continue, let's add a progress indicator to our output. This is a handy way to track the
progress of a long render, and also to possibly identify a run that's stalled out due to an infinite
loop or other problem.
Our program outputs the image to the standard output stream (`println!`), so leave that alone and
instead write to the error output stream (`eprintln!`):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
use std::io::{stderr, Write};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn main() {
    const IMAGE_WIDTH: u64 = 256;
    const IMAGE_HEIGHT: u64 = 256;

    println!("P3");
    println!("{} {}", IMAGE_WIDTH, IMAGE_HEIGHT);
    println!("255");

    for j in (0..IMAGE_HEIGHT).rev() {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
        eprint!("\rScanlines remaining: {:3}", j + 1);
        stderr().flush().unwrap();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
        for i in 0..IMAGE_WIDTH {
            let r = (i as f64) / ((IMAGE_WIDTH - 1) as f64);
            let g = (j as f64) / ((IMAGE_HEIGHT - 1) as f64);
            let b = 0.25;

            let ir = (255.999 * r) as u64;
            let ig = (255.999 * g) as u64;
            let ib = (255.999 * b) as u64;

            println!("{} {} {}", ir, ig, ib);
        }
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    eprintln!("Done.");
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-progress]: [main.rs] Main with progress reporting]
The Vec3 Struct
====================================================================================================
Almost all graphics programs have some class(es) for storing geometric vectors and colors. In many
systems these vectors are 4D (3D plus a homogeneous coordinate for geometry, and RGB plus an alpha
transparency channel for colors). For our purposes, three coordinates suffice. We’ll use the same
struct `Vec3` for colors, locations, directions, offsets, whatever. Some people don’t like this
because it doesn’t prevent you from doing something silly, like adding a color to a location. They
have a good point, but we’re going to always take the “less code” route when not obviously wrong.
In spite of this, we do declare two aliases for `Vec3`: `Point3` and `Color`. Since these two types
are just aliases for `Vec3`, you won't get warnings if you pass a `Color` to a function expecting a
`Point3`, for example. We use them only to clarify intent and use.
Variables and Methods
----------------------
We use `f64` here, but some ray tracers use `f32`. Either one is fine -- follow your own
tastes.
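The struct definition itself is not shown here; as a rough sketch (the field name `e` and the `Index` implementation are inferred from the `cross` listing below, the rest is one reasonable choice), it might look like:

```rust
use std::ops::{Index, IndexMut};

#[derive(Clone, Copy, Debug)]
pub struct Vec3 {
    e: [f64; 3],
}

// Aliases to clarify intent; both are just Vec3 under the hood.
pub type Point3 = Vec3;
pub type Color = Vec3;

impl Vec3 {
    pub fn new(e0: f64, e1: f64, e2: f64) -> Vec3 {
        Vec3 { e: [e0, e1, e2] }
    }
}

impl Index<usize> for Vec3 {
    type Output = f64;

    fn index(&self, index: usize) -> &f64 {
        &self.e[index]
    }
}

impl IndexMut<usize> for Vec3 {
    fn index_mut(&mut self, index: usize) -> &mut f64 {
        &mut self.e[index]
    }
}
```

Deriving `Copy` lets the methods below take `self` by value without consuming the vector.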
Vec3 Utility Functions
-----------------------
We further add some vector utility functions to the `Vec3` implementation, and implement the `Display` trait:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    pub fn x(self) -> f64 {
        self[0]
    }

    pub fn y(self) -> f64 {
        self[1]
    }

    pub fn z(self) -> f64 {
        self[2]
    }

    pub fn dot(self, other: Vec3) -> f64 {
        self[0] * other[0] + self[1] * other[1] + self[2] * other[2]
    }

    pub fn length(self) -> f64 {
        self.dot(self).sqrt()
    }

    pub fn cross(self, other: Vec3) -> Vec3 {
        Vec3 {
            e: [
                self[1] * other[2] - self[2] * other[1],
                self[2] * other[0] - self[0] * other[2],
                self[0] * other[1] - self[1] * other[0]
            ]
        }
    }

    pub fn normalized(self) -> Vec3 {
        self / self.length()
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [Vec3-utility]: [vec.rs] Vec3 utility functions]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
..
use std::fmt;
use std::fmt::Display;
..
impl Display for Vec3 {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "({}, {}, {})", self[0], self[1], self[2])
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [Vec3-display]: [vec.rs] Vec3 display trait implementation]
Color Utility Functions
------------------------
Using our new `Vec3` struct, we'll create a utility function to format a single pixel's color.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    pub fn format_color(self) -> String {
        format!("{} {} {}", (255.999 * self[0]) as u64,
                            (255.999 * self[1]) as u64,
                            (255.999 * self[2]) as u64)
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [Color]: [vec.rs] Color utility functions]
Now we can change our main to use this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
mod vec;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
use std::io::{stderr, Write};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
use vec::{Vec3, Color};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn main() {
    const IMAGE_WIDTH: u64 = 256;
    const IMAGE_HEIGHT: u64 = 256;

    println!("P3");
    println!("{} {}", IMAGE_WIDTH, IMAGE_HEIGHT);
    println!("255");

    for j in (0..IMAGE_HEIGHT).rev() {
        eprint!("\rScanlines remaining: {:3}", j + 1);
        stderr().flush().unwrap();

        for i in 0..IMAGE_WIDTH {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
            let pixel_color = Color::new((i as f64) / ((IMAGE_WIDTH - 1) as f64),
                                         (j as f64) / ((IMAGE_HEIGHT - 1) as f64),
                                         0.25);

            println!("{}", pixel_color.format_color());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
        }
    }

    eprintln!("Done.");
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ppm-2]: [main.rs] Final code for the first PPM image]
Rays, a Simple Camera, and Background
====================================================================================================
The ray Struct
--------------
The one thing that all ray tracers have is a ray struct and a computation of what color is seen along
a ray. Let’s think of a ray as a function $\mathbf{P}(t) = \mathbf{A} + t \mathbf{b}$. Here
$\mathbf{P}$ is a 3D position along a line in 3D. $\mathbf{A}$ is the ray origin and $\mathbf{b}$ is
the ray direction. The ray parameter $t$ is a real number (`f64` in the code). Plug in a
different $t$ and $\mathbf{P}(t)$ moves the point along the ray. Add in negative $t$ values and you
can go anywhere on the 3D line. For positive $t$, you get only the parts in front of $\mathbf{A}$,
and this is what is often called a half-line or ray.
![Figure [lerp]: Linear interpolation](images/fig-1.02-lerp.jpg)
The function $\mathbf{P}(t)$ in more verbose code form I call `ray::at(t)`:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
use super::vec::{Vec3, Point3};

pub struct Ray {
    orig: Point3,
    dir: Vec3
}

impl Ray {
    pub fn new(origin: Point3, direction: Vec3) -> Ray {
        Ray {
            orig: origin,
            dir: direction
        }
    }

    pub fn origin(&self) -> Point3 {
        self.orig
    }

    pub fn direction(&self) -> Vec3 {
        self.dir
    }

    pub fn at(&self, t: f64) -> Point3 {
        self.orig + t * self.dir
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-initial]: [ray.rs] The ray struct]
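Stripped of the `Vec3` machinery, `Ray::at` is just the componentwise evaluation of $\mathbf{P}(t) = \mathbf{A} + t\mathbf{b}$; this standalone sketch (plain arrays instead of `Point3`, for illustration only) shows the same computation:

```rust
/// Evaluate P(t) = A + t*b componentwise, mirroring Ray::at.
pub fn ray_at(origin: [f64; 3], direction: [f64; 3], t: f64) -> [f64; 3] {
    [
        origin[0] + t * direction[0],
        origin[1] + t * direction[1],
        origin[2] + t * direction[2],
    ]
}
```

Negative `t` values land behind the origin, which is why restricting to positive `t` gives a half-line.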
Sending Rays Into the Scene
----------------------------
Now we are ready to turn the corner and make a ray tracer. At the core, the ray tracer sends rays
through pixels and computes the color seen in the direction of those rays. The involved steps are
(1) calculate the ray from the eye to the pixel, (2) determine which objects the ray intersects, and
(3) compute a color for that intersection point. When first developing a ray tracer, I always do a
simple camera for getting the code up and running. I also make a simple `ray_color(ray)` function
that returns the color of the background (a simple gradient).
I’ve often gotten into trouble using square images for debugging because I transpose $x$ and $y$ too
often, so I’ll use a non-square image. For now we'll use a 16:9 aspect ratio, since that's so
common.
In addition to setting up the pixel dimensions for the rendered image, we also need to set up a
virtual viewport through which to pass our scene rays. For the standard square pixel spacing, the
viewport's aspect ratio should be the same as our rendered image. We'll just pick a viewport two
units in height. We'll also set the distance between the projection plane and the projection point
to be one unit. This is referred to as the “focal length”, not to be confused with “focus distance”,
which we'll present later.
I’ll put the “eye” (or camera center if you think of a camera) at $(0,0,0)$. I will have the y-axis
go up, and the x-axis to the right. In order to respect the convention of a right handed coordinate
system, into the screen is the negative z-axis. I will traverse the screen from the upper left hand
corner, and use two offset vectors along the screen sides to move the ray endpoint across the
screen. Note that I do not make the ray direction a unit length vector because I think not doing
that makes for simpler and slightly faster code.
![Figure [cam-geom]: Camera geometry](images/fig-1.03-cam-geom.jpg)
Below in code, the ray `r` goes to approximately the pixel centers (I won’t worry about exactness
for now because we’ll add antialiasing later):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
mod vec;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
mod ray;

use vec::{Vec3, Point3, Color};
use ray::Ray;

fn ray_color(r: &Ray) -> Color {
    let unit_direction = r.direction().normalized();
    let t = 0.5 * (unit_direction.y() + 1.0);
    (1.0 - t) * Color::new(1.0, 1.0, 1.0) + t * Color::new(0.5, 0.7, 1.0)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn main() {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    // Image
    const ASPECT_RATIO: f64 = 16.0 / 9.0;
    const IMAGE_WIDTH: u64 = 256;
    const IMAGE_HEIGHT: u64 = ((IMAGE_WIDTH as f64) / ASPECT_RATIO) as u64;

    // Camera
    let viewport_height = 2.0;
    let viewport_width = ASPECT_RATIO * viewport_height;
    let focal_length = 1.0;

    let origin = Point3::new(0.0, 0.0, 0.0);
    let horizontal = Vec3::new(viewport_width, 0.0, 0.0);
    let vertical = Vec3::new(0.0, viewport_height, 0.0);
    let lower_left_corner = origin - horizontal / 2.0 - vertical / 2.0
        - Vec3::new(0.0, 0.0, focal_length);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    println!("P3");
    println!("{} {}", IMAGE_WIDTH, IMAGE_HEIGHT);
    println!("255");

    for j in (0..IMAGE_HEIGHT).rev() {
        eprint!("\rScanlines remaining: {:3}", j + 1);
        stderr().flush().unwrap();

        for i in 0..IMAGE_WIDTH {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
            let u = (i as f64) / ((IMAGE_WIDTH - 1) as f64);
            let v = (j as f64) / ((IMAGE_HEIGHT - 1) as f64);

            let r = Ray::new(origin,
                             lower_left_corner + u * horizontal + v * vertical - origin);
            let pixel_color = ray_color(&r);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
            println!("{}", pixel_color.format_color());
        }
    }

    eprintln!("Done.");
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-blue-white-blend]: [main.rs] Rendering a blue-to-white gradient]
The `ray_color` function linearly blends white and blue depending on the height of the $y$
coordinate _after_ scaling the ray direction to unit length (so $-1.0 < y < 1.0$). Because we're
looking at the $y$ height after normalizing the vector, you'll notice a horizontal gradient to the
color in addition to the vertical gradient.
I then did a standard graphics trick of scaling that to $0.0 ≤ t ≤ 1.0$. When $t = 1.0$ I want blue.
When $t = 0.0$ I want white. In between, I want a blend. This forms a “linear blend”, or “linear
interpolation”, or “lerp” for short, between two things. A lerp is always of the form
$$ \text{blendedValue} = (1-t)\cdot\text{startValue} + t\cdot\text{endValue}, $$
with $t$ going from zero to one. In our case this produces:
![Image 2: A blue-to-white gradient depending on ray Y coordinate
](images/img-1.02-blue-to-white.png class=pixel)
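As a quick numeric check of the lerp (plain arrays stand in for `Color` here; this sketch is illustrative, not part of the renderer):

```rust
/// blendedValue = (1 - t) * startValue + t * endValue, per channel.
pub fn lerp(start: [f64; 3], end: [f64; 3], t: f64) -> [f64; 3] {
    [
        (1.0 - t) * start[0] + t * end[0],
        (1.0 - t) * start[1] + t * end[1],
        (1.0 - t) * start[2] + t * end[2],
    ]
}
```

At t = 0 this returns the start color unchanged, and at t = 1 the end color, matching the white and blue endpoints above.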
Adding a Sphere
====================================================================================================
Let’s add a single object to our ray tracer. People often use spheres in ray tracers because
calculating whether a ray hits a sphere is pretty straightforward.
Ray-Sphere Intersection
------------------------
Recall that the equation for a sphere centered at the origin of radius $R$ is $x^2 + y^2 + z^2 =
R^2$. Put another way, if a given point $(x,y,z)$ is on the sphere, then $x^2 + y^2 + z^2 = R^2$. If
the given point $(x,y,z)$ is _inside_ the sphere, then $x^2 + y^2 + z^2 < R^2$, and if a given point
$(x,y,z)$ is _outside_ the sphere, then $x^2 + y^2 + z^2 > R^2$.
It gets uglier if the sphere center is at $(C_x, C_y, C_z)$:
$$ (x - C_x)^2 + (y - C_y)^2 + (z - C_z)^2 = r^2 $$
In graphics, you almost always want your formulas to be in terms of vectors so all the x/y/z stuff
is under the hood in the `Vec3` struct. You might note that the vector from center
$\mathbf{C} = (C_x,C_y,C_z)$ to point $\mathbf{P} = (x,y,z)$ is $(\mathbf{P} - \mathbf{C})$, and
therefore
$$ (\mathbf{P} - \mathbf{C}) \cdot (\mathbf{P} - \mathbf{C})
= (x - C_x)^2 + (y - C_y)^2 + (z - C_z)^2
$$
So the equation of the sphere in vector form is:
$$ (\mathbf{P} - \mathbf{C}) \cdot (\mathbf{P} - \mathbf{C}) = r^2 $$
We can read this as “any point $\mathbf{P}$ that satisfies this equation is on the sphere”. We want
to know if our ray $\mathbf{P}(t) = \mathbf{A} + t\mathbf{b}$ ever hits the sphere anywhere. If it
does hit the sphere, there is some $t$ for which $\mathbf{P}(t)$ satisfies the sphere equation. So
we are looking for any $t$ where this is true:
$$ (\mathbf{P}(t) - \mathbf{C}) \cdot (\mathbf{P}(t) - \mathbf{C}) = r^2 $$
or expanding the full form of the ray $\mathbf{P}(t)$:
$$ (\mathbf{A} + t \mathbf{b} - \mathbf{C})
\cdot (\mathbf{A} + t \mathbf{b} - \mathbf{C}) = r^2 $$
The rules of vector algebra are all that we would want here. If we expand that equation and move all
the terms to the left hand side we get:
$$ t^2 \mathbf{b} \cdot \mathbf{b}
+ 2t \mathbf{b} \cdot (\mathbf{A}-\mathbf{C})
+ (\mathbf{A}-\mathbf{C}) \cdot (\mathbf{A}-\mathbf{C}) - r^2 = 0
$$
The vectors and $r$ in that equation are all constant and known. The unknown is $t$, and the
equation is a quadratic, like you probably saw in your high school math class. You can solve for $t$
and there is a square root part that is either positive (meaning two real solutions), negative
(meaning no real solutions), or zero (meaning one real solution). In graphics, the algebra almost
always relates very directly to the geometry. What we have is:
![Figure [ray-sphere]: Ray-sphere intersection results](images/fig-1.04-ray-sphere.jpg)
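To make the three cases concrete, here is the quadratic worked through for the sphere and camera this book sets up shortly (center $(0,0,-1)$, radius $0.5$, ray from the origin straight down the z-axis). The helper names are mine, for illustration only:

```rust
/// Solve t^2 (b.b) + 2t b.(A-C) + (A-C).(A-C) - r^2 = 0 for a ray A + t*b.
/// Returns the two roots (t0 <= t1) when the discriminant is non-negative.
pub fn hit_sphere_roots(
    center: [f64; 3],
    radius: f64,
    origin: [f64; 3],
    direction: [f64; 3],
) -> Option<(f64, f64)> {
    let dot = |u: [f64; 3], v: [f64; 3]| u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    let oc = [origin[0] - center[0], origin[1] - center[1], origin[2] - center[2]];
    let a = dot(direction, direction);
    let b = 2.0 * dot(oc, direction);
    let c = dot(oc, oc) - radius * radius;
    let discriminant = b * b - 4.0 * a * c;
    if discriminant < 0.0 {
        return None; // negative discriminant: the ray misses the sphere
    }
    let sqrtd = discriminant.sqrt();
    Some(((-b - sqrtd) / (2.0 * a), (-b + sqrtd) / (2.0 * a)))
}
```

For a ray from $(0,0,0)$ toward $(0,0,-1)$ this yields $t = 0.5$ (front of the sphere) and $t = 1.5$ (back), i.e. two real solutions; nudging the sphere out of the ray's path makes the discriminant negative.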
Creating Our First Raytraced Image
-----------------------------------
If we take that math and hard-code it into our program, we can test it by coloring red any pixel
that hits a small sphere we place at -1 on the z-axis:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
fn hit_sphere(center: Point3, radius: f64, r: &Ray) -> bool {
    let oc = r.origin() - center;
    let a = r.direction().dot(r.direction());
    let b = 2.0 * oc.dot(r.direction());
    let c = oc.dot(oc) - radius * radius;
    let discriminant = b * b - 4.0 * a * c;

    discriminant > 0.0
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn ray_color(r: &Ray) -> Color {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    if hit_sphere(Point3::new(0.0, 0.0, -1.0), 0.5, r) {
        return Color::new(1.0, 0.0, 0.0);
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    let unit_direction = r.direction().normalized();
    let t = 0.5 * (unit_direction.y() + 1.0);
    (1.0 - t) * Color::new(1.0, 1.0, 1.0) + t * Color::new(0.5, 0.7, 1.0)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-red-sphere]: [main.rs] Rendering a red sphere]
What we get is this:
![Image 3: A simple red sphere](images/img-1.03-red-sphere.png class=pixel)
Now this lacks all sorts of things -- like shading and reflection rays and more than one object --
but we are closer to halfway done than we are to our start! One thing to be aware of is that we
tested whether the ray hits the sphere at all, but $t < 0$ solutions work fine. If you change your
sphere center to $z = +1$ you will get exactly the same picture because you see the things behind
you. This is not a feature! We’ll fix those issues next.
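A one-dimensional sketch makes the behind-the-camera problem concrete. With the camera at the origin and everything on the z-axis, moving the sphere from $z = -1$ to $z = +1$ keeps the discriminant positive, but both roots turn negative (the names here are illustrative):

```rust
/// Roots of the ray-sphere quadratic for a ray starting at the origin,
/// with the sphere center and ray direction both on the z-axis, so the
/// math collapses to one dimension.
pub fn sphere_roots(center_z: f64, radius: f64, dir_z: f64) -> (f64, f64) {
    let oc = -center_z; // (A - C) z-component, with A at the origin
    let a = dir_z * dir_z;
    let b = 2.0 * oc * dir_z;
    let c = oc * oc - radius * radius;
    let sqrtd = (b * b - 4.0 * a * c).sqrt();
    ((-b - sqrtd) / (2.0 * a), (-b + sqrtd) / (2.0 * a))
}
```

A naive test of `discriminant > 0.0` treats both cases as hits, even though the second sphere sits entirely behind the camera.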
Surface Normals and Multiple Objects
====================================================================================================
Shading with Surface Normals
-----------------------------
First, let’s get ourselves a surface normal so we can shade. This is a vector that is perpendicular
to the surface at the point of intersection. There are two design decisions to make for normals.
The first is whether these normals are unit length. That is convenient for shading so I will say
yes, but I won’t enforce that in the code. This could allow subtle bugs, so be aware this is
personal preference as are most design decisions like that. For a sphere, the outward normal is in
the direction of the hit point minus the center:
![Figure [sphere-normal]: Sphere surface-normal geometry](images/fig-1.05-sphere-normal.jpg)
On the earth, this implies that the vector from the earth’s center to you points straight up. Let’s
throw that into the code now, and shade it. We don’t have any lights or anything yet, so let’s just
visualize the normals with a color map. A common trick used for visualizing normals (because it’s
easy and somewhat intuitive to assume $\mathbf{n}$ is a unit length vector -- so each
component is between -1 and 1) is to map each component to the interval from 0 to 1, and then map
x/y/z to r/g/b. For the normal, we need the hit point, not just whether we hit or not. We only have
one sphere in the scene, and it's directly in front of the camera, so we won't worry about negative
values of $t$ yet. We'll just assume the closest hit point (smallest $t$). These changes in the code
let us compute and visualize $\mathbf{n}$:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
fn hit_sphere(center: Point3, radius: f64, r: &Ray) -> f64 {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    let oc = r.origin() - center;
    let a = r.direction().dot(r.direction());
    let b = 2.0 * oc.dot(r.direction());
    let c = oc.dot(oc) - radius * radius;
    let discriminant = b * b - 4.0 * a * c;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    if discriminant < 0.0 {
        -1.0
    } else {
        (-b - discriminant.sqrt()) / (2.0 * a)
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}

fn ray_color(r: &Ray) -> Color {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    let t = hit_sphere(Point3::new(0.0, 0.0, -1.0), 0.5, r);
    if t > 0.0 {
        let n = (r.at(t) - Point3::new(0.0, 0.0, -1.0)).normalized();
        return 0.5 * Color::new(n.x() + 1.0, n.y() + 1.0, n.z() + 1.0);
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    let unit_direction = r.direction().normalized();
    let t = 0.5 * (unit_direction.y() + 1.0);
    (1.0 - t) * Color::new(1.0, 1.0, 1.0) + t * Color::new(0.5, 0.7, 1.0)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [render-surface-normal]: [main.rs] Rendering surface normals on a sphere]
And that yields this picture:
![Image 4: A sphere colored according to its normal vectors
](images/img-1.04-normals-sphere.png class=pixel)
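The component mapping in `ray_color` can be spot-checked on its own; the helper name below is mine:

```rust
/// Map a unit-normal component from [-1, 1] to a color channel in [0, 1].
pub fn normal_to_channel(n: f64) -> f64 {
    0.5 * (n + 1.0)
}
```

A normal pointing straight back at the camera, $(0, 0, 1)$, maps to $(0.5, 0.5, 1.0)$, the pale blue in the middle of the sphere.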
Simplifying the Ray-Sphere Intersection Code
---------------------------------------------
Let’s revisit the ray-sphere equation:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn hit_sphere(center: Point3, radius: f64, r: &Ray) -> f64 {
    let oc = r.origin() - center;
    let a = r.direction().dot(r.direction());
    let b = 2.0 * oc.dot(r.direction());
    let c = oc.dot(oc) - radius * radius;
    let discriminant = b * b - 4.0 * a * c;

    if discriminant < 0.0 {
        -1.0
    } else {
        (-b - discriminant.sqrt()) / (2.0 * a)
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-sphere-before]: [main.rs] Ray-sphere intersection code (before)]
First, recall that a vector dotted with itself is equal to the squared length of that vector.
Second, notice how the equation for `b` has a factor of two in it. Consider what happens to the
quadratic equation if $b = 2h$:
$$ \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} $$
$$ = \frac{-2h \pm \sqrt{(2h)^2 - 4ac}}{2a} $$
$$ = \frac{-2h \pm 2\sqrt{h^2 - ac}}{2a} $$
$$ = \frac{-h \pm \sqrt{h^2 - ac}}{a} $$
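The equivalence of the two forms is easy to verify numerically; a sketch with both side by side (the function names are mine):

```rust
/// Smaller root via the full quadratic formula.
pub fn root_full(a: f64, b: f64, c: f64) -> f64 {
    (-b - (b * b - 4.0 * a * c).sqrt()) / (2.0 * a)
}

/// Smaller root via the half-b simplification, where b = 2h.
pub fn root_half(a: f64, h: f64, c: f64) -> f64 {
    (-h - (h * h - a * c).sqrt()) / a
}
```

For any coefficients with a non-negative discriminant the two agree, so the simplified form computes the same roots with slightly less arithmetic.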
Using these observations, we can now simplify the sphere-intersection code to this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn hit_sphere(center: Point3, radius: f64, r: &Ray) -> f64 {
    let oc = r.origin() - center;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    let a = r.direction().length().powi(2);
    let half_b = oc.dot(r.direction());
    let c = oc.length().powi(2) - radius * radius;
    let discriminant = half_b * half_b - a * c;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    if discriminant < 0.0 {
        -1.0
    } else {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
        (-half_b - discriminant.sqrt()) / a
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-sphere-after]: [main.rs] Ray-sphere intersection code (after)]
An Abstraction for Hittable Objects
------------------------------------
Now, how about several spheres? While it is tempting to have an array of spheres, a very clean
solution is to make a “trait” for anything a ray might hit, and make both a sphere and a
list of spheres just something you can hit. What that trait should be called is something of a
quandary -- calling it an “object” would be good if not for “object oriented” programming. “Surface”
is often used, with the weakness being maybe we will want volumes. “hittable” emphasizes the member
function that unites them. I don’t love any of these, but I will go with “hittable”, shortened to
`Hit` for the Rust trait.
This `Hit` trait will have a hit function that takes in a ray. Most ray tracers have
found it convenient to add a valid interval for hits $t_{min}$ to $t_{max}$, so the hit only
“counts” if $t_{min} < t < t_{max}$. For the initial rays this is positive $t$, but as we will see,
it can help some details in the code to have an interval $t_{min}$ to $t_{max}$. One design question
is whether to do things like compute the normal if we hit something. We might end up hitting
something closer as we do our search, and we will only need the normal of the closest thing. I will
go with the simple solution and compute a bundle of stuff I will store in some structure. Here’s
the trait:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
use super::vec::{Vec3, Point3};
use super::ray::Ray;

pub struct HitRecord {
    pub p: Point3,
    pub normal: Vec3,
    pub t: f64
}

pub trait Hit {
    fn hit(&self, r: &Ray, t_min: f64, t_max: f64) -> Option<HitRecord>;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [hittable-initial]: [hit.rs] The Hit trait]
And here’s the sphere:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
use super::vec::{Point3, Vec3};
use super::ray::Ray;
use super::hit::{Hit, HitRecord};
pub struct Sphere {
center: Point3,
radius: f64
}
impl Sphere {
pub fn new(cen: Point3, r: f64) -> Sphere {
Sphere {
center: cen,
radius: r
}
}
}
impl Hit for Sphere {
fn hit(&self, r: &Ray, t_min: f64, t_max: f64) -> Option<HitRecord> {
let oc = r.origin() - self.center;
let a = r.direction().length().powi(2);
let half_b = oc.dot(r.direction());
let c = oc.length().powi(2) - self.radius.powi(2);
let discriminant = half_b.powi(2) - a * c;
if discriminant < 0.0 {
return None;
}
// Find the nearest root that lies in the acceptable range
let sqrtd = discriminant.sqrt();
let mut root = (-half_b - sqrtd) / a;
if root < t_min || t_max < root {
root = (-half_b + sqrtd) / a;
if root < t_min || t_max < root {
return None;
}
}
let p = r.at(root);
let rec = HitRecord {
t: root,
p: p,
normal: (p - self.center) / self.radius
};
Some(rec)
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [sphere-initial]: [sphere.rs] The sphere struct]
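Before moving on, it's worth sanity-checking the half-b quadratic by hand. Here is a small
stand-alone sketch (plain arrays instead of `Vec3`, and `nearest_root` is a throwaway name, not
part of our program): a sphere at $(0,0,-1)$ with radius 0.5 and a ray from the origin straight
down the $-z$ axis gives $\mathbf{oc} = (0,0,1)$, $a = 1$, $b/2 = -1$, $c = 0.75$, so the near
root should come out to $t = 0.5$.

```rust
// A quick numeric check of the half-b quadratic used in `hit`, written with
// plain arrays so it stands alone. `nearest_root` is a made-up helper name.
fn nearest_root(oc: [f64; 3], dir: [f64; 3], radius: f64) -> Option<f64> {
    let dot = |u: [f64; 3], v: [f64; 3]| u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    let a = dot(dir, dir);
    let half_b = dot(oc, dir);
    let c = dot(oc, oc) - radius * radius;
    let discriminant = half_b * half_b - a * c;
    if discriminant < 0.0 {
        None
    } else {
        // Near root only; the full `hit` also tries the far root.
        Some((-half_b - discriminant.sqrt()) / a)
    }
}
```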
Front Faces Versus Back Faces
------------------------------
The second design decision for normals is whether they should always point out. At present, the
normal found will always be in the direction of the center to the intersection point (the normal
points out). If the ray intersects the sphere from the outside, the normal points against the ray.
If the ray intersects the sphere from the inside, the normal (which always points out) points with
the ray. Alternatively, we can have the normal always point against the ray. If the ray is outside
the sphere, the normal will point outward, but if the ray is inside the sphere, the normal will
point inward.
![Figure [normal-sides]: Possible directions for sphere surface-normal geometry
](images/fig-1.06-normal-sides.jpg)
We need to choose one of these possibilities because we will eventually want to determine which
side of the surface that the ray is coming from. This is important for objects that are rendered
differently on each side, like the text on a two-sided sheet of paper, or for objects that have an
inside and an outside, like glass balls.
If we decide to have the normals always point out, then we will need to determine which side the
ray is on when we color it. We can figure this out by comparing the ray with the normal. If the ray
and the normal face in the same direction, the ray is inside the object; if the ray and the normal
face in opposite directions, then the ray is outside the object. This can be determined by
taking the dot product of the two vectors: if their dot product is positive, the ray is inside the
sphere.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
if ray_direction.dot(outward_normal) > 0.0 {
// ray is inside the sphere
..
} else {
// ray is outside the sphere
..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-normal-comparison]: Comparing the ray and the normal]
If we decide to have the normals always point against the ray, we won't be able to use the dot
product to determine which side of the surface the ray is on. Instead, we would need to store that
information:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let front_face;
if ray_direction.dot(outward_normal) > 0.0 {
// ray is inside the sphere
normal = -outward_normal;
front_face = false;
} else {
// ray is outside the sphere
normal = outward_normal;
front_face = true;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [normals-point-against]: Remembering the side of the surface]
We can set things up so that normals always point “outward” from the surface, or always point
against the incident ray. This decision is determined by whether you want to determine the side of
the surface at the time of geometry intersection or at the time of coloring. In this book we have
more material types than we have geometry types, so we'll go for less work and put the determination
at geometry time. This is simply a matter of preference, and you'll see both implementations in the
literature.
We add the `front_face` bool to the `HitRecord` struct. We'll also add a function to perform this
calculation for us.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct HitRecord {
pub p: Point3,
pub normal: Vec3,
pub t: f64,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pub front_face: bool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
impl HitRecord {
pub fn set_face_normal(&mut self, r: &Ray, outward_normal: Vec3) {
self.front_face = r.direction().dot(outward_normal) < 0.0;
self.normal = if self.front_face {
outward_normal
} else {
(-1.0) * outward_normal
};
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [front-face-tracking]: [hit.rs]
Adding front-face tracking to HitRecord]
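As a quick stand-alone check of the rule above (plain arrays instead of `Vec3`; `face_normal` is
a throwaway name for this sketch): the stored normal must always oppose the incoming ray, and
`front_face` records which side was hit.

```rust
// Mirrors the logic of `set_face_normal`: flip the outward normal when the
// ray arrives from inside, and remember which side we hit.
fn face_normal(ray_dir: [f64; 3], outward: [f64; 3]) -> (bool, [f64; 3]) {
    let dot = ray_dir[0] * outward[0] + ray_dir[1] * outward[1] + ray_dir[2] * outward[2];
    let front_face = dot < 0.0;
    let normal = if front_face {
        outward
    } else {
        [-outward[0], -outward[1], -outward[2]]
    };
    (front_face, normal)
}
```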
And then we add the surface side determination to the trait implementation:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn hit(&self, r: &Ray, t_min: f64, t_max: f64) -> Option<HitRecord> {
..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let mut rec = HitRecord {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
t: root,
p: r.at(root),
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
normal: Vec3::new(0.0, 0.0, 0.0),
front_face: false
};
let outward_normal = (rec.p - self.center) / self.radius;
rec.set_face_normal(r, outward_normal);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
Some(rec)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [sphere-final]: [sphere.rs] The sphere struct with normal determination]
A List of Hittable Objects
---------------------------
We have a generic trait called `Hit` for anything a ray can intersect with. We now add a type
that stores a list of boxed `Hit` objects:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub type World = Vec<Box<dyn Hit>>;
impl Hit for World {
fn hit(&self, r: &Ray, t_min: f64, t_max: f64) -> Option<HitRecord> {
let mut tmp_rec = None;
let mut closest_so_far = t_max;
for object in self {
if let Some(rec) = object.hit(r, t_min, closest_so_far) {
closest_so_far = rec.t;
tmp_rec = Some(rec);
}
}
tmp_rec
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [world-initial]: [hit.rs] The World type]
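The heart of this loop is the shrinking `closest_so_far` interval. Here is a stripped-down model
of it, with each “object” reduced to just its hit distance `t` (the `closest_hit` helper is
illustrative only, not part of our program):

```rust
// Models `World::hit`: a hit only counts strictly inside the current
// (t_min, closest_so_far) interval, so the nearest hit wins.
fn closest_hit(ts: &[f64], t_min: f64, t_max: f64) -> Option<f64> {
    let mut tmp = None;
    let mut closest_so_far = t_max;
    for &t in ts {
        // Mirrors `object.hit(r, t_min, closest_so_far)`.
        if t > t_min && t < closest_so_far {
            closest_so_far = t;
            tmp = Some(t);
        }
    }
    tmp
}
```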
And the new main:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
mod vec;
mod ray;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
mod hit;
mod sphere;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
use std::io::{stderr, Write};
use vec::{Vec3, Point3, Color};
use ray::Ray;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
use hit::{Hit, World};
use sphere::Sphere;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn ray_color(r: &Ray, world: &World) -> Color {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
if let Some(rec) = world.hit(r, 0.0, f64::INFINITY) {
0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0))
} else {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let unit_direction = r.direction().normalized();
let t = 0.5 * (unit_direction.y() + 1.0);
(1.0 - t) * Color::new(1.0, 1.0, 1.0) + t * Color::new(0.5, 0.7, 1.0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
fn main() {
// Image
const ASPECT_RATIO: f64 = 16.0 / 9.0;
const IMAGE_WIDTH: u64 = 256;
const IMAGE_HEIGHT: u64 = ((IMAGE_WIDTH as f64) / ASPECT_RATIO) as u64;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
// World
let mut world = World::new();
world.push(Box::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
world.push(Box::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
// Camera
let viewport_height = 2.0;
let viewport_width = ASPECT_RATIO * viewport_height;
let focal_length = 1.0;
let origin = Point3::new(0.0, 0.0, 0.0);
let horizontal = Vec3::new(viewport_width, 0.0, 0.0);
let vertical = Vec3::new(0.0, viewport_height, 0.0);
let lower_left_corner = origin - horizontal / 2.0 - vertical / 2.0
- Vec3::new(0.0, 0.0, focal_length);
println!("P3");
println!("{} {}", IMAGE_WIDTH, IMAGE_HEIGHT);
println!("255");
for j in (0..IMAGE_HEIGHT).rev() {
eprint!("\rScanlines remaining: {:3}", j + 1);
stderr().flush().unwrap();
for i in 0..IMAGE_WIDTH {
let u = (i as f64) / ((IMAGE_WIDTH - 1) as f64);
let v = (j as f64) / ((IMAGE_HEIGHT - 1) as f64);
let r = Ray::new(origin,
lower_left_corner + u * horizontal + v * vertical - origin);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let pixel_color = ray_color(&r, &world);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
println!("{}", pixel_color.format_color());
}
}
eprintln!("Done.");
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-with-hittables-h]: [main.rs] The new main with hittables]
This yields a picture that is really just a visualization of where the spheres are along with their
surface normal. This is often a great way to look at your model for flaws and characteristics.
![Image 5: Resulting render of normals-colored sphere with ground
](images/img-1.05-normals-sphere-ground.png class=pixel)
Antialiasing
====================================================================================================
When a real camera takes a picture, there are usually no jaggies along edges because the edge pixels
are a blend of some foreground and some background. We can get the same effect by averaging a bunch
of samples inside each pixel. We will not bother with stratification. This is controversial, but is
usual for my programs. For some ray tracers it is critical, but the kind of general one we are
writing doesn’t benefit very much from it and it makes the code uglier. We abstract the camera struct
a bit so we can make a cooler camera later.
Some Random Number Utilities
-----------------------------
One thing we need is a random number generator that returns real random numbers. We need a function
that returns a canonical random number which by convention returns a random real in the range
$0 ≤ r < 1$. The “less than” before the 1 is important as we will sometimes take advantage of that.
A simple approach to this is to use the `rand` crate. We need to add it to the dependencies of our
`Cargo.toml`:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ TOML
[package]
name = "raytracinginrust"
version = "0.1.0"
authors = ["danb "]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
rand = "*"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [rand-dependency]: [Cargo.toml] added rand crate dependency]
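If you want to see the $[0, 1)$ contract without pulling in a crate, here is a dependency-free
sketch using a simple xorshift generator (illustrative only -- our program will use `rand`).
Taking the top 53 bits and dividing by $2^{53}$ guarantees the result is at least 0 and strictly
below 1.

```rust
// A minimal xorshift64 generator; the seed must be nonzero.
struct Xorshift(u64);

impl Xorshift {
    fn next_u64(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }

    // Use the top 53 bits so every value is exactly representable as an f64;
    // the result is always >= 0.0 and strictly < 1.0.
    fn next_f64(&mut self) -> f64 {
        (self.next_u64() >> 11) as f64 / (1u64 << 53) as f64
    }
}
```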
Generating Pixels with Multiple Samples
----------------------------------------
For a given pixel we have several samples within that pixel and send rays through each of the
samples. The colors of these rays are then averaged:
![Figure [pixel-samples]: Pixel samples](images/fig-1.07-pixel-samples.jpg)
Now's a good time to create a `Camera` struct to manage our virtual camera and the related tasks of
scene sampling. The following struct implements a simple camera using the axis-aligned camera from
before:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
use super::vec::{Vec3, Point3};
use super::ray::Ray;
struct Camera {
origin: Point3,
lower_left_corner: Point3,
horizontal: Vec3,
vertical: Vec3
}
impl Camera {
pub fn new() -> Camera {
const ASPECT_RATIO: f64 = 16.0 / 9.0;
const VIEWPORT_HEIGHT: f64 = 2.0;
const VIEWPORT_WIDTH: f64 = ASPECT_RATIO * VIEWPORT_HEIGHT;
const FOCAL_LENGTH: f64 = 1.0;
let orig = Point3::new(0.0, 0.0, 0.0);
let h = Vec3::new(VIEWPORT_WIDTH, 0.0, 0.0);
let v = Vec3::new(0.0, VIEWPORT_HEIGHT, 0.0);
let llc = orig - h / 2.0 - v / 2.0 - Vec3::new(0.0, 0.0, FOCAL_LENGTH);
Camera {
origin: orig,
horizontal: h,
vertical: v,
lower_left_corner: llc
}
}
pub fn get_ray(&self, u: f64, v: f64) -> Ray {
Ray::new(self.origin,
self.lower_left_corner + u * self.horizontal + v * self.vertical - self.origin)
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-initial]: [camera.rs] The camera struct]
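A quick way to convince yourself the mapping is right: with the origin at zero,
`get_ray(0.5, 0.5)` must point straight through the viewport center. Here is a stand-alone sketch
with plain arrays (the `ray_direction` helper is illustrative only):

```rust
// Computes lower_left_corner + u*horizontal + v*vertical - origin
// componentwise, with the origin fixed at zero, using the same camera
// constants as the Camera struct above.
fn ray_direction(u: f64, v: f64) -> [f64; 3] {
    let aspect_ratio = 16.0 / 9.0;
    let viewport_height = 2.0;
    let viewport_width = aspect_ratio * viewport_height;
    let focal_length = 1.0;
    [
        -viewport_width / 2.0 + u * viewport_width,
        -viewport_height / 2.0 + v * viewport_height,
        -focal_length,
    ]
}
```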
To handle the multi-sampled color computation, we'll update the `format_color()` function. Rather
than adding in a fractional contribution each time we accumulate more light to the color, just add
the full color each iteration, and then perform a single divide at the end (by the number of
samples) when writing out the color. In addition, we'll use `f64::clamp`, which clamps the value
to the range [min,max]:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub fn format_color(self, samples_per_pixel: u64) -> String {
let ir = (256.0 * (self[0] / (samples_per_pixel as f64)).clamp(0.0, 0.999)) as u64;
let ig = (256.0 * (self[1] / (samples_per_pixel as f64)).clamp(0.0, 0.999)) as u64;
let ib = (256.0 * (self[2] / (samples_per_pixel as f64)).clamp(0.0, 0.999)) as u64;
format!("{} {} {}", ir, ig, ib)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [format-color-clamped]: [vec.rs] The multi-sample format_color() function]
Main is also changed:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
use camera::Camera;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
..
fn main() {
// Image
const ASPECT_RATIO: f64 = 16.0 / 9.0;
const IMAGE_WIDTH: u64 = 256;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
const IMAGE_HEIGHT: u64 = ((IMAGE_WIDTH as f64) / ASPECT_RATIO) as u64;
const SAMPLES_PER_PIXEL: u64 = 100;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
// World
let mut world = World::new();
world.push(Box::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
world.push(Box::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));
// Camera
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let cam = Camera::new();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
println!("P3");
println!("{} {}", IMAGE_WIDTH, IMAGE_HEIGHT);
println!("255");
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let mut rng = rand::thread_rng();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
for j in (0..IMAGE_HEIGHT).rev() {
eprint!("\rScanlines remaining: {:3}", j + 1);
stderr().flush().unwrap();
for i in 0..IMAGE_WIDTH {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let mut pixel_color = Color::new(0.0, 0.0, 0.0);
for _ in 0..SAMPLES_PER_PIXEL {
let random_u: f64 = rng.gen();
let random_v: f64 = rng.gen();
let u = ((i as f64) + random_u) / ((IMAGE_WIDTH - 1) as f64);
let v = ((j as f64) + random_v) / ((IMAGE_HEIGHT - 1) as f64);
let r = cam.get_ray(u, v);
pixel_color += ray_color(&r, &world);
}
println!("{}", pixel_color.format_color(SAMPLES_PER_PIXEL));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
}
eprintln!("Done.");
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-multi-sample]: [main.rs] Rendering with multi-sampled pixels]
Zooming into the image that is produced, we can see the difference in edge pixels.
![Image 6: Before and after antialiasing
](images/img-1.06-antialias-before-after.png class=pixel)
Diffuse Materials
====================================================================================================
Now that we have objects and multiple rays per pixel, we can make some realistic looking materials.
We’ll start with diffuse (matte) materials. One question is whether we mix and match geometry and
materials (so we can assign a material to multiple spheres, or vice versa) or if geometry and
material are tightly bound (that could be useful for procedural objects where the geometry and
material are linked). We’ll go with separate -- which is usual in most renderers -- but do be aware
of the limitation.
A Simple Diffuse Material
--------------------------
Diffuse objects that don’t emit light merely take on the color of their surroundings, but they
modulate that with their own intrinsic color. Light that reflects off a diffuse surface has its
direction randomized. So, if we send three rays into a crack between two diffuse surfaces they will
each have different random behavior:
![Figure [light-bounce]: Light ray bounces](images/fig-1.08-light-bounce.jpg)
They also might be absorbed rather than reflected. The darker the surface, the more likely
absorption is. (That’s why it is dark!) Really any algorithm that randomizes direction will produce
surfaces that look matte. One of the simplest ways to do this turns out to be exactly correct for
ideal diffuse surfaces. (I used to do it as a lazy hack that approximates mathematically ideal
Lambertian.)
(Reader Vassillen Chizhov proved that the lazy hack is indeed just a lazy hack and is inaccurate.
The correct representation of ideal Lambertian isn't much more work, and is presented at the end of
the chapter.)
There are two unit radius spheres tangent to the hit point $p$ of a surface. These two spheres have
a center of $(\mathbf{P} + \mathbf{n})$ and $(\mathbf{P} - \mathbf{n})$, where $\mathbf{n}$ is the
normal of the surface. The sphere with a center at $(\mathbf{P} - \mathbf{n})$ is considered
_inside_ the surface, whereas the sphere with center $(\mathbf{P} + \mathbf{n})$ is considered
_outside_ the surface. Select the tangent unit radius sphere that is on the same side of the surface
as the ray origin. Pick a random point $\mathbf{S}$ inside this unit radius sphere and send a ray
from the hit point $\mathbf{P}$ to the random point $\mathbf{S}$ (this is the vector
$(\mathbf{S}-\mathbf{P})$):
![Figure [rand-vec]: Generating a random diffuse bounce ray](images/fig-1.09-rand-vec.jpg)
We need a way to pick a random point in a unit radius sphere. We’ll use what is usually the easiest
algorithm: a rejection method. First, pick a random point in the unit cube where x, y, and z all
range from -1 to +1. Reject this point and try again if the point is outside the sphere.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
impl Vec3 {
..
pub fn random(r: Range<f64>) -> Vec3 {
let mut rng = rand::thread_rng();
Vec3 {
e: [rng.gen_range(r.clone()), rng.gen_range(r.clone()), rng.gen_range(r.clone())]
}
}
..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [vec-rand-util]: [vec.rs] Vec3 random utility function]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
impl Vec3 {
..
pub fn random_in_unit_sphere() -> Vec3 {
loop {
let v = Vec3::random(-1.0..1.0);
if v.length() < 1.0 {
return v;
}
}
}
..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [random-in-unit-sphere]: [vec.rs] The random_in_unit_sphere function]
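How wasteful is the rejection loop? It accepts a candidate with probability equal to the
sphere-to-cube volume ratio, $\frac{4}{3}\pi / 8 = \pi/6 \approx 0.52$, so on average it needs
fewer than two tries. A deterministic midpoint-grid estimate of that ratio (the helper is
illustrative only, not part of our program):

```rust
// Samples cell midpoints of an n×n×n grid over the cube [-1, 1]^3 and counts
// how many land inside the unit sphere; the fraction approximates pi/6.
fn acceptance_ratio(n: u32) -> f64 {
    let cell = |idx: u32| ((idx as f64 + 0.5) / n as f64) * 2.0 - 1.0;
    let mut inside = 0u64;
    for i in 0..n {
        for j in 0..n {
            for k in 0..n {
                let (x, y, z) = (cell(i), cell(j), cell(k));
                if x * x + y * y + z * z < 1.0 {
                    inside += 1;
                }
            }
        }
    }
    inside as f64 / (n as u64 * n as u64 * n as u64) as f64
}
```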
Then update the `ray_color` function to use the new random direction generator:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn ray_color(r: &Ray, world: &World) -> Color {
if let Some(rec) = world.hit(r, 0.0, f64::INFINITY) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let target = rec.p + rec.normal + Vec3::random_in_unit_sphere();
let r = Ray::new(rec.p, target - rec.p);
0.5 * ray_color(&r, world)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
} else {
let unit_direction = r.direction().normalized();
let t = 0.5 * (unit_direction.y() + 1.0);
(1.0 - t) * Color::new(1.0, 1.0, 1.0) + t * Color::new(0.5, 0.7, 1.0)
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-random-unit]: [main.rs] ray_color using a random ray direction]
Limiting the Number of Child Rays
----------------------------------
There's one potential problem lurking here. Notice that the `ray_color` function is recursive. When
will it stop recursing? When it fails to hit anything. In some cases, however, that may be a long
time — long enough to blow the stack. To guard against that, let's limit the maximum recursion
depth, returning no light contribution at the maximum depth:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
fn ray_color(r: &Ray, world: &World, depth: u64) -> Color {
if depth <= 0 {
// If we've exceeded the ray bounce limit, no more light is gathered
return Color::new(0.0, 0.0, 0.0);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
if let Some(rec) = world.hit(r, 0.0, f64::INFINITY) {
let target = rec.p + rec.normal + Vec3::random_in_unit_sphere();
let r = Ray::new(rec.p, target - rec.p);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
0.5 * ray_color(&r, world, depth - 1)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
} else {
let unit_direction = r.direction().normalized();
let t = 0.5 * (unit_direction.y() + 1.0);
(1.0 - t) * Color::new(1.0, 1.0, 1.0) + t * Color::new(0.5, 0.7, 1.0)
}
}
fn main() {
// Image
const ASPECT_RATIO: f64 = 16.0 / 9.0;
const IMAGE_WIDTH: u64 = 256;
const IMAGE_HEIGHT: u64 = ((IMAGE_WIDTH as f64) / ASPECT_RATIO) as u64;
const SAMPLES_PER_PIXEL: u64 = 10;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
const MAX_DEPTH: u64 = 5;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
// World
let mut world = World::new();
world.push(Box::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
world.push(Box::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));
// Camera
let cam = Camera::new();
println!("P3");
println!("{} {}", IMAGE_WIDTH, IMAGE_HEIGHT);
println!("255");
let mut rng = rand::thread_rng();
for j in (0..IMAGE_HEIGHT).rev() {
eprint!("\rScanlines remaining: {:3}", j + 1);
stderr().flush().unwrap();
for i in 0..IMAGE_WIDTH {
let mut pixel_color = Color::new(0.0, 0.0, 0.0);
for _ in 0..SAMPLES_PER_PIXEL {
let random_u: f64 = rng.gen();
let random_v: f64 = rng.gen();
let u = ((i as f64) + random_u) / ((IMAGE_WIDTH - 1) as f64);
let v = ((j as f64) + random_v) / ((IMAGE_HEIGHT - 1) as f64);
let r = cam.get_ray(u, v);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pixel_color += ray_color(&r, &world, MAX_DEPTH);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
println!("{}", pixel_color.format_color(SAMPLES_PER_PIXEL));
}
}
eprintln!("Done.");
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-depth]: [main.rs] ray_color with depth limiting]
This gives us:
![Image 7: First render of a diffuse sphere](images/img-1.07-first-diffuse.png class=pixel)
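Why is a small depth limit acceptable? With these 50% reflectors every bounce halves the carried
color, so cutting a path off at `MAX_DEPTH` misstates a pixel by at most $0.5^{depth}$. This is a
rough back-of-the-envelope bound, not an exact error analysis:

```rust
// Upper bound on the light missed by truncating a path, assuming every
// surface attenuates by exactly 0.5 per bounce (as ours do here).
fn max_truncation_error(max_depth: u32) -> f64 {
    0.5f64.powi(max_depth as i32)
}
```

At depth 5 the truncated tail is therefore about 3% at worst.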
Using Gamma Correction for Accurate Color Intensity
----------------------------------------------------
Note the shadowing under the sphere. This picture is very dark, but our spheres only absorb half the
energy on each bounce, so they are 50% reflectors. If you can’t see the shadow, don’t worry, we will
fix that now. These spheres should look pretty light (in real life, a light grey). The reason for
this is that almost all image viewers assume that the image is “gamma corrected”, meaning the 0 to 1
values have some transform before being stored as a byte. There are many good reasons for that, but
for our purposes we just need to be aware of it. To a first approximation, we can use “gamma 2”,
which means raising the color to the power $1/\gamma$, or in our simple case ½, which is just
square-root:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub fn format_color(self, samples_per_pixel: u64) -> String {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let ir = (256.0 * (self[0] / (samples_per_pixel as f64)).sqrt().clamp(0.0, 0.999)) as u64;
let ig = (256.0 * (self[1] / (samples_per_pixel as f64)).sqrt().clamp(0.0, 0.999)) as u64;
let ib = (256.0 * (self[2] / (samples_per_pixel as f64)).sqrt().clamp(0.0, 0.999)) as u64;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
format!("{} {} {}", ir, ig, ib)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [format-color-gamma]: [vec.rs] format_color with gamma correction]
That yields light grey, as we desire:
![Image 8: Diffuse sphere, with gamma correction
](images/img-1.08-gamma-correct.png class=pixel)
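To see the gamma-2 numbers concretely: a 50% reflector that averaged to 0.5 in linear space is
written as $\sqrt{0.5} \approx 0.707$, i.e. byte value 181 rather than 128 (`to_byte` is a one-off
helper mirroring a single channel of `format_color`):

```rust
// One channel of the gamma-corrected write: sqrt, clamp, scale to a byte.
fn to_byte(linear: f64) -> u64 {
    (256.0 * linear.sqrt().clamp(0.0, 0.999)) as u64
}
```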
Fixing Shadow Acne
-------------------
There’s also a subtle bug in there. Some of the reflected rays hit the object they are reflecting
off of not at exactly $t=0$, but instead at $t=-0.0000001$ or $t=0.00000001$ or whatever floating
point approximation the sphere intersector gives us. So we need to ignore hits very near zero:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
if let Some(rec) = world.hit(r, 0.001, f64::INFINITY) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [reflect-tolerance]: [main.rs] Calculating reflected ray origins with tolerance]
This gets rid of the shadow acne problem. Yes it is really called that.
The rejection method presented here produces random points in the unit ball offset along the surface
normal. This corresponds to picking directions on the hemisphere with high probability close to the
normal, and a lower probability of scattering rays at grazing angles. This distribution scales as
$\cos^3(\phi)$, where $\phi$ is the angle from the normal. This is useful since light arriving
at shallow angles spreads over a larger area, and thus has a lower contribution to the final color.
However, we are interested in a Lambertian distribution, which has a distribution of $\cos (\phi)$.
True Lambertian has the probability higher for ray scattering close to the normal, but the
distribution is more uniform. This is achieved by picking random points on the surface of the unit
sphere, offset along the surface normal. Picking random points on the unit sphere can be achieved by
picking random points _in_ the unit sphere, and then normalizing those.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let target = rec.p + rec.normal + Vec3::random_in_unit_sphere().normalized();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-unit-sphere]: [main.rs] ray_color with replacement diffuse]
After rendering we get a similar image:
![Image 9: Correct rendering of Lambertian spheres
](images/img-1.09-correct-lambertian.png class=pixel)
It's hard to tell the difference between these two diffuse methods, given that our scene of two
spheres is so simple, but you should be able to notice two important visual differences:
1. The shadows are less pronounced after the change
2. Both spheres are lighter in appearance after the change
Both of these changes are due to the more uniform scattering of the light rays: fewer rays
scatter toward the normal. This means that diffuse objects will appear _lighter_, because more
light bounces toward the camera. For the shadows, less light bounces straight up, so the parts of
the larger sphere directly underneath the smaller sphere are brighter.
An Alternative Diffuse Formulation
-----------------------------------
The initial hack presented in this book lasted a long time before it was proven to be an incorrect
approximation of ideal Lambertian diffuse. A big reason that the error persisted for so long is
that it can be difficult to:
1. Mathematically prove that the probability distribution is incorrect
2. Intuitively explain why a $\cos (\phi)$ distribution is desirable (and what it would look like)
Not a lot of common, everyday objects are perfectly diffuse, so our visual intuition of how these
objects behave under light can be poorly formed.
In the interest of learning, we are including an intuitive and easy to understand diffuse method.
For the two methods above we had a random vector, first of random length and then of unit length,
offset from the hit point by the normal. It may not be immediately obvious why the vectors should be
displaced by the normal.
A more intuitive approach is to have a uniform scatter direction for all angles away from the hit
point, with no dependence on the angle from the normal. Many of the first raytracing papers used
this diffuse method (before adopting Lambertian diffuse).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
impl Vec3 {
..
pub fn random_in_hemisphere(normal: Vec3) -> Vec3 {
let in_unit_sphere = Self::random_in_unit_sphere();
if in_unit_sphere.dot(normal) > 0.0 {
// In the same hemisphere as the normal
in_unit_sphere
} else {
(-1.0) * in_unit_sphere
}
}
..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [random-in-hemisphere]: [vec.rs] The random_in_hemisphere function]
Plugging the new formula into the `ray_color` function:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let target = rec.p + Vec3::random_in_hemisphere(rec.normal);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-hemisphere]: [main.rs] ray_color with hemispherical scattering]
Gives us the following image:
![Image 10: Rendering of diffuse spheres with hemispherical scattering
](images/img-1.10-rand-hemispherical.png class=pixel)
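The defining property of hemispherical scattering is easy to check in isolation: every returned
direction has a non-negative dot product with the normal, because wrong-side candidates are
flipped. A stand-alone sketch with plain arrays (`into_hemisphere` is a throwaway name):

```rust
// Mirrors `random_in_hemisphere`: keep the candidate if it is already on the
// normal's side of the surface, otherwise negate it.
fn into_hemisphere(candidate: [f64; 3], normal: [f64; 3]) -> [f64; 3] {
    let dot = candidate[0] * normal[0] + candidate[1] * normal[1] + candidate[2] * normal[2];
    if dot > 0.0 {
        candidate
    } else {
        [-candidate[0], -candidate[1], -candidate[2]]
    }
}
```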
Scenes will become more complicated over the course of the book. You are encouraged to
switch between the different diffuse renderers presented here. Most scenes of interest will contain
a disproportionate amount of diffuse materials. You can gain valuable insight by understanding the
effect of different diffuse methods on the lighting of the scene.
Metal
====================================================================================================
A Trait for Materials
--------------------------------
If we want different objects to have different materials, we have a design decision. We could have a
universal material with lots of parameters and different material types just zero out some of those
parameters. This is not a bad approach. Or we could have a material trait that
encapsulates behavior. I am a fan of the latter approach. For our program the material needs to do
two things:
1. Produce a scattered ray (or say it absorbed the incident ray).
2. If scattered, say how much the ray should be attenuated.
This suggests the trait:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub trait Scatter {
fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<(Color, Ray)>;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [material-initial]: [material.rs] The scatter trait]
A Data Structure to Describe Ray-Object Intersections
------------------------------------------------------
The `HitRecord` exists to avoid a bunch of arguments, so we can stuff whatever info we want in
there. You can use arguments instead; it’s a matter of taste. Hittables and materials need to know
about each other, so there is some circularity in the references. In Rust we express this by having
the `HitRecord` hold the material as a reference-counted trait object, `Rc<dyn Scatter>`:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct HitRecord {
pub p: Point3,
pub normal: Vec3,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pub mat: Rc<dyn Scatter>,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub t: f64,
pub front_face: bool
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [hit-with-material]: [hit.rs] Hit record with added material]
What we have set up here is that the material will tell us how rays interact with the surface.
`HitRecord` is just a way to stuff a bunch of arguments into a struct so we can send them as a
group. When a ray hits a surface (a particular sphere, for example), the `mat` field in the
`HitRecord` will be set to (a clone of) the material the sphere was given when it was set up in
`main()`. When the `ray_color()` function gets the `HitRecord`, it can call methods on that
material to find out what ray, if any, is scattered.
To achieve this, our `Sphere` struct must hold a reference to its material, so that it can be
returned within the `HitRecord`. See the highlighted lines below:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct Sphere {
center: Point3,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
radius: f64,
mat: Rc<dyn Scatter>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
impl Sphere {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pub fn new(cen: Point3, r: f64, m: Rc<dyn Scatter>) -> Sphere {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
Sphere {
center: cen,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
radius: r,
mat: m
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
}
}
impl Hit for Sphere {
fn hit(&self, r: &Ray, t_min: f64, t_max: f64) -> Option<HitRecord> {
..
let mut rec = HitRecord {
t: root,
p: r.at(root),
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
mat: self.mat.clone(),
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
normal: Vec3::new(0.0, 0.0, 0.0),
front_face: false
};
..
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [sphere-material]: [sphere.rs] Ray-sphere intersection with added material information]
Modeling Light Scatter and Reflectance
---------------------------------------
For the Lambertian (diffuse) case we already have, it can either scatter always and attenuate by its
reflectance $R$, or it can scatter with no attenuation but absorb the fraction $1-R$ of the rays, or
it could be a mixture of those strategies. For Lambertian materials we get this simple struct:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct Lambertian {
albedo: Color
}
impl Lambertian {
pub fn new(a: Color) -> Lambertian {
Lambertian {
albedo: a
}
}
}
impl Scatter for Lambertian {
fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<(Color, Ray)> {
let scatter_direction = rec.normal + Vec3::random_in_unit_sphere().normalized();
let scattered = Ray::new(rec.p, scatter_direction);
Some((self.albedo, scattered))
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [lambertian-initial]: [material.rs] The lambertian material struct]
Note we could just as well only scatter with some probability $p$ and have attenuation be
$albedo/p$. Your choice.
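As a sketch of that alternative -- the function name and the scalar `albedo` stand-in are ours, not
part of the renderer we are building -- scatter only with probability $p$ and divide the
attenuation by $p$, so the Monte Carlo estimate keeps the same expected value:

```rust
// Sketch: scatter with probability `p`, scaling the attenuation by 1/p so
// the estimate stays unbiased. The uniform sample `u` in [0, 1) is passed
// in so the sketch needs no RNG dependency; a scalar albedo stands in for
// the full Color.
fn probabilistic_scatter(albedo: f64, p: f64, u: f64) -> Option<f64> {
    if u < p {
        // Scattered: the attenuation is albedo / p, not albedo
        Some(albedo / p)
    } else {
        // Absorbed
        None
    }
}
```

Averaged over many samples this contributes `albedo` per ray, exactly like always scattering with
attenuation `albedo`.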
If you read the code above carefully, you'll notice a small chance of mischief. If the random unit
vector we generate is exactly opposite the normal vector, the two will sum to zero, which will
result in a zero scatter direction vector. This leads to bad scenarios later on (infinities and
NaNs), so we need to intercept the condition before we pass it on.
In service of this, we'll create a new vector function -- `near_zero` -- that returns true if
the vector is very close to zero in all dimensions.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
impl Vec3 {
..
pub fn near_zero(self) -> bool {
const EPS: f64 = 1.0e-8;
self[0].abs() < EPS && self[1].abs() < EPS && self[2].abs() < EPS
}
..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [vec3-near-zero]: [vec.rs] The near_zero method]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<(Color, Ray)> {
let mut scatter_direction = rec.normal + Vec3::random_in_unit_sphere().normalized();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
if scatter_direction.near_zero() {
// Catch degenerate scatter direction
scatter_direction = rec.normal;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let scattered = Ray::new(rec.p, scatter_direction);
Some((self.albedo, scattered))
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [lambertian-catch-zero]: [material.rs] Lambertian scatter, bullet-proof]
For smooth metals the ray won’t be randomly scattered. The key math is: how does a ray get
reflected from a metal mirror? Vector math is our friend here:
![Figure [reflection]: Ray reflection](images/fig-1.11-reflection.jpg)
The reflected ray direction in red is just $\mathbf{v} + 2\mathbf{b}$. In our design, $\mathbf{n}$
is a unit vector, but $\mathbf{v}$ may not be. The length of $\mathbf{b}$ should be $\mathbf{v}
\cdot \mathbf{n}$. Because $\mathbf{v}$ points in, we will need a minus sign, yielding:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub fn reflect(self, n: Vec3) -> Vec3 {
self - 2.0 * self.dot(n) * n
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [Vec3-reflect]: [vec.rs] Vec3 reflection function]
The metal material just reflects rays using that formula:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct Metal {
albedo: Color
}
impl Metal {
pub fn new(a: Color) -> Metal {
Metal {
albedo: a
}
}
}
impl Scatter for Metal {
fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<(Color, Ray)> {
let reflected = r_in.direction().reflect(rec.normal).normalized();
let scattered = Ray::new(rec.p, reflected);
if scattered.direction().dot(rec.normal) > 0.0 {
Some((self.albedo, scattered))
} else {
None
}
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [metal-material]: [material.rs] Metal material with reflectance function]
We need to modify the `ray_color` function to use this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn ray_color(r: &Ray, world: &World, depth: u64) -> Color {
if depth <= 0 {
// If we've exceeded the ray bounce limit, no more light is gathered
return Color::new(0.0, 0.0, 0.0);
}
if let Some(rec) = world.hit(r, 0.001, f64::INFINITY) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
if let Some((attenuation, scattered)) = rec.mat.scatter(r, &rec) {
attenuation * ray_color(&scattered, world, depth - 1)
} else {
Color::new(0.0, 0.0, 0.0)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
} else {
let unit_direction = r.direction().normalized();
let t = 0.5 * (unit_direction.y() + 1.0);
(1.0 - t) * Color::new(1.0, 1.0, 1.0) + t * Color::new(0.5, 0.7, 1.0)
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-scatter]: [main.rs] Ray color with scattered reflectance]
Note that you need to implement one more trait to make this possible: `std::ops::Mul`, so that two
`Color`s can be multiplied componentwise in `attenuation * ray_color(...)`.
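The multiplication in `attenuation * ray_color(...)` is the componentwise product of two colors. A
self-contained sketch with a stand-in `Color` struct (in the book's code `Color` is an alias for
`Vec3`, so the real impl lives in `vec.rs`):

```rust
use std::ops::Mul;

// Stand-in Color type so the sketch compiles on its own; the book's Color
// is an alias for Vec3.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Color {
    e: [f64; 3],
}

impl Mul for Color {
    type Output = Color;

    // Componentwise product: attenuation * incoming color
    fn mul(self, rhs: Color) -> Color {
        Color {
            e: [
                self.e[0] * rhs.e[0],
                self.e[1] * rhs.e[1],
                self.e[2] * rhs.e[2],
            ],
        }
    }
}
```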
A Scene with Metal Spheres
---------------------------
Now let’s add some metal spheres to our scene:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
use material::{Lambertian, Metal};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
..
fn main() {
// Image
const ASPECT_RATIO: f64 = 16.0 / 9.0;
const IMAGE_WIDTH: u64 = 256;
const IMAGE_HEIGHT: u64 = ((IMAGE_WIDTH as f64) / ASPECT_RATIO) as u64;
const SAMPLES_PER_PIXEL: u64 = 300;
const MAX_DEPTH: u64 = 50;
// World
let mut world = World::new();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let mat_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
let mat_center = Rc::new(Lambertian::new(Color::new(0.7, 0.3, 0.3)));
let mat_left = Rc::new(Metal::new(Color::new(0.8, 0.8, 0.8)));
let mat_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2)));
let sphere_ground = Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0, mat_ground);
let sphere_center = Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5, mat_center);
let sphere_left = Sphere::new(Point3::new(-1.0, 0.0, -1.0), 0.5, mat_left);
let sphere_right = Sphere::new(Point3::new(1.0, 0.0, -1.0), 0.5, mat_right);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
world.push(Box::new(sphere_ground));
world.push(Box::new(sphere_center));
world.push(Box::new(sphere_left));
world.push(Box::new(sphere_right));
// Camera
let cam = Camera::new();
println!("P3");
println!("{} {}", IMAGE_WIDTH, IMAGE_HEIGHT);
println!("255");
let mut rng = rand::thread_rng();
for j in (0..IMAGE_HEIGHT).rev() {
eprint!("\rScanlines remaining: {:3}", j + 1);
stderr().flush().unwrap();
for i in 0..IMAGE_WIDTH {
let mut pixel_color = Color::new(0.0, 0.0, 0.0);
for _ in 0..SAMPLES_PER_PIXEL {
let random_u: f64 = rng.gen();
let random_v: f64 = rng.gen();
let u = ((i as f64) + random_u) / ((IMAGE_WIDTH - 1) as f64);
let v = ((j as f64) + random_v) / ((IMAGE_HEIGHT - 1) as f64);
let r = cam.get_ray(u, v);
pixel_color += ray_color(&r, &world, MAX_DEPTH);
}
println!("{}", pixel_color.format_color(SAMPLES_PER_PIXEL));
}
}
eprintln!("Done.");
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-with-metal]: [main.rs] Scene with metal spheres]
Which gives:
![Image 11: Shiny metal](images/img-1.11-metal-shiny.png class=pixel)
Fuzzy Reflection
-----------------
We can also randomize the reflected direction by using a small sphere and choosing a new endpoint
for the ray:
![Figure [reflect-fuzzy]: Generating fuzzed reflection rays](images/fig-1.12-reflect-fuzzy.jpg)
The bigger the sphere, the fuzzier the reflections will be. This suggests adding a fuzziness
parameter that is just the radius of the sphere (so zero is no perturbation). The catch is that for
big spheres or grazing rays, we may scatter below the surface. We can just have the surface
absorb those.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct Metal {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
albedo: Color,
fuzz: f64
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
impl Metal {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pub fn new(a: Color, f: f64) -> Metal {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
Metal {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
albedo: a,
fuzz: f
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
}
}
impl Scatter for Metal {
fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<(Color, Ray)> {
let reflected = r_in.direction().reflect(rec.normal).normalized();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let scattered = Ray::new(rec.p, reflected + self.fuzz * Vec3::random_in_unit_sphere());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
if scattered.direction().dot(rec.normal) > 0.0 {
Some((self.albedo, scattered))
} else {
None
}
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [metal-fuzz]: [material.rs] Metal material fuzziness]
We can try that out by adding fuzziness 0.3 and 1.0 to the metals:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn main() {
..
let mat_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
let mat_center = Rc::new(Lambertian::new(Color::new(0.7, 0.3, 0.3)));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let mat_left = Rc::new(Metal::new(Color::new(0.8, 0.8, 0.8), 0.3));
let mat_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [metal-fuzz-spheres]: [main.rs] Metal spheres with fuzziness]
![Image 12: Fuzzed metal](images/img-1.12-metal-fuzz.png class=pixel)
Dielectrics
====================================================================================================
Clear materials such as water, glass, and diamonds are dielectrics. When a light ray hits them, it
splits into a reflected ray and a refracted (transmitted) ray. We’ll handle that by randomly
choosing between reflection or refraction, and only generating one scattered ray per interaction.
Refraction
-----------
The hardest part to debug is the refracted ray. I usually first just have all the light refract if
there is a refraction ray at all. For this project, I tried to put two glass balls in our scene, and
I got this (I have not told you how to do this right or wrong yet, but soon!):
![Image 13: Glass first](images/img-1.13-glass-first.png class=pixel)
Is that right? Glass balls look odd in real life. But no, it isn’t right. The world should be
flipped upside down and no weird black stuff. I just printed out the ray straight through the middle
of the image and it was clearly wrong. That often does the job.
Snell's Law
------------
The refraction is described by Snell’s law:
$$ \eta \cdot \sin\theta = \eta' \cdot \sin\theta' $$
Where $\theta$ and $\theta'$ are the angles from the normal, and $\eta$ and $\eta'$ (pronounced
"eta" and "eta prime") are the refractive indices (typically air = 1.0, glass = 1.3–1.7, diamond =
2.4). The geometry is:
![Figure [refraction]: Ray refraction](images/fig-1.13-refraction.jpg)
In order to determine the direction of the refracted ray, we have to solve for $\sin\theta'$:
$$ \sin\theta' = \frac{\eta}{\eta'} \cdot \sin\theta $$
On the refracted side of the surface there is a refracted ray $\mathbf{R'}$ and a normal
$\mathbf{n'}$, and there exists an angle, $\theta'$, between them. We can split $\mathbf{R'}$ into
the parts of the ray that are perpendicular to $\mathbf{n'}$ and parallel to $\mathbf{n'}$:
$$ \mathbf{R'} = \mathbf{R'}_{\bot} + \mathbf{R'}_{\parallel} $$
If we solve for $\mathbf{R'}_{\bot}$ and $\mathbf{R'}_{\parallel}$ we get:
$$ \mathbf{R'}_{\bot} = \frac{\eta}{\eta'} (\mathbf{R} + \cos\theta \mathbf{n}) $$
$$ \mathbf{R'}_{\parallel} = -\sqrt{1 - |\mathbf{R'}_{\bot}|^2} \mathbf{n} $$
You can go ahead and prove this for yourself if you want, but we will treat it as fact and move on.
The rest of the book will not require you to understand the proof.
We still need to solve for $\cos\theta$. It is well known that the dot product of two vectors can
be explained in terms of the cosine of the angle between them:
$$ \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \cos\theta $$
If we restrict $\mathbf{a}$ and $\mathbf{b}$ to be unit vectors:
$$ \mathbf{a} \cdot \mathbf{b} = \cos\theta $$
We can now rewrite $\mathbf{R'}_{\bot}$ in terms of known quantities:
$$ \mathbf{R'}_{\bot} =
\frac{\eta}{\eta'} (\mathbf{R} + (\mathbf{-R} \cdot \mathbf{n}) \mathbf{n}) $$
When we combine them back together, we can write a function to calculate $\mathbf{R'}$:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub fn refract(self, n: Vec3, etai_over_etat: f64) -> Vec3 {
let cos_theta = ((-1.0) * self).dot(n).min(1.0);
let r_out_perp = etai_over_etat * (self + cos_theta * n);
let r_out_parallel = -(1.0 - r_out_perp.length().powi(2)).abs().sqrt() * n;
r_out_perp + r_out_parallel
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [refract]: [vec.rs] Refraction function]
And the dielectric material that always refracts is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct Dielectric {
ir: f64
}
impl Dielectric {
pub fn new(index_of_refraction: f64) -> Dielectric {
Dielectric {
ir: index_of_refraction
}
}
}
impl Scatter for Dielectric {
fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<(Color, Ray)> {
let refraction_ratio = if rec.front_face {
1.0 / self.ir
} else {
self.ir
};
let unit_direction = r_in.direction().normalized();
let refracted = unit_direction.refract(rec.normal, refraction_ratio);
let scattered = Ray::new(rec.p, refracted);
Some((Color::new(1.0, 1.0, 1.0), scattered))
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [dielectric]: [material.rs] Dielectric material class that always refracts]
Now we'll update the scene to change the left and center spheres to glass:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let mat_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let mat_center = Rc::new(Dielectric::new(1.5));
let mat_left = Rc::new(Dielectric::new(1.5));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let mat_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [two-glass]: [main.rs] Changing left and center spheres to glass]
This gives us the following result:
![Image 14: Glass sphere that always refracts
](images/img-1.14-glass-always-refract.png class=pixel)
Total Internal Reflection
--------------------------
That definitely doesn't look right. One troublesome practical issue is that when the ray is in the
material with the higher refractive index, there is not always a solution to Snell’s law within the
real numbers, and thus refraction is not possible. If we refer back to Snell's law and the
derivation of $\sin\theta'$:
$$ \sin\theta' = \frac{\eta}{\eta'} \cdot \sin\theta $$
If the ray is inside glass and outside is air ($\eta = 1.5$ and $\eta' = 1.0$):
$$ \sin\theta' = \frac{1.5}{1.0} \cdot \sin\theta $$
The value of $\sin\theta'$ cannot be greater than 1. So, if
$$ \frac{1.5}{1.0} \cdot \sin\theta > 1.0, $$
the equality between the two sides of the equation is broken, and a solution cannot exist. If a
solution does not exist, the glass cannot refract, and therefore must reflect the ray:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
if refraction_ratio * sin_theta > 1.0 {
// Must Reflect
..
} else {
// Can Refract
..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [dielectric]: [material.rs] Determining if the ray can refract]
Here all the light is reflected, and because in practice that is usually inside solid objects, it
is called “total internal reflection”. This is why sometimes the water-air boundary acts as a
perfect mirror when you are submerged.
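As a quick sanity check on the numbers -- `critical_angle_degrees` is a hypothetical helper for
illustration, not part of the renderer -- the critical angle for glass ($\eta = 1.5$) against air
is $\arcsin(1 / 1.5) \approx 41.8°$:

```rust
// Critical angle (in degrees) for total internal reflection when a ray
// travels from a medium of index `n` into one of index `n_prime`.
// Only meaningful when n > n_prime.
fn critical_angle_degrees(n: f64, n_prime: f64) -> f64 {
    (n_prime / n).asin().to_degrees()
}
```

Any ray striking the inside of a glass surface at more than about 41.8° from the normal must
reflect.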
We can solve for `sin_theta` using the trigonometric identities:
$$ \sin\theta = \sqrt{1 - \cos^2\theta} $$
and
$$ \cos\theta = \mathbf{R} \cdot \mathbf{n} $$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let cos_theta = ((-1.0) * unit_direction).dot(rec.normal).min(1.0);
let sin_theta = (1.0 - cos_theta.powi(2)).sqrt();
if refraction_ratio * sin_theta > 1.0 {
// Must Reflect
..
} else {
// Can Refract
..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [dielectric]: [material.rs] Determining if the ray can refract]
And the dielectric material that always refracts (when possible) is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
impl Scatter for Dielectric {
fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<(Color, Ray)> {
let refraction_ratio = if rec.front_face {
1.0 / self.ir
} else {
self.ir
};
let unit_direction = r_in.direction().normalized();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let cos_theta = ((-1.0) * unit_direction).dot(rec.normal).min(1.0);
let sin_theta = (1.0 - cos_theta.powi(2)).sqrt();
let direction = if refraction_ratio * sin_theta > 1.0 {
unit_direction.reflect(rec.normal)
} else {
unit_direction.refract(rec.normal, refraction_ratio)
};
let scattered = Ray::new(rec.p, direction);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
Some((Color::new(1.0, 1.0, 1.0), scattered))
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [dielectric]: [material.rs] Dielectric material struct with reflection]
Attenuation is always 1 -- the glass surface absorbs nothing. If we try that out with these
parameters:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let mat_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
let mat_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
let mat_left = Rc::new(Dielectric::new(1.5));
let mat_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 0.0));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-dielectric]: [main.rs] Scene with dielectric and shiny sphere]
We get:
![Image 15: Glass sphere that sometimes refracts
](images/img-1.15-glass-sometimes-refract.png class=pixel)
Schlick Approximation
----------------------
Now real glass has reflectivity that varies with angle -- look at a window at a steep angle and it
becomes a mirror. There is a big ugly equation for that, but almost everybody uses a cheap and
surprisingly accurate polynomial approximation by Christophe Schlick. This yields our full glass
material:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
impl Dielectric {
..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
fn reflectance(cosine: f64, ref_idx: f64) -> f64 {
// Use Schlick's approximation for reflectance
let r0 = ((1.0 - ref_idx) / (1.0 + ref_idx)).powi(2);
r0 + (1.0 - r0) * (1.0 - cosine).powi(5)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
impl Scatter for Dielectric {
fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<(Color, Ray)> {
let refraction_ratio = if rec.front_face {
1.0 / self.ir
} else {
self.ir
};
let unit_direction = r_in.direction().normalized();
let cos_theta = ((-1.0) * unit_direction).dot(rec.normal).min(1.0);
let sin_theta = (1.0 - cos_theta.powi(2)).sqrt();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let mut rng = rand::thread_rng();
let cannot_refract = refraction_ratio * sin_theta > 1.0;
let will_reflect = rng.gen::<f64>() < Self::reflectance(cos_theta, refraction_ratio);
let direction = if cannot_refract || will_reflect {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
unit_direction.reflect(rec.normal)
} else {
unit_direction.refract(rec.normal, refraction_ratio)
};
let scattered = Ray::new(rec.p, direction);
Some((Color::new(1.0, 1.0, 1.0), scattered))
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [glass]: [material.rs] Full glass material]
Modeling a Hollow Glass Sphere
-------------------------------
An interesting and easy trick with dielectric spheres is to note that if you use a negative radius,
the geometry is unaffected, but the surface normal points inward. This can be used as a bubble to
make a hollow glass sphere:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let mat_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
let mat_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
let mat_left = Rc::new(Dielectric::new(1.5));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let mat_left_inner = Rc::new(Dielectric::new(1.5));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let mat_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));
let sphere_ground = Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0, mat_ground);
let sphere_center = Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5, mat_center);
let sphere_left = Sphere::new(Point3::new(-1.0, 0.0, -1.0), 0.5, mat_left);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let sphere_left_inner = Sphere::new(Point3::new(-1.0, 0.0, -1.0), -0.4, mat_left_inner);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let sphere_right = Sphere::new(Point3::new(1.0, 0.0, -1.0), 0.5, mat_right);
world.push(Box::new(sphere_ground));
world.push(Box::new(sphere_center));
world.push(Box::new(sphere_left));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
world.push(Box::new(sphere_left_inner));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
world.push(Box::new(sphere_right));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-hollow-glass]: [main.rs] Scene with hollow glass sphere]
This gives:
![Image 16: A hollow glass sphere](images/img-1.16-glass-hollow.png class=pixel)
Positionable Camera
====================================================================================================
Cameras, like dielectrics, are a pain to debug. So I always develop mine incrementally. First, let’s
allow an adjustable field of view (_fov_). This is the angle you see through the portal. Since our
image is not square, the fov is different horizontally and vertically. I always use vertical fov. I
also usually specify it in degrees and change to radians inside a constructor -- a matter of
personal taste.
Camera Viewing Geometry
------------------------
I first keep the rays coming from the origin and heading to the $z = -1$ plane. We could make it the
$z = -2$ plane, or whatever, as long as we made $h$ a ratio to that distance. Here is our setup:
![Figure [cam-view-geom]: Camera viewing geometry](images/fig-1.14-cam-view-geom.jpg)
This implies $h = \tan(\frac{\theta}{2})$. Our camera now becomes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pub fn new(vfov: f64, aspect_ratio: f64) -> Camera {
const FOCAL_LENGTH: f64 = 1.0;
// Vertical field-of-view in degrees
let theta = std::f64::consts::PI / 180.0 * vfov;
let viewport_height = 2.0 * (theta / 2.0).tan();
let viewport_width = aspect_ratio * viewport_height;
let orig = Point3::new(0.0, 0.0, 0.0);
let h = Vec3::new(viewport_width, 0.0, 0.0);
let v = Vec3::new(0.0, viewport_height, 0.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
let llc = orig - h / 2.0 - v / 2.0 - Vec3::new(0.0, 0.0, FOCAL_LENGTH);
Camera {
origin: orig,
horizontal: h,
vertical: v,
lower_left_corner: llc
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-fov]: [camera.rs] Camera with adjustable field-of-view (fov)]
Calling it with `Camera::new(90.0, ASPECT_RATIO)` and these spheres:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn main() {
..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
// World
let r: f64 = (std::f64::consts::PI / 4.0).cos();
let mut world = World::new();
let mat_left = Rc::new(Lambertian::new(Color::new(0.0, 0.0, 1.0)));
let mat_right = Rc::new(Lambertian::new(Color::new(1.0, 0.0, 0.0)));
let sphere_left = Sphere::new(Point3::new(-r, 0.0, -1.0), r, mat_left);
let sphere_right = Sphere::new(Point3::new(r, 0.0, -1.0), r, mat_right);
world.push(Box::new(sphere_left));
world.push(Box::new(sphere_right));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
// Camera
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
let cam = Camera::new(90.0, ASPECT_RATIO);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-wide-angle]: [main.rs] Scene with wide-angle camera]
gives:
![Image 17: A wide-angle view](images/img-1.17-wide-view.png class=pixel)
Positioning and Orienting the Camera
-------------------------------------
To get an arbitrary viewpoint, let’s first name the points we care about. We’ll call the position
where we place the camera _lookfrom_, and the point we look at _lookat_. (Later, if you want, you
could define a direction to look in instead of a point to look at.)
We also need a way to specify the roll, or sideways tilt, of the camera: the rotation around the
lookat-lookfrom axis. Another way to think about it is that even if you keep `lookfrom` and `lookat`
constant, you can still rotate your head around your nose. What we need is a way to specify an “up”
vector for the camera. This up vector should lie in the plane orthogonal to the view direction.
![Figure [cam-view-dir]: Camera view direction](images/fig-1.15-cam-view-dir.jpg)
We can actually use any up vector we want, and simply project it onto this plane to get an up vector
for the camera. I use the common convention of naming a “view up” (_vup_) vector. A couple of cross
products, and we now have a complete orthonormal basis $(u,v,w)$ to describe our camera’s
orientation.
![Figure [cam-view-up]: Camera view up direction](images/fig-1.16-cam-view-up.jpg)
Remember that `vup`, `v`, and `w` are all in the same plane. Note that, like before when our fixed
camera faced -Z, our arbitrary view camera faces -w. And keep in mind that we can -- but we don’t
have to -- use world up $(0,1,0)$ to specify vup. This is convenient and will naturally keep your
camera horizontally level until you decide to experiment with crazy camera angles.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pub fn new(lookfrom: Point3,
           lookat: Point3,
           vup: Vec3,
           vfov: f64,
           aspect_ratio: f64) -> Camera {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    // Vertical field-of-view in degrees
    let theta = std::f64::consts::PI / 180.0 * vfov;
    let viewport_height = 2.0 * (theta / 2.0).tan();
    let viewport_width = aspect_ratio * viewport_height;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    let cw = (lookfrom - lookat).normalized();
    let cu = vup.cross(cw).normalized();
    let cv = cw.cross(cu);

    let h = viewport_width * cu;
    let v = viewport_height * cv;

    let llc = lookfrom - h / 2.0 - v / 2.0 - cw;

    Camera {
        origin: lookfrom,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
        horizontal: h,
        vertical: v,
        lower_left_corner: llc
    }
}

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pub fn get_ray(&self, s: f64, t: f64) -> Ray {
    Ray::new(self.origin,
             self.lower_left_corner + s * self.horizontal + t * self.vertical - self.origin)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-orient]: [camera.rs] Positionable and orientable camera]
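To convince yourself that the two cross products really do produce an orthonormal basis, here is a small self-contained check. The free functions `sub`, `dot`, `cross`, and `normalized` over plain `[f64; 3]` arrays are stand-ins for the book's `Vec3` methods:

```rust
// Check the camera basis construction: w points opposite the view direction,
// and two cross products yield an orthonormal (u, v, w) triple.
fn sub(a: [f64; 3], b: [f64; 3]) -> [f64; 3] { [a[0]-b[0], a[1]-b[1], a[2]-b[2]] }
fn dot(a: [f64; 3], b: [f64; 3]) -> f64 { a[0]*b[0] + a[1]*b[1] + a[2]*b[2] }
fn cross(a: [f64; 3], b: [f64; 3]) -> [f64; 3] {
    [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
}
fn normalized(a: [f64; 3]) -> [f64; 3] {
    let len = dot(a, a).sqrt();
    [a[0]/len, a[1]/len, a[2]/len]
}

fn main() {
    // The viewpoint used in the next scene.
    let lookfrom = [-2.0, 2.0, 1.0];
    let lookat = [0.0, 0.0, -1.0];
    let vup = [0.0, 1.0, 0.0];

    let w = normalized(sub(lookfrom, lookat));
    let u = normalized(cross(vup, w));
    let v = cross(w, u);

    // All pairwise dot products vanish (orthogonality), and v is unit
    // length for free because w and u already are.
    assert!(dot(u, v).abs() < 1e-12);
    assert!(dot(u, w).abs() < 1e-12);
    assert!(dot(v, w).abs() < 1e-12);
    assert!((dot(v, v) - 1.0).abs() < 1e-12);
    println!("(u, v, w) is an orthonormal basis");
}
```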
We'll change back to the prior scene, and use the new viewpoint:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
// World
let mut world = World::new();

let mat_ground = Rc::new(Lambertian::new(Color::new(0.8, 0.8, 0.0)));
let mat_center = Rc::new(Lambertian::new(Color::new(0.1, 0.2, 0.5)));
let mat_left = Rc::new(Dielectric::new(1.5));
let mat_left_inner = Rc::new(Dielectric::new(1.5));
let mat_right = Rc::new(Metal::new(Color::new(0.8, 0.6, 0.2), 1.0));

let sphere_ground = Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0, mat_ground);
let sphere_center = Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5, mat_center);
let sphere_left = Sphere::new(Point3::new(-1.0, 0.0, -1.0), 0.5, mat_left);
let sphere_left_inner = Sphere::new(Point3::new(-1.0, 0.0, -1.0), -0.45, mat_left_inner);
let sphere_right = Sphere::new(Point3::new(1.0, 0.0, -1.0), 0.5, mat_right);

world.push(Box::new(sphere_ground));
world.push(Box::new(sphere_center));
world.push(Box::new(sphere_left));
world.push(Box::new(sphere_left_inner));
world.push(Box::new(sphere_right));

// Camera
let cam = Camera::new(Point3::new(-2.0, 2.0, 1.0),
                      Point3::new(0.0, 0.0, -1.0),
                      Vec3::new(0.0, 1.0, 0.0),
                      90.0,
                      ASPECT_RATIO);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-free-view]: [main.rs] Scene with alternate viewpoint]
to get:
![Image 18: A distant view](images/img-1.18-view-distant.png class=pixel)
And we can change field of view:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
// Camera
let cam = Camera::new(Point3::new(-2.0, 2.0, 1.0),
                      Point3::new(0.0, 0.0, -1.0),
                      Vec3::new(0.0, 1.0, 0.0),
                      20.0,
                      ASPECT_RATIO);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [change-field-view]: [main.rs] Change field of view]
to get:
![Image 19: Zooming in](images/img-1.19-view-zoom.png class=pixel)
Defocus Blur
====================================================================================================
Now for our final feature: defocus blur. Note that photographers call it “depth of field”, so be
sure to use the term “defocus blur” only among your ray tracing friends.
The reason we have defocus blur in real cameras is that they need a big hole (rather than just a
pinhole) to gather light. This would defocus everything, but if we stick a lens in the hole, there
will be a certain distance where everything is in focus. You can think of a lens this way: all light
rays coming _from_ a specific point at the focus distance -- and that hit the lens -- will be bent
back _to_ a single point on the image sensor.
We call the distance between the projection point and the plane where everything is in perfect focus
the _focus distance_. Be aware that the focus distance is not the same as the focal length -- the
_focal length_ is the distance between the projection point and the image plane.
In a physical camera, the focus distance is controlled by the distance between the lens and the
film/sensor. That is why you see the lens move relative to the camera when you change what is in
focus (that may happen in your phone camera too, but the sensor moves). The “aperture” is a hole to
control how big the lens is effectively. For a real camera, if you need more light you make the
aperture bigger, and will get more defocus blur. For our virtual camera, we can have a perfect
sensor and never need more light, so we only have an aperture when we want defocus blur.
A Thin Lens Approximation
--------------------------
A real camera has a complicated compound lens. For our code we could simulate the order: sensor,
then lens, then aperture. Then we could figure out where to send the rays, and flip the image after
it's computed (the image is projected upside down on the film). Graphics people, however, usually
use a thin lens approximation:
![Figure [cam-lens]: Camera lens model](images/fig-1.17-cam-lens.jpg)
We don’t need to simulate any of the inside of the camera. For the purposes of rendering an image
outside the camera, that would be unnecessary complexity. Instead, I usually start rays from the
lens, and send them toward the focus plane (`focus_dist` away from the lens), where everything on
that plane is in perfect focus.
![Figure [cam-film-plane]: Camera focus plane](images/fig-1.18-cam-film-plane.jpg)
Generating Sample Rays
-----------------------
Normally, all scene rays originate from the `lookfrom` point. In order to accomplish defocus blur,
generate random scene rays originating from inside a disk centered at the `lookfrom` point. The
larger the radius, the greater the defocus blur. You can think of our original camera as having a
defocus disk of radius zero (no blur at all), so all rays originated at the disk center
(`lookfrom`).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub fn random_in_unit_disk() -> Vec3 {
    let mut rng = rand::thread_rng();

    loop {
        let p = Vec3::new(rng.gen_range(-1.0..1.0), rng.gen_range(-1.0..1.0), 0.0);
        if p.length() < 1.0 {
            return p;
        }
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [rand-in-unit-disk]: [vec.rs] Generate random point inside unit disk]
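The rejection loop above terminates quickly: a unit disk covers $\pi/4 \approx 79\%$ of the enclosing square, so on average fewer than two candidates are needed. A dependency-free sketch of the same idea, substituting a tiny linear congruential generator for the `rand` crate (the LCG constants are illustrative, not from the book):

```rust
// A minimal pseudo-random source so this sketch runs without the `rand` crate.
struct Lcg(u64);

impl Lcg {
    // Returns a pseudo-random f64 in [-1, 1).
    fn next(&mut self) -> f64 {
        self.0 = self.0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        ((self.0 >> 11) as f64) / ((1u64 << 53) as f64) * 2.0 - 1.0
    }
}

// Rejection sampling: draw points in the square [-1, 1)^2 and keep only
// those strictly inside the unit circle.
fn random_in_unit_disk(rng: &mut Lcg) -> (f64, f64) {
    loop {
        let (x, y) = (rng.next(), rng.next());
        if x * x + y * y < 1.0 {
            return (x, y);
        }
    }
}

fn main() {
    let mut rng = Lcg(42);
    for _ in 0..1000 {
        let (x, y) = random_in_unit_disk(&mut rng);
        assert!(x * x + y * y < 1.0);
    }
    println!("all samples inside the unit disk");
}
```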
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct Camera {
    origin: Point3,
    lower_left_corner: Point3,
    horizontal: Vec3,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    vertical: Vec3,
    cu: Vec3,
    cv: Vec3,
    lens_radius: f64
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}

impl Camera {
    pub fn new(lookfrom: Point3,
               lookat: Point3,
               vup: Vec3,
               vfov: f64,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
               aspect_ratio: f64,
               aperture: f64,
               focus_dist: f64) -> Camera {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
        // Vertical field-of-view in degrees
        let theta = std::f64::consts::PI / 180.0 * vfov;
        let viewport_height = 2.0 * (theta / 2.0).tan();
        let viewport_width = aspect_ratio * viewport_height;

        let cw = (lookfrom - lookat).normalized();
        let cu = vup.cross(cw).normalized();
        let cv = cw.cross(cu);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
        let h = focus_dist * viewport_width * cu;
        let v = focus_dist * viewport_height * cv;

        let llc = lookfrom - h / 2.0 - v / 2.0 - focus_dist * cw;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust

        Camera {
            origin: lookfrom,
            horizontal: h,
            vertical: v,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
            lower_left_corner: llc,
            cu: cu,
            cv: cv,
            lens_radius: aperture / 2.0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
        }
    }

    pub fn get_ray(&self, s: f64, t: f64) -> Ray {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
        let rd = self.lens_radius * Vec3::random_in_unit_disk();
        let offset = self.cu * rd.x() + self.cv * rd.y();

        Ray::new(self.origin + offset,
                 self.lower_left_corner + s * self.horizontal + t * self.vertical - self.origin - offset)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-dof]: [camera.rs] Camera with adjustable depth-of-field (dof)]
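Why does the focus plane stay sharp? Because the ray direction is computed as the target point minus the (offset) ray origin, every lens sample converges on the same point at ray parameter 1. A small self-contained check of that geometry, with plain `[f64; 3]` arrays standing in for the book's `Vec3` and dyadic numbers chosen so the floating-point comparisons are exact:

```rust
fn add(a: [f64; 3], b: [f64; 3]) -> [f64; 3] { [a[0]+b[0], a[1]+b[1], a[2]+b[2]] }
fn sub(a: [f64; 3], b: [f64; 3]) -> [f64; 3] { [a[0]-b[0], a[1]-b[1], a[2]-b[2]] }

fn main() {
    let origin = [0.0, 0.0, 0.0];
    // lower_left_corner with focus_dist already folded in, as in the listing.
    let lower_left_corner = [-2.0, -1.0, -3.0];
    // Target point on the focus plane for some fixed (s, t):
    let target = add(lower_left_corner, [1.25, 0.75, 0.0]);

    // Two different lens offsets. Since direction = target - (origin + offset),
    // the ray at parameter 1 lands on the target for any offset.
    for offset in [[0.0625, 0.0, 0.0], [-0.03125, 0.0625, 0.0]] {
        let ray_origin = add(origin, offset);
        let direction = sub(target, ray_origin);
        assert_eq!(add(ray_origin, direction), target);
    }
    println!("all lens samples converge on the focus plane");
}
```

Points off the focus plane do not get this cancellation, so their lens samples land at different image locations and blur together.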
Using a big aperture:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
// Camera
let lookfrom = Point3::new(3.0, 3.0, 2.0);
let lookat = Point3::new(0.0, 0.0, -1.0);
let vup = Vec3::new(0.0, 1.0, 0.0);
let dist_to_focus = (lookfrom - lookat).length();
let aperture = 2.0;

let cam = Camera::new(lookfrom,
                      lookat,
                      vup,
                      20.0,
                      ASPECT_RATIO,
                      aperture,
                      dist_to_focus);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-camera-dof]: [main.rs] Scene camera with depth-of-field]
We get:
![Image 20: Spheres with depth-of-field](images/img-1.20-depth-of-field.png class=pixel)
Where Next?
====================================================================================================
A Final Render
---------------
First let’s make the image on the cover of this book -- lots of random spheres:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
fn random_scene() -> World {
    let mut rng = rand::thread_rng();
    let mut world = World::new();

    let ground_mat = Rc::new(Lambertian::new(Color::new(0.5, 0.5, 0.5)));
    let ground_sphere = Sphere::new(Point3::new(0.0, -1000.0, 0.0), 1000.0, ground_mat);

    world.push(Box::new(ground_sphere));

    for a in -11..=11 {
        for b in -11..=11 {
            let choose_mat: f64 = rng.gen();
            let center = Point3::new((a as f64) + rng.gen_range(0.0..0.9),
                                     0.2,
                                     (b as f64) + rng.gen_range(0.0..0.9));

            if choose_mat < 0.8 {
                // Diffuse
                let albedo = Color::random(0.0..1.0) * Color::random(0.0..1.0);
                let sphere_mat = Rc::new(Lambertian::new(albedo));
                let sphere = Sphere::new(center, 0.2, sphere_mat);

                world.push(Box::new(sphere));
            } else if choose_mat < 0.95 {
                // Metal
                let albedo = Color::random(0.4..1.0);
                let fuzz = rng.gen_range(0.0..0.5);
                let sphere_mat = Rc::new(Metal::new(albedo, fuzz));
                let sphere = Sphere::new(center, 0.2, sphere_mat);

                world.push(Box::new(sphere));
            } else {
                // Glass
                let sphere_mat = Rc::new(Dielectric::new(1.5));
                let sphere = Sphere::new(center, 0.2, sphere_mat);

                world.push(Box::new(sphere));
            }
        }
    }

    let mat1 = Rc::new(Dielectric::new(1.5));
    let mat2 = Rc::new(Lambertian::new(Color::new(0.4, 0.2, 0.1)));
    let mat3 = Rc::new(Metal::new(Color::new(0.7, 0.6, 0.5), 0.0));

    let sphere1 = Sphere::new(Point3::new(0.0, 1.0, 0.0), 1.0, mat1);
    let sphere2 = Sphere::new(Point3::new(-4.0, 1.0, 0.0), 1.0, mat2);
    let sphere3 = Sphere::new(Point3::new(4.0, 1.0, 0.0), 1.0, mat3);

    world.push(Box::new(sphere1));
    world.push(Box::new(sphere2));
    world.push(Box::new(sphere3));

    world
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn main() {
    // Image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    const ASPECT_RATIO: f64 = 3.0 / 2.0;
    const IMAGE_WIDTH: u64 = 1200;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    const IMAGE_HEIGHT: u64 = ((IMAGE_WIDTH as f64) / ASPECT_RATIO) as u64;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    const SAMPLES_PER_PIXEL: u64 = 500;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    const MAX_DEPTH: u64 = 50;

    // World
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    let world = random_scene();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust

    // Camera
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    let lookfrom = Point3::new(13.0, 2.0, 3.0);
    let lookat = Point3::new(0.0, 0.0, 0.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    let vup = Vec3::new(0.0, 1.0, 0.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    let dist_to_focus = 10.0;
    let aperture = 0.1;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust

    let cam = Camera::new(lookfrom,
                          lookat,
                          vup,
                          20.0,
                          ASPECT_RATIO,
                          aperture,
                          dist_to_focus);

    ..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-final]: [main.rs] Final scene]
This gives:
![Image 21: Final scene](images/img-1.21-book1-final.jpg)
An interesting thing you might note is that the glass balls don’t really have shadows, which makes
them look like they are floating. This is not a bug -- you don’t see glass balls much in real life, where
they also look a bit strange, and indeed seem to float on cloudy days. A point on the big sphere
under a glass ball still has lots of light hitting it because the sky is re-ordered rather than
blocked.
Parallelism
------------
In order to make our program run in parallel, we will use the crate `rayon`, and we have to use data
structures that are thread-safe. In Rust this is ensured by the borrow checker and the `Send` and
`Sync` marker traits. For more information, have a look at https://doc.rust-lang.org/book/ch16-00-concurrency.html.
For now, the only thing we are interested in is making our code thread-safe -- in other words, making
it compile. We must replace all `Rc`s with `Arc`s, the thread-safe version of `Rc`. Also, since our
`World` is needed in each thread, it must be `Send` and `Sync` so it can be used with `rayon`. We can
achieve this simply by imposing those traits on `Hit` and `Scatter`. As a consequence, our `Sphere`
will be `Send` and `Sync` as well, and everything is fine:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct HitRecord {
    pub p: Point3,
    pub normal: Vec3,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    pub mat: Arc<dyn Scatter>,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    pub t: f64,
    pub front_face: bool
}
..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pub trait Hit : Send + Sync {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [hit-thread-safety]: [hit.rs] Use a thread-safe reference counter and presume thread-safety traits]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
pub trait Scatter : Send + Sync {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
..
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scatter-thread-safety]: [material.rs] Use a thread-safe reference counter and presume thread-safety traits]
There is also an `Rc` in our `Sphere` struct to be replaced:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
pub struct Sphere {
    center: Point3,
    radius: f64,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    mat: Arc<dyn Scatter>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [sphere-thread-safety]: [sphere.rs] Use a thread-safe reference counter in the Sphere struct]
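To see why the switch matters, here is a minimal, self-contained sketch; the `Scatter` trait and `Lambertian` struct below are simplified stand-ins for the book's versions. Cloning the `Arc` into several threads compiles precisely because the trait object is `Send + Sync`; the same code with `Rc` would be rejected by the compiler, since `Rc`'s non-atomic reference count is not safe to share:

```rust
use std::sync::Arc;
use std::thread;

// Simplified stand-in for the book's Scatter trait. The Send + Sync
// supertraits make trait objects of it shareable across threads.
trait Scatter: Send + Sync {
    fn albedo(&self) -> f64;
}

struct Lambertian { albedo: f64 }

impl Scatter for Lambertian {
    fn albedo(&self) -> f64 { self.albedo }
}

fn main() {
    // Arc uses an atomic reference count, so clones may move to other threads.
    let mat: Arc<dyn Scatter> = Arc::new(Lambertian { albedo: 0.5 });

    let handles: Vec<_> = (0..4).map(|_| {
        let m = Arc::clone(&mat);
        thread::spawn(move || m.albedo())
    }).collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), 0.5);
    }
    println!("all threads saw albedo 0.5");
}
```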
Finally, we can parallelize our code:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
fn random_scene() -> World {
    let mut rng = rand::thread_rng();
    let mut world = World::new();

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    let ground_mat = Arc::new(Lambertian::new(Color::new(0.5, 0.5, 0.5)));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    let ground_sphere = Sphere::new(Point3::new(0.0, -1000.0, 0.0), 1000.0, ground_mat);

    world.push(Box::new(ground_sphere));

    for a in -11..=11 {
        for b in -11..=11 {
            let choose_mat: f64 = rng.gen();
            let center = Point3::new((a as f64) + rng.gen_range(0.0..0.9),
                                     0.2,
                                     (b as f64) + rng.gen_range(0.0..0.9));

            if choose_mat < 0.8 {
                // Diffuse
                let albedo = Color::random(0.0..1.0) * Color::random(0.0..1.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
                let sphere_mat = Arc::new(Lambertian::new(albedo));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
                let sphere = Sphere::new(center, 0.2, sphere_mat);

                world.push(Box::new(sphere));
            } else if choose_mat < 0.95 {
                // Metal
                let albedo = Color::random(0.4..1.0);
                let fuzz = rng.gen_range(0.0..0.5);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
                let sphere_mat = Arc::new(Metal::new(albedo, fuzz));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
                let sphere = Sphere::new(center, 0.2, sphere_mat);

                world.push(Box::new(sphere));
            } else {
                // Glass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
                let sphere_mat = Arc::new(Dielectric::new(1.5));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
                let sphere = Sphere::new(center, 0.2, sphere_mat);

                world.push(Box::new(sphere));
            }
        }
    }

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust highlight
    let mat1 = Arc::new(Dielectric::new(1.5));
    let mat2 = Arc::new(Lambertian::new(Color::new(0.4, 0.2, 0.1)));
    let mat3 = Arc::new(Metal::new(Color::new(0.7, 0.6, 0.5), 0.0));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Rust
    let sphere1 = Sphere::new(Point3::new(0.0, 1.0, 0.0), 1.0, mat1);
    let sphere2 = Sphere::new(Point3::new(-4.0, 1.0, 0.0), 1.0, mat2);
    let sphere3 = Sphere::new(Point3::new(4.0, 1.0, 0.0), 1.0, mat3);

    world.push(Box::new(sphere1));
    world.push(Box::new(sphere2));
    world.push(Box::new(sphere3));

    world
}
..
fn main() {
    ..
    for j in (0..IMAGE_HEIGHT).rev() {
        eprintln!("Scanlines remaining: {}", j + 1);

        let scanline: Vec<Color> = (0..IMAGE_WIDTH).into_par_iter().map(|i| {
            let mut pixel_color = Color::new(0.0, 0.0, 0.0);
            for _ in 0..SAMPLES_PER_PIXEL {
                let mut rng = rand::thread_rng();
                let random_u: f64 = rng.gen();
                let random_v: f64 = rng.gen();

                let u = ((i as f64) + random_u) / ((IMAGE_WIDTH - 1) as f64);
                let v = ((j as f64) + random_v) / ((IMAGE_HEIGHT - 1) as f64);

                let r = cam.get_ray(u, v);
                pixel_color += ray_color(&r, &world, MAX_DEPTH);
            }

            pixel_color
        }).collect();

        for pixel_color in scanline {
            println!("{}", pixel_color.format_color(SAMPLES_PER_PIXEL));
        }
    }
    ..
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [parallel]: [main.rs] Compute all pixels of a scanline in parallel]
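If `into_par_iter` feels magical, it helps to see what it replaces. The following hand-rolled sketch does the same job for one scanline with `std::thread`: split the pixel range into chunks, shade each chunk on its own thread, and reassemble results in order. It is purely illustrative (the `shade` function is a stand-in for the per-pixel sampling loop, and spawning a thread per chunk is far less efficient than rayon's work-stealing scheduler):

```rust
use std::thread;

// Stand-in for the per-pixel sampling loop.
fn shade(i: u64) -> f64 {
    (i as f64) * 0.5
}

fn main() {
    const IMAGE_WIDTH: u64 = 8;
    const THREADS: u64 = 4;
    let chunk = IMAGE_WIDTH / THREADS;

    // One thread per chunk of the scanline.
    let handles: Vec<_> = (0..THREADS).map(|t| {
        thread::spawn(move || {
            (t * chunk..(t + 1) * chunk).map(shade).collect::<Vec<f64>>()
        })
    }).collect();

    // Joining in spawn order keeps pixels in left-to-right order.
    let scanline: Vec<f64> = handles.into_iter()
        .flat_map(|h| h.join().unwrap())
        .collect();

    assert_eq!(scanline, (0..IMAGE_WIDTH).map(shade).collect::<Vec<f64>>());
}
```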
Next Steps
-----------
You now have a cool ray tracer! What next?
1. Lights -- You can do this explicitly, by sending shadow rays to lights, or it can be done
implicitly by making some objects emit light, biasing scattered rays toward them, and then
downweighting those rays to cancel out the bias. Both work. I am in the minority in favoring
the latter approach.
2. Triangles -- Most cool models are in triangle form. The model I/O is the worst and almost
everybody tries to get somebody else’s code to do this.
3. Surface Textures -- This lets you paste images on like wall paper. Pretty easy and a good thing
to do.
4. Solid textures -- Ken Perlin has his code online. Andrew Kensler has some very cool info at his
blog.
5. Volumes and Media -- Cool stuff and will challenge your software architecture. I favor making
volumes have the hittable interface and probabilistically have intersections based on density.
Your rendering code doesn’t even have to know it has volumes with that method.
6. Parallelism -- Run $N$ copies of your code on $N$ cores with different random seeds. Average
the $N$ runs. This averaging can also be done hierarchically where $N/2$ pairs can be averaged
to get $N/4$ images, and pairs of those can be averaged. That method of parallelism should
extend well into the thousands of cores with very little coding.
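The hierarchical averaging in item 6 can be sketched in a few lines. Here each "image" is just a flat pixel buffer, and the values are chosen to be exactly representable so the final check is exact; the function names are illustrative:

```rust
// Average two pixel buffers of equal length.
fn average(a: &[f64], b: &[f64]) -> Vec<f64> {
    a.iter().zip(b).map(|(x, y)| 0.5 * (x + y)).collect()
}

fn main() {
    // Four independent "renders" of a 2-pixel image (different seeds would
    // produce different noise; here the values are fixed for illustration).
    let mut images = vec![
        vec![1.0, 1.0],
        vec![2.0, 2.0],
        vec![3.0, 3.0],
        vec![4.0, 4.0],
    ];

    // Pairwise reduction: N -> N/2 -> ... -> 1.
    while images.len() > 1 {
        images = images.chunks(2)
            .map(|pair| average(&pair[0], &pair[1]))
            .collect();
    }

    // The result equals the plain mean of all four runs.
    assert_eq!(images[0], vec![2.5, 2.5]);
}
```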
Have fun, and please send me your cool images!
(insert acknowledgments.md.html here)
Citing This Book
====================================================================================================
Consistent citations make it easier to identify the source, location and versions of this work. If
you are citing this book, we ask that you try to use one of the following forms if possible.
Basic Data
-----------
- **Title (series)**: “Ray Tracing in One Weekend Series”
- **Title (book)**: “Ray Tracing in One Weekend”
- **Author**: Peter Shirley
- **Editors**: Steve Hollasch, Trevor David Black
- **Rust translator**: Daniel Busch
- **Version/Edition**: v3.2.3
- **Date**: 2020-12-07
- **URL (series)**: https://raytracing.github.io/
- **URL (book)**: https://raytracing.github.io/books/RayTracingInOneWeekend.html
Snippets
---------
### Markdown
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[_Ray Tracing in One Weekend_](https://raytracing.github.io/books/RayTracingInOneWeekend.html)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### HTML
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<a href="https://raytracing.github.io/books/RayTracingInOneWeekend.html">Ray Tracing in One Weekend</a>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### LaTeX and BibTex
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~\cite{Shirley2020RTW1}
@misc{Shirley2020RTW1,
title = {Ray Tracing in One Weekend},
author = {Peter Shirley},
year = {2020},
month = {December},
note = {\small \texttt{https://raytracing.github.io/books/RayTracingInOneWeekend.html}},
url = {https://raytracing.github.io/books/RayTracingInOneWeekend.html}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### BibLaTeX
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\usepackage{biblatex}
~\cite{Shirley2020RTW1}
@online{Shirley2020RTW1,
title = {Ray Tracing in One Weekend},
author = {Peter Shirley},
year = {2020},
month = {December},
url = {https://raytracing.github.io/books/RayTracingInOneWeekend.html}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### IEEE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
“Ray Tracing in One Weekend.” raytracing.github.io/books/RayTracingInOneWeekend.html
(accessed MMM. DD, YYYY)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### MLA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ray Tracing in One Weekend. raytracing.github.io/books/RayTracingInOneWeekend.html
Accessed DD MMM. YYYY.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Peter Shirley]: https://github.com/petershirley
[Steve Hollasch]: https://github.com/hollasch
[Trevor David Black]: https://github.com/trevordblack
[Daniel Busch]: https://github.com/misterdanb