<!DOCTYPE html>
<meta charset="utf-8">
<link rel="icon" type="image/png" href="../favicon.png">
<!-- Markdeep: https://casual-effects.com/markdeep/ -->
**Ray Tracing: The Rest of Your Life**
[Peter Shirley][], [Trevor David Black][], [Steve Hollasch][]
<br>
Version 4.0.1, 2024-08-31
<br>
Copyright 2018-2024 Peter Shirley. All rights reserved.
Overview
====================================================================================================
In _Ray Tracing in One Weekend_ and _Ray Tracing: The Next Week_, you built a “real” ray tracer.
If you are motivated, you can take the source and information contained in those books to implement
any visual effect you want. The source provides a meaningful and robust foundation upon which to
build out a raytracer for a small hobby project. Most of the visual effects found in commercial ray
tracers rely on the techniques described in these first two books. However, your capacity to add
increasingly complicated visual effects like subsurface scattering or nested dielectrics will be
severely limited by a missing mathematical foundation. In this volume, I assume that you are either
a highly interested student, or are someone who is pursuing a career related to ray tracing. We will
be diving into the math of creating a very serious ray tracer. When you are done, you should be well
equipped to use and modify the various commercial ray tracers found in many popular domains, such as
the movie, television, product design, and architecture industries.
There are many, many things I do not cover in this short volume. For example, there are many ways of
writing Monte Carlo rendering programs--I dive into only one of them. I don’t cover shadow rays
(deciding instead to make rays more likely to go toward lights), nor do I cover bidirectional
methods, Metropolis methods, or photon mapping. You'll find many of these techniques in the
so-called "serious ray tracers", but they are not covered here because it is more important to cover
the concepts, math, and terms of the field. I think of this book as a deep exposure that should be
your first of many, and it will equip you with some of the concepts, math, and terms that you'll
need in order to study these and other interesting techniques.
I hope that you find the math as fascinating as I do.
See the [project README][readme] file for information about this project, the repository on GitHub,
directory structure, building & running, and how to make or reference corrections and contributions.
As before, see [our Further Reading wiki page][wiki-further] for additional project-related
resources.
These books have been formatted to print well directly from your browser. We also include PDFs of
each book [with each release][releases], in the "Assets" section.
Thanks to everyone who lent a hand on this project. You can find them in the acknowledgments section
at the end of this book.
A Simple Monte Carlo Program
====================================================================================================
Let’s start with one of the simplest Monte Carlo programs. If you're not familiar with Monte Carlo
programs, then it'll be good to pause and catch you up. There are two kinds of randomized
algorithms: Monte Carlo and Las Vegas. Randomized algorithms can be found everywhere in computer
graphics, so getting a decent foundation isn't a bad idea. A randomized algorithm uses some amount
of randomness in its computation. A Las Vegas random algorithm always produces the correct result,
whereas a Monte Carlo algorithm _may_ produce a correct result--and frequently gets it wrong! But
for especially complicated problems such as ray tracing, we may not place as huge a priority on
being perfectly exact as on getting an answer in a reasonable amount of time. Las Vegas algorithms
will eventually arrive at the correct result, but we can't make too many guarantees on how long it
will take to get there. The classic example of a Las Vegas algorithm is the _quicksort_ sorting
algorithm. The quicksort algorithm will always complete with a fully sorted list, but the time it
takes to complete is random. Another good example of a Las Vegas algorithm is the code that we use
to pick a random point in a unit sphere:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
inline vec3 random_in_unit_sphere() {
    while (true) {
        auto p = vec3::random(-1,1);
        if (p.length_squared() < 1)
            return p;
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [las-vegas-algo]: <kbd>[vec3.h]</kbd> A Las Vegas algorithm]
This code will always eventually arrive at a random point in the unit sphere, but we can't say
beforehand how long it'll take. It may take only 1 iteration, it may take 2, 3, 4, or even longer.
Whereas, a Monte Carlo program will give a statistical estimate of an answer, and this estimate will
get more and more accurate the longer you run it. Which means that at a certain point, we can just
decide that the answer is accurate _enough_ and call it quits. This basic characteristic of simple
programs producing noisy but ever-better answers is what Monte Carlo is all about, and is especially
good for applications like graphics where great accuracy is not needed.
Estimating Pi
--------------
The canonical example of a Monte Carlo algorithm is estimating $\pi$, so let's do that. There are
many ways to estimate $\pi$, with _Buffon's needle problem_ being a classic case study. In Buffon's
needle problem, one is presented with a floor made of parallel strips of floor board, each of the
same width. If a needle is randomly dropped onto the floor, what is the probability that the needle
will lie across two boards? (You can find more information on this problem with a simple Internet
search.)
We’ll do a variation inspired by this method. Suppose you have a circle inscribed inside a square:
![Figure [circ-square]: Estimating $\pi$ with a circle inside a square
](../images/fig-3.01-circ-square.jpg)
Now, suppose you pick random points inside the square. The fraction of those random points that end
up inside the circle should be proportional to the area of the circle. The exact fraction should in
fact be the ratio of the circle area to the square area:
$$ \frac{\pi r^2}{(2r)^2} = \frac{\pi}{4} $$
<div class='together'>
Since the $r$ cancels out, we can pick whatever is computationally convenient. Let’s go with $r=1$,
centered at the origin:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include <iostream>
#include <iomanip>
int main() {
std::cout << std::fixed << std::setprecision(12);
int inside_circle = 0;
int N = 100000;
for (int i = 0; i < N; i++) {
auto x = random_double(-1,1);
auto y = random_double(-1,1);
if (x*x + y*y < 1)
inside_circle++;
}
std::cout << "Estimate of Pi = " << (4.0 * inside_circle) / N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [estpi-1]: <kbd>[pi.cc]</kbd> Estimating $\pi$ -- version one]
The estimate of $\pi$ will vary from computer to computer based on the initial random seed. On
my computer, this gives me the answer `Estimate of Pi = 3.143760000000`.
</div>
Showing Convergence
--------------------
If we change the program to run forever and just print out a running estimate:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include <iostream>
#include <iomanip>
int main() {
std::cout << std::fixed << std::setprecision(12);
int inside_circle = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
int runs = 0;
while (true) {
runs++;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
auto x = random_double(-1,1);
auto y = random_double(-1,1);
if (x*x + y*y < 1)
inside_circle++;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
if (runs % 100000 == 0)
std::cout << "\rEstimate of Pi = " << (4.0 * inside_circle) / runs;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ delete
std::cout << "Estimate of Pi = " << (4.0 * inside_circle) / N << '\n';
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [estpi-2]: <kbd>[pi.cc]</kbd> Estimating $\pi$ -- version two]
Stratified Samples (Jittering)
-------------------------------
We get very quickly near $\pi$, and then more slowly zero in on it. This is an example of the _Law
of Diminishing Returns_, where each sample helps less than the last. This is the worst part of Monte
Carlo. We can mitigate this diminishing return by _stratifying_ the samples (often called
_jittering_), where instead of taking random samples, we take a grid and take one sample within
each:
![Figure [jitter]: Sampling areas with jittered points](../images/fig-3.02-jitter.jpg)
<div class='together'>
This changes the sample generation, but we need to know how many samples we are taking in advance
because we need to know the grid. Let’s take a million and try it both ways:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include <iostream>
#include <iomanip>
int main() {
std::cout << std::fixed << std::setprecision(12);
int inside_circle = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
int inside_circle_stratified = 0;
int sqrt_N = 1000;
for (int i = 0; i < sqrt_N; i++) {
for (int j = 0; j < sqrt_N; j++) {
auto x = random_double(-1,1);
auto y = random_double(-1,1);
if (x*x + y*y < 1)
inside_circle++;
x = 2*((i + random_double()) / sqrt_N) - 1;
y = 2*((j + random_double()) / sqrt_N) - 1;
if (x*x + y*y < 1)
inside_circle_stratified++;
}
}
std::cout
<< "Regular Estimate of Pi = "
<< (4.0 * inside_circle) / (sqrt_N*sqrt_N) << '\n'
<< "Stratified Estimate of Pi = "
<< (4.0 * inside_circle_stratified) / (sqrt_N*sqrt_N) << '\n';
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [estpi-3]: <kbd>[pi.cc]</kbd> Estimating $\pi$ -- version three]
</div>
<div class='together'>
On my computer, I get:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Regular Estimate of Pi = 3.141184000000
Stratified Estimate of Pi = 3.141460000000
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Where the first 12 decimal places of pi are:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
3.141592653589
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
Interestingly, the stratified method is not only better, it converges with a better asymptotic rate!
Unfortunately, this advantage decreases with the dimension of the problem (so for example, with the
3D sphere volume version the gap would be less). This is called the _Curse of Dimensionality_. Ray
tracing is a very high-dimensional algorithm, where each reflection adds two new dimensions:
$\phi_o$ and $\theta_o$. We won't be stratifying the output reflection angle in this book, simply
because it is a little bit too complicated to cover, but there is a lot of interesting research
currently happening in this space.
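To see the curse of dimensionality in action, here's a quick sketch. This is not one of the book's
listings; it just reuses `random_double()` from `rtweekend.h` and repeats our experiment in 3D,
estimating the volume of the unit sphere, $\frac{4}{3}\pi \approx 4.18879$. The stratified estimate
should still win, but typically by a smaller margin than in the 2D case:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::fixed << std::setprecision(12);

    int inside_sphere = 0;
    int inside_sphere_stratified = 0;
    int cbrt_N = 100;  // 100^3 = 1,000,000 total samples

    for (int i = 0; i < cbrt_N; i++) {
        for (int j = 0; j < cbrt_N; j++) {
            for (int k = 0; k < cbrt_N; k++) {
                auto x = random_double(-1,1);
                auto y = random_double(-1,1);
                auto z = random_double(-1,1);
                if (x*x + y*y + z*z < 1)
                    inside_sphere++;

                x = 2*((i + random_double()) / cbrt_N) - 1;
                y = 2*((j + random_double()) / cbrt_N) - 1;
                z = 2*((k + random_double()) / cbrt_N) - 1;
                if (x*x + y*y + z*z < 1)
                    inside_sphere_stratified++;
            }
        }
    }

    // The fraction of points inside the sphere, times the volume of the [-1,1]^3 cube (8).
    auto n = double(cbrt_N) * cbrt_N * cbrt_N;
    std::cout << "Regular    Estimate of Volume = " << 8.0 * inside_sphere / n << '\n';
    std::cout << "Stratified Estimate of Volume = " << 8.0 * inside_sphere_stratified / n << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~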
As an intermediary, we'll be stratifying the locations of the sampling positions around each pixel
location. Let's start with our familiar Cornell box scene.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include "camera.h"
#include "hittable_list.h"
#include "material.h"
#include "quad.h"
#include "sphere.h"
int main() {
hittable_list world;
auto red = make_shared<lambertian>(color(.65, .05, .05));
auto white = make_shared<lambertian>(color(.73, .73, .73));
auto green = make_shared<lambertian>(color(.12, .45, .15));
auto light = make_shared<diffuse_light>(color(15, 15, 15));
// Cornell box sides
world.add(make_shared<quad>(point3(555,0,0), vec3(0,0,555), vec3(0,555,0), green));
world.add(make_shared<quad>(point3(0,0,555), vec3(0,0,-555), vec3(0,555,0), red));
world.add(make_shared<quad>(point3(0,555,0), vec3(555,0,0), vec3(0,0,555), white));
world.add(make_shared<quad>(point3(0,0,555), vec3(555,0,0), vec3(0,0,-555), white));
world.add(make_shared<quad>(point3(555,0,555), vec3(-555,0,0), vec3(0,555,0), white));
// Light
world.add(make_shared<quad>(point3(213,554,227), vec3(130,0,0), vec3(0,0,105), light));
// Box 1
shared_ptr<hittable> box1 = box(point3(0,0,0), point3(165,330,165), white);
box1 = make_shared<rotate_y>(box1, 15);
box1 = make_shared<translate>(box1, vec3(265,0,295));
world.add(box1);
// Box 2
shared_ptr<hittable> box2 = box(point3(0,0,0), point3(165,165,165), white);
box2 = make_shared<rotate_y>(box2, -18);
box2 = make_shared<translate>(box2, vec3(130,0,65));
world.add(box2);
camera cam;
cam.aspect_ratio = 1.0;
cam.image_width = 600;
cam.samples_per_pixel = 64;
cam.max_depth = 50;
cam.background = color(0,0,0);
cam.vfov = 40;
cam.lookfrom = point3(278, 278, -800);
cam.lookat = point3(278, 278, 0);
cam.vup = vec3(0, 1, 0);
cam.defocus_angle = 0;
cam.render(world);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [cornell-box]: <kbd>[main.cc]</kbd> Cornell box, revisited]
Run this program to generate an un-stratified render and save it for comparison.
Now make the following changes to implement a stratified sampling procedure:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
  public:
    ...
    void render(const hittable& world) {
        initialize();

        std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

        for (int j = 0; j < image_height; j++) {
            std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
            for (int i = 0; i < image_width; i++) {
                color pixel_color(0,0,0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
                for (int s_j = 0; s_j < sqrt_spp; s_j++) {
                    for (int s_i = 0; s_i < sqrt_spp; s_i++) {
                        ray r = get_ray(i, j, s_i, s_j);
                        pixel_color += ray_color(r, max_depth, world);
                    }
                }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
                write_color(std::cout, pixel_samples_scale * pixel_color);
            }
        }

        std::clog << "\rDone.                 \n";
    }

  private:
    int    image_height;         // Rendered image height
    double pixel_samples_scale;  // Color scale factor for a sum of pixel samples
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    int    sqrt_spp;             // Square root of number of samples per pixel
    double recip_sqrt_spp;       // 1 / sqrt_spp
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    point3 center;               // Camera center
    ...

    void initialize() {
        image_height = int(image_width / aspect_ratio);
        image_height = (image_height < 1) ? 1 : image_height;

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        sqrt_spp = int(std::sqrt(samples_per_pixel));
        pixel_samples_scale = 1.0 / (sqrt_spp * sqrt_spp);
        recip_sqrt_spp = 1.0 / sqrt_spp;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        center = lookfrom;
        ...
    }

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    ray get_ray(int i, int j, int s_i, int s_j) const {
        // Construct a camera ray originating from the defocus disk and directed at a randomly
        // sampled point around the pixel location i, j for stratified sample square s_i, s_j.

        auto offset = sample_square_stratified(s_i, s_j);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        auto pixel_sample = pixel00_loc
                          + ((i + offset.x()) * pixel_delta_u)
                          + ((j + offset.y()) * pixel_delta_v);

        auto ray_origin = (defocus_angle <= 0) ? center : defocus_disk_sample();
        auto ray_direction = pixel_sample - ray_origin;
        auto ray_time = random_double();

        return ray(ray_origin, ray_direction, ray_time);
    }

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    vec3 sample_square_stratified(int s_i, int s_j) const {
        // Returns the vector to a random point in the square sub-pixel specified by grid
        // indices s_i and s_j, for an idealized unit square pixel [-.5,-.5] to [+.5,+.5].

        auto px = ((s_i + random_double()) * recip_sqrt_spp) - 0.5;
        auto py = ((s_j + random_double()) * recip_sqrt_spp) - 0.5;

        return vec3(px, py, 0);
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

    vec3 sample_square() const {
        ...
    }
    ...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [render-estpi-3]: <kbd>[camera.h]</kbd> Stratifying the samples inside pixels]
<div class='together'>
If we compare the results from before, without stratification:
![<span class='num'>Image 1:</span> Cornell box, no stratification
](../images/img-3.01-cornell-no-strat.png class='pixel')
</div>
<div class='together'>
To after, with stratification:
![<span class='num'>Image 2:</span> Cornell box, with stratification
](../images/img-3.02-cornell-strat.png class='pixel')
You should, if you squint, be able to see sharper contrast at the edges of planes and at the edges
of boxes. The effect will be more pronounced at locations that have a higher frequency of change.
High frequency change can also be thought of as high information density. For our Cornell box scene,
all of our materials are matte, with a soft area light overhead, so the only locations of high
information density are at the edges of objects. The effect will be more obvious with textures and
reflective materials.
If you are ever doing single-reflection or shadowing or some strictly 2D problem, you definitely
want to stratify.
</div>
One Dimensional Monte Carlo Integration
====================================================================================================
Our variation of Buffon's needle problem is a way of calculating $\pi$ by solving for the ratio of
the area of the circle and the area of the circumscribed square:
$$ \frac{\operatorname{area}(\mathit{circle})}{\operatorname{area}(\mathit{square})}
= \frac{\pi}{4}
$$
We picked a bunch of random points in the circumscribed square and counted the fraction of them that
were also in the unit circle. This fraction was an estimate that tended toward $\frac{\pi}{4}$ as
more points were added. If we didn't know the area of a circle, we could still solve for it using
the above ratio. We know that the ratio of areas of the unit circle and the circumscribed square is
$\frac{\pi}{4}$, and we know that the area of a circumscribed square is $4r^2$, so we could then use
those two quantities to get the area of a circle:
$$ \frac{\operatorname{area}(\mathit{circle})}{\operatorname{area}(\mathit{square})}
= \frac{\pi}{4}
$$
$$ \frac{\operatorname{area}(\mathit{circle})}{(2r)^2} = \frac{\pi}{4} $$
$$ \operatorname{area}(\mathit{circle}) = \frac{\pi}{4} 4r^2 $$
$$ \operatorname{area}(\mathit{circle}) = \pi r^2 $$
We choose a circle with radius $r = 1$ and get:
$$ \operatorname{area}(\mathit{circle}) = \pi $$
<div class='together'>
Our work above is equally valid as a means to solve for $\pi$ as it is a means to solve for the area
of a circle. So we could make the following substitution in one of the first versions of our pi
program:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ delete
std::cout << "Estimate of Pi = " << (4.0 * inside_circle) / N << '\n';
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
std::cout << "Estimated area of unit circle = " << (4.0 * inside_circle) / N << '\n';
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [estunitcircle]: <kbd>[pi.cc]</kbd> Estimating area of unit circle]
</div>
Expected Value
--------------
Let's take a step back and think about our Monte Carlo algorithm a little bit more generally.
If we assume that we have all of the following:
1. A list of values $X$ that contains members $x_i$:
$$ X = (x_0, x_1, ..., x_{N-1}) $$
2. A continuous function $f(x)$ that takes members from the list:
$$ y_i = f(x_i) $$
3. A function $F(X)$ that takes the list $X$ as input and produces the list $Y$ as output:
$$ Y = F(X) $$
4. Where output list $Y$ has members $y_i$:
$$ Y = (y_0, y_1, ..., y_{N-1}) = (f(x_0), f(x_1), ..., f(x_{N-1})) $$
If we assume all of the above, then we could solve for the arithmetic mean--the average--of the list
$Y$ with the following:
$$ \operatorname{average}(Y) = E[Y] = \frac{1}{N} \sum_{i=0}^{N-1} y_i $$
$$ = \frac{1}{N} \sum_{i=0}^{N-1} f(x_i) $$
$$ = E[F(X)] $$
Where $E[Y]$ is referred to as the _expected value of_ $Y$.
Note the subtle difference between _average value_ and _expected value_ here:
- A set may have many different subsets of selections from that set. Each subset has an _average
value_, which is the sum of all selections divided by the count of selections. Note that a given
item may occur zero times, once, or multiple times in a subset.
- A set has only one _expected value_: the sum of _all_ members of a set, divided by the total
number of items in that set. Put another way, the _expected value_ is the _average value_ of _all_
members of a set.
It is important to note that as the number of random samples from a set increases, the average value
of a set will converge to the expected value.
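As a small, concrete illustration (this program is not part of the book's source; it assumes only
the `random_double()` helper from `rtweekend.h`), we can watch the average of random selections
from a set converge to the expected value of the whole set:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <iostream>

int main() {
    // A small set whose expected value we know exactly: (1+2+3+4+5)/5 = 3.
    double set[] = {1, 2, 3, 4, 5};

    int N = 1000000;
    double sum = 0.0;

    // Select random members (with repetition) and accumulate their sum.
    for (int i = 0; i < N; i++) {
        int index = int(random_double(0, 5));  // uniform integer in 0..4
        sum += set[index];
    }

    // The average of the random selections converges to the expected value, 3.
    std::cout << "Average of random selections = " << sum / N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~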
If the values of $x_i$ are chosen randomly from a continuous interval $[a,b]$ such that $ a \leq x_i
\leq b $ for all values of $i$, then $E[F(X)]$ will approximate the average of the continuous
function $f(x')$ over the same interval $ a \leq x' \leq b $.
$$ E[f(x') | a \leq x' \leq b] \approx E[F(X) | X =
\{\small x_i | a \leq x_i \leq b \normalsize \} ] $$
$$ \approx E[Y = \{\small y_i = f(x_i) | a \leq x_i \leq b \normalsize \} ] $$
$$ \approx \frac{1}{N} \sum_{i=0}^{N-1} f(x_i) $$
If we take the limit as the number of samples $N$ approaches $\infty$, then we get the following:
$$ E[f(x') | a \leq x' \leq b] = \lim_{N \to \infty} \frac{1}{N} \sum_{i=0}^{N-1} f(x_i) $$
Within the continuous interval $[a,b]$, the expected value of continuous function $f(x')$ can be
perfectly represented by summing an infinite number of random points within the interval. As this
number of points approaches $\infty$ the average of the outputs tends to the exact answer. This is a
Monte Carlo algorithm.
Sampling random points isn't our only way to solve for the expected value over an interval. We can
also choose where we place our sampling points. If we had $N$ samples over an interval $[a,b]$ then
we could choose to equally space points throughout:
$$ x_i = a + i \Delta x $$
$$ \Delta x = \frac{b - a}{N} $$
Then solving for their expected value:
$$ E[f(x') | a \leq x' \leq b] \approx \frac{1}{N} \sum_{i=0}^{N-1} f(x_i)
\Big|_{x_i = a + i \Delta x} $$
$$ E[f(x') | a \leq x' \leq b] \approx \frac{\Delta x}{b - a} \sum_{i=0}^{N-1} f(x_i)
\Big|_{x_i = a + i \Delta x} $$
$$ E[f(x') | a \leq x' \leq b] \approx \frac{1}{b - a} \sum_{i=0}^{N-1} f(x_i) \Delta x
\Big|_{x_i = a + i \Delta x} $$
Taking the limit as $N$ approaches $\infty$:
$$ E[f(x') | a \leq x' \leq b] = \lim_{N \to \infty} \frac{1}{b - a} \sum_{i=0}^{N-1}
f(x_i) \Delta x \Big|_{x_i = a + i \Delta x} $$
This is, of course, just a regular integral:
$$ E[f(x') | a \leq x' \leq b] = \frac{1}{b - a} \int_{a}^{b} f(x) dx $$
If you recall your introductory calculus class, the integral of a function is the area under the
curve over that interval:
$$ \operatorname{area}(f(x), a, b) = \int_{a}^{b} f(x) dx$$
Therefore, the average over an interval is intrinsically linked with the area under the curve in
that interval.
$$ E[f(x) | a \leq x \leq b] = \frac{1}{b - a} \cdot \operatorname{area}(f(x), a, b) $$
Both the integral of a function and a Monte Carlo sampling of that function can be used to solve for
the average over a specific interval. While integration solves for the average with the sum of
infinitely many infinitesimally small slices of the interval, a Monte Carlo algorithm approximates
the same average by summing an ever-increasing number of random sample points within the interval.
Counting the number of points that fall inside of an object isn't the only way to measure its
average or area. Integration is also a common mathematical tool for this purpose. If a closed form
exists for a problem, integration is frequently the most natural and clean way to formulate things.
I think a couple of examples will help.
Integrating x²
---------------
Let’s look at a classic integral:
$$ I = \int_{0}^{2} x^2 dx $$
We could solve this using integration:
$$ I = \frac{1}{3} x^3 \Big|_{0}^{2} $$
$$ I = \frac{1}{3} (2^3 - 0^3) $$
$$ I = \frac{8}{3} $$
Or, we could solve the integral using a Monte Carlo approach. In computer sciency notation, we might
write this as:
$$ I = \operatorname{area}( x^2, 0, 2 ) $$
We could also write it as:
$$ E[f(x) | a \leq x \leq b] = \frac{1}{b - a} \cdot \operatorname{area}(f(x), a, b) $$
$$ \operatorname{average}(x^2, 0, 2) = \frac{1}{2 - 0} \cdot \operatorname{area}( x^2, 0, 2 ) $$
$$ \operatorname{average}(x^2, 0, 2) = \frac{1}{2 - 0} \cdot I $$
$$ I = 2 \cdot \operatorname{average}(x^2, 0, 2) $$
<div class='together'>
The Monte Carlo approach:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include <iostream>
#include <iomanip>
int main() {
int a = 0;
int b = 2;
int N = 1000000;
auto sum = 0.0;
for (int i = 0; i < N; i++) {
auto x = random_double(a, b);
sum += x*x;
}
std::cout << std::fixed << std::setprecision(12);
std::cout << "I = " << (b - a) * (sum / N) << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [integ-xsq-1]: <kbd>[integrate_x_sq.cc]</kbd> Integrating $x^2$]
</div>
<div class='together'>
This, as expected, produces approximately the exact answer we get with integration, _i.e._
$I = 2.666… = \frac{8}{3}$. You could rightly point to this example and say that the integration is
actually a lot less work than the Monte Carlo. That might be true in the case where the function is
$f(x) = x^2$, but there exist many functions where it might be simpler to solve for the Monte Carlo
than for the integration, like $f(x) = \sin^5(x)$.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
for (int i = 0; i < N; i++) {
    auto x = random_double(a, b);
    sum += std::pow(std::sin(x), 5.0);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [integ-sin5]: Integrating $\sin^5$]
</div>
<div class='together'>
We could also use the Monte Carlo algorithm for functions where an analytical integration does not
exist, like $f(x) = \ln(\sin(x))$.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
for (int i = 0; i < N; i++) {
    auto x = random_double(a, b);
    sum += std::log(std::sin(x));
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [integ-ln-sin]: Integrating $\ln(\sin)$]
</div>
In graphics, we often have functions that we can write down explicitly but that have a complicated
analytic integration. Just as often, we have functions that _can_ be evaluated but that _can't_ be
written down explicitly, and we will frequently find ourselves with a function that can _only_ be
evaluated probabilistically. The function `ray_color` from the first two books is an example of a
function that can only be determined probabilistically. We can’t know what color can be seen from
any given place in all directions, but we can statistically estimate which color can be seen from
one particular place, for a single particular direction.
Density Functions
------------------
The `ray_color` function that we wrote in the first two books, while elegant in its simplicity, has
a fairly _major_ problem. Small light sources create too much noise. This is because our uniform
sampling doesn’t sample these light sources often enough. Light sources are only sampled if a ray
scatters toward them, but this can be unlikely for a small light, or a light that is far away. If
the background color is black, then the only real sources of light in the scene are from the lights
that are actually placed about the scene. There might be two rays that intersect at nearby points on
a surface, one that is randomly reflected toward the light and one that is not. The ray that is
reflected toward the light will appear a very bright color. The ray that is reflected to somewhere
else will appear a very dark color. The two intensities should really be somewhere in the middle. We
could lessen this problem if we steered both of these rays toward the light, but this would cause
the scene to be inaccurately bright.
For any given ray, we usually trace from the camera, through the scene, and terminate at a light.
But imagine if we traced this same ray from the light source, through the scene, and terminated at
the camera. This ray would start with a bright intensity and would lose energy with each successive
bounce around the scene. It would ultimately arrive at the camera, having been dimmed and colored by
its reflections off various surfaces. Now, imagine if this ray was forced to bounce toward the
camera as soon as it could. It would appear inaccurately bright because it hadn't been dimmed by
successive bounces. This is analogous to sending more random samples toward the light. It would go a
long way toward solving our problem of having a bright pixel next to a dark pixel, but it would then
just make _all_ of our pixels bright.
We can remove this inaccuracy by downweighting those samples to adjust for the over-sampling. How do
we do this adjustment? Well, we'll first need to understand the concept of a _probability density
function_. But to understand the concept of a _probability density function_, we'll first need to
know what a _density function_ is.
A _density function_ is just the continuous version of a histogram. Here’s an example of a histogram
from the histogram Wikipedia page:
![Figure [histogram]: Histogram example](../images/fig-3.03-histogram.jpg)
If we had more items in our data source, the number of bins would stay the same, but each bin would
have a higher frequency of items. If we divided the data into more bins, we'd have more bins, but
each bin would have a lower frequency of items. If we took the number of bins and raised it to
infinity, we'd have an infinite number of zero-frequency bins. To address this, we'll replace
our histogram, which is a _discrete function_, with a _discrete density function_. A _discrete
density function_ differs from a _discrete function_ in that it normalizes the y-axis to a fraction
or percentage of the total, _i.e._ its density, instead of a total count for each bin. Converting
from a _discrete function_ to a _discrete density function_ is trivial:
$$ \text{Density of Bin i} = \frac{\text{Number of items in Bin i}}
{\text{Number of items total}} $$
Once we have a _discrete density function_, we can then convert it into a _density function_ by
changing our discrete values into continuous values.
$$ \text{Bin Density} = \frac{(\text{Fraction of trees between height }H\text{ and }H’)}
                             {(H’-H)} $$
So a _density function_ is a continuous histogram where all of the values are normalized against a
total. If we had a specific tree we wanted to know the height of, we could create a _probability
function_ that would tell us how likely it is for our tree to fall within a specific bin.
$$ \text{Probability of Bin i} = \frac{\text{Number of items in Bin i}}
{\text{Number of items total}} $$
If we combined our _probability function_ and our (continuous) _density function_, we could
interpret that as a statistical predictor of a tree’s height:
$$ \text{Probability a random tree is between } H \text{ and } H’ =
   \text{Bin Density}\cdot(H’-H)$$
Indeed, with this continuous probability function, we can now say the likelihood that any given tree
has a height that places it within any arbitrary span of multiple bins. This is a _probability
density function_ (henceforth _PDF_). In short, a PDF is a continuous function that can be
integrated over to determine how likely a result is over a specific interval.
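To make the conversion from histogram to density function concrete, here's a small sketch. It is
not from the book; the tree heights and bin layout are invented for illustration. It bins a handful
of samples, then normalizes each bin by the total count to get a discrete density, and by the bin
width to get a continuous density:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <iostream>
#include <vector>

int main() {
    // Hypothetical tree heights (meters), just for illustration.
    std::vector<double> heights = {2.1, 3.7, 4.4, 4.9, 5.2, 5.8, 6.3, 7.0, 7.7, 9.4};

    const int    num_bins  = 5;
    const double min_h     = 0.0;
    const double max_h     = 10.0;
    const double bin_width = (max_h - min_h) / num_bins;

    // Build the histogram: count the items that fall into each bin.
    std::vector<int> counts(num_bins, 0);
    for (double h : heights) {
        int bin = int((h - min_h) / bin_width);
        if (bin == num_bins) bin--;  // clamp the maximum value into the last bin
        counts[bin]++;
    }

    for (int i = 0; i < num_bins; i++) {
        // Discrete density: fraction of all items that landed in this bin.
        double fraction = double(counts[i]) / heights.size();
        // Continuous density: fraction per unit height, so the densities integrate to 1.
        double density = fraction / bin_width;
        std::cout << "bin " << i << ": fraction = " << fraction
                  << ", density = " << density << '\n';
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~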
Constructing a PDF
-------------------
Let’s make a PDF and play around with it to build up an intuition. We'll use the following function:
![Figure [linear-pdf]: A linear PDF](../images/fig-3.04-linear-pdf.jpg)
What does this function do? Well, we know that a PDF is just a continuous function that defines the
likelihood of an arbitrary range of values. This function $p(r)$ is constrained between $0$ and $2$
and linearly increases along that interval. So, if we used this function as a PDF to generate a
random number then the _probability_ of getting a number near zero would be less than the
probability of getting a number near two.
The PDF $p(r)$ is a linear function that starts with $0$ at $r=0$ and monotonically increases to its
highest point at $p(2)$ for $r=2$. What is the value of $p(2)$? What is the value of $p(r)$? Maybe
$p(2)$ is 2? The PDF increases linearly from 0 to 2, so guessing that the value of $p(2)$ is 2 seems
reasonable. At least it looks like it can't be 0.
Remember that the PDF is a probability function. We are constraining the random variable so that it
lies in the range $[0,2]$. The PDF represents the continuous density function for a probabilistic
list. If we know that everything in that list is contained within 0 and 2, then the probability of
getting a value between 0 and 2 is 100%. Therefore, the area under the curve must sum to 1:
$$ \operatorname{area}(p(r), 0, 2) = 1 $$
Any linear function that passes through the origin can be represented as a constant multiplied by
the variable:
$$ p(r) = C \cdot r $$
We need to solve for the value of $C$. We can use integration to work backwards.
$$ 1 = \operatorname{area}(p(r), 0, 2) $$
$$ = \int_{0}^{2} C \cdot r dr $$
$$ = C \cdot \int_{0}^{2} r dr $$
$$ = C \cdot \frac{r^2}{2} \Big|_{0}^{2} $$
$$ = C \left( \frac{2^2}{2} - \frac{0^2}{2} \right) = 2C $$
$$ C = \frac{1}{2} $$
That gives us the PDF of $p(r) = r/2$. Just as with histograms we can sum up (integrate) the region
to figure out the probability that $r$ is in some interval $[x_0,x_1]$:
$$ \operatorname{Probability} (r | x_0 \leq r \leq x_1 )
= \operatorname{area}(p(r), x_0, x_1)
$$
$$ \operatorname{Probability} (r | x_0 \leq r \leq x_1 ) = \int_{x_0}^{x_1} \frac{r}{2} dr $$
To confirm your understanding, integrate over the region $r=0$ to $r=2$; you should get a
probability of 1.
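If you'd rather check this numerically, here's a quick sketch (again, not one of the book's
listings; it assumes `random_double()` from `rtweekend.h`) that estimates the probability for an
arbitrary interval by Monte Carlo integration of $p(r)$ and compares it against the closed form
$\frac{x_1^2 - x_0^2}{4}$:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <iostream>

int main() {
    // Estimate Probability(x0 <= r <= x1) for the PDF p(r) = r/2 by Monte Carlo
    // integration. With x0 = 0 and x1 = 2 both results should be very close to 1.
    double x0 = 0.0;
    double x1 = 2.0;
    int N = 1000000;

    double sum = 0.0;
    for (int i = 0; i < N; i++) {
        auto r = random_double(x0, x1);
        sum += r / 2;  // p(r)
    }

    // The average value of p(r) over [x0,x1], times the interval width, is the integral.
    std::cout << "Estimated probability = " << (x1 - x0) * (sum / N) << '\n';
    std::cout << "Exact probability     = " << (x1*x1 - x0*x0) / 4 << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~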
After spending enough time with PDFs you might start referring to a PDF as the probability that a
variable $r$ has the value $x$, _i.e._ $p(r=x)$. Don't do this. For a continuous function, the
probability that a variable is a specific value is always zero. A PDF can only tell you the
probability that a variable will fall within a given interval. If the interval you're checking
against is a single value, then the PDF will always return a zero probability because its "bin" is
infinitely thin (has zero width). Here's a simple mathematical proof of this fact:
$$ \operatorname{Probability} (r = x) = \int_{x}^{x} p(r) dr $$
$$ = P(r) \Big|_{x}^{x} $$
$$ = P(x) - P(x) $$
$$ = 0 $$
The probability of a region surrounding $x$, however, may not be zero:
$$ \operatorname{Probability} (r | x - \Delta x < r < x + \Delta x ) =
\operatorname{area}(p(r), x - \Delta x, x + \Delta x) $$
$$ = P(x + \Delta x) - P(x - \Delta x) $$
Choosing our Samples
--------------------
If we have a PDF for the function that we care about, then we have the probability that the function
will return a value within an arbitrary interval. We can use this to determine where we should
sample. Remember that this started as a quest to determine the best way to sample a scene so that we
wouldn't get very bright pixels next to very dark pixels. If we have a PDF for the scene, then we
can probabilistically steer our samples toward the light without making the image inaccurately
bright. We already said that if we steer our samples toward the light then we _will_ make the image
inaccurately bright. We need to figure out how to steer our samples without introducing this
inaccuracy. This will be explained a little bit later; for now we'll focus on generating samples
if we have a PDF. How do we generate a random number with a PDF? For that we will need some more
machinery. Don’t worry -- this doesn’t go on forever!
<div class='together'>
Our random number generator `random_double()` produces a random double between 0 and 1. The number
generator is uniform between 0 and 1, so any number between 0 and 1 has equal likelihood. If our PDF
is uniform over a domain, say $[0,10]$, then we can trivially produce perfect samples for this
uniform PDF with
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
10.0 * random_double()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
That's an easy case, but the vast majority of cases that we're going to care about are nonuniform.
We need to figure out a way to convert a uniform random number generator into a nonuniform random
number generator, where the distribution is defined by the PDF. We'll just _assume_ that there
exists a function $f(d)$ that takes uniform input and produces a nonuniform distribution weighted by
the PDF. We just need to figure out a way to solve for $f(d)$.
For the PDF given above, where $p(r) = \frac{r}{2}$, the probability of a random sample is higher
toward 2 than it is toward 0. There is a greater probability of getting a number between 1.8 and 2.0
than between 0.0 and 0.2. If we put aside our mathematics hat for a second and put on our computer
science hat, maybe we can figure out a smart way of partitioning the PDF. We know that there is a
higher probability near 2 than near 0, but what is the value that splits the probability in half?
What is the value that a random number has a 50% chance of being higher than and a 50% chance of
being lower than? What is the $x$ that solves:
$$ 50\% = \int_{0}^{x} \frac{r}{2} dr = \int_{x}^{2} \frac{r}{2} dr $$
Solving gives us:
$$ 0.5 = \frac{r^2}{4} \Big|_{0}^{x} $$
$$ 0.5 = \frac{x^2}{4} $$
$$ x^2 = 2$$
$$ x = \sqrt{2}$$
As a crude approximation we could create a function `f(d)` that takes as input `double d =
random_double()`. If `d` is less than (or equal to) 0.5, it produces a uniform number in
$[0,\sqrt{2}]$; if `d` is greater than 0.5, it produces a uniform number in $[\sqrt{2}, 2]$.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
double f(double d)
{
    if (d <= 0.5)
        return std::sqrt(2.0) * random_double();
    else
        return std::sqrt(2.0) + (2 - std::sqrt(2.0)) * random_double();
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [crude-approx]: A crude, first-order approximation to nonuniform PDF]
<div class='together'>
While our initial random number generator was uniform from 0 to 1:
![Figure [uniform-dist]: A uniform distribution](../images/fig-3.05-uniform-dist.jpg)
</div>
<div class='together'>
Our new, crude approximation for $\frac{r}{2}$ is nonuniform (but only just):
![Figure [approx-f]: A nonuniform distribution for $r/2$
](../images/fig-3.06-nonuniform-dist.jpg)
</div>
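For this particular PDF we don't have to settle for a crude approximation. The area under
$p(r) = r/2$ from $0$ to $x$ is $x^2/4$, so setting $d = x^2/4$ and solving for $x$ gives
$x = 2\sqrt{d}$, which maps uniform input exactly onto our desired distribution. Here's a sketch
(not one of the book's listings; it assumes `random_double()` from `rtweekend.h`) that checks it
against the probabilities we discussed earlier:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <cmath>
#include <iostream>

// Exact transform for p(r) = r/2 on [0,2]: invert the accumulated area d = x^2/4.
double f_exact(double d) {
    return 2.0 * std::sqrt(d);
}

int main() {
    // Count how many samples land in [1.8,2.0] versus [0.0,0.2];
    // with p(r) = r/2 the exact probabilities are 0.19 and 0.01.
    int N = 1000000;
    int count_high = 0;
    int count_low = 0;

    for (int i = 0; i < N; i++) {
        double x = f_exact(random_double());
        if (x >= 1.8) count_high++;
        if (x <= 0.2) count_low++;
    }

    std::cout << "P(1.8 <= r <= 2.0) ~= " << double(count_high) / N << '\n';  // ~0.19
    std::cout << "P(0.0 <= r <= 0.2) ~= " << double(count_low)  / N << '\n';  // ~0.01
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The experimental approach that follows handles the cases where we can't integrate or invert
analytically.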
We had the analytical solution to the integral above, so we could very easily solve for the 50%
value. But we could also solve for this 50% value experimentally. There will be functions that we
either can't or don't want to integrate analytically. In these cases, we can get an experimental
result close to the real value. Let's take the function:
$$ p(x) = e^{\frac{-x}{2 \pi}} \sin^2(x) $$
<div class='together'>
Which looks a little something like this:
![Figure [exp-sin2]: A function that we don't want to solve analytically
](../images/fig-3.07-exp-sin2.jpg)
</div>
<div class='together'>
At this point you should be familiar with how to experimentally solve for the area under a curve.
We'll take our existing code and modify it slightly to get an estimate for the 50% value. We want to
solve for the $x$ value that gives us half of the total area under the curve. As we go along and
solve for the rolling sum over N samples, we're also going to store each individual sample alongside
its `p(x)` value. After we solve for the total sum, we'll sort our samples and add them up until we
have an area that is half of the total. From $0$ to $2\pi$ for example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include <algorithm>
#include <vector>
#include <iostream>
#include <iomanip>
struct sample {
double x;
double p_x;
};
bool compare_by_x(const sample& a, const sample& b) {
return a.x < b.x;
}
int main() {
const unsigned int N = 10000;
sample samples[N];
double sum = 0.0;
// Iterate through all of our samples.
for (unsigned int i = 0; i < N; i++) {
// Get the area under the curve.
auto x = random_double(0, 2*pi);
auto sin_x = std::sin(x);
auto p_x = exp(-x / (2*pi)) * sin_x * sin_x;
sum += p_x;
// Store this sample.
sample this_sample = {x, p_x};
samples[i] = this_sample;
}
// Sort the samples by x.
std::sort(std::begin(samples), std::end(samples), compare_by_x);
// Find out the sample at which we have half of our area.
double half_sum = sum / 2.0;
double halfway_point = 0.0;
double accum = 0.0;
for (unsigned int i = 0; i < N; i++){
accum += samples[i].p_x;
if (accum >= half_sum) {
halfway_point = samples[i].x;
break;
}
}
std::cout << std::fixed << std::setprecision(12);
std::cout << "Average = " << sum / N << '\n';
std::cout << "Area under curve = " << 2 * pi * sum / N << '\n';
std::cout << "Halfway = " << halfway_point << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [est-halfway]: <kbd>[estimate_halfway.cc]</kbd> Estimating the 50% point of a function]