Now that 4.9 is out, I wanted to talk about a few of the changes you might notice in our template content. The primary change we’ve made is that the brightness value for the “Sun” (directional light) has been lowered across the board, and we have also added a skylight to each template to account for the ambient lighting contribution from the sky, achieving a more realistic look. These changes were not made arbitrarily; instead, they are rooted in a deceptively complicated question: “What color is that?” The answer, as we learned, is not as straightforward as you’d think.
When we talk about rendering, many of the terms we use have their roots in photographic concepts. If we want to reproduce a given real-world object’s color, or match a reference photo from the real world, there are a number of factors at play. The primary ones are how bright or dark the object really is, and how bright the light hitting it is. In an outdoor environment, diffused light comes in from nearly every direction, so we need to account for that as well. We also need to understand what exposure value we are dealing with; in photography, this means knowing how long the camera shutter was open, how sensitive the film was that captured the image, and so on.
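To make “exposure value” concrete: at a fixed sensitivity of ISO 100, EV collapses aperture and shutter speed into a single number. Here’s a small illustrative helper (my own sketch, not anything from the engine):

```python
import math

def ev100(f_number: float, shutter_seconds: float) -> float:
    """Exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_seconds)

# The classic "sunny 16" daylight exposure: f/16 at 1/100s, ISO 100.
print(ev100(16.0, 1.0 / 100.0))  # ~14.6, close to the textbook EV 15 for direct sun
```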
All of these variables mean that knowing the actual brightness of a given object isn’t as straightforward as “take a picture of it, and sample the color value”. Any given value is simply relative to all the other values, and a true baseline marker for the “color of things” isn’t a slam dunk. Let’s use a visual example. Say I take a photo of a basketball.
The leftmost image is a basketball where we let the camera do its best to figure out the proper exposure. The middle image is the same basketball with an exposure value of -2, and the rightmost with an exposure value of +2. Which one of these is correct? Well, the truth is that all of them are correct in their own way, and at the same time none of them are. If we know the settings that generated the image (the shutter speed of the camera, the sensitivity of the film, the size of the lens aperture), we can reverse engineer the image and close in on what the colors we are seeing really “are”. The problem is that we have no baseline. For instance, an isolated photo of a basketball that is twice as dark as a normal ball, taken under a light that is twice as bright, will result in an image which in many ways looks “like a basketball”, but it tells us nothing about the actual color of the object. In short, everything is relative. What we need is a universal baseline against which we can compare other objects to find their actual color and brightness, a.k.a. their reflectance.
Enter an American photographer named Ansel Adams. Adams (alongside photographer Fred Archer) developed the “Zone System”, which codifies a standard range of reflectance values for real-world objects and places them into eleven buckets, giving photographers useful baselines against which to calibrate their image-making.
The Zone System is a deep subject and certainly outside the scope of this post; however, there’s one fundamental discovery of their research which is meaningful to us. Outside of man-made objects, almost everything in nature falls between two approximate bookends: the darkness of inky black soot and the brightness of snow. Dark soot reflects on the order of 3% of incoming light back to the viewer; freshly fallen snow reflects around 95%. If we take those as the bookends of possible surface reflectivity, we can compute a particular kind of average of those values, the “geometric mean”, to get a value for the average amount of light reflected by an average material. That number is 18% reflectance, also known as “middle grey”. There, we have our baseline.
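For the curious, the geometric mean is the square root of the product of the two bookend values:

$$\sqrt{0.03 \times 0.95} \approx 0.169$$

which lands within rounding distance of the 18% that photography standardized on.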
18% has become a standard in photography and CG, used as a baseline to calibrate image-making. The basic idea is that if we photograph a perfectly diffuse, 18% grey object (say a sphere, or a card) and calibrate our exposure and white balance against it, we then know how bright or dark every other object in the scene is compared to that known quantity.
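As a simplified sketch of that calibration, assume we have linear (not gamma-encoded) pixel samples and a grey card visible in the same light as the rest of the scene; scaling everything so the card reads 0.18 recovers reflectance regardless of the exposure used:

```python
GREY_CARD_REFLECTANCE = 0.18

def reflectance(sample: float, grey_card_sample: float) -> float:
    """Scale a linear pixel sample so the grey card reads exactly 0.18."""
    return sample * (GREY_CARD_REFLECTANCE / grey_card_sample)

# If the grey card measures 0.24 in our image, an object measuring 0.48
# reflects twice as much light: ~36% reflectance, independent of exposure.
print(reflectance(0.48, 0.24))  # 0.36
```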
All this theory is well and good, but why does having this new baseline mean that our sample maps’ sun lights got dimmer? Well, for that, we turn to the engine itself.
First, we need to make ourselves a virtual 18% grey sphere. It’s fully “rough” for our purposes and will get us very close to a painted real-world sphere. Here’s one in Unreal.
The next goal is to set up a scene in which the color of this sphere at its brightest point reflects the actual brightness value of the asset, which we know to be 18% (or 0.18). To do this, we place the asset in a lit scene and point a directional light at it to simulate the sun. We then view the asset from the light’s own direction and tweak the light until the measured color of the brightest point on the sphere reads 0.18. We accomplish this using a built-in engine feature called “Visualize HDR” (Viewport->Show->Visualize->HDR (Eye Adaptation)), which displays the actual color of the object in the scene before it is gamma corrected, letting us find its “true” color value rather than just the value being displayed on the screen.
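To see why we can’t just sample the screen, consider the plain sRGB transfer function (Unreal’s actual display pipeline also involves tonemapping and eye adaptation, but this alone illustrates the problem): a linear “true” value of 0.18 encodes to roughly 0.46 by the time it reaches your monitor.

```python
def linear_to_srgb(c: float) -> float:
    """Standard sRGB encoding applied to a linear value before display."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

# Middle grey in linear space does not read as 18% on screen:
print(linear_to_srgb(0.18))  # ~0.461
```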
Now, we know from talking to the rendering team that the light brightness which results in a “true” object color (under neutral exposure conditions) should be approximately the value of pi (~3.1415). This is a side effect of our lighting algorithm, and a value common to many high-end offline renderers as well as Unreal. It assumes, however, that any point on a surface is receiving light from only one source, which is never the case. You have to imagine a point in space as receiving light from everything it can see: light scattered from the sky, bounced from the ground, inscattered through tiny particles floating all over, and so on. To simulate this, we also add a skylight to the scene, set to a neutral sunlit day. Since this adds ambient light intensity to the scene, we have to lower the sunlight somewhat to compensate, so that the total diffuse illumination still approaches our target value of pi (~3.1415).
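To sketch where pi comes from, assume a simple Lambertian diffuse model (the engine’s shading involves more terms, but this captures the mechanism): a Lambertian surface scatters light with a BRDF of albedo over pi, so a light of intensity pi hitting our 18% grey sphere head-on returns exactly the albedo:

$$L_o = \frac{\rho}{\pi}\,E\cos\theta = \frac{0.18}{\pi}\times\pi\times 1 = 0.18$$

Here $\rho$ is the albedo, $E$ the light intensity, and $\theta$ the angle of incidence, which is zero when the sphere is viewed from the light direction.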
Now, this is a good moment to point out that we are not in search of perfection here. There are a number of reasons why an exact value is less interesting to us than a good, intuitive set of usable values that get us close, let’s say within ¼ or ½ a stop of exposure to ground truth. One of those is the desire for easy-to-remember values, in our case a sun brightness of 2.75 and a skylight brightness of 1. You’ll notice these don’t add up to 3.14. There are many interpretations of a proper baseline value; for instance, some camera manufacturers calibrate their exposure internally to a 0.18 baseline reflectance, some to 0.16, and some to 0.12. Our settings represent a compromise among all those variables. Also, sunlight intensity can vary widely depending on whether it’s an overcast day or a blazing, bright, sunlit afternoon.
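As a back-of-the-envelope sanity check (naively treating the two intensities as simply additive, which overstates the skylight’s contribution to a directly lit point), the chosen values land about a quarter stop from the pi target:

```python
import math

sun = 2.75        # new template directional light intensity
sky = 1.0         # new template skylight intensity
target = math.pi  # the "true color" illumination target discussed above

stops = math.log2((sun + sky) / target)
print(f"{stops:.2f} stops")  # ~0.26, within the 1/4 to 1/2 stop tolerance
```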
To summarize, everything is relative. We believe that these new values reflect a good middle ground between many different choices, get us close to ground truth along many axes, and allow people who are creating photoreal content to have a much better baseline against which to calibrate their assets.