Adventures in Scene Referred Space – Part Three

Recap

In a previous blog post I spoke to the differences between Scene Referred Space (henceforth SRS – not a recognized acronym, I’m just lazy) and the more limited Display Referred Space (DRS). Working in SRS in a compositing or rendering environment computes the behavior of light in a color range that is not limited to our display but rather – as the name implies – is relative to the scene. In the ideal CG pipeline we’ll work in SRS and only shift to a DRS as our final step, when outputting our imagery to a limited color range suitable for display.

There are two critical benefits to working this way. First, matching imported elements into a unified SRS removes a lot of compositing guesswork and gets us away from simply “eyeballing” exposure and color matches.

Second, SRS compositing computations such as ADD and MULTIPLY will be calculated in a more physically accurate way regardless of whether the element source is CG or photographic, or where (or who) it came from earlier in the pipeline. We use color transforms to convert these inputs into our SRS working environment.
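
To see why this matters, here’s a minimal sketch in Python (assuming the standard sRGB transfer functions) comparing an ADD performed directly on display-referred values with the same ADD performed in scene-linear:

```python
# Minimal sketch: ADD on display-referred (sRGB-encoded) values vs.
# scene-linear values. Transfer functions are the standard sRGB formulas.

def srgb_to_linear(v):
    # Inverse sRGB transfer function (IEC 61966-2-1)
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # Forward sRGB transfer function, clamped to the displayable range
    return min(v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055, 1.0)

a = b = 0.5  # two mid-gray display-referred pixel values

naive = min(a + b, 1.0)  # adding the encoded values clips straight to white
linear = linear_to_srgb(srgb_to_linear(a) + srgb_to_linear(b))  # decode, add, re-encode

print(f"display-referred ADD: {naive:.3f}")   # 1.000 (blown out)
print(f"scene-linear ADD:     {linear:.3f}")  # ~0.686 (one stop brighter)
```

Doubling the light in a scene should read as one stop brighter, not clip straight to white, and that’s exactly what the scene-linear version gives us.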

In my last blog post I mentioned that professional digital cinema cameras provide industry-standard documented outputs allowing us to use color transforms to quickly get footage into an SRS pipeline. ACES (Academy Color Encoding System) is quickly gaining traction as the industry standard for moving assets back and forth between DRS and SRS and rendering them consistently. Here’s a nice overview.

A practical application of SRS

But what of the apprentice VFX artist with a consumer camera looking to adopt an SRS pipeline to create the most convincing CG integration on a budget?

In the next few posts I’ll be sharing a series of practical steps toward an SRS workflow when shooting video with a consumer-level DSLR camera. This type of camera is likely not capable of outputting an industry-standard format, has a limited color depth (8- vs. 10-bit), and a limited dynamic range (only half the stops of an ARRI Alexa, for example). Essentially we’re going to create our own color transform to redistribute our DRS video into SRS.

While this is effort upfront, it’s work saved many times over further down the pipe and produces more convincing results. And while this still isn’t going to give us as much color data as the pros, it’s a significant step up from compositing in DRS or dealing with mismatched elements in different color spaces.

In this post we’ll take our first practical step.

Shooting LOG or “Flat”

If you’re reading this post then you’ve likely come across the concept of shooting digital video with a “LOG” (short for Logarithmic) or “Flat” color profile. By imposing a curve on the color captured by our camera, the initial image looks washed out to the human eye but dedicates more data to information in areas of light and shadow, giving us more exposure flexibility in post-processing and color grading later on.
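
To make that concrete, here’s a toy sketch. The curve is illustrative only (not any vendor’s actual LOG formula), but it shows the key property: each stop of exposure gets a roughly equal slice of code values.

```python
import math

def toy_log_encode(x, min_stop=-6.0, max_stop=4.0):
    # Normalize exposure in stops around middle gray (0.18 scene-linear)
    # into the 0..1 code-value range. Purely illustrative, not a vendor curve.
    stops = math.log2(max(x, 1e-6) / 0.18)
    return min(max((stops - min_stop) / (max_stop - min_stop), 0.0), 1.0)

for stops in (-4, -2, 0, 2, 4):
    linear = 0.18 * 2 ** stops
    print(f"{stops:+d} stops -> linear {linear:.4f} -> encoded {toy_log_encode(linear):.2f}")
```

Notice how the scene-linear values grow exponentially while the encoded values step up evenly: shadows receive far more code values than a straight linear encode would give them.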

YouTuber Channel 8 provides a nice digestible introduction to logarithmic and “Flat” profiles.

Professional and (increasingly) some prosumer cameras provide industry standard logarithmic profiles for output video such as Sony’s S-Log. To a CG professional the benefit is that documented color transforms exist that remap this output into SRS in a compositing pipeline: IDTs, Input Device Transforms in ACES terminology.
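
In practice that remap can be a single function call. Here’s a sketch using OpenColorIO’s Python bindings; the config path and colorspace names are assumptions and depend on which ACES OCIO config you have installed:

```python
import PyOpenColorIO as OCIO

# Hypothetical config path; colorspace names vary between ACES config versions.
config = OCIO.Config.CreateFromFile("aces_config.ocio")
processor = config.getProcessor("Input - Sony - S-Log3 - S-Gamut3.Cine",
                                "ACES - ACES2065-1")
cpu = processor.getDefaultCPUProcessor()

# One S-Log3 encoded RGB pixel in, one scene-linear ACES pixel out.
print(cpu.applyRGB([0.5, 0.5, 0.5]))
```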

In our scenario, assuming your camera doesn’t provide a LOG output, another option is to use a “Flat” color profile. Your camera may come loaded with one deep in the menu, though in most cases you’ll load one via the SD card. Downloadable “Flat” profiles are widely available, but because they’re typically not well documented or standardized, you get the benefit of capturing more color information upfront yet can’t use an off-the-shelf color transform to move your video into SRS. We’re going to fix that.

Finally, if your camera provides neither LOG nor “Flat”, at the very least it’ll allow you to flatten your color profile by reducing image saturation, brightness, and/or contrast to something that brings more information into shadows and highlights. Arguably, in fact, these minor adjustments may be all you need as an effective no-muss-no-fuss “Flat” profile, with the added benefit of not flattening color too far (a whole other discussion, but part of the balancing act of using a LOG-like curve with limited 8-bit color data).

There are plenty of conversations online about how LOG or “Flat” color curves work (including this recommendation) so I won’t go deeper here. What matters for our purposes is that shooting this way not only helps us capture the best possible color information upfront, it also introduces our first known, repeatable parameter and therefore takes us on our first step toward building a personal color transform that can translate the data out of our camera into SRS.

Getting to know your camera

The next step toward our custom color transform is to record how our camera and lens respond to light.

We’re going to shoot a series of video clips against a gray card to determine the upper and lower limits of our rig’s dynamic range and how it responds across that range. A typical DSLR is capable of capturing 12 stops of light in still mode but considerably less when capturing video.

It’s important to shoot this test with the same ISO, lens, and “Flat” profile (or equivalent brightness, contrast, saturation, and sharpness settings) that you intend to use later on when shooting video for compositing with CG. This is because the color transform we’ll be building is specific to camera, lens, ISO, and color profile. ISO is particularly important and will have a big impact on your results. Of course you can generate several transforms for different ISO, lens, or color profile combinations, but you’ll need to rinse-and-repeat this initial data collection for each. The good news is that once this initial grunt work is done you’ll have your own color transform(s) “on file” for future use. At the very least, if you can capture 3 ISO sets in one setup you’ve made nice headway.

For my Nikon D5200 I selected a “Flat” color profile from Nikon Picture Control Editor. This handy website allows you to preview Picture Controls (color profiles in Nikon parlance) and download them in a format that can be loaded onto the camera via SD card. It also documents the numerical values of the color curves which can come in handy later on.

Nikon Picture Control Editor. Not a Nikon user? Your camera brand will have similar color curve solutions.


I chose (the confusingly named) “[51] Neutral Gamma 1.0” because the linearity of the curve dedicates color information equally across the entire range without over-flattening any particular part. But other usable profiles such as “[15] FLAT” exist, or even something like “[57] Neutral Gamma 0.4”, which more closely resembles the familiar “knee” of a LOG curve. It’s important to note that the site is a repository for all sorts of looks and not all of them are designed to maximize color capture. A quick Google search will bring up similar solutions for your camera brand.

One last note. Depending on your camera it may be possible to combine a color profile with additional brightness, contrast, sharpness, or saturation settings. Ideally use one approach or the other. The most important thing is to be consistent. Shoot the test as you intend to shoot later.

Next you’ll need a gray card. I have one as part of a color checker, but a single gray card is more affordable and frankly should be part of every photographer’s arsenal. A gray card is designed to match middle gray, or 18% gray, which is the magic number the exposure meter in your camera is aiming for when the needle hits the center. This is our next baseline.
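
As a rough sanity check on where that baseline lands digitally: in a plain 8-bit sRGB encode, 18% scene-linear gray works out to roughly code value 118 (a “Flat” profile will shift this, which is part of what our test measures). A quick sketch:

```python
def linear_to_srgb(v):
    # Standard sRGB transfer function
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

print(round(linear_to_srgb(0.18) * 255))  # ~118 in 8-bit
```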

Time to start shooting

You need to shoot under controlled conditions with the gray card lit as brightly as possible. Inside with photo lamps, or at least clean, bright, even lighting, is best, but outside in direct sunlight can also work. The secret is to keep the light consistent. Set your camera on a tripod and frame up your gray card to fill the frame. Focus, and set your camera to spot (not matrix) metering. Center the spot. Your gray card can be set up on a second tripod, a flat surface, or taped to a wall.

If you’re comfortable with white balancing, now is also the time to do so. You may see some color drift over the course of the test and that can be solved mathematically later but anything you can do to correct up front is better.

Setting up on the roof of my apartment building near the elevator bulkhead let me get the color checker in direct sunlight while keeping me and the camera in the shade.

Once set up, the exercise is to shoot a series of short 4- or 5-second video clips (not stills) of the gray card, starting with the correct exposure (needle in the middle), making a note of the settings – in my case ISO 100, f/14, 1/125th – and stopping down in ⅓ stop increments until your sensor reads absolute black, then stopping up to absolute white.

My approach is to step down the f-stop in ⅓ increments (slowly closing the aperture) until I’ve run out of options, and only then start increasing the shutter speed (shortening the exposure).
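
If it helps to plan the ladder before you shoot, here’s a sketch that generates the same aperture-first sequence. The starting values are my metered settings from above; the f/22 lens limit is an assumption, so substitute your own lens’s range:

```python
# The darkening ladder: 1/3-stop steps, aperture first, then shutter speed.
f, shutter = 14.0, 125.0       # f/14 at 1/125s, the metered starting point
third_stop = 2 ** (1 / 6)      # f-number scales by sqrt(2) per full stop

for step in range(1, 16):      # 15 steps = 5 stops down
    if f * third_stop <= 22.5: # small tolerance for 1/3-stop rounding
        f *= third_stop
    else:
        shutter *= 2 ** (1 / 3)  # shutter doubles every three steps
    # Your camera will display the nearest standard 1/3-stop value (f/16, f/18...)
    print(f"-{step / 3:.2f} EV: f/{f:.1f}, 1/{round(shutter)}s")
```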

If your display is set up to show histograms following each shot, keep an eye on them and you should see the bars gradually moving toward RGB 0,0,0 with each ⅓ stop down. At the very least the LCD preview will be getting darker. If your LCD preview and/or histogram indicate you’re close to but not quite at zero and you’ve run out of f-stop or shutter speed adjustments, as a final step take one last shot with the lens cap on. What you’re looking to record is how your sensor responds from middle gray all the way down to the absence of light.

Once you’ve taken the 12 or 15 shots (around 4-5 stops worth) that will take you to black, reset to your original f-stop and shutter speed settings for absolute middle gray and repeat the process, this time moving up in ⅓ stop increments aiming to reach absolute white.

Halving or doubling the ISO is, of course, the equivalent of stopping up or down, but don’t change the ISO in this exercise. An ISO change will alter the very sensor response you’re trying to measure. If you need more headroom, or can’t bottom out to black, get more light on your gray card or drop the light levels accordingly and reshoot.

Organizing the results

Notes from the field as I stop down in ⅓ stop increments, aperture first.

I find it helpful to keep field notes of each stop as I move down, but I quickly transfer these to an Excel or Google Sheets file once back at my desk to keep my head straight. Do the same and get comfortable with this sheet; we’ll be returning to it in a few moments and again in the next blog post. Here were my results:

Once you’ve downloaded the shots to your computer, it’s helpful to rename them to something that makes sense. I name mine in 0.3 EV (Exposure Value) increments.
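
If you’d rather script the renaming than drag files around by hand, a quick sketch (the folder name and extension are hypothetical, and it assumes the clips sort in shooting order):

```python
from pathlib import Path

# Rename the stopping-down series to +0.0EV, -0.3EV, -0.6EV, and so on.
clips = sorted(Path("graycard_test").glob("*.MOV"))  # hypothetical folder
for i, clip in enumerate(clips):
    ev = -0.3 * i + 0.0  # adding 0.0 normalizes -0.0 for the first clip
    clip.rename(clip.with_name(f"graycard_{ev:+.1f}EV{clip.suffix}"))
```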

The thumbnail view of the shots after renaming in Windows Explorer.

The next step is to record the RGB values of each ⅓ stop increment shot. I recommend using the excellent DJV imaging viewer. It’s free and a great tool for CG artists, allowing you to review image sequences and video files, and has OpenEXR support. Load each shot in turn into DJV and use the eyedropper tool to sample the RGB values. I sample from the center of the frame given this is where my spot meter was set. Because this is video, you’ll get some grain and/or variance in the RGB result, so move the eyedropper around a little until you feel like you’ve found some consistency. I typically see an RGB range that fluctuates up or down by 2 or 3 values, and so round down to the lowest value for consistency. Middle gray for me read at RGB 121,121,121.
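
If you’d rather script the sampling, averaging a patch at the frame center smooths out the noise for you. A sketch, assuming you’ve exported one still frame per clip (with ffmpeg, for example) and using the hypothetical file names from the renaming step:

```python
import imageio.v3 as iio

# Average a 50x50 pixel patch at frame center, where the spot meter read.
frame = iio.imread("graycard_+0.0EV.png")
h, w = frame.shape[:2]
patch = frame[h // 2 - 25 : h // 2 + 25, w // 2 - 25 : w // 2 + 25]
print(patch.reshape(-1, patch.shape[-1]).mean(axis=0).round())  # mean R, G, B
```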

It’s worth noting that my middle gray reading (121,121,121) nicely demonstrates that shooting a gray card at perfect exposure still likely won’t yield values that map exactly to 18% gray. Instead the results will depend on your camera’s characteristics and any color profile in use, which nicely exemplifies the purpose of this exercise: to gather data that’s specific to your camera setup.

Sampling RGB values in DJV Imaging Viewer. The values can be seen lower right.

At some point above or below middle gray you may see the RGB values separate, e.g. 32,34,35. This speaks to the lighting conditions and/or your camera’s sensor response. Record the values as you see them, even if they separate.

Once you hit RGB 255,255,255 you’ve hit the upper limit of your sensor. In my case this came 3 stops (9 shots) above middle gray. On the day of the shoot I recorded another 3 shots above this but ended up not needing them. Once you start reading 255,255,255 and stay there you’ve hit the wall.

The same is true in the other direction. At some point you should see RGB value readings of 0,0,0 and 3 or 4 shots may repeat this number if you were able to dive deeper. The first time you hit 0,0,0 is what you care about. In my case I hit 0,0,0 at about 4 ½ stops below middle gray taking into account shooting with the cap on to hit absolute zero. It’s not unusual to have lopsided results like this with the number of stops above and below middle gray not matching.

All in all this meant my Nikon D5200 with the stock lens in video mode was able to capture a total of 7 ½ stops, 3 above and 4 ½ below middle gray. Record these findings in your spreadsheet. Here are mine:
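
The arithmetic behind that total is simple, but worth pinning down since we’ll build on it next time. A sketch, using my D5200 readings (substitute your own):

```python
# Total dynamic range is the distance between the first clipped-white EV
# and the first true-black EV.
first_white_ev = 3.0    # first shot reading RGB 255,255,255
first_black_ev = -4.5   # first shot reading RGB 0,0,0
print(f"usable dynamic range: {first_white_ev - first_black_ev} stops")  # 7.5
```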

Depending on the curve you shoot with there are some rare cases where you may not be able to hit 255,255,255 or 0,0,0. For example, on one shoot I couldn’t get below 23,23,23. This is the sign of a bad curve and it’s wasting data. Try a different color profile or color settings and re-shoot. Remember, you only have 256 levels of color data per channel; don’t waste any.

Wrapping Up

You’ve just recorded the dynamic range of your camera, lens, and color profile combination. We’ve completed this exercise because, to build a unique color transform for our camera package, we need to know how many stops of light the 8-bit DRS video output from our camera represents and how the sensor responds across that range.

If you have more time available and/or some favorite lenses, it’s worth running through the exercise again, recording the results, and making a note of how they vary. From these results you’ll be able to build a suite of transforms.

In the upcoming posts I’ll show you how to plot the RGB data we’ve recorded and use this data to build a custom color transform. Congratulations, you’re on your way to more adventures in Scene Referred Space.

Thanks

If you’ve found this post helpful, please consider following me on Twitter for updates on future posts.

Once again, a huge thanks has to go to Troy Sobotka: industry veteran, brain behind Filmic Blender (standard in Blender as of version 2.79), and a huge wealth of knowledge on all things color and lighting. He opened my eyes to the importance of a Scene Referred workflow during the production of a recent VFX project. Be sure to follow him on Twitter.