Adventures in Scene Referred Space – Part Six

What was this all about?

Welcome to my final post (for now?) on the benefits of compositing in Scene Referred Space, and how to get at least part of the way there even if you’re limited to footage from an inexpensive camera that can’t natively export Scene Referred values.

I’ll wrap up by sharing a couple of simple composites that will hopefully show what this whole adventure has been all about, and whether it was all worth it.

Below are a couple of examples of some basic CG elements rendered in synthetic Scene Referred space and comparisons of how convincing an integration we can achieve when:

  • Compositing them over plates that have been extrapolated into a limited Scene Referred range using the custom color transform methodology shared in my previous posts.

Versus:

  • Compositing them over the original 8-bit Display Referred video plates output by a typical DSLR or mirrorless digital camera (in my case .MOV files from a Nikon D5200).

The difference in some cases will be subtle, but always important, and at the very least should provide valuable insight into why it’s important to work with physically accurate energy values. Hopefully you’ll see that one of the benefits of working this way is letting these physically accurate light values do most of the heavy lifting, so that compositing operations work consistently across both CG and real world elements, giving seamless integration with less fuss.

At the end of the day my hope is that fellow aspiring VFX artists looking for more seamless CG/plate integration at least walk away from this series armed with additional knowledge. As you take a look at these examples remember we’re working with plates with a limited extrapolated dynamic range. If the benefit is noticeable here, imagine how much more apparent it becomes when working with a plate with a much wider dynamic range produced by a more capable camera.

One last note. I’m not going to get into the minutiae of how the CG renders were set up: aligning real and CG cameras, lighting with image based lighting, and so on, as this information is freely available elsewhere. I’ll keep the conversation on Scene versus Display Referred values.

Step One: Getting into a unified space

Remember the OCIO color transform we developed to convert our 8-bit footage into Scene Linear? We should also use this as our working color space during compositing. In Blender (or your compositor of choice), set the render view under color management to your bespoke color transform. In my case:

This will align any computer generated RGB values we render within the upper and lower reaches of the dynamic range of our plate. Yes, it does artificially limit the dynamic range of the CG, but at the end of the day it’s better to work with the range that we do have in support of our ultimate aim: to make the CG elements appear as though they were shot “in camera”.
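
As an aside, if you’d rather set this from a script than through the UI, here’s a minimal sketch using Blender’s Python API. It assumes Blender was launched with the bespoke OCIO config in place (for example via the OCIO environment variable), and the transform name below is just a placeholder for whatever your config calls it:

```python
import bpy

# Minimal sketch: point the scene's color management at the custom transform.
# Assumes Blender was started with the bespoke OCIO config (e.g. by setting
# the OCIO environment variable before launch). The view transform name is a
# placeholder; use whatever your config exposes in the View dropdown.
scene = bpy.context.scene
scene.display_settings.display_device = "sRGB"
scene.view_settings.view_transform = "My Custom Transform"  # placeholder name
scene.view_settings.exposure = 0.0   # no extra adjustments on the view
scene.view_settings.gamma = 1.0
```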

Remember, when sampling RGB values in the UV/Image editor in Blender, two sets of values are always presented, with the color managed values (marked CM) indicated on the right.

Step Two: Aligning our exposures

I captured an HDRI on location using the same camera color profile that was used when shooting the plate (important to ensure color consistency across the plate and the CG that will be lit with the HDRI). But the HDRI by its nature covers a wider dynamic range than the plate, so it’s necessary to align the exposure of the resulting CG renders to the exposure of the plate; in other words, to find the relative exposure between the two.

Do this by introducing a multiply node to multiply the RGB values of the CG prior to compositing them over the plate. Because a multiply adjusts linear values linearly, it preserves the color integrity.

To figure out how much to multiply, we need a ground truth to align to, hence the value in shooting a Macbeth chart (or at least a gray card) on location and then synthetically re-creating the same color value(s) digitally in our CG. Here’s a great resource for finding synthetic Macbeth charts. Given I’m employing it as a texture on a simple plane, sRGB is the appropriate version to use. Blender will convert textures into Scene Linear Space automatically, but it’s always worth double-checking that it’s being ingested with the correct color transform.
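
For what it’s worth, here’s a minimal sketch of loading the synthetic chart and explicitly tagging it with the sRGB input transform (the file path is just a placeholder):

```python
import bpy

# Minimal sketch: load the synthetic Macbeth chart texture and make sure it's
# tagged as sRGB so Blender linearizes it correctly on ingest.
# The path below is a placeholder.
img = bpy.data.images.load("//textures/macbeth_chart_srgb.png")
img.colorspace_settings.name = 'sRGB'
```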

Now it’s simply a matter of multiplying the RGB of the CG to find a numerical and visual match between the RGB values of the digital chart and the real one caught on the plate.

Rather than just using an abstract multiplier, we can run the value through a Power node (two raised to the power of the number of stops) so that the number we dial in represents f-stops. In my case it was necessary to adjust the exposure of my CG by 1.9 stops to align it with the background plate.
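
For reference, here’s a minimal scripted sketch of that stops-to-multiplier arrangement in Blender’s compositor. The 1.9 stop value is the one from my setup; the node wiring itself is illustrative rather than a drop-in copy of my file:

```python
import bpy

# Minimal sketch: express the CG exposure offset in f-stops. A Math node set
# to Power computes 2 ** stops (2 ** 1.9 is roughly 3.73), and that value
# multiplies the CG render before it's merged over the plate.
scene = bpy.context.scene
scene.use_nodes = True
nodes = scene.node_tree.nodes
links = scene.node_tree.links

cg = nodes.new("CompositorNodeRLayers")      # the CG render layer

stops = nodes.new("CompositorNodeMath")      # 2 ^ stops -> multiplier
stops.operation = 'POWER'
stops.inputs[0].default_value = 2.0
stops.inputs[1].default_value = 1.9          # +1.9 stops for my plate

expose = nodes.new("CompositorNodeMixRGB")   # scale the CG linearly
expose.blend_type = 'MULTIPLY'
expose.inputs[0].default_value = 1.0         # full-strength multiply

links.new(cg.outputs["Image"], expose.inputs[1])
links.new(stops.outputs["Value"], expose.inputs[2])
# expose.outputs["Image"] then feeds the Alpha-Over merge over the plate.
```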

Here’s the node setup for the CG part of the composite using Blender’s compositor:

Here’s just the plate, extrapolated into Scene Referred values using the color transform detailed in my previous posts. In addition to shooting a Macbeth chart on location, I also shot a gray ball and chrome ball to help me align the HDRI and give me a couple of additional known values to match to:

Here are the CG elements multiplied using the approach above to align values:

Finally, using an “Alpha-Over” merge node, here’s the resulting composite:

Looks pretty good. So was all the effort to process our video with a transform really necessary? Well, given how easily everything has aligned without needing to introduce curves or other RGB futzing, that should tell you something already about the value of working with both Scene Referred values and a common camera color profile between plate and HDRI.

Out of interest, how would the composite look if we just used the original Display Referred plate straight out of the camera? Compare the two by using the interactive slider:

That also looks pretty good, doesn’t it? Well mostly, but look closely. Compare the bright specular highlight on the real and CG chrome balls in the middle of the shot and you’ll notice that the highlights align better with our Scene Referred plate than with the 8-bit Display Referred plate. That’s because our color transform has given us more dynamic range headroom for highlights, and we’re using the same transform as our view transform for the CG.

Also notice that the exposure of the sky in the top-right of the image has better parity with the specular highlights in the Scene Referred plate, as expected.

So, was it worth the effort? It looks pretty good either way, doesn’t it? Well, yes it does, but this is basically a raw composite before any further compositing or grading has taken place.

Even if you have access to a working color grade while compositing to get a sense of what the final production will look like, things often change as shots are pieced together into a sequence. Good compositors will therefore “push and pull” their composites to expose any visible differences between real and CG components. Let’s do the same.

Step Three: Straining our composite

We’ll start off by going relatively easy on our composite – introducing a basic grade:

Now the Display and Scene Referred plates look pretty much interchangeable because this grade has begun to blow out the highlights regardless of the plate, but this is really just hiding some potential problems.
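
For those curious, a grade like this can be dropped in after the merge with a Color Balance node; here’s a minimal sketch (the lift/gamma/gain values are purely illustrative, not the exact grade used above):

```python
import bpy

# Minimal sketch: a basic grade after the Alpha-Over merge using a
# Color Balance (lift/gamma/gain) node. Values are illustrative only.
nodes = bpy.context.scene.node_tree.nodes

grade = nodes.new("CompositorNodeColorBalance")
grade.correction_method = 'LIFT_GAMMA_GAIN'
grade.lift = (1.00, 1.00, 1.02)    # a touch of blue in the shadows
grade.gamma = (1.05, 1.00, 0.97)   # warm the midtones slightly
grade.gain = (1.15, 1.08, 1.00)    # push the highlights warmer and hotter
```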

Let’s get tougher. On top of the grade, dropping the exposure of the entire composite by adding a Multiply node after the merge starts to show how the specular highlights on the real and CG chrome balls begin to drift apart:

Granted, this is an extreme exposure change. But it does show how the Scene Referred plate values hold up to underexposure much more robustly than Display Referred values. By working in a unified Scene Referred Space, we’re therefore not only creating composites that align quickly with minimal fuss, but STAY THAT WAY when the composited image is adjusted further down the pipeline during grading.
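
In node terms this stress test is just another multiply sitting after the Alpha-Over merge; a minimal sketch (the 0.1 value is illustrative, not the exact amount used for the image above):

```python
import bpy

# Minimal sketch: pull the exposure of the whole composite down after the
# merge. Multiplying by 0.1 is roughly a -3.3 stop under-exposure.
nodes = bpy.context.scene.node_tree.nodes

darken = nodes.new("CompositorNodeMixRGB")
darken.blend_type = 'MULTIPLY'
darken.inputs[0].default_value = 1.0
darken.inputs[2].default_value = (0.1, 0.1, 0.1, 1.0)  # illustrative value
```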

One more example to show why Scene Referred values are important. Let’s introduce bokeh using a Defocus node. The bokeh filter expects real world energy values to work with in order to replicate the effect we would ordinarily create in camera:

Again, the bokeh from the specular highlights of the real and CG chrome balls match more closely when we use the Scene Referred plate, and once again, the bokeh also better matches the exposure of the sky further helping to sell that everything was caught “in-camera”.
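
If you want to try this yourself, here’s a minimal sketch of adding the Defocus node after the merge (the bokeh shape and f-stop values are illustrative, not the exact settings used above):

```python
import bpy

# Minimal sketch: add a Defocus node after the merge so the bokeh is computed
# from the scene linear composite. Settings are illustrative.
nodes = bpy.context.scene.node_tree.nodes

defocus = nodes.new("CompositorNodeDefocus")
defocus.use_gamma_correction = False   # operate on the linear values directly
defocus.bokeh = 'OCTAGON'              # bokeh shape
defocus.f_stop = 2.8                   # smaller f-stop = stronger defocus
```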

For reference, here’s the full Blender node network so you can see the nodes used after the Alpha-Over merge to “kick the tires” on our composite. Notice a small blur was introduced into the CG to help it sit more naturally over the plate. You can click on the image to see it at its original size:

Another example

Okay, so — yes — some of the differences in the example above were very subtle, but it’s often in these subtleties that our brain tells us when something’s wrong. Here’s another example with some more challenging lighting where the benefits of a Scene Referred approach should be much clearer. This time, a dark interior. Once again, here’s just the plate:

This time, to spice things up, I’ve added a CG lamp to the camera-left side of the hallway. This changes the light on the gray ball, which creates a mismatch with the ball on the plate, but know that the CG ball was exposure aligned to match prior to introducing the additional light. CG “Shadow Catcher” walls have been added to catch the shadow of the statue and the light bounce from the CG lamp for a more convincing composite:

And here’s the composite:

Introduce a stylized blue/magenta grade and the Display Referred plate looks passable but, unlike in the outdoor example, this time it’s much clearer that the Scene Referred plate stays much hotter in the lamp on the right wall, giving a better match to the CG lamp on the left:

Once again, introduce a bokeh blur and the mismatch between the lamps becomes even more apparent, while on the Scene Referred composite there’s a clearer, if subtle, parity between the specular highlights on both chrome balls:

Finally, introducing 2D motion blur in post really highlights that the values of the hallway lamps (not just the one on the right wall, but also the one in the rear of the shot) on the Display Referred plate begin to separate from the very hot values of the CG lamp:
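
I won’t claim this is exactly how the blur above was built, but a Directional Blur node after the merge is one plausible way to fake 2D motion blur in Blender’s compositor; a minimal sketch with illustrative values:

```python
import bpy

# Minimal sketch: fake 2D motion blur in post with a Directional Blur node
# after the merge. Node choice and values are illustrative.
nodes = bpy.context.scene.node_tree.nodes

mblur = nodes.new("CompositorNodeDBlur")
mblur.iterations = 3     # quality of the streak
mblur.distance = 0.02    # blur length as a fraction of image size
mblur.angle = 0.0        # streak direction in radians
```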

Wrapping Up

So there you go. By transforming our plate and our CG into a common color working space we’ve been able to match the two together simply by aligning exposures, without any further tweaking. And further, we’ve demonstrated that when we introduce common compositing techniques like color grades, exposure shifts, and blurs, our Scene Referred workflow holds up robustly.

Now of course, I could have taken the original Display Referred plates above and tweaked this and that to get the CG to align more convincingly, but ultimately that’s been one of the points of this exercise: to show that if you bring all of your elements into a common compositing space, you can spend your time on what really matters, crafting the look you want, rather than nudging here and futzing there just to get the CG convincingly integrated before you’ve even begun investigating a shot-specific look. Further, by using correct (or at least closer to correct) Scene Referred values, compositing effects like motion blur and bokeh that expect real world values behave as expected.

That completes my journey in Scene Referred Space for now. I hope you, as I did, learned something new about light, photography, and rendering over the past few posts. If you’ve ever wondered why your integrated CG composites don’t look as convincing as your favorite feature film and episodic visual effects, perhaps it’s because the energy values of the light you’ve been working with have been misaligned. Yes, as we’ve seen, sometimes the differences can be subtle, but when you’re aiming to make everything look like it was caught “in camera”, every small improvement adds to the realism.

Regardless of whether you’re using the color transform detailed in these posts to eke some (limited) dynamic range out of 8-bit video, or have the luxury of working with high dynamic range RAW footage, I hope you’ll see that moving your workflow into a unified Scene Referred working color space doesn’t just get you another important step closer to creating compelling, convincing imagery. It makes compositing faster and easier, and means you’ll produce composites that’ll hold up after you hand them off.

Thanks

If you’ve found this post helpful, please consider following me on Twitter for updates on future posts.

Once again, a huge thanks must go to Troy Sobotka, industry veteran and the brain behind Filmic Blender (standard in Blender as of version 2.79), who first walked me through this approach so that I could share it with you. Be sure to follow him on Twitter.

Comparison embeds courtesy of Juxtapose.

Paul Chambers