Adobe Premiere CC 2017 - Real World Feature Review

About two weeks ago Adobe released their 2017 update to Creative Cloud, and because of a couple of projects that I happened to be working on at the time, I figured I’d download it immediately to see if I could take advantage of some of the new features.

If you want the TL;DR review, the short version is this: most of the new features are genuine improvements, ranging from incredibly useful to minor time savers; a few, though, are utter crap.

Side note: I considered talking about the new features found in Adobe After Effects, but really, there’s not much to say other than: they work.  Largely they’re performance increases accomplished by moving things to the GPU, broader native format support, time-saving templates, and better integration with a few other Adobe CC products.  If you look at their new features page, you should be able to figure out pretty quickly which ones could be important to you.

Premiere is a different animal though, and I can’t say that all of the new features work properly.  But let’s start with the positives, of which there are many.

First and foremost, 8K native R3D imports.

This was expected, and necessary.  And while not ‘featured’ as part of their summaries, it is there and it works.  That’s a boon to all of us shooting on Helium sensors, and to our clients.  So far we’ve been running 8K ProRes or 2K proxies for our clients so they could edit with our footage; now they can take care of mastering with the 8K themselves (if they want).  So definitely a plus.

Second, the new native compression engine supporting DNxHD and DNxHR.

To me, this is a big plus.  I keep looking for a solid alternative to ProRes for my workflows, and while they don’t yet support DNxHR 444, they do solidly support DNxHR HQX.  Since a significant portion of my usual work is built on 12 bits per channel and roundtripping between Adobe and DaVinci, having a solid 12 bit 422 cross-platform alternative to ProRes may finally let me get rid of DPX.

Third, the new audio tools.  Oh, thank god, the new audio tools.

I happen to be working this week on a short project doing sound design and light mixing (I’ll link to it when it’s up) and the new audio tools in Premiere have been a massive time saver.  If you’ve ever tried to do audio work directly in Premiere before, you’ll know how maddening it’s been dealing with their unresponsive skeuomorphic effect control knobs.  Even doing basic EQ meant toggling values on and off and struggling to get things as precise as you wanted.

Adobe Premiere CC 2015.3 EQ

Adobe Premiere CC 2015.3 Pitch Shifter

But the new audio UX is… well, fantastic.  I really can’t praise it enough.  The effect controls are still skeuomorphic (which I actually think is important in this case) but look classier, and more importantly they actually respond really quickly to the changes you want to make.  They’ve expanded the toolset, and the effects run more quickly.  I couldn’t be happier - this alone saved me hours of frustration and headaches this week.

Adobe Premiere CC 2017 EQ

Adobe Premiere CC 2017 Pitch Shifter

Fourth, the new VR tools.

The same project I was doing sound design on happens to be a stereoscopic VR project, so the promise of new VR tools was immediately exciting - what more would they let me do, I wondered?

Install, fire it up, and… not much, actually.

Here are basically all of the new VR tools I could find:

  • Automatically detect the VR properties of imported footage, but only if they were properly flagged with metadata (marginally useful at best)
  • Automatically assign VR properties to sequences if you create a new sequence based on properly flagged VR footage.
  • Manually assign VR properties to sequences, allowing you to flag stereoscopic footage (and the type of 3D used, if any).  The sequence flagging allows Premiere to automatically flag for VR on export, when supported.
  • Embed VR metadata into mp4 files in the H264 encoder module, instead of just QuickTime.
  • Connect seamlessly to an Oculus or other VR headset with full 360 / 3D output.

Is this 2015.3 or 2017?

Is this 2015.3 or 2017?

Is this 2015.3 or 2017?

Is this 2015.3 or 2017?

Is this 2015.3 or 2017?

And that’s… it.  Really?  I mean, there is actually no difference between the viewers in 2015.3 and 2017 - both handle stereoscopic properly.  Assigning the VR flags to sequences and then embedding the necessary metadata on export is VERY useful.  But I would really LOVE to see an editor trying to edit with a VR headset.  Or color correct, for that matter.  It’s fine for reviewing what you’ve got, but not for the bulk of what you’re doing.

I should note that Premiere chokes on stereoscopic VR files at resolutions greater than 3K by 3K, which makes mastering footage from the GoPro Odyssey interesting, since it comes back from the Google Jump VR system as 8K by 8K mp4s.  Even converting to a full ProRes 422 intermediate at 4K by 4K proved too data heavy for Premiere to keep up with on an 8-core Mac Pro.

But it’s not only VR performance that’s an issue: it’s still missing a whole bunch of features that would really make it a useful VR tool.  Where are my VR aware transitions?  What about VR specific effects, like simple reframing?  Where is my VR support in After Effects?  Why can’t I manually flag footage as VR if it didn’t have the embedded metadata?  What about recognizing projections other than equirectangular?  They have a drop down for changing projection type on a timeline, but equirectangular is the only option.  What about native ambisonic audio support? Or even flagging for ambisonic audio on export?

Don’t get me wrong, what they’ve done isn’t bad; it does work, and it is an improvement.  It’s just that the tools they added were very tiny improvements on what was already there.  And I know of (and use) plugins that give Premiere and After Effects many of the VR features I need to actually work in VR.  But it’s really difficult, almost impossible, to get by without the 3rd party plugins.

Maybe I’m just jaded and judgmental, in part because of my reaction to the HDR ‘tools’ they announced, but when you advertise “New VR Support” as the second item on the new features list, it had better be good support.  Like, you know, letting you actually work as well in VR as you can in standard 2D video.  If I, as a professional, require third party plugins to make your program work at the most basic level, it’s not the turnkey solution you advertise.  I’m sure more tools are in the works, but for now it feels lackluster - an engineering afterthought rather than an intelligent feature designed for professionals.


But don’t worry, that’s not their most useless feature change.  Let’s talk about their new HDR tools.

What. The. Hell.

This is how using the new HDR 'tools' in Premiere 2017 feels.

I mean that.  With all of my heart.

I might be a little biased on the subject, but honestly I question who in their right mind decided that what they included was actually something useful.

It’s not.

It’s utter shit.

But worse than that, as-is it’s more likely to hurt the broader adoption of HDR than to help it.

And no, I’m not exaggerating.


On paper, the new HDR tools sound amazing.  HDR metadata on export!  HDR grading tools!  HDR Scopes!  Full recognition of HDR files!  Yay!

In practice, all of these are useless.

Let me give you a rundown of what the new HDR tools actually do.

Premiere now recognizes SMPTE ST.2084 HDR files, which is awesome.  But only if the proper metadata is already embedded in the video stream, and then only if it’s an HEVC deliverable file.  Not a ProRes, DPX, or other intermediate file; only HEVC.  And like VR support above, there’s no way to flag footage as already being in HDR or using BT.2020 color primaries.  Which ends up being a massive problem, which I’ll get to in a minute.

When you insert properly flagged HDR footage into a sequence, you get a pleasant surprise: hard clipping at 120 nits on your viewer or connected external display.  It’s honestly the worst clipping I’ve seen.  And there’s no way to turn it off.  If you export the clip to any format without the HDR metadata flag enabled on export, you get the same hard clipping.  And since you can only flag for HDR when exporting to HEVC, you can’t export HDR footage graded or processed through Premiere as DPX, ProRes, TIFF, OpenEXR, or any other intermediate format.

This is why in my article on Grading and Mastering HDR I mention that it’s really important to be using a color space agnostic color grading system.  When the application includes color management that can’t be disabled, your options become very limited.

Also, side note, their HEVC encoder needs work - it’s very slow at the 10 bits you need for HDR export.  I expect it’s better on the Intel Kaby Lake chips that include hardware 10 bit HEVC encoder support that, oh wait, don’t exist for professionals yet (2017 5K iMac maybe?)

But at least with the metadata flagging you can bypass the FFMPEG / x265 encoder that you’ve needed up to this point to properly encode HDR for delivery, right?

Why would you think that?  Of course you can’t.

Because if you bring a ProRes, DPX, or other intermediate file into Premiere, there’s no way to flag it as HDR, and Premiere doesn’t recognize embedded metadata saying it’s HDR the way DaVinci and YouTube do.  So if you use these intermediates as a source (individually or assembled in a sequence) and you flag for HDR on export, Premiere runs a transform on the footage that scales it into the HDR range as if it were SDR footage.

12 Bit ProRes 4444 HDR Intermediate in Timeline with 8 Bit Scope showing proper range of values

12 Bit ProRes 4444 HDR Intermediate in Timeline with HDR Scope showing how Premiere CC 2017 interprets the intermediate if you flag for HDR on export

When is that useful? If I have a graded SDR sequence that I want to encode into the PQ HDR space, while keeping 100% of the limits of an SDR image.  Because why the hell not.

But never fear!  Premiere has included new color grading tools for HDR!

Well, they aren’t horrible, which I suppose is a compliment?

How to enable HDR Grading in Premiere 2017

To enable HDR grading you need to change three different settings.  From the Lumetri context menu in your Lumetri panel, select “High Dynamic Range” to enable the HDR features; on the scopes, switch the scale from “8 Bit” to “HDR” (and select BT.2020 from the scope settings); and if you actually want to see those HDR values on the scope, enable the “Maximum Bit Depth” flag in your Sequence Settings.  I’m sure there’s a fantastic engineering explanation for that last one, but it’s not exactly intuitive or obvious, and it took me a bit of hunting to figure out.

Maximum Bit Depth needs to be turned on in Sequence Settings to enable proper HDR Scopes

HDR Scopes WITHOUT Maximum Bit Depth Flag

HDR Scopes WITH Maximum Bit Depth Flag

Once you’ve enabled HDR grading from the Lumetri drop down menu, you’ll get a few new options in your grading panels.  “HDR White” and “HDR Specular” come available under the Basic Correction panel, “HDR Range” comes available under the Curves panel, and “HDR Specular” comes available under the Color Wheels panel.

The HDR White setting seems to control how the other sliders of the Basic Correction panel behave, almost like changing the scale.  The higher the HDR White value, the less effect exposure adjustments have and the greater the effect of contrast adjustments.  The HDR Specular slider controls just the brightest whites, almost like the LOG adjustments I use in DaVinci Resolve Studio.  This applies to both the slider under Basic Correction and the wheel under the Color Wheels panel.  HDR Range seems to change the scale of the curves, similar to what the HDR White slider does for the basic corrections.

All of this, by the way, I figured out from watching the scopes, not the output image.  I’ve tried hooking up a second display to the computer and hooking up our BVM-X300 through our UltraStudio 4K to Premiere, but to no avail - the output image is always clipped to standard video levels and output in gamma 2.4.

Which, if you ask me, severely defeats the purpose of having HDR grading tools to begin with. Here’s a great idea: let’s allow people to grade HDR, but not see what they’re grading.  Which is like trying to use a table saw blindfolded.  Because that’s a thing people do, right?  Which brings me back to my original premise: What. The. Hell.

When you couple that little gem with the hard clip scaling, you realize that the only reason the color grading features are in this particular version is to make the process of cross grading from SMPTE ST.2084 into SDR easier, and nothing else.

No fields for adding HDR10 Compliant Metadata on Export.  That's okay, you shouldn't use their exporter anyway (at least not this version)

Oh, and one last thing, the real kicker: you can’t even export HDR10 compliant files.  Yes, I know I said that in the HEVC encoder you can flag for ST.2084, but you can’t add any MaxFALL, MaxCLL, or Master Display metadata.  And yes, I double checked that Premiere didn’t quietly put those into the file without telling you (it doesn’t).

And it has zero support for Hybrid Log Gamma.  Way to pick a side, Adobe.


So passions aside, let’s run down the list again of new HDR tools and what they do:

  1. Recognize SMPTE ST.2084 files, but only when already properly flagged in HEVC streams and no other codec or format.
  2. Export minimal SMPTE ST.2084 metadata to flag for HDR, but only works if your source files are already in the HEVC format and already properly HDR flagged (see #1), or if they’re graded in HDR in the timeline, which you can’t see.  Which renders their encoder effectively useless.
  3. Enable HDR grading through a convoluted process, with a minimal but useful set of tools.  But you can’t see what you’re doing, so I'm not sure why they're there.
  4. There is no bullet point 4.  That’s literally all it does.

The question that I have that I keep coming back to is “who do they think is going to use these tools?”  It feels like the entire feature set was a “well, we need to include HDR, so get it in there”.  But unlike the VR tools that you can kind-of build into, these HDR “tools” (I use the word loosely) are really problematic, not just because the toolset is incomplete but because the way that the current tools are implemented is actually harmful to a professional workflow.

Call it simple feature bandwagoning, or engineers that didn’t consult real creative professionals, or blame it on whatever reason you will.  But the fact is, this ‘feature’ is utter shit, which to me sours the whole release, just a little.

My biggest concern here is that while someone like me, who's been working with HDR for a while now, can tell that these will hurt my workflow, Premiere is an accessible editing platform for everyone from amateurs to professionals.  And anyone looking to get into HDR video may try to use these tools as their way in, and their results are going to be terrible.  God awful.  And that hurts everyone - why would we want to adopt HDR when 'most of what people can do' (meaning the amateurs and prosumers who don't know any better) looks bad?

So basically, if Premiere is part of your HDR workflow, don't even think about using their new 'tools'.

HDR Rant over, let’s bring this back to the positive.


Just to reiterate, the new audio tools in Premiere CC 2017 are fantastic.  I can't emphasize that enough.  Most of the rest of the features added are pretty good.  The new team projects collaboration tools, though I haven’t had a chance to use them, appear to work well (though are still in beta).  The new captions are useful, the new visual keyboard layout manager fantastic (though WAAAY long overdue!), and the other under-the-hood adjustments have improved performance.

Should you upgrade?  Yes!  It’s a great upgrade!  Despite my gripes I’m overall happy with what they did!

Just don’t try to use it for HDR yet, and be aware that the new VR tools aren’t really that exciting.

How to Upload HDR Video to YouTube (with a LUT)

Today YouTube announced, via their blog, official HDR streaming support.  I alluded to the fact that this was coming in my article about grading in HDR, because we've been working with them for the past month to get our latest HDR video onto the platform.  It's officially live now, so we can go into detail.


How to Upload HDR Video to YouTube

Similar to VR support, there are no flags on the platform itself that will allow the user to manually flag the video as HDR after it's been uploaded, so the uploaded file must include the proper HDR metadata.  But YouTube doesn't support uploading in HEVC, so there are two possible pathways to getting the right metadata into your file: DaVinci Resolve Studio 12.5.2 or higher, or the YouTube HDR Metadata Tool.  They are generally outlined in the YouTube support page, but not very clearly, so I think more detail is useful.

I did include a lengthy description on how to manage HDR metadata in DaVinci Resolve Studio 12.5.2+, with a lot more detail than they include on their support page, so if you want to use the Resolve method, head over there and check that out.  I've covered it once, so I don't see the need to cover the how-to's again.

I should note that Resolve doesn't include the necessary metadata for full HDR10 compatibility, lacking fields for MaxFALL, MaxCLL, and the Mastering Display values of SMPTE ST.2086.  It does mark the BT.2020 primaries and the transfer characteristics as either ST.2084 (PQ) or ARIB STD-B67 (HLG), which will let YouTube recognize the file as HDR Video.  YouTube will then fill in the missing metadata for you when it prepares the streaming version for HDR televisions, by assuming you're using the Sony BVM-X300.  So this works, and is relatively easy.  BUT, you don't get to include your own SDR cross conversion LUT; for that you'll need to use YouTube's HDR Metadata Tool.

 

YouTube's HDR Metadata Tool

Okay, let's talk about option two: YouTube's HDR Metadata Tool.  

Alright, not to criticize or anything here, but while the VR metadata tool comes in a nice GUI, the link to the HDR tool sends you straight to GitHub.  Awesome.  Don't panic, just follow the link, download the whole package, and un-zip the file.

So the bad news: whether you're working on Windows or on a Mac, you're going to need to use the command line to run the utility.  Fire up Command Prompt (Windows) or Terminal (MacOS) to get yourself a shell.

So the really bad news: if you're using a Mac, the binary you need to run is actually inside the app package mkvmerge.app.  If you're on Windows, drag the 32 or 64 bit version of mkvmerge.exe into Command Prompt to get things started; if you're on MacOS, right click on mkvmerge.app, select "Show Package Contents", and drag the binary file ./Contents/MacOS/mkvmerge into Terminal to get started:

Right click on mkvmerge.app and select "Show Package Contents"

Drag the mkvmerge binary into Terminal

The README.md file includes some important instructions and the default syntax to run the tool, with the assumption that you're using the Sony BVM-X300 and mastering in SMPTE ST.2084.  I've copied the relevant syntax here (the binary path at the start is the Mac one; replace the placeholder file paths marked with *s with your own):

./hdr_metadata-master/macos/mkvmerge.app/Contents/MacOS/mkvmerge \
-o *yourfilename.mkv* \
--colour-matrix 0:9 \
--colour-range 0:1 \
--colour-transfer-characteristics 0:16 \
--colour-primaries 0:9 \
--max-content-light 0:1000 \
--max-frame-light 0:300 \
--max-luminance 0:1000 \
--min-luminance 0:0.01 \
--chromaticity-coordinates 0:0.68,0.32,0.265,0.690,0.15,0.06 \
--white-colour-coordinates 0:0.3127,0.3290 \

If you're using a LUT, add the lines:

--attachment-mime-type application/x-cube \
--attach-file *file-path-to-your-cube-LUT* \

In all cases, end with your source file:

*yourfilename.mov*

Beyond the initial call to the binary or executable, the syntax is identical on MacOS and Windows.

The program's full syntax can be found here, but it's a little overwhelming.  If you want to look it up, just focus on section 2.8, which includes the arguments we're using here.  The first four arguments set the color matrix (BT.2020 non-constant), color range (broadcast), transfer function (ST.2084), and color primaries (BT.2020) by referencing specific index values, which you can find on the linked page.  If you want to use HLG instead of PQ, switch the value of --colour-transfer-characteristics to 0:18, which flags for ARIB STD-B67.

(Note to the less code savvy: the backslashes at the end of each line allow you to break the syntax across multiple lines in the command prompt or terminal window.  You'll need them at the end of every line you copy and paste in, except for the last one)

The rest of the list of video properties should be fairly self explanatory, and match the metadata required by HDR10, which I go over in more detail here.

Now, if you want to include your own SDR cross conversion LUT, you'll need to include the arguments --attachment-mime-type application/x-cube, which tells the program you want to attach a file that's not processed (specifically, a cube LUT), and --attach-file filepath, which is the actual file you're attaching.

If you don't attach your own LUT, YouTube will handle the SDR cross conversion with their own internal LUT.  It's not bad, though personally I don't like the hard clipping above 300 nits and the loss of detail in the reds - but that's largely a personal preference.  See the comparison screenshots below to see how theirs works.

Once you've pasted in all of the arguments and set your input file path, hit enter to let it run and it'll make a new MKV.  It doesn't recompress any video data, just copies it over, so if you gave it ProRes, it'll still be the same ProRes stream but with the included HDR metadata and LUT that YouTube needs to recognize the file.
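If you've got more than a couple of files to tag, it's worth wrapping the call in a small script so you're not retyping the arguments every time.  Here's a minimal sketch in Python - the binary path is the Mac one from above, and the folder and LUT names are hypothetical placeholders, so adjust everything to your own setup:

import subprocess
from pathlib import Path

MKVMERGE = "./hdr_metadata-master/macos/mkvmerge.app/Contents/MacOS/mkvmerge"  # Mac binary path from above

HDR_FLAGS = [
    "--colour-matrix", "0:9",                      # BT.2020 non-constant matrix
    "--colour-range", "0:1",                       # broadcast (legal) range
    "--colour-transfer-characteristics", "0:16",   # SMPTE ST.2084 (PQ); use 0:18 for HLG
    "--colour-primaries", "0:9",                   # BT.2020 primaries
    "--max-content-light", "0:1000",               # MaxCLL, in nits
    "--max-frame-light", "0:300",                  # MaxFALL, in nits
    "--max-luminance", "0:1000",                   # mastering display peak, in nits
    "--min-luminance", "0:0.01",                   # mastering display black, in nits
    "--chromaticity-coordinates", "0:0.68,0.32,0.265,0.690,0.15,0.06",
    "--white-colour-coordinates", "0:0.3127,0.3290",
]

def tag_for_youtube(source, lut=None):
    # Build the same command as above; mkvmerge rewraps the stream without recompressing it
    cmd = [MKVMERGE, "-o", str(source.with_suffix(".mkv"))] + HDR_FLAGS
    if lut is not None:
        cmd += ["--attachment-mime-type", "application/x-cube",
                "--attach-file", str(lut)]
    cmd.append(str(source))
    subprocess.run(cmd, check=True)

for mov in Path("masters").glob("*.mov"):                  # hypothetical folder of masters
    tag_for_youtube(mov, lut=Path("sdr_conversion.cube"))  # hypothetical LUT file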

Overall, it's a pretty fast tool, and extremely useful beyond just YouTube applications.  You can see what it's done in the set of screenshots below.  The first is the source ProRes clip, the second is the same clip after passing it through mkvmerge to add the metadata only, and the third went through mkvmerge to get the metadata and my own LUT:

ProRes 422 UHD Upload Without Metadata Injection

ProRes 422 UHD Upload in MKV File.  Derived from the ProRes File above and passed through the mkvmerge tool to add HDR Metadata, but no LUT.

ProRes 422 UHD Upload in MKV file.  Derived from the ProRes file above and passed through the mkvmerge tool to add HDR Metadata and including our SDR cross conversion LUT.  Notice the increased detail in the brights of the snake skin, and the regained detail in the red flower.


All of us at Mystery Box are extremely excited to see HDR support finally widely available on YouTube.  We've been working in the medium for over a year, and haven't been able to distribute any of our HDR content in a way that consumers would actually be able to use.  But now, there's a general content distribution platform available with full HDR support, and we're excited to see what all creators can do with these new tools!

HDR Video Part 5: Grading, Mastering, and Delivering HDR

To kick off our new weekly blog here on mysterybox.us, we’ve decided to publish five posts back-to-back on the subject of HDR video.  This is Part 5: Grading, Mastering, and Delivering HDR.

In our series on HDR so far, we’ve covered the basic question of “What is HDR?”, what hardware you need to see it, the new terms that apply to it, and how to prepare and shoot with HDR in mind.  Arguably, we’ve saved the most complicated subject for last: grading, mastering, and delivering.

First, we’re going to look at setting up an HDR grading project, and the actual mechanics of grading in the two HDR spaces.  Next, we’re going to look at how to prepare cross conversion grades to convert from one HDR space to the other, or from HDR to SDR spaces.  Then, we’re going to look at suitable compression options for master & intermediate files, before discussing how to prepare files suitable for end-user delivery.

Now, if you don’t handle your own coloring and mastering, you may be tempted simply to ignore this part of our series.  I’d recommend you don’t - not just because I’ve taken the time to write it, but because I sincerely believe that if you work at any step along an image pipeline, from acquisition to exhibition, your work will benefit from learning how the image is treated in the other steps along the way.  Just my two cents.

Let’s dive in.

NOTE: Much of this information will be dated, probably within the next six months to a year or so. As programs incorporate more native HDR features, some of the workarounds and manual processes described here will likely be obsolete.


Pick Your Program

Before diving into the nitty gritty of technique, we need to talk applications.  Built-in color grading tools or plugins for Premiere, Avid, or FCP-X are a no-no.  Until all of the major grading applications have full and native HDR support, you’re going to want to pick a program that offers full color flexibility and precision in making adjustments.

I’m going to run you through my workflow using DaVinci Resolve Studio, which I’ve been using to grade in HDR since October 2015, long before Resolve contained any native HDR tools.  My reasoning here is threefold: one, it’s the application I actually use for grading on a regular basis; two, the tricks I developed to grade HDR in DaVinci can be applied to most other color grading applications; and three, it offers some technical benefits that we find important to HDR grading, including:

  • 32 bit internal color processing
  • Node based corrections offering both sequential and parallel operations
  • Color space agnostic processing engine
  • Extensive LUT support, including support for multiple LUTs per shot
  • Ability to quickly apply timeline & group corrections
  • Extensive, easily accessible correction toolset with customizable levels of finesse
  • Timeline editing tools for quick edits or sequence changes
  • Proper metadata inclusion in QuickTime intermediate files

Now, I’m not going to say that DaVinci Resolve is perfect.  I have a laundry list of beefs that range from minor annoyances to major complaints (but the same is basically true for every program that I’ve used…), but for HDR grading its benefits outweigh its drawbacks.

My philosophy tends to be that if you can pretty easily make a program you’re familiar with do something, use that program.  So while we’re going to look at how to grade in DaVinci Resolve Studio, you should be able to use any professional color grading application to achieve similar results, by translating the technique of the grade into that application’s feature set.

If you are using DaVinci Resolve Studio, I recommend upgrading to version 12.5.2 or higher, for reasons I’ll clarify in a bit.

DaVinci Resolve Studio  version 12.5.2 has features that make it very useful for HDR grading and encoding.


Grading in HDR

So now that we’re clear on what we need in a color grading program, let’s get to the grading technique itself.  For starters, I’m going to focus on grading with the PQ EOTF rather than HLG, simply because there’s a lot of overlap between the two.  The initial subsections will focus on PQ grading, but I’ll conclude the section with a bit about how to adapt the advice (and your PQ grade!) to grading in HLG.

Set up the Project

I assume, at this point, that you’re familiar with how to import and set up a DaVinci Resolve Studio project for normal grading using your own hardware, adding footage, and importing the timeline with your sequence.  Most of that hasn’t changed, so go ahead and set up the project as usual, and then take a look at the settings that need to be different for HDR.

First, under your Master Project Settings you’re going to want to turn on DaVinci’s integrated color management by changing the Color Science value to “Davinci YRGB Color Managed”.  Enabling DaVinci’s color management allows you to set the working color space, which as of Resolve Studio 12.5.2 and higher will embed the correct color space, transfer function, and transform matrix metadata to QuickTime files using ProRes, DNxHR, H.264, or Uncompressed codecs.  As more and more applications become aware of how to move between color spaces, especially BT.2020 and the HDR curves, this is invaluable.

Enabling DaVinci YRGB Color Management as a Precursor for HDR Grading

Side note: I’m not recommending using their color management for input color space transformations; in fact, for my HDR grades, I set the input to “bypass” and the timeline and output color space values to the same values, because I don’t like how these transformations affect how basic grading operations act.  Color management is, however, a useful starting point for HDR and SDR cross conversions, which we’ll discuss in a bit.

Once color management is turned on, you’ll want to set it up for the HDR grade.  Move to the Color Management pane of the project settings and enable the setting “Use Separate Color Space and Gamma”.  This gives you fine-tunable control over the input, timeline, and output values.  If you want to keep these flat, i.e. prevent any actual color management by DaVinci, set the Input Color Space to “Bypass” and the Timeline and Output Color Space to “Rec.2020” - “ST.2084”.  This will enable the proper metadata in your renders without affecting any grading operations.

For the purposes of what I’m demonstrating here, if you are using DaVinci’s color management for color transformations, use these settings:

  • Input Color Space - <“Bypass”, Camera Color Space, or Rec 709> - <“Bypass”, Camera Gamma, or Rec 709>
  • Timeline Color Space - “Rec.2020” - “ST.2084”
  • Output Color Space - “Rec.2020” - “ST.2084”

DaVinci Resolve Studio for embedding HDR metadata in master files, without affecting overall color management.

NOTE: At the time of this writing DaVinci's ACES doesn’t support HLG at all, or PQ within the BT.2020 color space; in the future, this may be a better option to use, if you’re comfortable grading in ACES.

After setting your color management settings, you’ll want to enable your HDR scopes by flagging “Enable HDR Scopes for ST.2084” in the Color settings tab of the project settings.  This changes the scale on DaVinci’s integrated scopes from 10 bit digital values to a logarithmic brightness scale showing the output brightness of each pixel in nits.

How to Enable HDR Scopes for ST.2084 in DaVinci Resolve Studio 12.5+

DaVinci Resolve Studio scopes in standard digital scale, and in ST.2084 nits scale.

If you’re connected to your HDMI reference display, under Master Project Settings flag “Enable HDR Metadata over HDMI”, and under Color Management flag “HDR Mastering is for X nits” to trigger the HDR mode on your HDMI display.

How to enable HDR Metadata over HDMI to trigger HDR on consumer displays.

If you’re connected to a reference monitor over SDI, set the display’s color space to BT.2020 and its gamma curve to ST.2084 (and its Transform Matrix to BT.2020 or BT.709, depending on whether you’re using subsampling and what your output matrix is).

Settings for enabling SMPTE ST.2084 HDR on the Sony BVM-X300

That’s it for settings.  It’s really that simple.


Adjusting the Brightness Range

Now that we’ve got the project set up properly, we’re going to add the custom color management compensation that will allow the program’s mathematical engine to process changes in brightness and contrast in a way more conducive to grading in ST.2084.

The divergence of the PQ EOTF from a linear scale is pretty hefty, especially in the high values.  Internally, the mathematical engine operates on the linear digital values, with a slight weighting towards optimization for Gamma 2.4.  What we want to do is make the program respond more uniformly to the brightness levels (output values) of HDR, rather than to the digital values behind them (input values).
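To get a feel for just how hefty, here’s the ST.2084 EOTF itself as a quick Python sketch (the constants come straight from the SMPTE spec; code values are normalized 0-1, and this is purely for illustration):

import math

m1 = 2610 / 16384        # ST.2084 constants
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_to_nits(code):
    # Map a normalized (0-1) PQ code value to display luminance in nits
    p = code ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

for code in (0.25, 0.5, 0.75, 1.0):
    print(f"{code:.2f} -> {pq_to_nits(code):7.1f} nits")   # ~5, ~92, ~985, 10000

Half of the entire code range sits below about 92 nits, while the top quarter covers roughly 1,000 to 10,000 nits - that’s the non-linearity the grading engine’s linear math has to fight against.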

To compensate, we’re going to set up a bezier curve that compresses the lights and expands the darks:

Bezier curve for expanding the darks and compressing the whites of ST.2084, for grading with natural movement between exposure values in HDR

For best effect, we need to add the curve to a node after the rest of the corrections, either as a serial node after other correctors on individual clips, on the timeline as a whole (timeline corrections are processed in serial, after clip corrections), or exported as a LUT and attached to the overall output.

Where to attach the HDR bezier curve for best HDR grading experience - serial to each clip, or serial to all clips by attaching it to the timeline.

So what effect does this have on alterations?  Look at the side by side effect of the same gain adjustment on the histogram with and without the custom curve in serial:

Animated GIF of brightness adjustments with and without the HDR Bezier Curve

Without the curve, the upper range of brightnesses races through the HDR brights.  This is, as you can imagine, very unnatural and difficult to control.  With the curve, the bright ranges are forced to move more slowly - still increasing, but at a pace comparable to a linear adjustment of brightness rather than a linear adjustment of digital values: exactly what we want.

NOTE: DaVinci Resolve Studio includes a feature called “HDR Mode”, accessible through the context menu on individual nodes, that in theory is supposed to accomplish something similar.  I’ve found it has really strange effects on Lift - Gamma - Gain that I can’t see helping HDR grading: Gain races even faster through the brights, Gamma is inverted and seems to compress the space, and so does Lift, but at different rates.  If you’ve figured out how to make these controls useful, let me know…

If you've figured out how to use HDR Mode in DaVinci Resolve Studio for HDR grading, let me know!

Once that curve’s in place, grading in HDR becomes pretty normal, in some ways even easier than grading for SDR.  But there are a few differences that need to be noted, and a couple more tricks that will get your images looking the best.  And the first one of these we’ll look at is the HDR frenemy, MaxFALL.


Grading with MaxFALL

If you read the last part in this HDR series about shooting for HDR, you’ll remember that MaxFALL was an important consideration when planning the full scene for HDR.  In color grading you’re likely going to discover why MaxFALL is such an important consideration: it can become frustratingly limiting to what you think you want to do.

Just a quick recap: MaxFALL is the maximum frame average light level permitted by the display.  We calculate each frame average light level by measuring the light level, in nits, of each pixel and taking the average across each individual frame.  The MaxFALL value is the maximum encoded within an HDR stream, or permitted by a display.  The MaxFALL permitted by your reference or target display is what we really need to think about with respect to color grading.

Without getting into the technical reasons behind the MaxFALL, you can imagine it as collapsing all of the color and brightness within a frame into a single, full frame, uniform grey screen, and the MaxFALL is how bright that grey (white) screen can be before the display would be damaged.  Every display has a MaxFALL value, and will hard-limit the overall brightness by dimming the overall image when you send it a signal that exceeds the MaxFALL.

Average Pixel Brightness with Accompanying Source Image
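If you prefer numbers to imagery, the calculation itself is trivial.  Here’s a minimal sketch in Python with NumPy - it assumes you’ve already got each frame as an array of per-pixel luminance values in nits (decoded through the PQ curve above, for instance), which is an illustration rather than something a real pipeline hands you directly:

import numpy as np

def frame_average_light_level(frame_nits):
    # FALL of one frame: the mean of the per-pixel luminance values, in nits
    return float(np.mean(frame_nits))

def max_fall(frames_nits):
    # MaxFALL of a clip: the highest frame average light level across all frames
    return max(frame_average_light_level(f) for f in frames_nits)

# Hypothetical UHD frame: mostly 100 nits, with a 1000 nit patch covering 1/16 of it
frame = np.full((2160, 3840), 100.0)
frame[:540, :960] = 1000.0
print(frame_average_light_level(frame))   # 156.25 nits - far below the 1000 nit peak

A frame can carry 1000 nit highlights and still average only ~156 nits; it’s the large bright areas, not the small speculars, that push you into the MaxFALL wall.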

On the BVM-X300, you’ll notice the over range indicator turns on when you exceed the MaxFALL, so that when you look at the display, you can see when the display is limiting the brightness.  On consumer television displays, there is no such indicator, so if the dimming happens when you’re looking away from the screen, you’re likely to not notice the decreased range.  Use the professional reference when it’s available!

BVM-X300 Over Range Indicator showing MaxFALL Exceeded

Just like with CRT displays, the MaxFALL tends to be lower on reference displays than on consumer displays with the same peak brightness.  The larger size of consumer displays often reduces the damage from the heat generated by the higher current, and the color deviation tolerated in consumer displays allows for lower color fidelity, and therefore higher MaxFALLs, than a reference display.

So what do we do in grading that can be limited by the MaxFALL attribute?  Here are some scenarios that I’ve run into limitations with MaxFALL:

  1. Bright, sunny, outdoors scenes
  2. Large patches of blue skies
  3. Large patches of white clouds
  4. Large patches of clipped whites
  5. Large gradients into the brightest whites

When I first started grading in HDR, running into MaxFALL was infuriating.  You’re working at a nice clip when suddenly, no matter how much you raise the brightness of the scene, it just never gets brighter!  I didn’t understand it initially: I was looking at the scopes and was well below the peak brightness available on my display, yet every time I added gain, the display bumped up, then suddenly dimmed down.

When MaxFALL is exceeded, the Over Range indicator lights up and the display brightness is notched down to prevent damage.

Now that I know what I was fighting against, it’s less infuriating, but still annoying.  In general, I know that I need to keep the majority of the scene around 100-120 nits, and pull only a small amount into the superwhites of HDR.  When my light levels are shifting across frames, as in this grade with the fire breather, I’ll actually allow a few frames to exceed the display’s MaxFALL temporarily - so long as it’s very, very brief - so as not to damage the display when it temporarily averages brighter.

Grading with brief exceeding of target MaxFALL.

When I’m grading content that’s generally bright, with long stretches of even brighter material, such as this outdoor footage from India, it can be a good idea to keyframe an upper-brightness adjustment to hold down the MaxFALL, basically dropping the peak white values as the clipped or white patch takes up more and more of the scene.  This can be visible, though, as a greying of the whites, so be careful.

Tilt-up Shot of Taj Mahal where brightness keyframes were required to limit MaxFALL.  In an ideal world, no keyframes would have been necessary and the final frame would have been much brighter (as shot) than the first.

In other cases, it may be necessary to drop the overall frame brightness, to allow for additional peak brightness in a part of the frame, such as what happened with this shot of Notre Dame Cathedral, where I dropped the brightness of the sky, tree, and cathedral to less than what I wanted to allow the clouds to peak higher into the HDR white range.

Average brightness was limited so that more of the cloud details would push higher into the superwhites without exceeding MaxFALL

In some cases, you really have no choice but to darken the entire image and reduce the value of peak white, such as this shot of the backflip in front of the direct sun - the gradient around the sun steps visibly if I pull its center up to the peak white of the sun, while the MaxFALL is exceeded if I pull up the overall brightness of the image.

MaxFALL limited the white point to only 200 nits because of the quantity of the bright portion of the image and the softness of the gradient around the sun.

The last consideration with MaxFALL comes with editing across scenes, and is more important when maintaining consistency across a set of shots that should look like they’re in the same location.  You may have to decrease the peak white within the series of shots so that on no edit does the white suddenly appear grey, or rather, ‘less white’ than the shot before it.

Three shots with their possible peak brightnesses (due to MaxFALL limitations of the BVM-X300) vs the values I graded them at.

What do I mean by ‘less white’?  I mentioned it in Part 4: Shooting for HDR, but to briefly reiterate and reinforce:


In HDR grading, there’s no such thing as absolute white and black.


HDR Whites & Blacks

From a grading paradigm point of view, this may be the biggest technical shift: in HDR, there is no absolute white or absolute black.

Okay, well, that’s not entirely true, since there is a ‘lowest permitted digital code’ which is essentially the blackest value possible, and a ‘highest permitted digital code’ that can be called the peak brightness - essentially the whitest value possible within the system (encoded video + display).  However, in HDR, there is a range of whites available through the brights, and a range of blacks available through the darks.

Black and white have always been constructs in video systems, limited by the darkest and brightest values displays could produce - the hard-coded limits of the available digital and voltage values.  In traditional SDR color grading, crushing to black was simple: push the darks below the lowest legal dark value, and you have black.  Same thing with whites - set the brightness to the highest legal value and that was the white that was available: anything less than that tends to look grey, especially in contrast with ‘true white’ or ‘legal white’.

But in the real world, there is a continuum that exists between blacks and whites.  With the exception of a black hole, there is nothing that is truly ‘black’, and no matter how bright an object is, there’s always something brighter, or whiter than it.

Of course, that’s not how we see the world - we see blacks and whites all around us.  Because of the way that the human visual system works, we perceive as blacks any part of a scene (that is, what is in front of our eyes) that is either very low in relative illumination and reflects all wavelengths of light relatively uniformly, or that is very low in relative illumination such that few of our cones are activated in our eyes and we therefore can’t perceive the ratio of wavelengths reflected with any degree of certainty.  Or, in other words, everything that is dark with little saturation, or so dark that we can’t see the saturation, we perceive as black.

The same thing is true with whites, but in reverse.  Everything illuminated or emitting brightness beyond a specific value, with little wavelength variance (or along the normal distribution of wavelengths) we see as white, or if things are so bright that we can’t differentiate between the colors reflected or emitted, we see it as white.

Why do I bring this up?  Because unlike in SDR video where there is a coded black and coded white, in HDR video, there are ranges of blacks and whites (and colors of blacks and whites), and as a colorist you have the opportunity to decide what level of whiteness and blackness you want to add to the image.

Typically, any area that’s clipped should be pushed as close as possible to the scene-relative white level at which the camera clipped.  Or, in other words, as high as possible in a scene with a very large range of exposure values, or significantly lower when the scene was overexposed and therefore clipped at a much lower relative ratio.

Clipping in an image with wide range of values and tones vs clipping in image with limited range of values and tones

Since this is different for every scene and every camera, it’s hard to recommend what that level should be.  I usually aim for the maximum value of the display or the highest level permitted by MaxFALL if my gradient into the white or size of the clipped region won’t permit it to be brighter.

So long as the light level is consistent across edits, the whites will look the same and be seen as white.  If, within a scene, you have to drop the peak brightness level of one shot because of MaxFALL or other considerations, it’s probably going to look best if you drop the brightness level of the whites across every other shot within that same scene.  In DaVinci, you can do this quickly by grouping your shots and applying a limiting corrector (in the Group Post-Clip, to maintain the fidelity of any shot-based individual corrections).

Sometimes you may actually want a greyer white, or a colored white that reads more blue or yellow, depending on the scene.  In fact, when nothing within the image is clipping and you don’t have other MaxFALL considerations, it’s very liberating to decide the absolute white level within an image.  Shots without any ‘white’ elements can still have colored brights at levels well above traditional white, which helps separate the relative levels within a scene in a way that could not be possible with traditional SDR video.

The only catch, and this is a catch, is that when you do an SDR cross conversion, some of that creativity can translate into gross looking off-whites, but if you plan specifically for it in your cross conversion to SDR, you should be able to pull it off in HDR without any issues.

Blacks have a similar tonal range available to them.  You have about 100 levels of black available below traditional SDR’s clipping point, and that in turn creates some fantastic creative opportunities.  Whole scenes can play out with the majority of values below 10 nits.  Some parts of the darks can be so dark that they appear uniform black, until you block out the brighter areas of the screen and suddenly find that you can see even deeper into the blacks.  Noise, especially chromatic noise, disappears more in these deep darks, making the image appear cleaner than it would in SDR.  All of these offer incredible creative opportunities when planning for production, and I discussed them in more detail in Part 4: Shooting for HDR.

So how do you play with these whites and blacks?

The two tools I use on a regular basis to adjust my HDR whites and blacks are the High and Low LOG adjustments within DaVinci.  These tools allow me to apply localized gamma corrections to specific parts of the image, that is, those above a specific value for the highs adjustment, and those below a specific value for the lows adjustment.

DaVinci Resolve Studio's LOG Adjustment Panel

In SDR video, I typically use LOG adjustments on the whites to extend contrast, or to adjust the color of the near-whites.  In HDR, I first adjust the “High Range” value to ‘bite’ the part of the image that I want, and then pull it towards the specific brightness value I’m looking for.  This often (but not always) involves pulling up a specific part of the whites (say, the highlights on the clouds) to a higher brightness value in the HDR range, for a localized contrast enhancement, though I do use it to adjust the peak brightness too.

Effect of LOG Adjustments on an HDR Image with Waveform.  Notice the extended details in the clouds.

In SDR video, I’d typically use the low adjustment to pull down my blacks to ‘true black’, or to fix a color shift in the blacks I’d introduced with another correction (or the camera captured). In HDR, I use the same adjustment to bite a portion of the lows and extend them through the range of blacks, increasing the local contrast in the darks to make the details that are already there more visible.

The availability of the LOG toolset is one of the major reasons I prefer color grading in DaVinci, and what it lets you do quickly with HDR grading really helps speed up the process.  When it’s not available, its functionality is difficult to emulate with finesse using tools such as curves or lift-gamma-gain.  I’ve found it generally requires a secondary corrector limited to a specific color range followed by a gamma adjustment - a very inelegant workaround, but one that works.


Futureproofing

Once the grade is nearly finalized, there’s a couple of things that you may consider doing to clean up the grade and make it ‘futureproof’, or, to make sure that things you do now don’t come back to haunt the grade later.

If you’ve been grading by eye, any value above the maximum brightness of your reference display will be invisible, clipped at the maximum display value.  If you’re only ever using the footage internally, and on that display only, don’t worry about making it future proof.  If, however, you intend to share that content with anyone else, or to upgrade your display later, you’ll want to actually add that clipping to your grade.

The reasoning here, I think, is pretty easy to see: if you don’t clip your video signal, your master will contain information that you can’t actually see.  In the future, or on a different display with greater latitude, it may become visible.

There are a couple of ways of doing this.

One that’s available in DaVinci is to generate a soft-clip LUT in the Color Management pane of the project settings, setting the top clip value to the 10 bit digital value of your display’s maximum brightness (767, for instance, for a 1000 nit max brightness display in PQ space).  Once you generate the LUT, attach it to the output and you’ve got yourself a fix.

Generating a Soft Clipping LUT for ST.2084 at 1000 nits in DaVinci Resolve
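If you’re wondering where a number like 767 comes from, it’s just the inverse of the PQ curve from earlier.  A quick sketch (Python, assuming full-range 10 bit code values):

m1 = 2610 / 16384        # ST.2084 constants, as before
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def nits_to_pq_code(nits, bit_depth=10):
    # Inverse ST.2084: luminance in nits to a full-range digital code value
    y = (nits / 10000.0) ** m1
    code = ((c1 + c2 * y) / (1 + c3 * y)) ** m2
    return round(code * (2 ** bit_depth - 1))

print(nits_to_pq_code(1000))   # -> 767

Swap in your own display’s peak brightness to get the clip value for your setup.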

Alternatively, you can adjust the roll-off curve we’re already using for uniform brightness adjustments so that it limits the maximum displayable value as closely as possible, by extending the bezier curve into a near-flat line that lands at your target maximum.

Bezier curve for HDR grading with flatter whites to minimize peak range

But sometimes you may want to leave those values there, so that when the next generation of brighter displays comes around, you may find a little more detail in the lights.  What’s really important here is that you make white white, and not accidentally off-white.

If you’re working with RAW footage that allows you to adjust the white balance later, you may find that where white ‘clipped’ on the sensor isn’t uniform in all three channels.  This can also happen with a grading correction that adjusts the color balance of the whites - you can end up with separate clip points in the red, green, and blue channels that may be clipped and invisible on your display, but will show up in the future.

Waveform of clipped whites with separated RGB Channels.  This is common with RAW grading with clipped whites at the sensor and the ability to control decoded color temperature.

The simple fix here is to add a serial node adjustment that selects, as a gradient, all values above a specific point, and desaturates the hell out of them.  Be careful to limit your range to low saturation values only (so long as they encompass what you’re trying to hit) so that you don’t accidentally desaturate other, more intentionally colorful parts of the image that just happen to be bright.

How to fix RGB separated clipped whites: add a serial node with a Hue/Saturation/Luminance restriction to just the whites and reduce their saturation to zero.

Working with Hybrid Log Gamma

Up to this point the grading techniques I’ve been discussing have been centered on grading in PQ space.  Grading in Hybrid Log Gamma is slightly different in a couple of important ways.

As a quick refresher, Hybrid Log Gamma is an HDR EOTF that intends to be partially backwards compatible with traditional gamma 2.4 video.  This is a benefit and a drawback when it comes to HDR grading.
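The ‘hybrid’ in the name is literal: below a threshold the signal is a simple square root (close enough to a conventional gamma that SDR displays render the darks acceptably), and above it the curve goes logarithmic to carry the HDR brights.  Here’s the BT.2100 HLG OETF as a short Python sketch, just to make the two halves visible (scene light and signal both normalized 0-1):

import math

a = 0.17883277                  # BT.2100 HLG constants
b = 1 - 4 * a                   # 0.28466892
c = 0.5 - a * math.log(4 * a)   # 0.55991073

def hlg_oetf(e):
    # Normalized scene linear light (0-1) to normalized HLG signal (0-1)
    if e <= 1 / 12:
        return math.sqrt(3 * e)          # square root segment: the SDR-friendly darks
    return a * math.log(12 * e - b) + c  # log segment: carries the HDR brights

print(hlg_oetf(1 / 12))   # 0.5 - the two segments meet at half signal level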

If you have multiple reference displays available, this is an important time to break them out.  Ideally, one display should be set up in HLG with a system gamma of 1.2 (or whatever your target system gamma is), and the second should be set up in regular gamma 2.4.  That way, whatever grading you do, you can see the effect immediately on both target systems.  Otherwise you’ll need to flip back and forth between HDR and SDR modes on a single reference display in your search for ‘the happy medium’.

Grading HLG with two reference displays - one in HDR, one in SDR, to ensure the best possible contrast in both.

Most of the project and grading setup is identical to grading with the PQ EOTF, with the exception of the bezier curve in serial that adjusts the brightness response.  In HLG we don’t want to expand the darks, since the HLG darks are identical to the gamma 2.4 darks, so we want that part of the curve to be more linear, before easing into our compression of the highs.

Bezier curve for HDR grading in Hybrid Log Gamma.  This curve replaces the ST.2084 Bezier curve added earlier.

Once that’s in place, the rest of the grading process is similar to grading in PQ.  In fact, you can replace the ST.2084 bezier curve with this curve and your grade should be nearly ready to go in HLG.  The major exception to this is that you still need to regularly be evaluating how the image looks in SDR, on a shot by shot basis.

The biggest complaint I have with grading in HLG is the relative contrast between the HDR and the SDR images.  Because HLG runs up to 5000 nits with its top digital values, if you’re grading in 1000 nits you end up with a white level in the SDR version below the usual peak white.  This often means that the whites in the SDR version look muddied and lower contrast than the same content graded for SDR natively.  This is especially true when the MaxFALL dictates a darker image is necessary and a lower white point is necessary, landing values solidly in the middle ranges of brightness.

Hybrid Log Gamma occasionally has much dimmer and muddied whites, when compared to SDR natively graded footage, due to MaxFALL limitations.

And as if muddied whites weren’t enough, it’s difficult in HLG to find a contrast curve that works for both the HDR and the SDR image: because of how our brains perceive contrast, when the contrast looks right and natural in HDR, it looks flat in SDR because of the more limited dynamic range, while when it looks right in SDR it looks overly contrasty in HDR.

Personally, I find that grading in HLG compounds the minor problems of HDR with the problems of SDR, which I find extremely irritating.  Rather than being happy with the grade, I’m often left with a sense of “It’s okay, I guess”.

But on the other hand, when it’s done, you won’t necessarily have to regrade for other target gamma systems, which is what you have to do when working in PQ.



Cross Converting HDR to HDR & HDR to SDR

Let’s be honest.  A PQ encoded image displayed in standard gamma 2.4 rendering looks disgusting.  The trouble is, we only really want to do the bulk of the grading once, so how can we cheat and make sure we don’t have to regrade every project two or more times?

LUTs, LUTs, and more LUTs!  Also, Dolby Vision.

Dolby Vision is an optional (paid to Dolby) add-in for DaVinci Resolve Studio that allows you to encode the metadata for the SDR cross conversion into your output files.  Essentially, the PQ HDR image is transmitted with metadata that describes how to transform the HDR into a properly graded SDR image.  It’s a nifty process that seeks to solve the dilemma of backwards compatibility.

But I’ve never used it, because we’ve had no need and I don’t have a license.  DaVinci Resolve’s documentation on how to use it is extensive though, and it requires a process similar to a standard SDR cross conversion, so take that as you will.  I’ve also heard rumors that some major industry players are looking to create a royalty-free dynamic metadata alternative that everyone can use as a global standard for transmitting this information - but that’s just a rumor.

For everyone not using Dolby Vision, you’re going to have to render the SDR versions separately from the HDR versions, as separate video files.  Here at Mystery Box, we prefer to render the entire HDR sequence as a set of clip-separated 12 bit intermediate files and make the SDR grade from those, versus adding additional corrector elements to the HDR grade.  This tends to render faster, because you only render from the RAWs once, and make any other post-processing adjustments once instead of on every version.

NOTE: I’m going to cover the reason why later, but it’s important that you use a 12 bit intermediate if you want a 10 bit master, since the cross conversion from PQ to any other gamma system cuts the detail levels preserved by about 2-4 times, or an effective loss of 1-2 bits of information per channel.
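You can sanity check that claim by counting code values.  A rough sketch, reusing the PQ function from earlier (Python, full-range 10 bit, and assuming a 100 nit SDR window):

m1 = 2610 / 16384        # ST.2084 constants, as before
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_to_nits(code):
    p = code ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

# How many 10 bit PQ codes land inside the 0-100 nit SDR window?
sdr_codes = sum(1 for v in range(1024) if pq_to_nits(v / 1023) <= 100)
print(sdr_codes)   # ~520: only about half the PQ code range

A 10 bit gamma 2.4 master spreads all 1,024 codes over that same window (steps of roughly 0.23 nits near 100 nits, versus roughly 1 nit steps for 10 bit PQ), so a 10 bit PQ source carries around 2-4x less tonal detail there - hence the 12 bit intermediate.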

When I’m cross converting from PQ in the BT.2020 space to gamma 2.4 in the BT.2020 space, after reimporting and reassembling the HDR sequence (and adding any logos or text as necessary), I’ll duplicate the HDR sequence and add a custom LUT to the timeline.

The fastest way to build this LUT is to use the built-in DaVinci Color Management (set the sequence gamma to ST.2084 and the output gamma to Gamma 2.4) or the HDR 1000 nits to Gamma 2.4 LUT, and then add a gain and gamma adjustment to bring the brightness range and contrast back to where you want it to be.  It’s a pretty good place to start building your own LUT on, and while these tools weren’t available when I started building my first cross conversion LUT, the process they use is nearly identical to what I did.
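
If you’re curious what’s inside such a LUT, here’s a minimal sketch in Python - my own illustration of the idea, not Resolve’s actual math - that decodes PQ code values to nits, scales a 1000 nit grading range into the 100 nit SDR range, and writes a 1D .cube file.  The gain and gamma variables are the trims you’d adjust to taste:

```python
import numpy as np

# SMPTE ST.2084 (PQ) EOTF constants
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(v):
    """Decode normalized PQ code values (0-1) to absolute nits."""
    vp = np.power(v, 1 / m2)
    return 10000 * np.power(np.maximum(vp - c1, 0) / (c2 - c3 * vp), 1 / m1)

size = 1024
codes = np.linspace(0, 1, size)
nits = pq_to_nits(codes)

# Map the 0-1000 nit grading range into 0-100 nit SDR, then encode as gamma 2.4.
gain, gamma_trim = 1.0, 1.0                    # starting trims - adjust to restore contrast
sdr_linear = np.clip(nits / 1000 * gain, 0, 1)
out = np.power(sdr_linear, gamma_trim / 2.4)

with open("pq1000_to_gamma24.cube", "w") as f:
    f.write(f"LUT_1D_SIZE {size}\n")
    f.writelines(f"{v:.6f} {v:.6f} {v:.6f}\n" for v in out)
```

A real conversion LUT rolls off the highlights instead of hard clipping at 100 nits, but the skeleton - PQ decode, range scale, gamma re-encode - is the same.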

Using DaVinci Resolve Studio to handle HDR to SDR cross conversion


Once you’ve attached that correction to the timeline, it’s a pretty fast process to run through each shot and simply make minor brightness, contrast, white point, and black point adjustments.  Using DaVinci’s built-in LUT / Color Management, I can do a full SDR cross conversion for 5 minutes of footage in less than half an hour; using my own custom LUT, the process can take less than five minutes.

HDR to SDR Cross Conversions using a Custom LUT vs. using DaVinci Resolve Studio's integrated conversion + brightness adjustment, Image 01

HDR to SDR Cross Conversions using a Custom LUT vs. using DaVinci Resolve Studio's integrated conversion + brightness adjustment, Image 02

HDR to SDR Cross Conversions using a Custom LUT vs. using DaVinci Resolve Studio's integrated conversion + brightness adjustment, Image 03


HDR to SDR Cross Conversions using a Custom LUT vs. using DaVinci Resolve Studio's integrated conversion + brightness adjustment, Image 04

Notice the detail loss in the pinks, reds, and oranges from oversaturation in the straight downconversion (images 01 and 04), the milkiness and hue shifting in the darks (image 02), and the fluorescence of the pinks and skin tones (image 03).  This happens largely in the BT.2020 to BT.709 color space conversion, when colors land outside of the BT.709 gamut.  Building a custom LUT can be a great solution to retain the detail.

After prepping the BT.2020 version, making a BT.709 version for web or demonstration purposes is incredibly easy.  All that you have to do is duplicate the BT.2020 sequence (this is why I like adding LUTs to timelines, instead of to the output globally) and add an additional LUT to the timeline that does a color space cross conversion from BT.2020 to BT.709.  (Alternatively, change the color management settings.)  Since the BT.2020 and BT.709 contrast is the same, all I need to do then is run through the sequence looking for regions where reds, blues, or greens end up out of gamut, and bring those back in.  That’s usually less than 5 minutes for a 5 minute project.
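
For the curious, the core of that color space conversion is a single 3x3 matrix applied in linear light.  Here’s a sketch; the matrix values are the commonly published BT.2020-to-BT.709 coefficients, rounded to four decimal places:

```python
import numpy as np

# Linear-light BT.2020 RGB -> BT.709 RGB (derived from the two sets of primaries)
M_2020_TO_709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def bt2020_to_bt709(rgb):
    """rgb: linear-light BT.2020 values in 0-1."""
    out = M_2020_TO_709 @ rgb
    # Values that land below 0 or above 1 were outside the BT.709 gamut -
    # these are the out-of-gamut reds, greens, and blues you chase down by hand.
    return np.clip(out, 0.0, 1.0)

# A fully saturated BT.2020 red sits well outside BT.709 and simply clips:
print(bt2020_to_bt709(np.array([1.0, 0.0, 0.0])))
```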

Stacked LUTs on a Timeline to combine transformations.

Cross converting from HLG to PQ is fairly simple: since PQ encompasses a larger range of brightnesses than HLG, the image can be moved over directly with a simple LUT or color management tool.  You may want to adjust your low-end blacks to take advantage of the deeper PQ space, but it’s otherwise straightforward.

Cross-grading from PQ to HLG is a different animal altogether.  It’s still faster to work from the intermediate than the RAWs themselves, but it takes more than just a simple LUT or color management solution.  Because of the special consideration for HLG - that its contrast has to look good in both HLG and gamma 2.4 - you have a lot more work to do finessing the contrast than when you convert ST.2084 into gamma 2.4.  You’ll also run into issues with balancing the MaxFALL in HLG, which in some cases you’ll just have to ignore.

DaVinci’s built-in color management is actually quite a good starting point for cross converting from HLG to PQ or PQ to HLG.  It’s important, though, to be aware of how color management injects metadata into QuickTime files, which I’ll address in a second, so that you don’t accidentally flag the incorrect color space or gamma in your master files.

Using DaVinci Color Management to apply an HLG to ST.2084 cross conversion.

Understanding how LUTs handle SDR cross conversions is really important: until there’s a universal metadata method for including SDR grades with HDR content (which would itself essentially be a version of a shot-by-shot LUT), display manufacturers and content delivery systems rely on LUTs (or their mathematical equivalent) to convert your HDR content into something that can be shown on SDR displays!


Metadata & DaVinci’s Color Management

If you’re using color management to handle parts of your color space and gamma curve transformations, you’re going to need to adjust the Output Color Space each time you change sequences, to match the targeted space of that timeline (in addition to switching the settings on your reference display).  This is actually the biggest reason I prefer using LUTs over color management - it just becomes a hassle to continually have to reset the color management when I’m grading.

Even if you’re not using the color management to handle color space conversions, you’re going to need to make some changes to the color management settings when rendering out QuickTime masters, so that the correct metadata is included into the master files.

Proper Metadata Inclusion for BT.2020 / ST.2084 QuickTime File, encoded in ProRes 4444 out of DaVinci Resolve Studio.

The settings you use when you go to render will depend on whether you’re using color management for the transformation or not.  If you are using color management for the transform, change just the Output Color Space to match the target color space and gamma of the timeline to be rendered.  If you aren’t using color management to handle the color conversion, switch both the Timeline Color Space and the Output Color Space to match your target color space and gamma immediately before rendering the matching timeline.  Again, and unfortunately, you will need to make this adjustment every time you go to render a new sequence.  Sorry, no batch processing.

DaVinci Resolve Studio Color Management Settings for transforming color and adding metadata, and adding metadata only.

Grading in HDR isn’t as hard as it originally seems, once you figure out the tricks that allow the grading system to respond to your input as you would expect and predict.  And despite how different HDR is from traditional video, SDR and HDR cross conversions aren’t as hard as they seem either, especially when you’re using prepared LUTs specifically designed for that process.


Mastering in HDR

When it comes to picking an appropriate master or intermediate codec for HDR video files, the simplest solution would always be to pick an uncompressed format with an appropriate per-channel bit depth.  Other than the massive file size considerations (especially when dealing with 4K+ video), there are a few cautions here.  

First, for most of the codecs available today that use chroma subsampling, the transfer matrix that converts from RGB to YCbCr is the BT.709 transfer matrix, and not the newer BT.2020 transfer matrix that should be used with the BT.2020 color space.  This isn’t a problem per se, and actually benefits out-of-date decoders that don’t honor the BT.2020 transfer matrix, even with the proper metadata.  It’s also possible to use the BT.2020 transfer matrix and improperly flag the matrix used when working with a transcoding application that requires manual flagging instead of metadata flagging.  At its very worst, it can create a very small amount of color shifting on decode.

A slightly more concerning consideration, however, is the availability of high quality 12+ bit codecs for use in intermediate files.  Obviously any codec using only 8 bits per channel is out of the question for HDR masters or intermediates, since 10 bits are required by all HDR standards.  10 bit encoding is completely fine for mastering, and codecs like ProRes 422, DNxHR HQX/444, 10 bit DPX, or any of the many proprietary ‘uncompressed’ 10 bit formats you’ll find with most NLEs and color correction softwares should all work effectively.

However, if you’re considering which codecs to use as intermediates for HDR work, especially if you’re planning on an SDR down-grade from these intermediates, 12 bits per channel as a minimum is important.  I don’t want to get sidetracked into the math behind it, but just a straight cross conversion from PQ HDR into SDR loses about ½ bit of precision in data scaling, and another ¼ - ½ bit of precision in redistributing the values to the gamma 2.4 curve, leaving a little more than 1 bit of precision available for readjusting the contrast curve (these are not uniform values).  So, to end up with an error-free 10 bit master (say, for UHD broadcast) you need to encode 12 bits of precision into your HDR intermediate.
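
If you want to see the effect for yourself, here’s a back-of-the-envelope illustration, reusing the pq_to_nits function from the LUT sketch above.  It simplifies by hard clipping at 100 nits rather than tone mapping, so treat the numbers as directional:

```python
import numpy as np
# pq_to_nits and the PQ constants are defined in the LUT sketch above

codes10 = np.linspace(0, 1, 1024)       # every 10-bit PQ code value
nits = pq_to_nits(codes10)
sdr_region = nits <= 100                # the slice of the PQ range SDR can display

# Re-encode those same luminances as 10-bit gamma 2.4 code values
sdr_codes = np.round((nits[sdr_region] / 100) ** (1 / 2.4) * 1023)
print(sdr_region.sum(), "PQ codes ->", len(np.unique(sdr_codes)), "distinct SDR codes")
# Only about half of the 1024 SDR codes get hit - roughly a bit of precision
# gone before you've even touched the contrast curve.
```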

ProRes 4444 / 4444 (XQ), DNxHR 444, 12 bit DPX, Cineform RGB 12 bit, 16 bit TIFFs, or OpenEXR (Half Precision) are all suitable intermediate codecs,** though it’s important to double check all of your downstream applications to make sure that whichever you pick will work later.  Similarly, any of these codecs should be suitable for mastering, with the possibility of creating a cross converted grade from the master later.

I just want to note before anyone actually asks: intermediate and master files encapsulating HDR video are still reeditable after rendering - they can be assembled, cut, combined, etc just like regular video files.  You don’t need to be using an HDR display to do that either - they just look a little flatter on a regular display (except if you’re using HLG).  So long as you don’t pass them through a process that drops the precision of the encoded video, you should be fine to work with them in other applications as usual, though you may want to return to DaVinci to add the necessary metadata to whatever your final sequence ends up being.


Metadata

After you’ve made the master, it’s easy to assume you’re done.  But HDR specifications call for display referenced metadata during encoding of the final deliverable stream, so it’s actually important to record this metadata at the time of creation, if you aren’t handling the final encode yourself.  Unfortunately, currently none of the video file formats have a standardized place to record this metadata.

Your options are fairly limited; the simplest solution is to include a simple text file with a list of attribute:value pairs.

Text file containing necessary key : value pairs for an HDR master file that doesn't provide embedded metadata.

What metadata should you include?  It’s a good idea to include everything that you’d need to include in the standard VUI for HDR transmission:

  • Color Primaries
  • Transfer Matrix (for chroma subsampled video)
  • Transfer Characteristics
  • MaxCLL
  • MaxFALL
  • Master Display

When you’re creating distribution files, each of these values needs to be properly set to flag a stream as HDR video to the decoding display.  It’s possible to guess many of these (color space, transfer matrix, etc.) if you’ve been provided with a master file without metadata, but it’s much easier to record and provide this metadata at the time of creation, so that no matter how far down the line you come back to the master, none of the information is lost.
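
As an example, a sidecar for the kind of master we encode below might read as follows (these values mirror the encoding settings later in this post; yours will differ):

```
Color Primaries: BT.2020
Transfer Matrix: BT.2020nc
Transfer Characteristics: SMPTE ST.2084 (PQ)
MaxCLL: 1000 nits
MaxFALL: 180 nits
Master Display: G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(10000000,1)
```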


Distributing HDR

If you’ve made it this far through the HDR creation process, there should really only be one major question remaining: how do we encode HDR video in a way that consumers can see it?

First, the bad news.  There’s no standardization for HDR in digital cinema yet.  So if your intention is a theatrical HDR delivery, you’ll probably need to work with Dolby.  At the moment, they’re the only ones with actual installations that can display HDR, and they have specialists who will handle that step for you.  For most people, what we want to know is how to get an HDR capable television to display the video file properly.

This is where things get more tricky.

I don’t say that because it’s a necessarily complicated process, but only because there are no ‘drop in’ solutions generally available to do it (other than YouTube, very soon).

There are only three codecs that can, by specification, actually be used for distributing HDR video - HEVC, VP9, and AV1 (AV1 being the successor to VP9) - and within these, only specific operational modes support HDR.  Of the three, the only real option at the moment is HEVC, simply because HDR televisions support hardware based 10 bit HEVC decoding - it’s the same hardware decoder needed for the video stream of UHD broadcasts.

HEVC encoding support is still rather limited, and finding an application with an encoder that supports all of the optional features needed to encode HDR is still difficult.  Adobe Media Encoder, for instance, supports 10 bit HEVC rendering, but doesn’t allow for the embedding of VUI metadata, which means the file won’t trigger the right mode on end viewers’ televisions.

Unfortunately, there’s only one freely available encoder that gives you access to all of the options you need for HDR video encoding: x265 through FFmpeg.

If you’re not comfortable using FFmpeg through a command line, I seriously recommend downloading Hybrid (http://www.selur.de), which is one of the best, if not the best, FFmpeg frontend I’ve found.

Here are the settings that I typically use for encoding HEVC using FFmpeg for a file graded in SMPTE ST.2084 HDR using BT.2020 primaries on our BVM-X300, at a UHD resolution with a frame rate of 59.94fps:

Profile: Main 10
Tier: Main
Bit Depth: 10-bit
Encoding Mode: Average Bitrate (1-Pass)
Target Bitrate: 18,000 - 50,000 kbps
GOP: Closed
Primaries: BT.2020
Matrix: BT.2020nc
Transfer Characteristics: SMPTE ST.2084
MaxCLL: 1000 nits
MaxFALL: 180 nits
Master Display: G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(10000000,1)
Repeat Headers: True
Signaling: HRD, AUD, SEI Info

I’ve only listed the settings that are different from the default x265 settings, so let me run through what they do, and why I use these values.

First, x265 needs to output a 10-bit stream in order to be compliant with the UHD broadcast, SMPTE ST.2084, ARIB STD-B67, or HDR10 standards.  To trigger that mode, I set the Profile to Main 10 and the Bit Depth to 10-bit.  Unless you’re setting a really high bit rate, or using 8K video, you shouldn’t need a higher Tier than Main.

Next, I target 18 - 50 Mbps as an average bitrate, with a 1-pass encoding scheme.  If you can tolerate a little flexibility in the final bitrate, I prefer using this mode, simply because it balances render time with quality, without padding the final result.  If you need broadcast compliant UHD, you’ll need to drop the target bitrate from 18 to 15 Mbps, to leave enough headroom in the 20 Mbps available bandwidth for audio programs, closed captions, etc.

x265 Main Compression Settings for HDR Delivery (Using Hybrid FFMPEG front-end)

However, I’ve found that 15 Mbps does introduce some artifacts, in most cases, when using high frame rates such as 50 or 60p.  18 seems to be about the most that many television decoders can handle seamlessly, though individual manufacturers vary and it does depend significantly on the content you’re transmitting.  Between 30 and 50 Mbps you end up with a near-lossless encode, so if you happen to know the final display system can handle it, pushing the bitrate up can give you better results.  Above 50 Mbps, there are no perceptual benefits to raising the bitrate.

A closed GOP is useful for random seeks and to minimize the amount of memory used by the decoder.  By default, x265 uses a GOP of at most 250 frames, so reference frames can end up being stored for quite some time when using an open GOP; it’s better just to keep it closed.

x265 Frames Compression Settings for HDR Delivery (Using Hybrid FFMPEG front-end)

Next we add the necessary HDR metadata into the Video Usability Information (VUI).  This is the metadata required by HDR10, and records information about your mastering settings, including color space, which HDR EOTF you’re using, the MaxCLL of the encoded video, the MaxFALL of the encoded video (if you’ve kept your MaxFALL below your display’s peak, you can estimate this value using the display’s MaxFALL), and the SMPTE ST.2086 metadata that records the primaries, white point, and brightness range of the display itself.

x265 Video Usability Information Compression Settings for HDR Delivery (Using Hybrid FFMPEG front-end)

This metadata is embedded into the headers of the video stream itself, so even if you change containers the information will still be there.  To make sure that the metadata is stored at regular intervals, and to enable smoother random access to the video stream, the last step is to turn on the option for repeating the headers and to include HRD, AUD, and SEI Info.

x265 Stream Settings for HDR Delivery (Using Hybrid FFMPEG front-end)

The HEVC stream can be wrapped in either a .mp4 or a .ts container; both are valid MPEG containers and should work properly on HDR televisions.  Be aware that it can take a while to get your settings right on the encode; if you’re using Hybrid you may need to tweak some of the settings to get 10-bit HEVC to encode without crashing (I flag on “Prefer FFmpeg” and “Use gpu for decoding” to get it to run stably) - don’t leave testing to the last minute!
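
If you’d rather skip the frontend entirely, here’s a sketch of how I understand these settings map onto FFmpeg’s x265 interface - the file names are placeholders, and it’s worth verifying the parameters against your own build rather than trusting this verbatim:

```python
import subprocess

# The x265 parameter names below are the standard ones; the values mirror
# the settings listed above.
x265_params = ":".join([
    "colorprim=bt2020",          # Primaries: BT.2020
    "colormatrix=bt2020nc",      # Matrix: BT.2020 non-constant luminance
    "transfer=smpte2084",        # Transfer Characteristics: PQ
    "max-cll=1000,180",          # MaxCLL,MaxFALL in nits
    # ST.2086 mastering display: x,y in 0.00002 units, luminance in 0.0001 nits
    "master-display=G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(10000000,1)",
    "open-gop=0",                # closed GOP
    "repeat-headers=1",          # re-signal the VUI/SEI metadata periodically
    "hrd=1",                     # HRD signaling
    "aud=1",                     # access unit delimiters
])

subprocess.run([
    "ffmpeg", "-i", "hdr_master.mov",
    "-c:v", "libx265",
    "-profile:v", "main10",      # Profile: Main 10
    "-pix_fmt", "yuv420p10le",   # Bit Depth: 10-bit
    "-b:v", "18M",               # Target Bitrate: 18 Mbps average, 1-pass
    "-x265-params", x265_params,
    "hdr_delivery.mp4",
], check=True)
```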


Grading, mastering, and delivering HDR are the last pieces you need to understand to create excellent quality HDR video.  We hope that the information in this guide to HDR video will help you to be confident in working in this new and exciting video format.

HDR Video is the future of video.  It’s time to get comfortable with it, because it’s not going anywhere.  The sooner you get on board and start working with the medium, the more prepared you’ll be for the forthcoming time when HDR video becomes the de facto video standard.


Endnotes


*The rationale behind the technical requirements will become clear over the course of the article.  I would recommend that you look at the documentation for the application you use to make sure it meets the same minimum technical requirements as DaVinci Resolve when grading in HDR.  Most major color grading programs meet most or all of these technical criteria, and it’s always better to grade in the program you know than in the program you don’t.


However, if you are looking to pick a program right off the bat, I’d recommend DaVinci Resolve Studio, primarily since you can use the free version of Resolve to learn the application and toolset before ever having to spend a dime.


** You should always test that these codecs actually perform as expected with HDR in your workflow, even if you’ve used them for other applications in the past.  I’ve run into issues where certain applications decode these codecs in slightly different ways - differences that have little effect in SDR, but create larger shifts and stepping in HDR.

HDR Video Part 4: Shooting for HDR

To kick off our new weekly blog here on mysterybox.us, we’ve decided to publish five posts back-to-back on the subject of HDR video.  This is Part 4: Shooting for HDR.

The first HDR project I graded was a set of space shuttle launch shots, filmed on the RED ONE camera by NASA.  The footage wasn’t filmed with HDR in mind.  In fact, HDR wasn’t anything close to ‘a thing’: the shuttle last flew in 2011, and Dolby didn’t present their proposition of “Perceptual Signal Coding for More Efficient Usage of Bit Codes” (what we now call PQ) until 2012.  And yet despite the age of the footage and the lack of consideration for HDR when it was filmed, I still had no problem grading the footage into HDR space, and getting a pretty awesome image out of it.

HDR Grade from NASA Archive Footage, shot on Red One, circa 2011

I bring this up simply as a point of perspective; while I’m going to offer some suggestions here to help make your footage better in HDR, it’s important to realize that, in general, all footage is better in HDR, regardless of its age or how it was filmed.  That being said, there are things you can do while filming to best prepare for an HDR finish, which is what we’re going to discuss here.


The Kit

Choosing a digital camera today is like choosing film stock used to be - each one responds differently from the others, and can create slightly different looks.  This doesn’t change when you’re shooting HDR.  Beyond a certain point, your camera choice doesn’t matter; camera choice is a creative (or budgetary) decision.

But there are a few features that are important, if not essential, when planning an HDR shoot.  Think of these as the minimum level of kit needed.  I’m going to outline those first.  Then, there are niceties you can add on that will make your life a little easier, and I’ll outline those next.

First and foremost for HDR recording: never, ever, EVER shoot with a standard Rec 709 / BT.1886 / Gamma 2.4 contrast curve.

It’s possible to grade footage that uses one, but the results are pretty poor.  There’s too much clipping of the darks and whites, and the loss of detail kills you.  Linear is okay if it’s a high enough bit depth; a LOG format is better, but native RAW is really the best.  LOG and RAW will preserve more of the detail through the darks while retaining better roll-off into the whites, which makes HDR grading easier / possible.

When you’re shooting with HDR mastery in mind, use the highest bit depth (and bit rate) available.  If you’re using a camera that stores its footage in a compressed 8 bit format, you’re going to do yourself a world of disservice when it comes to grading in HDR.  The same reason that all HDR formats require 10 bits minimum applies to the camera - 8 bits causes stepping.  If you’ve shot in a LOG format, it’s possible to get away with using 8 bit sources, but you can’t push the footage as far as it’s able to go, and you’re very likely to see stepping in the whites.

If an 8 bit camera is your only option, and you still want / need HDR, consider using an external ProRes, DNxHR, or other high bitrate, intraframe, 10+ bit per channel recorder.  You’ll save yourself a world of hurt in post.

Of course, the ideal format to shoot in is a camera RAW format, like Cinema DNG, RED RAW, ARRI RAW, Sony RAW, Phantom RAW, etc.  You’ll love yourself in HDR post for using RAW, even if you typically prefer the turnaround speed offered by a ProRes or DNxHR workflow.  Here’s why:

  1. Most ProRes and DNxHR workflows normalize the RAW footage into a LOG format, which is fine, but they collapse the bit depth range used to 10 bits.  With RAWs you typically have access to the full 12, 14 or 16 bits offered by the sensor!  Your grading application typically uses even higher bit depth internals for color processing, so even if you’re only grading in 10 bit at the display, having the extra 2, 4, or 6 bits per channel, per pixel, is a major advantage in grading latitude, something I can’t emphasize enough as being important to HDR.
     
  2. ProRes and DNxHR workflows typically normalize the camera primaries into Rec. 709 color space.  Most professional cameras use primaries that are wider than Rec. 709, but the video signals they output are typically conformed to Rec. 709 for ‘no brains’ compatibility - that is, it’ll just work.  While this isn’t a problem per se for HDR, it does restrict the volume of colors available for grading, and will require a LUT or manual shifting of the primaries in grading to match HDR’s BT.2020 or DCI-P3 space.

    Typically, RAW formats record their data using the camera native RGB values, and your RAW interpreter in your color grading program can then renormalize them to whatever target space you’re looking for.  Since BT.2020 is the widest of all display spaces, you’ll be able to better reproduce what your camera is already capturing.
     
  3. RAW formats often provide highlight and lowlight recovery not available in fixed video formats, even when using LOG or linear recording.  The RAW formats here give you access to as much information as the sensor actually recorded, which is invaluable in post.  Because of the extended dynamic range of the HDR environment, you’ll want as much of the highlights as you possibly can get, and may even at times push further into the noise floor, because the noise is less perceptible in the deep HDR darks.

If you’re shooting at the professional level, and using professional cinematography equipment, what you already have is probably okay for shooting in HDR.  Cameras like the RED Epic or Weapon (or pretty well all of RED’s cameras because of their RAW format), Sony’s F55 or F65, Arri’s Alexa, or any camera in the same class are perfect.  As are most film sources (S35mm or greater, for resolution), when captured with a high bit depth scanner.  Using any of these, you’ll be well suited for HDR, assuming you follow the format’s best shooting practices, which I’ll discuss in the technique section below.

If you’re using a prosumer or entry level professional camera, taking a few preparatory steps to set up how you’ll actually be capturing the image can mean the difference between getting footage that can be used in HDR mastering, vs. footage that can’t.

So in summary, when choosing your kit for HDR, consider:

  1. 16 bpc > 14 bpc > 12 bpc > 10 bpc > 8 bpc: 10 bpc should be your minimum spec.
  2. RAW > LOG > Linear > Gamma 2.4: avoid baking in your gamma at all costs!
  3. Camera Native / BT.2020 > DCI-P3 > Rec. 709 color primaries
  4. RAW > Compressed RAW > Intraframe compression (ProRes, DNxHR, AVC/HEVC Intra) > Interframe compressed (AVC/H.264, HEVC).

As a quick side note, some cameras offer a SMPTE ST.2084 or other HDR signal out of the camera for use with an external recorder.  These are a useful alternative to recording externally in a LOG or gamma format - they can lead to faster turnaround times, but they commit you to an HDR grade (or a dedicated step out of HDR), versus footage that’s ready to be graded in HDR with the option of grading normally.


The Technique

First things first, some general, good advice: take some time learning how your camera and lenses respond to various lighting situations.  How is its roll-off into the highs?  What’s its noise level in the darks?  How does the color response change in different exposure levels?  While this is generally good practice, it’s the kind of forethought you really need in planning an HDR shoot.

When shooting for HDR mastery, you may find that you’ll need to modify your typical shooting technique.  There are three things that are important above all others: protecting your highlights, protecting your darks, and planning the expansion of the dynamic range of your scene.


Protecting Your Highlights

Most of us are rightfully excited about the creative possibilities that come with increased brightnesses at the display, and the expanded range of highlight detail that comes with it.  The catch is that there are some things that used to work well that, frankly, now look like shit.

Clipping.  May you rot in hell.

The large area to the right of the sun has no detail retention in the RAW.

All sensors clip at some level of exposure.  Film does too.  It’s unavoidable.  The goal for HDR shooting is to expose your whites to eliminate clipping in the RAW data when possible, and to minimize it when it’s not.

Unlike traditional mastering workflows, where images clipped to white are simple to correct (set the clipped area to true white), clipping in HDR becomes problematic very quickly.  In HDR there is no longer such a thing as “true white”.  Instead, in the grading process (which we’ll discuss in Part 5), we make a creative decision about how bright white should be, and how to roll into it.  That roll into whatever white you pick is essential to tricking the eye to believe whatever you’ve picked to be white is, in fact, white.

The same shot can be graded with different white points in HDR, depending on the goals of the cinematographer & colorist.  Both of these grades work with the snow reading as white; the lighter image feels brighter, while the darker image feels more oppressive and foreboding

The human visual system perceives any object or region within a scene as a shade of white (that is, not as a shade of grey, but as varying intensities of white) so long as three conditions are met:

  1. The brightness level is above a specific threshold relative to the rest of the scene, which is usually around 100 nits
  2. The chromatic characteristics are relatively balanced (that is, low saturation)
  3. The area is not completely uniform in brightness level and juxtaposed with a scene (or part of the same frame) with a brighter or more natural roll into the whites

When talking about clipping, it’s that third condition that ends up being a problem.  Clipped footage typically has large swaths of ‘white’ with an abrupt transition into the patch (once the rest of the footage is graded to a normalized brightness level).

Gentle rolls into clipped white areas appear more natural than abrupt transitions

Deciding what brightness level to place this ‘white’ at becomes problematic for a couple of reasons.

First, you have to limit the brightness of the white patch with respect to the rest of the scene - if it’s too much brighter than everything else (say, everything is under 100 nits and you put the patch at 1000 nits), without roll into the whites you have an obnoxiously bright patch that dominates and overwhelms the rest of the scene.

Second, because these patches typically have a large area, that is, make up a significant portion of the pixels used on the screen, they end up skewing the distribution of brightnesses when calculating the MaxFALL, meaning that everything else in the scene has to be significantly darker than you might like, or you have to bring down the brightness of the white to bring up the brightness of everything else.

The overall brightness around the sun limits the overall peak image brightness due to MaxFALL.  For contrast, I've included both the direct SDR down grade (roll into white between 200 and 500 nits), and the same with the white point restored to full

Third, with the first two effects limiting the overall brightness of the uniform patch, it’s likely to appear grey when cut together with footage that has proper roll into the whites, since that footage is likely to have parts that are much brighter than whatever white you’re able to use for this clipped value.  The overall effect: grossness that pulls you out of the ‘magic’ that HDR creates.

In this sequence, the peak available white point of the middle shot is lower than the two shots that surround it, due to MaxFALL.  In the final grade, the first and third shots were graded with lower peak whites to match

Stop down or use ND: protect your highlights and avoid clipping like the plague.

Some parts of your image, like the sun or bright lights for instance, may clip and that’s okay, so long as they don’t dominate your scene.  You can typically roll into these whites much more subtly than larger clipped areas.

Not all clipping is unnatural, even in HDR

White, puffy clouds also tend to want to clip on most cameras, but don’t let them, if possible.  Because of how frequently most people see clouds, and the details in them, you need to preserve as much of that detail as you can, or risk your viewers looking into the clouds and being jarred by the bright uniform shapes that come with clipping, versus the gentle gradients that come with their more rounded textures.

In HDR the contrast in clouds is much more significant than in SDR, and the clipping in the clouds hurts the realism of the scene

Coupled with this is the idea that you can’t assume that you’ll get to hide things on the other side of bright windows.  If your camera retains any detail through a window or doorway, it’ll probably be visible.  If you’re hiding crew or equipment through a blown out window, you’ll need to be doubly sure that the window will, in fact, be blown out in HDR.  (The same, by the way, is true for your darks - don’t assume they’ll be crushed out.  More on that in a second.)

If your monitor out from your camera allows for separate colorimetry from your recorded image signal, you may want to switch it to a LOG curve out so that you can see on your field display or eyepiece where the scene is clipping, if at all, and what details are visible in the brights.


Protecting Your Darks

While the brights tend to get the love when talking about HDR, personally, I love what HDR does for the darks.

Just like with the whites in the image, we have to get rid of the concept of ‘true black’ when discussing HDR.  Instead, we have a range of blacks, just like we have a range of whites.  Two of the three conditions we discussed above describing how the brain perceives whites are true for blacks as well: they need to be below a certain value threshold, and they can’t be large uniform areas juxtaposed with darker regions.  The only difference is that below the brightness threshold the brain typically stops perceiving chromatic value anyway: saturation doesn’t matter (unless you’re trying to supersaturate your darks?)

Just like with our whites, eventually sensors clip to black.  In most video signals, this will be a hard clip, but in many RAW formats (especially those that offer ISO adjustments in post production), the blacks are typically recoverable into the noise floor of the camera.

If you’re planning on using a PQ HDR mastery workflow, you’ll need to assume that most of these darks are in fact visible, beyond what you’d normally consider available.  Which means you may need to be concerned about overall exposure level for the detail in the darks - you can’t necessarily hide equipment there, and need to make sure your production design moves deep into the visible darks.

Details often lost in darks in SDR are often visible in HDR

Even worse - or better, depending on whether you’ve planned for it - even after the image is properly graded, areas that appear black in the full HDR grade can ‘open up’ to the eye just by obscuring the brightest regions of the image with your hand, just like how blocking the light from a spotlight pointed at you allows you to see behind the light source.

Simulated images showing details visible in the darks when you block the lights in HDR.  This does not happen in SDR.

The good news is that noise is far less perceivable in the darkest depths of HDR than when the footage is normalized into SDR, largely because of the lack of saturation and our vision’s greater degree of tolerance to luminance noise than to chromatic noise.  So while it’s important to keep important details above the noise floor, it’s not as essential as protecting against clipping.

The same rule applies to the darks: open up or increase your ISO to avoid clipping them.


Planning the Scene in HDR

You may have noticed the two pieces of contradictory advice from the last two considerations: stop down to protect your highlights, and open up to protect your darks.  A paradox.  Something has to give - so how do you plan for that?

Don’t worry: planning your scene for HDR is actually even more complicated than that.

When you shoot for HDR, you can’t assume that every consumer display will be HDR.  So you need to consider how the darks and lights will play in both HDR and SDR.  With the whites, it’s relatively simple to adjust your roll into white, or your clip, so that it plays well in SDR, but crushing the blacks isn’t always the best option.  Creatively, you may want to highlight action or detail in the darks in a way that will be lost with a simple crush.

Crushing the darks in SDR maintains the mood of the HDR image, but at the expense of detail retention

A solution, of course, would be to bring up part of the darks during post, which increases the visible noise in SDR and may require a clipping or flattening of the whites to maintain the contrast and detail across the scene.

Noise is more perceptible in SDR darks than in HDR darks

Alternatively, you can adjust your lighting to bring up the darks and compress the range, then re-expand the range while color grading in HDR space.  So long as you’re shooting in a RAW format and capturing 12+ bits / channel, you won’t see stepping with this technique, since your mid gradients on a log curve are allocated sufficient bits that expansion is possible.

Another thing to consider when planning the scene is the MaxFALL limitation of HDR mastering.  The overall dynamic range of the scene needs to be planned so that the super bright / HDR elements are restricted to a small portion of the overall frame, so as to not push up the frame average light level.  Shooting interiors with a few bright windows or patches of direct sunlight tends to be fine; larger bay windows with cloudy or limited outdoor light also work, so long as the external ambient light isn’t too high (dusk / dawn, not noonday sun).
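
For reference, here’s roughly how these light levels are defined - a sketch assuming you already have decoded, linear-light pixel values in nits:

```python
import numpy as np

def frame_light_levels(rgb_nits):
    """rgb_nits: an (H, W, 3) array of linear-light pixel values in nits."""
    per_pixel_max = rgb_nits.max(axis=2)              # brightest channel per pixel
    return per_pixel_max.max(), per_pixel_max.mean()  # (frame CLL, frame ALL)

# MaxCLL is the maximum per-frame content light level across the whole program;
# MaxFALL is the maximum per-frame average across the whole program.
# A big patch of sky raises the frame average - and therefore MaxFALL -
# even when no single pixel is especially bright.
```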

Both of these shots were done in the same space, about a year apart.  The time of day plays an important role to how much the windows affect the MaxFALL of the scene, with the blown out windows limiting overall brightness.

Particularly problematic are blue skies.  Why?  Because blue skies often take up a much bigger part of a frame than you expect, and contribute more to the MaxFALL since our eyes perceive blue values of similar absolute brightness as darker than those of other colors.  What we see as mid range blues can suddenly push up MaxFALL and limit your overall scene brightness while still looking ‘normal’ or ‘average’ to the eye.  Exposing for blue skies often means keeping the blueness in the traditional light level range, which can leave the rest of your brights muddied (especially when shooting into the sun).

The amount of the image taken up by the blue sky limited the overall MaxFALL of this image.  The result: in HDR, the sky never felt 'bright' like the trees or the tower.

Essentially, when designing your scene for HDR you need to plan the bulk of each frame to land below the traditional film standard light levels, so as to not push up your MaxFALL / average light level.  Of particular concern here is planning your edits for HDR - small patches of direct sun in a darker scene are fine, until you move in for the close up and that small patch behind the actor’s face dominates part of the scene.

In this wide and close pair, the wide shot is only limited by the available peak brightness of the display, while the close up is limited by the MaxFALL

While as an individual shot it’s fine to limit the MaxCLL / peak light level of a close up’s bright patch, when you’re cutting between two shots you’ll need to adjust the wider shot’s MaxCLL to match the MaxCLL permitted by the close up’s MaxFALL.

Or, in plain English, you’ll be limited on the maximum brightness in the wide, because the maximum brightness of the close-up will be more limited if the brighter areas take up more of the frame.  If you’re looking to push the 1000 nit limit of current HDR displays for creative reasons, your scene blocking needs to take into account the average brightness of the close up: plan on minimizing bright areas around the talent or inserts to keep the bright patches bright across a sequence.

Otherwise the shifting brightness levels can be much more visible and leave a ‘greying’ feeling in the more restricted close ups (which the eye would normally perceive as white, except in contrast with something whiter).

Because of the expanded darkness range of HDR, you can design much more ‘dark and moody’ lighting setups than you normally would for standard film or video exhibition.  Whole detail-filled scenes can play out at levels under 30 nits!  However, be aware that this is a bad idea if you’re intending your work to end up on consumer displays.  In a darkened reference environment, our eyes adjust to the lower light levels and we see deeper into the darks.  But in a consumer’s home, where the ambient light level around the display may be higher than cinema or grading spaces, the viewer’s adjustment may be limited.

You can, however, still allow scenes to play much darker than typically available in television exhibition, keeping your maximum brightness below 80 nits.  This can be used with great effect when cutting between darker average to lighter average scenes: it’s in this contrast that HDR really pops.

In HDR darker footage can be cut in with brighter footage without the details in the blacks feeling milky, or the drop in brightness being jarring.


RED HDRx

If you don’t shoot using the RED ecosystem, ignore this section (seriously). But if you do, all this talk of shooting in HDR may make you tempted to shoot using RED’s HDRx.  I’m not saying this is a bad idea, but I am saying this is difficult to execute.

The real problem here is getting the HDR grade right using the HDRx footage.  We shot with it once, and our takeaway has been: only shoot it if you absolutely, certainly, without a doubt, need it.  Which, in this case, means an increase in the recorded dynamic range of the scene (or scene elements).

RED HDRx Blend in HDR and SDR

The reason not to use it comes down to grading.  Blending the two separately exposed elements is fine in REDCINE, but you’re going to run into difficulties grading HDR in REDCINE, simply because of the limited grading toolset.  When you grade in DaVinci, you run into severe performance issues using the API blending tool in the RAW decoding.  DaVinci’s split input tool is better, but you still run into problems compressing the larger dynamic range and maintaining the overall look of HDR video.

In the end, the most efficient (inefficient) workflow was actually to grade the shot twice in HDR - once with the standard exposure to grade the darks, and a second time with the blended exposure to grade the lights.  Then, both shots need to be passed through a compositing program like After Effects to selectively decide which contrast you want for which parts of the image - far more like traditional HDR photography than HDR video.

Dark and Light Plates in HDR and SDR with Final Image Blend

You can get great results this way, but it’s way, way more involved.

 

A Grain of Salt

Cameras, settings, best practices, planning.  Here’s the caveat: take all this advice with a grain of salt, not as a set of hard and fast rules.

Going back to the story that I opened with, even footage never planned to be shown in HDR can give excellent results.  Comparing the SDR version of the shuttle launch footage to the HDR grade, the HDR looks better.  The darks are darker while preserving all the details, and the range is higher.  This is, of course, an ideal case since high quality RAWs were available; the same is true for film sources when negatives are available.

We’ve done a lot of HDR regrading of our back catalog of footage, and I haven’t found a single shot that looks worse in HDR than SDR (even when ignoring the benefits of BT.2020 and 10 bit displays).

 

But even when you’re limited to just 8 bit log or standard gamma footage, you can often find more detail within a scene when grading in HDR than is perceptible in SDR.  You’ll want to be far more cautious with how far you push the footage, but you’ll still be able to get good results.

Detail recovery is often possible when grading from sufficiently high quality SDR graded sources



Generally, if you are already following best practices for digital cinematography, and if you spend a little bit of time reviewing HDR grades of your existing footage with a colorist, you’ll quickly get a feel for how the HDR space works and what you can do with it, and that’s when you can unleash your own creative potential.


But once it’s shot and edited, what happens next?  Grading, mastering, and delivering in HDR is our next topic, so stay tuned for Part 5.

HDR Video Part 3: HDR Video Terms Explained

To kick off our new weekly blog here on mysterybox.us, we’ve decided to publish five posts back-to-back on the subject of HDR video.  This is Part 3: HDR Video Terms Explained.

In HDR Video Part 1 we explored what HDR video is, and what makes it different from traditional video.  In Part 2, we looked at the hardware you need to view HDR video in a professional environment.  Since every new technology comes with a new set of vocabulary, here in Part 3, we’re going to look at all of the new terms that you’ll need to know when working with HDR video.  These fall into three main categories: key terms, standards, and metadata.


Key Terms

HDR / HDR Video - High Dynamic Range Video - Any video signal or recording using one of the new transfer functions (PQ or HLG) to capture, transmit, or display a dynamic range greater than the traditional CRT gamma or BT.1886 Gamma 2.4 transfer functions at 100-120 nits reference.

The term can also be used as a compatibility indicator, to describe any camera capable of capturing and recording a signal this way, or a display that either exhibits the extended dynamic range natively or is capable of automatically detecting an HDR video signal and renormalizing the footage for its more limited or traditional range.


SDR / SDR Video - Standard Dynamic Range Video - Any video signal or recording using the traditional transfer functions to capture, transmit, or display a dynamic range limited to the traditional CRT gamma or BT.1886 Gamma 2.4 transfer functions at 100-120 nits reference. SDR video is fully compatible with all pre-existing video technologies.


nit - A unit of brightness density, or luminance. It’s the colloquial term for the SI unit of candelas per square meter (1 nit = 1 cd/m2). It converts directly with the United States customary unit of foot-lamberts (1 fl = 1/π cd/foot2), with 1 fl = 3.426 nits = 3.426 cd/m2.

Note that the peak nits / foot-lamberts value of a projector is often lower than that of a display, even in HDR video: because a projected image covers more area and is viewed in a darker environment than consumers’ homes, the same psychological and physiological responses exist at lower light levels.

For instance, a typical digital cinema screen will have a maximum brightness of 14 fl or 48 cd/m2, vs. the display average of 80-120 nits for reference monitors and 300 for LCDs and plasmas in the home. HDR cinema light output ranges are adjusted accordingly, since 1000 cd/m2 on a theater’s 30 foot screen is perceived to be far brighter than on a 65” flat screen.
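
A trivial conversion helper, using the cinema screen figure above as a sanity check:

```python
def fl_to_nits(fl):
    # 1 foot-lambert = (1/pi) cd/ft^2 = 3.426 cd/m^2 (nits)
    return fl * 3.426

print(fl_to_nits(14))  # a 14 fl cinema screen ~= 48 nits
```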


EOTF - Electro-Optical Transfer Function - A mathematical equation or set of instructions that translates voltages or digital values into brightness values. It is the opposite of the Opto-Electrical Transfer Function, or OETF, which defines how to translate brightness levels into voltages or digital values.

Traditionally, the OETF and EOTF were incidental to the behavior of the cathode ray tube, which could be approximated by a 0-1 power curve with an exponent (gamma) of 2.4. Now they are defined values like “Linear”, “Gamma 2.4”, or any of the various LOG formats. OETFs are used at the acquisition end of the video pipeline (by the camera) to convert brightness values into voltages/digital values, and EOTFs are used by displays to translate voltages/digital values into brightness values for each pixel.


PQ - Perceptual Quantization - Name of the EOTF curve developed by Dolby and standardized in SMPTE ST.2084, designed to allocate bits as efficiently as possible with respect to how human vision perceives changes in light levels.

Perceptual Quantization (PQ) Electro-Optical Transfer Function (EOTF) with Gamma 2.4 Reference

Dolby’s tests charted the Barten Threshold (also called the Barten Limit or the Barten Ramp): the point at which the difference in light levels between two values becomes visible.

PQ is designed so that, when operating at 12 bits per channel, the stepping between adjacent digital values is always below the Barten threshold for the whole range from 0.0001 to 10,000 nits, without being so far below that threshold that the resolution between bits is wasted. At 10 bits per channel, the PQ function sits just slightly above the Barten threshold, where in some (idealized) circumstances stepping may be visible, but in most cases should be unnoticeable.

Barten Thresholds for 10 bit and 12 bit Rec. 1886 and PQ curves.  Source

For comparison, current log formats waste bits on the low end (making them suitable for acquisition to preserve details in the darks, but not transmission and exhibition), while the current standard gamma functions waste bits on the high end, while creating stepping in the darks.

HDR systems using PQ curves are not directly backwards compatible with standard dynamic range video.


HLG - Hybrid Log Gamma - A competing EOTF curve to PQ / SMPTE ST.2084 designed by the BBC and NHK to preserve a small amount of backwards compatibility.

Hybrid Log Gamma (HLG) Electro-Optical Transfer Function (EOTF) with Gamma 2.4 Reference

HLG vs. SDR gamma curve with and without knees.  Source


The first 50% of the curve follows the output light levels of standard Gamma 2.4, while the top 50% diverges steeply along a log curve, covering the brightness range from about 100 to 5000 nits. As with PQ, 10 bits per channel is the minimum permitted.
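
The two halves are easy to see in the OETF itself; here’s a sketch using the constants from ARIB STD-B67 / BT.2100:

```python
import math

# HLG OETF constants (ARIB STD-B67 / BT.2100)
a = 0.17883277
b = 1 - 4 * a
c = 0.5 - a * math.log(4 * a)

def hlg_oetf(e):
    """Scene-linear light (0-1) -> HLG signal value (0-1)."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)          # lower half: square root (gamma-like)
    return a * math.log(12 * e - b) + c  # upper half: logarithmic

# Half signal is reached at just 1/12 of peak scene light - everything
# brighter is squeezed into the top half of the curve.
print(hlg_oetf(1 / 12), hlg_oetf(1.0))   # -> 0.5, ~1.0
```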

HLG does not expand the range of the darks like the PQ curve does, and as an unfortunate side effect of the backwards compatibility, coupled with the MaxFALL necessitated by the technology of HDR displays, whites can appear grey when viewed in standard gamma 2.4, especially when compared to footage natively graded in gamma 2.4.


Standards

SMPTE ST.2084 - The first official standardization of an HDR video transfer function by a standards body, and at the moment (October 2016) the most widely implemented. SMPTE ST.2084 officially defines the PQ EOTF curve for translating a set of 10 bit or 12 bit per channel digital values into a brightness range of 0.0001 to 10,000 nits. SMPTE ST.2084 provides the basis for the HDR10 Media Profile and Dolby Vision implementation standards.

This is the transfer function to select in HEVC encoding to signal a PQ HDR curve.


ARIB STD-B67 - Standardized implementation of Hybrid Log Gamma by the Association of Radio Industries and Businesses. Defines the use of the HLG curve, with 10 or 12 bits per channel color and the same color primaries as BT.2020 color space.

This is the transfer function to select in HEVC encoding to signal an HLG HDR curve.


ITU-R BT.2100 - ITU-R Recommendation BT.2100 - The ITU’s standardization of HDR for television broadcast. Ratified in 2016, this document is the HDR equivalent of ITU-R Recommendation BT.2020 (Rec. 2020 / BT.2020). When compared with BT.2020, BT.2100 includes the FHD (1920x1080) frame size in addition to the 4K and 8K UHD frame sizes.