Adobe Premiere CC 2017 - Real World Feature Review

About two weeks ago Adobe released their 2017 update to Creative Cloud, and because of a couple of projects that I happened to be working on at the time, I figured I’d download it immediately to see if I could take advantage of some of the new features.

If you want the TL;DR review, the short version is this: most of the new features offer genuine improvements, ranging from incredibly useful to just minor time savers; a few, though, are utter crap.

Side note: I considered talking about the new features found in Adobe After Effects, but really, there’s not much to say other than: they work.  Largely they’re performance increases accomplished by moving things to the GPU, broader native format support, time-saving templates, and better integration with a few other Adobe CC products.  If you look at their new features page, you should be able to pretty quickly figure out which ones could be important to you.

Premiere is a different animal though, and I can’t say that all of the new features work properly.  But let’s start with the positives, of which there are many.

First and foremost, 8K native R3D imports.

This was expected, and necessary.  And while not ‘featured’ as part of their summaries, it is there and it works.  That’s a boon to all of us shooting on Helium sensors, and to our clients.  So far we’ve been running 8K ProRes or 2K proxies for our clients so they could edit with our footage; now they can take care of mastering with the 8K themselves (if they want).  So definitely a plus.

Second, the new native compression engine supporting DNxHD and DNxHR.

To me, this is a big plus.  I keep looking for a solid alternative to ProRes for my workflows, and while they don’t yet support DNxHR 444, they do solidly support DNxHR HQX.  Since a significant portion of my usual workflows are built on 12 bits per channel and roundtripping between Adobe and DaVinci, having a solid 12 bit 4:2:2 cross-platform alternative to ProRes may finally let me get rid of DPX.

Third, the new audio tools.  Oh, thank god, the new audio tools.

I happen to be working this week on a short project doing sound design and light mixing (I’ll link to it when it’s up) and the new audio tools in Premiere have been a massive time saver.  If you’ve ever tried to do audio work directly in Premiere before, you’ll know how maddening it’s been dealing with their unresponsive skeuomorphic effect control knobs.  Even doing basic EQ meant flagging values on and off and struggling to get things as precise as you wanted.

Adobe Premiere CC 2015.3 EQ

Adobe Premiere CC 2015.3 Pitch Shifter

But the new audio UX is… well, fantastic.  I really can’t praise it enough.  The effect controls are still skeuomorphic (which I actually think is important in this case) but look classier, and more importantly they actually respond really quickly to the changes you want to make.  They’ve expanded the toolset and the effects run more quickly.  I couldn’t be happier - this alone saved me hours of frustration and headaches this week.

Adobe Premiere CC 2017 EQ

Adobe Premiere CC 2017 Pitch Shifter

Fourth, the new VR tools.

The same project I was doing sound design on happens to be a stereoscopic VR project, so the promise of new VR tools was immediately exciting - what more would they let me do, I wondered?

Install, fire it up, and… not much, actually.

Here’s basically all of the new VR tools I could find:

  • Automatically detect the VR properties of imported footage, but only if they were properly flagged with metadata (marginally useful at best)

  • Automatically assign VR properties to sequences if you create a new sequence based on properly flagged VR footage.

  • Manually assign VR properties to sequences, allowing you to flag stereoscopic (and the type of 3D used, if any). The sequence flagging allows Premiere to automatically flag for VR on export, when supported.

  • Embed VR metadata into mp4 files in the H264 encoder module, instead of just QuickTime.

  • Connect seamlessly to an Oculus or other VR headset with full 360 / 3D output.

Is this 2015.3 or 2017?


And that’s… it.  Really?  I mean, there is actually no difference between the viewers in 2015.3 and 2017; both handle stereoscopic properly.  Assigning the VR flags to sequences and then embedding the necessary metadata on export is VERY useful.  But I would really LOVE to see an editor trying to edit with a VR headset.  Or color correct, for that matter.  It’s fine for reviewing what you’ve got, but not for the bulk of what you’re doing.

I should note that Premiere chokes on stereoscopic VR files at resolutions greater than 3K by 3K, which makes mastering footage from the GoPro Odyssey interesting, since it comes back from the Google Jump VR system as 8K by 8K mp4s.  Even converting to a full ProRes 422 intermediate at 4K by 4K proved too data heavy for Premiere to keep up with on an 8 Core MacPro.

But it’s not only VR performance that’s an issue: it’s still missing a whole bunch of features that would really make it a useful VR tool.  Where are my VR aware transitions?  What about VR specific effects, like simple reframing?  Where is my VR support in After Effects?  Why can’t I manually flag footage as VR if it didn’t have the embedded metadata?  What about recognizing projections other than equirectangular?  They have a drop down for changing projection type on a timeline, but equirectangular is the only option.  What about native ambisonic audio support? Or even flagging for ambisonic audio on export?

Don’t get me wrong, what they’ve done isn’t bad; it does work, and is an improvement.  It’s just that the tools they added are very tiny improvements on what was already there.  And I know there are plugins (I use them) that give Premiere and After Effects many of the VR features that I need to actually work in VR.  But it’s really difficult, almost impossible, to get by without those 3rd party plugins.

Maybe I’m just jaded and judgmental, in part because of my reaction to the HDR 'tools' they announced, but when you advertise “New VR Support” as the second item on the new features list, it had better be good support.  Like, you know, letting you actually work as well in VR as you can in standard 2D video.  If I, as a professional, need third party plugins to make your program work at the most basic level, it’s not the turnkey solution you advertise.  I’m sure more tools are in the works, but for now, it feels lackluster - an engineering afterthought rather than an intelligent feature designed for professionals.


But don’t worry, that’s not their most useless feature change.  Let’s talk about their new HDR tools.

What. The. Hell.

This is how using the new HDR 'tools' in Premiere 2017 feels.

I mean that.  With all of my heart.

I might be a little biased on the subject, but honestly I question who in their right mind decided that what they included was actually something useful.

It’s not.

It’s utter shit.

But worse than that, as-is it’s more likely to hurt the broader adoption of HDR than to help it.

And no, I’m not exaggerating.


On paper, the new HDR tools sound amazing.  HDR metadata on export!  HDR grading tools!  HDR Scopes!  Full recognition of HDR files!  Yay!

In practice, all of these are useless.

Let me give you a rundown of what the new HDR tools actually do.

Premiere now recognizes SMPTE ST.2084 HDR files, which is awesome.  But only if the proper metadata is already embedded in the video stream, and then only if it’s an HEVC deliverable file.  Not a ProRes, DPX, or other intermediate file; only HEVC.  And like VR support above, there’s no way to flag footage as already being in HDR or using BT.2020 color primaries.  Which ends up being a massive problem, which I’ll get to in a minute.

When you insert properly flagged HDR footage into a sequence, you get an unpleasant surprise: hard clipping at 120 nits on your viewer or connected external display.  It’s honestly the worst clipping I’ve seen.  And there’s no way to turn it off.  If you export the clip to any format without the HDR metadata flag enabled, you get the same hard clipping.  And since you can only flag for HDR if you’re exporting to HEVC, you can’t export HDR graded or processed through Premiere as DPX, ProRes, TIFF, OpenEXR, or any other intermediate format.

This is why in my article on Grading and Mastering HDR I mention that it’s really important to be using a color space agnostic color grading system.  When the application includes color management that can’t be disabled, your options become very limited.

Also, side note, their HEVC encoder needs work - it’s very slow at the 10 bits you need for HDR export.  I expect it’s better on the Intel Kaby Lake chips that include hardware 10 bit HEVC encoding support which, oh wait, don’t exist for professionals yet (2017 5K iMac maybe?).

But at least with the metadata flagging you can bypass the FFMPEG / x265 encoder that you’ve needed up to this point to properly encode HDR for delivery, right?

Why would you think that?  Of course you can’t.

Because if you bring in a ProRes, DPX, or other intermediate file into Premiere, there’s no way to flag it as HDR and it doesn’t recognize embedded metadata saying it’s HDR like DaVinci and YouTube do.  What happens is that if you use these intermediates as a source (individually or assembled in a sequence) and you flag for HDR on export, Premiere runs a transform on the footage that scales it into the HDR range as if it’s SDR footage.

12 Bit ProRes 4444 HDR Intermediate in Timeline with 8 Bit Scope showing proper range of values

12 Bit ProRes 4444 HDR Intermediate in Timeline with HDR Scope showing how Premiere CC 2017 interprets the intermediate if you flag for HDR on export

When is that useful? If I have a graded SDR sequence that I want to encode into the PQ HDR space, while keeping 100% of the limits of an SDR image.  Because why the hell not.

But never fear!  Premiere has included new color grading tools for HDR!

Well, they aren’t horrible, which I suppose is a compliment?

How to enable HDR Grading in Premiere 2017

To enable HDR grading you need to change three different settings.  From the Lumetri context menu in your Lumetri Panel, you need to select “High Dynamic Range” to enable the HDR features; on the scopes you’ll need to switch the scale from “8 Bit” to “HDR” (and select BT.2020 from the scope settings); and if you actually want to see those HDR values on the scope, you’ll need to enable the “Maximum Bit Depth” flag in your Sequence Settings.  I’m sure there’s a fantastic engineering explanation for that last one, but it’s not exactly intuitive or obvious, and it took me a bit of hunting to figure out.

Maximum Bit Depth needs to be turned on in Sequence Settings to enable proper HDR Scopes

HDR Scopes WITHOUT Maximum Bit Depth Flag

HDR Scopes WITH Maximum Bit Depth Flag

Once you’ve enabled HDR grading from the Lumetri drop down menu, you’ll get a few new options in your grading panels.  “HDR White” and “HDR Specular” become available under the Basic Correction panel, “HDR Range” becomes available under the Curves panel, and “HDR Specular” becomes available under the Color Wheels panel.

The HDR White setting seems to control how the other sliders of the Basic Correction panel behave, almost like changing the scale.  The higher the HDR White value, the less effect exposure adjustments have and the greater the effect of contrast adjustments.  The HDR Specular slider controls just the brightest whites, almost like the LOG adjustment I use in DaVinci Resolve Studio.  This applies to both the slider under Basic Correction, and the wheel under the Color Wheels panel.  HDR Range seems to change the scale of the curves similar to how the HDR White slider does for the basic corrections.

All of this, by the way, I figured from watching the scopes, and not the output image.  I’ve tried hooking up a second display to the computer and hooking up our BVM-X300 through our Ultrastudio 4K to Premiere, but to no avail - the output image is always clipped to standard video levels and output in gamma 2.4.

Which, if you ask me, severely defeats the purpose of having HDR grading tools to begin with. Here’s a great idea: let’s allow people to grade HDR, but not see what they’re grading.  Which is like trying to use a table saw blindfolded.  Because that’s a thing people do, right?  Which brings me back to my original premise: What. The. Hell.

When you couple that little gem with the hard clip scaling, you realize that the only reason the color grading features are in this particular version is to make the process of cross grading from SMPTE ST.2084 into SDR easier, and nothing else.

No fields for adding HDR10 Compliant Metadata on Export. That's okay, you shouldn't use their exporter anyway (at least not this version)

Oh, and one last thing, the real kicker: you can’t even export HDR10 compliant files.  Yes, I know I said that in the HEVC encoder you can flag for ST.2084, but you can’t add any MaxFALL, MaxCLL, or Master Display metadata.  And yes, I double checked that Premiere didn’t quietly put those into the file without telling you (it doesn’t).

And it has zero support for Hybrid Log Gamma.  Way to pick a side, Adobe.


So passions aside, let’s run down the list again of new HDR tools and what they do:

  1. Recognize SMPTE ST.2084 files, but only when already properly flagged in HEVC streams and no other codec or format.

  2. Export minimal SMPTE ST.2084 metadata to flag for HDR, but only works if your source files are already in the HEVC format and already properly HDR flagged (see #1), or if they’re graded in HDR in the timeline, which you can’t see. Which renders their encoder effectively useless.

  3. Enable HDR grading through a convoluted process, with a minimal but useful set of tools. But you can’t see what you’re doing, so I'm not sure why they're there.

  4. There is no bullet point 4. That’s literally all it does.

The question that I have that I keep coming back to is “who do they think is going to use these tools?”  It feels like the entire feature set was a “well, we need to include HDR, so get it in there”.  But unlike the VR tools that you can kind-of build into, these HDR “tools” (I use the word loosely) are really problematic, not just because the toolset is incomplete but because the way that the current tools are implemented is actually harmful to a professional workflow.

Call it simple feature bandwagoning, or engineers that didn’t consult real creative professionals, or blame it on whatever reason you will.  But the fact is, this ‘feature’ is utter shit, which to me sours the whole release, just a little.

My biggest concern here is that while someone like me, who's been working with HDR for a while now, can tell that these will hurt my workflow, Premiere is an accessible editing platform for everyone from amateurs to professionals.  And anyone looking to get into HDR video may try to use these tools as their way in, and their results are going to be terrible.  God awful.  And that hurts everyone - why would we want to adopt HDR when 'most of what people can do' (meaning the amateurs and prosumers who don't know any better) looks bad?

So basically, if Premiere is part of your HDR workflow, don't even think about using their new 'tools'.

HDR Rant over, let’s bring this back to the positive.


Just to reiterate, the new audio tools in Premiere CC 2017 are fantastic.  I can't emphasize that enough.  Most of the rest of the new features are pretty good.  The new Team Projects collaboration tools, though I haven’t had a chance to use them, appear to work well (though they're still in beta).  The new captions are useful, the new visual keyboard layout manager is fantastic (though WAAAY overdue!), and the other under-the-hood adjustments have improved performance.

Should you upgrade?  Yes!  It’s a great upgrade!  Despite my gripes I’m overall happy with what they did!

Just don’t try to use it for HDR yet, and be aware that the new VR tools aren’t really that exciting.

Written by Samuel Bilodeau, Head of Technology and Post Production

HDR Video Part 5: Grading, Mastering, and Delivering HDR

To kick off our new weekly blog here on mysterybox.us, we’ve decided to publish five posts back-to-back on the subject of HDR video.  This is Part 5: Grading, Mastering, and Delivering HDR.

In our series on HDR so far, we’ve covered the basic question of “What is HDR?”, what hardware you need to see it, the new terms that apply to it, and how to prepare and shoot with HDR in mind.  Arguably, we’ve saved the most complicated subject for last: grading, mastering, and delivering.

First, we’re going to look at setting up an HDR grading project, and the actual mechanics of grading in the two HDR spaces.  Next, we’re going to look at how to prepare cross conversion grades to convert from one HDR space to the other, or from HDR to SDR spaces.  Then, we’re going to look at suitable compression options for master & intermediate files, before discussing how to prepare files suitable for end-user delivery.

Now, if you don’t handle your own coloring and mastering, you may be tempted simply to ignore this part of our series.  I’d recommend you don’t - not just because I’ve taken the time to write it, but because I sincerely believe that if you work at any step along an image pipeline, from acquisition to exhibition, your work will benefit from learning how the image is treated at the other steps along the way.  Just my two cents.

Let’s dive in.

NOTE: Much of this information will be dated, probably within the next six months to a year or so. As programs incorporate more native HDR features, some of the workarounds and manual processes described here will likely be obsolete.


Pick Your Program

Before diving into the nitty gritty of technique, we need to talk applications.  Built-in color grading tools or plugins for Premiere, Avid, or FCP-X are a no-no.  Until all of the major grading applications have full and native HDR support, you’re going to want to pick a program that offers full color flexibility and precision in making adjustments.

I’m going to run you through my workflow using DaVinci Resolve Studio, which I’ve been using to grade in HDR since October 2015, long before Resolve contained any native HDR tools.  My reasoning here is threefold: one, it’s the application I actually use for grading on a regular basis; two, the tricks I developed to grade HDR in DaVinci can be applied to most other color grading applications; and three, it offers some technical benefits that we find important to HDR grading, including:

  • 32 bit internal color processing

  • Node based corrections offering both sequential and parallel operations

  • Color space agnostic processing engine

  • Extensive LUT support, including support for multiple LUTs per shot

  • Ability to quickly apply timeline & group corrections

  • Extensive, easily accessible correction toolset with customizable levels of finesse

  • Timeline editing tools for quick edits or sequence changes

  • Proper metadata inclusion in QuickTime intermediate files

Now, I’m not going to say that DaVinci Resolve is perfect.  I have a laundry list of beefs that range from minor annoyances to major complaints (but the same is basically true for every program that I’ve used…), but for HDR grading its benefits outweigh its drawbacks.

My philosophy tends to be that if you can pretty easily make a program you’re familiar with do something, use that program.  So while we’re going to look at how to grade in DaVinci Resolve Studio, you should be able to use any professional color grading application to achieve similar results, by translating the technique of the grade into that application’s feature set.*

If you are using DaVinci Resolve Studio, I recommend upgrading to version 12.5.2 or higher, for reasons I’ll clarify in a bit.

DaVinci Resolve Studio version 12.5.2 has features that make it very useful for HDR grading and encoding.


Grading in HDR

So now that we’re clear on what we need in a color grading program, let’s get to the grading technique itself.  For starters, I’m going to focus on grading with the PQ EOTF rather than HLG, simply because there’s a lot of overlap between the two.  The initial subsections will focus on PQ grading, but I’ll conclude the section with a bit about how to adapt the advice (and your PQ grade!) to grading in HLG.

Set up the Project

I assume, at this point, that you’re familiar with how to import and set up a DaVinci Resolve Studio project for normal grading using your own hardware, adding footage, and importing the timeline with your sequence.  Most of that hasn’t changed, so go ahead and set up the project as usual, and then take a look at the settings that need to be different for HDR.

First, under your Master Project Settings you’re going to want to turn on DaVinci’s integrated color management by changing the Color Science value to “Davinci YRGB Color Managed”.  Enabling DaVinci’s color management allows you to set the working color space, which as of Resolve Studio 12.5.2 and higher will embed the correct color space, transfer function, and transform matrix metadata to QuickTime files using ProRes, DNxHR, H.264, or Uncompressed codecs.  As more and more applications become aware of how to move between color spaces, especially BT.2020 and the HDR curves, this is invaluable.

Enabling DaVinci YRGB Color Management as a Precursor for HDR Grading

Side note: I’m actually not recommending using their color management for input color space transformations; in fact, for my HDR grades, I actually set the input to “bypass” and the timeline and output color space values to the same values, because I don’t like how these transformations affect how basic grading operations act.  Color management is however a useful starting point for HDR and SDR cross conversions, which we’ll discuss in a bit.

Once color management is turned on, you’ll want to set it up for the HDR grade.  Move to the Color Management pane of the project settings and enable the setting “Use Separate Color Space and Gamma”.  This will give you fine-tunable control over the input, timeline, and output values.  If you want to keep these flat, i.e. prevent any actual color management by DaVinci, set the Input Color Space to “Bypass” and the Timeline and Output Color Space to “Rec.2020” - “ST.2084”.  This will embed the proper metadata in your renders without affecting any grading operations.

For the purposes of what I’m demonstrating here, if you are using DaVinci’s color management for color transformations, use these settings:

  • Input Color Space - <”Bypass,” Camera Color Space or Rec 709> - <”Bypass,” Camera Gamma or Rec 709>

  • Timeline Color Space - “Rec.2020” - “ST.2084”

  • Output Color Space - “Rec.2020” - “ST.2084”

DaVinci Resolve Studio for embedding HDR metadata in master files, without affecting overall color management.

NOTE: At the time of this writing DaVinci's ACES doesn’t support HLG at all, or PQ within the BT.2020 color space; in the future, this may be a better option to use, if you’re comfortable grading in ACES.

After setting your color management settings, you’ll want to enable your HDR scopes by flagging “Enable HDR Scopes for ST.2084” in the Color settings tab of the project settings.  This changes the scale on DaVinci’s integrated scopes from 10 bit digital values to a logarithmic brightness scale showing the output brightness of each pixel in nits.

How to Enable HDR Scopes for ST.2084 in DaVinci Resolve Studio 12.5+

DaVinci Resolve Studio scopes in standard digital scale, and in ST.2084 nits scale.

If you’re connected to your HDMI reference display, under Master Project Settings flag “Enable HDR Metadata over HDMI”, and under Color Management flag “HDR Mastering is for X nits” to trigger the HDR mode on your HDMI display.

How to enable HDR Metadata over HDMI to trigger HDR on consumer displays.

If you’re connected to a reference monitor over SDI, set the display’s color space to BT.2020 and its gamma curve to ST.2084 (and its Transform Matrix to BT.2020 or BT.709, depending on whether you’re using subsampling and what your output matrix is).

Settings for enabling SMPTE ST.2084 HDR on the Sony BVM-X300

That’s it for settings.  It’s really that simple.


Adjusting the Brightness Range

Now that we’ve got the project set up properly, we’re going to add the custom color management compensation that will allow the program’s mathematical engine to process changes in brightness and contrast in a way more conducive to grading in ST.2084.

The divergence of the PQ EOTF from a linear scale is pretty hefty, especially in the high values.  Internally, the mathematical engine operates on the linear digital values, with a slight weighting towards optimization for Gamma 2.4.  What we want to do is make the program respond more uniformly to the brightness levels (output values) of HDR, rather than to the digital values behind them (input values).
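To get a feel for just how non-linear that divergence is, you can compute the ST.2084 EOTF directly.  Here’s a minimal Python sketch using the published PQ constants - note that the midpoint of the code range lands at only about 92 nits of the 10,000 nit output range:

```python
# SMPTE ST.2084 (PQ) EOTF: maps a normalized code value (0-1)
# to absolute display luminance in nits.
m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_to_nits(code):
    e = code ** (1 / m2)
    return 10000 * (max(e - c1, 0) / (c2 - c3 * e)) ** (1 / m1)

# Half the code range covers only ~1% of the brightness range:
print(pq_to_nits(0.5))   # ~92 nits
print(pq_to_nits(1.0))   # 10000 nits
```

This is exactly why a grading engine that treats code values linearly feels wrong in PQ: a small nudge near the top of the scale sweeps through thousands of nits.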

We’re going to do this by setting up a bezier curve that compresses the lights and expands the darks:

Bezier curve for expanding the darks and compressing the whites of ST.2084, for grading with natural movement between exposure values in HDR

For best effect, we need to add the curve to a node after the rest of the corrections, either as a serial node after other correctors on individual clips, on the timeline as a whole (timeline corrections are processed in serial, after clip corrections), or exported as a LUT and attached to the overall output.

Where to attach the HDR bezier curve for best HDR grading experience - serial to each clip, or serial to all clips by attaching it to the timeline.

So what effect does this have on alterations?  Look at the side by side effect of the same gain adjustment on the histogram with and without the custom curve in serial:

Animated GIF of brightness adjustments with and without the HDR Bezier Curve

Without the curve, the upper range of brightnesses races through the HDR brights.  This is, as you can imagine, very unnatural and difficult to control.  With the curve, the bright ranges are forced to move more slowly - still increasing, but at a pace comparable to a linear adjustment of brightness rather than a linear adjustment of digital values: exactly what we want.

NOTE: DaVinci Resolve Studio includes a feature called “HDR Mode”, accessible through the context menu on individual nodes, that in theory is supposed to accomplish something similar.  I’ve found it has really strange effects on Lift - Gamma - Gain, and I can’t figure out how they’re supposed to help HDR grading: Gain races faster through the brights, Gamma is inverted and seems to compress the space, and so does Lift, but at a different rate.  If you’ve figured out how to make these controls useful, let me know…

If you've figured out how to use HDR Mode in DaVinci Resolve Studio for HDR grading, let me know!

Once that curve’s in place, grading in HDR becomes pretty normal, in some ways even easier than grading for SDR.  But there are a few differences that need to be noted, and a couple more tricks that will get your images looking the best.  And the first one of these we’ll look at is the HDR frenemy, MaxFALL.


Grading with MaxFALL

If you read the last part in this HDR series about shooting for HDR, you’ll remember that MaxFALL was an important consideration when planning the full scene for HDR.  In color grading you’re likely going to discover why MaxFALL is such an important consideration: it can become frustratingly limiting to what you think you want to do.

Just a quick recap: MaxFALL is the maximum frame average light level permitted by the display.  We calculate each frame average light level by measuring the light level, in nits, of each pixel and taking the average across each individual frame.  The MaxFALL value is the maximum encoded within an HDR stream, or permitted by a display.  The MaxFALL permitted by your reference or target display is what we really need to think about with respect to color grading.

Without getting into the technical reasons behind the MaxFALL, you can imagine it as collapsing all of the color and brightness within a frame into a single, full frame, uniform grey screen, and the MaxFALL is how bright that grey (white) screen can be before the display would be damaged.  Every display has a MaxFALL value, and will hard-limit the overall brightness by dimming the overall image when you send it a signal that exceeds the MaxFALL.
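In code, the bookkeeping behind these metadata values looks something like this.  This is a simplified sketch: the real CEA-861.3 measurement takes the maximum of each pixel’s R, G, and B components, which I’ve flattened here to a single luminance value per pixel:

```python
def frame_average(frame_nits):
    # Average light level, in nits, of one frame's pixels.
    return sum(frame_nits) / len(frame_nits)

def max_fall(frames):
    # MaxFALL: the brightest *average* frame in the stream.
    return max(frame_average(f) for f in frames)

def max_cll(frames):
    # MaxCLL: the brightest single pixel anywhere in the stream.
    return max(max(f) for f in frames)

# A tiny two-frame "clip": a mostly dark frame with a 1000 nit
# highlight, and an evenly bright frame.
frames = [[1000, 10, 10, 10], [300, 300, 300, 300]]
print(max_cll(frames))   # 1000
print(max_fall(frames))  # 300.0 - the even frame averages brighter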

Average Pixel Brightness with Accompanying Source Image

On the BVM-X300, you’ll notice the over range indicator turns on when you exceed the MaxFALL, so that when you look at the display, you can see when the display is limiting the brightness.  On consumer television displays, there is no such indicator, so if the dimming happens when you’re looking away from the screen, you’re likely to not notice the decreased range.  Use the professional reference when it’s available!

BVM-X300 Over Range Indicator showing MaxFALL Exceeded

Just like with CRT displays, the MaxFALL tends to be lower on reference displays than on consumer displays with the same peak brightness.  The larger size of consumer displays often reduces the damage produced by the heat generated from the higher current, and the color deviation tolerated in consumer displays allows for lower color fidelity, and therefore higher MaxFALLs, than a reference display.

So what do we do in grading that can be limited by the MaxFALL attribute?  Here are some scenarios that I’ve run into limitations with MaxFALL:

  1. Bright, sunny, outdoors scenes

  2. Large patches of blue skies

  3. Large patches of white clouds

  4. Large patches of clipped whites

  5. Large gradients into the brightest whites

When I first started grading in HDR, running into MaxFALL was infuriating.  You’re working at a nice clip when suddenly, no matter how much you raise the brightness of the scene, it just never gets brighter!  I didn’t understand it initially, since I was looking at the scopes and was well below the peak brightness available on my display, yet every time I added gain, the display bumped up, then suddenly dimmed down.

When MaxFALL is exceeded, the Over Range indicator lights up and the display brightness is notched down to prevent damage.

Now that I know what I was fighting against, it’s less infuriating, but still annoying.  In general, I know that I need to keep the majority of the scene around 100-120 nits, and pull only a small amount into the superwhites of HDR.  When my light levels are shifting across frames, as in this grade with the fire breather, I’ll actually allow a few frames to exceed the display’s MaxFALL temporarily, so long as it’s very, very brief, so that the display isn’t damaged when it temporarily averages brighter.

Grading with brief exceeding of target MaxFALL.

When I’m grading content that’s generally bright, with long stretches of even brighter material, such as this outdoor footage from India, it can be a good idea to keyframe an upper-brightness adjustment to drop the MaxFALL, basically dropping the peak white values as the clipped or white patch takes up more and more of the scene.  This can be visible, though, as a greying of the whites, so be careful.

Tilt-up Shot of Taj Mahal where brightness keyframes were required to limit MaxFALL. In an ideal world, no keyframes would have been necessary and the final frame would have been much brighter (as shot) than the first.

In other cases, it may be necessary to drop the overall frame brightness, to allow for additional peak brightness in a part of the frame, such as what happened with this shot of Notre Dame Cathedral, where I dropped the brightness of the sky, tree, and cathedral to less than what I wanted to allow the clouds to peak higher into the HDR white range.

Average brightness was limited so that more of the cloud details would push higher into the superwhites without exceeding MaxFALL

In some cases, you really have no choice but to darken the entire image and reduce the value of peak white, such as this shot of the backflip in front of the direct sun - the gradient near the sun visibly steps if I pull the center up to the peak white of the sun, while MaxFALL is exceeded if I raise the overall brightness of the image.

MaxFALL limited the white point to only 200 nits because of the quantity of the bright portion of the image and the softness of the gradient around the sun.

The last consideration with MaxFALL comes with editing across scenes, and is more important when maintaining consistency across a set of shots that should look like they’re in the same location.  You may have to decrease the peak white within the series of shots so that on no edit does the white suddenly appear grey, or rather, ‘less white’ than the shot before it.

Three shots with their possible peak brightnesses (due to MaxFALL limitations of the BVM-X300) vs the values I graded them at.

What do I mean by ‘less white’?  I mentioned it in Part 4: Shooting for HDR, but to briefly reiterate and reinforce:


In HDR grading, there’s no such thing as absolute white and black.


HDR Whites & Blacks

From a grading paradigm point of view, this may be the biggest technical shift: in HDR, there is no absolute white or absolute black.

Okay, well, that’s not entirely true, since there is a ‘lowest permitted digital code’ which is essentially the blackest value possible, and a ‘highest permitted digital code’ that can be called the peak brightness - essentially the whitest value possible within the system (encoded video + display).  However, in HDR, there is a range of whites available through the brights, and a range of blacks available through the darks.

Black and white have always been constructs in video systems, limited by the darkest and brightest values displays could produce - the hard-coded limits of the digital and voltage values available.  In traditional SDR color grading, crushing to black was simple: push the darks below the lowest legal dark value, and you have black.  Same thing with whites - set the brightness to the highest legal value and that was the white available: anything less tends to look grey, especially in contrast with ‘true white’ or ‘legal white’.

But in the real world, there is a continuum that exists between blacks and whites.  With the exception of a black hole, there is nothing that is truly ‘black’, and no matter how bright an object is, there’s always something brighter, or whiter than it.

Of course, that’s not how we see the world - we see blacks and whites all around us.  Because of the way the human visual system works, we perceive as black any part of a scene (that is, what is in front of our eyes) that is either very low in relative illumination and reflects all wavelengths of light relatively uniformly, or that is so low in relative illumination that few of the cones in our eyes are activated, leaving us unable to perceive the ratio of reflected wavelengths with any certainty.  In other words, everything that is dark with little saturation, or so dark that we can’t see the saturation, we perceive as black.

The same thing is true with whites, but in reverse.  Anything illuminated or emitting light beyond a certain brightness, with little wavelength variance (or a roughly even distribution of wavelengths), we see as white; likewise, anything so bright that we can’t differentiate between the colors reflected or emitted, we see as white.

Why do I bring this up?  Because unlike in SDR video where there is a coded black and coded white, in HDR video, there are ranges of blacks and whites (and colors of blacks and whites), and as a colorist you have the opportunity to decide what level of whiteness and blackness you want to add to the image.

Typically, any area that’s clipped should be pushed as close as possible to the scene-relative white level where the camera clipped.  Or, in other words, as high as possible in a scene with a very large range of exposure values, or significantly lower when the scene was overexposed and therefore clipped at a much lower relative ratio.

Clipping in an image with wide range of values and tones vs clipping in image with limited range of values and tones

Since this is different for every scene and every camera, it’s hard to recommend what that level should be.  I usually aim for the maximum value of the display, or for the highest level MaxFALL permits when my gradient into the white, or the size of the clipped region, won’t allow it to be brighter.

So long as the light level is consistent across edits, the whites will look the same and be seen as white.  If, within a scene, you have to drop the peak brightness level of one shot because of MaxFALL or other considerations, it’s probably going to look best if you drop the brightness level of the whites across every other shot within that same scene.  In DaVinci, you can do this quickly by grouping your shots and applying a limiting corrector (in the Group Post-Clip, to maintain the fidelity of any shot-based individual corrections).

Sometimes you may actually want a greyer white, or a colored white that reads more blue or yellow, depending on the scene.  In fact, when nothing within the image is clipping and you don’t have other MaxFALL considerations, it’s very liberating to decide the absolute white level within an image.  Shots without any ‘white’ elements can still have colored brights at levels well above traditional white, which helps separate the relative levels within a scene in a way that could not be possible with traditional SDR video.

The only catch, and this is a catch, is that when you do an SDR cross conversion, some of that creativity can translate into gross looking off-whites, but if you plan specifically for it in your cross conversion to SDR, you should be able to pull it off in HDR without any issues.

Blacks have a similar tonal range available to them.  You have about 100 levels of black available below traditional SDR’s clipping point, and that in turn creates some fantastic creative opportunities.  Whole scenes can play out with the majority of values below 10 nits.  Some parts of the darks can be so dark that they appear uniform black, until you block out the brighter areas of the screen and suddenly find that you can see even deeper into the blacks.  Noise, especially chromatic noise, disappears more in these deep darks, making the image appear cleaner than it would in SDR.  All of these offer incredible creative opportunities when planning for production, and I discussed them in more detail in Part 4: Shooting for HDR.

So how do you play with these whites and blacks?

The two tools I use on a regular basis to adjust my HDR whites and blacks are the High and Low LOG adjustments within DaVinci.  These tools allow me to apply localized gamma corrections to specific parts of the image, that is, those above a specific value for the highs adjustment, and those below a specific value for the lows adjustment.

DaVinci Resolve Studio's LOG Adjustment Panel

In SDR video, I typically use LOG adjustments on the whites to extend contrast, or to adjust the color of the near-whites.  In HDR, I first adjust the “High Range” value to ‘bite’ the part of the image that I want, and then pull it towards the specific brightness value I’m looking for.  This often (but not always) involves pulling up a specific part of the whites (say, the highlights on the clouds) to a higher brightness value in the HDR range, for a localized contrast enhancement, though I do use it to adjust the peak brightness too.

Effect of LOG Adjustments on an HDR Image with Waveform. Notice the extended details in the clouds.

In SDR video, I’d typically use the low adjustment to pull down my blacks to ‘true black’, or to fix a color shift in the blacks I’d introduced with another correction (or the camera captured). In HDR, I use the same adjustment to bite a portion of the lows and extend them through the range of blacks, increasing the local contrast in the darks to make the details that are already there more visible.

The availability of the LOG toolset is one of the major reasons I prefer color grading in DaVinci, and what it lets you do quickly with HDR grading really helps speed up the process.  Where it’s not available, its functionality is difficult to emulate with finesse using tools such as curves or lift-gamma-gain.  I’ve found it generally requires a secondary corrector limited to a specific color range, followed by a gamma adjustment - an inelegant workaround, but one that works.
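Numerically, that kind of high-range adjustment looks something like the sketch below: a gamma tweak restricted to values above a pivot (the equivalent of the ‘High Range’ bite point), with everything below left untouched.  The pivot and gamma values are illustrative, not DaVinci’s actual internals.

```python
# Rough numeric sketch of a LOG-style "high" adjustment: restrict a gamma
# tweak to values above a pivot (the 'bite' point), leaving everything below
# untouched. Values are normalized 0.0-1.0; pivot and gamma are illustrative.

def high_log_adjust(value, pivot=0.6, gamma=0.8):
    if value <= pivot:
        return value                       # below the bite point: unchanged
    # Renormalize the span above the pivot, apply the gamma, map it back.
    span = (value - pivot) / (1.0 - pivot)
    return pivot + (span ** gamma) * (1.0 - pivot)

print(high_log_adjust(0.5))   # untouched midtone
print(high_log_adjust(0.8))   # highlight lifted toward peak
```

A gamma below 1.0 lifts the selected highs for local contrast; a gamma above 1.0 would compress them back down, which is the direction you’d use to rein in peak brightness.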


Futureproofing

Once the grade is nearly finalized, there are a couple of things you may consider doing to clean it up and make it ‘futureproof’ - that is, to make sure that decisions you make now don’t come back to haunt the grade later.

If you’ve been grading by eye, any value above the maximum brightness of your reference display will be invisible, clipped at the maximum display value.  If you’re only ever using the footage internally, and on that display only, don’t worry about making it future proof.  If, however, you’re intending on sharing that content with anyone else, or upgrading your display later, you’ll want to actually add the clip to your grade.

The reasoning here is pretty easy to see: if you don’t clip your video signal, your master will contain information that you can’t actually see.  In the future, or on a different display with greater latitude, it may become visible.

There are a couple of ways of doing this.

One that’s available in DaVinci is to generate a soft-clip LUT in the Color Management section of the project settings, setting the top clip value to the 10 bit digital value of your display’s maximum brightness (767, for instance, for a 1000 nit max brightness display using PQ space).  Once you generate the LUT, attach it to the output and you’ve got yourself a fix.
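If you want to sanity-check that digital value for a different peak brightness, the PQ inverse EOTF is simple enough to compute directly.  This sketch uses the constants from SMPTE ST.2084 and assumes full-range 10-bit quantization; it lands within a couple of codes of the 767 quoted above (which comes from rounding the signal level to 75%).

```python
# The ST.2084 (PQ) inverse EOTF: absolute luminance in nits -> normalized
# signal. Constants are from the SMPTE ST.2084 specification; full-range
# 10-bit quantization is assumed for the code value.

def pq_encode(nits):
    """Absolute luminance (0-10000 nits) -> normalized PQ signal (0.0-1.0)."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

code_1000 = round(pq_encode(1000) * 1023)  # 10-bit code for a 1000-nit peak
print(code_1000)
```

The same function gives you the top clip value for any other mastering display: just substitute its peak brightness in nits.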

Generating a Soft Clipping LUT for ST.2084 at 1000 nits in DaVinci Resolve

Alternatively, you can adjust the roll-off curve we’re using for making uniform brightness adjustments so that it comes as close as possible to limiting the maximum displayable value, by extending the bezier curve into a near-flat line that lands at your target maximum.

Bezier curve for HDR grading with flatter whites to minimize peak range

But sometimes you may want to leave those values there, so that when the next generation of brighter displays comes around, you may find a little more detail in the lights.  What’s really important here is that you make white white, and not accidentally off-white.

If you’re working with RAW footage that allows you to adjust the white balance later, you may find that where white ‘clipped’ on the sensor isn’t uniform in all three channels.  This can also happen with a grading correction that adjusts the color balance of the whites - you can end up with separate clip points in the red, green, and blue channels that may be clipped and invisible on your display, but will show up in the future.

Waveform of clipped whites with separated RGB Channels. This is common with RAW grading with clipped whites at the sensor and the ability to control decoded color temperature.

The simple fix here is to add a serial node adjustment that selects, as a gradient, all values above a specific point, and desaturate the hell out of them.  Be careful to limit your range to low saturation values only (so long as they encompass what you’re trying to hit) so that you don’t accidentally desaturate other, more intentionally colorful parts of the image that just happen to be bright.
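In pseudo-numeric form, the selection logic reads something like this sketch - the luminance and saturation thresholds are arbitrary stand-ins for the qualifier settings you’d dial in by eye, and the luminance and saturation measures are deliberately crude.

```python
# Illustrative version of the fix described above: pixels that are both very
# bright and nearly neutral get their saturation pulled to zero so all three
# channels clip together. Thresholds are arbitrary; RGB is normalized 0.0-1.0.

def desaturate_clipped_whites(r, g, b, lum_min=0.95, sat_max=0.1):
    luma = (r + g + b) / 3.0              # crude luminance for the example
    sat = max(r, g, b) - min(r, g, b)     # crude saturation measure
    if luma >= lum_min and sat <= sat_max:
        return (luma, luma, luma)         # fully desaturated, uniform white
    return (r, g, b)                      # leave everything else alone

print(desaturate_clipped_whites(1.0, 0.97, 0.94))  # near-clip -> neutral
print(desaturate_clipped_whites(1.0, 0.6, 0.2))    # bright but colorful: kept
```

The second guard (low saturation only) is what keeps a bright, saturated light source - a neon sign, say - from being accidentally greyed out along with the clipped whites.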

How to fix RGB separated clipped whites: add a serial node with a Hue/Saturation/Luminance restriction to just the whites and reduce their saturation to zero.

Working with Hybrid Log Gamma

Up to this point the grading techniques I’ve been discussing have been centered on grading in PQ space.  Grading in Hybrid Log Gamma is slightly different in a couple of important ways.

As a quick refresher, Hybrid Log Gamma is an HDR EOTF that intends to be partially backwards compatible with traditional gamma 2.4 video.  This is a benefit and a drawback when it comes to HDR grading.

If you have multiple reference displays available, this is an important time to break them out.  Ideally, one display should be set up in HLG with a system gamma of 1.2 (or whatever your target system gamma is), and the second should be set up in regular gamma 2.4.  That way, whatever grading you do, you can see the effect immediately on both target systems.  Otherwise you’ll need to flip back and forth between HDR and SDR modes on a single reference display in your search for ‘the happy medium’.

Grading HLG with two reference displays - one in HDR, one in SDR, to ensure the best possible contrast in both.

Most of the project and grading setup is identical to grading with the PQ EOTF, with the exception of the bezier curve in serial that adjusts the brightness response.  In HLG we don’t want to expand the darks, since the HLG darks are identical to the gamma 2.4 darks, so we want that part of the curve to be more linear, before easing into our compression of the highs.

Bezier curve for HDR grading in Hybrid Log Gamma. This curve replaces the ST.2084 Bezier curve added earlier.

Once that’s in place, the rest of the grading process is similar to grading in PQ.  In fact, you can replace the ST.2084 bezier curve with this curve and your grade should be nearly ready to go in HLG.  The major exception is that you still need to regularly evaluate how the image looks in SDR, on a shot by shot basis.

The biggest complaint I have with grading in HLG is the relative contrast between the HDR and the SDR images.  Because HLG runs up to 5000 nits with its top digital values, if you’re grading in 1000 nits you end up with a white level in the SDR version below the usual peak white.  This often means that the whites in the SDR version look muddied and lower contrast than the same content graded for SDR natively.  This is especially true when the MaxFALL dictates a darker image is necessary and a lower white point is necessary, landing values solidly in the middle ranges of brightness.

Hybrid Log Gamma occasionally has much dimmer and muddied whites, when compared to SDR natively graded footage, due to MaxFALL limitations.

And as if muddied whites weren’t enough, it’s difficult in HLG to find a contrast curve that works for both the HDR and the SDR image: because of how our brains perceive contrast, when the contrast looks right and natural in HDR, it looks flat in SDR because of the more limited dynamic range, while when it looks right in SDR it looks overly contrasty in HDR.

Personally, I find grading in HLG compounds the minor problems of HDR with the problems of SDR, which I find extremely irritating.  Rather than being happy with the grade, I’m often left with a sense of “It’s okay, I guess”.

But on the other hand, when it’s done, you won’t necessarily have to regrade for other target gamma systems, which is what you have to do when working in PQ.



Cross Converting HDR to HDR & HDR to SDR

Let’s be honest.  A PQ encoded image displayed in standard gamma 2.4 rendering looks disgusting.  The trouble is, we only really want to do the bulk of the grading once, so how can we cheat and make sure we don’t have to regrade every project two or more times?

LUTs, LUTs, and more LUTs!  Also, Dolby Vision.

Dolby Vision is an optional (paid to Dolby) add-in for DaVinci Resolve Studio that allows you to encode the metadata for the SDR cross conversion into your output files.  Essentially, the PQ HDR image is transmitted with metadata that describes how to transform the HDR into a properly graded SDR image.  It’s a nifty process that seeks to solve the dilemma of backwards compatibility.

But I’ve never used it, because we’ve had no need and I don’t have a license.  The documentation on how to use it with DaVinci is extensive though, and it follows a process similar to a standard SDR cross conversion, so take that as you will.  I’ve also heard rumors that some major industry players are looking for / looking to create a royalty-free dynamic metadata alternative that everyone can use as a global standard for transmitting this information - but that’s just a rumor.

For everyone not using Dolby Vision, you’re going to have to render the SDR versions separately from the HDR versions, as separate video files.  Here at Mystery Box, we prefer to render the entire HDR sequence as a set of clip-separated 12 bit intermediate files and make the SDR grade from those, versus adding additional corrector elements to the HDR grade.  This tends to render faster, because you only render from the RAWs once, and make any other post-processing adjustments once instead of on every version.

NOTE: I’m going to cover the reason why later, but it’s important that you use a 12 bit intermediate if you want a 10 bit master, since the cross conversion from PQ to any other gamma system cuts the detail levels preserved by about 2-4 times, or an effective loss of 1-2 bits of information per channel.

When I’m cross converting from PQ in the BT.2020 space to gamma 2.4 in the BT.2020 space, after reimporting and reassembling the HDR sequence (and adding any logos or text as necessary), I’ll duplicate the HDR sequence and add a custom LUT to the timeline.

The fastest way to build this LUT is to use the built-in DaVinci Color Management (set the sequence gamma to ST.2084 and the output gamma to Gamma 2.4) or the HDR 1000 nits to Gamma 2.4 LUT, and then add a gain and gamma adjustment to bring the brightness range and contrast back to where you want it to be.  It’s a pretty good place to start building your own LUT on, and while these tools weren’t available when I started building my first cross conversion LUT, the process they use is nearly identical to what I did.

Using DaVinci Resolve Studio to handle HDR to SDR cross conversion

Using DaVinci Resolve Studio to handle HDR to SDR cross conversion

Once you’ve attached that correction to the timeline, it’s a pretty fast process to run through each shot and simply make minor brightness, contrast, white point, and black point adjustments.  Using DaVinci’s built-in LUT / Color Management I can do a full SDR cross conversion for 5 minutes of footage in less than half an hour; using my own custom LUT, the process can take less than five minutes.

HDR to SDR Cross Conversions using a Custom LUT vs. using DaVinci Resolve Studio's integrated conversion + brightness adjustment, Image 01

HDR to SDR Cross Conversions using a Custom LUT vs. using DaVinci Resolve Studio's integrated conversion + brightness adjustment, Image 02

HDR to SDR Cross Conversions using a Custom LUT vs. using DaVinci Resolve Studio's integrated conversion + brightness adjustment, Image 03

HDR to SDR Cross Conversions using a Custom LUT vs. using DaVinci Resolve Studio's integrated conversion + brightness adjustment, Image 03

HDR to SDR Cross Conversions using a Custom LUT vs. using DaVinci Resolve Studio's integrated conversion + brightness adjustment, Image 04

Notice the detail loss in the pinks, reds, and oranges because of over saturation in the simple downconversion process (images 01 and 04), the milkiness and hue shifting in the darks (image 02), and the fluorescence of the pinks and skin tones (image 03) with a straight downconversion.  This happens largely in the BT.2020 to BT.709 color space conversion, when colors land outside of the BT.709 gamut.  Building a custom LUT can be a great solution to retain the detail.

After prepping the BT.2020 version, making a BT.709 version for web or demonstration purposes is incredibly easy.  All you have to do is duplicate the BT.2020 sequence (this is why I like adding LUTs to timelines, instead of to the output globally) and add an additional LUT to the timeline that does a color space cross conversion from BT.2020 to BT.709.  (Alternatively, change the color management settings.)  Since the BT.2020 and BT.709 contrast is the same, all I need to do then is run through the sequence looking for regions where reds, blues, or greens end up out of gamut, and bring those back in.  That’s usually less than 5 minutes’ work for a 5 minute project.
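If you want to check numerically whether a color will land out of gamut, the conversion is a single 3x3 matrix on linear RGB.  The matrix below is the standard BT.2020-to-BT.709 primaries conversion; any resulting channel below 0.0 or above 1.0 sits outside the BT.709 gamut and will clip.

```python
# Sketch of the out-of-gamut check described above: convert a linear BT.2020
# RGB triple to BT.709 primaries with the standard 3x3 matrix. A negative or
# >1.0 channel in the result means the color is outside the BT.709 gamut.

BT2020_TO_BT709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def to_bt709(rgb2020):
    return [sum(m * c for m, c in zip(row, rgb2020)) for row in BT2020_TO_BT709]

def out_of_709_gamut(rgb2020):
    return any(c < 0.0 or c > 1.0 for c in to_bt709(rgb2020))

print(out_of_709_gamut([0.5, 0.5, 0.5]))   # neutral grey: in gamut
print(out_of_709_gamut([0.9, 0.05, 0.1]))  # saturated BT.2020 red: out
```

Neutrals pass through essentially unchanged (each matrix row sums to ~1.0), which is why the damage from a straight conversion concentrates in the saturated colors.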

Stacked LUTs on a Timeline to combine transformations.

Cross converting from HLG to PQ is fairly simple, since PQ encompasses a larger range of brightnesses than the HLG range and it can fairly easily be directly moved over with a simple LUT or color management tool; you may want to adjust your low-end blacks to take advantage of the deeper PQ space, but it’s otherwise straightforward.

Cross-grading from PQ to HLG is a different animal altogether.  It’s still faster to work from the intermediate than the RAWs themselves, but it’s more than just a simple LUT or color management solution.  Because of HLG’s special consideration - that its contrast has to look good in both HLG and gamma 2.4 - you have a lot more work to do finessing the contrast than when you convert ST.2084 into gamma 2.4.  You’ll also run into issues with balancing the MaxFALL in HLG, which in some cases you’ll just have to ignore.

DaVinci’s built-in color management is actually quite a good starting point for cross converting from HLG to PQ or PQ to HLG.  It’s important, though, to be aware of how color management injects metadata into QuickTime files, which I’ll address in a moment, so that you don’t accidentally flag the incorrect color space or gamma in your master files.

Using DaVinci Color Management to apply an HLG to ST.2084 cross conversion.

Understanding how LUTs work to handle SDR cross conversions is really important, because until there’s a universal metadata method for including SDR grades with HDR content, which in and of itself would essentially be a version of a shot-by-shot LUT, display manufacturers and content delivery system creators rely on LUTs (or their mathematical equivalent) to convert your HDR content into something that can be shown on SDR displays!


Metadata & DaVinci’s Color Management

If you’re using color management to handle parts of your color space and gamma curve transformations, you’re going to need to adjust the Output Color Space each time you change sequences, to match the targeted space of that timeline (in addition to switching the settings on your reference display).  This is actually the biggest reason I prefer using LUTs over color management - it just becomes a hassle to continually have to reset the color management when I’m grading.

Even if you’re not using the color management to handle color space conversions, you’re going to need to make some changes to the color management settings when rendering out QuickTime masters, so that the correct metadata is included into the master files.

Proper Metadata Inclusion for BT.2020 / ST.2084 QuickTime File, encoded in ProRes 4444 out of DaVinci Resolve Studio.

The settings you use when you go to render will depend on whether you’re using color management for the transformation or not.  If you are using color management for the transform, change just the Output Color Space to match the target color space and gamma of the timeline to be rendered.  If you aren’t using color management to handle the color conversion, switch both the Timeline Color Space and the Output Color Space to match your target color space and gamma immediately before rendering the matching timeline.  Again, and unfortunately, you will need to make this adjustment every time you go to render a new sequence.  Sorry, no batch processing.

DaVinci Resolve Studio Color Management Settings for transforming color and adding metadata, and adding metadata only.

Grading in HDR isn’t as hard as it originally seems, once you figure out the tricks that allow the grading system to respond to your input as you would expect and predict.  And despite how different HDR is from traditional video, SDR and HDR cross conversions aren’t as hard as they seem either, especially when you’re using prepared LUTs specifically designed for that process.


Mastering in HDR

When it comes to picking an appropriate master or intermediate codec for HDR video files, the simplest solution would always be to pick an uncompressed format with an appropriate per-channel bit depth.  Other than the massive file size considerations (especially when dealing with 4K+ video), there are a few cautions here.  

First, for most of the codecs available today that use chroma subsampling, the transfer matrix that converts from RGB to YCbCr is the BT.709 transfer matrix, and not the newer BT.2020 transfer matrix, which should be used with the BT.2020 color space.  This isn’t a problem per se, and actually benefits out of date decoders that don’t honor the BT.2020 transfer matrix, even with the proper metadata.  It’s also possible to use the BT.2020 transfer matrix and improperly flag the matrix used when working with a transcoding application that requires manual flagging instead of metadata flagging.  At its very worst, it can create a very small amount of color shifting on decode.

A slightly more concerning consideration, however, is the availability of high quality 12+ bit codecs for use in intermediate files.  Obviously any codec using only 8 bits per channel is out of the question for HDR masters or intermediates, since 10 bits are required by all HDR standards.  10 bit encoding is completely fine for mastering, and codecs like ProRes 422, DNxHR HQX/444, 10 bit DPX, or any of the many proprietary ‘uncompressed’ 10 bit formats you’ll find with most NLEs and color correction software should all work effectively.

However, if you’re considering which codecs to use as intermediates for HDR work, especially if you’re planning on an SDR down-grade from these intermediates, 12 bits per channel as a minimum is important.  I don’t want to get sidetracked into the math behind it, but just a straight cross conversion from PQ HDR into SDR loses about ½ bit of precision in data scaling, and another ¼ - ½ bit of precision in redistributing the values to the gamma 2.4 curve, leaving a little more than 1 bit of precision available for readjusting the contrast curve (these are not uniform values).  So, to end up with an error-free 10 bit master (say, for UHD broadcast) you need to encode 12 bits of precision into your HDR intermediate.
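You can see the scaling loss with a quick back-of-envelope count: of the 1024 code values in a 10 bit PQ signal, only about half fall inside the 0-100 nit range that an SDR gamma 2.4 master has to spread across its own 1024 codes.  This sketch uses the ST.2084 constants; the exact count isn’t critical, only the roughly 2:1 ratio.

```python
# Back-of-envelope check of the precision claim above: count how many of the
# 1024 10-bit PQ code values land in the 0-100 nit SDR brightness range.
# Constants are from the SMPTE ST.2084 specification.

m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_decode(signal):
    """Normalized PQ signal (0.0-1.0) -> absolute luminance in nits."""
    p = signal ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

sdr_range_codes = sum(1 for code in range(1024)
                      if pq_decode(code / 1023) <= 100.0)
print(sdr_range_codes)   # ~520 of 1024 codes cover the whole SDR range
```

Roughly 520 source codes feeding 1024 destination codes is about a 1 bit shortfall before you’ve even touched the contrast curve; starting from a 12 bit intermediate quadruples the source codes and absorbs the loss.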

ProRes 4444 / 4444 (XQ), DNxHR 444, 12 bit DPX, Cineform RGB 12 bit, 16 bit TIFFs, or OpenEXR (Half Precision) are all suitable intermediate codecs, though it’s important to double check all of your downstream applications to make sure that whichever you pick will work later.  Similarly, any of these codecs should be suitable for mastering, with the possibility of creating a cross converted grade from the master later.

I just want to note before anyone actually asks: intermediate and master files encapsulating HDR video are still reeditable after rendering - they can be assembled, cut, combined, etc just like regular video files.  You don’t need to be using an HDR display to do that either - they just look a little flatter on a regular display (except if you’re using HLG).  So long as you don’t pass them through a process that drops the precision of the encoded video, you should be fine to work with them in other applications as usual, though you may want to return to DaVinci to add the necessary metadata to whatever your final sequence ends up being.


Metadata

After you’ve made the master, it’s easy to assume you’re done.  But HDR specifications call for display referenced metadata during encoding of the final deliverable stream, so it’s actually important to record this metadata at the time of creation, if you aren’t handling the final encode yourself.  Unfortunately, currently none of the video file formats have a standardized place to record this metadata.

Your options are fairly limited; the simplest solution is to include a simple text file with a list of attribute:value pairs.

Text file containing necessary key : value pairs for an HDR master file that doesn't provide embedded metadata.

What metadata should you include?  It’s a good idea to include everything that you’d need to include in the standard VUI for HDR transmission:

  • Color Primaries

  • Transfer Matrix (for chroma subsampled video)

  • Transfer Characteristics

  • MaxCLL

  • MaxFALL

  • Master Display

When you’re creating distribution files, each of these values needs to be properly set to flag a stream as HDR Video to the decoding display.  It’s possible to guess many of these (color space, transfer matrix, etc.) if you’ve been provided with a master file without metadata, but it’s much easier to record and provide this metadata at the time of creation so that no matter how far down the line you come back to the master, none of the information is lost.
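If you’d rather not write the sidecar by hand, a few lines of scripting will do it.  The filename is a hypothetical placeholder, and the values mirror the example encode settings later in this article.

```python
# One way to implement the sidecar suggestion above: write the VUI-style
# values as plain "key: value" text next to the master file. The filename is
# a placeholder; the values mirror the example encode settings in this article.

hdr_metadata = {
    "Color Primaries": "BT.2020",
    "Transfer Matrix": "BT.2020nc",
    "Transfer Characteristics": "SMPTE ST.2084",
    "MaxCLL": "1000 nits",
    "MaxFALL": "180 nits",
    "Master Display": "G(8500,39850)B(6550,2300)R(35400,14600)"
                      "WP(15635,16450)L(10000000,1)",
}

with open("master_hdr_metadata.txt", "w") as sidecar:
    for key, value in hdr_metadata.items():
        sidecar.write(f"{key}: {value}\n")
```

Plain text survives every archive format and asset management system, which is the whole point: the file should still be readable whenever someone returns to the master.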


Distributing HDR

If you’ve made it this far through the HDR creation process, there should really only be one major question remaining: how do we encode HDR video in a way that consumers can see it?

First, the bad news.  There’s no standardization for HDR in digital cinema yet.  So if your intention is a theatrical HDR delivery, you’ll probably need to work with Dolby.  At the moment, they’re the only ones with actual installations that can display HDR, and they have specialists who will handle that step for you.  For most people, what we want to know is how to get an HDR capable television to display the video file properly.

This is where things get more tricky.

I don’t say that because it’s a necessarily complicated process, but because there are no ‘drop-in’ solutions generally available to do it (other than YouTube, very soon).

There are only three codecs that can, by specification, actually be used for distributing HDR video: HEVC, VP9, and AV1 (AV1 is the successor to VP9), and within these only specific operational modes support HDR.  Of these three, the only real option at the moment is HEVC, simply because HDR televisions support hardware based 10 bit HEVC decoding - it’s the same hardware decoder needed for the video stream of UHD broadcasts.

HEVC encoding support is still rather limited, and finding an application with an encoder that supports all of the optional features needed to encode HDR is still difficult.  Adobe Media Encoder, for instance, supports 10 bit HEVC rendering, but doesn’t allow for the embedding of VUI metadata, which means that the file won’t trigger the right mode in the end-viewer’s televisions.

Unfortunately, there’s only one freely available encoder that gives you access to all of the options you need for HDR video encoding: x265 through FFmpeg.

If you’re not comfortable using FFmpeg through a command line, I seriously recommend downloading Hybrid (http://www.selur.de), which is one of the best, if not the best, FFmpeg frontend I’ve found.

Here are the settings that I typically use for encoding HEVC using FFmpeg for a file graded in SMPTE ST.2084 HDR using BT.2020 primaries on our BVM-X300, at a UHD resolution with a frame rate of 59.94fps:

Profile: Main 10
Tier: Main
Bit Depth: 10-bit
Encoding Mode: Average Bitrate (1-Pass)
Target Bitrate: 18,000 - 50,000 kbps
GOP: Closed
Primaries: BT.2020
Matrix: BT.2020nc
Transfer Characteristics: SMPTE ST.2084
MaxCLL: 1000 nits
MaxFALL: 180 nits
Master Display: G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(10000000,1)
Repeat Headers: True
Signaling: HRD, AUD, SEI Info
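For anyone scripting this instead of using Hybrid, the settings above translate into an x265 parameter string along these lines.  The flag spellings follow x265’s documented options, but the input and output filenames are placeholders, and you should verify the flags against your own x265 build.

```python
# Sketch: assemble the x265 parameter string for the HDR settings listed
# above, as passed to FFmpeg via -x265-params. Flag names follow x265's
# documented options; filenames are hypothetical placeholders.

x265_params = ":".join([
    "colorprim=bt2020",
    "transfer=smpte2084",
    "colormatrix=bt2020nc",
    "max-cll=1000,180",                      # MaxCLL, MaxFALL in nits
    "master-display=G(8500,39850)B(6550,2300)R(35400,14600)"
    "WP(15635,16450)L(10000000,1)",
    "keyint=48", "no-open-gop=1",            # closed GOP (example interval)
    "repeat-headers=1", "hrd=1", "aud=1",    # signaling for broadcast decoders
])

ffmpeg_cmd = (
    "ffmpeg -i graded_master.mov -c:v libx265 "
    "-profile:v main10 -pix_fmt yuv420p10le -b:v 18000k "
    f'-x265-params "{x265_params}" output_hdr.mp4'
)
print(ffmpeg_cmd)
```

The master-display string here is the BVM-X300 example from the settings list; if you master on a different display, its ST.2086 primaries, white point, and luminance range go in its place.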

I’ve only listed the settings that are different from the default x265 settings, so let me run through what they do, and why I use these values.

First, x265 needs to output a 10-bit stream in order to be compliant with the UHD broadcast, SMPTE ST.2084, ARIB STD-B67, or HDR10 standards.  To trigger that mode, I set the Profile to Main 10 and the Bit Depth to 10-bit.  Unless you’re setting a really high bitrate, or using 8K video, you shouldn’t need a higher Tier than Main.

Next, I target 18 - 50 mbps as an average bitrate, with a 1 pass encoding scheme.  If you can tolerate a little flexibility in the final bitrate, I prefer using this mode, simply because it balances render time with quality, without padding the final result.  If you need broadcast compliant UHD, you’ll need to drop the target bitrate from 18 to 15 mbps, to leave enough headroom on the 20 mbps available bandwidth for audio programs, closed captions, etc.

x265 Main Compression Settings for HDR Delivery (Using Hybrid FFMPEG front-end)

However, I’ve found that 15mbps does introduce some artifacts, in most cases, when using high frame rates such as 50 or 60p.  18 seems to be about the most that many television decoders can handle seamlessly, though individual manufacturers vary and it does depend significantly on the content you’re transmitting.  Between 30 and 50 mbps you end up with a near-lossless encode, so if you happen to know the final display system can handle it, pushing the bitrate up can give you better results.  Above 50 mbps, there are no perceptual benefits to raising the bitrate.

A closed GOP is useful for random seeks and to minimize the amount of memory used by the decoder.  By default, x265 uses a GOP of at most 250 frames, so reference frames can end up being stored for quite some time when using an open GOP; it’s better just to keep it closed.
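In x265 terms, a closed GOP means setting open-gop=0 and capping the keyframe interval with keyint.  A two-second GOP is my own habit rather than a rule, but for 59.94fps material it works out like this:

```shell
# Closed-GOP x265 parameters for 59.94fps material (illustrative values;
# a two-second GOP is an assumption, not a requirement).
FPS=59.94
# keyint = round(2 * fps) -> 120 frames at 59.94fps
KEYINT=$(awk -v f="$FPS" 'BEGIN { printf "%d", (2 * f) + 0.5 }')
echo "open-gop=0:keyint=$KEYINT:min-keyint=$KEYINT"
```

Appending that fragment to the x265-params string keeps the decoder’s reference-frame memory bounded and makes random seeks land cleanly on IDR frames.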

x265 Frames Compression Settings for HDR Delivery (Using Hybrid FFMPEG front-end)

Next we add the necessary HDR metadata into the Video Usability Information (VUI).  This is the metadata required by HDR10, and records information about your mastering settings, including color space, which HDR EOTF you’re using, the MaxCLL of the encoded video, the MaxFALL of the encoded video (if you’ve kept your MaxFALL below your display’s peak, you can estimate this value using the display’s MaxFALL), and the SMPTE ST.2086 metadata that records the primaries, white point, and brightness range of the display itself.
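If the master-display string looks opaque, that’s because ST.2086 stores chromaticity coordinates in increments of 0.00002 and luminance in increments of 0.0001 cd/m².  Decoding the green primary and the luminance range from the string used above shows where the numbers come from:

```shell
# Decode ST.2086 master-display values.
# Units: 0.00002 per step for x/y chromaticity, 0.0001 nits per step for luminance.
GX=8500; GY=39850          # G(8500,39850) from the master-display string
LMAX=10000000; LMIN=1      # L(10000000,1)
awk -v gx="$GX" -v gy="$GY" -v lmax="$LMAX" -v lmin="$LMIN" 'BEGIN {
  printf "green primary: x=%.3f y=%.3f\n", gx * 0.00002, gy * 0.00002
  printf "luminance: max=%g nits, min=%g nits\n", lmax * 0.0001, lmin * 0.0001
}'
```

The decoded green primary (0.170, 0.797) is the BT.2020 green coordinate, and the luminance pair is the 0.0001 - 1000 nit range of the BVM-X300 in ST.2084 mode.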

x265 Video Usability Information Compression Settings for HDR Delivery (Using Hybrid FFMPEG front-end)

This metadata is embedded into the headers of the video stream itself, so even if you change containers the information will still be there.  To make sure the metadata is stored at regular intervals, and to enable smoother random access to the video stream, the last step is to turn on the option for repeating the headers and to include HRD, AUD, and SEI Info.

x265 Stream Settings for HDR Delivery (Using Hybrid FFMPEG front-end)

The HEVC stream can be wrapped in either a .mp4 or a .ts container; both are valid MPEG containers and should work properly on HDR televisions.  Be aware that it can take a while to get your settings right on the encode; if you’re using Hybrid you may need to tweak some of the settings to get 10-bit HEVC to encode without crashing (I enable “Prefer FFmpeg” and “Use gpu for decoding” to get it running stable) - don’t leave testing to the last minute!
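Rewrapping between the two containers is a lossless stream copy, so one encode can serve both deliverables.  The filenames below are placeholders, and the hvc1 tag is a common compatibility convention for .mp4 players rather than a requirement of the spec:

```shell
# Wrap a raw HEVC elementary stream without re-encoding (stream copy).
# "graded.hevc" is a placeholder input; requires an FFmpeg install to run.
MP4_CMD="ffmpeg -i graded.hevc -c:v copy -tag:v hvc1 delivery.mp4"
TS_CMD="ffmpeg -i graded.hevc -c:v copy delivery.ts"
echo "$MP4_CMD"
echo "$TS_CMD"
```

Because the headers repeat inside the stream itself, the HDR metadata survives either rewrap intact.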


Grading, mastering, and delivering HDR are the last pieces you need to understand to create excellent quality HDR video.  We hope that the information in this guide to HDR video will help you to be confident in working in this new and exciting video format.

HDR Video is the future of video.  It’s time to get comfortable with it, because it’s not going anywhere.  The sooner you get on board with it and start working with the medium, the more prepared you’ll be for the forthcoming time when HDR video becomes the de facto video standard.

Written by Samuel Bilodeau, Head of Technology and Post Production


Endnotes


*The rationale behind the technical requirements will become clear over the course of the article.  I would recommend that you look at the documentation for the application you use to make sure it meets the same minimum technical requirements as DaVinci Resolve when grading in HDR.  Most major color grading programs meet most or all of these technical criteria, and it’s always better to grade in the program you know than in the program you don’t.


However, if you are looking to pick a program right off the bat, I’d recommend DaVinci Resolve Studio, primarily because you can use the free version of Resolve to learn the application and toolset before ever having to spend a dime.


** You should always test that these codecs actually perform as expected with HDR in your workflow, even if you’ve used them for other applications in the past.  I’ve run into an issue where certain applications decode the codecs in different ways that have little effect in SDR, but create larger shifts and stepping in HDR.

HDR Video Part 2: HDR Video Reference Hardware

UPDATE 18 December 2017: We've posted a new blog about using Production HDR monitors for grading in HDR.  This puts HDR grading displays in the sub $4,000 USD range.  Read our post about how to do that and what you'll need here.

To kick off our new weekly blog here on mysterybox.us, we’ve decided to publish five posts back-to-back on the subject of HDR video.  This is Part 2: HDR Video Reference Hardware.

In HDR Video Part 1 we explored what HDR video is, and what makes it different from traditional video.  Here in Part 2, we’re going to look at what hardware is needed for proper HDR grading (as of October 2016), and how to partially emulate HDR to get a feel for grading in the standard before investing in a full HDR setup.


New Standard, New Hardware

Alright, first, the bad news.  Professional grade reference displays for HDR are expensive.  And there are only two that are commercially available for purchase*: the Sony BVM-X300 and the Dolby PRM-4220.  Both cover 100% of DCI-P3 space, but the BVM-X300 operates in and covers most of BT.2020, has a 4K resolution, a peak brightness of 1000 nits, and uses OLED panels for more detail through the darks.  The PRM-4220 is an excellent display, but is only 2K in resolution and 600 nits max, though it operates with a 12 bit panel for better DCI reference.

At the time of this writing, these are the only two commercially available HDR reference monitors.

At this time, I can’t find any DCI projectors advertising HDR capabilities, though I think that a bright enough laser projector with the right LUT could emulate one in a small environment - essentially using the LUT trick I’m going to describe below while using a projector that’s 10x brighter than it should be for the reference environment.  That doesn’t mean they don’t exist, it just means you’ll need to talk to the manufacturer directly.  I haven’t tested this, though, so don’t quote me on it.

There's at least one reference display that claims to be HDR compatible but really isn’t - the Canon DP-V2410.  Frankly, the display is gorgeous and comparable to the Sony for color rendering and detail level, but its max brightness is only 300 nits and its HDR mode downscales the SMPTE ST.2084 0.0001 - 1000 nit range into the 0.01 - 300 nit range.  This leaves the overall image rather lackluster and less impactful, though you could use it to grade in a pinch, since its response curve is right.  But I wouldn’t, primarily because of MaxFALL, which I’ll cover extensively in Parts 4 and 5.

At Mystery Box we decided to go with the Sony BVM-X300 for our HDR grading.  I can’t praise the look of this display enough, though I do have my gripes (I mean, who doesn’t?), but I’ll save that review for another time.

Sony BVM-X300 (Right) in Mystery Box's grading environment (lights on, for detail clarity)


HDR Video on Consumer Displays

DaVinci Resolve Studio 12.5+ Settings for enabling HDR metadata over HDMI

The most affordable option for grading in HDR is to use an HDR television.  The Vizio Reference series has nice color with a 300 nit peak (in HDR mode), while the LG 2016 OLED HDR series displays have phenomenal color, with max brightness levels approaching 1000 nits.

The catch is, of course, that there is still more variation in the color of the display than in a reference display, so unless you know for certain that you’re going to be exhibiting on that specific display, be cautious when using them to grade.  They also lack SDI inputs, but that’s solvable.

DaVinci Resolve Studio version 12.5+ has an option to inject flags for HDR and BT.2020 into the HDMI output of your DeckLink or UltraStudio hardware.  To grade in HDR using a consumer HDR television with HDMI input, simply hook up the display over HDMI, toggle the option in your DaVinci settings and the display will automatically switch into HDR mode:

If you’re not using DaVinci Resolve Studio 12.5+, or if for whatever reason you have to route SDI out, you can inject the right metadata into the HDMI stream once you’ve converted from SDI to HDMI.  What you’ll need is an Integral by HD Fury.  This box, which is annoyingly only configurable under Windows, will add the right metadata into the HDMI connection between host and device, allowing you to flag any HDMI stream as BT.2020 and HDR.

Marketing shot of Integral by HD Fury, a box that will allow you to manually alter HDMI metadata

BE CAREFUL if you’re using the Integral, though.  It can be tempting to use the HDMI output of your computer to just patch your desktop into HDR.  This is a bad idea.  Any interface lines will also be translated into HDR, which will limit the display’s overall brightness (because you can’t switch your desktop into HDR mode), and any static elements risk burn-in.  Most HDR displays use OLED panels, and OLEDs are susceptible to burn-in!

If you are already using SDI outputs for your I/O, and want to switch to the BVM-X300 or the PRM-4220, you shouldn’t NEED to upgrade your I/O card or box to drive HDR - 10b 4:2:2 works for grading HDR.  You might want to upgrade though if you want higher output resolutions (4K vs 2K/1080), higher frame rates at the higher resolutions (50/60p) or RGB/4:4:4 instead of 4:2:2 Chroma Subsampling.

Everything else should work with your existing color correction hardware.


Emulating HDR Video

Okay, so if you’re not ready to spring for the new reference hardware, but want to emulate HDR just to get a feel for how it works, here’s a trick you can do using a standard broadcast display, or a display like the HP Dreamcolor Z27x (which I used when doing my first tests) to partially emulate HDR.

Use a reference display with native BT.2020 color support, if you can.  If you’re using Rec 709, but still want to get a feel for grading in BT.2020, there’s a fix for that using LUTs, but it’s not elegant.  You can get a feel for the HDR curve in a Rec 709 color space, but you won’t get a feel for how the primaries behave slightly differently, or how saturation works in BT.2020.**

In addition, if possible, try to use a reference display with a 10 bit panel.  There’s no cheat for this one: you either have it or you don’t.  8 bits will give you an idea of what you’re looking at, but won’t be as clear as possible.  In many cases it won’t make a difference; you’ll just lose the ability to see specific fine details.

Now, calibrate the display and your environment to emulate HDR.  Turn your maximum brightness to full (the Dreamcolor Z27x peaks at 250 nits; your display may be different).  Turn off all ambient lighting (as pitch black as possible).  Then turn down the brightness of the host interface display to the lowest setting at which it’s still usable.  Do the same for any video scopes or other external monitoring hardware that may also be hooked up to the output signal.

HP Dreamcolor Z27x HDR Approximation Settings

This should make your reference display the brightest thing in the room, by a factor of 2 to 4x.  This is important.  While the display will still lack ‘oomph’, at the very least it’ll dominate your perception of brightness.  That’s key to creating the illusion of the HDR effect in this case; without it your screen will just look muted and dull.

HDR Approximation Environment Calibration: Lights off, scopes dimmed, interface display as low as possible while retaining visibility (6%, in this case)

At this point, what we’ve done by adjusting the ambient and display brightness is emulate the greater brightness range of HDR without using a display that pushes into the HDR range.  Next we need to adjust the display’s response so that it matches the HDR curve we want to emulate.  Essentially, we need to replace the display’s native gamma curve with either the PQ or HLG curve.

DaVinci Resolve Studio's LUTs for scaling HDR into Gamma 2.4 / Gamma 2.6

This is actually pretty easy to do in DaVinci Resolve Studio - DaVinci has a set of 3D LUTs you can attach to your output that will automatically do it for you.  You’ll find them written as “HDR <value in nits> nits to Gamma <target display gamma>” (ex. HDR 1000 nits to Gamma 2.4) for the SMPTE 2084 / PQ curve, and “HLG to Gamma <target display gamma>” (ex. HLG to Gamma 2.2) for the Hybrid Log Gamma curve.

What these LUTs do, essentially, is add a 1/gamma (ex. 1/2.4) contrast curve to the output signal, combined with the selected HDR contrast curve, i.e., the one you want to see.  The gamma reciprocal adjustment combines with the display’s native or selected gamma to linearize the overall signal, as the two curves cancel each other out.  The only contrast curve you’re left with, then, is the HDR contrast curve you’ve added to the signal - the HDR curve translated into your display’s native or adapted luminance range.**
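Written out, the cancellation looks like this - a sketch of the idea, not DaVinci’s exact internals, where γ is the display’s gamma and EOTF_HDR is the HDR curve being emulated:

```latex
\text{LUT}(x) = \big(\mathrm{EOTF_{HDR}}(x)\big)^{1/\gamma}
\qquad\Rightarrow\qquad
L_{\text{display}} = \big(\text{LUT}(x)\big)^{\gamma} = \mathrm{EOTF_{HDR}}(x)
```

The display raises the LUT’s output to its own gamma, the 1/γ and γ exponents cancel, and what’s left on screen is the HDR curve scaled into the display’s native luminance range.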

Using one of these LUTs on your monitor or output will allow your display to respond as if it were operating natively with the HDR curve, though you'll notice that your display is only showing the first 100 nits of HDR curve.  We'll fix that next.

The final step is to calibrate your display’s brightness and contrast.  I add a timeline node and scale the gain and gamma adjustments to bring the full HDR range back into the display's native signal range.  As for adjusting the contrast, though, there’s not much I can say about how to do that, other than to use a reference image or two graded in the target space to calibrate the display until it ‘looks right’.  Here are a couple that I graded in SMPTE 2084 that you can use for calibration:

Mystery Box ST.2084 Calibration Images, normalized for Rec.709. Follow this link to download the DPX and individual reference PNGs.

All of this LUT business and brightness scaling, by the way, is exactly what the Canon DP-V2410 does, it just does it internally with a mode switch instead of needing manual setup.  Don’t get me wrong - in every other respect, the DP-V2410 is an amazing display, but in HDR mode it’s equivalent to this setup for HDR emulation, rather than true HDR performance.*


Emulated HDR vs. True HDR

So how does an emulated HDR display compare to a true HDR reference display?  Honestly, poorly.  It's not terrible, but emulated HDR lacks the power of true HDR - the ability to grade with the lights on and see how your footage holds up through the large punch of the whites.  With an 8 bit panel you’re going to see stepping while grading in an emulated HDR mode, because most of the region you’ll be working in ends up compressed to 50 or so code values.

This compression in the darks means you won’t get a feel for just how deep SMPTE 2084 can go while still preserving details - you can grade whole shots with full detail in the darks and a few hundred levels of contrast that land between codes 4 and 14 (full range) on a standard 8 bit display (especially an LED- or CFL-backlit LCD).

You’ll also be tempted in this mode to grade ‘hot’, that is, push more into the brights of the image, since you don’t have any actual limits for frame average light levels, like all true HDR displays do.  That’s not necessarily a problem, but you’ll run into trouble if you try to use the footage elsewhere.  You also miss the great psychological response the actual dark and light levels of a true HDR range give you.

So why emulate then?  Well, right now, HDR reference hardware is expensive.  And if you want to practice grading and mastering in HDR, without having to invest in the hardware, emulation is a fantastic place to start.  You’ll get to see how the mids and highs roll into the whites in SMPTE 2084, and develop tricks to make your grading more natural when you make the switch to a proper HDR display.  You may even be able to grade using emulated HDR so long as you have a proper HDR television to internally QC before sending out to a client - assuming your mastering of the HDR file is right, you can check it on a television and make sure it at very least looks good there, contrast and curve wise, before sending it out to a client.

Of course, mastering HDR video is a problem in and of itself, but I’m saving it for last, in Part 5 of our series.  First, though, we’re going to look at the new terminology introduced with HDR video, because even if you’ve been working with video for decades, most of this is likely to be new.

Written by Samuel Bilodeau, Head of Technology and Post Production


Endnotes

* The day I went to post this I found Canon had updated their website to include the Canon DP-V2420, which they claim supports full HDR in both the ST.2084 and the HLG specifications, and to be Dolby Vision qualified; I haven't had time to look into these claims.

The Dolby PRM-4220 requires a workaround to get it to operate in an HDR mode.  It can be loaded with a custom gamma curve that can match the HDR EOTF, or you can add a custom LUT that scales the 0.01 - 600 nits of SMPTE ST2084 into gamma 2.4 while operating the display in 600 nits mode.

The Dolby Pulsar and the Dolby PRM-32FHD are both HDR capable displays, operating at 4000 and 2000 nits respectively, but I elected not to mention them because they are not, to the best of my knowledge, generally available for purchase.

** If you’re using the LUT on your output to emulate the HDR curve, but only have a Rec. 709 display and want to get a feel for BT.2020, you may consider using a BT.2020 to Rec. 709 LUT and stacking it with the gamma compensating LUT.  In DaVinci you can do this by adding one LUT to the output and a second LUT for the monitor, or by attaching one of the LUTs to a global node for a timeline.  As a last resort, you can attach as many LUTs as you want to individual grades.  You should be able to do something similar in pretty much all other color grading or mastering software, such as Scratch or Nuke.