Adobe Premiere CC 2017 - Real World Feature Review

About two weeks ago Adobe released their 2017 update to Creative Cloud, and because of a couple of projects that I happened to be working on at the time, I figured I’d download it immediately to see if I could take advantage of some of the new features.

If you want the TL;DR review, the short version is this: most of the features are genuine improvements, ranging from incredibly useful to just minor time savers; a few, though, are utter crap.

Side note: I considered talking about the new features found in Adobe After Effects, but really, there’s not much to say other than: they work?  Largely they’re just performance increases accomplished by moving things to the GPU, broader native format support, time-saving templates, and better integration with a few other Adobe CC products.  If you look at their new features page, you should be able to pretty quickly figure out which ones could be important to you, and there’s not much else to say about them other than “they work”.

Premiere is a different animal though, and I can’t say that all of the new features work properly.  But let’s start with the positives, of which there are many.

First and foremost, 8K native R3D imports.

This was expected, and necessary.  And while not ‘featured’ as part of their summaries, it is there and it works.  That’s a boon to all of us shooting on Helium sensors, and to our clients.  So far we’ve been running 8K ProRes or 2K proxies for our clients so they could edit with our footage; now they can take care of mastering with the 8K themselves (if they want).  So definitely a plus.

Second, the new native compression engine supporting DNxHD and DNxHR.

To me, this is a big plus.  I keep looking for a solid alternative to ProRes for my workflows, and while they don’t yet support the DNxHR 444, they do solidly support DNxHR HQX.  Since a significant portion of my usual workflows are built on 12 bits per channel and roundtripping between Adobe and DaVinci, having a solid 12 bit 422 cross-platform alternative to ProRes may finally let me get rid of DPX.

Third, the new audio tools.  Oh, thank god, the new audio tools.

I happen to be working this week on a short project doing sound design and light mixing (I’ll link to it when it’s up) and the new audio tools in Premiere have been a massive time saver.  If you’ve ever tried to do audio work directly in Premiere before, you’ll know how maddening it’s been dealing with their unresponsive skeuomorphic effect control knobs.  Even doing basic EQ meant flagging values on and off and struggling to get things as precise as you wanted.

Adobe Premiere CC 2015.3 EQ

Adobe Premiere CC 2015.3 Pitch Shifter

But the new audio UX is… well, fantastic.  I really can’t praise it enough.  The effect controls are still skeuomorphic (which I actually think is important in this case) but look classier, and more importantly actually respond really quickly to the changes you want to make.  They’ve expanded the toolset and the effects run more quickly.  I couldn’t be happier - this alone saved me hours of frustration and headaches this week.

Adobe Premiere CC 2017 EQ

Adobe Premiere CC 2017 Pitch Shifter

Fourth, the new VR tools.

So the same project I was doing sound design on happens to be a stereoscopic VR project.  So immediately, the promise of new VR tools was exciting - what more would they let me do, I wondered?

Install, fire it up, and… not much, actually.

Here’s basically all of the new VR tools I could find:

  • Automatically detect the VR properties of imported footage, but only if they were properly flagged with metadata (marginally useful at best)

  • Automatically assign VR properties to sequences if you create a new sequence based on properly flagged VR footage.

  • Manually assign VR properties to sequences, allowing you to flag stereoscopic (and the type of 3D used, if any). The sequence flagging allows for Premiere to automatically flag for VR on export, when supported.

  • Embed VR metadata into mp4 files in the H264 encoder module, instead of just QuickTime.

  • Connect seamlessly to an Oculus or other VR headset with full 360 / 3D output.

Is this 2015.3 or 2017?


And that’s… it.  Really?  I mean, there is actually no difference between the viewers in 2015.3 and 2017 - both handle stereoscopic properly; assigning the VR flags to sequences and then embedding the necessary metadata on export is VERY useful.  But I would really LOVE to see an editor trying to edit with a VR headset.  Or color correct, for that matter.  Reviewing what you’ve got, sure, but not for the bulk of what you’re doing.

I should note that Premiere chokes on stereoscopic VR files at resolutions greater than 3K by 3K, which makes mastering footage from the GoPro Odyssey interesting, since it comes back from the Google Jump VR system as 8K by 8K mp4s.  Even converting to a full ProRes 422 intermediate at 4K by 4K proved too data heavy for Premiere to keep up with on an 8 Core MacPro.

But it’s not only VR performance that’s an issue: it’s still missing a whole bunch of features that would really make it a useful VR tool.  Where are my VR aware transitions?  What about VR specific effects, like simple reframing?  Where is my VR support in After Effects?  Why can’t I manually flag footage as VR if it didn’t have the embedded metadata?  What about recognizing projections other than equirectangular?  They have a drop down for changing projection type on a timeline, but equirectangular is the only option.  What about native ambisonic audio support? Or even flagging for ambisonic audio on export?

Don’t get me wrong, what they’ve done isn’t bad; it does work, and is an improvement.  It’s just that the tools they added were very tiny improvements on what was already there.  And I know there are plugins (I use them) that give Premiere and After Effects many of the VR features that I need to actually work in VR.  But it’s really difficult, almost impossible, to get by without the 3rd party plugins.

Maybe I’m just jaded and judgmental, in part because of my reaction to the HDR 'tools' they announced, but when you advertise “New VR Support” as the second item on the new features list, it had better be good support.  Like, you know, actually work as well in VR as you can in standard 2D video.  If I, as a professional, require third party plugins to your program to make it work at the most basic level, it’s not the turnkey solution you advertise.  I’m sure that more tools are in the works, but for now, it feels lackluster - an engineering afterthought rather than an intelligent feature designed for professionals.

But don’t worry, that’s not their most useless feature change.  Let’s talk about their new HDR tools.

What. The. Hell.

This is how using the new HDR 'tools' in Premiere 2017 feels.

I mean that.  With all of my heart.

I might be a little biased on the subject, but honestly I question who in their right mind decided that what they included was actually something useful.

It’s not.

It’s utter shit.

But worse than that, as-is it’s more likely to hurt the broader adoption of HDR than to help it.

And no, I’m not exaggerating.

On paper, the new HDR tools sound amazing.  HDR metadata on export!  HDR grading tools!  HDR Scopes!  Full recognition of HDR files!  Yay!

In practice, all of these are useless.

Let me give you a rundown of what the new HDR tools actually do.

Premiere now recognizes SMPTE ST.2084 HDR files, which is awesome.  But only if the proper metadata is already embedded in the video stream, and then only if it’s an HEVC deliverable file.  Not a ProRes, DPX, or other intermediate file; only HEVC.  And like VR support above, there’s no way to flag footage as already being in HDR or using BT.2020 color primaries.  Which ends up being a massive problem, which I’ll get to in a minute.

When you insert properly flagged HDR footage into a sequence, you get a pleasant surprise: hard clipping at 120 nits on your viewer or connected external display.  It’s honestly the worst clipping I’ve seen.  And there’s no way to turn it off.  If you go to export the clip into any format without the HDR metadata flag enabled on export, you get the same hard clipping.  And since you can only flag for HDR if you’re exporting to HEVC, you can’t export HDR footage graded or processed through Premiere in DPX, ProRes, TIFF, OpenEXR or any other intermediate format.

This is why in my article on Grading and Mastering HDR I mention that it’s really important to be using a color space agnostic color grading system.  When the application includes color management that can’t be disabled, your options become very limited.

Also, side note, their HEVC encoder needs work - it’s very slow at the 10 bits you need for HDR export.  I expect it’s better on the Intel Kaby Lake chips that include hardware 10 bit HEVC encoder support that, oh wait, don’t exist for professionals yet (2017 5K iMac maybe?)

But at least with the metadata flagging you can bypass the FFMPEG / x265 encoder that you’ve needed up to this point to properly encode HDR for delivery, right?

Why would you think that?  Of course you can’t.

Because if you bring in a ProRes, DPX, or other intermediate file into Premiere, there’s no way to flag it as HDR and it doesn’t recognize embedded metadata saying it’s HDR like DaVinci and YouTube do.  What happens is that if you use these intermediates as a source (individually or assembled in a sequence) and you flag for HDR on export, Premiere runs a transform on the footage that scales it into the HDR range as if it’s SDR footage.

12 Bit ProRes 4444 HDR Intermediate in Timeline with 8 Bit Scope showing proper range of values

12 Bit ProRes 4444 HDR Intermediate in Timeline with HDR Scope showing how Premiere CC 2017 interprets the intermediate if you flag for HDR on export

When is that useful? If I have a graded SDR sequence that I want to encode into the PQ HDR space, while keeping 100% of the limits of an SDR image.  Because why the hell not.

But never fear!  Premiere has included new color grading tools for HDR!

Well, they aren’t horrible, which I suppose is a compliment?

How to enable HDR Grading in Premiere 2017

To enable HDR Grading you need to change three different settings.  From the Lumetri context menu in your Lumetri Panel, you need to select “High Dynamic Range” to enable the HDR features; on the scopes you’ll need to switch the scale from “8 Bit” to “HDR” (and BT.2020 from the scope settings); and if you actually want to see those HDR values on the scope, you’ll need to enable the flag “Maximum Bit Depth” in your Sequence Settings.  I’m sure there’s a fantastic engineering explanation for that last one, but it’s not exactly intuitive or obvious, and took me a bit of hunting to figure it out.

Maximum Bit Depth needs to be turned on in Sequence Settings to enable proper HDR Scopes

HDR Scopes WITHOUT Maximum Bit Depth Flag

HDR Scopes WITH Maximum Bit Depth Flag

Once you’ve enabled HDR grading from the Lumetri drop down menu, you’ll get a few new options in your grading panels.  “HDR White” and “HDR Specular” become available under the Basic Correction panel, “HDR Range” becomes available under the Curves panel, and “HDR Specular” becomes available under the Color Wheels panel.

The HDR White setting seems to control how the other sliders of the Basic Correction panel behave, almost like changing the scale.  The higher the HDR White value, the less of an effect exposure adjustments have and the greater the effect of contrast adjustments.  The HDR Specular slider controls just the brightest whites, almost like the LOG adjustment I use in DaVinci Resolve Studio.  This applies to both the slider under Basic Correction, and the wheel under the Color Wheels panel.  HDR Range seems to change the scale of the curves similar to how the HDR White slider does for the basic corrections.

All of this, by the way, I figured out by watching the scopes, not the output image.  I’ve tried hooking up a second display to the computer and hooking up our BVM-X300 through our Ultrastudio 4K to Premiere, but to no avail - the output image is always clipped to standard video levels and output in gamma 2.4.

Which, if you ask me, severely defeats the purpose of having HDR grading tools to begin with. Here’s a great idea: let’s allow people to grade HDR, but not see what they’re grading.  Which is like trying to use a table saw blindfolded.  Because that’s a thing people do, right?  Which brings me back to my original premise: What. The. Hell.

When you couple that little gem with the hard clip scaling, you realize that the only reason the color grading features are in this particular version is to make the process of cross grading from SMPTE ST.2084 into SDR easier, and nothing else.

No fields for adding HDR10 Compliant Metadata on Export. That's okay, you shouldn't use their exporter anyway (at least not this version)

Oh, and one last thing, the real kicker: you can’t even export HDR10 compliant files.  Yes, I know I said that in the HEVC encoder you can flag for ST.2084, but you can’t add any MaxFALL, MaxCLL, or Master Display metadata.  And yes, I double checked that Premiere didn’t casually put those into the file without telling you (it doesn’t).

And it has zero support for Hybrid Log Gamma.  Way to pick a side, Adobe.

So passions aside, let’s run down the list again of new HDR tools and what they do:

  1. Recognize SMPTE ST.2084 files, but only when already properly flagged in HEVC streams and no other codec or format.

  2. Export minimal SMPTE ST.2084 metadata to flag for HDR, but only works if your source files are already in the HEVC format and already properly HDR flagged (see #1), or if they’re graded in HDR in the timeline, which you can’t see. Which renders their encoder effectively useless.

  3. Enable HDR grading through a convoluted process, with a minimal but useful set of tools. But you can’t see what you’re doing, so I'm not sure why they're there.

  4. There is no bullet point 4. That’s literally all it does.

The question that I have that I keep coming back to is “who do they think is going to use these tools?”  It feels like the entire feature set was a “well, we need to include HDR, so get it in there”.  But unlike the VR tools that you can kind-of build into, these HDR “tools” (I use the word loosely) are really problematic, not just because the toolset is incomplete but because the way that the current tools are implemented is actually harmful to a professional workflow.

Call it simple feature bandwagoning, or engineers that didn’t consult real creative professionals, or blame it on whatever reason you will.  But the fact is, this ‘feature’ is utter shit, which to me sours the whole release, just a little.

My biggest concern here is that while someone like me, who's been working with HDR for a while now, can tell that these will hurt my workflow, Premiere is an accessible editing platform for everyone from amateurs to professionals.  And anyone looking to get into HDR video may try to use these tools as their way in, and their results are going to be terrible.  God awful.  And that hurts everyone - why would we want to adopt HDR when 'most of what people can do' (meaning the amateurs and prosumers who don't know any better) looks bad?

So basically, if Premiere is part of your HDR workflow, don't even think about using their new 'tools'.

HDR Rant over, let’s bring this back to the positive.

Just to reiterate, the new audio tools in Premiere CC 2017 are fantastic.  I can't emphasize that enough.  Most of the rest of the features added are pretty good.  The new team projects collaboration tools, though I haven’t had a chance to use them, appear to work well (though are still in beta).  The new captions are useful, the new visual keyboard layout manager fantastic (though WAAAY long overdue!), and the other under-the-hood adjustments have improved performance.

Should you upgrade?  Yes!  It’s a great upgrade!  Despite my gripes I’m overall happy with what they did!

Just don’t try to use it for HDR yet, and be aware that the new VR tools aren’t really that exciting.

Written by Samuel Bilodeau, Head of Technology and Post Production

HDR Video Part 3: HDR Video Terms Explained

To kick off our new weekly blog, we’ve decided to publish five posts back-to-back on the subject of HDR video.  This is Part 3: HDR Video Terms Explained.

In HDR Video Part 1 we explored what HDR video is, and what makes it different from traditional video.  In Part 2, we looked at the hardware you need to view HDR video in a professional environment.  Since every new technology comes with a new set of vocabulary, here in Part 3, we’re going to look at all of the new terms that you’ll need to know when working with HDR video.  These fall into three main categories: key terms, standards, and metadata.

Key Terms

HDR / HDR Video - High Dynamic Range Video - Any video signal or recording using one of the new transfer functions (PQ or HLG) to capture, transmit, or display a dynamic range greater than the traditional CRT gamma or BT.1886 Gamma 2.4 transfer functions at 100-120 nits reference.

The term can also be used as a compatibility indicator, to describe any camera capable of capturing and recording a signal this way, or a display that either exhibits the extended dynamic range natively or is capable of automatically detecting an HDR video signal and renormalizing the footage for its more limited or traditional range.

SDR / SDR Video - Standard Dynamic Range Video - Any video signal or recording using the traditional transfer functions to capture, transmit, or display a dynamic range limited to the traditional CRT gamma or BT.1886 Gamma 2.4 transfer functions at 100-120 nits reference. SDR video is fully compatible with all pre-existing video technologies.

nit - A unit of brightness density, or luminance. It’s the colloquial term for the SI units of candelas per square meter (1 nit = 1 cd/m2). It directly converts with the United States customary unit of foot-lamberts (1 fl = 1/π cd/foot2), with 1 fl = 3.426 nits = 3.426 cd/m2.
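If you ever need to run the conversion yourself, it’s simple arithmetic; here’s a quick Python sanity check (the constants are just the SI definitions of the two units):

```python
import math

# 1 foot-lambert = 1/pi candela per square foot; 1 ft = 0.3048 m.
# Dividing by the area of a square foot in square meters gives cd/m^2 (nits).
FT_TO_M = 0.3048

def fl_to_nits(fl: float) -> float:
    """Convert foot-lamberts to nits (cd/m^2)."""
    return fl * (1 / math.pi) / (FT_TO_M ** 2)

def nits_to_fl(nits: float) -> float:
    """Convert nits back to foot-lamberts."""
    return nits * (FT_TO_M ** 2) * math.pi

print(round(fl_to_nits(1), 3))   # 3.426
print(round(fl_to_nits(14), 1))  # ~48 - digital cinema reference white
```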

Note that the peak nits / foot-lamberts value of a projector is often lower than that of a display, even in HDR video: because a projected image covers more area and the image is viewed in a darker environment than consumers’ homes, the same psychological and physiological responses exist at lower light levels.

For instance, a typical digital cinema screen will have a maximum brightness of 14 fl or 48 cd/m2 vs. the display average of 80-120 nits for reference and 300 for LCDs and Plasmas in the home. HDR cinema actual light output ranges in theaters are adjusted accordingly, since 1000 cd/m2 on a theater’s 30 foot screen is perceived to be far brighter than on a 65” flat screen.

EOTF - Electro-Optical Transfer Function - A mathematical equation or set of instructions that translate voltages or digital values into brightness values. It is the inverse of the Optical-Electro Transfer Function, or OETF, that defines how to translate brightness levels into voltages or digital values.

Traditionally, the OETF and EOTF were incidental to the behavior of the cathode ray tube, which could be approximated by a 0-1 exponential curve with a power value (gamma) of 2.4. Now they are defined values like “Linear”, “Gamma 2.4”, or any of the various LOG formats. OETFs are used at the acquisition end of the video pipeline (by the camera) to convert brightness values into voltages/digital values, and EOTFs are used by displays to translate voltages/digital values into brightness values for each pixel.
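A classic gamma pair makes the EOTF/OETF relationship concrete - the EOTF raises the normalized signal to a power, and the OETF is its inverse:

```python
# Simple display-gamma transfer functions on normalized 0-1 values.
def gamma_eotf(signal: float, gamma: float = 2.4) -> float:
    """Signal value -> relative light output."""
    return signal ** gamma

def gamma_oetf(light: float, gamma: float = 2.4) -> float:
    """Relative light level -> signal value (inverse of the EOTF)."""
    return light ** (1 / gamma)

# A mid-range code value produces well under half the display's peak light:
print(round(gamma_eotf(0.5), 4))                 # ~0.1895
# Applying OETF then EOTF (or vice versa) round-trips the value:
print(round(gamma_eotf(gamma_oetf(0.73)), 4))    # 0.73
```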

PQ - Perceptual Quantization - Name of the EOTF curve developed by Dolby and standardized in SMPTE ST.2084, designed to allocate bits as efficiently as possible with respect to how human vision perceives changes in light levels.

Perceptual Quantization (PQ) Electro-Optical Transfer Function (EOTF) with Gamma 2.4 Reference

Dolby’s tests established the Barten Threshold (also called the Barten Limit or the Barten Ramp): the point at which the difference in light levels between two adjacent values becomes visible.

PQ is designed so that when operating at 12 bits per channel, the stepping between single digital values is always below the Barten threshold, for the whole range from 0.0001 to 10,000 nits, without being so far below that threshold that the resolution between bits is wasted. At 10 bits per channel, the PQ function sits just slightly above the Barten threshold, where in some (idealized) circumstances stepping may be visible, but in most cases should be unnoticeable.
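For the curious, the PQ curve is compact enough to write down. Here’s a sketch of the ST.2084 EOTF and its inverse in Python, using the constants from the standard:

```python
# SMPTE ST.2084 (PQ) constants, as defined in the standard.
m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(signal: float) -> float:
    """PQ EOTF: normalized signal value in [0,1] -> luminance in nits."""
    p = signal ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

def pq_oetf(nits: float) -> float:
    """Inverse EOTF: luminance in nits -> normalized signal value."""
    y = (nits / 10000) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

# Full-range signal 1.0 decodes to the 10,000 nit ceiling:
print(pq_eotf(1.0))              # 10000.0
# SDR reference white (100 nits) lands around mid-range (~0.508),
# which is why PQ leaves so much headroom for highlights:
print(round(pq_oetf(100), 3))
```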

Barten Thresholds for 10 bit and 12 bit Rec. 1886 and PQ curves. Source

For comparison, current log formats waste bits on the low end (making them suitable for acquisition to preserve details in the darks, but not transmission and exhibition), while the current standard gamma functions waste bits on the high end, while creating stepping in the darks.

HDR systems using PQ curves are not directly backwards compatible with standard dynamic range video.

HLG - Hybrid Log Gamma - A competing EOTF curve to PQ / SMPTE ST.2084 designed by the BBC and NHK to preserve a small amount of backwards compatibility.

Hybrid Log Gamma (HLG) Electro-Optical Transfer Function (EOTF) with Gamma 2.4 Reference

HLG vs. SDR gamma curve with and without knees. Source

On this curve, the first 50% of the curve follows the output light levels of standard Gamma 2.4, while the top 50% steeply diverges along a log curve, covering the brightness range from about 100 to 5000 nits. As with PQ, 10 bits per channel is the minimum permitted.

HLG does not expand the range of the darks like the PQ curve does, and as an unfortunate side effect of the backwards compatibility, coupled with the MaxFALL limits necessitated by the technology of HDR displays, whites can appear grey when viewed in standard gamma 2.4, especially when compared to footage natively graded in gamma 2.4.
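For reference, here’s a sketch of the HLG OETF as published in ARIB STD-B67 (constants from the spec); the square-root segment is the backwards compatible half, and the log segment is the HDR half:

```python
import math

# ARIB STD-B67 / HLG OETF constants.
a = 0.17883277
b = 1 - 4 * a                   # 0.28466892
c = 0.5 - a * math.log(4 * a)   # ~0.55991073

def hlg_oetf(light: float) -> float:
    """HLG OETF: normalized scene light in [0,1] -> normalized signal value."""
    if light <= 1 / 12:
        # Lower segment: a simple square root, close to a conventional camera curve.
        return math.sqrt(3 * light)
    # Upper segment: logarithmic, covering the extended highlight range.
    return a * math.log(12 * light - b) + c

# The two segments meet exactly at signal level 0.5 - the "50%" crossover:
print(hlg_oetf(1 / 12))          # 0.5
# Peak scene light maps to the top of the signal range:
print(round(hlg_oetf(1.0), 3))   # ~1.0
```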


SMPTE ST.2084 - The first official standardization of an HDR video transfer function by a standards body, and at the moment (October 2016) the most widely implemented. SMPTE ST.2084 officially defines the PQ EOTF curve for translating a set of 10 bit or 12 bit per channel digital values into a brightness range of 0.0001 to 10,000 nits. SMPTE ST.2084 provides the basis for the HDR 10 Media Profile and Dolby Vision implementation standards.

This is the transfer function to select in HEVC encoding to signal a PQ HDR curve.

ARIB STD-B67 - Standardized implementation of Hybrid Log Gamma by the Association of Radio Industries and Businesses. Defines the use of the HLG curve, with 10 or 12 bits per channel color and the same color primaries as BT.2020 color space.

This is the transfer function to select in HEVC encoding to signal an HLG HDR curve.

ITU-R BT.2100 - ITU-R Recommendation BT.2100 - the ITU-R’s standardization of HDR for television broadcast. Ratified in 2016, this document is the HDR equivalent of ITU-R Recommendation BT.2020 (Rec.2020 / BT.2020). When compared with BT.2020, BT.2100 includes the FHD (1920x1080) frame size in addition to the UHD and FUHD, and defines two acceptable transfer functions (PQ and HLG) for HDR broadcast, instead of the single transfer function (BT.1886 equivalent) found in BT.2020.

BT.2100 uses the same color primaries and the same RGB to YCbCr signal format transform as BT.2020, and includes similar permissions of 10 or 12 bits per channel as BT.2020, although BT.2100 also permits full range code values in 10 or 12 bits where BT.2020 is limited to the traditional legal (video) range.

BT.2100 also includes considerations for a chroma subsampling methodology based on the LMS color space (human visual system tristimulus values), called ICtCp, and a transform for ‘gamma weighting’ (in the sense of the PQ and HLG equivalent of gamma weighting) the LMS response as L’M’S’.

HDR 10 Media Profile - The Consumer Technology Association (CTA)’s official HDR video standard for use in HDR televisions. HDR 10 requires the use of the SMPTE ST.2084 EOTF, BT.2020 color space, 10 bits per channel, 4:2:0 chroma subsampling, and the inclusion of SMPTE ST.2086 and associated MaxCLL and MaxFALL metadata values.

HDR 10 Media Profile defines the signal a television must be able to decode for the term “HDR compatible” to be included in its marketing.

Note that “HDR compatibility” does not necessarily mean the ability to display the higher dynamic range, simply the ability to decode and renormalize footage in the HDR 10 specification for whatever the dynamic range and color space of the display happen to be.

Dolby Vision - Dolby’s proprietary implementation of the PQ curve, for theatrical setups and home devices. Dolby Vision supports both the BT.2020 and the DCI-P3 color space, at 10 and 12 bits per channel, for home and theater, respectively.

The distinguishing feature of Dolby Vision is the inclusion of shot-by-shot transform metadata that adapts the PQ graded footage into a limited range gamma 2.4 or gamma 2.6 output for SDR displays and projectors. The colorist grades the film in the target HDR space, and then runs a second adaptation pass to adapt the HDR grade into SDR, and the transform is saved into the rendered HDR output files as metadata. This allows for a level of backwards compatibility with HDR transmitted footage, while still being able to make the most of the SDR and the HDR ranges.

Because Dolby Vision is a proprietary format, it requires a license issued by Dolby and the use of qualified hardware, which at the moment (October 2016) means only the Dolby PRM-4220, the Sony BVM-X300, or the Canon DP-V2420 displays.


MaxCLL Metadata - Maximum Content Light Level - An integer metadata value defining the maximum light level, in nits, of any single pixel within an encoded HDR video stream or file. MaxCLL should be measured during or after mastering. However, if you keep your color grade within the MaxCLL of your display’s HDR range, and add a hard clip for the light levels beyond your display’s maximum value, you can use your display’s maximum CLL as your metadata MaxCLL value.

MaxFALL Metadata - Maximum Frame Average Light Level - An integer metadata value defining the maximum average light level, in nits, for any single frame within an encoded HDR video stream or file. MaxFALL is calculated by averaging the decoded brightness values of all pixels within each frame (that is, converting the digital value of each pixel into its corresponding nits value, and averaging all of the nits values within each frame).
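As a hypothetical sketch (assuming frames whose pixel values have already been decoded to nits, e.g. through the PQ EOTF), the MaxCLL / MaxFALL calculation looks like this:

```python
# Sketch of the MaxCLL / MaxFALL measurement described above:
# MaxCLL is the brightest single pixel anywhere in the stream;
# MaxFALL is the highest per-frame average brightness.
def compute_maxcll_maxfall(frames):
    """frames: iterable of 2D lists of pixel luminance values, in nits."""
    max_cll = 0.0
    max_fall = 0.0
    for frame in frames:
        pixels = [nits for row in frame for nits in row]
        max_cll = max(max_cll, max(pixels))                 # brightest pixel
        max_fall = max(max_fall, sum(pixels) / len(pixels)) # brightest frame avg
    return round(max_cll), round(max_fall)

# Two tiny 2x2 "frames": the first has one specular highlight at 1000 nits.
frames = [
    [[100.0, 100.0], [100.0, 1000.0]],  # frame average 325 nits
    [[50.0, 50.0], [50.0, 50.0]],       # frame average 50 nits
]
print(compute_maxcll_maxfall(frames))  # (1000, 325)
```

Note how a single highlight drives MaxCLL to 1000 while MaxFALL stays at 325 - which is why MaxFALL is usually the lower of the two.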

MaxFALL is an important value to consider in mastering and color grading, and is usually lower than the MaxCLL value. The two values combined define how bright any individual pixel within a frame can be, and how bright the frame as a whole can be.

Displays are limited differently on both of those values, though typically only the peak (single pixel) brightness of a display is reported. As pixels get brighter and approach their peak output, they draw more power and heat up. With current technology levels, no display can push all of its pixels into the maximum HDR brightness level at the same time - the power draw would be extremely high, and the heat generated would severely damage the display.

As a result, displays will abruptly notch down the overall image brightness when the frame average brightness exceeds the rated MaxFALL, to keep the image under the safe average brightness level, regardless of what the peak brightness of the display or encoded image stream may be.

For example, while the BVM-X300 has a peak value of 1000 nits for any given pixel (MaxCLL = 1000), on average, the frame brightness cannot exceed about 180 nits (MaxFALL = 180). The MaxCLL and MaxFALL metadata included in the HDR 10 media profile allows consumer displays to adjust the entire stream’s brightness to match their own display limits.

SMPTE ST.2086 Metadata - Metadata describing the display used to grade the HDR content. SMPTE ST.2086 includes information on six values: the three RGB primaries used, the white point used, and the display’s maximum and minimum light levels.

The RGB primaries and the white point values are recorded as ½ of their (X,Y) values from the CIE XYZ 1931 chromaticity standard, and expressed as the integer portion of the first five significant digits, without a decimal place. Or, in other words:

f(XPrimary) = 100,000 × XPrimary ÷ 2

f(YPrimary) = 100,000 × YPrimary ÷ 2.

For example, the (X,Y) value of DCI-P3’s ‘red’ primary is (0.68, 0.32) in CIE XYZ; in SMPTE ST.2086 terms it’s recorded as R(34000,16000):

f(XR) = 100,000 × 0.68 ÷ 2 = 34,000

f(YR) = 100,000 × 0.32 ÷ 2 = 16,000

Maximum and minimum luminance values are recorded as nits × 10,000, so that they too end up as positive integers. For instance, a display like the Sony BVM-X300 with a range from 0.0001 to 1000 nits would record its luminance as L(10000000,1).
The full ST.2086 metadata is ordered Green, Blue, Red, White Point, Luminance, with the values as

G(x,y)B(x,y)R(x,y)WP(x,y)L(max,min)
all strung together, and without spaces. For instance, the ST.2086 for a DCI-P3 display with a maximum luminance of 1000 nits, a minimum of 0.0001 nit, and a white point of D65 would be:

G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)
while a display like the Sony BVM-X300, using BT.2020 primaries with a white point of D65 and the same max and min brightness, would be:

G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(10000000,1)
In an ideal situation, it would be best to use a colorimeter and measure the display’s native R-G-B and white point values; in all practicality, however, the RGB and white point values of the standard your mastering display conforms to are sufficient to communicate the mastering environment to the end display.
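The whole computation can be sketched in a few lines of Python; the G/B/R/WP/L string it produces is the same form encoders like x265 expect for their master-display parameter (the helper names here are my own):

```python
# Build an ST.2086 mastering-display string from CIE 1931 chromaticities,
# using the scaling described above: chromaticity x 100,000 / 2,
# luminance x 10,000, all as integers.

# (x, y) chromaticities for the BT.2020 primaries and a D65 white point.
BT2020 = {"G": (0.170, 0.797), "B": (0.131, 0.046), "R": (0.708, 0.292)}
D65 = (0.3127, 0.3290)

def st2086_string(primaries, white, max_nits, min_nits):
    def chroma(xy):
        # f(v) = 100,000 x v / 2, kept as an integer
        return f"({round(xy[0] * 50000)},{round(xy[1] * 50000)})"
    parts = [f"{name}{chroma(primaries[name])}" for name in ("G", "B", "R")]
    parts.append(f"WP{chroma(white)}")
    parts.append(f"L({round(max_nits * 10000)},{round(min_nits * 10000)})")
    return "".join(parts)  # G, B, R, WP, L - no spaces

# A BVM-X300-style display: BT.2020 primaries, D65, 0.0001 - 1000 nits.
print(st2086_string(BT2020, D65, 1000, 0.0001))
# G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(10000000,1)
```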

That should be a good overview of the new terms that HDR video has (so far) introduced into the extended video technology vocabulary, and a good starting point for diving deeper into learning about and using HDR video on your own, at the professional level.

In Part 4 of our series we’re going to take the theory of HDR video and start talking about the practice, and look specifically at how to shoot with HDR in mind.

Written by Samuel Bilodeau, Head of Technology and Post Production