Adobe Premiere CC 2017 - Real World Feature Review

About two weeks ago Adobe released their 2017 update to Creative Cloud, and because of a couple of projects that I happened to be working on at the time, I figured I’d download it immediately to see if I could take advantage of some of the new features.

If you want the TL;DR review, the short version is this: most of the new features are genuine improvements, ranging from incredibly useful to minor time savers; a few, though, are utter crap.

Side note: I considered talking about the new features in Adobe After Effects, but really, there’s not much to say other than: they work.  Largely they’re performance increases from moving things to the GPU, broader native format support, time-saving templates, and better integration with a few other Adobe CC products.  If you look at their new features page, you should be able to figure out pretty quickly which ones matter to you.

Premiere is a different animal though, and I can’t say that all of the new features work properly.  But let’s start with the positives, of which there are many.

First and foremost, 8K native R3D imports.

This was expected, and necessary.  And while not ‘featured’ as part of their summaries, it is there and it works.  That’s a boon to all of us shooting on Helium sensors, and to our clients.  So far we’ve been running 8K ProRes or 2K proxies for our clients so they could edit with our footage; now they can take care of mastering with the 8K themselves (if they want).  So definitely a plus.

Second, the new native compression engine supporting DNxHD and DNxHR.

To me, this is a big plus.  I keep looking for a solid alternative to ProRes for my workflows, and while the new engine doesn’t yet support DNxHR 444, it does solidly support DNxHR HQX.  Since a significant portion of my usual workflows is built on 12 bits per channel and roundtripping between Adobe and DaVinci, having a solid 12 bit 422 cross-platform alternative to ProRes may finally let me get rid of DPX.
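If you’re wondering what that swap looks like in practice, here’s a minimal sketch of the kind of transcode I mean, driving ffmpeg’s DNxHR encoder from Python.  The file names are placeholders and it assumes ffmpeg is installed and on your PATH; check your own build’s DNxHR support before trusting it with a real master.

```python
# Sketch: transcode a ProRes / DPX master to DNxHR HQX with ffmpeg.
# Assumes ffmpeg is installed and on PATH; file names are placeholders.
import subprocess

def to_dnxhr_hqx(src: str, dst: str) -> None:
    """Re-encode a master file as DNxHR HQX in a QuickTime container."""
    cmd = [
        "ffmpeg", "-i", src,
        "-c:v", "dnxhd",            # ffmpeg's DNxHD/DNxHR encoder
        "-profile:v", "dnxhr_hqx",  # the high-bit-depth 4:2:2 DNxHR flavor
        "-c:a", "pcm_s24le",        # uncompressed 24 bit audio
        dst,                        # ffmpeg negotiates a pixel format the profile supports
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    to_dnxhr_hqx("graded_master.mov", "graded_master_dnxhr.mov")
```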

Third, the new audio tools.  Oh, thank god, the new audio tools.

I happen to be working this week on a short project doing sound design and light mixing (I’ll link to it when it’s up) and the new audio tools in Premiere have been a massive time saver.  If you’ve ever tried to do audio work directly in Premiere before, you’ll know how maddening it’s been dealing with their unresponsive skeuomorphic effect control knobs.  Even doing basic EQ meant flagging values on and off and struggling to get things as precise as you wanted.

Adobe Premiere CC 2015.3 EQ

Adobe Premiere CC 2015.3 Pitch Shifter

But the new audio UX is… well, fantastic.  I really can’t praise it enough.  The effect controls are still skeuomorphic (which I actually think is important in this case) but look classier, and more importantly they actually respond quickly to the changes you want to make.  They’ve expanded the tool set and the effects run more quickly.  I couldn’t be happier - this alone saved me hours of frustration and headaches this week.

Adobe Premiere CC 2017 EQ

Adobe Premiere CC 2017 Pitch Shifter

Fourth, the new VR tools.

The same project I was doing sound design on happens to be a stereoscopic VR project, so the promise of new VR tools was immediately exciting - what more would they let me do, I wondered?

Install, fire it up, and… not much, actually.

Here are basically all of the new VR tools I could find:

  • Automatically detect the VR properties of imported footage, but only if it was properly flagged with metadata (marginally useful at best)

  • Automatically assign VR properties to sequences if you create a new sequence based on properly flagged VR footage.

  • Manually assign VR properties to sequences, allowing you to flag them as stereoscopic (and the type of 3D used, if any). The sequence flagging allows Premiere to automatically flag for VR on export, when supported.

  • Embed VR metadata into mp4 files in the H264 encoder module, instead of just QuickTime (see the sketch after this list for the equivalent manual injection).

  • Connect seamlessly to an Oculus or other VR headset with full 360 / 3D output.
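For context, that mp4 flagging is the same job we’ve been doing by hand with Google’s open source Spatial Media Metadata Injector; something like the sketch below, where the file names and the top/bottom stereo layout are just placeholders for illustration.

```python
# Sketch: flag an equirectangular, top/bottom stereoscopic mp4 for VR playback
# using Google's Spatial Media Metadata Injector (github.com/google/spatial-media).
# Assumes the spatialmedia module is available locally; file names are placeholders.
import subprocess

subprocess.run(
    [
        "python", "spatialmedia",
        "-i",                      # inject metadata (rather than just inspect the file)
        "--stereo=top-bottom",     # stereoscopic layout of this particular source
        "360_master.mp4",
        "360_master_injected.mp4",
    ],
    check=True,
)
```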

Is this 2015.3 or 2017?


And that’s… it.  Really?  I mean, there is actually no difference between the viewers in 2015.3 and 2017, and both handle stereoscopic properly.  Assigning the VR flags to sequences and then embedding the necessary metadata on export is VERY useful.  But I would really LOVE to see an editor try to cut an entire piece wearing a VR headset.  Or color correct, for that matter.  It’s fine for reviewing what you’ve got, but not for the bulk of what you’re doing.

I should note that Premiere chokes on stereoscopic VR files at resolutions greater than 3K by 3K, which makes mastering footage from the GoPro Odyssey interesting, since it comes back from the Google Jump VR system as 8K by 8K mp4s.  Even converting to a full ProRes 422 intermediate at 4K by 4K proved too data heavy for Premiere to keep up with on an 8-core Mac Pro.
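That conversion, for the record, is nothing exotic; roughly the sketch below, with placeholder paths and a 4096 x 4096 target that you’d adjust to whatever your own machine can actually sustain.

```python
# Sketch: downscale an 8K x 8K stereo VR master to a 4K x 4K ProRes 422
# intermediate with ffmpeg. Paths and the target size are placeholders;
# assumes ffmpeg is installed and on PATH.
import subprocess

def make_vr_intermediate(src: str, dst: str, size: int = 4096) -> None:
    cmd = [
        "ffmpeg", "-i", src,
        "-vf", f"scale={size}:{size}",  # keep the square top/bottom stereo frame
        "-c:v", "prores_ks",            # ffmpeg's ProRes encoder
        "-profile:v", "2",              # profile 2 = ProRes 422
        "-c:a", "copy",
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    make_vr_intermediate("odyssey_stitch_8k.mp4", "odyssey_stitch_4k.mov")
```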

But it’s not only VR performance that’s an issue: it’s still missing a whole bunch of features that would really make it a useful VR tool.  Where are my VR aware transitions?  What about VR specific effects, like simple reframing?  Where is my VR support in After Effects?  Why can’t I manually flag footage as VR if it didn’t have the embedded metadata?  What about recognizing projections other than equirectangular?  They have a drop down for changing projection type on a timeline, but equirectangular is the only option.  What about native ambisonic audio support? Or even flagging for ambisonic audio on export?

Don’t get me wrong, what they’ve done isn’t bad; it does work, and it is an improvement.  It’s just that the tools they added are very small improvements on what was already there.  And I know about (and use) the plugins that give Premiere and After Effects many of the VR features I need to actually work in VR.  But it’s really difficult, almost impossible, to get by without those 3rd party plugins.

Maybe I’m just jaded and judgmental, in part because of my reaction to the HDR 'tools' they announced, but when you advertise “New VR Support” as the second item on the new features list, it had better be good support - the kind that lets you work as well in VR as you can in standard 2D video.  If I, as a professional, need third party plugins to make your program work at the most basic level, it’s not the turnkey solution you advertise.  I’m sure more tools are in the works, but for now it feels lackluster, an engineering afterthought rather than an intelligent feature designed for professionals.


But don’t worry, that’s not their most useless feature change.  Let’s talk about their new HDR tools.

What. The. Hell.

This is how using the new HDR 'tools' in Premiere 2017 feels.

I mean that.  With all of my heart.

I might be a little biased on the subject, but honestly I question who in their right mind decided that what they included was actually something useful.

It’s not.

It’s utter shit.

But worse than that, as-is it’s more likely to hurt the broader adoption of HDR than to help it.

And no, I’m not exaggerating.


On paper, the new HDR tools sound amazing.  HDR metadata on export!  HDR grading tools!  HDR Scopes!  Full recognition of HDR files!  Yay!

In practice, all of these are useless.

Let me give you a rundown of what the new HDR tools actually do.

Premiere now recognizes SMPTE ST.2084 HDR files, which is awesome.  But only if the proper metadata is already embedded in the video stream, and then only if it’s an HEVC deliverable file.  Not a ProRes, DPX, or other intermediate file; only HEVC.  And like VR support above, there’s no way to flag footage as already being in HDR or using BT.2020 color primaries.  Which ends up being a massive problem, which I’ll get to in a minute.

When you insert properly flagged HDR footage into a sequence, you get a pleasant surprise: hard clipping at 120 nits on your viewer or connected external display.  It’s honestly the ugliest clipping I’ve seen, and there’s no way to turn it off.  If you export the clip to any format without the HDR metadata flag enabled on export, you get the same hard clipping.  And since you can only flag for HDR if you’re exporting to HEVC, you can’t export anything HDR graded or processed through Premiere as DPX, ProRes, TIFF, OpenEXR, or any other intermediate format.

This is why in my article on Grading and Mastering HDR I mention that it’s really important to be using a color space agnostic color grading system.  When the application includes color management that can’t be disabled, your options become very limited.

Also, side note, their HEVC encoder needs work - it’s very slow at the 10 bits you need for HDR export.  I expect it’s better on the Intel Kaby Lake chips that include hardware 10 bit HEVC encoder support that, oh wait, don’t exist for professionals yet (2017 5K iMac maybe?)

But at least with the metadata flagging you can bypass the FFMPEG / x265 encoder that you’ve needed up to this point to properly encode HDR for delivery, right?

Why would you think that?  Of course you can’t.

Because if you bring in a ProRes, DPX, or other intermediate file into Premiere, there’s no way to flag it as HDR and it doesn’t recognize embedded metadata saying it’s HDR like DaVinci and YouTube do.  What happens is that if you use these intermediates as a source (individually or assembled in a sequence) and you flag for HDR on export, Premiere runs a transform on the footage that scales it into the HDR range as if it’s SDR footage.

12 Bit ProRes 4444 HDR Intermediate in Timeline with 8 Bit Scope showing proper range of values

12 Bit ProRes 4444 HDR Intermediate in Timeline with HDR Scope showing how Premiere CC 2017 interprets the intermediate if you flag for HDR on export

When is that useful? If I have a graded SDR sequence that I want to encode into the PQ HDR space, while keeping 100% of the limits of an SDR image.  Because why the hell not.

But never fear!  Premiere has included new color grading tools for HDR!

Well, they aren’t horrible, which I suppose is a compliment?

How to enable HDR Grading in Premiere 2017

To enable HDR grading you need to change three different settings.  From the Lumetri context menu in your Lumetri Panel, you need to select “High Dynamic Range” to enable the HDR features; on the scopes you’ll need to switch the scale from “8 Bit” to “HDR” (and select BT.2020 in the scope settings); and if you actually want to see those HDR values on the scope, you’ll need to enable the “Maximum Bit Depth” flag in your Sequence Settings.  I’m sure there’s a fantastic engineering explanation for that last one, but it’s not exactly intuitive or obvious, and it took me a bit of hunting to figure out.

Maximum Bit Depth needs to be turned on in Sequence Settings to enable proper HDR Scopes

HDR Scopes WITHOUT Maximum Bit Depth Flag

HDR Scopes WITH Maximum Bit Depth Flag

Once you’ve enabled HDR grading from the Lumetri drop down menu, you’ll get a few new options in your grading panels.  “HDR White” and “HDR Specular” become available under the Basic Correction panel, “HDR Range” becomes available under the Curves panel, and “HDR Specular” becomes available under the Color Wheels panel.

The HDR White setting seems to control how strongly the other sliders in the Basic Correction panel behave, almost like changing the scale.  The higher the HDR White value, the less effect exposure adjustments have and the greater the effect of contrast adjustments.  The HDR Specular slider controls just the brightest whites, almost like the LOG adjustment I use in DaVinci Resolve Studio; this applies to both the slider under Basic Correction and the wheel under the Color Wheels panel.  HDR Range seems to change the scale of the curves, similar to what the HDR White slider does for the basic corrections.

All of this, by the way, I figured from watching the scopes, and not the output image.  I’ve tried hooking up a second display to the computer and hooking up our BVM-X300 through our Ultrastudio 4K to Premiere, but to no avail - the output image is always clipped to standard video levels and output in gamma 2.4.

Which, if you ask me, severely defeats the purpose of having HDR grading tools to begin with. Here’s a great idea: let’s allow people to grade HDR, but not see what they’re grading.  Which is like trying to use a table saw blindfolded.  Because that’s a thing people do, right?  Which brings me back to my original premise: What. The. Hell.

When you couple that little gem with the hard clip scaling, you realize that the only reason the color grading features are in this particular version is to make the process of cross grading from SMPTE ST.2084 into SDR easier, and nothing else.

No fields for adding HDR10 Compliant Metadata on Export. That's okay, you shouldn't use their exporter anyway (at least not this version)

Oh, and one last thing, the real kicker: you can’t even export HDR10 compliant files.  Yes, I know I said that in the HEVC encoder you can flag for ST.2084, but you can’t add any MaxFALL, MaxCLL, or Master Display metadata.  And yes, I double checked that Premiere didn’t quietly put those into the file without telling you (it doesn’t).
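For comparison, this is the kind of thing the FFMPEG / x265 route I mentioned earlier lets you do today.  A rough sketch only: the file names are placeholders, and the mastering values (P3 primaries in a BT.2020 container at a 1,000 nit peak, MaxCLL/MaxFALL of 1,000/400) are example numbers you’d replace with measurements from your own grade.

```python
# Sketch: encode a 10 bit HDR10 deliverable with ffmpeg + libx265, including the
# MaxCLL / MaxFALL and Master Display metadata Premiere's exporter won't write.
# File names and light-level values are placeholders; measure your own content.
import subprocess

x265_params = ":".join([
    "colorprim=bt2020",
    "transfer=smpte2084",          # PQ / SMPTE ST.2084
    "colormatrix=bt2020nc",
    # Example mastering display: P3 primaries, D65 white point, 1,000 nit peak
    "master-display=G(13250,34500)B(7500,3000)R(34000,16000)"
    "WP(15635,16450)L(10000000,1)",
    "max-cll=1000,400",            # MaxCLL,MaxFALL in nits (example values)
])

subprocess.run([
    "ffmpeg", "-i", "hdr_master.mov",
    "-c:v", "libx265",
    "-pix_fmt", "yuv420p10le",     # 10 bit 4:2:0 for HEVC delivery
    "-x265-params", x265_params,
    "-c:a", "copy",
    "hdr10_delivery.mp4",
], check=True)
```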

And it has zero support for Hybrid Log Gamma.  Way to pick a side, Adobe.


So passions aside, let’s run down the list again of new HDR tools and what they do:

  1. Recognize SMPTE ST.2084 files, but only when already properly flagged in HEVC streams and no other codec or format.

  2. Export minimal SMPTE ST.2084 metadata to flag for HDR, but only works if your source files are already in the HEVC format and already properly HDR flagged (see #1), or if they’re graded in HDR in the timeline, which you can’t see. Which renders their encoder effectively useless.

  3. Enable HDR grading through a convoluted process, with a minimal but useful set of tools. But you can’t see what you’re doing, so I'm not sure why they're there.

  4. There is no bullet point 4. That’s literally all it does.

The question that I have that I keep coming back to is “who do they think is going to use these tools?”  It feels like the entire feature set was a “well, we need to include HDR, so get it in there”.  But unlike the VR tools that you can kind-of build into, these HDR “tools” (I use the word loosely) are really problematic, not just because the toolset is incomplete but because the way that the current tools are implemented is actually harmful to a professional workflow.

Call it simple feature bandwagoning, or engineers that didn’t consult real creative professionals, or blame it on whatever reason you will.  But the fact is, this ‘feature’ is utter shit, which to me sours the whole release, just a little.

My biggest concern here is that while someone like me, who's been working with HDR for a while now, can tell that these will hurt my workflow, Premiere is an accessible editing platform for everyone from amateurs to professionals.  And anyone looking to get into HDR video may try to use these tools as their way in, and their results are going to be terrible.  God awful.  And that hurts everyone - why would we want to adopt HDR when 'most of what people can do' (meaning the amateurs and prosumers who don't know any better) looks bad?

So basically, if Premiere is part of your HDR workflow, don't even think about using their new 'tools'.

HDR Rant over, let’s bring this back to the positive.


Just to reiterate, the new audio tools in Premiere CC 2017 are fantastic.  I can't emphasize that enough.  Most of the rest of the added features are pretty good.  The new Team Projects collaboration tools, though I haven’t had a chance to use them, appear to work well (though they’re still in beta).  The new captions are useful, the new visual keyboard layout manager is fantastic (though WAAAY overdue!), and the other under-the-hood adjustments have improved performance.

Should you upgrade?  Yes!  It’s a great upgrade!  Despite my gripes I’m overall happy with what they did!

Just don’t try to use it for HDR yet, and be aware that the new VR tools aren’t really that exciting.

Written by Samuel Bilodeau, Head of Technology and Post Production

HDR Video Part 1: What is HDR Video?

It’s October 2016, and here at Mystery Box we’ve been working in HDR video for a little over a year.

While it’s easier today to find out information about the new standard than it was when I first started reading the research last year, it’s still not always clear what it is and how it works.  So, to kick off our new weekly blog here on mysterybox.us, we’ve decided to publish five posts back-to-back on the subject of HDR video.  This is Part 1: What is HDR Video?

HDR video is as much of a revolution and leap forward as the jump from analog standard definition to digital 4K.

Or, to put it far less clinically, it’s mind-blowingly, awesomesauce, revolutionarily incredible!  If it doesn’t get you excited, I’m not sure why you’re reading this…

So what is it about HDR video that makes it so special, so much better than what we’ve been doing?  That’s what we’re going to dive into here.


HDR Video vs. HDR Photography

If you’re a camera guy or even just an image guy, you’re probably familiar with HDR photography.  And if you’re thinking “okay, what’s the big deal, we’ve had HDR for years”, think again.  HDR video is completely unrelated to HDR photography, except for the ‘higher dynamic range’ part.

In general, any high dynamic range technique seeks to capture or display more levels of brightness within a scene, that is, increase the overall dynamic range.  It’s kind of a ‘duh’ statement, but let’s go with it.

In photography, this usually means using multiple exposures at different exposure values (EVs), and blending the results into a single final image.  The catch, of course, has always been that regardless of how many stops of light you capture with your camera or HDR technique, you’re still limited by the same 256 levels of brightness offered by 8 bit JPEG compression and computer/television displays, or the slightly bigger, but still limited set of tonality offered by inks for print.

So, most HDR photography relies on creating regions of local contrast throughout the image, blending in the different exposure levels to preserve the details in the darks and the lights:

Photograph with standard contrast vs. the same with local contrast

While the results are often beautiful, they are, at their core, unnatural or surreal.

HDR Video is Completely Different

Instead of trying to compress the natural dynamic range of a scene into a very limited dynamic range for display, HDR video expands the dynamic range of the display itself by increasing the average and peak display brightnesses (measured in nits), and by increasing the overall image bit depth from 8 bits to at least 10 bits per channel - from 256 brightness levels & 16.8 million colors to at least 1,024 brightness levels & 1.07 billion colors.
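(For the record, those color counts are just the per-channel brightness levels cubed:)

```latex
2^{8} = 256,\qquad 256^{3} \approx 16.8 \text{ million};\qquad
2^{10} = 1024,\qquad 1024^{3} \approx 1.07 \text{ billion}
```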

Standard Video / Photography Range vs. HDR Photography vs. HDR Video Ranges

The change of the display light level allows for extended ranges of tonalities through the darks and the lights, so that the final displayed image itself is a more natural rendering of a scene, one that’s able to match the overall dynamic range of today’s digital cinema and film-sourced cameras. And perhaps more importantly, when fully implemented, HDR video will almost completely match the dynamic range of the human eye itself.

How big of a deal is it?  I can’t describe it better than my younger brother did the first time I showed him HDR video:

 

“I want to say that it’s like you’re looking through a window into another world, except that when you look through a window, it’s not as crisp, or as clean, or as clear as this”.

 

First Impressions of HDR Video



How did we get here?

So if HDR video is so much better than what we’ve been using so far, why haven’t we been using it all along?

And now, for a history lesson (it’s interesting; but it’s not essential to know, so skip down if you don’t care).

Cathode Ray Tubes as scientific apparatus and ‘display’ devices have been around in some form or another since the late 1880s, but the first CRT camera wasn’t invented until the late 1920s.  Early cameras were big, with low resolutions; televisions were grainy, noisy, and low fidelity.

Things changed quickly in the early years of television. As more companies jumped on board the CRT television bandwagon, each created slightly different, and incompatible, television systems in an effort to avoid patent infringement.  These different systems, with different signal types, meant that home television sets had to match the cameras used by the broadcaster, i.e., they had to be made by the same company.  As a result, the first broadcaster in an area created a local monopoly for the equipment manufacturer they sourced their first cameras from, and consumers had no choice.

Foreseeing a large problem when more people started buying television sets, and more broadcasters wanted to enter an area, the United States government stepped in and said that the diversity of systems wouldn’t fly - all television broadcasts and television sets had to be compatible.  To that end they created a new governing body, the National Television System Committee, or NTSC, which went on to define the first national television standard in 1941.

We’ve had to deal with the outcomes of standardization, good and bad, ever since.

The good, obviously, has been that we don’t have to buy a different television for every channel we want to watch, or every part of the country we want to live in (though transnationals are often still out of luck).  The bad is that every evolution of the standard since 1941 has required backwards compatibility: today’s digital broadcast standards, and computer display standards too, are still limited in part by what CRTs could do in the 1940s and 50s.

Don’t believe me?  Even ignoring the NTSC 1/1.001 frame rate modifier, there’s still a heavy influence.  Let’s look at the list:

  1. Color Space: The YIQ color space for NTSC and the YUV color space used in both PAL and SECAM are both based on the colors that can be produced by the short glow phosphors, which coat the inside of CRT screens and form the light and color producing element of the CRT.  In the transition to digital, YIQ and YUV formed the basis for Rec. 601 color space (SD Digital), which in turn is the basis for Rec. 709 (HD Digital) color space (Rec. 709 uses almost the same primaries as Rec. 601).

    And just in case your computer feels left out, the same color primaries are used in the sRGB display standard too, because all of these color spaces were display referenced, and they were all built on the same CRT technology.  Because up until the early 2000s, CRTs were THE way of displaying images electronically - LCDs were low contrast, plasma displays were expensive, and neither LEDs nor DLPs had come into their own.
     

  2. Transfer Function: The transfer function (also called the gamma curve) used in SD and HD is also based on the CRT’s natural light-to-electrical and electrical-to-light response.  The CRT camera captured images with a light-to-voltage response curve of approximately gamma 1/2.2, while the CRT display recreated images with a voltage-to-light response curve of approximately gamma 2.4.  Together, these values formed the standard approximate system gamma of 1.2, and form the basis for the current reference display gamma standard of 2.4, found in ITU-R Recommendation BT.1886.
     

  3. Brightness Limits: Lastly, and probably most frustratingly, color accurate CRT displays require limited brightness to maintain their color accuracy. Depending on the actual phosphors used for primaries, that max-brightness value typically lands in the 80-120 nit range.  And consumer CRT displays, while bigger, brighter, and less color accurate, still only reach max brightness levels of around 200 nits.  For comparison, the brightness levels found on different outdoor surfaces during a sunny day land in the 5,000-14,000 nit range (or more!).

    This large brightness disparity between reference and consumer display levels has been accentuated in recent years with the replacement of CRTs with LCD, Plasma and OLED displays, which can easily push 300-500 nits peak brightness.  Those brightness levels skew the overall look of images graded at reference, while being very intolerant of changes in ambient light conditions.  In short this means that with the current standards, consumers rarely have the opportunity to see content in their homes as filmmakers intended.

So, because of the legacy cathode ray tube (a dead technology), we’re stuck with a set of legacy standards that limit how we can deliver images to consumers.  But because CRTs are a dead technology, we now have an opportunity: we can choose to either be shackled by the 1950s for the rest of time, or to say “enough is enough” and use something better.  Something forward thinking.  Something our current technology can’t even match 100% yet.  Something like HDR video.


The HDR Way

At the moment, there are two different categories and multiple standards covering HDR video, including CTA’s HDR 10 Media Profile, Dolby’s Dolby Vision, and the BBC’s Hybrid Log Gamma.  And naturally, they all do things just a little differently.  I’ll cover their differences in depth in Part 3: HDR Video Terms Explained, but for now I’m going to lump them all together and just focus on the common aspects of all HDR video, and what makes it different from the video of the past.

There are four main things required to call something HDR video: ITU-R Recommendation BT.2020 or DCI-P3 color space, a high dynamic range transfer function, 10 bits per channel transmission and display values, and transmitted metadata.

BT.709, DCI-P3, and BT.2020 on CIE XYZ 1931

1. Color Space: HDR video is largely seen as an extension of the existing BT.2020 UHD/FUHD and DCI specifications, and as such uses either the wider BT.2020 color gamut (BT.2020 is the 4K/8K replacement for the BT.709/Rec.709 HD broadcast standard), or the more limited, but still wide, DCI-P3 gamut.

BT.2020 uses pure wavelength primaries, instead of primary values based on the light emissions of CRT phosphors or any other material.  The catch is, of course, that we can’t fully show these on a desktop display (yet), and only the most recent laser projectors can cover the whole color range. But ultimately, the breadth of the color space covers as many of the visible colors as is possible with three real primaries*, and includes all color values already available in Rec.709/sRGB and DCI-P3, as well as 100% of Adobe RGB and most printer spaces available with today’s pigments and dyes.

2. Transfer Function: Where HDR video diverges from standard BT.2020 and DCI specs is in its light-level-to-digital-value and digital-value-to-light-level relationships, called the OETF and EOTF respectively.  I’m going to go into more depth on OETFs and EOTFs at another time, but for now what we need to know is that the current relationship between light levels and digital values is a legacy of the cathode ray tube days, and approximates gamma 2.4.  Under this system, a full white digital value of 235 translates to a light output of between 80 and 120 nits.

Extending this same curve into a higher dynamic range output proves problematic because of the non-linear response of the human eye: it would either cause severe stepping in the darks and lights, or it would require 14-16 bits per channel while wasting digital values in increments that can’t actually be seen.  And it still wouldn’t be backwards compatible, in which case, what’s the point?

So instead, HDR video uses one of two new transfer curves: the BBC’s Hybrid Log Gamma (HLG), standardized in ARIB STD-B67, which allows for output brightness levels from 0.01 nit up to around 5000 nits, and Dolby’s Perceptual Quantization (PQ) curve, standardized in SMPTE ST.2084, which allows for output brightness levels from 0.0001 nit up to 10,000 nits.

PQ is the result of direct research done by Dolby to measure the response of the human eye, and to create a curve where no value is wasted and there’s no visible stepping between values.  The advantage of PQ is pretty clear, in terms of maximizing future output brightness (the best experimental single displays currently max out at 4000 nits; Dolby’s test apparatus ranged from 0.004 to 20,000 nits) and increasing the amount of detail captured in the darks.
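For reference, the PQ curve standardized in ST.2084 maps a normalized code value E′ to an absolute light level F_D in nits roughly as follows; this is my paraphrase of the published constants, so double check the standard before building anything on it:

```latex
F_D = 10000 \left( \frac{\max\!\left(E'^{\,1/m_2} - c_1,\; 0\right)}{c_2 - c_3\, E'^{\,1/m_2}} \right)^{1/m_1}
\quad\text{where}\quad
m_1 = \tfrac{2610}{16384},\;
m_2 = \tfrac{2523}{4096}\times 128,\;
c_1 = \tfrac{3424}{4096},\;
c_2 = \tfrac{2413}{4096}\times 32,\;
c_3 = \tfrac{2392}{4096}\times 32
```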

HLG, on the other hand, provides a degree of backwards compatibility, matching the output levels of gamma 2.4 for the first 50% of the curve, and reserving the top 50% of the values for the higher light level output.  Generally, HLG content with a system gamma of 1.2 looks pretty close to standard dynamic range content, though its whites sometimes end up compressed and greyer than content mastered in SDR to begin with.
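And for comparison, the HLG curve from ARIB STD-B67 encodes normalized scene light E with a square root over the bottom half and a logarithm over the top, which is where the “hybrid log” name comes from; again, my paraphrase of the published formula:

```latex
E' =
\begin{cases}
\sqrt{3E} & 0 \le E \le \tfrac{1}{12} \\
a\,\ln(12E - b) + c & \tfrac{1}{12} < E \le 1
\end{cases}
\qquad
a \approx 0.17883277,\;
b = 1 - 4a,\;
c = 0.5 - a\,\ln(4a)
```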

Footage graded in Rec. 709 and the same graded in HLG.

(Side note: I prefer grading in SMPTE ST.2084 because of the extended dynamic range through the blacks, and the smoother roll-off into the whites.)
 

3. Bit Depth: The new transfer curves accentuate a problem that’s been with video since the switch from analog to digital values: stepping.  As displays have gotten brighter, the difference between two code values (say, digital values 25 and 26) is sometimes enough that we can see a clear distinguishing line between the two greys.  This is especially true when using a display whose maximum brightness is greater than the reference standard, and is more common in the blacks than in the whites.

Both the BT.2020 and DCI standards already have requirements to decrease stepping by switching signal encoding and transmission from 8 bits per channel to 10 bits minimum (12 bits for DCI), allowing for at least a 4 times smoother gradient.  However, BT.2020 still permits 8 bit rendering at the display, which is what you’ll find on the vast majority of televisions and reference displays on the market today.

On the other hand, HDR video goes one step further and requires 10 bit rendering at the display panel itself; that is, each color sub pixel must be capable of between 876 and 1024 distinguishable light levels, in all operational brightness and contrast modes.

The reason that HDR requires a 10 bit panel while BT.2020 doesn’t is that our eyes are more sensitive to stepping in the value of a color or gradient than to stepping in its hue or saturation: the eye can easily make up for lower color fidelity (8 bits per channel in BT.2020 space) by filling in the gaps, but with an HDR curve the jump in light levels between two adjacent codes at 8 bits per channel is big enough to be clearly noticeable.
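If you want to put numbers on that, the jump between two neighboring grey codes under the PQ curve is easy to compute; here’s a quick sketch using the ST.2084 constants from above.  It assumes full-range code values for simplicity (real video signals are narrow range), so treat the output as ballpark.

```python
# Sketch: light-level jump between two adjacent mid-grey codes under the
# PQ (SMPTE ST.2084) curve, at 8 bit vs 10 bit precision.
# Full-range code values assumed for simplicity; treat the numbers as ballpark.

def pq_eotf(signal: float) -> float:
    """Normalized PQ code value (0..1) -> absolute luminance in nits."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = signal ** (1 / m2)
    return 10000 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

for bits in (8, 10):
    codes = 2 ** bits
    mid = codes // 2                      # a mid-grey code value
    step = pq_eotf((mid + 1) / (codes - 1)) - pq_eotf(mid / (codes - 1))
    print(f"{bits} bit: ~{step:.2f} nit jump between codes {mid} and {mid + 1}")
```

Roughly speaking, that works out to a jump of a few nits per code step around mid-grey at 8 bits, versus well under a nit at 10 bits - which is the difference between visible banding and a smooth gradient.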

Comparison between gradients step sizes at 8 bit, 10 bit, and 12 bit precisions (contrast emphasized)

4. Metadata: The last thing HDR video requires that standard BT.2020 doesn’t is metadata.  All forms of HDR video should include information about both the content and the mastering environment.  This includes which EOTF was used in the grade, the maximum and frame-average brightnesses of the content and display, and which RGB primaries were used.  Dolby Vision even includes metadata to define, shot by shot, how to translate the HDR values into the SDR range!

Consumer display manufacturers use this information to adapt content for their screens in real time, knowing when to clip or compress the highlights and darks (based on the capability of the screen it’s being shown on), and for the automatic selection of operational mode (switching from Rec. 709 to BT.2020, and in and out of HDR mode, without the end user ever having to change a setting).

 

So, in summary, what does HDR video do differently?  Wider color gamuts, new transfer function curves to allow for a much larger range of brightnesses, 10 bits per channel minimum requirement at the display to minimize stepping, and the transmission of metadata to communicate information about the content and its mastering environment to the end user.

All of which are essential, none of which are completely backwards compatible.


Yes, but what does it look like?

Unfortunately, the only way to really show you what HDR looks like is to tell you to go to a trade show or post house with footage to show, or buy a TV with HDR capabilities and stream some actual HDR content.  Because when you show HDR content on a normal display, it does not look right:

Images in SMPTE ST.2084 HDR Video formats do not appear normal when directly brought into Rec. 709 or sRGB Gamma 2.4 systems

You can get a little bit of a feel for it if I cut the brightness levels of a standard dynamic range image by half, and put it side-by-side with one that more closely follows the HDR range of brightnesses:

Normalized & Scaled SMPTE ST.2084 HDR Video vs Rec. 709 with Brightness Scaled

But that doesn’t capture what HDR video actually does.  I don’t quite know how to describe it - it’s powerful, beautiful, clear, real, present and multidimensional.  There’s an actual physiological and psychological response to the image that you don’t get with standard dynamic range footage - not simply an emotional response to the quality of the image, but the higher brightness levels actually trigger things in your eyes and brain that let you literally see it differently than anything you’ve seen before.

And once you start using it on a regular basis, nothing else seems quite as satisfactory, no other image quite as beautiful.  You end up with a feeling that everything else is just a little bit inadequate.  That’s why HDR will very rapidly become the new normal of future video.


So that's it for Part 1: What is HDR Video?  In Part 2 of our series on HDR video, we’re going to cover what you need to grade in HDR, and how you can cheat a bit to get a feel for the format by emulating its response curve on your existing reference hardware.

Written by Samuel Bilodeau, Head of Technology and Post Production


Endnotes:

* While ACES does cover the entire visible color spectrum, its primary RGB values are imaginary, which means that while it can code for all possible colors, there’s no way to build a piece of technology that actually uses the ACES RGB values as its primary display colors.  Or in other words, if you were to try to display ACES full-value RED, you couldn’t, because that color doesn’t exist.