Display Calibration & Color Management

There are many different ways for consumers to experience your content today - so many that it’s often difficult to predict exactly where and how it’ll be seen.  Is it going to a theater?  Will it be watched on a television?  Which of the many handheld devices or personal computers will an end consumer use to view and listen to your work?  And how is that device or environment going to be rendering the colors?

Color management is an important consideration for every modern digital content production company to keep at the forefront of their minds.  In larger post production environments, there is often a dedicated team that manages the preservation of color accuracy across the many screens and displays found throughout the facility.  But for small companies and independent producers, the burden of color management often falls on an individual wearing multiple hats, and it's easier to ignore it and hope for the best than to spend the time and money to make sure it's done right.

Before going any further, it's important to define what we're talking about when we say 'color management.'  Color management is different from color correction or color grading, which is the process of normalizing colors and contrasts, maintaining color consistency across edits, and applying creative looks to your footage.  Instead, color management is about making sure the colors you see on your screens match, as closely as possible, what the digital values stored in your video files are actually describing, within the color space you're using.

In practice this means making sure that your displays, televisions, projectors, or other screens, as well as your lighting environment, are all calibrated so that their RGB balance, brightness, and contrast match the target standard as closely as you can get them.  This ensures you don't accidentally bake corrections into your digital data while trying to 'fix' something you see on your displays that's only there because of the displays or the environment.  "Burning in" these kinds of shifts adversely affects the quality of your content by creating perceptual color shifts for your clients and consumers.

While calibration is essential, color management also involves preserving color from camera to end user display, keeping color consistent between programs, and ensuring your final deliverables contain the appropriate metadata.  Both parts of color management are essential, so we're going to talk about both.  We'll focus more on the calibration side, since that's essential to get right, before briefly addressing color management in applications without getting mired too deep in advanced technical talk.


The problem

How do I know that my red is the same as your red?

This is one of the fundamental philosophical questions of color perception.  How do I know that the way I perceive red is the same as the way you perceive red, and not how you perceive blue or green?  There's actually no way to measure or determine for certain that the perceived shades are identical in the minds of any two individuals, since color perception happens as the brain interprets the stimulus it receives from the eyes.

While it's a fun (or maddening) thought-provoking question, color sameness is actually a really important baseline to establish in science and imaging.  In this case we're not asking about the perception of color, but whether the actual shade of color produced or recorded by two devices is the same.  Today we're only going to focus on colors being produced, not recorded - we'll cover capturing colors accurately in our next post.

There are a LOT of different kinds of displays in the world - from the ones we find on our mobile devices, to computer displays, televisions, and consumer or professional projectors.  The core technologies used to create or display images, such as plasma, LCD, OLEDs, etc., all render shades of color in slightly different ways, leading to differences in how colors within images look between displays.

But it’s not just the core technology used that affects the color rendition. Other factors like the age of the display, the specific implementation of the core technology (like edge-lit or backlit LCDs), the manufacturing tolerances for the specific class of display, the viewing angle, and the ambient environment all affect the colors produced or the colors perceived.  Which makes it almost impossible to predict the accuracy of color perception and rendering for one viewer, let alone the thousands or millions who are going to see your work.

But rather than throw up your hands in despair at the impossibility of the task, shift your focus to what you, as the content creator, can do: if you can be reasonably sure that what you see in your facility is as close as possible to what's actually being encoded, you can be confident that your end viewers will not be seeing something horrifying.  While every end viewer's experience will be different, at the very least your content will be consistent for them - it will shift in exactly the same way as everyone else's content, a shift they're already used to without even knowing it.

For that reason it’s important that when you master your work you’re viewing it in an environment and with a display that’s as close to perfectly accurate as possible.  But unfortunately, color calibration isn’t something you can simply ‘set and forget’: it needs to be done on a recurring schedule, especially with inexpensive displays.


What is Color Calibration?

How do we make sure color looks or is measured the same everywhere?

This question was first 'answered' in 1931 with the creation of the CIE XYZ color space.  Based on the results of a series of tests that measured the sensitivity of human vision to various colors, the CIE created a reference chart that mapped how the brain perceives combinations of visible wavelengths as colors onto a Cartesian plane (X-Y graph).  This is called the CIE 1931 Chromaticity Diagram.

Three different color spaces referenced on the CIE 1931 Chromaticity Diagram. The colors within each triangle represent the colors that can be produced by those three color primaries. All three share the same white point (D65).

This chart allows color scientists to assign a numerical value to every perceivable color, both those that exist as a pure wavelength of light and those that exist as a combination of wavelengths.  Every color you can see has a set of CIE 1931 coordinates that define its chromaticity (combined hue & saturation, ignoring brightness), which means that while we may not have an answer to the philosophical question of individual color experience, we do have a way of scientifically determining that my red is the same as your red.

This standard reference for colors is a powerful tool, and we can use it to define color spaces.  A color space is the formal name for all of the colors a device can capture or produce using a limited set of primary colors.  Map the primary colors onto the chromaticity diagram, join them as a geometric shape, and your device can create or capture any color within the enclosed shape.  Add an accompanying white point, and you have the fundamental ingredients for a defined color space, like Rec. 709, sRGB, AdobeRGB, etc.

Defining and adhering to color spaces is actually quite important to managing and matching end to end color.  Digital RGB values have no meaning without knowing which of the many possible shades of red, green, or blue color primaries are actually being used.  Interpreting digital data using different RGB primaries than the original creator used almost always results in nonlinear hue shifts throughout the image.
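To make that concrete, here's a minimal sketch (Python with numpy, not pulled from any particular color library) of how a color space's primaries and white point define the math that turns RGB values into device-independent CIE XYZ.  The Rec. 709 and DCI-P3 chromaticities are the published ones; run the same 'pure red' triplet through both matrices and you get two different colors.

import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    # Build the 3x3 matrix that converts linear RGB in this color space to CIE XYZ.
    # primaries: dict of 'r', 'g', 'b' -> (x, y) chromaticity coordinates
    # white: (x, y) chromaticity of the white point
    cols = []
    for x, y in (primaries['r'], primaries['g'], primaries['b']):
        cols.append([x / y, 1.0, (1.0 - x - y) / y])   # xy -> XYZ with Y = 1
    m = np.array(cols).T
    wx, wy = white
    white_xyz = np.array([wx / wy, 1.0, (1.0 - wx - wy) / wy])
    scale = np.linalg.solve(m, white_xyz)   # scale the primaries so R=G=B=1 lands on the white point
    return m * scale

rec709 = {'r': (0.640, 0.330), 'g': (0.300, 0.600), 'b': (0.150, 0.060)}
p3     = {'r': (0.680, 0.320), 'g': (0.265, 0.690), 'b': (0.150, 0.060)}
d65 = (0.3127, 0.3290)

red = np.array([1.0, 0.0, 0.0])                # the same digital value...
print(rgb_to_xyz_matrix(rec709, d65) @ red)    # ...is one color under Rec. 709 primaries
print(rgb_to_xyz_matrix(p3, d65) @ red)        # ...and a different color under DCI-P3 primaries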

This is where color calibration comes in.  Color calibration is the process whereby a technician reads the actual color values produced by a display, and then either adjusts the display's settings to conform more closely to the target color space, adjusts the signal going to the display to better match the targeted output values, or both.

To do this, you need access to four things:

  1. A signal generator to send the display specific digital values

  2. A colorimeter to measure the actual colors produced

  3. Control of the display’s input signal or color balance settings to adjust the output

  4. Software to manage the whole process and correlate the signal to measurement

If you want to make sure you’re doing it right, though, an in-depth understanding of how color and every image generation technology works helps a lot too.
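To give you a feel for how those four pieces fit together, here's a rough sketch of the measure-and-correlate loop that calibration software runs.  The show_patch and read_xyz functions are placeholders for whatever your signal generator and colorimeter drivers actually expose - the real APIs vary by vendor.

def show_patch(rgb):
    # Placeholder: in a real setup this drives the signal generator to fill the screen
    print("displaying patch", rgb)

def read_xyz():
    # Placeholder: in a real setup this queries the colorimeter for a CIE XYZ reading
    return (0.0, 0.0, 0.0)

# A simple patch set: primaries, secondaries, white, plus a grey ramp
patches = [(255, 0, 0), (0, 255, 0), (0, 0, 255),
           (255, 255, 0), (0, 255, 255), (255, 0, 255),
           (255, 255, 255)] + [(v, v, v) for v in range(0, 256, 32)]

measurements = []
for rgb in patches:
    show_patch(rgb)                          # 1. send a known digital value
    measurements.append((rgb, read_xyz()))   # 2. record what the display actually produced

# 3. The software then compares each measurement against the target color space
#    and either writes an ICC profile or tells the operator what to adjust.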

Some consumer, most prosumer, and almost all professional displays leave the factory calibrated, though consumer and commercial televisions and almost all projectors must be calibrated after installation, for reasons we'll talk about later.  Unfortunately, displays lose their calibration with time, and each kind and quality of display will show more or less variance as it ages.  Which means that in circumstances where calibration is important, such as in professional video applications, displays require regular recalibration.

For desktop displays, this usually involves creating or updating the ICC color profile, while for reference displays it typically involves adjusting the color balance controls so that the display itself better matches the target color space.

The differences in calibration technique come from the workflow paradigm.  For desktop displays it's assumed that the host computer will be directly attached to any number of different kinds of displays, each with their own color characteristics, at any given time - but always directly attached.  So, to simplify the end user experience, the operating system handles color management of attached displays through ICC profiles.

An ICC profile is a data file that defines how a display produces colors.  It records the CIE XYZ values of the display's RGB color primaries, white point, and black point, and its RGB tone curves, among other metadata.

Using this information, the operating system “shapes” the digital signal sent to the display, converting on the fly the RGB values from the color space embedded in an image or video file into the display’s RGB space.  It does this for all applications, and essentially under all circumstances.  Some professional programs do bypass the internal color management, sort of, by assigning all images they decode or create to use the generic RGB profile (i.e. an undefined RGB color space). But it’s usually best to assume that for all displays directly attached to the computer, the operating system is applying some form of color management to what you’re seeing1.
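If you want to see that 'shaping' step outside of the operating system, Pillow's ImageCms module (a wrapper around LittleCMS) can perform the same kind of profile-to-profile conversion.  This is just a sketch: the display profile path and file names below are placeholders, not anything your OS is guaranteed to have.

from PIL import Image, ImageCms

# Source profile: assume the image data is sRGB
srgb = ImageCms.createProfile("sRGB")

# Destination: the display's ICC profile (placeholder path - use whatever your OS generated)
display = ImageCms.getOpenProfile("/Library/ColorSync/Profiles/MyDisplay.icc")

im = Image.open("still_frame.png").convert("RGB")

# Build and apply the transform - conceptually what the OS does on the fly
# for every directly attached display
transform = ImageCms.buildTransform(srgb, display, "RGB", "RGB")
shaped = ImageCms.applyTransform(im, transform)
shaped.save("still_frame_display_referred.png")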

Calibrating direct attached displays is relatively quick and easy.  The signal generator bypasses the operating system’s internal color management and produces a sequence of colored patches, which the colorimeter reads to map the display’s color output.  The software then generates an ICC color profile for that specific display, which compensates for color shifting from wear and tear, or the individual factory variances the display has.

Once calibrated, you can be reasonably confident that when viewing content, you’ll be seeing the content as close to intended as that particular display allows.

Reference displays, projectors, and televisions have a slightly different paradigm for calibration.  For computer displays, you can shape the signal to match the display characteristics.  But because of the assumption that a single video signal will (or at the very least can) go to multiple displays or signal analysis hardware at the same time, and because the signal generator likely has no information about the attached devices, it's simply not practical to adjust the output signal.  Rather, professional output hardware always transmits its signal as pure RGB or YCbCr values, without worrying about the details of color space or managing color at all.

So instead of calibrating the signal, calibration of reference displays, projectors, or any kind of television usually requires adjusting the device itself.2

Once again, a signal generator creates specific color patches, which the colorimeter reads to see exactly what values the display produces.  Software then calculates each color's offset as a Delta E value (how far the produced color is from where it's supposed to be according to the selected standard) and reports to the operator how far out of calibration the display is.

The operator then goes through a set of trial and error adjustments to the image to lower the Delta E values of all the colors to get the best image possible.  Tweak the ‘red’ gain and see how that affects the colors produced.  Touch the contrast and see its effect on the overall image gamma - and on all the other colors.  Measure, tweak, measure, tweak, measure, tweak… and repeat, until the hardware is as close to the target color space as possible.

Calibration results showing DeltaE values for greyscale and color points

Generally, Delta E values less than 5 are good, less than 3 are almost imperceptible, and under 2 is considered accurate.  Once the calibration is complete, you can be reasonably sure that what you’re seeing on a reference display, projector, or television is as close to the target color space as possible.  But does that even matter?
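For reference, the simplest Delta E formula (CIE 1976) is just the straight-line distance between the measured and target colors in CIE L*a*b* space.  Most calibration packages report newer variants like CIEDE2000 that weight the differences more perceptually, but the idea is the same; the Lab values below are made up for illustration.

import math

def delta_e_76(measured, target):
    # CIE 1976 Delta E: straight-line distance between two L*a*b* colors
    dl, da, db = (m - t for m, t in zip(measured, target))
    return math.sqrt(dl * dl + da * da + db * db)

# Hypothetical readings: a mid grey that has drifted slightly green
target   = (53.0, 0.0, 0.0)     # where the patch should land in L*a*b*
measured = (52.4, -2.1, 1.2)    # what the colorimeter actually reported

print(round(delta_e_76(measured, target), 2))   # ~2.49: close, but a visible drift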


Regular Calibration

Medium-priced computer displays and professional reference displays usually leave the factory with a default calibration that puts them as close to standard reference as the technology allows.  The same is not true of most televisions and projectors - they leave the factory uncalibrated, or sit in an uncalibrated mode by default, for a couple of reasons we're not going to get into.

But even with this initial factory calibration for the displays that have it, the longer a display has been used the more likely it is to experience color shifts.  How quickly it loses calibration depends on the technology in use: some technologies can lose their calibration in as little as a month of daily use.

The reasons behind this shift over time can be lumped together as “wear and tear”.  The exact reasons for each different display technology losing its calibration are a little off topic, so I’m going to spare you the gory details of the exact mechanisms that cause the degradations.  However, the important things to know are:

  1. The backlight of LCDs and the main bulb in digital projectors change colors over time. This is a major problem with the xenon arc lamps found in projectors, and is a bigger problem for CCFL LCDs than for LED lit (white or RGB) LCDs, but even the LED spectrums shift with use.

  2. The phosphors inside of CRTs and plasma displays degrade with time and their colors change, as do the primary color filters on LCD displays, though at a slower pace.

  3. Anything using liquid crystals (LCD displays and LC or LCoS projectors) can suffer from degradation of the liquid crystal, which affects color and brightness contrasts.

  4. The spectrum of light emitted by plasma cells changes with age, so they don't stay balanced at the same output levels.

Or in other words, all displays change colors over time.  Setting up a regular calibration schedule for every display that you look at your content on is an important part of color management.  You don’t want to move a project from your reference display to your desktop to find that suddenly the entire video appears to be pulling magenta, or invite a client to review your work in your conference room to find the picture washed out or color shifted.


Environment and Color Management

Up until now we've been talking about the color characteristics of your displays and projectors.  But just as important as your display calibration are the characteristics of your environment in general.  The brightness level and color of the lights in the room affect the perception of contrast and of the colors within the image.

This is really easy to get wrong, because not only does the display need to be calibrated for the target color space, it should be calibrated within the target environment.  The technician handling the calibration will usually make a judgment call about display values like brightness, gamma curve, or white point based on these environmental choices.  But they may also make other recommendations about the environment to improve the perception of color on the screen - what to do with other displays, lighting, windows, etc., so that your perception of color will better match industry standards.

Generally speaking, reference environments should be kept dim (not pitch black), using tungsten balanced lighting that’s as close to full spectrum as possible.  Avoid daylight balanced bulbs, and install blackout curtains on any windows.  Where possible, keep lighting above and pointed away from the workstation screens - reflected light is better than direct lighting, since it reduces glare and is better for color perception.

The easiest way to get proper lighting is to set up track lighting with dimmable bulbs (LED or tungsten based, colored between 2800K & 3200K), and point the pots slightly away from the workstation.  The dimmer ensures that you can bring the environment into specification for grading, and then bring the lighting back up to normal ambient conditions for general work or for installing hardware, etc.  If changing the overhead lighting isn't an option, good alternatives are stick lights on the opposite side of the room, positioned at standing height.

Keep your reference display or projector as the brightest screen in the environment.  If you don't, your brights will look washed out and gray since they're dimmer than other light sources.  It will also affect your overall perception of contrast: you'll perceive the image as darker and having more contrast than expected, and are therefore more likely to push up the mids and darks and wash out the image as a whole.  Dimming the brightness of interface displays, scopes, phones or tablets, and any other screen within the room will make sure that you're perceiving the image on your reference hardware as accurately as possible.

Depending on the number of interface displays and other light sources in the room, you may need to further lower ambient lighting to keep contrast perception as accurate as possible.  In rare cases, such as in small rooms, this may mean turning the lights off completely, since the interface displays provide sufficient ambient lighting for the environment.

Calibrating your displays is essential; calibrating the environment is important.  Usually it's pretty easy to tweak the environment for better color perception, so long as you're starting from a dark or otherwise light-controlled room.  And unlike display calibration, it's something you can do once and not need to tweak for years.


Application Color Management

Once you’ve calibrated all of your hardware and your environment, it’s easy to assume that your job is done, and you don’t have to worry about color management until the next time you book a calibration session.  Oh how I wish that were the case.

Different applications manage color in different ways, which means you may still see differences between applications with the same footage.  Sometimes applications get in fights with the operating system over who’s managing color and both end up applying transformations you’re not aware of.

Which means it’s important to understand exactly how each application touches color.  To do that, let’s briefly look at how four common applications manage color: Adobe Premiere, Final Cut Pro X, Adobe After Effects, and DaVinci Resolve.

Both Adobe Premiere and Final Cut Pro X actively manage the colors within the project.  Adobe Premiere gives you exactly no way of changing the color interpretation of input files, beyond the embedded metadata in HEVC and a few other formats (NOT Apple ProRes).  It conforms everything to Rec. 709 in your viewers and signal outputs, and there's no way to override this.  The operating system then uses the display's ICC profile to conform the output so that you see it as close to Rec. 709 as possible.  Which is good, because it means that when you output the video file, what you see is what you get.

Adobe Premiere's color engine processes colors in 8 bit.  You can turn on 16 bit color processing in the output or in the sequence settings by flagging on "Maximum Bit Depth" and "Maximum Render Quality."  This is really important when using high bit depth formats like Apple ProRes, which stores 10 or 12 bit image data, assuming you want to maintain high color fidelity in your output files.  Even if you're outputting to 8 bit formats for delivery, you may still benefit from keeping these flags on, depending on how in-depth your color corrections and gradients are.

Basically, Adobe Premiere assumes you know nothing about color management, and that it should handle everything for you.  Not a terrible assumption, just something to be aware of when you start thinking about managing color yourself.

Like Adobe Premiere, Final Cut Pro X also handles all of the color management, but offers at least a small amount of control over input and output settings.  By default, it processes colors at a higher internal bit depth than Premiere, and in linear color, which offers smoother gradients and generally gives better results.  You also get to assign a working color space to your library and your project (sequence), though your only options are Rec. 709 and Wide Color Gamut (Rec. 2020).

Each clip is by default interpreted as belonging to the color space identified in its metadata, and conformed to the output color space selected by the project (sequence).  If necessary, you can override the color space interpretation of each video clip by assigning it to Rec. 601 (NTSC or PAL), Rec. 709, or Rec. 2020 (notably missing are DCI-P3 and the HDR curves).  When using professional video outs, the signal's data levels are managed by the selection of Rec. 709 or Rec. 2020, and FCP-X handles everything else.  Like Adobe Premiere, it works with the operating system to conform the video displayed in the interface to the attached monitor's ICC profile.

Both Adobe Premiere and FCP-X work on a “what you see is what you get” philosophy.  If your interface display is calibrated and using the proper ICC profile, you shouldn’t have to touch anything, ever.  It just works.  But gods Adobe and Apple forbid you try to make it do something else.

On the other hand, Adobe After Effects and DaVinci Resolve have highly flexible, colorspace-agnostic color engines that allow you to almost completely ignore color management.  They're quite content to simply apply the transformations you've requested to the digital data they read in, without caring what color space or contrast curve that data is in.  And when you output, they simply write the RGB data back to a file and you're good to go.

Of course, that’s the theory.  After Effects makes a few color assumptions under the hood about intent, including ignoring the display ICC profile on output, since it has no idea what color space you’re working in anyway.  That sounds innocuous, but it’s a problem if you’re using a display with properties that are mismatched to the color profile of the footage you’re using3.  Suddenly your output, with an embedded color profile and playing back in a color managed application, may look significantly different than it did in After Effects.

Turning on After Effects' color management by assigning a project working space allows for a more accurate view of the final output.  You can then flag on the view option "Use Display Color Management" (on by default), and adjust the input space of any RGB footage.  But you can still get into trouble: any chroma subsampled footage, like ProRes 422 or H.264, is only permitted to use the embedded color profile.  Adobe also ignores ProRes metadata for Rec. 2020 and HDR, which will negatively affect the output when using color management.  It also exhibits strange behavior when using HDR gamma curves and in some other working spaces.

DaVinci Resolve has some of the best functionality for color management.  Its agnostic color engine renders color transformations in 32 bit float precision, and outputs raw RGB data to your video out.  It assumes you know what color space you're using, so it's happy to ignore everything else.  By default, on a Mac it applies the monitor ICC profile to the interface viewers, with the assumption that your input footage is Rec. 709 (see footnote 4).

Fortunately, changing the working space is incredibly easy, even without color management turned on - simply set the color primaries and EOTF in the Color Management tab of the project settings.  With color management off, this will only affect the interface display viewers, and then only if the flag "Use Mac Display Color Profile for Viewers" is set (on by default, MacOS only).  Unfortunately it does not as of yet apply ICC profiles to the viewers under Windows (see footnote 4).

When you turn DaVinci Resolve's color management on, you have extremely fine grained control over color space - you can set the input, working, and output color spaces and gammas separately (with Resolve managing the transformations on the fly), and then bypass or override the input color space and gamma on a clip by clip basis in the color correction workspace.  And because of its 32 bit floating point internals, its conversions work really well, preserving "out of range" data between nodes and between steps in the color management process, allowing the operator to rein it in and make adjustments to the image at later steps - an advantage of active color management over LUTs in a few cases.
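Here's a toy illustration (generic numpy, not Resolve's actual internals) of why that floating point pipeline matters: push a value out of range in one node and pull it back in a later one, and nothing is lost, whereas an 8 bit integer pipeline clips it at the first step.

import numpy as np

pixels = np.array([0.10, 0.50, 0.95])    # linear pixel values, 0-1 range

# Node 1: an aggressive gain pushes the brightest pixel "out of range"
gained = pixels * 1.4                     # -> [0.14, 0.70, 1.33]

# Float pipeline: the out-of-range value survives, so a later node can pull it back
recovered_float = gained * 0.75           # -> [0.105, 0.525, 0.9975]

# 8 bit integer pipeline: the value clips at the first node and can't be recovered
clipped = np.clip(np.round(gained * 255), 0, 255) / 255
recovered_int = clipped * 0.75            # the brightest pixel is now stuck at 0.75

print(recovered_float[-1], recovered_int[-1])   # 0.9975 vs 0.75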

Adobe Premiere
  Input: assumes embedded or Rec. 709; cannot be changed
  Processing: 8 bit, Rec. 709 with gamma 2.4 assumed; 16 bit and linear color processing possible
  Output: Rec. 709 on all outputs
  Display: output conformed to the display using its ICC profile

Final Cut Pro X
  Input: assumes embedded or Rec. 709; overridable to Rec. 2020
  Processing: 10-12 bit, Rec. 709 or Rec. 2020 (configured by library) with gamma 2.4
  Output: Rec. 709 or Rec. 2020 on all outputs (configured by project)
  Display: output conformed to the display using its ICC profile

Adobe After Effects
  Input: assumes embedded or Rec. 709, ignored by default; reassignable for RGB formats but fixed interpretation of YCbCr
  Processing: 8 or 16 bit integer or 32 bit float agnostic color engine; working space assignable per project, many fixed working spaces available
  Output: RGB output in the working space or generic RGB
  Display: color space and calibration defined by the display (pro out); output conformed to directly attached interfaces using the ICC profile when a working space is assigned

DaVinci Resolve Studio
  Input: ignored by default; globally assignable with per-clip overrides to nearly any color space
  Processing: 32 bit floating point agnostic color engine; working space assignable per project, with independently assignable color primaries and EOTFs
  Output: RGB output in the working space or an assignable output space, or generic RGB
  Display: color space and calibration defined by the display (pro out); output conformed to directly attached interfaces using the ICC profile when a working space is assigned; LUTs available for pro output calibration

These four programs form a pretty good scale for understanding application color management.  Generally speaking, the easier an application is to set up and use, the more hands-off management it's likely to do, giving you anywhere from no control to very limited control over color management.  More advanced programs usually offer more in-depth color management features, or the ability to bypass color management completely, so that you have the finesse you need.  They also tend to preserve RGB data internally (and output that RGB data through professional video output cards), but require more knowledge of color spaces and the use of calibrated devices.

Calibrating your displays is a significant portion of the color management battle, though it’s also necessary to understand exactly what the applications are doing to the color if you want to be able to trust that what you’re seeing on the screen is reasonably close to what will be delivered to a client or to the end user.


What A Fine Mess We’re In

Keeping displays and projectors calibrated and trusting their accuracy has always been a concern, but it’s really become a major issue as the lower cost of video technologies has made the equipment more accessible, and since both the video and film production industries have shifted into modern digital productions.

“Back in the day”, analog video displays relied on color emissive phosphors for their primary colors.  The ‘color primaries’ of NTSC and PAL (and SECAM) weren’t based on the X-Y coordinates on the CIE XYZ 1931 diagram, but on the specific phosphors used in the CRT displays that emitted red, green, and blue light.  They weren’t officially defined with respect to the CIE 1931 standards until Recommendation BT.709 for High Definition Television Systems (Rec. 709) in 1990.

Around that time, with the introduction of liquid crystal displays, computer displays also had to start defining colors more accurately.  They adopted the sRGB color space in the mid-to-late nineties, using the same primaries as Rec. 709 but with a different data range and more flexible gamma control.  Naturally, both of these standards based their color primaries on… the CRT phosphors used in NTSC and PAL television systems.  And while phosphors degrade and shift over time, they don't shift anywhere near as much as the backlight of an LCD.  Meaning that prior to the early 2000s, when LCDs really took off, calibration was far less of an issue.

Now we have to worry not only about the condition of the display and its shifting calibration, but which of the multiple color spaces and new EOTFs (gamma curves) the display or application works with, what client deliverables need to be, and which parts of the process may or may not be fully color managed with our target spaces supported.

And then we have film.  Right up until the advent of end to end digital production, film had the massive benefit of “what you see is what you get” - your color space was the color space of the film stock you were using for your source, intermediates, and masters.  Now with the DCI standard of using gamma corrected CIE X’Y’Z’ values in digital cinema masters, you have to be far more cautious of projector calibration: it’s not possible to convert from CIE X’Y’Z’ into proper color output without regularly measuring the projector’s actual output values.  And we’re not going to talk about the nightmare of DCI white points and desktop displays that use the DCI-P3 color space.

Oh, and by the way, every camera sees the colors differently than the actual color spaces you’re trying to shoot in, and may or may not be conforming the camera color primaries to Rec 709, DCI-P3, or something else.  Because this needed to be more complicated.

Fortunately, with a basic understanding of color management and color calibration navigating the modern color problems is actually much more manageable than it all appears on face value.  In our next post we’re going to be discussing RED Digital Cinema’s Image Processing Pipeline 2 (IPP2), and why it’s the perfect paradigm for solving the modern color management problem.


But in the meantime, if you’re working in the Utah area and want to figure out the best way of calibrating your workspace or home office, give us a call.  We’ve got the right equipment and know how to make sure that when you look at your display or projector, you’re seeing as close to the standards as possible.

Color and deliver with confidence: make sure it’s calibrated.
 

Written by Samuel Bilodeau, Head of Technology and Post Production


ADDENDUM:

Color management and calibration are trickier than I've made them sound.  I've simplified a few things and tried to be as clear as possible, but there are many, many gotchas in the process of preserving color that can make it maddening.  And this is one area where a small amount of knowledge and trying to do things yourself can get you into huge amounts of trouble really quickly.

Trial and error is important to learning, and often it’s still the only way to feel out exactly what an application is doing to your files.  But be smart: calibrate your displays and let the programs manage things for you, unless you’re intending on experimenting and know the risks associated with it.

 

Footnotes:

1 Note, this is not a bad thing.  In most cases it’s a good thing.  It’s just something to be aware of and to understand how it works.

2 It's also possible to use lookup tables to shape the signal for viewing on a reference display.  Here, the software will measure the actual values produced by the display, and calculate the offsets as values to put in a 3D LUT.  When attached to multiple displays using the same professional signal, LUTs should be applied using external hardware; when attached to one display only, it's acceptable to apply the LUT in the software application generating the output signal or in a hardware converter.  Ensure that the LUT is not applied anywhere in the signal chain upstream of the final output recording.
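For the curious, a calibration LUT in the common .cube format is just a text file of corrected RGB triplets sampled on a grid.  Here's a hedged sketch that writes an identity 3D LUT; a real calibration LUT would replace the correct() placeholder with the offsets calculated from the colorimeter measurements.

def correct(r, g, b):
    # Identity placeholder: a real calibration LUT would return values adjusted
    # according to the offsets measured from the display
    return r, g, b

size = 33   # a common 3D LUT grid size

with open("calibration.cube", "w") as f:
    f.write('TITLE "identity calibration sketch"\n')
    f.write('LUT_3D_SIZE {}\n'.format(size))
    # .cube convention: red varies fastest, then green, then blue
    for b in range(size):
        for g in range(size):
            for r in range(size):
                out = correct(r / (size - 1), g / (size - 1), b / (size - 1))
                f.write('{:.6f} {:.6f} {:.6f}\n'.format(*out))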

3 This is a big problem with the iMac, or any other Wide Gamut / DCI-P3 display.  Colors will look different than expected without enabling color management within After Effects.

4 At least it did, until DaVinci Resolve Beta 14b8, 14b9, and 14.0 release - the option to flag on and off color management for the display disappeared with this update and I haven’t had time to test whether it’s on by default, works under Windows, or whether they’ve gone a different way with their color management.

Adobe Premiere CC 2017 - Real World Feature Review

About two weeks ago Adobe released their 2017 update to Creative Cloud, and because of a couple of projects that I happened to be working on at the time, I figured I’d download it immediately to see if I could take advantage of some of the new features.

If you want the TL;DR review, the short version is this: most of the features offer genuine improvements, but range in usefulness from incredibly useful to just minor time savers; a few, though, are utter crap.

Side note: I considered talking about the new features found in Adobe After Effects, but really, there’s not much to say other than: they work? Largely they’re just performance increases accomplished by moving things to the GPU, broader native format support, time shortening templating, and better integration with a few other Adobe CC products.  If you look at their new features page, you should be able to pretty quickly figure out which ones could be important to you, and there’s not much else to say about them other than “they work”.

Premiere is a different animal though, and I can’t say that all of the new features work properly.  But let’s start with the positives, of which there are many.

First and foremost, 8K native R3D imports.

This was expected, and necessary.  And while not ‘featured’ as part of their summaries, it is there and it works.  That’s a boon to all of us shooting on Helium sensors, and to our clients.  So far we’ve been running 8K ProRes or 2K proxies for our clients so they could edit with our footage; now they can take care of mastering with the 8K themselves (if they want).  So definitely a plus.

Second, the new native compression engine supporting DNxHD and DNxHR.

To me, this is a big plus.  I keep looking for a solid alternative to ProRes for my workflows, and while they don’t yet support the DNxHR 444, they do solidly support DNxHR HQX.  Since a significant portion of my usual workflows are built on 12 bits per channel and roundtripping between Adobe and DaVinci, having a solid 12 bit 422 cross-platform alternative to ProRes may finally let me get rid of DPX.

Third, the new audio tools.  Oh, thank god, the new audio tools.

I happen to be working this week on a short project doing sound design and light mixing (I’ll link to it when it’s up) and the new audio tools in Premiere have been a massive time saver.  If you’ve ever tried to do audio work directly in Premiere before, you’ll know how maddening it’s been dealing with their unresponsive skeuomorphic effect control knobs.  Even doing basic EQ meant flagging values on and off and struggling to get things as precise as you wanted.

Adobe Premiere CC 2015.3 EQ

Adobe Premiere CC 2015.3 Pitch Shifter

But the new audio UX is… well, fantastic.  I really can't praise it enough.  The effect controls are still skeuomorphic (which I actually think is important in this case) but look classier, and more importantly actually respond really quickly to the changes you want to make.  They've expanded the tool set and the effects run more quickly.  I couldn't be happier - this alone saved me hours of frustration and headaches this week.

Adobe Premiere CC 2017 EQ

Adobe Premiere CC 2017 Pitch Shifter

Fourth, the new VR tools.

So the same project I was doing sound design on happens to be a stereoscopic VR project.  So immediately, the promise of new VR tools was exciting - what more would they let me do, I wondered?

Install, fire it up, and… not much, actually.

Here are basically all of the new VR tools I could find:

  • Automatically detect the VR properties of imported footage, but only if they were properly flagged with metadata (marginally useful, not really useful)

  • Automatically assign VR properties to sequences if you create a new sequence based on properly flagged VR footage.

  • Manually assign VR properties to sequences, allowing you to flag stereoscopic (and the type of 3D used, if any). The sequence flagging allows Premiere to automatically flag for VR on export, when supported.

  • Embed VR metadata into mp4 files in the H264 encoder module, instead of just QuickTime.

  • Connect seamlessly to an Oculus or other VR headset with full 360 / 3D output.

Is this 2015.3 or 2017?

And that's… it.  Really?  I mean, there is actually no difference between the viewers in 2015.3 and 2017 - both handle stereoscopic properly; assigning the VR flags to sequences and then embedding the necessary metadata on export is VERY useful.  But I would really LOVE to see an editor trying to edit with a VR headset.  Or color correct, for that matter.  It's fine for reviewing what you've got, sure, but not for the bulk of what you're doing.

I should note that Premiere chokes on stereoscopic VR files at resolutions greater than 3K by 3K, which makes mastering footage from the GoPro Odyssey interesting, since it comes back from the Google Jump VR system as 8K by 8K mp4s.  Even converting to a full ProRes 422 intermediate at 4K by 4K proved too data heavy for Premiere to keep up with on an 8 Core MacPro.

But it’s not only VR performance that’s an issue: it’s still missing a whole bunch of features that would really make it a useful VR tool.  Where are my VR aware transitions?  What about VR specific effects, like simple reframing?  Where is my VR support in After Effects?  Why can’t I manually flag footage as VR if it didn’t have the embedded metadata?  What about recognizing projections other than equirectangular?  They have a drop down for changing projection type on a timeline, but equirectangular is the only option.  What about native ambisonic audio support? Or even flagging for ambisonic audio on export?

Don't get me wrong, what they've done isn't bad; it does work, and is an improvement.  It's just that the tools they added were very tiny improvements on what was already there.  And I know of (and use) plugins that give Premiere and After Effects many of the VR features that I need to actually work in VR.  But it's really difficult, almost impossible, to get by without the 3rd party plugins.

Maybe I’m just jaded and judgmental, in part because of my reaction to the HDR 'tools' they announced, but when you advertise “New VR Support” as the second item on the new features list, it had better be good support.  Like, you know, actually work as well in VR as you can in standard 2D video.  If I, as a professional, require third party plugins to your program to make it work at the most basic level, it’s not the turnkey solution you advertise.  I’m sure that more tools are in the works, but for now, it feels lackluster and an engineering afterthought rather than an intelligent feature designed for professionals.


But don’t worry, that’s not their most useless feature change.  Let’s talk about their new HDR tools.

What. The. Hell.

This is how using the new HDR 'tools' in Premiere 2017 feels.

I mean that.  With all of my heart.

I might be a little biased on the subject, but honestly I question who in their right mind decided that what they included was actually something useful.

It’s not.

It’s utter shit.

But worse than that, as-is it’s more likely to hurt the broader adoption of HDR than to help it.

And no, I’m not exaggerating.


On paper, the new HDR tools sound amazing.  HDR metadata on export!  HDR grading tools!  HDR Scopes!  Full recognition of HDR files!  Yay!

In practice, all of these are useless.

Let me give you a rundown of what the new HDR tools actually do.

Premiere now recognizes SMPTE ST.2084 HDR files, which is awesome.  But only if the proper metadata is already embedded in the video stream, and then only if it’s an HEVC deliverable file.  Not a ProRes, DPX, or other intermediate file; only HEVC.  And like VR support above, there’s no way to flag footage as already being in HDR or using BT.2020 color primaries.  Which ends up being a massive problem, which I’ll get to in a minute.

When you insert properly flagged HDR footage into a sequence, you get a pleasant surprise: hard clipping at 120 nits on your viewer or connected external display.  It's honestly the worst clipping I've seen.  And there's no way to turn it off.  If you export the clip into any format without the HDR metadata flag enabled, you get the same hard clipping.  And since you can only flag for HDR if you're exporting to HEVC, you can't export HDR graded or processed through Premiere as DPX, ProRes, TIFFs, OpenEXR, or any other intermediate format.

This is why in my article on Grading and Mastering HDR I mention that it’s really important to be using a color space agnostic color grading system.  When the application includes color management that can’t be disabled, your options become very limited.

Also, side note, their HEVC encoder needs work - it’s very slow at the 10 bits you need for HDR export.  I expect it’s better on the Intel Kaby Lake chips that include hardware 10 bit HEVC encoder support that, oh wait, don’t exist for professionals yet (2017 5K iMac maybe?)

But at least with the metadata flagging you can bypass the FFMPEG / x265 encoder that you’ll have needed up to this point to properly encode HDR for delivery, right?

Why would you think that?  Of course you can’t.

Because if you bring a ProRes, DPX, or other intermediate file into Premiere, there's no way to flag it as HDR, and Premiere doesn't recognize embedded metadata saying it's HDR the way DaVinci and YouTube do.  So if you use these intermediates as a source (individually or assembled in a sequence) and flag for HDR on export, Premiere runs a transform on the footage that scales it into the HDR range as if it were SDR footage.

12 Bit ProRes 4444 HDR Intermediate in Timeline with 8 Bit Scope showing proper range of values

12 Bit ProRes 4444 HDR Intermediate in Timeline with HDR Scope showing how Premiere CC 2017 interprets the intermediate if you flag for HDR on export

When is that useful? If I have a graded SDR sequence that I want to encode into the PQ HDR space, while keeping 100% of the limits of an SDR image.  Because why the hell not.
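For a sense of scale here (this is just the standard PQ encoding math, not a claim about Premiere's exact internals): when SDR levels are re-wrapped into ST.2084, SDR reference white at 100 nits only reaches about half of the PQ code range, and everything above it goes unused.

# ST.2084 (PQ) constants from the published standard
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def nits_to_pq(nits):
    # Inverse EOTF: absolute luminance in nits -> normalized PQ code value
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

print(round(nits_to_pq(100), 3))    # ~0.508: SDR reference white sits near mid-scale
print(round(nits_to_pq(1000), 3))   # ~0.752: where a 1000 nit highlight belongs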

But never fear!  Premiere has included new color grading tools for HDR!

Well, they aren’t horrible, which I suppose is a compliment?

How to enable HDR Grading in Premiere 2017

To enable HDR grading you need to change three different settings.  From the Lumetri context menu in your Lumetri Panel, you need to select "High Dynamic Range" to enable the HDR features; on the scopes you'll need to switch the scale from "8 Bit" to "HDR" (and select BT.2020 from the scope settings); and if you actually want to see those HDR values on the scope, you'll need to enable the "Maximum Bit Depth" flag in your Sequence Settings.  I'm sure there's a fantastic engineering explanation for that last one, but it's not exactly intuitive or obvious, and it took me a bit of hunting to figure out.

Maximum Bit Depth needs to be turned on in Sequence Settings to enable proper HDR Scopes

HDR Scopes WITHOUT Maximum Bit Depth Flag

HDR Scopes WITH Maximum Bit Depth Flag

Once you’ve enabled HDR grading from the Lumetri drop down menu, you’ll get a few new options in your grading panels.  “HDR White” and “HDR Specular” come available under the Basic Correction panel, “HDR Range” comes available under the Curves panel, and “HDR Specular” comes available under the Color Wheels panel.

The HDR White setting seems to control how the other sliders of the Basic Correction panel behave, almost like changing the scale.  The higher the HDR White value, the less effect exposure adjustments have and the greater the effect of contrast adjustments.  The HDR Specular slider controls just the brightest whites, almost like the LOG adjustment I use in DaVinci Resolve Studio.  This applies to both the slider under Basic Correction and the wheel under the Color Wheels panel.  HDR Range seems to change the scale of the curves, similar to how the HDR White slider does for the basic corrections.

All of this, by the way, I figured out from watching the scopes, not the output image.  I've tried hooking up a second display to the computer and hooking up our BVM-X300 through our Ultrastudio 4K to Premiere, but to no avail - the output image is always clipped to standard video levels and output in gamma 2.4.

Which, if you ask me, severely defeats the purpose of having HDR grading tools to begin with. Here’s a great idea: let’s allow people to grade HDR, but not see what they’re grading.  Which is like trying to use a table saw blindfolded.  Because that’s a thing people do, right?  Which brings me back to my original premise: What. The. Hell.

When you couple that little gem with the hard clip scaling, you realize that the only reason the color grading features are in this particular version is to make the process of cross grading from SMPTE ST.2084 into SDR easier, and nothing else.

No fields for adding HDR10 Compliant Metadata on Export. That's okay, you shouldn't use their exporter anyway (at least not this version)

Oh, and one last thing, of course - the real kicker: you can't even export HDR10 compliant files.  Yes, I know I said that in the HEVC encoder you can flag for ST.2084, but you can't add any MaxFALL, MaxCLL, or Master Display metadata.  And yes, I double checked that Premiere didn't casually put those into the file without telling you (it doesn't).

And it has zero support for Hybrid Log Gamma.  Way to pick a side, Adobe.


So passions aside, let’s run down the list again of new HDR tools and what they do:

  1. Recognize SMPTE ST.2084 files, but only when already properly flagged in HEVC streams and no other codec or format.

  2. Export minimal SMPTE ST.2084 metadata to flag for HDR, but only works if your source files are already in the HEVC format and already properly HDR flagged (see #1), or if they’re graded in HDR in the timeline, which you can’t see. Which renders their encoder effectively useless.

  3. Enable HDR grading through a convoluted process, with a minimal but useful set of tools. But you can’t see what you’re doing, so I'm not sure why they're there.

  4. There is no bullet point 4. That’s literally all it does.

The question that I have that I keep coming back to is “who do they think is going to use these tools?”  It feels like the entire feature set was a “well, we need to include HDR, so get it in there”.  But unlike the VR tools that you can kind-of build into, these HDR “tools” (I use the word loosely) are really problematic, not just because the toolset is incomplete but because the way that the current tools are implemented is actually harmful to a professional workflow.

Call it simple feature bandwagoning, or engineers that didn’t consult real creative professionals, or blame it on whatever reason you will.  But the fact is, this ‘feature’ is utter shit, which to me sours the whole release, just a little.

My biggest concern here is that while someone like me, who's been working with HDR for a while now, can tell that these will hurt my workflow, Premiere is an accessible editing platform for everyone from amateurs to professionals.  And anyone looking to get into HDR video may try to use these tools as their way in, and their results are going to be terrible.  God awful.  And that hurts everyone - why would we want to adopt HDR when 'most of what people can do' (meaning the amateurs and prosumers who don't know any better) looks bad?

So basically, if Premiere is part of your HDR workflow, don't even think about using their new 'tools'.

HDR Rant over, let’s bring this back to the positive.


Just to reiterate, the new audio tools in Premiere CC 2017 are fantastic.  I can't emphasize that enough.  Most of the rest of the features added are pretty good.  The new team projects collaboration tools, though I haven't had a chance to use them, appear to work well (though they're still in beta).  The new captions are useful, the new visual keyboard layout manager is fantastic (though WAAAY overdue!), and the other under-the-hood adjustments have improved performance.

Should you upgrade?  Yes!  It’s a great upgrade!  Despite my gripes I’m overall happy with what they did!

Just don’t try to use it for HDR yet, and be aware that the new VR tools aren’t really that exciting.

Written by Samuel Bilodeau, Head of Technology and Post Production

How to Upload HDR Video to YouTube (with a LUT)

Today YouTube announced via their blog official HDR streaming support.  I alluded to the fact that this was coming in my article about grading in HDR because we've been working with them the past month to get our latest HDR video onto the platform. It's officially live now, so we can go into detail.


How to Upload HDR Video to YouTube

Similar to VR support, there are no flags on the platform itself that allow you to manually mark a video as HDR after it's been uploaded, so the uploaded file must include the proper HDR metadata.  But YouTube doesn't support uploading in HEVC, so there are two possible pathways to getting the right metadata into your file: DaVinci Resolve Studio 12.5.2 or higher, or the YouTube HDR Metadata Tool.  They are generally outlined on the YouTube support page, but not very clearly, so I think more detail is useful.

I did include a lengthy description on how to manage HDR metadata in DaVinci Resolve Studio 12.5.2+, with a lot more detail than they include on their support page, so if you want to use the Resolve method, head over there and check that out.  I've covered it once, so I don't see the need to cover the how-to's again.

I should note that Resolve doesn't include the necessary metadata for full HDR10 compatibility, lacking fields for MaxFALL, MaxCLL, and the Mastering Display values of SMPTE ST.2086.  It does mark the BT.2020 primaries and the transfer characteristics as either ST.2084 (PQ) or ARIB STD-B67 (HLG), which will let YouTube recognize the file as HDR Video.  YouTube will then fill in the missing metadata for you when it prepares the streaming version for HDR televisions, by assuming you're using the Sony BVM-X300.  So this works, and is relatively easy.  BUT, you don't get to include your own SDR cross conversion LUT; for that you'll need to use YouTube's HDR Metadata Tool.

 

***UPDATE: April 20, 2017*** We've discovered in our testing that if you pass uncompressed 24 bit audio into your QuickTime container out of some versions of Adobe Media Encoder / Adobe Premiere into the mkvmerge tool described below the audio will be distorted.  We recommend using 16 bit uncompressed audio or AAC instead until the solution is found.

 

YouTube's HDR Metadata Tool

Okay, let's talk about option two: YouTube's HDR Metadata Tool.  

Alright, not to criticize or anything here, but the VR metadata tool comes in a nice GUI, while the link to the HDR tool sends you straight to GitHub.  Awesome.  Don't panic, just follow the link, download the whole package, and un-zip the file.

So the bad news: whether you're working on Windows or on a Mac, you're going to need to use the command line to run the utility.  Fire up Command Prompt (Windows) or Terminal (MacOS) to get yourself a shell.

So the really bad news: if you're using a Mac, the binary you need to run is actually inside the app package mkvmerge.app.  If you're on Windows, drag the 32 or 64 bit version of mkvmerge.exe into Command Prompt to get things started; if you're on MacOS, right click on mkvmerge.app, select "Show Package Contents", and drag the binary file ./Contents/MacOS/mkvmerge into Terminal to get started:

Right click on mkvmerge.app and select "Show Package Contents"

Drag the mkvmerge binary into Terminal

The README.md file includes some important instructions and the default syntax for running the tool, with the assumption that you're mastering in SMPTE ST.2084 on the Sony BVM-X300.  I've copied the relevant syntax here (I'm using a Mac; the lines "If using a LUT, add the lines" and "In all cases end with" are instructions rather than part of the command, and replace the file paths marked with *s with your own content:)

./hdr_metadata-master/macos/mkvmerge.app/Contents/MacOS/mkvmerge \
-o *yourfilename.mkv* \
--colour-matrix 0:9 \
--colour-range 0:1 \
--colour-transfer-characteristics 0:16 \
--colour-primaries 0:9 \
--max-content-light 0:1000 \
--max-frame-light 0:300 \
--max-luminance 0:1000 \
--min-luminance 0:0.01 \
--chromaticity-coordinates 0:0.68,0.32,0.265,0.690,0.15,0.06 \
--white-colour-coordinates 0:0.3127,0.3290 \

If using a LUT, add the lines
--attachment-mime-type application/x-cube \
--attach-file *file-path-to-your-cube-LUT* \

In all cases end with
*yourfilename.mov*

Beyond the initial call to the binary or executable, the syntax is identical on MacOS and Windows.

The program's full syntax can be found here, but it's a little overwhelming.  If you want to look it up, just focus on section 2.8, which includes the arguments we're using here.   The first four arguments set the color matrix (BT.2020 non-constant), color range (broadcast), transfer function (ST.2084), and color primaries (BT.2020) by referencing specific index values, which you can find on the linked page.  If you want to use HLG instead of PQ, switch the value of --colour-transfer-characteristics to 0:18, which flags for ARIB STD-B67.

(Note to the less code savvy: the backslashes at the end of each line allow you to break the syntax across multiple lines in the command prompt or terminal window.  You'll need them at the end of every line you copy and paste in, except for the last one)

The rest of the list of video properties should be fairly self explanatory, and match the metadata required by HDR10, which I go over in more detail here.
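If you're wondering where numbers like the 1000 and 300 in the command above come from: MaxCLL is the brightest single pixel anywhere in the program and MaxFALL is the brightest frame-average light level, both in nits.  Here's a rough numpy sketch of computing them from PQ (ST.2084) encoded frames - decoding the frames themselves is left to you, and the constants are the published ST.2084 ones.

import numpy as np

# ST.2084 (PQ) EOTF constants from the published standard
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(code):
    # Convert normalized PQ code values (0-1) to absolute luminance in nits
    p = np.power(code, 1.0 / m2)
    y = np.power(np.maximum(p - c1, 0.0) / (c2 - c3 * p), 1.0 / m1)
    return 10000.0 * y

def light_levels(frames):
    # MaxCLL / MaxFALL from an iterable of PQ-encoded RGB frames (height x width x 3, 0-1)
    max_cll, max_fall = 0.0, 0.0
    for frame in frames:
        nits = pq_to_nits(np.asarray(frame)).max(axis=2)   # per-pixel level = max of R, G, B
        max_cll = max(max_cll, float(nits.max()))          # brightest single pixel so far
        max_fall = max(max_fall, float(nits.mean()))       # brightest frame average so far
    return max_cll, max_fall

# Example: one synthetic frame whose every pixel is PQ code 0.75 (roughly 1000 nits)
cll, fall = light_levels([np.full((10, 10, 3), 0.75)])
print(round(cll), round(fall))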

Now, if you want to include your own SDR cross conversion LUT, you'll need to include the arguments --attachment-mime-type application/x-cube, which tells the program you want to attach a file that's not processed (specifically, a cube LUT), and --attach-file filepath, which is the actual file you're attaching.

If you don't attach your own LUT, YouTube will handle the SDR cross conversion with their own internal LUT.  It's not bad, but personally I don't like the hard clipping above 300 nits and the loss of detail in the reds, but that's largely a personal preference.  See the comparison screenshots below to see how theirs works.

Once you've pasted in all of the arguments and set your input file path, hit enter to let it run and it'll make a new MKV.  It doesn't recompress any video data, just copies it over, so if you gave it ProRes, it'll still be the same ProRes stream but with the included HDR metadata and LUT that YouTube needs to recognize the file.

Overall, it's a pretty fast tool, and extremely useful beyond just YouTube applications.  You can see what it's done in the set of screenshots below.  The first is the source ProRes clip, the second is the same after passing it through mkvmerge to add the metadata only, and the third went through mkvmerge to get the metadata and my own LUT:

ProRes 422 UHD Upload Without Metadata Injection

ProRes 422 UHD Upload in MKV File. Derived from the ProRes File above and passed through the mkvmerge tool to add HDR Metadata, but no LUT.

ProRes 422 UHD Upload in MKV file. Derived from the ProRes file above and passed through the mkvmerge tool to add HDR Metadata and including our SDR cross conversion LUT. Notice the increased detail in the brights of the snake skin, and the regained detail in the red flower.


All of us at Mystery Box are extremely excited to see HDR support finally widely available on YouTube.  We've been working in the medium for over a year, and haven't been able to distribute any of our HDR content in a way that consumers would actually be able to use.  But now, there's a general content distribution platform available with full HDR support, and we're excited to see what all creators can do with these new tools!

Written by Samuel Bilodeau, Head of Technology and Post Production