
Dolby Vision and Independent Filmmaking



Jacob Schwarz and Samuel Bilodeau of Mystery Box will be presenting on grading in HDR and using Dolby Vision at the Dolby Booth at NAB 2019 (SU1702), on Monday, April 8 @ 5pm.


Come by and see us there!


In the last few decades, the costs of creating high-quality commercial and narrative content have dropped, and independent content creation is now more accessible and more affordable than ever. But while HD, 4K, and 8K formats are dropping in price, and workflows are limited only by processing power, the newer HDR video formats still carry a premium price tag that leaves independent filmmakers and content creators questioning whether it’s worth it to master in HDR.


We completely understand the struggle. We’re independent filmmakers. Our content production is either commercial client work or self-financed, and we make hard decisions over which technologies to embrace and which to pass on. HDR is one of those technologies we feel every independent producer needs to embrace wholeheartedly and without reservations. It’s not just because we’re gearheads who love working with the cutting edge of new technologies, either. No, the reason we feel that everyone should embrace it is that it offers content creators more control over how their work is seen, and gives creators a bigger palette to work with, to tell the stories we all need to tell.


Right now we’re in a period of transition, where the new HDR technologies are here, but aren’t readily available everywhere. So you should wait, right?


No.


There are three things you should really keep in mind when thinking about HDR. First, we’re always in a period of transition from older technology to something newer - that’s never going to change or stop. The choice has always been either to keep up with technology, or to wonder how you got left behind. Second, consumers have adopted HDR faster than any previous technology, and nearly every television on sale in the United States has some form of HDR capability: consumers have affordable HDR before creators do. This isn’t something that’s happened before. And third, HDR is designed to replace SDR, not to coexist with it or build on it. SDR on television screens and in theaters is a legacy of the limits of cathode-ray and film projection technology that no longer applies to our flat panel digital age. It’s not a question of “if” you will need to embrace HDR, but when.


Mystery Box has been working with Dolby over the course of the last year, talking about the challenges of independent HDR content creation and what needs to happen to bridge the divide between Hollywood, where HDR’s been embraced by the streaming services and studios with open arms, and the independent content creation market, where HDR is still painfully misunderstood and distant.


One of the biggest fears about HDR is the fear of losing the image quality that we’ve spent years building and honing, that defines our unique styles and the looks we know we can create for our clients. That’s a real and important fear, driven by some of the challenges with the medium - problems with the HDR to SDR conversions on platforms like Vimeo or YouTube, problems with the availability of grading tools, techniques, and mastering technologies, and problems with LUTs not translating into HDR and not giving us the results we expect. All of these problems are solved with Dolby Vision HDR.


Here at Mystery Box we feel that Dolby Vision HDR is the best bridge that content creators can use as they embrace HDR. It minimizes most of the fears around HDR quality, especially interpretation and presentation, and within the last year has become affordable for smaller post production houses and independent filmmaking.


In our post today, we want to walk you through how Dolby Vision works as an essential part of a complete HDR post production workflow, and how it will help you create even more stunning content in the future.


 

Background


HDR in video is a new tool available to filmmakers, not a ‘look’. We’ve written about it in more detail in the past. As a quick summary, the purpose of HDR video is to allow content creators to store, distribute, and display images with higher dynamic range than traditional video or cinema have allowed up to this point. Now that our modern top-of-the-line digital cinema cameras can capture 12-16 stops or more of dynamic range, we want to be able to give our viewers’ eyes an equivalent amount of dynamic range: an actual range of brightnesses, instead of the 5-10 stops of dynamic range we’ve been limited to in the past.
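If you want to put numbers on those stops: dynamic range in stops is just the base-2 logarithm of the ratio between the brightest and darkest values a display can reproduce. A minimal sketch (the display figures here are illustrative, not drawn from any particular standard):

```python
import math

def stops(peak_nits: float, black_nits: float) -> float:
    """Dynamic range in photographic stops: log2 of the peak/black ratio."""
    return math.log2(peak_nits / black_nits)

# Illustrative display figures:
print(round(stops(100, 0.1), 1))     # ~10 stops for a typical SDR panel
print(round(stops(1000, 0.005), 1))  # ~17.6 stops for an HDR target
```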


There are two ways of assigning brightness values to digital values that we use in HDR to do this.



The first is Dolby’s solution, and it’s called Perceptual Quantization, or PQ. PQ isn’t Dolby Vision, but it is what Dolby Vision is built on. It’s a log format similar to a camera log format, but it stores more detail across a broader dynamic range, and its digital values are tied to fixed screen output brightness values: a digital value of 520 will always be output as 100 nits of brightness, whether I’m in a grading suite or watching my content at home. No more problems with consumers miscalibrating their televisions, no more disparity between a 100 nit peak reference in the color suite and the 400+ nit peak on consumer televisions - what you see in the suite is what you see at home. When it’s implemented properly, PQ will look the same everywhere.
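The PQ curve itself is defined in SMPTE ST 2084. Here’s a minimal sketch of its EOTF (code value to nits) using the constants from the standard; treating the 10-bit code as full range is a simplification for illustration:

```python
# Minimal sketch of the SMPTE ST 2084 (PQ) EOTF: normalized code value
# E' in [0, 1] -> absolute display luminance in nits.
m1 = 2610 / 16384          # 0.1593017578125
m2 = 2523 / 4096 * 128     # 78.84375
c1 = 3424 / 4096           # 0.8359375
c2 = 2413 / 4096 * 32      # 18.8515625
c3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(e: float) -> float:
    """PQ code value (normalized 0-1) -> luminance in nits."""
    p = e ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

# Full-range 10-bit code 520 lands at roughly 100 nits, as described above:
print(round(pq_eotf(520 / 1023), 1))  # ~100.3
```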




The catch is that when you view PQ on a traditional SDR display, it looks washed out and flat, like a camera log format. But when you view it in HDR, it’s bold and dynamic, the way we intend it to be. The available solutions for conforming PQ to traditional SDR displays are something we’ll come back to in a minute.


The competing standard to PQ is the BBC/NHK standard: Hybrid Log Gamma, or HLG. HLG is more of a stop-gap between SDR and HDR. It’s partially backwards compatible with the SDR standards, keeping the same encoding right up to the midtones, then rolling off the highlights in a way that makes them look flat in SDR, but more brilliant in HDR. It’s fast and automatic, and makes it easy to send one signal to all televisions and expect a ‘viewable’ image. That’s why it’s favored by live broadcasters making the switch to HDR - they only need to send one image that should work on all televisions.
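The ‘hybrid’ in the name is visible in a minimal sketch of the BT.2100 HLG OETF: scene light below a threshold is encoded with a square root, which behaves like a conventional SDR gamma, while everything above rolls off logarithmically:

```python
import math

# Minimal sketch of the BT.2100 HLG OETF: normalized scene linear light
# E in [0, 1] -> normalized signal E' in [0, 1].
a = 0.17883277
b = 1 - 4 * a                  # 0.28466892
c = 0.5 - a * math.log(4 * a)  # 0.55991073

def hlg_oetf(e: float) -> float:
    """Square-root (SDR-like) below 1/12 of peak, logarithmic roll-off above."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return a * math.log(12 * e - b) + c

print(round(hlg_oetf(1 / 12), 3))  # 0.5: the point where the log roll-off begins
print(round(hlg_oetf(1.0), 3))     # 1.0: peak scene light maps to full signal
```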




But the reality is that the SDR image doesn’t look that great compared to one graded specifically for SDR. And the HDR is scene-referred - changing the brightness or contrast of the end user’s display will change how the image is rendered, making it far less consistent than PQ, and more like the wild-west inconsistencies of SDR. That’s why I consider it a stop-gap - it’ll help with the transition from one to the other, but it doesn’t have as much longevity as PQ does, especially as display technologies continue to improve. So if PQ is the format of the future, how do we work with the inconsistent technologies of today while embracing the future that will be?


There are five main ways to convert PQ HDR content into the SDR that we’re all familiar with:

  1. Direct Conversion with Clipping This is mathematically taking the brightness of PQ, mapping everything below 100 nits directly onto SDR, and clipping all of the data above 100 nits (see the sketch after this list). This is the fastest and easiest solution, but it limits what you can do creatively with HDR. It also creates gross hue shifts in the brights, as the different RGB channels clip at different levels.

  2. Algorithmic Conversion with Algorithmic Roll Off This is similar to the direct conversion method, but instead of an arbitrary hard clip, it uses an algorithm that adjusts the brightness and rolls off the highlights, or uses a variety of tone mapping techniques to try to preserve color contrast. Although the parameters are tunable in advance, the same algorithm is applied to each frame across the whole piece. The results can range from very poor to good, depending on how the shots look and how well the algorithm’s been tuned. This is essentially how HLG works, but the algorithm is built into the format.

  3. Conversion via Lookup Table (LUT) This applies a consistent transformation to the HDR data and moves it into the SDR space. It’s fast, requiring almost no processing power, but its results are a mixed bag - bad LUTs can make your image look awful, but when you combine algorithmic conversion with human refined tuning, you can get some incredible and consistent results.

  4. Adaptive Algorithmic Conversion Here an algorithm scans each shot or each frame and figures out the best way to convert it into SDR, or into the best HDR range for your television. This is what you find in SL-HDR1, SL-HDR2, HDR10+, and Dolby Vision when you just do the analysis pass. Generally the results are pretty good, though you don’t have control over what the look will be, and it’s hard to predict how each shot will be adapted into SDR.

  5. Adaptive Algorithmic Conversion with Creative Controls This is the most advanced way to convert from HDR to SDR, or to any kind of other HDR display. Here, a shot by shot analysis is paired with a colorist’s input on how to fine tune each shot to make it look the best in SDR, and intermediate HDR screen brightnesses. With the new Dolby Vision 4.0 controls, you have fine tuning over almost everything - brightness, contrast, saturation, color weighting, highlight roll off, and even hue-saturation tweaks on 6 different secondary ranges.
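To make the clipping problem from method 1 concrete, here’s a minimal sketch in linear light: a bright orange highlight that exceeds 100 nits only in its red channel clips to a noticeably more yellow hue, because the channel ratios collapse.

```python
# Minimal sketch of method 1: hard-clip linear RGB at a 100 nit SDR peak.

def clip_to_sdr(rgb_nits):
    """Direct conversion with clipping: anything above 100 nits is discarded."""
    return tuple(min(ch, 100.0) for ch in rgb_nits)

# A bright orange highlight: only the red channel exceeds the SDR peak.
hdr_orange = (400.0, 90.0, 10.0)
print(clip_to_sdr(hdr_orange))  # (100.0, 90.0, 10.0) - the red:green ratio
# collapses from ~4.4:1 to ~1.1:1, so the clipped color reads far more yellow.
```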



For the content creator or producer looking for control over how their image should look, there is no real substitute for Dolby Vision. It’s the only HDR to SDR conversion method that gives the colorist full control over how the resulting image should look in SDR. And those creative adjustments apply to how Dolby Vision adapts the content to the exact HDR range of televisions or devices, meaning that you will always get the best possible image for that device. That’s an incredibly powerful tool!


Near the end of last year (2018) we ran some internal tests on different HDR to SDR conversion methods, evaluating the quality of the resulting SDR grade compared to the original HDR. We didn’t have time to nicely package all of our results and make a post about it (which we may still do), but at the end of the day our quality rankings were as follows:

  1. Dolby Vision

  2. MBOX HDR to SDR LUT

  3. HDR10+

  4. HLG

  5. Algorithmic Conversion with Algorithmic Roll Off

  6. Other LUTs

  7. Direct Conversion with Clipping

Dolby Vision and the MBOX HDR to SDR LUT were the clear frontrunners in our test, with the third-ranked method showing probably about 60% of the quality of the two frontrunners. Dolby Vision was the clear SDR conversion winner, without even accounting for its HDR adapting abilities. And yes, we may be a little biased, but our LUT is very good (and that’s been verified by major third parties). As a note, we generated the HLG version using a flawless PQ to display-referred HLG mapping per the Rec. 2100 standard.


The point of all of this is that if you want quality in your final images, and consistency across devices and viewing conditions, you want Dolby Vision. So how do you get it?


 

The Process


Once you’ve made the decision to work and master in HDR, adding on Dolby Vision really isn’t that hard, and a basic Dolby Vision shot-by-shot automatic HDR to SDR mapping is free, giving you no excuse not to.


There are three main steps to creating an HDR and Dolby Vision master of your content:

  • HDR Grade

  • Dolby Vision Grade

  • HDR Deliverables Render: HDR Master File, Dolby Vision XML & IMF, and Derived SDR Grade Master Renders


The HDR grade is the heart of the Dolby Vision process, and the heart of future proofing your content. HDR isn’t a style or a look; it’s a container that allows you to use and allocate more color and brightness dynamic range than you could before. You can use all of it, or a more limited subset of it - the choice is yours. HDR gives you the choice by removing the shackles of the limited SDR and traditional cinema dynamic ranges.


Once you have your HDR grade, you’ll use the Dolby Vision tools to analyze it and algorithmically generate very good quality SDR and intermediate HDR grades, on a shot-by-shot basis. Then, working with your colorist, you can go shot by shot and tweak these automatic grades to match your own authorial intent.


Lastly you’ll work on your deliverables: an HDR master file, a Dolby Vision Mezzanine, an XML of the Dolby Vision metadata, and a derived SDR grade from the Dolby Vision process.


Let’s take a look at how each of those steps works in practice.


HDR Grading

Shortly after we first started working in HDR, we published a “How To” guide to grading in HDR, which you can find here. At the time, it reflected the best practices and tools we could find. Our current approach to HDR grading is much faster, more intuitive, and uses some custom tools that we’ve built to do specific things. We intend to publish the full workflow in depth in the future, but for now we’ll just give the summary.


The first step in our HDR grading process is to take what the camera saw and translate it into a working space, what we call “developing”. Developing in this context is akin to making an interpositive in the film world. We’re converting what the camera saw into our working space, adjusting its white balance, doing any push-pull processing to exposure, and adding a global contrast curve.

In the node tree below you can see our general layout for development: node 1 handles the conversion into the working space, nodes 2 and 3 use our EV Adjustment LUTs to do push-pull processing on the exposure, and node 4 applies global contrast by stretching out the dynamic range above and below the image midtones, again using our own LUT. We control the strength of the LUTs using the node opacity - this gives us near-infinite precision in exposure adjustments and dynamic range expansions (contrast), really allowing us to quickly dial in an image. The last node handles the RGB balance (white balance) adjustments using traditional color wheels, and any primary lift-gamma-gain-saturation global adjustments we want to make to the image coming out of development.
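That opacity trick is simple to express: at opacity alpha, the node’s output is a linear blend of its input and the LUT-processed result, so a fixed-strength LUT becomes continuously adjustable. A minimal sketch - the exposure push below is a hypothetical stand-in for our actual EV Adjustment LUTs, which operate on PQ-encoded data and aren’t a simple multiply:

```python
# Minimal sketch of using node opacity to scale a LUT's strength:
# output = (1 - alpha) * input + alpha * LUT(input)

def apply_lut_with_opacity(pixel, lut, alpha: float):
    """Blend the original pixel with the LUT result; alpha in [0, 1]."""
    processed = lut(pixel)
    return tuple((1 - alpha) * p + alpha * q for p, q in zip(pixel, processed))

# Hypothetical stand-in for an exposure push LUT:
push = lambda pixel: tuple(min(ch * 1.1, 1.0) for ch in pixel)

print(apply_lut_with_opacity((0.40, 0.35, 0.30), push, alpha=0.5))
# (0.42, 0.3675, 0.315) - halfway between the original and the full adjustment
```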









For some RAW formats, like REDCODE RAW, we can handle push-pull processing, contrast, exposure, and white balance adjustments using the RAW tools. But we prefer using the more unified working space to handle those adjustments, since it works equally well with all cameras and all log formats, and in testing we’ve found that we have more finesse using this structure than using the RAW interface.


For a working space, our company preference is to use Rec. 2020 color primaries with a PQ transfer function, but you may choose to use any camera log as a working space, or any of the ACES based working spaces. We prefer PQ because we’re frequently dealing with content from a variety of cameras, and bringing everything into a PQ HDR space gives us the most latitude in working with our images. It’s the container with the most dynamic range available while still being a log format (and so it behaves like a log format when grading), and it’s the final master / delivery format for the content, so we’ll end up there anyway. We’ve found that most camera log formats truncate the available dynamic range and add noise to the lows when used as a working space, and ACEScc has some quirks when using traditional lift-gamma-gain controls that give up a lot of precision, which is why we choose to use PQ.




We’ve had the same problems with the ACES AP0 and AP1 color primaries, and with most camera wide-gamut primaries (the exception being Sony’s Sgamut3.cine) - they’re too wide and are rotated pretty far from the traditional hue directions of the Rec. 709 controls, which leaves me feeling like I have no finesse in my hue controls. I’m not fond of the quality of the transfers out of ACES into a variety of delivery formats either, but you may have better luck with them than we do. Rec. 2020 primaries with a PQ curve seems to be a happy compromise for us. You may choose to do something different, like use the Dolby recommended P3 D65 color space (we usually clamp our color output to P3 even though we work in Rec. 2020) - like with all things HDR, the choice is yours.



That said, I’d strongly recommend against using HLG as a working space - it takes a while to dial in your display before you can be confident that what you’re seeing matches what you should be seeing. HLG also has a dynamic range that’s dependent on your display settings, rather than being something empirical. It’s still a very good choice for live production, but it’s not a great choice as your working space for grading. It also responds aggressively to common adjustments like lift-gamma-gain, which makes it more difficult to work with than a log format.


The second step of HDR color correction brings in the secondary adjustments. We focus on the color tones within the image as a whole, or on select parts of the image. Usually this involves wrangling the hues and saturations into the ranges we want for the base balance, to optimize the original look of what was shot.


Next, we move on to local contrast, using power windows to adjust the brightness and contrast in specific spatial regions, and range limited adjustments to apply brightness and contrast to specific color or brightness regions. Localizing contrast is more important in HDR than it is in SDR - unlike SDR where you’re really limited to 5-8 stops of dynamic range, the bigger HDR container pushes against the limits of our eyes, which is around 10 stops of dynamic range within your central field of view. So while the overall image may have 12-15 stops of dynamic range, a specific part of the image may still look flat on a large television, since it may only have 3 or 4 stops of dynamic range, and is big enough to fill our 15-18 degree perifovea (greater part of the center of the retina).




The third step we add to the content is the look, which you can apply globally to an entire reel, or locally to a scene. Are the midtones shifted warm or cool? Do the speculars peak in a specific range? Do I want a nice roll-off into the highlights, or abrasive clipping?

The last step in the color correction is our conforming step, where we convert the image into what our display sees (if we’re grading in something other than PQ), or add limits to the video signal to match our intended target - such as clipping at a 1000 nit white point and a 0.005 nit black point to ensure our content looks good on LCD televisions, or clamping to P3 D65 even though we’re working in Rec. 2020. There are a bunch of these little conforming tricks we use to make sure that our masters look as good as possible.
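A minimal sketch of that kind of range clamp, working in linear light (nits) for clarity; the 1000 / 0.005 nit limits are the ones mentioned above:

```python
# Minimal sketch of a conforming clamp: limit linear-light values per channel
# to a 0.005 nit black point and a 1000 nit white point before re-encoding.

BLACK_NITS = 0.005
WHITE_NITS = 1000.0

def conform_range(rgb_nits):
    """Clamp each channel of a linear-light pixel to the target display range."""
    return tuple(min(max(ch, BLACK_NITS), WHITE_NITS) for ch in rgb_nits)

print(conform_range((0.001, 180.0, 2800.0)))  # (0.005, 180.0, 1000.0)
```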


Dolby Vision Grade


Once we’ve got our HDR grade done, we can either render it out as a master and bring it back in as pre-rendered video, or move right on to the Dolby Vision grade using our existing HDR timeline. Because we’re often working with high resolution 8K RAWs, but have extremely fast storage available, we’ll usually run a full intermediate so that Dolby Vision runs a little faster (since it doesn’t have to decode a raw, then apply all of the grade nodes before doing its own analysis), but depending on how much time you have available, and how intense your grades are, this may not be a problem.



However you get to the Dolby Vision step, the first thing you’ll want to do is turn on Dolby Vision and run an analysis pass on your HDR content. The analysis pass extracts three key pieces of metadata from each frame: the darkest value, the brightest value, and the statistical midtone value - what Dolby calls Level 1, or L1, metadata. These three values are fed into Dolby’s content mapping algorithm to generate what it thinks is the best conversion to SDR (or another HDR target of your choice). Once the analysis of a shot is done, you can turn on the content mapping, either through the integrated content mapping unit (iCMU) in your grading application, like DaVinci Resolve, or through a purchased external CMU (eCMU) from Dolby.

Enabling Dolby Vision in DaVinci Resolve 15, using the eCMU for Dolby Vision 2.9, and the iCMU for Dolby Vision 4.0
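Conceptually, the L1 pass reduces every shot to those three numbers. Here’s a minimal, illustrative stand-in for the idea - the real analysis runs in Dolby’s defined color space with its own statistics, so the simple min / max / mean below is only a sketch:

```python
# Illustrative stand-in for a Dolby Vision L1 analysis pass: reduce each
# shot to (min, mid, max) brightness statistics fed to the content mapper.

def analyze_shot(frames):
    """frames: iterable of per-frame pixel luminance values (normalized PQ)."""
    values = [lum for frame in frames for lum in frame]
    l1_min = min(values)
    l1_max = max(values)
    l1_mid = sum(values) / len(values)  # stand-in for Dolby's midtone statistic
    return l1_min, l1_mid, l1_max

shot = [[0.05, 0.42, 0.71], [0.04, 0.45, 0.69]]  # two tiny "frames"
print(analyze_shot(shot))  # (0.04, 0.3933..., 0.71)
```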

There are pluses and minuses to each - we use an eCMU because it allows us to work in 4K, with both our HDR and our SDR image up at the same time. On the downside, we can’t use signal frame rates higher than 30p, so when mastering 50 or 60p content we play it back at 25 or 30p for the CMU. Using the iCMU in Resolve, I can get a 4K mapped image at any frame rate, or a 1080 HDR and a 1080 SDR (content mapped) version using the dual output feature of the Ultrastudio 4K Extreme, but I can’t get dual 4K images. Resolve’s iCMU also supports Dolby Vision 4.0, while the eCMU doesn’t yet (but will soon).


The L1 analysis and Dolby Vision metadata export is free. That’s right, anyone with a Resolve Studio license (or a license to any other color correction software that supports Dolby Vision) can use Dolby Vision to analyze and generate a shot-by-shot SDR grade of their content at no additional cost! Dolby Vision 2.9 was very good at doing this, and Dolby Vision 4.0 is even better - it’s hands down the best way of getting your HDR content into SDR while preserving the majority of the artistic intent. Even if you don’t want to spring for the rest of Dolby Vision’s features, if you’re working and delivering HDR content, take advantage of Dolby Vision’s analysis.


Routing the Quad 3G SDI signal from our Ultrastudio 4K Extreme (US4Ke) to the Dolby eCMU, and routing the eCMU’s signal to our SDR display.

If you pay for a Dolby Vision license, which has recently dropped into the affordable range for smaller post production facilities, you can enhance your automatic Dolby Vision grade by doing a trim pass.



Enabling both HDR and SDR outputs using DaVinci Resolve’s iCMU. Requires an Ultrastudio 4K Extreme or Decklink 4K Extreme 12G.

A trim pass is where you, as a colorist or as the creative in charge of the content, go through the Dolby Vision grade shot by shot and adjust the lift-gamma-gain primaries, a variety of secondaries (in the new version 4.0), and the balance between preserving color and brightness. What’s great here is that you’re not boxed into just trying to match the HDR; you can take whatever artistic license you want. Consider this shot from our Morocco reel. In HDR it spikes in brightness and saturation as the gradient approaches the sun. Our SDR LUT will roll that off and drop its saturation slightly to maintain more of the detail. But using Dolby Vision, we can blow out more of the sky, and give the SDR a little more punch in the highlights.


Dolby recommends doing a full trim pass on the SDR version of the grade. If you’re happy with how the algorithm has rendered the image, you’re fine to quickly skip past shots, or you can tweak them to however you want them to look. After you’ve done the SDR version, you may want to grade an intermediate HDR trim pass at a 600 or 1000 nit peak (say, for instance, if we’ve graded on our FSI XM310K with its 3000 nit peak and want to render the image for our Sony BVM-X300 with its 1000 nit peak) to see how Dolby is interpreting the trims you’ve made to the SDR version, and how it should apply them to the intermediate HDR versions you’ll find on different HDR televisions. Trims other than the SDR grade are an optional step.



Dolby Vision 4.0 control panel in DaVinci Resolve 15 (on a high pixel count display), with both Primary and Secondary trims available. Dolby Vision uses the operator controlled trims to adjust its automatic trims to get the best possible SDR image, and any other image in between.

These trims are the heart of how Dolby Vision gives us, the creators, back the control over how televisions and display systems adapt our HDR content for the screen. These trims provide reference points that the displays use to figure out how to render the HDR image specifically for their screen, ensuring that every display gets the best possible image, as close to the creator’s vision as their screen is capable of reproducing. And as creators, this is exactly what we want - the audience to experience our content as closely to our original intentions as possible.



Setup showing Dolby Vision using the iCMU in 4K, with a 4Kp30 HDR image sent to both the Sony BVM-X300 and the Dolby eCMU. The resulting image is shown on the BenQ Rec 709 display.

I should note that when we do our SDR trim passes, we don’t use the 100 nit peak of BT.1886 - it’s far too dark for the flat panel world we live in. Other than color suites, I can’t think of a single place where you’d find a display with a 100 nit peak - 250 nits is the minimum for computer displays and most televisions, and phones and tablets go much brighter than that: 400, 600, or 1000+ nits. The Dolby Vision SDR grade is actually optimized for brighter screens, which you can see if you ever do a comparison between it and a direct conversion into 100 nits - Dolby Vision looks very dark at reference levels, but looks really good starting at around 200 nits.


A valid concern with adding a Dolby Vision step to your grading workflow is the increased amount of time a second pass on your footage will take. With an experienced HDR colorist, the HDR grade really takes no longer than an SDR grade of equivalent intensity*, meaning that you aren’t looking at any additional time costs for HDR. The Dolby Vision analysis pass usually runs in real time or faster for 4Kp24 content, and doing a full set of SDR trims for a 90 minute feature usually only adds an extra half day of work. If you’re doing a 600 nit or 1000 nit HDR trim after that, you’re adding a few extra minutes to a couple of extra hours. Which means that realistically, for a 90 minute feature, you’re only adding an extra day’s worth of color correction (or less) to future proof your content and ensure that it gets the best possible delivery across all platforms.



Setup showing dual outputs of the Resolve iCMU. HDR is on the left (Channel 1 output) and Dolby Vision SDR is on the right. The Sony BVM-X300 is set to SDR, though a proper setup would route each image to a different display.

Anyway, once you’ve done your trim pass (or, if you’re using the free version, just your analysis pass), you’ll export your Dolby Vision metadata as an XML that will accompany your HDR master as part of your deliverables, and your grade is ready for Dolby Vision encoding and delivery.


HDR Deliverables


It’s at this point, once your Dolby Vision work is done, that you’ll start creating your deliverables. If you’re working in HDR with Dolby Vision, you’ll want to plan on four main deliverables:

  1. HDR Master

  2. Dolby Vision Mezzanine (IMF)

  3. Dolby Vision Metadata XML

  4. SDR Master

If you haven’t exported your HDR master yet, now’s the time. In the professional realm, the HDR master is typically a 16 bit TIFF sequence, using P3 D65 or Rec. 2020 color primaries and the PQ transfer function. For feature content, you’ll want to split your master into reels, with a separate Dolby Vision XML for each reel. For independent content producers, creating and storing 16 bit TIFF sequences is usually a little data heavy, so you may consider ProRes 4444 / 4444 XQ QuickTime MOV or JPEG2000 (1200+ Mbps for UHD 24p) in an MXF container as a convenient master format. If you do, ensure that Resolve is writing the correct HDR metadata to the containers by setting your timeline or output space.


Conveniently, the Dolby Vision Mezzanine format uses RGB JPEG2000 in an MXF as part of an IMF (Interoperable Master Format - a generic version of the DCP structure used for digital cinema delivery), which you can run at a lossless quality to get a self-contained HDR master with the Dolby Vision metadata already bundled into the file - super convenient. Not all color correction applications will read the metadata in the MXF though (Resolve, for instance), so you’ll still want the XML for importing the Dolby Vision metadata back into your processing software.


ProRes with a sidecar XML is also an acceptable Dolby Vision Mezzanine format, meaning that it too can serve as both your HDR master and your DV Mezzanine. For any HDR master, I’d strongly recommend using 12 bits or higher on your video signal, so if you’re using ProRes, use 4444 or 4444 XQ. This will give you the latitude to make changes to the HDR grade if you need to later, or to make a 10 bit SDR derived grade from the HDR, without any stepping in your gradients.


Be aware, though, that if you’re writing to ProRes your data will be converted into the YCbCr format, using either the Rec. 709 transfer matrix coefficients (for Rec. 709 or P3 D65 color primaries), or the Rec. 2020nc transfer matrix coefficients (for Rec. 2020 color primaries). The reason this matters is that the coefficients are different: if you decode content encoded with the Rec. 2020nc coefficients using the Rec. 709 coefficients, you’ll see color shifts in the decoded image, especially in the blue-greens and in the reds, and brightness shifts in some of the darks. Unfortunately, while DaVinci Resolve (version 15) will encode using the Rec. 2020nc coefficients, and will flag for them in the metadata, it will only decode using the Rec. 709 coefficients. Fortunately, we have a LUT that will correct for the hue shift, which we apply to all of our ProRes 4444 HDR masters when we bring them back into Resolve for future work.
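To see why the mismatch matters, here’s a minimal sketch that encodes an R’G’B’ value with the Rec. 2020nc luma coefficients and then decodes it with the Rec. 709 ones - the exact round trip described above. The Kr / Kb values are from the standards; the test color is arbitrary:

```python
# Minimal sketch of the Rec. 2020nc encode / Rec. 709 decode mismatch.
# Y' = Kr*R' + Kg*G' + Kb*B'; Cb = (B'-Y')/(2*(1-Kb)); Cr = (R'-Y')/(2*(1-Kr))

def rgb_to_ycbcr(rgb, kr, kb):
    r, g, b = rgb
    y = kr * r + (1 - kr - kb) * g + kb * b
    return y, (b - y) / (2 * (1 - kb)), (r - y) / (2 * (1 - kr))

def ycbcr_to_rgb(ycbcr, kr, kb):
    y, cb, cr = ycbcr
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / (1 - kr - kb)
    return r, g, b

rgb = (0.2, 0.5, 0.8)                                  # arbitrary blue-green
encoded = rgb_to_ycbcr(rgb, kr=0.2627, kb=0.0593)      # Rec. 2020nc encode
decoded = ycbcr_to_rgb(encoded, kr=0.2126, kb=0.0722)  # wrong: Rec. 709 decode
print(tuple(round(ch, 3) for ch in decoded))           # (0.184, 0.479, 0.795)
```

The decoded value drifts away from the original (0.2, 0.5, 0.8) - exactly the kind of hue and brightness shift the correction LUT mentioned above exists to undo.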


This is also something to be aware of if you use Resolve to render out an SDR version from your HDR grade. Resolve has the ability to act as a content mapper, and use the Dolby Vision L1 and trim metadata to directly generate a high quality SDR master. But before you render out an SDR version using Dolby Vision tone mapping, set your timeline color space to Rec. 709 and your color management to off. Otherwise, Resolve will use the Rec. 709 primaries, but use the Rec. 2020nc coefficients when moving from RGB to YCbCr, meaning you’ll have hue and brightness shifts in the exported image.


If you have a full Dolby Vision license, you’ll have access to the Dolby Vision Professional Tools, which include command line tools for validating your metadata (either the XML, or as part of the MXF in a Dolby Vision Mezzanine), and tools for rendering out an SDR or target HDR version in a variety of formats. Conveniently, these are command line based and can be incorporated into batch processes, making them useful utilities for a full encoding facility. They’re also a great way of getting your SDR master from a mezzanine or an HDR master on a computer where you’re not running pro video tools.


Encoding & Delivery


One of the hardest parts of the whole Dolby Vision process is answering the question “okay, I’ve got a Dolby Vision Mezzanine, how do I get it to my viewers?” Unfortunately, there’s no easy answer to that question. Companies like Netflix and Amazon will take the Dolby Vision IMF and spit out Dolby Vision versions for their streaming services, with prior approval for an HDR delivery; Apple will take a ProRes Mezzanine with the Dolby Vision XML and do the same. But apart from those three, there aren’t any commercially available streaming services that will take your uploaded content and spit out a Dolby Vision file that will play on a consumer television. YouTube relies on LUTs to transform HDR content into SDR, and Vimeo requires you to upload an HDR and an SDR version separately.


Dolby has a software encoder for software developers and resellers that we recently got working here at Mystery Box. It’s not a process I’d recommend to small or independent post production facilities, but it’s easy enough that a moderate-sized facility should have someone who can get it up and running. There are people (including Mystery Box) who sell Dolby Vision encoding as a service, but be aware that it’s a time consuming process: on a state of the art computer, encoding may take longer than 24 hours for a 90 minute feature. A few software solutions are available using Amazon Web Services, which allow you to distribute the processing across many nodes, speeding up the encoding - that’s the solution companies like Amazon and Netflix already use, so it’s convenient to set it up that way if you run a streaming service. Be aware that they are extremely expensive, though.


 

Wrapping it all up


If you are an independent content producer, creating a Dolby Vision pass of your film is worth it, even if you can’t do anything more with it right now than to generate the best SDR version for distribution. Because that SDR version will be the best SDR you can get. I’ve found that grading in HDR and converting to SDR gives better results than grading in SDR directly, so start with the best. Consumer HDR adoption has outpaced any other technological innovation, and while things are lagging a bit on the professional creation end and the tools are still costly, they are dropping in price. And as that happens, more and more streaming services will support Dolby Vision, meaning that if you start with a Dolby Vision master you’ll stay ahead of the curve.


Like I said at the head of this post, HDR is going to eventually completely replace SDR as the format everyone uses. It’s really that much better. Getting on board with it now and mastering in HDR gives you the ability to futureproof your content before the larger shift. Because unless you’re a major studio, you don’t have the time or budget to remaster your content later: do it right the first time and take advantage of the new palette HDR video offers you. Then use Dolby Vision to adapt your content for every screen.


If you have further questions about how Dolby Vision can help you and your productions, or if you’re an independent post facility and want to talk to us about how to implement HDR or Dolby Vision at your facility, drop us a line in the comments below, or send us an email.


* My note about intensity isn’t a comment about one cinematographer or company’s content being ‘better’ than another’s; rather, it refers to the amount of detail work that goes into each frame. I can do a base HDR grade on a feature in a day or two, especially if the content consists of carefully lit scenes, properly exposed for a specific artistic intent. However, for our exhibition grade work, I can expect to spend 20-30 minutes per shot, focusing on minute details that in most color grading cases won’t matter. So 3 days for a 90 minute feature, or 5 days for a 5 minute exhibition reel of 80-90 shots. But in both cases, adding Dolby Vision takes only a small amount of additional time.


Written by Samuel Bilodeau, Head of Technology and Post Production
