OWC ThunderBlade: Data Backup Dream Machine

Of the many joys a DIT has on-set, waiting for a file transfer is not one of them. After a long day of shooting, when the crew starts wrapping, the DIT will inevitably be handed the final mag with 500GB to 1TB of footage to back up. More often than not, they will still be staring at the progress bar long after the rest of the crew has packed up and left. On a more run-and-gun day without a dedicated DIT, like our stock video shoots, there may be 3-4 of these mags to back up once we make it back to the hotel. Nothing is worse than staying up all night, waiting for one card to finish so you can start the next. Enter OWC’s ThunderBlade: a compact NVMe SSD RAID enclosure with enough bandwidth to handle multiple large file transfers simultaneously.


The thin, passively cooled enclosure is well-designed. An all-aluminum chassis covers the four internal NVMe SSDs. Each is connected (both top and bottom) to the chassis via thermal pads, making the entire case an excellent heat sink. These SSDs get hot, but the passive cooling performed perfectly, never getting more than warm to the touch throughout our testing.

*Note: our tests were done in a temperature-controlled studio. Since the heat-sink chassis is the only method of cooling, using this device in a hot environment could produce different results. Having a separate power brick helps keep that heat down, but also adds to the bulkiness of the setup. The somewhat large power supply is not a deal breaker, but it is definitely worth mentioning.



The Thunderbolt 3 connection provides plenty of bandwidth for those SSDs. While eGPUs and other graphics-oriented Thunderbolt devices can utilize the full 40Gbps, this is the first storage device we’ve tested that comes close to actually using all of that glorious bandwidth. If your data backup workflow does not require much bandwidth, the second port can be used to daisy-chain other devices (including another ThunderBlade, which can be combined with the first into a single RAID volume).

NVMe Benefits

NVMe SSDs have made huge improvements in transfer speeds over existing flash memory protocols. A simple Google search will reveal thousands of people showing off insane benchmark speeds with popular external NVMe SSDs. When SSDs originally hit the market they still operated through the existing SATA protocol used by standard disk drives. Despite having considerably faster hardware specs than spinning disk drives, SSDs still had to run through a SATA controller on the way to the CPU. All connections running through a SATA controller share a single PCIe lane, whereas NVMe drives can (depending on motherboard specs) completely bypass the SATA controller and run straight to the CPU over x4 PCIe lanes. In our particular case the SSDs are connected via Thunderbolt 3, meaning the data runs through the TB3 controller with negligible latency before hitting the PCIe lanes directly to the CPU.

Another benefit of NVMe over previous protocols like SATA (still held back by AHCI) is command queueing. AHCI was not designed for the bandwidth of flash-based storage: it can only handle a single command queue with 32 pending commands. No matter how “fast” the SSD hardware was, SATA controllers could never process more than one command at a time, creating a serious bottleneck. NVMe was purpose-built for flash storage and can theoretically handle 64,000 queues simultaneously, each with its own 64,000 commands… That’s a serious upgrade.
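The queueing difference is easy to appreciate as simple arithmetic. This sketch uses the protocol limits discussed above (1 queue x 32 commands for AHCI, and the theoretical 64,000 x 64,000 maximum for NVMe):

```python
# Back-of-the-envelope comparison of command queueing limits,
# using the figures from the paragraph above.
ahci_queues, ahci_depth = 1, 32
nvme_queues, nvme_depth = 64_000, 64_000

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
```

Real drives and operating systems use far fewer queues than the theoretical maximum, but the headroom is what lets many parallel transfers proceed without fighting over a single command slot.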

*You can read up on NVMe I/O commands and queues from the official spec sheet (page 7): Non-Volatile Memory Express White Sheet


A common reaction to this product is concern that it offers little improvement over a single NVMe SSD. In a way, that’s right. If your workflow only involves transferring a single memory card to the ThunderBlade, treating it as a normal external hard drive, you will never come close to utilizing its potential. In truth, if all you want is a fast and portable drive for single card dumps, you should stick with one of the popular NVMe drives from companies like SanDisk, G-Tech, Angelbird, or Glyph. In our testing, transferring a single 512GB RED Mag with 450GB of RAW footage to the ThunderBlade in RAID 0 took 15 minutes with the USB 3.1 Gen 2 RED Mag Reader. Transferring the same card to a single SanDisk NVMe SSD took the same amount of time, the obvious bottleneck being the card reader. So why bother with such an expensive device if nothing will ever make use of its potential? If nothing else, having four drives in a single RAID enclosure offers two large benefits:

  1. Security- The ThunderBlade comes with a free license for SoftRAID XT. This powerful software allows the user to configure the four drives into RAID 0, 1, 5, or 1+0 (RAID 10). We prefer RAID 5 as it provides the best of both worlds: 6TB of usable space, super fast write speeds, and room for one of those SSDs to have a catastrophic failure without losing our data. It is also important to note that SSDs provide outstanding durability and show a substantially lower failure rate than HDDs. You are more likely to have data loss due to human error than you are to lose an SSD in the array. Thus we usually opt for speed, and quickly transfer to a second (and physically separate) storage device overnight. We detailed backing up media in a 3-part series called Protecting Your Digital Ass(ests). Be sure to check those posts for more info on RAID choice, checksums, and general practices.

  2. Volume Size- Where else can you find an 8TB NVMe SSD? There are several other companies making portable RAID solutions with 2 SSDs for a total of 4TB, but 4-drive NVMe RAID enclosures are very rare and are much bulkier with active cooling. If we are going to use this as a backup device, we want a bigger volume in a travel-friendly package. So when you need the volume size, create a ridiculously fast 8TB RAID 0 array. Want more peace of mind? RAID 5 keeps the speed high and allows one SSD to fail.
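The usable-space trade-offs between the RAID levels above are simple to work out. This is a minimal sketch for the ThunderBlade's four 2TB SSDs (the helper name is our own, not SoftRAID's):

```python
def usable_capacity_tb(level, drives=4, size_tb=2.0):
    """Usable space for an array of identical drives at a given RAID level."""
    if level == "0":        # striping: no redundancy, all space usable
        return drives * size_tb
    if level == "1":        # mirroring: every drive holds the same data
        return size_tb
    if level == "5":        # one drive's worth of capacity goes to parity
        return (drives - 1) * size_tb
    if level == "1+0":      # mirrored pairs, then striped: half the space
        return drives * size_tb / 2
    raise ValueError(f"unknown RAID level: {level}")

for level in ("0", "1", "5", "1+0"):
    print(f"RAID {level}: {usable_capacity_tb(level)} TB usable")
```

RAID 5's 6TB with single-drive fault tolerance is why it's our usual pick; RAID 0's full 8TB is for when speed and volume size matter more than redundancy.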

These are great reasons to RAID your NVMe array, but the real reason we bought the ThunderBlade can be seen in our tests.

Your portable NVMe SSD is cool… but can it handle all this?




The obvious plus for a RAID solution is the speed, but if RAID 0 offered nothing over a single NVMe drive, what’s the point? Remember those late nights in the hotel waiting for data transfers we talked about? While a single NVMe SSD can handle one file transfer like a champ, simultaneous transfers of multiple files quickly bog down performance. Depending on the transfer software, multiple transfers will either be handled one at a time, or all at once with a serious dip in speed. With the overhead provided by the ThunderBlade, you can transfer all those cards simultaneously with very little speed loss.


The ThunderBlade used one TB3 bus on our full-spec 2018 Mac Mini, while the Mag Readers used the other bus. Currently we only have a single USB 3.1 Gen 2 reader. The other three readers run either over SATA when loaded into the G-Tech Shuttle XL (which connects to the computer via TB3) or USB 3.0 when plugged in directly. Hoping to see better performance out of SATA, we opted to load two Mag Readers into the Shuttle XL and the last one into the Mac Mini’s USB 3 Type-A port. In the near future we hope to purchase two more USB 3.1 Gen 2 readers for optimal results.

*Not to mention we don’t want to lug around the Shuttle XL on run-and-gun trips. Kind of defeats the purpose of the ThunderBlade’s small form factor.



When it comes to secure data backup software, our current weapon of choice is Hedge. Not only does Hedge do a terrific job at simultaneous transfers, it also provides important checksum analysis and logs. For anyone seeking production insurance, this is a must. Even while generating checksums, Hedge is able to keep up with macOS’s native drag-and-drop transfer rates. And when simultaneously transferring multiple Mags, it is considerably faster than native transfers. It is very efficient software, perfect for this workflow.

This 458.9GB transfer from the Mag Reader to the ThunderBlade took 15 minutes with the drag-and-drop system. Through Hedge, the same transfer also took 15 minutes, but completed an XXH64 hash with a log report at the same time. Security and speed… Boom baby!
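The verify-after-copy pattern Hedge automates is worth understanding. Hedge uses the very fast xxHash64; Python's standard library doesn't ship xxHash, so this sketch (our own helper, not Hedge's code) uses SHA-256 purely to show the pattern:

```python
import hashlib
from pathlib import Path

def copy_with_checksum(src: Path, dst: Path, chunk=8 * 1024 * 1024):
    """Copy a file while hashing it, then re-hash the copy to verify.

    Hedge does this with xxHash64; SHA-256 stands in here since it
    ships with Python. Raises if the destination doesn't match.
    """
    h_src = hashlib.sha256()
    with src.open("rb") as fin, dst.open("wb") as fout:
        while block := fin.read(chunk):   # hash and write in one pass
            h_src.update(block)
            fout.write(block)
    h_dst = hashlib.sha256(dst.read_bytes())
    if h_src.hexdigest() != h_dst.hexdigest():
        raise IOError(f"checksum mismatch copying {src} -> {dst}")
    return h_src.hexdigest()
```

The key point: the source is hashed as it's read, so verification only costs one extra read of the destination, which is why Hedge can keep up with plain drag-and-drop speeds.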


This is the Hedge interface. Sources are on the left and destinations on the right. Here we are attempting to transfer 4 RED Mags to a single 2TB SanDisk Extreme Portable SSD. Transferring a single RED Mag of this size usually takes around 15 minutes for the SanDisk, but notice the considerable drop in performance when tasked with multiple transfers at the same time: 15 minutes suddenly becomes an hour. We had to abort this transfer as the drive became dangerously hot and the estimated transfer time continued to climb. The recommended use for a single NVMe drive like this is to transfer one Mag at a time.



Transferring a single RED Mag with 450GB of data to the ThunderBlade takes 15 minutes. So how long did it take to transfer three of those Mags simultaneously…? 17 minutes.

With only a slight hit in speed, the ThunderBlade managed to write insane amounts of data even with two readers running on SATA 6Gb/s connections. That’s a combined 1.35TB of data securely transferred, with checksums generated… in 17 minutes!

*By the way, this test was performed with the SSDs in a RAID 1+0 array, which means there is potential for a speed increase if we moved to RAID 0.

Note the estimated times on this bad boy, with the largest drive obviously taking the longest at 17 minutes. Also interesting to note is the data rate: Mag 02 and Mag 03 were using SATA III via the Shuttle XL and hitting around 450MB/s, while Mag 01 used the USB 3.1 Gen 2 reader at a rate of 520MB/s. We would like to revisit this test when we have two more USB 3.1 Gen 2 readers. I’m sure we can shave off another minute or two, getting it down to the same 15 minutes a single Mag takes.

Each Hedge transfer will automatically generate a log showing the details of the transfer. Here’s what it looks like:

Duration: 1026.7 seconds, or about 17 minutes
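Working backwards from that log entry gives the sustained throughput of the simultaneous transfer. A quick sanity check using the figures from the test above:

```python
# Effective write throughput of the 3-Mag simultaneous transfer,
# using the combined payload and the duration from the Hedge log.
data_tb = 1.35     # combined data across the three Mags
seconds = 1026.7   # duration reported by Hedge

mb_per_s = data_tb * 1_000_000 / seconds
print(f"~{mb_per_s:.0f} MB/s sustained")
```

That lines up with the per-reader rates observed (roughly 450 + 450 + 520 MB/s), confirming the readers, not the ThunderBlade, were the bottleneck.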


The ThunderBlade wasn’t designed for the consumer. The price tag makes that painfully obvious, but for the professional for whom time = money, this device is an excellent investment. Instead of spending an hour or two at night in the hotel staying up to swap Mags, we can now dump several Mags simultaneously in the same time a single Mag would take. If you are looking for a blazing fast NVMe RAID enclosure in this small a package, with a max of 8TB of usable space… there is nothing else on the market. It is a dream to work with.

Written by Chris Workman, Editor

70mm Film Projection Vs. Cinema LED Displays

Never in my life have I been able to have a film vs. digital cinema experience akin to the one I had this past weekend in LA. Within a 24-hour period I was able to see “2001: A Space Odyssey” on 70mm projected film and “Solo: A Star Wars Story” displayed on the Samsung Onyx Cinema LED screen. After seeing the two back to back, I took this opportunity to write some thoughts about how the vintage look of 70mm compared to the incredible colors and contrast of HDR LED.

A quick trip to LA and a fortunate run-in with an old friend landed us at a 70mm film projection of 2001: A Space Odyssey. As we walked into the ArcLight theater in Hollywood, it felt like a throwback to the way cinema used to be. The film played on a massive screen with no commercials, an overture, and even a 10-minute intermission; this felt like the original operatic cinema experience. Heck, there was even an usher who personally welcomed us and introduced the film before the overture.

My index finger held up against the screen. If you look closely you can see the 4 LED lights inside each "pixel".


The very next day we found ourselves at the movies again, this time at the Pacific Winnetka 12 & XD Theaters in Chatsworth. We came here specifically to see Samsung’s latest innovative tech, the Onyx Cinema LED Screen. This screen is comprised of modular tiles that contain 4096 LEDs each (64x64). At 64 tiles wide and 34 tiles high, this makes the screen a true 4K display with 8,912,896 individual LEDs. If you wanted to get really technical, each of the individual LEDs appears to have 4 LEDs inside (RGB and white), which gives it over 35.6 million LEDs. And this was one of the smaller screens. Along with the latest innovative technology came the modern movie theater experience: reclining seats, movie commercials, theater promos, and technology pre-rolls.
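The pixel math above checks out. A quick verification of the tile counts described in the paragraph:

```python
# Checking the Onyx screen's pixel math from the figures above.
leds_per_tile = 64 * 64    # each modular tile is a 64 x 64 LED grid
tiles = 64 * 34            # screen is 64 tiles wide, 34 tiles tall

pixels = leds_per_tile * tiles
subpixels = pixels * 4     # each "pixel" holds 4 LEDs (RGB + white)

print(f"{pixels:,} pixels")     # matches the 8,912,896 figure
print(f"{subpixels:,} LEDs")    # matches the ~35.6 million figure
```

Note that 64 tiles x 64 LEDs gives 4096 pixels of width, which is where the "true 4K" claim comes from.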

So, how did they compare?

The 70mm projection had all of the characteristics common to film projection: occasional screen jitters, scratches and dust, and even the occasional optical flaw. This specific print of 2001: A Space Odyssey had been re-developed from the camera negatives under the supervision of Christopher Nolan (well-known for his film advocacy). There was no clean-up of the image, and it showed. Quite honestly, it bugged me. It pulled me out of the story and forced me to remember “I’m watching this on a flawed film medium”. The colors were not as rich or definitive as I had remembered, and the iconic scene of Dave in the red suit walking down the white interior of the spaceship was not as striking as it should have been. Summing it up: it felt antiquated. This is not a bash, but an emotional response. It took me back to what film used to be, and I gained a profound appreciation for both the technology and the storytelling of the late 1960s. For the time, this format and this film were incredible feats to behold, and history shows that this cinema experience inspired an entire generation of filmmakers who went on to revolutionize the film industry.

On the other end of the spectrum, the Samsung Onyx Cinema LED screen was a flawless cinematic display, the very best that modern technology has to offer. When the commercials and pre-rolls were playing, the HDR capabilities were on full display. The contrast ratios stretched into ranges that kept my eyes glued to the screen, and the colors had a life-giving pop to them (especially during the Incredibles 2 trailer). It also had a sharpness and clarity that I haven’t seen before, because we are experiencing the light directly from its source. This removes the annoyance of dirty screens, scratches and tears, and out-of-focus projectors. It also reduces the optical degradation that comes from the image traveling through a projector lens, a pane of glass, across the room through the atmosphere, bouncing off of a mesh screen, and then traveling back across the room before entering your eye. It’s no wonder there is a slight blurry glow to the details on a theater screen, no matter how sharp the original image is. The Cinema LED screen is a huge leap forward: the image comes directly from the screen itself, and it produces incredible color, clarity, and contrast.

With this amazing screen technology, I was ready for an incredible cinematic experience. And then the movie started, and the washed-out, muted tones of Solo: A Star Wars Story came on screen. It was pretty disappointing to have so much capability and have the film use so little of it. You could tell that the original vision of the color grade had very little contrast, since the displayed blacks were well brighter than the screen’s black floor, and the colors had a desaturated, aged tone to them. But I must respect the cinematographer for his choices, since he probably wasn’t focused on shooting for an HDR grade during production (they already had enough challenges on their hands with that film).

There was a strange irony that took place during these two screenings. One film was a grand vision of the future (but ironically staged in the past): a bright and colorful presentation played through antiquated equipment. The other was a modern interpretation of past events (but still oddly set in the future), with aged tones and colors playing on the latest and most innovative equipment.

In conclusion, the Samsung Onyx Cinema LED brought about the greatest clarity, color, and contrast that I have yet seen on the big screen. While its price point may be cost-prohibitive for the near future, I fully expect this type of screen to take over future cinemas. On the other hand, 70mm film projection has been on its way out of the market for some time, but this experience has given me a great appreciation for the work of the past and our growth over such a short amount of time.

Written by Willem Kampenhout, Producer

Multi-Space Color Correction: The New Paradigm

Our last technical blog post talked about color management, including considerations for maintaining color accuracy through the post process by keeping displays & projectors calibrated and understanding how each application manages the colors you see.  We alluded to the fact that at face value it can seem pretty complicated to manage color between all of the different kinds of camera technologies and technical standards, especially when dealing with multiple delivery specifications.

With this post we want to go into more detail about the issue of color grading for multiple standards, and how the new paradigm for color correction simplifies the process and keeps your grades more futureproof.  We’re going to examine a few things today.  First, how do computers process colors; second, where the problems we’re trying to fix come from; third, what the solutions are; and lastly, how to implement the solutions and why.

Color Engines

In previous posts (here, here, and here) we’ve mentioned the importance of using a color space agnostic color engine when doing specific color correction tasks - something like DaVinci Resolve Studio.  As a quick review, a color space agnostic engine is like a glorified R-G-B or R-G-B-Y calculator: it takes a specific set of decoded R-G-B or R-G-B-Y data, applies a specific set of mathematical operations based on the user input, and outputs the new R-G-B or R-G-B-Y data.

Agnostic color engines don’t care what the data is supposed to mean; they simply produce the results of an operation.  It’s up to the user to know whether the results are right, or whether the operation has pushed values out of spec or created unwanted distortions.  This is a double-edged tool: it places far more importance on user understanding to get things right, while being powerful enough to apply its corrections to any combination of custom situations.

As an example of how an agnostic engine works, let’s look at three of the simplest color correction operations: lift, gamma, and gain, operating strictly on the brightness (Y) component of the image.

Lift operates essentially as a global addition: add or subtract a specific amount to each pixel’s value.  Because of the way traditional EOTFs work and the way humans perceive brightness changes, lift tends to have the greatest effect on the darks, quickly raising or lowering the blacks while having a much smaller effect on the mids and lights.

Gain operates essentially as a global multiplication value: multiply or divide the value of each pixel by a specific amount.  Since the operation essentially affects all tones within the image evenly, all parts of the image see a similar increase or decrease in brightness, though once again because of the EOTF considerations it has the greatest effect in the brights.

Gamma operates as an exponential value adjustment, affecting the linearity of values between the brights and the darks.  Lowering the gamma value has the effect of pulling more of the middle values towards the darks, while raising the gamma value has the effect of pushing the middle values brighter.  Once again, it still affects the brights and the darks, but at a much lower rate.

Notice that these operations don’t take into account what the data is supposed to mean.  And with new HDR EOTFs, especially with the Perceptual Quantization EOTF, you may find extreme changes across the image with very small values, which is why I recommend adding a roll-off curve as the last adjustment to your HDR grading workflow.

The combination of lift, gamma, and gain allows the colorist to adjust the overall brightness and contrast of the image with fairly fine granularity and control.
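The three operations above can be sketched as plain math on a normalized luma value. Exact formulations vary between engines, so this is one common arrangement, not the definitive one:

```python
def lift_gamma_gain(y, lift=0.0, gamma=1.0, gain=1.0):
    """Apply lift, gamma, and gain to a normalized luma value y in [0, 1].

    One common formulation: gain multiplies, lift offsets, and gamma
    is an exponential bend applied to the values in between.
    Engines differ in ordering and clamping behavior.
    """
    y = y * gain + lift               # gain scales, lift offsets
    y = max(0.0, min(1.0, y))         # clamp back into legal range
    return y ** (1.0 / gamma)         # gamma bends the midtones

# Lowering gamma pulls the mids toward the darks while leaving
# black and white pinned in place:
for y in (0.0, 0.5, 1.0):
    print(y, "->", round(lift_gamma_gain(y, gamma=0.5), 3))
```

Notice that gamma leaves 0.0 and 1.0 untouched while moving everything between them, which matches the description above of it affecting the brights and darks at a much lower rate.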

Compare these functions of an agnostic engine to their equivalents in a color space dependent engine.  In a color space dependent engine you’re more likely to find only two adjustments for controlling brightness and contrast: brightness and contrast.

The same color transformation operation has different effects on the image. Here, the same hue adjustment curve is applied in eight different color spaces; note the differing effects on the vectorscope.

The brightness and contrast controls tend to be far more color space dependent, since they’re designed to affect the brightness or contrast more evenly along the expected EOTFs.  For the end user, this is a far simpler and often faster approach for minor corrections, at the expense of power, precision, and adaptability.  That hasn’t been too bad a trade-off, so long as all digital video data operated in the same color space.

But adding support for new color spaces and EOTFs to a brightness and contrast operation requires rewriting the rules for how each of the new color spaces behaves as digital RGB values.  That takes time to get right, and is oftentimes not done at all.  As a result, color space dependent engines tend to adapt more slowly to emerging standards, and there’s no clear path for how to implement the upgrades.

Every color engine, whether we’re talking about a computer application or a chip found in a camera or display, makes assumptions about how to interpret the operations it's instructed to do.  Where the engines lie on the scale from fully color managed to completely color agnostic defines how the operations work, and what effect the ‘same’ color transformation has on the image.

The overall point here is that the same color transformations applied in different color spaces have different effects on the end image.  A hue rotation will accomplish something completely different in Rec. 709 than it will in Rec. 2020; standard gain affects HDR curves in ways that are somewhat unpredictable when compared with SDR curves.  Color engines can either try to compensate for this, or simply assume the user knows what he or she is doing.  And the more assumptions any single operation within an engine makes about the data, the more pronounced the differences when it’s applied to another color space.  These seemingly small differences can create massive problems in today’s color correction and color management workflows.

Understanding the Problem

With that background in mind, let’s explore where these problems come from.

Here’s something that may come as a shock if you haven’t dived into color management before: every camera ‘sees’ the world differently.  We’re not just talking about the effects of color temperature of light or the effects of the lenses (though those are important to keep in mind), but we’re talking about the camera sensors themselves.  All things being equal, different makes and models of camera will ‘see’ the same scene with different RGB values at the sensor.  In inexpensive cameras you may even see variation between individual cameras of the same make and model.

This isn’t a problem, it’s just how cameras work.  Variations in manufacturing process, decisions about which microfilters, microlenses, and OLPF to use, and the design of the sensor circuitry all play a part in changing the raw values the sensor sees.  But to keep things consistent, these unique camera RGB color values are almost always conformed to an output color standard using the camera’s image processor (or by the computer’s RAW interpreter) before you see them.

In the past, all video cameras conformed to analog video color spaces: NTSC/SMPTE-C, PAL, etc., and their early digital successors conformed to the digital equivalent standards: first Rec. 601 and then Rec. 709.

When it comes to conforming camera primaries to standard primaries, manufacturers had two choices: apply the look effects before or after the conforming step.  If you apply color transformations before the conforming step, you often have more information available for the change.  But by conforming first to the common color space, color correction operations would behave the same way between different camera makes and models.  

Most camera manufacturers took a hybrid approach, applying some transformations like gain and white balance before the conforming step, and then applying look effects after the conforming step.  And everything was golden, until the advent of digital cinema cameras.

Digital cinema class cameras started edging out film as the medium of choice for high quality television and feature film production a decade ago, and digital productions now vastly outnumber film-first productions.  And here’s where we run into trouble, because digital cinema uses a different color space than digital video: DCI-P3.  Oh, and recently the video broadcast standards shifted to a much wider color space, Rec. 2020, to shake off the limiting shackles of the cathode ray tube.

Color space selection suddenly became an important part of the camera selection and workflow process, one that few people talked about.  Right from the get-go, the highest-end cameras offered multiple spaces that you could shoot and conform to, one of which was usually camera RGB.  But changing the conforming space means that any corrections or effects added to the image after conforming behave differently than they did before, and many user-generated looks would be color space dependent.

To fix this, many (but not all) digital cinema camera manufacturers moved the ‘look’ elements of their color processing to before the conforming step.  This way, regardless of which color space you, as the operator, choose, any looks you apply will have the same effect on the final image.

Which is fine in camera, and fine through post production.  Unless your color correction platform doesn’t understand what the primary color values mean, or can’t directly transform the values into your working space.  Then you need to create and add additional conforming elements as correction layers, which can increase the computational complexity and reduce the overall image quality.

Oh, and if you start working with multiple cameras with different look settings available, you can get into trouble almost instantly, since there isn’t usually a simple way of conforming all of them to your working space if it’s not Rec. 709.

Oh, and you may have to deliver to all of the different color spaces: Rec. 2020 for 4K television broadcast, DCI-P3 for your digital cinema delivery, Rec. 709 for HD Blu-ray and traditional broadcast, and Rec. 601 for DVDs.  And for sanity’s sake, let’s add HDR.

And don’t break the bank.

The Solution

What if there was a way to make sure that a) all of your looks would move simply between cameras, regardless of make and manufacturer, and b) you could color grade once and deliver in all formats simply, without needing to manage multiple grades?

There is a way to do it: create a new RGB color space that encompasses all possible color values, and do all of your color corrections there.

Here’s the block diagram:

Camera RGB -> Very Wide Gamut Working Space (Log or Linear) -> Color Correction / Looks -> Tone Map to Standard Space

By mapping all of the camera sensor values into the same log or linear space, with very wide RGB color primaries, you can make sure that you have access to all of the image data captured by every camera and that all operations will have the same effect on all images.
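The block diagram above can be sketched as a chain of matrix transforms. The 3x3 matrices here are hypothetical placeholders (real values depend on the camera and the spaces you choose), and the "grade" is a stand-in exposure bump:

```python
import numpy as np

# Hypothetical camera->working matrix; a real one comes from the
# camera manufacturer or your color management system.
CAM_TO_WORK = np.array([[ 1.10, -0.05, -0.05],
                        [-0.02,  1.04, -0.02],
                        [-0.01, -0.04,  1.05]])
# Placeholder output transform: the inverse, standing in for a real
# tone map / conform to the delivery space.
WORK_TO_OUT = np.linalg.inv(CAM_TO_WORK)

def grade(rgb):
    """All looks/corrections live in the working space; here a
    simple exposure bump stands in for a full grade."""
    return rgb * 1.2

def pipeline(camera_rgb):
    working = CAM_TO_WORK @ camera_rgb   # conform to wide working space
    graded = grade(working)              # color correction / looks
    return WORK_TO_OUT @ graded          # conform to standard space

print(pipeline(np.array([0.18, 0.18, 0.18])))
```

The point of the structure: because every camera is conformed into the same working space first, the `grade` step behaves identically no matter which camera supplied the RGB values.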

But what do I mean by a “very wide gamut RGB space”?  There are two types of gamuts I’m talking about here, both of which have advantages and disadvantages.

The first kind is a color space with virtual RGB primaries: the RGB color primaries land outside the visible gamut on the CIE 1931 chromaticity diagram.  Remember that CIE 1931 maps the combinations of various wavelengths of light, and the perceivable colors they produce, onto an x-y coordinate plane, and that any color space requires at least three primary color vertices on this chart.  But since the chart is bigger than the set of all mapped colors, you can put these vertices outside of the actual set of real colors.

By putting the values outside of the visible color range you’re defining ‘red’, ‘green’, and ‘blue’ values that simply don’t and can’t exist.  But they end up being quite useful: when you map your primaries this way you can define an RGB color space that includes up to all possible color values.  Yes, you could simply use CIE XYZ values to map all colors, but all of the math needed for color manipulations would have to be redefined and rebuilt from the ground up (and it always requires at least 16 bits of precision).  An RGB space with virtual primaries allows you to use standard RGB math, while retaining as many colors as possible.
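ACES AP0 makes this concrete. Its published chromaticity coordinates put the green primary at y = 1.0 (above the top of the spectral locus, which peaks around y ≈ 0.83) and give the blue primary a negative y value, so neither is a producible color. A crude check of which primaries are plainly virtual:

```python
# ACES AP0 primary chromaticities (x, y), from the ACES specification.
AP0 = {
    "red":   (0.7347, 0.2653),
    "green": (0.0000, 1.0000),
    "blue":  (0.0001, -0.0770),
}

for name, (x, y) in AP0.items():
    # Crude test: negative y or y >= 1.0 can't be a real chromaticity.
    # (The red primary sits right at the edge of the locus, so this
    # simple check won't flag it.)
    plainly_virtual = y < 0 or y >= 1.0
    print(f"{name}: ({x}, {y}) {'virtual' if plainly_virtual else 'on/near the locus'}")
```

Those impossible vertices are exactly what let the AP0 triangle enclose the entire visible gamut while still behaving as an ordinary RGB space.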

Comparison of eight very wide gamut color spaces, five with virtual primaries, three with real (or mostly real) primaries.

Examples of color spaces using virtual RGB primaries include ACES AP0, defined by the Academy of Motion Picture Arts & Sciences, and many manufacturer-specific spaces like Sony S-Gamut3 / S-Gamut3.cine, ARRI Wide Gamut (used with Log C), Canon Cinema Gamut, and the new RedWideGamutRGB found in RED’s IPP2.

The catch with these virtual primaries is that many operations you as a colorist may be accustomed to won’t behave exactly the same way.  The reason is that the RGB balance, as it relates to hues and saturations, doesn’t quite apply the same way.  Without getting mired in the details: the effects of these operations are related to the relative shape of the triangle produced by the RGB color primaries, and color space triangles using all virtual primaries tend to be more dissimilar from the traditional RGB color spaces than those RGB spaces are from each other.

So instead, some wide gamut formats use all real, or mostly real, primaries to roughly match the shape (i.e. the color correction feel) of the smaller color gamuts.  A few examples are Rec. 2020 (called Wide Gamut on 4K televisions), Adobe Wide Gamut, and ACES AP1.  While not covering all possible color values, these spaces cover very large portions of the visible gamut, making them very useful as color correction working spaces.

Whichever very wide color space you choose to work in is up to you and your needs.  If your company or workflow requires ACES, use ACES.  If you’re only using one type of camera, such as a RED Weapon or an ARRI Alexa, you may find it beneficial to work in that specific manufacturer’s RGB space.

For most of the work we do here at Mystery Box that’s destined for anything other than web, I typically conform everything to Rec. 2020 and do my coloring and mastering in that space.  There are a couple of reasons for this:

  1. As a defined color space it uses real, pure-wavelength primaries, meaning that so long as only three color primaries are used for image reproduction, it’s about as wide as we’ll ever go.

  2. It encompasses 100% of Rec. 709 / sRGB and 99.98% of DCI-P3 (losing only a tiny amount of the reds).

  3. It encompasses 99.9% of Pointer’s gamut, a gamut that maps all real-world reflectable colors (not perceivable colors, just those found in the real world) onto the CIE XYZ gamut - essentially every color producible through the subtractive primaries.

  4. While it behaves differently than DCI-P3 and Rec. 709, they all behave fairly similarly so the learning curve is low.

  5. It requires fewer tone mapping corrections for the final output.

Whether these reasons are convincing is up to you.  Personally, I don’t find the 0.02% of DCI-P3 it doesn’t cover to actually matter, nor the set of greens and blue-greens it doesn’t store (which no three-color system can produce anyway). These differences are so small that only in the absolute best side-by-sides in a lab could you hope to see a difference.
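If you want to sanity check coverage figures like the ones above yourself, you can approximate them by intersecting the gamut triangles on the xy diagram. This sketch uses the published xy primaries and simple planar areas, so don’t expect the numbers to exactly match figures computed in other metrics:

```python
# Sketch: approximate gamut coverage by clipping one primaries triangle
# against another on the CIE 1931 xy diagram (planar areas only; published
# coverage figures may be computed differently, so expect small differences).

REC709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
DCI_P3  = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

def area(poly):
    # Shoelace formula; absolute value so winding order doesn't matter
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1]
            for i in range(n))
    return abs(s) / 2.0

def clip_edge(subject, a, b):
    # Sutherland-Hodgman step: keep the part of `subject` left of edge a->b
    def inside(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
    def intersect(p, q):
        dc = (a[0] - b[0], a[1] - b[1])
        dp = (p[0] - q[0], p[1] - q[1])
        n1 = a[0] * b[1] - a[1] * b[0]
        n2 = p[0] * q[1] - p[1] * q[0]
        n3 = 1.0 / (dc[0] * dp[1] - dc[1] * dp[0])
        return ((n1 * dp[0] - dc[0] * n2) * n3, (n1 * dp[1] - dc[1] * n2) * n3)
    out = []
    for i in range(len(subject)):
        p, q = subject[i], subject[(i + 1) % len(subject)]
        if inside(q):
            if not inside(p):
                out.append(intersect(p, q))
            out.append(q)
        elif inside(p):
            out.append(intersect(p, q))
    return out

def coverage(inner, outer):
    poly = list(inner)
    for i in range(len(outer)):
        poly = clip_edge(poly, outer[i], outer[(i + 1) % len(outer)])
        if not poly:
            return 0.0
    return area(poly) / area(inner)

print(coverage(REC709, REC2020))  # 1.0: Rec. 709 sits entirely inside Rec. 2020
print(coverage(DCI_P3, REC2020))  # just under 1.0: a tiny sliver of red is lost
```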

Whatever you do choose to use as a working space, it’s worth investing the time to pick one and stick with it.  Since the grading transformations do behave differently in the different color spaces, it’s easiest to pick one and refine your technique there to get the best possible results.

Conversions Implementation

Looking at the generalized workflow block diagram, you’ll want to consider how to implement the different conversion steps for your own productions in order to maintain the highest quality image pipeline with the lowest time and resource costs.  So let’s go into the two main places in the pipeline where you need to make new choices, and how to plan for them.

Conforming Camera RGB to Very Wide Gamut Working Space

Moving from Camera RGB to a very wide gamut space is a slightly different process for each camera system, and can depend on whether you’re capturing RAW data or a compressed video image.

When you’re using RAW formats, you’ll manage this step in the color correction or DIT software, which is the preferred workflow when image quality is paramount.  If you’re recording to a non-RAW intermediate format like ProRes, DNxHR, H.264, or any other flavor of MPEG video, you’ll need to select camera settings that best match your target wide gamut space.

Most RAW formats ignore camera looks applied by the operator and store the color decisions as metadata, but most video formats don’t.  Once again, camera settings vary, so it’s important to look at your specific system and run tests to find out where looks are applied in your camera’s image pipeline, and whether you can add a separate look on the video outputs for on-set monitoring while capturing a flattened LOG or linear image.

If your camera can’t separate the looks applied to the video files and the video output, and you want to capture a flat image but need to see it normalized on set, loading LUTs into on-set monitors is the ideal choice for image monitoring.  The process of creating and applying monitoring LUTs varies with your workflow, but we often find ourselves using a two-step process that uses Lattice to generate color space conversion LUTs, which we bring into DaVinci Resolve Studio to add creative looks and generate the final monitoring LUT.
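Both Lattice and Resolve read the plain-text .cube format, so you can also generate a conversion LUT yourself with a short script. A minimal sketch, using a simple 1/2.4 power curve as a stand-in for a real LOG-to-display normalization transform:

```python
def write_cube(path, size, transform):
    """Write a minimal 3D .cube LUT. `transform` maps an (r, g, b) triple in
    [0, 1] to an output triple; red varies fastest, per the .cube format."""
    with open(path, "w") as f:
        f.write("LUT_3D_SIZE %d\n" % size)
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    rgb = tuple(c / (size - 1) for c in (r, g, b))
                    f.write("%.6f %.6f %.6f\n" % transform(rgb))

# Stand-in "normalization" curve: a plain 1/2.4 power function per channel.
# A real monitoring LUT would bake in the camera's actual LOG-to-display math.
lut_curve = lambda rgb: tuple(c ** (1 / 2.4) for c in rgb)
write_cube("monitor_lut.cube", 33, lut_curve)
```

A creative look can then be layered on top of the conversion in Resolve before exporting the final monitoring LUT, which mirrors the two-step process described above.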

Some cameras or DIT applications export their look settings as CDLs, LUTs, or other metadata for you to use later in the grading process, which you can then apply in post as the starting point for the grading process.  Again, workflows vary.

Generally you’ll want to move directly from camera RGB into the working space to preserve as much sensor information as possible.  That implies deciding what your working space will be before capture (ACES AP0 or Rec. 2020 are recommended for the broadest future compatibility), though that’s sometimes not an option.  While RAW formats maintain camera primaries and let you jump directly into a wide working space later, if you’re forced to conform to standardized RGB for video intermediates you’ll need to make that decision as early as possible.  In that case, put the files into the widest color space the camera offers, whether that’s Rec. 2020, DCI-P3, or the manufacturer’s proprietary wide gamut space.

If RAW isn’t an option, a 12 bit log format video is your next best choice.  10 bit is fine too, but you won’t get corrections quite as clean later, and you may see some banding in fine gradients.  Anything less than 10 bits per channel creates severe problems when color grading and really should only be used as a last resort.  When recording to an 8 bit format, you should only use a standard SDR EOTF (never LOG) - LOG with only 8 bits of precision can create MASSIVE amounts of banding.
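You can see why 8 bit LOG is so dangerous with a quick back-of-the-envelope count of code values. Using a toy 10-stop log curve (an illustration only, not any real camera’s transfer function), count how many distinct codes land between 18% grey and white:

```python
import math

# Toy 10-stop log curve, mapping linear [2**-10, 1.0] onto [0, 1].
# (An illustration only, not any real camera's transfer function.)
def log_encode(x):
    return (math.log2(max(x, 2 ** -10)) + 10) / 10

gamma_encode = lambda x: x ** (1 / 2.4)  # stand-in for a standard SDR EOTF

def unique_codes(encode, bits, lo, hi, samples=100000):
    """Count the distinct quantized code values covering a linear range."""
    codes = set()
    step = (hi - lo) / (samples - 1)
    for i in range(samples):
        codes.add(round(encode(lo + i * step) * (2 ** bits - 1)))
    return len(codes)

# Code values available between 18% grey and white:
print(unique_codes(log_encode, 8, 0.18, 1.0))    # ~64: banding territory
print(unique_codes(gamma_encode, 8, 0.18, 1.0))  # ~131: twice the codes
print(unique_codes(log_encode, 10, 0.18, 1.0))   # ~254: log is safe again at 10 bits
```

The log curve spends its codes protecting the shadows, which is exactly why an 8 bit container starves the mids and highlights where skies and skin live.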

To summarize: To maintain the highest image quality with the smallest resource pain, use RAW formats when possible, convert to the working or widest color space if you have to record as video files, and use LUTs on display outputs to avoid baking camera looks into the video data.

Tone Mapping the Working Space to Output

Moving from a wide working space to a final deliverable space is generally a relatively simple process: convert each color value from the working space to its equivalent in the target space, and discard any data that lands outside the target range.  In most Rec. 2020 -> DCI-P3 or Rec. 2020 -> Rec. 709 conversions, this is completely fine.  You may find minor clipping in a few of the most saturated colors, but overall you shouldn’t see many places where the color is so bad you can’t live with it.
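As a sketch of what that straight conversion looks like under the hood, using the published linear-light matrices for the two spaces, you convert through CIE XYZ and clamp whatever lands out of range:

```python
# Straight working-to-delivery conversion: Rec. 2020 RGB -> XYZ -> Rec. 709 RGB,
# then discard (clamp) anything outside the target range.

REC2020_TO_XYZ = [
    [0.636958, 0.144617, 0.168881],
    [0.262700, 0.677998, 0.059302],
    [0.000000, 0.028073, 1.060985],
]
XYZ_TO_REC709 = [
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def rec2020_to_rec709(rgb):
    out = mat_vec(XYZ_TO_REC709, mat_vec(REC2020_TO_XYZ, rgb))
    return [min(max(c, 0.0), 1.0) for c in out]  # hard clip the out-of-range data

print(rec2020_to_rec709([0.3, 0.5, 0.4]))  # moderate color: converts cleanly
print(rec2020_to_rec709([0.0, 1.0, 0.0]))  # fully saturated 2020 green: clips hard
```

Only the most saturated values hit the clamp, which matches how forgiving these particular conversions are in practice.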

Where you do run into problems is when you’ve graded using an HDR transfer function and are moving into SDR.  A straight translation here results in very, very large amounts of clipping.  I haven’t mentioned EOTFs much yet, simply because most color engines where you’ll be doing wide gamut work use linear internals, since that tends to offer the most dynamic range and manipulation potential.

However, displays rarely offer a linear EOTF, so you’ll have to monitor through some transfer function or another.  Display monitoring is another reason I typically grade in BT.2020 (and usually in HDR), since displays need to be set to a specific color space and EOTF.  That means if you’re using a very wide working space, you must apply a tone map to your monitoring output, regardless of whether you’re grading in HDR or SDR (especially when you’re working in linear light).

The first series we published here on the blog about HDR video included a section on “Grading, Mastering, and Delivering HDR”, where we presented a few bezier curves you can apply as the last element in your node structure for HDR grading in PQ or HLG.  These bezier curves are essentially luminance tone maps, converting the linear light values into the specific range of digital values you use for HDR.

A full tone map typically includes considerations for converting color information as well.  Just like the bezier curves control the roll-off of the lights into your target range, tone maps roll off color values between color spaces to minimize the amount of hard clipping.  Here it’s important to exercise caution and experiment with your specific needs before selecting a tone map, since this step can create hue or saturation shifts you don’t expect.
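The difference between a hard clip and a rolled-off tone map is easy to see with a toy curve (this tanh knee is purely an illustration, not any published tone map):

```python
import math

def hard_clip(x):
    return min(x, 1.0)

def soft_rolloff(x, knee=0.8):
    """Pass values below the knee straight through; compress everything above
    it smoothly toward 1.0. (A toy tanh knee, not a published tone map.)"""
    if x <= knee:
        return x
    return knee + (1.0 - knee) * math.tanh((x - knee) / (1.0 - knee))

# A hard clip flattens every over-range value to the same white...
print([hard_clip(v) for v in (0.5, 1.5, 2.0)])
# ...while the roll-off keeps them distinct, at the cost of slightly
# compressing values just below the clip point
print([round(soft_rolloff(v), 4) for v in (0.5, 1.5, 2.0)])
```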

Tone mapping is the golden goose of simplifying multi-space color corrections.  It’s what brings everything together by making it possible to very, very quickly move from your working space into your delivery space.

If you grade in ACES AP0 or AP1, the tone maps are already prepared for your conversions.  Simply apply the tone map for the target system and voila, the conversion’s ready for rendering, preserving all (or rather most - they aren’t quite perfect) of the authorial intent of the grade.  We did this on our Yellowstone video to generate the HDR master.2

Grading in other wide color spaces often requires custom tone maps, or on-the-fly maps generated by a program such as DaVinci Resolve Studio.  RED Digital Cinema, for instance, has produced LUT based tone maps for converting their RedWideGamutRGB Log3G10 footage into various HDR and SDR color spaces.  The entire Dolby Vision format is essentially a shot by shot set of tone maps for various screen brightnesses.

Or, you may find yourself doing what we’ve done - spend time to create your own tone mapped LUTs for converting HDR and SDR of various formats, and refining these maps for each individual piece of HDR content so that you end up with the optimal SDR look for that work.

Why Bother?

Wide color space corrections and tone mapping for various output systems are the way color correction will be handled in the future.  With the arrival of BT.2020 and HDR transforms, the number of delivery color encoding formats has at least tripled in just the last few years.  The only way to ensure your content will be compatible in the future is to adopt the new paradigm and its multi-space coloring workflow.

DaVinci Resolve Studio’s latest update (version 14) saw a significant overhaul of the color management engine in its last few beta versions, optimizing the core functionality for this kind of color management workflow.  If you’re using DaVinci Color Management or ACES color management in the latest version, DaVinci will automatically select the optimal RAW interpretation of your footage and conform it to your working space, removing the ambiguity of how to interpret your footage and maintaining the maximum image quality.

Another manufacturer that’s natively implemented a similar color pipeline is RED, with their new IPP2 color workflow.  All of their in-camera looks are now applied to the image data after the sensor RGB is converted into RedWideGamutRGB, with every output tone mapped to your monitoring space.  They also now let you select whether color adjustments made in camera are burned into the ProRes files or simply attached as a LUT or CDL.  This way, regardless of your monitoring or eventual mastering space, the color changes you make in camera will have the exact same effect across the board.

This is the workflow of the future.  Like HDR, which we can assume will be the EOTF of the future, the efficiencies and simplicities of this particular workflow are so great that the sooner you get on board, the better your position will be in a few years’ time.  Grading in ACES AP0 offers a level of future proofing that not even BT.2020 provides.  While BT.2020 still exceeds what current technologies can really do, ACES AP0 ensures that regardless of where color science heads in the future (4+ color primaries?), your footage will already be in a common format that’s simple to convert to the new standard, preserving all color data.

While there is a learning curve to this workflow, at a technical level it’s simpler to learn and apply than even understanding how HDR video works.  Yes, it takes some getting used to, but it’s worth learning.  Because in the end, you’ll find better quality than you can otherwise hope for.

Written by Samuel Bilodeau, Head of Technology and Post Production


1 Yes, I’m making up the term “Very Wide Gamut” or “Very Wide Gamut RGB” simply because “Wide Gamut” and “Wide Gamut RGB” can refer to many different specific spaces, depending on the circumstances. Here I’m referring to any of these typical wide gamut spaces, or any space that covers a very large portion of the perceivable gamut.

2 A caveat about ACES tone mapping: We used ACES AP0 with an ACEScc EOTF for our Yellowstone video. The tone mapping into HDR was fantastic and allowed me to skip my own range-limiting map, and the ability to select different input transforms for each shot was a huge help. However, ACES failed when trying to generate an SDR version of the film: instead of tone mapping the higher dynamic range into the smaller SDR range, it clipped at the limits of SDR. This limitation makes me hesitant to recommend ACES for mixed dynamic range work. It works wonderfully for one or the other, but don’t expect it to tone map directly between the two.


First off, huge thanks to everyone at Red for getting me this camera so quickly. Like many of you, we’ve been holding out and waiting for VistaVision for a long time, and now the wait seems to be coming to an end.

I’ve had the sensor for less than a day, but I’m EXTREMELY IMPRESSED! I haven’t done side-by-sides with the Helium, but I would say the sensor is just as clean, and like many others who have been shooting VistaVision, I find it pretty addicting.

The following are my non-techie, non-polished, thoughts on this sensor and what it means to the industry. 

  1. 5K S35. This might be Red’s first true Arri competition. Finally, a super clean sensor with amazing highlight roll-off that shoots at 5K! I love the resolution and the flexibility that it gives me, but clients/agencies/post houses aren’t always the biggest fans. No matter how many conversations I have about the benefits of R3D compression, its data rates, and so on, we eventually have to bend over and give them what they want - and they love 4K ProRes 4444. Now, thanks to processor and GPU advancements, the most common video editing stations can handle 5K R3D just as well as, if not better than, 4K ProRes 4444 when you throw it in a timeline - but you get the flexibility of RAW. While I loved the Dragon sensor and got to know it really well, it had its limitations, which the Helium, and now Monstro, sensors have addressed. Now that you can get S35 5K from Monstro, I can’t see a reason to shoot ProRes 4444 anymore. (That is still for the client to decide, and I would still love it if Red allowed 4K ProRes 4444-only recording in-camera for those “back-up and walk away” clients.) Besides being just 1K above a 4K deliverable, which is ideal for fast-turnaround, non-future-proofed productions, 5K also offers better rolling shutter performance compared to 8K, or even 7K with the Helium, making it great for car work.

  2. High-Speed. 2K looks amazing on this sensor!!!! It’s super, super, super clean and usable even for a 2K finish. I have to do more compression tests, but just looking through the monitor at 2K 300fps looks amazing. That means for commercial work with 1080p finishes you should be pretty safe to shoot it! Bear in mind that I have been using Zeiss Otus lenses, and your lenses are going to play a big role, but I am really excited about this! And 4K 120fps looks amazing as well! When I have more time next week I’ll do some true compression tests and post some R3Ds, but yeah, this sensor’s low noise floor opens up a lot of possibilities.

  3. 8K VV vs. 8K S35. For the first three hours after turning the Monstro on, I was convinced that I would upgrade all of my cameras to Monstro, just because it offers so much more flexibility and speed at lower resolutions, which is especially useful for commercial workflows, and the VistaVision field of view is just so, so addicting. However, having both on hand is going to be a must for me. While I shoot a lot of commercials that only have a 13-week life span, where future proofing isn’t really necessary, the majority of my work does need it! 8K S35 allows me to capture 8K/7K with a wide variety of vintage and new lenses and gives me the needed crop factor for shooting wildlife on long lenses. 8K VV gives me 8K with a field of view that is just breathtaking! People will always argue that you can just use a wider lens with S35, but I’ll say there is nothing like shooting VV!

Anyways, I’ll post more information and include more techie stuff (nothing compared to Phil’s knowledge) in this thread later as I have more time to test. But honestly, at the end of the day, I am a shooter, so I’ll be spending more time shooting with this camera in the field than shooting charts and doing side-by-side comparison tests.

Written by Jacob Schwarz, Owner, Director, Cinematographer

Display Calibration & Color Management

There are many different ways for consumers to experience your content today - so many that it’s often difficult to predict exactly where and how it’ll be seen.  Is it going to a theater?  Will it be watched on a television?  Which of the many handheld devices or personal computers will an end consumer use to view and listen to your work?  And how is that device or environment going to be rendering the colors?

Color management is an important consideration for every modern digital content production company to keep in the forefront of their minds.  In larger post production environments, there will often be a dedicated team that manages the preservation of color accuracy across the many screens and displays found throughout the facility.  But for small companies and independent producers, the burden of color management often falls on an individual with multiple roles, and is easier to ignore and to hope for the best than to spend the time and money to make sure it’s done right.

Before going any further, it’s important to define what we’re talking about when we say ‘color management.’  Color management is different from color correction or color grading, which is the process of normalizing colors and contrasts, maintaining color consistency across edits, and applying creative looks to your footage.  Instead, color management is about making sure the colors you see on your screens match, as closely as possible, what the digital values stored in your video files are actually describing within the color space you’re using.

In practice this means making sure that your displays, televisions, projectors, or other screens, as well as your lighting environment, are all calibrated so that their RGB balance, brightness, and contrast match the target standard as closely as you can get them.  This ensures you don’t accidentally add corrections to your digital data when you’re trying to ‘fix’ something you see on your displays that’s only there because of your displays or environment.  “Burning in” these kinds of shifts adversely affects the quality of your content by creating perceptual color shifts for your clients and consumers.

While calibration is essential, color management also involves preserving color from camera to end user display, keeping color consistent between programs, and ensuring your final deliverables contain the appropriate metadata.  Both parts of color management matter, so we’re going to talk about both.  We’ll focus more on the calibration aspect, since that’s the part you have to get right, before briefly addressing color management in applications, without getting mired too deep in advanced technical talk.

The problem

How do I know that my red is the same as your red?

This is one of the fundamental philosophical questions of color perception.  How do I know that the way I perceive red is the same as the way you perceive red, and not how you perceive blue or green?  There’s actually no way to measure or determine for certain that the perceived shades are identical in the minds of any two individuals, since color perception happens as the brain interprets the stimulus it receives from the eyes.

While being a fun (or maddening) thought provoking question, color sameness is actually a really important baseline to establish in science and imaging.  In this case we’re not asking about the perception of color, but whether the actual shade of color produced or recorded by two devices is the same.  Today we’re only going to focus on colors being produced, and not recorded - we’ll cover capturing colors accurately in our next post.

There are a LOT of different kinds of displays in the world - from the ones we find on our mobile devices, to computer displays, televisions, and consumer or professional projectors.  The core technologies used to create or display images, such as plasma, LCD, OLEDs, etc., all render shades of color in slightly different ways, leading to differences in how colors within images look between displays.

But it’s not just the core technology used that affects the color rendition. Other factors like the age of the display, the specific implementation of the core technology (like edge-lit or backlit LCDs), the manufacturing tolerances for the specific class of display, the viewing angle, and the ambient environment all affect the colors produced or the colors perceived.  Which makes it almost impossible to predict the accuracy of color perception and rendering for one viewer, let alone the thousands or millions who are going to see your work.

But rather than throw up your hands in despair at the impossibility of the task, shift your focus to what you, as the content creator, can do: if you can be reasonably sure that what you see in your facility is as close as possible to what’s actually being encoded, you can be confident that your end viewers will not be seeing something horrifying.  While every end viewer’s experience will be different, at the very least your content will be consistent for them - it will shift in exactly the same way as everyone else’s content, a shift they’re already used to without even knowing it.

For that reason it’s important that when you master your work you’re viewing it in an environment and with a display that’s as close to perfectly accurate as possible.  But unfortunately, color calibration isn’t something you can simply ‘set and forget’: it needs to be done on a recurring schedule, especially with inexpensive displays.

What is Color Calibration?

How do we make sure color looks or is measured the same everywhere?

This question was first ‘answered’ in 1931 with the creation of the CIE XYZ color space.  Based on the results of a series of tests that measured the sensitivity of human vision to various colors, the CIE created a reference chart that maps how the brain perceives combinations of visible wavelengths as colors onto a Cartesian plane (X-Y graph).  This is called the CIE 1931 Chromaticity Diagram.

Three different color spaces referenced on the CIE 1931 Chromaticity diagram. The colors within each triangle represent the colors that can be produced by those three color primaries. All three share the same white point (D65).

This chart allows color scientists to assign a number value to all perceivable colors, both those that exist as a pure wavelength of light, and those that exist as a combination of wavelengths.  Every color you can see has a set of CIE 1931 coordinates to define its chromaticity (combined hue & saturation, ignoring brightness), which means that while we may not have an answer to a philosophical question of individual color experience, we do have a way of scientifically determining that my red is the same as your red.

This standard reference for colors is a powerful tool, and we can use it to define color spaces. A color space is the formal name for all of the colors a device can capture or produce using a limited set of primary colors.  Map the primary colors onto the chromaticity diagram, join them into a geometric shape, and your device can create or capture any color within the enclosed shape.  With an accompanying white point, you have the fundamental ingredients of a defined color space, like Rec. 709, sRGB, AdobeRGB, etc.
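Checking whether a given chromaticity falls inside a gamut triangle is simple enough to sketch in a few lines: a point is inside if it sits on the interior side of all three edges joining the primaries.

```python
REC709_XY = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # R, G, B in CIE xy

def in_gamut(xy, primaries):
    """True if a chromaticity sits inside the triangle joining the primaries
    (same side of all three edges, whichever way the triangle is wound)."""
    signs = []
    for i in range(3):
        (ax, ay), (bx, by) = primaries[i], primaries[(i + 1) % 3]
        signs.append((bx - ax) * (xy[1] - ay) - (by - ay) * (xy[0] - ax) >= 0)
    return all(signs) or not any(signs)

print(in_gamut((0.3127, 0.3290), REC709_XY))  # D65 white: True
print(in_gamut((0.0743, 0.8338), REC709_XY))  # spectral ~520 nm green: False
```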

Defining and adhering to color spaces is actually quite important to managing and matching end to end color.  Digital RGB values have no meaning without knowing which of the many possible shades of red, green, or blue color primaries are actually being used.  Interpreting digital data using different RGB primaries than the original creator used almost always results in nonlinear hue shifts throughout the image.

This is where color calibration comes in.  Color calibration is the process whereby a technician reads the actual color values produced by a display, then either adjusts the display’s settings to conform more closely to the target color space, adjusts the signal going to the display to better match the targeted output values, or both.

To do this, you need access to four things:

  1. A signal generator to send the display specific digital values

  2. A colorimeter to measure the actual colors produced

  3. Control of the display’s input signal or color balance settings to adjust the output

  4. Software to manage the whole process and correlate the signal to measurement

If you want to make sure you’re doing it right, though, an in-depth understanding of how color and every image generation technology works helps a lot too.

Some consumer, most prosumer, and almost all professional displays leave the factory calibrated, though consumer and commercial televisions and almost all projectors must be calibrated after installation, for reasons we’ll talk about later.  Unfortunately, displays lose their calibration over time, and each kind and quality of display will show more or less variance as it ages.  This means that in circumstances where calibration is important, such as in professional video applications, displays require regular recalibration.

For desktop displays, this usually involves creating or updating the ICC color profile, while for reference displays it typically involves adjusting the color balance controls so that the display itself better matches the target color space.

The differences in calibration technique come from the workflow paradigm.  For desktop displays it’s assumed that the host computer will be directly attached to any number of different kinds of displays, each with its own color characteristics, at any given time - but always directly attached.  So, to simplify the end user experience, the operating system handles color management of attached displays through ICC profiles.

ICC profiles are data files that define how a display produces colors.  Each records the CIE XYZ values of the display’s RGB color primaries, white point, and black point, and its RGB tone curves, among other metadata.

Using this information, the operating system “shapes” the digital signal sent to the display, converting on the fly the RGB values from the color space embedded in an image or video file into the display’s RGB space.  It does this for all applications, and essentially under all circumstances.  Some professional programs do bypass the internal color management, sort of, by assigning all images they decode or create to use the generic RGB profile (i.e. an undefined RGB color space). But it’s usually best to assume that for all displays directly attached to the computer, the operating system is applying some form of color management to what you’re seeing1.

Calibrating direct attached displays is relatively quick and easy.  The signal generator bypasses the operating system’s internal color management and produces a sequence of colored patches, which the colorimeter reads to map the display’s color output.  The software then generates an ICC color profile for that specific display, which compensates for color shifting from wear and tear, or the individual factory variances the display has.

Once calibrated, you can be reasonably confident that when viewing content, you’ll be seeing the content as close to intended as that particular display allows.

Reference displays, projectors, and televisions follow a slightly different calibration paradigm.  When calibrating computer displays, you can shape the signal to match the display’s characteristics.  But because a single video signal will (or at the very least can) go to multiple displays or signal analysis hardware at the same time, and the signal generator is likely to have no information about the attached devices, it’s simply not practical to adjust the output signal.  Rather, professional output hardware always transmits its signal as pure RGB or YCbCr values, without worrying about the details of color space or managing color at all.

So instead of calibrating the signal, calibration of reference displays, projectors, or any kind of television usually requires adjusting the device itself.2

Once again, a signal generator creates specific color patches that the colorimeter reads to see exactly what values the display creates.  Software then calculates each color’s offset as a Delta E value (how far the produced color is from where it’s supposed to be according to the selected standard) and reports to the operator how far out of calibration the display is.

The operator then goes through a set of trial and error adjustments to the image to lower the Delta E values of all the colors to get the best image possible.  Tweak the ‘red’ gain and see how that affects the colors produced.  Touch the contrast and see its effect on the overall image gamma - and on all the other colors.  Measure, tweak, measure, tweak, measure, tweak… and repeat, until the hardware is as close to the target color space as possible.

Calibration results showing DeltaE values for greyscale and color points

Generally, Delta E values less than 5 are good, less than 3 are almost imperceptible, and under 2 is considered accurate.  Once the calibration is complete, you can be reasonably sure that what you’re seeing on a reference display, projector, or television is as close to the target color space as possible.  But does that even matter?
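In its original CIE76 form, Delta E is just the Euclidean distance between the measured and target colors in CIELAB (newer formulas like CIEDE2000 weight the terms, but the idea is the same). A sketch with hypothetical patch values:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 Delta E: straight Euclidean distance between two CIELAB colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def judge(de):
    # Thresholds discussed above: under 2 accurate, under 3 almost
    # imperceptible, under 5 good
    if de < 2.0:
        return "accurate"
    if de < 3.0:
        return "almost imperceptible"
    if de < 5.0:
        return "good"
    return "needs adjustment"

target   = (53.2, 80.1, 67.2)  # hypothetical target red patch, in Lab
measured = (54.0, 78.9, 66.5)  # what the colorimeter might report

de = delta_e76(target, measured)
print(round(de, 2), judge(de))
```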

Regular Calibration

Medium-priced computer displays and professional reference displays usually leave the factory with a default calibration that puts them as close to standard reference as the technology allows.  The same is not true of most televisions and projectors - they leave the factory uncalibrated, or in an uncalibrated mode by default, for a couple of reasons we’re not going to get into.

But even with this initial factory calibration, the longer a display has been used the more likely it is to experience color shifts.  How quickly it loses calibration depends on the technology in use: some technologies can lose their calibration in as little as a month of daily use.

The reasons behind this shift over time can be lumped together as “wear and tear”.  The exact reasons for each different display technology losing its calibration are a little off topic, so I’m going to spare you the gory details of the exact mechanisms that cause the degradations.  However, the important things to know are:

  1. The backlight of LCDs and the main bulb in digital projectors change colors over time. This is a major problem with the xenon arc lamps found in projectors, and is a bigger problem for CCFL LCDs than for LED lit (white or RGB) LCDs, but even the LED spectrums shift with use.

  2. The phosphors inside of CRTs and plasma displays degrade with time and their colors change, as do the primary color filters on LCD displays though at a slower pace.

  3. Anything using liquid crystals (LCD displays and LC or LCoS projectors) can suffer from degradation of the liquid crystal, which affects color and brightness contrasts.

  4. The spectrum of light emitted by plasma cells changes with age, so they don’t stay balanced at the same output levels.

Or in other words, all displays change color over time.  Setting up a regular calibration schedule for every display you view your content on is an important part of color management.  You don’t want to move a project from your reference display to your desktop to find that suddenly the entire video appears to be pulling magenta, or invite a client to review your work in your conference room only to find the picture washed out or color shifted.

Environment and Color Management

Up until now we’ve been talking about the color characteristics of your displays and projectors.  But just as important as your display calibration is the characteristics of your environment in general.  The brightness level and color of lights in the room affect perceptions of contrast and the colors within the image.

This is really easy to get wrong.  Because not only does the display need to be calibrated for the target color space, it should be calibrated within the target environment.  The technician handling the calibration will usually make a judgement call for changing display values like display brightness, gamma curve, or white point based on these environmental choices.  But they may also make other recommendations about the environment to improve the perception of color on the screen - what to do to other displays, lighting, windows etc., so that your perception of color will better match industry standards.

Generally speaking, reference environments should be kept dim (not pitch black), using tungsten balanced lighting that’s as close to full spectrum as possible.  Avoid daylight balanced bulbs, and install blackout curtains on any windows.  Where possible, keep lighting above and pointed away from the workstation screens - reflected light is better than direct lighting, since it reduces glare and is better for color perception.

The easiest way to get proper lighting is to set up track lighting with dimmable bulbs (LED or tungsten based, colored between 2800K & 3200K), and point the pots slightly away from the workstation.  The dimmer ensures that you can bring the environment into specification for grading, but can then bring the lighting back up to normal ambient conditions for general work or for installing hardware etc.  If changing the overhead lighting isn’t an option, good alternatives are stick lights on the opposite side of the room, positioned at standing height.

Keep your reference display or projector as the brightest screen in the environment.  If you don’t, your brights will look washed out and gray since they’re dimmer than other light sources.  It will also affect your overall perception of contrast: you’ll perceive the image as darker and having more contrast than expected, and are therefore more likely to push up the mids and darks and wash out the image as a whole.  Dimming the brightness of interface displays, scopes, phones or tablets, and any other screen within the room will make sure that you’re perceiving the image on your reference hardware as accurately as possible.

Depending on the number of interface displays and other light sources in the room, you may need to further lower ambient lighting to keep contrast perception as accurate as possible.  In rare cases, such as in small rooms, this may include turning the lights off completely, since the interface displays provide sufficient ambient lighting for the environment.

Calibrating your displays is essential; calibrating the environment is important.  Usually it’s pretty easy to tweak environmental calibration for better color perception, so long as you’re starting from a dark or otherwise light-controlled environment.  And unlike display calibration, it’s something you can do once and not need to tweak for years.

Application Color Management

Once you’ve calibrated all of your hardware and your environment, it’s easy to assume that your job is done, and you don’t have to worry about color management until the next time you book a calibration session.  Oh how I wish that were the case.

Different applications manage color in different ways, which means you may still see differences between applications with the same footage.  Sometimes applications get in fights with the operating system over who’s managing color and both end up applying transformations you’re not aware of.

Which means it’s important to understand exactly how each application touches color.  To do that, let’s briefly look at how four common applications manage color: Adobe Premiere, Final Cut Pro X, Adobe After Effects, and DaVinci Resolve.

Both Adobe Premiere and Final Cut Pro X actively manage the colors within the project.  Adobe Premiere gives you exactly no way of changing the color interpretation of the input files, beyond the embedded metadata in HEVC and a few other formats (NOT Apple ProRes).  It conforms everything to Rec. 709 in your viewers and signal outputs, and there’s no way to override this.  The operating system then uses the display’s ICC profile to conform the output so that you can see it as close to Rec. 709 as possible.  Which is good, because it means that when you output the video file, what you see is what you get.

Adobe Premiere’s color engine processes colors in 8 bit.  You can turn on 16 bit color processing in the output or in the sequence settings by flagging on “Maximum Bit Depth” and “Maximum Render Quality.”  This is really important for using high bit depth formats like Apple ProRes, which stores 10 or 12 bit image data, assuming you want to maintain high color fidelity in your output files.  If you’re outputting to 8 bit formats for delivery you may still benefit from keeping these flags on, however, depending on how in-depth your color corrections and gradients are.
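As a rough illustration of why bit depth matters (simple arithmetic, not Premiere’s actual internals), consider how many distinct code values a full-width gradient can use at each precision:

```python
# Illustrative arithmetic only: code values available per channel at
# each bit depth, and how a screen-wide gradient spreads across them.
levels_8 = 2 ** 8     # 256 code values per channel
levels_10 = 2 ** 10   # 1024
levels_16 = 2 ** 16   # 65536

width = 3840  # a UHD-wide horizontal gradient
print(width / levels_8)   # 15.0 pixels share each 8 bit code value -> visible banding
print(width / levels_10)  # 3.75 pixels per 10 bit code value -> much smoother
```

The fewer pixels forced to share a single code value, the less visible the stair-stepping in smooth gradients.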

Basically, Adobe Premiere assumes you know nothing about color management, and that it should handle everything for you.  Not a terrible assumption, just something to be aware of when you start thinking about managing color yourself.

Like Adobe Premiere, Final Cut Pro X also handles all of the color management, but offers at least a small amount of control over input and output settings.  By default, it processes colors at a higher internal bit depth than Premiere, and in linear color, which offers smoother gradients and generally gives better results.  You also get to assign a working color space to your library and your project (sequence), though your only options are Rec. 709 and Wide Color Gamut (Rec. 2020).

Each clip by default is interpreted as belonging to the color space identified in its metadata, and conformed to the output color space selected by the project (sequence).  If necessary, you can override the color space interpretation of each video clip by assigning it to either Rec. 601 (NTSC or PAL), Rec. 709, or Rec. 2020 (notably missing are DCI-P3 and the HDR curves).  When using professional video outs, the signal’s data levels are managed by the selection of Rec. 709 or Rec. 2020, and FCP-X handles everything else.  Like Adobe Premiere, it works with the operating system to conform the video displayed in the interface to the attached monitor’s ICC profile.

Both Adobe Premiere and FCP-X work on a “what you see is what you get” philosophy.  If your interface display is calibrated and using the proper ICC profile, you shouldn’t have to touch anything, ever.  It just works.  But gods Adobe and Apple forbid you try to make it do something else.

On the other hand, Adobe After Effects and DaVinci Resolve have highly flexible, colorspace agnostic color engines that allow you to almost completely ignore color management.  They’re quite content to simply apply the transformations you’ve requested to the digital data read in, without caring what color space or contrast curve that data uses.  And when you output, they simply write the RGB data back to a file and you’re good to go.

Of course, that’s the theory.  After Effects makes a few color assumptions under the hood about intent, including ignoring the display ICC profile on output, since it has no idea what color space you’re working in anyway.  That sounds innocuous, but it’s a problem if you’re using a display with properties that are mismatched to the color profile of the footage you’re using³.  Suddenly your output, with an embedded color profile and playing back in a color managed application, may look significantly different than it did in After Effects.

Turning on After Effects’ color management by assigning a project working space allows for a more accurate view of the final output.  You can then flag on the view option to “Use Display Color Management” (on by default), and adjust the input space of any RGB footage.  But you can still get into trouble: any chroma subsampled footage, like ProRes 422 or H.264, is only permitted to use the embedded color profile.  Adobe also ignores ProRes metadata for Rec. 2020 and HDR, which will negatively affect the output when using color management.  It also exhibits strange behavior when using HDR gamma curves and in some other working spaces.

DaVinci Resolve has some of the best functionality for color management.  Its agnostic color engine renders color transformations in 32 bit float precision, and outputs raw RGB data to your video out.  It assumes you know what color space you’re using, so it’s happy to ignore everything else.  By default, on a Mac it applies the monitor ICC profile to the interface viewers, with the assumption that your input footage is Rec. 709⁴.

Fortunately, changing the working space is incredibly easy, even without color management turned on - simply set the color primaries and EOTF in the Color Management tab of the project settings.  With color management off, this will only affect the interface display viewers, and then only if the flag “Use Mac Display Color Profile for Viewers” is set (on by default, MacOS only).  Unfortunately it does not as of yet apply ICC profiles to the viewers under Windows (see footnote 4).

When you turn DaVinci Resolve’s color management on, you have extremely fine grained control over color space - being able to set the input, working, and output color spaces and gammas separately (with Resolve managing the transformations on the fly), and then being able to bypass or override the input color space and gamma on a clip-by-clip basis in the color correction workspace.  And because of its 32 bit floating point internals, its conversions work really well, preserving “out of range” data between nodes and between steps in the color management process, allowing the operator to rein it in and make adjustments to the image at later steps - an advantage of active color management over LUTs in a few cases.
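A toy sketch of that difference (illustrative arithmetic only, not Resolve’s actual code): a float pipeline keeps “out of range” values between steps so a later adjustment can recover them, while a clipping pipeline throws them away immediately:

```python
# Illustrative sketch: float pipelines preserve out-of-range data
# between processing steps; clipping pipelines discard it.
def gain(x, g):
    return x * g

# Float-style: the overshoot survives the intermediate step
v = gain(0.9, 1.5)   # 1.35 -- out of range, but preserved
v = gain(v, 0.7)     # 0.945 -- highlight detail recovered downstream

# Clipping-style: data is destroyed at the intermediate step
c = min(gain(0.9, 1.5), 1.0)  # clipped to 1.0
c = gain(c, 0.7)              # 0.7 -- the original detail is gone
print(round(v, 3), round(c, 3))
```

The float result lands back in range with its detail intact; the clipped result can never get that detail back.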

Adobe Premiere
  Input: Assumes embedded metadata or Rec. 709; cannot be changed
  Processing: 8 bit, Rec. 709 with Gamma 2.4 assumed; 16 bit and linear color processing possible
  Output: Rec. 709 on all outputs
  Display: Output conformed to display using ICC profile

Final Cut Pro X
  Input: Assumes embedded metadata or Rec. 709; overridable to Rec. 2020
  Processing: 10-12 bit, Rec. 709 or Rec. 2020 (configured by library) with Gamma 2.4
  Output: Rec. 709 or Rec. 2020 on all outputs (configured by project)
  Display: Output conformed to display using ICC profile

Adobe After Effects
  Input: Assumes embedded metadata or Rec. 709, ignored by default; reassignable for RGB formats but fixed interpretation of YCbCr
  Processing: 8 or 16 bit integer or 32 bit float agnostic color engine; working space assignable on a project basis, many fixed working spaces available
  Output: RGB output in working space or generic RGB
  Display: Color space and calibration defined by display (pro out); output conformed to display using ICC profile for direct-attached interfaces when a working space is assigned

DaVinci Resolve Studio
  Input: Ignored by default; globally assignable with per-clip overrides to nearly any color space
  Processing: 32 bit floating point agnostic color engine; working space assignable on a project basis, with independently assignable color primaries and EOTFs
  Output: RGB output in working space or assignable output space, or generic RGB
  Display: Color space and calibration defined by display (pro out); output conformed to display using ICC profile for direct-attached interfaces when a working space is assigned; LUTs available for pro output calibration

These four programs form a good scale for understanding application color management.  Generally speaking, the easier an application is to set up and use, the more hands-off its color management is likely to be, giving you anywhere from no control to very limited control.  More advanced programs usually offer more in-depth color management features, or the ability to bypass color management completely so that you have the finesse you need.  They also tend to preserve RGB data internally (and output that RGB data through professional video output cards), but require more knowledge of color spaces and the use of calibrated devices.

Calibrating your displays is a significant portion of the color management battle, though it’s also necessary to understand exactly what the applications are doing to the color if you want to be able to trust that what you’re seeing on the screen is reasonably close to what will be delivered to a client or to the end user.

What A Fine Mess We’re In

Keeping displays and projectors calibrated and trusting their accuracy has always been a concern, but it’s become a major issue as the lower cost of video technologies has made the equipment more accessible, and as both the video and film production industries have shifted to fully digital production.

“Back in the day”, analog video displays relied on color emissive phosphors for their primary colors.  The ‘color primaries’ of NTSC and PAL (and SECAM) weren’t based on the X-Y coordinates on the CIE XYZ 1931 diagram, but on the specific phosphors used in the CRT displays that emitted red, green, and blue light.  They weren’t officially defined with respect to the CIE 1931 standards until Recommendation BT.709 for High Definition Television Systems (Rec. 709) in 1990.

Around that time, with the introduction of liquid crystal displays, computer displays also had to start defining colors more accurately.  They adopted the sRGB color space in the mid-to-late nineties, using the same primaries as Rec. 709 but with a different data range and more flexible gamma control.  Naturally, both of these standards based their color primaries on… the CRT phosphors used in NTSC and PAL television systems.  And while those phosphors degrade and shift over time, they don’t shift anywhere near as much as the backlights of an LCD.  Meaning that prior to the early 2000s, when LCDs really took off, calibration was far less of an issue.
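For reference, here are the shared Rec. 709 / sRGB primaries and D65 white point expressed as CIE 1931 xy chromaticity coordinates:

```python
# Rec. 709 / sRGB chromaticity coordinates on the CIE 1931 xy diagram.
REC709_PRIMARIES = {
    "red":   (0.640, 0.330),
    "green": (0.300, 0.600),
    "blue":  (0.150, 0.060),
    "white": (0.3127, 0.3290),  # D65 white point
}
for name, (x, y) in REC709_PRIMARIES.items():
    print(f"{name}: x={x}, y={y}")
```

These are the coordinates a calibration probe is trying to land your display’s primaries on when targeting Rec. 709.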

Now we have to worry not only about the condition of the display and its shifting calibration, but which of the multiple color spaces and new EOTFs (gamma curves) the display or application works with, what client deliverables need to be, and which parts of the process may or may not be fully color managed with our target spaces supported.

And then we have film.  Right up until the advent of end to end digital production, film had the massive benefit of “what you see is what you get” - your color space was the color space of the film stock you were using for your source, intermediates, and masters.  Now with the DCI standard of using gamma corrected CIE X’Y’Z’ values in digital cinema masters, you have to be far more cautious of projector calibration: it’s not possible to convert from CIE X’Y’Z’ into proper color output without regularly measuring the projector’s actual output values.  And we’re not going to talk about the nightmare of DCI white points and desktop displays that use the DCI-P3 color space.

Oh, and by the way, every camera sees the colors differently than the actual color spaces you’re trying to shoot in, and may or may not be conforming the camera color primaries to Rec 709, DCI-P3, or something else.  Because this needed to be more complicated.

Fortunately, with a basic understanding of color management and color calibration navigating the modern color problems is actually much more manageable than it all appears on face value.  In our next post we’re going to be discussing RED Digital Cinema’s Image Processing Pipeline 2 (IPP2), and why it’s the perfect paradigm for solving the modern color management problem.

But in the meantime, if you’re working in the Utah area and want to figure out the best way of calibrating your workspace or home office, give us a call.  We’ve got the right equipment and know how to make sure that when you look at your display or projector, you’re seeing as close to the standards as possible.

Color and deliver with confidence: make sure it’s calibrated.

Written by Samuel Bilodeau, Head of Technology and Post Production


Color management and calibration are trickier than I’ve made it sound.  I’ve simplified a few things and tried to be as clear as possible, but there are many, many gotchas in the process of preserving color that can make it maddening.  And this is one area where a small amount of knowledge and trying to do things yourself can get you into huge amounts of trouble really quickly.

Trial and error is important to learning, and often it’s still the only way to feel out exactly what an application is doing to your files.  But be smart: calibrate your displays and let the programs manage things for you, unless you’re intending on experimenting and know the risks associated with it.



1 Note, this is not a bad thing.  In most cases it’s a good thing.  It’s just something to be aware of and to understand how it works.

2 It’s also possible to use lookup tables to shape the signal for viewing on a reference display.  Here, the software will measure the actual values produced by the display, and calculate the offsets as values to put in a 3D LUT.  When attached to multiple displays using the same professional signals, LUTs should be applied using external hardware, when attached to one display only it’s acceptable to apply the LUT in the software application generating the output signal or in a hardware converter.  Ensure that the LUT is not applied to any place on the signal upstream of the final output recording.

3 This is a big problem with the iMac, or any other Wide Gamut / DCI-P3 display.  Colors will look different than expected without enabling color management within After Effects.

4 At least it did, until DaVinci Resolve Beta 14b8, 14b9, and 14.0 release - the option to flag on and off color management for the display disappeared with this update and I haven’t had time to test whether it’s on by default, works under Windows, or whether they’ve gone a different way with their color management.

Resolving Post Production Bottlenecks

Every system has one or more bottlenecks - the factors that limit all other operations or functions and controls the maximum speed things can happen.  This is true in every aspect of life, whether we’re talking chemistry, physics, biology, human resources, a film set, or editing and grading footage in post-production.

We’re not going to get into the bottlenecks in film production here, since they tend to have a variety of causes and are often unique to the type of production you’re working on or the companies or individuals involved.

Instead we want to look at finding bottlenecks in Post-Production, understanding how each one can limit the speed at which you can work, and when it can be simple or inexpensive fixes that can increase the level of productivity.

Broadly speaking, all bottlenecks in post fall into the following categories: storage device speeds, storage transfer speeds, peripheral transfer speeds, processing power (CPU and GPU), software architecture, and workflow.

Read More

When Should You Buy a REDROCKET-X?

It’s no secret among those we work with that we love RED.  And yet, with all of our camera purchases here at Mystery Box, we’ve never bought our own REDROCKET or REDROCKET-X.  On occasion we’ve borrowed a REDROCKET for projects here or there and we regularly discuss whether we should get one or not.  But we haven’t.  Even after the upgraded REDROCKET-X was released in 2013, we were still on the fence as to whether it would actually accelerate our workflows.

But instead of arguing about what-ifs and maybes, we decided to use a couple of days near the end of last year to really put it to the test.  We borrowed a friend’s REDROCKET-X and two full days of testing later, we had our results.

The TL;DR version of our results is that the value of a REDROCKET-X depends significantly on your workflow.  For some it’s definitely worth it, while for others (including us) it’s far less so.

Specifically, you should consider a REDROCKET-X when (1) your workflow demands real-time or faster R3D decoding, and (2) the bottleneck or choke point is the actual decoding process, and not another point in the workflow.

Read More

Delivering 8K using AVC/H.264

YouTube launched 8K streaming back in 2015, but the lack of cameras available to content creators meant 8K uploads didn’t start in earnest until late 2016.  That’s around the time when we uploaded our first 8K video to YouTube, and while we ran into some interesting problems getting it up there (which aren’t worth discussing because they’ve all been fixed), overall we're impressed with YouTube’s ability to stream in 8K.

Being naturally curious, I wanted to know more about what they were using for 8K compression, so I downloaded the MP4 version YouTube streams to see which codec it was using.  Let me save you some time finding it yourself and show you what settings YouTube uses for 8K streaming on the desktop:

MediaInfo of a YouTube video file showing the 8K resolution in the AVC/H.264 codec

Does anything look weird to you? Unless you’re a compressionist, maybe not.

Here’s what’s strange: it lists the codec as AVC, otherwise known as H.264.  The problem with that is the largest frame size permitted by the H.264 video codec standard is 4,096 x 2,304, and yet somehow this video has a resolution of 7,680 x 4,320.  Which means that either this video, or the video standard must be lying.

Well, not exactly.  The frame resolution is Full Ultra High Definition (FUHD - 7,680 x 4,320), and the video codec is H.264 / AVC.  It’s just a non-standard H.264 / AVC.

Being able to make and use your own non-standard H.264 (or any other codec) video files is a really useful trick, and right now it’s an important thing to know for working with 8K video files.  Specifically, it’s important to know what benefits and drawbacks working outside the standard format offers and how to make the best use out of them.


In 2014, a client asked about 5K, high frame rate footage to use on a demonstration display.  Since we’d been filming all of our videos at 5K resolution, remastering the files at their native camera resolution wasn't an issue and we were happy to work with them.

But as things moved forward with their marketing team, we ran into a little problem.  We had no problem creating and playing 5K files on our systems, but when their team tried to play back the ProRes or DPX master files on their Windows-based computer (which they were required to use for the presentation), they weren’t able to get real-time playback.  Why not? The ProRes couldn’t be decoded fast enough by the 32 bit QuickTime player on Windows, and the DPX files had too high of a data rate to be read from anything but a SAN at the time.

Fortunately, we’d already been experimenting with encoding 5K files in a few different delivery formats: High Efficiency Video Coding (HEVC / H.265), VP8 and VP9, and Advanced Video Coding (AVC / H.264).  HEVC was too computationally complex to be decoded in real time for 5K HFR, since there were no hardware decoders that could handle the format (even in 8 bit precision) and FFMPEG still needed optimizations to play back HEVC beyond 1080p60 in real time on almost every system.  VP8 and VP9 scared the client, since they weren’t comfortable working with the Matroska video container (for reasons they never explained - quality wise, this was the best choice at the time), which left us with H.264.

Which is how we delivered the final files: AVC video with AAC audio in an MP4 container, at a resolution of 5,120 x 2,880, though we ended up dropping the playback frame rate to 30fps for better detail retention.

Finding a way to encode and to play back these 5K files in H.264 wasn’t easy.  But once we did, we opened up the possibility of delivering files in any resolution to any client, regardless of the quality of their hardware.

So how did we do it?  We cheated the standard.  Just like Google does for 8K streaming on YouTube.  And for delivering VR video out of Google’s Jump VR system.

And since you’re probably now asking: “how do you cheat a standard?”, let’s review exactly what standards are.


Standards like MPEG-4 Part 10, Advanced Video Coding (AVC) / ITU-T Recommendation H.264 (H.264) exist to allow different hardware and software manufacturers to exchange video files with the guarantee they’ll actually work on someone else’s system.

Because of this, standards have to impose limits on things like frame size, frame rate for a given frame size, and data rate in bits per second.  For AVC/H.264, the different sets of limits are called Levels.  At its highest level, Level 5.2, AVC/H.264 has a maximum frame size of 4,096 x 2,304 pixels @ 56 frames per second, or 4,096 x 2,160 @ 60 frames per second, so that standard H.264 decoders don’t have to accommodate any frame size or frame rate larger than that.
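You can check the frame size limit yourself: H.264 levels cap the frame size in 16 x 16 pixel macroblocks (MaxFS = 36,864 for Level 5.2), and an 8K frame blows well past it. A quick sketch:

```python
# H.264 levels cap frame size in 16x16-pixel macroblocks.
# MaxFS for Level 5.2 (the highest level at the time) is 36,864.
MAX_FS_LEVEL_5_2 = 36_864

def macroblocks(width, height):
    return (width // 16) * (height // 16)

print(macroblocks(4096, 2304))  # 36864 -> exactly at the Level 5.2 limit
print(macroblocks(7680, 4320))  # 129600 -> ~3.5x over, no standard level fits
```

That’s why an FUHD frame simply cannot be signaled within the standard levels, no matter the bitrate.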

Commercial video encoders like those paired with the common NLEs Adobe Premiere, AVID Media Composer, and Final Cut Pro X, assume that you’ll want the broadest compatibility with the video file, so the software makes most of the decisions on how to compress the file, and strictly adheres to the available limits.  Which for H.264 means that you’ll never be able to create an 8K file out of one of these apps.

While standards allow for broad compatibility, sometimes codecs need to work in a more limited setting.  “Custom video solutions” are built for specific purposes, and may need frame sizes, frame rates, or data rates that aren’t standard.  This is where standard commercial AVC/H.264 encoding software often won’t work, and you either write a new encoder yourself (time consuming and expensive) or turn to the open source community.

Open source projects for codec encoding and decoding, like the x264 encoder implementation of the H.264 standard, often write code for all parts of the standard.  x264 even includes features beyond the AVC/H.264 standard, specifically an ‘undefined’ or ‘unlimited’ profile or level where you can apply H.264 compression to any frame size or frame rate.  The catch is that out-of-standard files won’t play back with hardware acceleration; they’ll need a software package that can decode them.

Spend enough time with codecs and compression and you’ll run across a term: FFMPEG.  FFMPEG is an open source software package that provides a framework for encoding or decoding audio and video. It’s free, it’s fast, and it’s scriptable (meaning it can be automated by a server) so a lot of companies who don’t write audio-video software themselves can simply incorporate FFMPEG and codec libraries like x264 for handling the multimedia aspect of their programs.
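A minimal sketch of what that kind of scripting looks like (the file names are placeholders, and the flags are standard ffmpeg/libx264 options rather than YouTube’s actual settings):

```python
# Build an FFMPEG command line for an out-of-standard 8K H.264 encode.
# File names are hypothetical; running it requires ffmpeg with libx264.
import subprocess

cmd = [
    "ffmpeg", "-i", "master_8k.mov",   # placeholder 8K source file
    "-c:v", "libx264",                 # x264 encoder
    "-preset", "slow",                 # better compression per bit
    "-b:v", "200M",                    # ~200 Mbps average bitrate
    "-pix_fmt", "yuv420p",             # broadest decoder compatibility
    "-c:a", "aac", "-b:a", "128k",     # AAC audio at 128 kbps
    "out_8k.mp4",
]
# subprocess.run(cmd, check=True)  # uncomment to actually run the encode
```

Because the whole job is just a command line, a server farm can generate and dispatch thousands of these per minute, which is exactly what makes FFMPEG attractive for transcoding at YouTube’s scale.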

Which is exactly what YouTube does.

"Writing application : Lavf56.40.101" indicates the file was written using FFMPEG in this 8K file from YouTube.

That’s right, when you upload a video to YouTube, Google’s servers create encoding scripts for FFMPEG, which are sent off to various physical servers in Google’s data centers to encode all of the different formats that YouTube uses to optimize the video experience for desktops, televisions, phones, and tablets, and for internet connections ranging from dial-up to fiber optic.

And for 8K content streaming on the desktop, that means encoding it in 8K H.264.

Why AVC/H.264 for 8K?

Which, of course, leads us to our last two questions: Why H.264 and not something else? And How can you do it too?

For YouTube, using AVC/H.264 is a matter of convenience.  At the time that YouTube launched 8K support (and even today), HEVC/H.265, which officially supports 8K resolutions, is still too new to see broad hardware acceleration support - and even then, few hardware solutions support it at 8K resolution.  (Side note - as of the last time we tested it [Jan 2017], the open source HEVC/H.265 encoder x265 struggles with 8K resolutions, so there’s that too).  Google’s own VP9/VP10 codecs still weren’t ready for broad deployment when 8K support was announced, and hardware VP9 support is just starting to appear.

YouTube selecting either HEVC/H.265 or the VP9/VP10 codecs would severely limit where 8K playback would be available.  And since software decoding of 8K H.264 can work in real time on most computers while H.265 can’t (H.264 is about 5 - 8 times less processor intensive than H.265), we have YouTube streaming 8K in the AVC/H.264 codec, at least until VP10 or H.265 streaming support is added to the platform.

Encoding 8K Video into H.264

So you want to encode your own 5K or 8K H.264?  It’s easy - just download FFMPEG and run it from the command line.  Just kidding, that’s a horrible experience.  Use FFMPEG, but run it through a frontend instead.

The syntax for running FFMPEG from the command line can get a little complicated.

An FFMPEG frontend is a piece of software that gives you a nicer user interface to decide your settings, then sends off your decisions to FFMPEG and its associated software to do the actual work.  Handbrake is a good example of a user-friendly cross platform front end for simple jobs, but it doesn’t give you access to all the options available.  The best that I’ve found for that is a frontend called Hybrid.

Hybrid is a little more complicated than, say, Adobe Media Encoder, but it gives you access to all of the features that are available in the x264 library (i.e. all of the AVC/H.264 standard + out of standard encoding) instead of the more limited features that other packages give you.  It’s a cross-platform solution that works on Windows and MacOS, it’s updated regularly to add new features and optimizations, and it by default hides some of the complexity if you just want to do a basic encode with it.


Here are the settings we’d use for a 5K or higher H.264 video:

Main Pane of Hybrid showing where to select the audio and video codecs, and where to set the output file name.

On the first pane of the program, select your input file, generate or select your output file name, and decide on which video codec you want to use (in this example, x264) and whether to include audio or not (set it to custom).

Set the Profile and Level to None/Unrestricted to encode high bitrate 8K video

Now, under the x264 tab, make the following changes: switch the encoding mode to “average bitrate (1-pass)”, and change the Bitrate (kbits/s) value to 200,000.  That’ll set our target bitrate to 200Mbps, which for 8K is roughly the equivalent quality of 50Mbps for 4K.
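The 200Mbps figure follows from simple pixel-count scaling (assuming 4K here means UHD, 3,840 x 2,160):

```python
# Bitrate scaled by pixel count: 8K has 4x the pixels of UHD 4K,
# so 4 x 50 Mbps = 200 Mbps for roughly equivalent quality.
pixels_4k = 3840 * 2160   # 8,294,400 pixels
pixels_8k = 7680 * 4320   # 33,177,600 pixels
print(pixels_8k / pixels_4k)        # 4.0
print(50 * pixels_8k // pixels_4k)  # 200 (Mbps)
```

It’s a rough rule of thumb - encoder efficiency doesn’t scale perfectly linearly with resolution - but it gets you in the right ballpark for a target bitrate.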

Then, under the restriction settings, change the “AVC Profile/Level” drop downs to “none” and “unrestricted”.  Leave everything else the same and jump over to the Audio tab at the top.

Add the audio by selecting "Audio Encoding Options" and then clicking the plus to add it to the selected audio options

In the audio tab, add an audio source if your main source file doesn’t have one, turn on the Audio Encoding Options pane using the check box, choose your audio format and bit rate (in this case I’m using the default AAC at 128 kbps), then click the big plus sign at the top right of the audio queue to add that track of audio to your output file.

What to click to add your job to the queue and get the queue started

That’s it.  You’re done.  Jump back to the Main tab, click the “add to queue” button to add your job to the batch, and either follow the same steps to add another, or click on “start queue” to get things rendering.

When you’re done you’ll find yourself with a perfectly usable 8K file compressed into H.264!

Who Cares?

Is this useless knowledge to have?  Not if you regularly create 8K video for YouTube, or if you create VR content using the GoPro Odyssey rig with Google Jump VR.  In both of those cases you’ll need to upload an 8K file.  While the ProRes format works, it’s quite large (data wise) and may be problematic for upload times.  Uploading AVC/H.264 is a better option in some cases, and it can always be used as a delivery file for 8K content when data rates prohibit DPX or an intermediate format.

To play back files created this way, you need a video player that supports lightweight decoding of non-standard video, like MPC-HC on Windows or MPV on Windows or MacOS.  Sometimes QuickTime will work, though it rarely works on Windows because it’s still a 32 bit core, and VLC is also a solid option in many cases.  But both of those have more overhead than FFMPEG-core players and can cause jittery playback.

Spending time learning new programs, especially ones that aren’t user friendly at face value, like Hybrid or FFMPEG, doesn’t seem like it’ll pay off.  But the process of discovery, trial, and error is your friend when you’re trying to stay ahead of the game in video.  Don’t be afraid to test out something new.

It’s how we were able to deliver 5K video content to a client when no one else could, and how we still stay at the forefront of video technologies today.

Written by Samuel Bilodeau, Head of Technology and Post Production