HDR Insights Series Article 4 : Dolby Vision

In the previous article, we discussed HDR tone mapping and how it is used to produce an optimal viewing experience on a range of display devices. This article covers the basics of Dolby Vision metadata and the parameters that users need to validate before content is delivered.

What is HDR metadata?

HDR metadata helps a display device show content in an optimal manner. It describes properties of the HDR content and the mastering device, which the display device uses to map the content to its own color gamut and peak brightness. There are two types of metadata: static and dynamic.

Static metadata

Static metadata applies to the entire piece of content and is standardized by SMPTE ST 2086. Its key items are as follows:

  1. Mastering display properties: Properties defining the device on which content was mastered.
    • RGB color primaries
    • White point
    • Brightness Range
  2. Maximum content light level (MaxCLL): Light level of the brightest pixel in the entire video stream.
  3. Maximum Frame-Average Light Level (MaxFALL): The highest frame-average light level across the entire video stream.
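To make MaxCLL and MaxFALL concrete, here is a minimal illustrative sketch (not taken from any standard or product) that derives both values from per-frame, per-pixel light levels expressed in nits:

```python
def max_cll_and_fall(frames):
    """Derive MaxCLL and MaxFALL from decoded frames.

    `frames` is an iterable of flat lists, each holding the per-pixel
    light level of one frame in cd/m^2 (nits).
    """
    max_cll = 0.0   # brightest single pixel anywhere in the stream
    max_fall = 0.0  # highest per-frame average light level
    for pixels in frames:
        max_cll = max(max_cll, max(pixels))
        max_fall = max(max_fall, sum(pixels) / len(pixels))
    return max_cll, max_fall

# Two toy 4-pixel "frames" with known light levels:
frame_a = [100.0, 200.0, 300.0, 400.0]   # average 250, peak 400
frame_b = [50.0, 50.0, 50.0, 1000.0]     # average 287.5, peak 1000
print(max_cll_and_fall([frame_a, frame_b]))  # -> (1000.0, 287.5)
```

In real workflows these values are computed once from the mastered image data and carried in the static metadata, rather than recomputed at playback.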

In typical content, the brightness and color range vary from shot to shot. The challenge with static metadata is that tone mapping based on it is driven solely by the brightest frame in the entire program. As a result, most of the content undergoes more compression of dynamic range and color gamut than necessary, leading to a poor viewing experience on less capable HDR display devices.

Dynamic metadata

Dynamic metadata allows tone mapping to be performed on a per-scene basis, which yields a significantly better viewing experience when the content is displayed on less capable HDR display devices. It is standardized by SMPTE ST 2094, which defines content-dependent metadata. Using dynamic metadata along with static metadata overcomes the limitations of tone mapping from static metadata alone.

Dolby Vision

Dolby Vision uses dynamic metadata and is the most widely deployed HDR technology today. It has been adopted by major OTT service providers such as Netflix and Amazon, as well as major studios and a host of prominent television manufacturers. Dolby Vision dynamic metadata is standardized in SMPTE ST 2094-10. In addition to supporting dynamic metadata, Dolby Vision allows multiple trims to be described for specific target displays, enabling finer control of rendering on those devices.

Dolby documents the details of its algorithm in what it calls Content Mapping (CM) documents. The original CM algorithm, version 2.9 (CMv2.9), has been used since the introduction of Dolby Vision; Dolby introduced Content Mapping version 4 (CMv4) in the fall of 2018, and both versions remain in use. The Dolby Vision Color Grading Best Practices Guide provides more information.

Dolby Vision metadata is coded at various ‘levels’, described below:

Metadata Level/Field: Description

  • Mastering Display: Describes the characteristics of the mastering display used for the project.
  • Aspect Ratio: Ratio of canvas and image (active area).
  • Frame Rate: Frame rate of the content.
  • Target Display: Describes the characteristics of each target display used for L2 trim metadata.
  • Color Encoding: Describes the image container deliverable.
  • Algorithm/Trim Version: CM algorithm version and trim version.
  • L1 Min, Mid, Max: Three floating-point values that characterize the dynamic range of the shot or frame. Shot-based L1 metadata is created by analyzing each frame contained in a shot in LMS color space; the per-frame results are combined to describe the entire shot as L1Min, L1Mid, L1Max. Stored as LMS (CMv2.9) and L3 offsets.
  • Reserved1, Reserved2, Reserved3, Lift, Gain, Gamma, Saturation, Chroma, Tone Detail: Automatically computed from L1, L3 and L8 (lift, gain, gamma, saturation, chroma, tone detail) metadata for backwards compatibility with CMv2.9.
  • L3 Min, Mid, Max: Three floating-point values that are offsets to the L1 analysis metadata, coded as L3Min, L3Mid, L3Max. L3Mid is a global user-defined trim control. L1 is stored as CMv2.9 computed values; CMv4 reconstructs RGB values from L1 + L3.
  • Canvas, Image: Used for defining shots whose aspect ratio differs from the global L0 aspect ratio.
  • MaxFALL, MaxCLL: Metadata for HDR10. MaxCLL is the Maximum Content Light Level; MaxFALL is the Maximum Frame-Average Light Level.
  • Lift, Gain, Gamma, Saturation, Chroma, Tone Detail, Mid Contrast Bias, Highlight Clipping, plus 6-vector (R,Y,G,C,B,M) saturation and 6-vector (R,Y,G,C,B,M) hue trims: User-defined image controls to adjust the CMv4 algorithm per target, with secondary color controls.
  • Rxy, Gxy, Bxy, WPxy: Stores the mastering display color primaries and white point as per-shot metadata.


Dolby Vision QC requirements

Netflix, Amazon, and other streaming services are continuously adding HDR titles to their libraries with the aim of improving the quality of experience for their viewers and differentiating their service offerings. This requires content suppliers to be equipped to deliver good-quality, compliant HDR content, and it makes the ability to verify quality before delivery all the more important.

Many of these OTT services support both the HDR-10 and Dolby Vision flavors of HDR. However, more and more Netflix HDR titles are now based on Dolby Vision. Dolby Vision is a new and complex technology, so checking content for correctness and compliance is not always easy. Delivering non-compliant HDR content can affect your business, and a QC tool that assists with HDR QC can go a long way toward maintaining good standing with these OTT services.

Here are some of the important aspects to verify for HDR-10 and Dolby Vision:

  1. HDR metadata presence
    • HDR-10: Static metadata must be coded with the correct parameter values.
    • Dolby Vision: Static metadata must be present once and dynamic metadata must be present for every shot in the content.
  2. HDR metadata correctness. There are a number of issues that content providers need to check for correctness in the metadata:
    • Only one mastering display should be referenced in metadata.
    • Correct mastering display properties – RGB primaries, white point and Luminance range.
    • MaxFALL and MaxCLL values.
    • All target displays must have unique IDs.
    • Correct algorithm version. Dolby supports two versions:
      • Metadata Version 2.0.5 XML for CMv2.9
      • Metadata Version 4.0.2 XML for CMv4
    • No frame gaps. All shots and frames must be tightly aligned on the timeline, with no gaps between frames and/or shots.
    • No overlapping shots. The timeline must be accurately cut into individual shots, and analysis to generate L1 metadata should be performed per shot. If the timeline is not accurately cut, luminance inconsistencies may cause flashing and flickering artifacts during playback.
    • No negative shot durations. Shot duration, as coded in the “Duration” field, must not be negative.
    • Single trim per target display. There should be one and only one trim for each target display.
    • Level 1 metadata must be present for all the shots.
    • Valid Canvas and Image aspect ratio. Cross check the canvas and image aspect ratio with the baseband level verification of the actual content.
  3. Validation of video essence properties. Essential properties such as Color matrix, Color primaries, Transfer characteristics, bit depth etc. must be correctly coded.
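Several of the timeline checks above (frame gaps, overlapping shots, negative durations, missing L1 metadata) lend themselves to automation. The Python sketch below illustrates the logic on a made-up, simplified shot representation; the field names (`start`, `duration`, `l1`) are hypothetical stand-ins for the actual Dolby Vision metadata schema, and a real QC tool performs far more validation than this:

```python
def check_shot_timeline(shots):
    """Check per-shot metadata for gaps, overlaps, negative durations,
    and missing L1 analysis. Each shot is a dict with hypothetical keys:
    'start' and 'duration' in frames, and 'l1' (tuple or None)."""
    issues = []
    expected_start = 0
    for i, shot in enumerate(sorted(shots, key=lambda s: s['start'])):
        if shot['duration'] < 0:
            issues.append(f"shot {i}: negative duration")
        if shot['l1'] is None:
            issues.append(f"shot {i}: missing L1 metadata")
        if shot['start'] > expected_start:
            issues.append(f"shot {i}: gap before shot")
        elif shot['start'] < expected_start:
            issues.append(f"shot {i}: overlaps previous shot")
        expected_start = shot['start'] + max(shot['duration'], 0)
    return issues

shots = [
    {'start': 0,  'duration': 48, 'l1': (0.001, 0.3, 0.8)},
    {'start': 50, 'duration': 24, 'l1': None},   # 2-frame gap, no L1
]
print(check_shot_timeline(shots))
# -> ['shot 1: missing L1 metadata', 'shot 1: gap before shot']
```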

Netflix requires the Dolby Vision metadata to be embedded in the video stream of delivered content. Reviewing embedded metadata in a video stream can be tedious, so an easy way to extract and review the entire metadata is advantageous.

How can we help?

Venera’s QC products (Pulsar – for on-premise & Quasar – for cloud) can help in identifying these issues in an automated manner. We have worked extensively with various technology and media groups to create features that can help the users with their validation needs. And we have done so without introducing a lot of complexity for the users.

Depending on the volume of your content, you could consider one of our perpetual license editions (Pulsar Professional or Pulsar Standard). For low-volume customers, we also offer a unique option called Pulsar Pay-Per-Use (Pulsar PPU), an on-premise usage-based QC software where you pay a nominal per-minute charge for the content analyzed. And we, of course, offer a free trial so you can test our software at no cost to you. You can also download a copy of the Pulsar brochure here, and for more details on our pricing you can check here.

If your content workflow is in the cloud, then you can use our Quasar QC service, which is the only Native Cloud QC service in the market. With advanced features like usage-based pricing, dynamic scaling, regional resourcing, content security framework and REST API, the platform is a good fit for content workflows requiring quality assurance. Quasar is currently supported for AWS, Azure and Google cloud platforms and can also work with content stored on Backblaze B2 cloud storage. Read more about Quasar here.

Both Pulsar & Quasar come with a long list of ‘ready to use’ QC templates for Netflix, based on their latest published specifications (as well as some of the other popular platforms, like iTunes, CableLabs, and DPP) which can help you run QC jobs right out of the box. You can also enhance and modify any of these QC templates or build new ones! And we are happy to build new QC templates for your specific needs.

QC for Presence of Emergency Alert System (EAS) Message


The Emergency Alert System (EAS) is a national public warning system in the United States, commonly used by state and local authorities to deliver important emergency information, such as weather and AMBER alerts, to affected communities. EAS participants include radio and television broadcasters, cable systems, satellite radio and television providers, and wireline video providers. These participants deliver local alerts on a voluntary basis, but they are required to provide the capability for the President to address the public during a national emergency. The majority of EAS alerts originate from the National Weather Service in response to severe weather events, but an increasing number of state, local, territorial, and tribal authorities also send alerts.

EAS messages are sent as part of the media delivery channel of various participants. Key characteristics of EAS messages include:

  • Designed to immediately catch viewers’ attention, increasing the likelihood that the public hears the emergency message and acts accordingly.
  • Contain location information so that the message is delivered only in the targeted geographies.
  • Contain the actual audio/video message warning the public.

Since EAS messages are designed specifically for emergencies, participants are prohibited from using them for any other purpose, intentionally or unintentionally. Participants are not even allowed to transmit a tone that sounds similar to the EAS tones. The simple rule is that no one may misuse EAS messages to attract attention to other content such as advertisements or dramatic, entertaining, and educational programs. To enforce this, the FCC has imposed heavy penalties on various broadcasters for violating this rule. Some of the major pending or settled violations, and their proposed or actual fines, are listed here.

Many of these violations have been accidental, but some have stemmed from creative intent. One specific case is an episode of the TV show “Young Sheldon”. In the Season 1 episode titled “A Mother, A Child, and a Blue Man’s Backside,” Missy (Raegan Revord) is watching the classic “duck season/rabbit season” Looney Tunes short and is annoyed when a tornado-watch alert interrupts it. According to a source familiar with the situation, the scene used a muffled, background sound that was altered to balance the authenticity of a family’s reaction to a severe-weather event against the FCC’s rules on misuse of EAS tones. Nevertheless, the FCC proposed a $272,000 fine against CBS for this violation.

To avoid such penalties, broadcasters and other content providers must ensure that no audio tone similar to an EAS message is present in the content they broadcast. Any missed instance can attract heavy penalties and potentially tarnish the content provider’s brand. Therefore, all content must pass through stringent QC (validating that no EAS tones are present) before being delivered to end users.


EAS Message Structure

The EAS message structure is based on Specific Area Message Encoding (SAME). Messages in the EAS are composed of four parts: a SAME header, an attention signal, an audio announcement, and a SAME end-of-message marker, as described below:

  1. SAME header: The SAME header uses Audio Frequency Shift Keying (AFSK) at a rate of 520.83 bits per second to transmit the codes, using two frequencies: 2083.3 Hz (mark) and 1562.5 Hz (space). Each mark and space bit lasts 1.92 milliseconds. Key information in the header includes the originator, the type of alert, the region for which the alert is issued, and the date/time for which the alert is applicable.
  2. Attention signal: A single tone (1050 Hz) or a dual tone (853 Hz and 960 Hz together). Commercial broadcast operations use the dual tone, while the single tone is used by NOAA Weather Radio. It is designed to attract the immediate attention of listeners.
  3. Actual audio, video, or text message.
  4. SAME end-of-message marker. It indicates the end of the emergency alert.
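As an illustration of the AFSK signaling in step 1, the sketch below renders a bit string into audio samples at the SAME mark/space frequencies. The 48 kHz sample rate and the plain-list output are choices made for this example only, not part of the SAME specification:

```python
import math

BIT_RATE = 520.83               # bits per second -> ~1.92 ms per bit
MARK_HZ, SPACE_HZ = 2083.3, 1562.5
SAMPLE_RATE = 48000             # arbitrary choice for this sketch

def afsk_samples(bits):
    """Render a bit string ('1' = mark, '0' = space) as float samples,
    keeping the phase continuous across bit boundaries."""
    samples, phase = [], 0.0
    samples_per_bit = round(SAMPLE_RATE / BIT_RATE)   # ~92 at 48 kHz
    for bit in bits:
        freq = MARK_HZ if bit == '1' else SPACE_HZ
        for _ in range(samples_per_bit):
            samples.append(math.sin(phase))
            phase += 2 * math.pi * freq / SAMPLE_RATE
    return samples

# SAME transmits bytes least-significant bit first; 0xAB is the
# preamble byte that precedes the header codes.
preamble_bits = format(0xAB, '08b')[::-1]   # '11010101'
audio = afsk_samples(preamble_bits)
print(len(audio))   # 8 bits x 92 samples per bit = 736
```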

EAS Message Structure

File-based QC

File-based QC tools are now commonly used in content preparation and delivery chains, reducing the dependency on manual QC. Many content providers resort to spot QC instead of full QC, exposing themselves to the risk of missing potential violations of FCC guidelines. A QC tool that can reliably detect the presence of an EAS message, or a tone similar to one, can therefore save a content provider from potential losses and embarrassment.

EAS message detection is part of all our QC offerings, Pulsar & Quasar. The QC report will contain the exact location of any such violation, which users can review and act on.
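To sketch how automated detection of the attention signal might work, the toy example below (illustrative only, and not how our products are implemented) measures the share of an audio block’s energy at the 853 Hz and 960 Hz dual-tone frequencies using the Goertzel algorithm:

```python
import math

def goertzel_power(samples, freq_hz, sample_rate):
    """Power of a single frequency bin, via the Goertzel recurrence."""
    coeff = 2 * math.cos(2 * math.pi * freq_hz / sample_rate)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def dual_tone_energy_share(samples, sample_rate=48000):
    """Fraction of the block's energy at the 853/960 Hz attention tones."""
    total = sum(x * x for x in samples) + 1e-12
    dual = (goertzel_power(samples, 853, sample_rate) +
            goertzel_power(samples, 960, sample_rate)) / (len(samples) / 2)
    return dual / total

sr = 48000
half_second = range(sr // 2)
attention = [math.sin(2 * math.pi * 853 * n / sr) +
             math.sin(2 * math.pi * 960 * n / sr) for n in half_second]
music = [math.sin(2 * math.pi * 440 * n / sr) for n in half_second]

# The dual tone dominates its block; an unrelated tone does not.
print(dual_tone_energy_share(attention) > 0.9)   # True
print(dual_tone_energy_share(music) < 0.01)      # True
```

A production detector would also handle the 1050 Hz single tone, frequency tolerance, short bursts, and tones buried under program audio, which is where the hard engineering lives.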

EAS message detection

EAS tone


Our QC tools not only detect ideal EAS message tones but can also report tones that sound similar to them. Considering the “Young Sheldon” case, this capability becomes important and can save content providers from potential penalties.

We provide a range of QC tools for various deployments, whether on-premise or in the cloud. Whichever operational mode users choose, they can ensure that EAS tones are not present in the content they send out to end users.

EAS - File based QC

Major EAS violations

Below is a list of some recent fines proposed by the FCC for violations of the EAS usage rules:

  • On April 7, 2020, the FCC proposed a $20,000 fine against New York City radio station WNEW-FM for using the attention signal during its morning show on October 3, 2018, as part of a skit discussing the National Periodic Test held later that day.
  • The FCC proposed a $272,000 fine against CBS for transmitting a simulated EAS tone during the April 12, 2018 telecast of a Young Sheldon episode.
  • Meruelo Group was fined $61,000 for including an EAS-like tone in a radio advertisement for KDAY and KDEY-FM’s morning show.
  • ABC Networks is being fined $395,000 for using WEA (Wireless Emergency Alert) tones multiple times during a Jimmy Kimmel Live sketch.
  • iHeartMedia was fined $1 million on May 19, 2015, for use of the EAS tone during the October 24, 2014 episode of the nationally syndicated Bobby Bones radio show.
  • Cable providers were fined $1.9 million on March 3, 2014, for misuse of EAS tones in the trailer for the 2013 film Olympus Has Fallen.

Remote File QC: During & Post COVID-19

The entire world is going through unprecedented times. The COVID-19 outbreak has taken everyone by surprise with the businesses and livelihood of so many people affected worldwide. There are a lot of challenges occurring in all aspects of our lives, but these times are also a testimony to the resilience of humanity.

Until a few months ago, it was completely unthinkable that our entire workforce would work from home. We used to think that only a limited amount of work could be done from home and that coming to the office was all but necessary. Work efficiency, team coordination, and communication all seemed like major challenges with the work-from-home model. It is remarkable to see how we, and the vast majority of organizations worldwide, moved entirely to working from home within a span of a few weeks while remaining efficient.

The COVID-19 crisis has interrupted life for all of us and made us rethink our priorities and the best way of working in a post-COVID-19 era. It is very possible, even probable, that life after this pandemic will never be the same. Good or bad, only time will tell.

Most of our customers have so far relied heavily on on-premise software and equipment, with centralized content workflows that remain under tight controls and regulations. That mode of working has suddenly become infeasible and impractical, and these same customers are now forced to equip themselves to work and manage operations remotely.

A few years back, when our on-premise file QC solution Pulsar was our flagship offering, we started investing time and resources in additional offerings: Pulsar Pay-Per-Use (PPU), an on-premise usage-based QC service, and Quasar, a native cloud file QC service. Pulsar PPU has been very attractive to small organizations that lack budgets for an upfront QC license purchase and are more comfortable paying on a usage basis. Quasar is primarily designed for organizations whose workflows are already in the cloud. In the current circumstances, Pulsar PPU and Quasar present a compelling alternative for remote QC to all media organizations pursuing business continuity.

And with our latest QC offering, “QC as a Service” (QCaaS), we are taking this one step further. For those who want their content verified with a state-of-the-art QC solution but simply don’t have the computing or human resources to perform the QC, we now offer the convenience of uploading their content to the cloud and letting us do the QC for them. We return a QC report, along with the peace of mind that their content meets the QC requirements of the intended platform or end customer. For those who want to keep their focus on content creation or delivery rather than the operational challenges of QC, we gladly handle that process for them.

Remote File QC

We have geared up our offerings to help our customers ensure their QC needs are met, irrespective of their location. Here is the summary of our Remote File QC offerings.

Quasar – Native Cloud File QC Service

Quasar is the world’s first Native Cloud Automated File QC Service. It is designed especially to exploit the Cloud infrastructure benefits while still making use of the rich QC capabilities of our on-premise File QC system – Pulsar. Advanced capabilities such as dynamic scaling, regional resourcing, and content security features with REST API, make Quasar an ideal system to QC your content in Cloud. Quasar is available as a SaaS service or a Private edition that you can deploy in your own VPC (Virtual Private Cloud). Quasar works natively on AWS, Azure, and Google Cloud. Read more at www.veneratech.com/quasar.

Pulsar PPU (Pay-Per-Use)

Pulsar PPU is the only usage-based on-premise file QC solution in the market. It has all the analysis capabilities of our enterprise QC solution – Pulsar, while allowing users to pay based on the content duration. In addition to QC, it also allows users to perform Harding PSE validation and get a Harding certificate. Users simply load the credits using their credit cards, which are debited as users perform QC tasks. Read more at http://stage.veneratech.com/pulsar-file-qc-ppu.

QCaaS (QC as a Service)

QCaaS is a QC service built around our QC solutions. Users simply upload their content to us and we return a QC report. QCaaS pricing is usage-based as well: the user pays based on the content duration. No software installation is required on the user side. Read more at www.veneratech.com/qcaas.

Delivering to NETFLIX? QC Requirements in A Nutshell and How to Comply with Them


It is a well-known fact that Netflix is very conscious of the quality of content delivered via its service. Whether it is the overall audio/video quality or the structural compliance of the content delivery packages, everything must comply with Netflix’s technical specifications before the content is accepted.

Becoming a Netflix Preferred Fulfillment Partner (NPFP) or joining the Netflix Post Partner Program (NP3) is a tough task, and remaining a partner is not easy either, requiring consistent attention to quality. Netflix maintains a track record for its partners, and failure rates are published on its website from time to time.

It is therefore pertinent for Netflix partners to ensure the compliance of their content before delivery. Here is a list of some common areas of QC that suppliers need to pay attention to before delivering their content.

  1. IMF Analysis: Netflix requires most of its content to be delivered in IMF packages. That means you need to verify the accuracy of your IMF packages before delivery. This includes the basic compliance with IMF Application 2E SMPTE standards and specific validations on asset maps, packing list and other package elements.
  2. Dolby Vision: Increasingly more content is now being delivered in HDR and Netflix has selected Dolby Vision as its HDR format of choice. This requires you to ensure the basic Dolby Vision compliance along with specific structure recommendations outlined in the Netflix specifications.
  3. Photon: Netflix also requires your IMF packages to pass its own ‘Photon’ IMF tool before delivery. These checks are performed while the package is being uploaded to Netflix. If Photon fails the asset, the content will not be delivered to Netflix.
  4. Harding PSE: Detecting video segments that may cause Photo Sensitive Epilepsy (PSE), particularly for content that is being delivered to the UK and Japan, is becoming very important. Netflix may require PSE validation for certain category of content.
  5. Audio/Video baseband quality: The content must be thoroughly checked for a wide range of artifacts in the audio/video essence before delivery.

Many of the above items are difficult and/or time-consuming to perform with manual QC and therefore warrant the use of a QC tool. Venera’s QC products (Pulsar for on-premise and Quasar for cloud) can help identify these issues in an automated manner. We have worked extensively with the IMF User Group and the Dolby and Netflix teams to create features that do what users need, without introducing a lot of complexity for the users.

We have also integrated the industry-standard Harding PSE engine and can generate a Harding certificate for every file processed through our Pulsar & Quasar file QC tools. And the Netflix Photon tool has also been integrated so that you can receive ONE QC report including the Photon messages as well.

The results are provided in the form of XML/PDF reports for easy assessment. If desired, the Harding certificate and the QC reports (which will include the Photon results) can even be shared with Netflix along with the delivered content.

Pulsar – on premise File-based Automated QC

Depending on the volume of your content, you could consider one of our perpetual license editions (Pulsar Professional or Pulsar Standard). For low-volume customers, we also offer a unique option called Pulsar Pay-Per-Use (Pulsar PPU), an on-premise usage-based QC software where you pay only $15/hr for the content analyzed. And we, of course, offer a free trial so you can test our software at no cost to you. You can also download a copy of the Pulsar brochure here.

Quasar – Native Cloud File QC Service

If your content workflow is in the cloud, you can use our Quasar QC service, the only Native Cloud QC service in the market. With advanced features like usage-based pricing, dynamic scaling, regional resourcing, a content security framework, and a REST API, the platform is a good fit for content workflows requiring quality assurance. Quasar is currently supported on AWS, Azure, and Google Cloud. Read more about Quasar here.

Both Pulsar & Quasar come with a long list of ‘ready to use’ QC templates for Netflix, based on their latest published specifications (as well as some of the other popular platforms, like iTunes, CableLabs, and DPP) which can help you run QC jobs right out of the box. You can also enhance and modify any of them or build new ones! And we are happy to build new QC templates for your specific needs.

HDR Insights Article 3: Understanding HDR Tone Mapping


In the previous article – HDR Transfer Functions, we discussed the transfer functions and how digital images are converted to light levels for display. This article discusses how the same HDR image can be displayed differently by different HDR devices.

What is HDR Tone Mapping?

Tone mapping is the process of adapting digital signals to appropriate light levels based on the HDR metadata. It is not simply the application of the EOTF (Electro-Optical Transfer Function) to the image data; rather, it maps the image data to the display device’s capabilities using the metadata. Since a broad range of HDR display devices is available in the market, each with its own nits (i.e. ‘brightness’) range, correct tone mapping is necessary for a good user experience. And since tone mapping is driven by the metadata in the video stream, the presence of correct metadata is essential.

Source footage can be shot in HDR with the best of cameras and mastered on high-end HDR mastering systems, but it still needs to be displayed optimally on the range of HDR televisions available in the market. Tone mapping maps the brightness of the content to the device appropriately, without significant degradation.

Need for HDR Tone Mapping

Let’s say an image is shot with a peak brightness of 2000 nits. If it is displayed on a television with a 0–2000 nit range, the brightness will be exactly as shot in the raw footage. However, the results will differ on other devices:

High Dynamic Range Tone Mapping


Since tone mapping is a necessary operation to display PQ-based HDR content on HDR display devices, the television needs to know the native properties of the content: the brightness range used, along with the mastering system parameters. This information is conveyed in the form of HDR metadata. After reading the HDR metadata, the display device can choose tone mapping parameters so that the transformed video lies optimally within its display range.
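As a toy numeric illustration of the idea, here is a generic knee-and-rolloff curve in Python. It is not the tone mapping operator of Dolby Vision or of any particular television; real operators are more sophisticated and are parameterized by the HDR metadata described above:

```python
def tone_map(nits, content_peak=2000.0, display_peak=600.0, knee=0.75):
    """Map a content light level (nits) onto a less capable display.

    Levels below the knee pass through unchanged; the range from the
    knee up to content_peak is eased into display_peak with a simple
    quadratic rolloff (an arbitrary choice for this illustration).
    """
    k = knee * display_peak          # rolloff starts at 450 nits here
    if nits <= k:
        return nits
    t = (nits - k) / (content_peak - k)   # 0..1 across the rolloff band
    return k + (display_peak - k) * (2 * t - t * t)

for level in (100.0, 450.0, 1000.0, 2000.0):
    print(level, '->', round(tone_map(level), 1))
# 100 and 450 map to themselves; 2000-nit highlights land at 600.
```

Notice how shadows and midtones are preserved while only the highlights are compressed, which is exactly what static metadata alone cannot do well on a per-shot basis.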

The next article will discuss the specific metadata for HDR-10 and HDR-10+, two different implementations of HDR. Stay tuned.

Article 2: Transfer functions


cd/m2 – The candela (cd) is the base unit of luminous intensity in the International System of Units (SI); that is, luminous power per unit solid angle emitted by a point light source in a particular direction. A common wax candle emits light with a luminous intensity of roughly one candela.

Nits – A non-SI unit used to describe the luminance. 1 Nit = 1 cd/m2.

HDR – High Dynamic Range. A technology that improves the brightness and contrast range in an image (up to 10,000 cd/m2).

SDR – Standard Dynamic Range. The brightness/contrast range usually available in regular, non-HDR televisions, typically up to 100 cd/m2. The term came into existence after HDR was introduced.

WCG – Wide Color Gamut. A color gamut offering a wider range of colors than BT.709. DCI-P3 and BT.2020 are examples of WCG, offering more realistic representation of images on display devices.

EOTF – Electro-optical transfer function. A mathematical transfer function that describes how digital values are converted to light on a display device.

OETF – Opto-electronic transfer function. A mathematical transfer function that describes how light values are converted to digital values, typically within cameras.

OOTF – opto-optical transfer function. This transfer function compensates for the difference in tonal perception between the environment of the camera and that of the display.

PQ – Perceptual Quantizer. A transfer function devised to represent the wide brightness range (up to 10,000 nits) of HDR devices.
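The PQ curve is defined in closed form in SMPTE ST 2084. A direct Python transcription of its EOTF (normalized code value in, light level out) looks like this:

```python
# SMPTE ST 2084 (PQ) EOTF constants, written as exact rationals
M1 = 2610 / 16384            # 0.1593017578125
M2 = 2523 / 4096 * 128       # 78.84375
C1 = 3424 / 4096             # 0.8359375
C2 = 2413 / 4096 * 32        # 18.8515625
C3 = 2392 / 4096 * 32        # 18.6875

def pq_eotf(code):
    """Map a normalized PQ code value in [0, 1] to light in cd/m^2."""
    e = code ** (1 / M2)
    y = max(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1 / M1)

print(pq_eotf(0.0), pq_eotf(1.0))   # 0.0 and 10000.0 nits
```

Code value 1.0 maps to the 10,000-nit ceiling mentioned above, while roughly the lower half of the code range is spent below about 100 nits, which is what makes PQ perceptually efficient.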

HLG – Hybrid Log-Gamma. A transfer function devised to represent the wide brightness range in HDR devices. HLG is largely compatible with existing SDR devices within the SDR range.