Quasar Leap – We went where no Cloud-QC service had gone before!

By: Fereidoon Khosravi

We have achieved something that has been near and dear to my heart for a couple of years, as it relates to our cloud-based QC capabilities: we have extended the capability of our Quasar native-cloud QC service to a level that I don’t believe anyone else has actually reached! And we call it “Quasar Leap”.

When we started the development work on Quasar®, our native cloud QC service, the goal was clear. We didn’t want to just take our popular on-premise QC software, Pulsar™, run it on a VM, and call it “cloud” QC. We made the deliberate decision that, while we would use the same core QC capabilities as Pulsar, we would build the Quasar architecture from the ground up to be a ‘native’ cloud QC service. And we accomplished that, becoming the first cloud-based QC service that could legitimately call itself ‘native’ cloud. The phrase ‘native’ cloud meant capabilities like a microservices architecture, dynamic scalability, regional content awareness, SaaS deployment, usage-based pricing, high-grade content security, and of course redundancy.

But we wanted to go even further. And that was when the project we code named ‘Quasar Leap’ came about. To borrow and paraphrase from one of my all time favorite TV shows, Star Trek, we wanted to “take Quasar to where no cloud-QC had gone before”! (Those of you who are Star Trek fans know what I am talking about!).

Quasar was already able to process hundreds of files at a time, but the goal of ‘Quasar Leap’ was to show that Quasar can process ONE THOUSAND files simultaneously! Of course, anyone can claim that their solution is robust, scalable, and reliable, but we set out to actually do it, and then record it to prove that we did!

This was not a marketing ploy, although, to be honest, I knew there would be great appreciation and name recognition in telling our customers and prospects that we can QC 1,000 files simultaneously. But there was a practical and quite useful benefit to doing so. After Quasar’s initial release, when we started to push the boundaries of how many files Quasar could process simultaneously, we found some practical limitations in our architecture, even though we were already way ahead of our competition. While we could easily process a few hundred files at the same time (more than any of our customers had needed), when we tried to push beyond that, the process could break down and impact the reliability of the overall service.

So for project ‘Quasar Leap’, our engineering team took a very close look at the various components of our architecture. And while I am obviously not going to give away our secret sauce (!), suffice it to say they further enhanced and tweaked various aspects of our internal workflow to remove bottlenecks and stress points, making Quasar massively and dynamically scalable!

Quasar Leap Workflow

And then we decided that instead of just ‘saying’ we have the most scalable native-cloud QC solution, we would ‘actually do’ it and record it!! And I can now tell you confidently, based on the recorded video, that we have done exactly that: we submitted 1,000 30-minute media files, watched our Quasar system dynamically spin up 1,000 AWS virtual computing units (called EC2 instances in their terminology), process (QC) those 1,000 files simultaneously, and then spin down those EC2 instances once they were no longer needed.

So, while we recorded the event, 30,000 minutes (500 hours!) of content was processed in less than 3 hours, and that even included the time to spin up the EC2 instances! To put things in perspective: 500 hours of content, the equivalent of approximately 330 movies, or 10 seasons each of 7 different popular TV sitcoms, was processed in just shy of 3 hours. To say it differently, with our massive simultaneous processing capability, approximately 160 hours of content can be processed every hour!

If you say to yourself, “that is great, but who has that much content that they need to process it that quickly?”, I have an answer! Actually, a three-point answer:

  1. You would be surprised how many media companies have PETAbytes (that is with a “P”!) of content sitting in cloud storage! They face the daunting task of managing a cloud-based workflow to monetize that archived content by restoring, ingesting, transcoding, and ultimately delivering it to audiences. One naturally important step in this workflow is ensuring that all of that content goes through content validation at various stages before delivery to the end user. And that is where Quasar’s massive simultaneous QC processing ability is much needed to minimize delays in this effort.
  2. Some of our customers receive content in bursts with strict delivery timelines. The ability to process that burst of content immediately offers significant business value in addition to workflow efficiency.
  3. And let’s not forget the main gain from the ‘Quasar Leap’ project: the behind-the-scenes tweaking, in some cases revamping, and enhancing of our underlying architecture. That has resulted in a solid platform which will benefit ALL of our Quasar SaaS users, whether they have 100,000 files, 100 files, or even a few files! It ensures reliability, scalability, and confidence that they can rely on Quasar to meet their QC needs regardless of their normal volume or any sudden increases (bursts) in their content flow due to an unexpected event or last-minute request.

All that effort on ‘Quasar Leap’ by our talented and dedicated development team, carried out during the challenging time of the pandemic, is finally complete! The new release of Quasar, with all the architectural changes resulting from the ‘Quasar Leap’ project, has rolled out.

According to Tony Huidor, SVP, Products & Technology at Cinedigm, a premier independent content distributor, a great customer of ours, and an early beneficiary of ‘Quasar Leap’: “Given the rapidly growing volume of content that uses our cloud-based platform, we needed the ability to expand the number of files we need to process at a moment’s notice. Quasar’s massive concurrent QC processing capability gives us the scalability we required and effectively meets our needs.”

And now ‘Quasar Leap’, which gives us the ability to massively scale up our simultaneous processing capability, is ‘live’, and “our Quasar native-cloud QC has gone where no other cloud-QC has gone before”!

Learn more about our Quasar capabilities here, or contact us for a demo and free trial!

And as Mr. Spock would say: “Live Long & Prosper”!

CapMate – Key Features of Our Closed Caption and Subtitle Verification and Correction Platform

By: Fereidoon Khosravi

In my last blog, which coincided with the official launch of CapMate, our caption and subtitle verification and correction platform, I gave the background on how the concept of CapMate came about and, at a high level, what capabilities it brings to the table. Here is the blog, in case you need a refresher! It is now time to dig a little deeper into what CapMate can actually do and why we think it will add great value for any organization that has to deal with closed caption/subtitle files.

We had many conversations with our customers and heard their concerns about the issues they ran across when processing or reviewing captions, and why closed caption verification and correction is a slow and time-consuming process. Based on that feedback, we compiled a list of key functionalities that would allow them to reduce the amount of time and effort they regularly spent verifying and fixing caption/subtitle files. For our first release, we set out to tackle and resolve as many of these issues as we could, and to provide an easy, user-friendly interface for operators to review, process, and correct their caption files.

Here is a subset of those functionalities with a short description of each. Some of these items are complex enough to deserve their own dedicated blogs. Hopefully soon!

Caption Sync:

How many times have you been bothered by the fact that the captions of a show are just a tad behind or ahead of the actual dialog? The actor stops talking and the caption starts to appear! Or the captions and audio seem to be in sync at first, but as time goes by, there is a bigger and bigger gap between what is being said and the caption shown on the screen. It makes watching a show with closed captions/subtitles quite annoying. There are many reasons for such sync issues, which I will leave for a different blog. But suffice it to say, fixing such sync issues is a very time-consuming effort, and probably as challenging for the operators who have to deal with them as it is for you and me who want to watch the show! Operators have to spend painstaking hours adjusting the timing of the closed captions all the way through, making sure that fixing the sync issue in one section doesn’t have the ripple effect of causing sync issues elsewhere. The time to fix a sync issue can vary from a few hours to more than a day!

CapMate, with the use of Machine Learning techniques, can provide a very accurate analysis of such sync issues, determining what type of sync problem exists and how far off the captions are from the spoken words. And, deploying a complex algorithm, CapMate can automatically adjust and correct the sync issue throughout the entire file at the operator’s press of one button! This action alone can save a substantial amount of an operator’s time, with amazing accuracy. Users can also perform a detailed review of the captions using the CapMate viewer application and make manual changes.
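To make the idea concrete (our actual ML-based analysis is proprietary and considerably more involved), the simplest class of sync problem, a constant offset, can be sketched in a few lines: estimate how far the captions lag the speech by comparing speech timestamps (e.g., from a speech-to-text pass) with caption start times, then shift every cue by that amount. All names and data below are hypothetical, and a progressive drift would additionally need a rate estimate, not just a single offset.

```python
from statistics import median

def estimate_offset(speech_starts, caption_starts):
    """Estimate a constant sync offset (seconds) between matched speech
    and caption start times; the median resists bad matches."""
    return median(c - s for s, c in zip(speech_starts, caption_starts))

def shift_cues(cues, offset):
    """Shift every caption cue earlier by the estimated offset.
    Each cue is a (start, end, text) tuple, with times in seconds."""
    return [(start - offset, end - offset, text) for start, end, text in cues]

# Hypothetical data: captions consistently appear ~1.5 s after the dialog.
speech   = [10.0, 14.2, 20.5, 27.1]
captions = [11.5, 15.7, 22.0, 28.6]
cues = [(11.5, 13.0, "Hello."), (15.7, 17.4, "How are you?")]
print(shift_cues(cues, estimate_offset(speech, captions)))
```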

Caption Overlay:

Another item that can be annoying to an audience is when the caption text, usually placed in the lower part of the screen, overlaps with burnt-in text in the show. Operators need to manually review the content with captions turned on to see if and when a caption overlays burnt-in text present on the screen. This is another time-consuming process.

CapMate, using a sophisticated algorithm, can examine every frame and detect any text that may be part of the content. It can then mark all the timecodes where caption text overlays the on-screen text, simplifying the process for the operator, who can quickly adjust the location of the caption and remedy the issue.
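As a rough sketch of the final step (the text-detection pass itself is the hard part, and our algorithm is not shown here): once each frame has bounding boxes for any burnt-in text, flagging collisions with the caption’s rendering region reduces to a rectangle-intersection test. The data layout below is hypothetical.

```python
def rects_overlap(a, b):
    """Axis-aligned rectangle intersection; rects are (left, top, right, bottom)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def overlay_timecodes(burnt_in_text, caption_region):
    """burnt_in_text: mapping of timecode -> list of on-screen text boxes
    detected in that frame. Returns the timecodes where a caption drawn
    in caption_region would collide with burnt-in text."""
    return [tc for tc, boxes in burnt_in_text.items()
            if any(rects_overlap(caption_region, box) for box in boxes)]

# Hypothetical frames: the first one carries a lower-third graphic.
frames = {"00:01:12:05": [(100, 620, 700, 690)], "00:01:12:06": []}
print(overlay_timecodes(frames, caption_region=(80, 600, 1200, 700)))
```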

Caption Overlap:

While this sounds similar to the previous feature, it is actually quite different. There are instances where, due to mistimed captions, the beginning of one caption may occur before the end of the previous caption. That, as you can imagine, has a big impact on the viewing experience and is not acceptable.

CapMate can easily detect and report all instances where such caption overlaps exist, and, like many of its other features, CapMate provides an intuitive interface for the operator to have CapMate make the necessary adjustments to all affected captions.
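The detection half of this check is mechanical and easy to sketch: sort the cues by start time and flag any cue that begins before the previous one ends. The naive correction below simply clamps each cue’s end to the next cue’s start; it is an illustration, not CapMate’s adjustment logic.

```python
def find_overlaps(cues):
    """Return pairs of consecutive cues where one starts before the
    previous one ends. Cues are (start, end, text), times in seconds."""
    cues = sorted(cues, key=lambda c: c[0])
    return [(a, b) for a, b in zip(cues, cues[1:]) if b[0] < a[1]]

def clamp_overlaps(cues):
    """Naive fix: end each cue no later than the next cue starts."""
    cues = sorted(cues, key=lambda c: c[0])
    return [(s, min(e, cues[i + 1][0]) if i + 1 < len(cues) else e, t)
            for i, (s, e, t) in enumerate(cues)]

cues = [(1.0, 4.5, "First caption"), (4.0, 6.0, "Starts too early")]
print(find_overlaps(cues))   # reports the offending pair
print(clamp_overlaps(cues))  # first cue now ends at 4.0
```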

SCC (and other) Standards Conformance:

Closed caption and subtitle files come in many different formats. One of the oldest and most arcane formats (and yet quite prevalent) is called SCC, which stands for “Scenarist Closed Captions.” It is commonly used with broadcast and web video, as well as DVDs and VHS tapes (yes, it is that old!). It has very specific format specifications and is not a human-readable file. Therefore, checking for format compliance is a very difficult task for an operator and always requires additional tools. And making corrections to such files is even more difficult, as it is easy to make matters worse with the smallest mistake. There are also a variety of XML-based caption formats that, while more human-readable, are still difficult to verify and correct manually.

CapMate has an automated standards-conformance capability and can quickly and easily detect file conformance issues for SCC and other formats; it can also make corrections accurately and effortlessly. There are a variety of templates defined for IMSC, DFXP, SMPTE-TT, etc., which CapMate can verify for conformance.
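To give a flavor of what “not human-readable” means here: an SCC file is a “Scenarist_SCC V1.0” header followed by lines that pair a timecode with raw hexadecimal caption words. A minimal structural check can be sketched as below; real conformance checking goes much further (control codes, parity, channel semantics), so treat this as an illustration only.

```python
import re

HEADER = "Scenarist_SCC V1.0"
# A timecode (';' before frames marks drop-frame), a tab, then 4-hex-digit words.
CAPTION_LINE = re.compile(r"^\d{2}:\d{2}:\d{2}[:;]\d{2}\t([0-9a-fA-F]{4} ?)+$")

def check_scc_structure(text):
    """Return a list of structural problems found in SCC file content."""
    problems = []
    lines = text.splitlines()
    if not lines or lines[0].strip() != HEADER:
        problems.append("line 1: missing 'Scenarist_SCC V1.0' header")
    for n, line in enumerate(lines[1:], start=2):
        if line.strip() and not CAPTION_LINE.match(line):
            problems.append(f"line {n}: malformed timecode/hex caption line")
    return problems

sample = "Scenarist_SCC V1.0\n\n00:00:12;04\t9420 9420 94ae 94ae\n"
print(check_scc_structure(sample))  # [] -- structurally plausible
```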

Profanity and Spell Check/Correction:

While some content may include spoken profanity, many broadcasters may choose not to have such words spelled out in the captions/subtitles. In many cases where automated speech-to-text utilities are used to create the initial caption files, such profane words are transcribed without any discretion. And in the case of human authoring, where captions are generated manually, spelling mistakes can easily be introduced by the authoring operators.

CapMate provides quick and accurate analysis of the caption text against a user-defined profanity database, and a user-extendable English dictionary, to detect both profanity and spelling mistakes. Similar to word-processing software, CapMate allows the operator to do a global replacement of a profane word with a suitable substitute, or to fix a spelling mistake. This work takes a fraction of the time using CapMate compared to manual caption/subtitle review and correction.
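The global-replacement idea is analogous to find-and-replace in a word processor, applied across every cue while preserving the timing. A toy sketch (the substitution table and data are hypothetical, and this is not CapMate’s implementation):

```python
import re

def scrub_captions(cues, substitutions):
    """Replace flagged words in every cue's text, keeping cue timing.
    substitutions: lowercase word -> replacement (a user-defined table)."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, substitutions)) + r")\b", re.IGNORECASE)
    swap = lambda m: substitutions[m.group(0).lower()]
    return [(start, end, pattern.sub(swap, text)) for start, end, text in cues]

cues = [(12.0, 14.5, "Well, darn it!")]
print(scrub_captions(cues, {"darn": "[bleep]"}))
```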

Many other Features:

To detail all the features of CapMate here would make for a very long blog! Suffice it to say, there is a wealth of other features, dealing with items such as CPL (Characters Per Line), CPS (Characters Per Second), WPM (Words Per Minute), and the number of lines, that CapMate can verify and provide an intuitive interface for the operator to fix.

I will have to leave those for a separate blog (it is called job security!).

But if you want more details about CapMate, please go here or contact us for a demo and free trial! You can also check out the launch video we made announcing CapMate here!

HDR Insights Series Article 4 : Dolby Vision

In the previous article, we discussed HDR tone mapping and how it is used to produce an optimal viewer experience on a range of display devices. This article discusses the basics of Dolby Vision metadata and the parameters that users need to validate before the content is delivered.

What is HDR metadata?

HDR metadata is an aid that helps a display device show content in an optimal manner. It describes properties of the HDR content and the mastering device, which the display device uses to map the content according to its own color gamut and peak brightness. There are two types of metadata: static and dynamic.

Static metadata

Static metadata applies to the entire piece of content. It is standardized by SMPTE ST 2086 (the MaxCLL and MaxFALL values below are defined in CTA-861.3). Key items of static metadata are as follows; a sketch of how the two light-level values are computed appears after this list.

  1. Mastering display properties: Properties defining the device on which the content was mastered.
    • RGB color primaries
    • White point
    • Brightness (luminance) range
  2. Maximum Content Light Level (MaxCLL): Light level of the brightest pixel in the entire video stream.
  3. Maximum Frame-Average Light Level (MaxFALL): Average light level of the brightest frame in the entire video stream.
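As a concrete illustration of the last two items, here is how MaxCLL and MaxFALL could be measured from decoded frames, assuming each frame is available as linear-light RGB in cd/m² (nits); per CTA-861.3, a pixel’s light level is taken as max(R, G, B). This is a minimal sketch, not production metering code.

```python
import numpy as np

def max_cll_fall(frames):
    """Compute (MaxCLL, MaxFALL) over a clip.

    frames: iterable of HxWx3 float arrays of linear light in nits.
    MaxCLL  = light level of the brightest pixel anywhere in the stream.
    MaxFALL = highest per-frame average light level.
    A pixel's light level is max(R, G, B)."""
    max_cll = max_fall = 0.0
    for frame in frames:
        light = frame.max(axis=2)                 # per-pixel max(R, G, B)
        max_cll = max(max_cll, float(light.max()))
        max_fall = max(max_fall, float(light.mean()))
    return max_cll, max_fall

# Two tiny synthetic 2x2 "frames" for illustration
frames = [np.full((2, 2, 3), 100.0),
          np.array([[[900.0, 50, 50], [80, 80, 80]],
                    [[60, 60, 60], [70, 70, 70]]])]
print(max_cll_fall(frames))  # (900.0, 277.5)
```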

In typical content, the brightness and color range vary from shot to shot. The challenge with static metadata is that if tone mapping is performed based on it, the mapping is driven only by the brightest frame in the entire content. As a result, the majority of the content will have greater compression of dynamic range and color gamut than needed. This leads to a poor viewing experience on less capable HDR display devices.

Dynamic metadata

Dynamic metadata allows tone mapping to be performed on a per-scene basis. This leads to a significantly better viewing experience when the content is displayed on less capable HDR display devices. Dynamic metadata is standardized by SMPTE ST 2094, which defines content-dependent metadata. Using dynamic metadata along with static metadata overcomes the issues presented by using only static metadata for tone mapping.

Dolby Vision

Dolby Vision uses dynamic metadata and is in fact the most commonly used HDR technology today. It has been adopted by major OTT service providers such as Netflix and Amazon, as well as major studios and a host of prominent television manufacturers. Dolby Vision is standardized in SMPTE ST 2094-10. In addition to supporting dynamic metadata, Dolby Vision also allows the description of multiple trims for specific target devices, enabling finer rendering on those devices.

Dolby has documented the details of its algorithm in what it refers to as Content Mapping (CM) documents. The original CM algorithm, version 2.9 (CMv2.9), has been in use since the introduction of Dolby Vision. Dolby introduced Content Mapping version 4 (CMv4) in the fall of 2018, and both versions of the CM are still in use. The Dolby Vision Color Grading Best Practices Guide provides more information.

Dolby Vision metadata is coded at various ‘levels’, described below:

LEVEL 0 – GLOBAL METADATA (STATIC)
  • Mastering Display: Describes the characteristics of the mastering display used for the project
  • Aspect Ratio: Ratio of canvas and image (active area)
  • Frame Rate: Frame rate of the content
  • Target Display: Describes the characteristics of each target display used for L2 trim metadata
  • Color Encoding: Describes the image container deliverable
  • Algorithm/Trim Version: CM algorithm version and trim version

LEVEL 1 – ANALYSIS METADATA (DYNAMIC)
  • L1 Min, Mid, Max: Three floating-point values that characterize the dynamic range of the shot or frame. Shot-based L1 metadata is created by analyzing each frame contained in a shot in LMS color space; the results are combined to describe the entire shot as L1Min, L1Mid, L1Max. Stored as LMS (CMv2.9) and L3 offsets.

LEVEL 2 – BACKWARDS-COMPATIBLE PER-TARGET TRIM METADATA (DYNAMIC)
  • Reserved1, Reserved2, Reserved3, Lift, Gain, Gamma, Saturation, Chroma, Tone Detail: Automatically computed from L1, L3, and L8 (lift, gain, gamma, saturation, chroma, tone detail) metadata for backwards compatibility with CMv2.9.

LEVEL 3 – OFFSETS TO L1 (DYNAMIC)
  • L1 Min, Mid, Max: Three floating-point values that are offsets to the L1 analysis metadata (L3Min, L3Mid, L3Max). L3Mid is a global, user-defined trim control. L1 is stored as CMv2.9-computed values; CMv4 reconstructs RGB values from L1 + L3.

LEVEL 5 – PER-SHOT ASPECT RATIO (DYNAMIC)
  • Canvas, Image: Used for defining shots that have different aspect ratios than the global L0 aspect ratio.

LEVEL 6 – OPTIONAL HDR10 METADATA (STATIC)
  • MaxFALL, MaxCLL: Metadata for HDR10 (MaxCLL – Maximum Content Light Level; MaxFALL – Maximum Frame-Average Light Level).

LEVEL 8 – PER-TARGET TRIM METADATA (DYNAMIC)
  • Lift, Gain, Gamma, Saturation, Chroma, Tone Detail, Mid Contrast Bias, Highlight Clipping, plus 6-vector (R,Y,G,C,B,M) saturation and 6-vector (R,Y,G,C,B,M) hue trims: User-defined image controls to adjust the CMv4 algorithm per target, with secondary color controls.

LEVEL 9 – PER-SHOT SOURCE CONTENT PRIMARIES (DYNAMIC)
  • Rxy, Gxy, Bxy, WPxy: Stores the mastering display color primaries and white point as per-shot metadata.

 

Dolby Vision QC requirements

Netflix, Amazon, and other streaming services are continuously adding more HDR titles to their libraries, with the aim of improving the quality of experience for their viewers and differentiating their service offerings. This requires content suppliers to be equipped to deliver good-quality, compliant HDR content, and the ability to verify quality before delivery becomes ever more important.

Many of these OTT services support both the HDR-10 and Dolby Vision flavors of HDR. However, more and more Netflix HDR titles are now based on Dolby Vision. Dolby Vision is a new and complex technology, and checking content for correctness and compliance is not always easy. Delivering non-compliant HDR content can affect your business, so using a QC tool to assist in HDR QC can go a long way toward maintaining good standing with these OTT services.

Here are some of the important aspects to verify for HDR-10 and Dolby Vision:

  1. HDR metadata presence
    • HDR-10: Static metadata must be coded with the correct parameter values.
    • Dolby Vision: Static metadata must be present once, and dynamic metadata must be present for every shot in the content.
  2. HDR metadata correctness. There are a number of items that content providers need to verify in the metadata (several of the timeline rules are illustrated in the sketch after this list):
    • Only one mastering display should be referenced in the metadata.
    • Correct mastering display properties – RGB primaries, white point, and luminance range.
    • Correct MaxFALL and MaxCLL values.
    • All target displays must have unique IDs.
    • Correct algorithm version. Dolby supports two versions:
      • Metadata Version 2.0.5 XML for CMv2.9
      • Metadata Version 4.0.2 XML for CMv4
    • No frame gaps. All shots, as well as frames, must be tightly aligned within the timeline; there should not be any gap between frames and/or shots.
    • No overlapping shots. The timeline must be accurately cut into individual shots, and the analysis that generates L1 metadata should be performed on a per-shot basis. If the timeline is not accurately cut into shots, there will be luminance consistency issues, which may lead to flashing and flickering artifacts during playback.
    • No negative shot durations. Shot duration, as coded in the “Duration” field, must not be negative.
    • A single trim per target display. There should be one and only one trim for each target display.
    • Level 1 metadata must be present for all shots.
    • Valid canvas and image aspect ratios. Cross-check the canvas and image aspect ratios against baseband-level verification of the actual content.
  3. Validation of video essence properties. Essential properties such as color matrix, color primaries, transfer characteristics, bit depth, etc., must be correctly coded.
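Several of the timeline rules in item 2 (no frame gaps, no overlapping shots, no negative durations, L1 present for every shot) are mechanical once the shot list has been extracted from the metadata. Here is a simplified sketch operating on plain shot records rather than the actual Dolby Vision XML; the record layout is hypothetical.

```python
def check_shot_timeline(shots, total_frames):
    """Check a shot list against the timeline rules above.

    shots: list of (start_frame, duration_frames, has_l1) records --
    a simplified stand-in for shots parsed out of the metadata XML."""
    problems = []
    expected = 0
    for start, duration, has_l1 in sorted(shots):
        if duration <= 0:
            problems.append(f"shot at frame {start}: non-positive duration")
        if start > expected:
            problems.append(f"frame gap before shot at frame {start}")
        elif start < expected:
            problems.append(f"shot at frame {start} overlaps the previous shot")
        if not has_l1:
            problems.append(f"shot at frame {start}: missing L1 metadata")
        expected = start + max(duration, 0)
    if expected != total_frames:
        problems.append("shots do not cover the full timeline")
    return problems

# Hypothetical 100-frame timeline with one gap and one missing L1 block
print(check_shot_timeline([(0, 40, True), (45, 55, False)], 100))
```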

Netflix requires the Dolby Vision metadata to be embedded in the video stream for content delivered to them. Reviewing metadata embedded in a video stream can be tedious, so an easy way to extract and review the entire metadata is both needed and advantageous.

How can we help?

Venera’s QC products (Pulsar for on-premise and Quasar for cloud) can help identify these issues in an automated manner. We have worked extensively with various technology and media groups to create features that help users with their validation needs. And we have done so without introducing a lot of complexity for the users.

Depending on the volume of your content, you could consider one of our perpetual license editions (Pulsar Professional or Pulsar Standard); for low-volume customers, we also have a unique option called Pulsar Pay-Per-Use (Pulsar PPU), an on-premise usage-based QC software where you pay a nominal per-minute charge for the content that is analyzed. And we, of course, offer a free trial so you can test our software at no cost to you. You can also download a copy of the Pulsar brochure here. And for more details on our pricing, you can check here.

If your content workflow is in the cloud, then you can use our Quasar QC service, which is the only Native Cloud QC service in the market. With advanced features like usage-based pricing, dynamic scaling, regional resourcing, a content security framework, and a REST API, the platform is a good fit for content workflows requiring quality assurance. Quasar is currently supported on the AWS, Azure, and Google Cloud platforms and can also work with content stored on Backblaze B2 cloud storage. Read more about Quasar here.

Both Pulsar & Quasar come with a long list of ‘ready to use’ QC templates for Netflix, based on its latest published specifications (as well as templates for some other popular platforms, like iTunes, CableLabs, and DPP), which can help you run QC jobs right out of the box. You can also enhance and modify any of these QC templates or build new ones! And we are happy to build new QC templates for your specific needs.

QC for Presence of Emergency Alert System (EAS) Message

The Emergency Alert System (EAS) is a national public warning system in the United States, commonly used by state and local authorities to deliver important emergency information, such as weather and AMBER alerts, to affected communities. EAS participants include radio and television broadcasters, cable systems, satellite radio and television providers, and wireline video providers. These participants deliver local alerts on a voluntary basis, but they are required to provide the capability for the President to address the public during a national emergency. The majority of EAS alerts originate from the National Weather Service in response to severe weather events, but an increasing number of state, local, territorial, and tribal authorities also send alerts.

EAS messages are sent as part of the media delivery channels of the various participants. Key characteristics of EAS messages include:

  • They are designed to immediately catch the attention of viewers, increasing the probability that the general population will listen to the emergency message and act accordingly.
  • They contain location information, so that the message is delivered only in the target geographies.
  • They contain the actual audio/video message warning the public.

Since EAS messages are specifically designed for emergencies, participants are prohibited from using them for any other purpose, intentionally or unintentionally. Participants are not even allowed to transmit a tone that merely sounds similar to an EAS message. The simple rule is that no one is allowed to misuse EAS tones to attract attention to any other content, such as advertisements or dramatic, entertainment, and educational programs. To enforce this, the FCC has imposed heavy penalties on various broadcasters for violating this rule. Some of the major pending or settled violations and their proposed or actual fines are listed at the end of this article.

Many of these violations have been accidental, but some have also stemmed from creative intent. One specific case is an episode of the TV show “Young Sheldon”. In the Season 1 episode titled “A Mother, A Child, and a Blue Man’s Backside,” Missy (Raegan Revord) is watching the classic “duck season/rabbit season” Looney Tunes short and is annoyed when a tornado-watch alert interrupts it. According to a source familiar with the situation, the scene used a muffled background sound that was altered to balance the authenticity of a family’s reaction to a severe-weather event against the FCC’s rules on the misuse of EAS tones. Nevertheless, the FCC proposed a $272,000 fine against CBS for this violation.

To avoid such penalties, broadcasters and other content providers must ensure that no audio tone similar to an EAS message is present in the content they broadcast. Any missed instance can attract heavy penalties and could also tarnish the brand image of the content provider. Therefore, all content must pass through stringent QC (validating that there are no EAS tones) before being delivered to end users.

 

EAS Message Structure

The EAS message structure is based on Specific Area Message Encoding (SAME). EAS messages are composed of four parts – a SAME header, an attention signal, an audio announcement, and a SAME end-of-message marker – as described below (a tone-detection sketch follows this list):

  1. SAME header: The SAME header uses Audio Frequency Shift Keying (AFSK) at a rate of 520.83 bits per second to transmit the codes. It uses two frequencies – 2083.3 Hz (mark frequency) and 1562.5 Hz (space frequency) – and each mark or space bit must last 1.92 milliseconds. Key information in the header includes the originator, the type of alert, the region for which the alert is issued, and the date/time for which the alert is applicable.
  2. Attention signal: A single tone (1050 Hz) or a dual audio tone (853/960 Hz). Commercial broadcast operations use the dual tone (853 and 960 Hz together), while the single 1050 Hz tone is used by NOAA Weather Radio. It is designed to attract the immediate attention of listeners.
  3. The actual audio, video, or text message.
  4. SAME end-of-message marker: Indicates the end of the emergency alert.
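One common way to detect the attention-signal frequencies in a block of audio is the Goertzel algorithm, which measures the energy at a single frequency far more cheaply than a full FFT. The sketch below is illustrative only; the threshold is uncalibrated, and our products’ detectors are considerably more sophisticated.

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Energy of one frequency in a block of mono samples (Goertzel)."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / sample_rate)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def attention_signal_present(block, rate, threshold=1e6):
    """Heuristic: flag energy at the EAS dual-tone pair (853 + 960 Hz)
    or at the single 1050 Hz NOAA tone. Threshold is illustrative."""
    p853, p960, p1050 = (goertzel_power(block, rate, f)
                         for f in (853.0, 960.0, 1050.0))
    return (p853 > threshold and p960 > threshold) or p1050 > threshold

# Synthetic half-second block carrying the 853/960 Hz dual tone
rate = 48000
block = [math.sin(2 * math.pi * 853 * t / rate) +
         math.sin(2 * math.pi * 960 * t / rate) for t in range(rate // 2)]
print(attention_signal_present(block, rate))  # True
```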

EAS Message Structure

File-based QC

File-based QC tools are now commonly used in content preparation and delivery chains, reducing the dependency on manual QC. Many content providers resort to spot QC rather than full QC, exposing themselves to the risk of missing potential violations of FCC guidelines. Therefore, a QC tool that can reliably detect the presence of an EAS message, or a tone similar to one, can save a content provider from losses and embarrassment.

EAS message detection is part of both of our QC offerings, Pulsar & Quasar. The resulting QC report will contain the exact location of any such violation, which users can then review and act on.

EAS message detection

EAS tone

 

Our QC tools not only detect ideal EAS message tones but can also report tones that sound similar to EAS tones. Considering the case of the “Young Sheldon” episode, this capability becomes important and can save content providers from potential penalties.

We provide a range of QC tools for various deployments, be it on-premise or in the cloud. Whichever operational mode users choose, they can ensure that EAS tones are not present in the content they send out to end users.

EAS - File based QC

Major EAS violations

Below is a list of some of the recent fines proposed by the FCC for violations of EAS usage rules:

  • The FCC proposed a $20,000 fine on April 7, 2020, against New York City radio station WNEW-FM for using the attention signal during its morning show on October 3, 2018, as part of a skit discussing the National Periodic Test held later that day
  • The FCC proposed a fine of $272,000 against CBS for transmitting a simulated EAS tone during the telecast of a Young Sheldon episode on April 12, 2018
  • Meruelo Group was fined $61,000 for including an EAS-like tone in a radio advertisement for KDAY and KDEY-FM’s morning show
  • ABC Networks is being fined $395,000 for using WEA (Wireless Emergency Alert) tones multiple times during a Jimmy Kimmel Live sketch
  • iHeartMedia was fined $1 million on May 19, 2015, for the use of an EAS tone during the October 24, 2014 episode of the nationally syndicated Bobby Bones radio show
  • Cable providers were fined $1.9 million on March 3, 2014, for misuse of EAS tones in the trailer for the 2013 film Olympus Has Fallen

Remote File QC: During & Post COVID-19

The entire world is going through unprecedented times. The COVID-19 outbreak has taken everyone by surprise, affecting the businesses and livelihoods of so many people worldwide. There are challenges in all aspects of our lives, but these times are also a testament to the resilience of humanity.

Until a few months ago, it was completely unthinkable that our entire workforce would work from home. We used to think that only a limited amount of work could be done from home and that coming to the office was more or less necessary. Work efficiency, team coordination, and communication all seemed like major challenges with the work-from-home model. It is just amazing to see how we, and the vast majority of organizations worldwide, have moved completely to working from home within a span of a few weeks, while still remaining efficient.

The COVID-19 crisis has interrupted life for all of us and has made us rethink our priorities and the best way of working in a post-COVID-19 era. It is very possible, even probable, that life after this pandemic will never be the same. Good or bad, only time will tell.

Most of our customers have so far relied heavily on on-premise software and equipment, setting up centralized content workflows that remain under tight controls and regulations. However, that mode of working has suddenly become infeasible and impractical. These same customers are now forced to equip themselves to work and manage operations remotely.

A few years back, when our Pulsar on-premise file QC solution was our flagship offering, we started investing time and resources in additional offerings: Pulsar Pay-Per-Use (PPU), an on-premise usage-based QC service, and Quasar, a native cloud file QC service. Pulsar PPU has been very attractive to small organizations that don’t have budgets for an upfront QC license purchase and are more comfortable paying on a usage basis. Quasar is primarily designed for organizations whose workflows are already in the cloud. In the current circumstances, Pulsar PPU and Quasar present a compelling alternative for all media organizations pursuing Remote QC to ensure their business continuity.

And with our latest QC offering, “QC as a Service” (QCaaS), we are taking it one step further. For those who want their content verified with a state-of-the-art QC solution but simply don’t have the computing or human resources to perform the QC, we now offer the convenience of simply uploading their content to the cloud and letting us do the QC for them. We return the QC report, along with the peace of mind that their content meets the QC requirements of their intended platform or end customer. For those who want to keep their focus on their business of content creation or delivery, and not on the operational challenges of QC, we gladly handle that process for them.

Remote File QC

We have geared up our offerings to help our customers ensure their QC needs are met, irrespective of their location. Here is a summary of our Remote File QC offerings.

Quasar – Native Cloud File QC Service

Quasar is the world’s first native cloud automated file QC service. It is designed specifically to exploit the benefits of cloud infrastructure while still making use of the rich QC capabilities of our on-premise file QC system, Pulsar. Advanced capabilities such as dynamic scaling, regional resourcing, and content security features with a REST API make Quasar an ideal system for QCing your content in the cloud. Quasar is available as a SaaS service or as a Private edition that you can deploy in your own VPC (Virtual Private Cloud). Quasar works natively on AWS, Azure, and Google Cloud. Read more at www.veneratech.com/quasar.

Pulsar PPU (Pay-Per-Use)

Pulsar PPU is the only usage-based on-premise file QC solution on the market. It has all the analysis capabilities of our enterprise QC solution, Pulsar, while allowing users to pay based on content duration. In addition to QC, it also allows users to perform Harding PSE validation and obtain a Harding certificate. Users simply load credits using their credit cards, and the credits are debited as QC tasks are performed. Read more at http://stage.veneratech.com/pulsar-file-qc-ppu.

QCaaS (QC as a Service)

QCaaS is the QC service offered around our QC solutions. Users simply upload their content to us, and we return a QC report to them. QCaaS pricing is usage-based as well: the user pays based on content duration. No software installation is required on the user side for QCaaS. Read more at www.veneratech.com/qcaas.

Delivering to NETFLIX? QC Requirements in A Nutshell and How to Comply with Them

It is a well-known fact that Netflix is very conscious of the quality of the content delivered via its service. Whether it is the overall audio/video quality or the structural compliance of the content delivery packages, everything needs to comply with Netflix’s technical specifications before the content can be accepted.

Becoming a Netflix Preferred Fulfillment Partner (NPFP) or part of the Netflix Post Partner Program (NP3) is a tough task, and continuing to remain a partner is not easy either, requiring consistent attention to quality. Netflix maintains the track record of its partners, and failure rates are published on its website from time to time.

It is therefore pertinent for Netflix partners to ensure the compliance of their content before delivering it. Here is a list of some of the common areas of QC that suppliers need to pay attention to before delivering their content.

  1. IMF analysis: Netflix requires most of its content to be delivered in IMF packages. That means you need to verify the accuracy of your IMF packages before delivery. This includes basic compliance with the IMF Application 2E SMPTE standards and specific validations of asset maps, packing lists, and other package elements.
  2. Dolby Vision: Increasingly, content is now being delivered in HDR, and Netflix has selected Dolby Vision as its HDR format of choice. This requires you to ensure basic Dolby Vision compliance along with the specific structural recommendations outlined in the Netflix specifications.
  3. Photon: Netflix also requires your IMF packages to pass its own ‘Photon’ IMF tool before delivery. These checks are performed while the package is being uploaded to Netflix; if Photon fails the asset, the content will not be sent to Netflix.
  4. Harding PSE: Detecting video segments that may cause Photo-Sensitive Epilepsy (PSE), particularly in content being delivered to the UK and Japan, is becoming very important. Netflix may require PSE validation for certain categories of content.
  5. Audio/video baseband quality: The content must be thoroughly checked for a wide range of artifacts in the audio/video essence before delivery.

Many of the above items are difficult and/or time-consuming to perform with manual QC and therefore warrant the use of a QC tool. Venera’s QC products (Pulsar for on-premise and Quasar for cloud) can help identify these issues in an automated manner. We have worked extensively with the IMF User Group and with the Dolby and Netflix teams to create features that do what users need, and we have done so without introducing a lot of complexity for the users.

We have also integrated the industry-standard Harding PSE engine and can generate a Harding certificate for every file processed through our Pulsar & Quasar file QC tools. The Netflix Photon tool has also been integrated, so you can receive ONE QC report that includes the Photon messages as well.

The results are provided in the form of XML/PDF reports for easy assessment. If desired, the Harding certificate and the QC reports (which include the Photon results) can even be shared with Netflix along with the delivered content.

Pulsar – On-Premise File-Based Automated QC

Depending on the volume of your content, you could consider one of our perpetual license editions (Pulsar Professional or Pulsar Standard); for low-volume customers, we also have a unique option called Pulsar Pay-Per-Use (Pulsar PPU), an on-premise usage-based QC software where you pay only $15/hr for the content that is analyzed. And we, of course, offer a free trial so you can test our software at no cost to you. You can also download a copy of the Pulsar brochure here.

Quasar – Native Cloud File QC Service

If your content workflow is in the cloud, then you can use our Quasar QC service, which is the only Native Cloud QC service in the market. With advanced features like usage-based pricing, dynamic scaling, regional resourcing, a content security framework, and a REST API, the platform is a good fit for content workflows requiring quality assurance. Quasar is currently supported on AWS, Azure, and Google Cloud. Read more about Quasar here.

Both Pulsar & Quasar come with a long list of ‘ready to use’ QC templates for Netflix, based on its latest published specifications (as well as templates for some other popular platforms, like iTunes, CableLabs, and DPP), which can help you run QC jobs right out of the box. You can also enhance and modify any of them or build new ones! And we are happy to build new QC templates for your specific needs.