Latency levels in livestreaming have improved over the past few years, but as audiences for livestreamed content grow, so too does demand for even lower latency. That’s where CMAF comes in.
Common Media Application Format (CMAF) isn’t exactly a new format. It’s closely related to fragmented MP4, which has been used for years. Co-authored by Apple and Microsoft, CMAF was conceived as a standardized transport container that works with both HLS and DASH, the two dominant streaming protocols, to avoid the added cost and complexity of packaging the same content twice within video streaming workflows.
Jon Alexander, senior director of product management at Akamai Technologies, said CMAF allows for chunk transfer: a video segment can be delivered while the encoder is still creating it. That means the player has to be configured to start rendering the video before it has even received the entire file.
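The producer/consumer overlap Alexander describes can be illustrated with a toy sketch (assuming nothing about Akamai’s actual pipeline): an “encoder” thread emits chunks of a segment into a queue, and the “player” renders each chunk as it arrives rather than waiting for the complete file.

```python
import queue
import threading
import time

def encoder(out: queue.Queue, chunks: int = 8) -> None:
    """Simulates an encoder emitting a segment as a series of CMAF chunks."""
    for i in range(chunks):
        time.sleep(0.05)           # encoding time per chunk
        out.put(f"chunk-{i}")      # each chunk ships as soon as it's encoded
    out.put(None)                  # end-of-segment marker

def player(inp: queue.Ueue if False else queue.Queue) -> list:
    """Starts 'rendering' chunks before the full segment exists."""
    played = []
    while (chunk := inp.get()) is not None:
        played.append(chunk)       # render immediately; don't wait for the file
    return played

q = queue.Queue()
threading.Thread(target=encoder, args=(q,), daemon=True).start()
print(player(q))                   # playback overlaps with encoding
```

In a real deployment, the queue is replaced by HTTP chunked transfer encoding: the CDN forwards each chunk to the player the moment the encoder produces it.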
Chunk transfer could help lower latency levels from where they are now.
Alexander said that about three to four years ago, the default was about 30 to 45 seconds for end-to-end latency whether using HLS or DASH. Akamai said its low-latency product, which launched about two years ago, offers 10 to 12 seconds of latency and is now the company’s standard.
That’s the standard that has been used for big livestreaming events like the 2018 World Cup. But as these events continue to draw larger livestreaming audiences, broadcasters want latency to improve even more.
“What we’re starting to see now is customers saying ‘Hey, we want to go lower,’” Alexander said.
With that request in mind, Akamai demonstrated CMAF streaming with chunk transfer at IBC in September, which drops latency down to sub-one-second levels.
It’s supported natively on Akamai’s platform today, but the challenge is that video workflows need an encoder and a player that can support chunk transfer.
Moving toward wider adoption
Akamai has an encoder verification process, and it currently has five encoders certified for its CMAF ultra-low-latency solution. For comparison, it has 13 encoders certified for its current standard 10-12 second latency media services product. So there’s still some catching up to do, but CMAF support is moving along within Akamai’s encoder program.
But to show off its ultra low-latency demo at IBC in September, Akamai built its own player.
Akamai’s demo used a customized dash.js player with a target latency, meaning the player tries to synchronize with the live broadcast and stay 3-5 seconds behind live.
The company was also addressing slippage, the gradual drift that can accumulate in low-latency streams and, over the span of 60 minutes, leave a livestream one to two minutes behind live. Using the set latency target, Akamai’s dash.js player can dynamically reconverge the stream with the live broadcast, preventing slippage from building up during extended viewing.
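One common way a player can hold a latency target and reconverge after slippage is to nudge its playback rate. The function below is an illustrative sketch, not dash.js’s actual catch-up algorithm; the target, tolerance, and rate values are assumptions.

```python
def playback_rate(live_latency: float, target: float = 4.0,
                  tolerance: float = 0.5, max_drift: float = 0.05) -> float:
    """Pick a playback rate that nudges the player back toward its
    latency target. Thresholds here are illustrative, not dash.js's."""
    error = live_latency - target
    if abs(error) <= tolerance:
        return 1.0                 # close enough: play at normal speed
    if error > 0:
        return 1.0 + max_drift     # behind target: speed up slightly
    return 1.0 - max_drift         # ahead of target: slow down

# A player that has slipped to 10 s behind live with a 4 s target
# plays slightly fast until it reconverges.
print(playback_rate(10.0))
```

A small rate change (here 5%) is generally imperceptible to viewers, which is why rate adjustment is preferred over seeking, which would cause a visible skip.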
But just because Akamai built its own player doesn’t mean that CMAF support isn’t in the works among the player community.
John Luther, senior vice president of technology at JW Player, said his company is working on adding CMAF support to its players in 2019.
CMAF has taken longer to gain industry adoption than many people had hoped, Luther said, which he said is also somewhat true of DASH. HLS is the dominant format for streaming today, he said, and HLS with MPEG-2 transport stream segments works well enough for most streaming needs.
“But in [the] last six months, I’ve been hearing almost nothing but requests for lower latency adaptive streaming,” Luther said.
He said part of that demand is coming from the Real-Time Messaging Protocol (RTMP), a part of Flash, essentially going away, and from the industry realizing that HTML5 doesn’t have a true real-time delivery protocol. CMAF chunk transfer, he said, can fill that need.
“In order to do all that and make sure everyone conforms to [CMAF], tests it and puts it in their encoding pipelines, packaging, CDNs and the whole ecosystem, there’s a lot of work [that] needs to be done. And that work is now starting to be done,” Luther said. “It’s getting there and I think 2019 will be the breakout year for it.”
The chunk transfer future
CMAF chunk transfer got a big push in 2016 when Apple announced it was adding fMP4 support to HLS. The thinking was that CMAF would alleviate the need to maintain separate silos of content encoded for HLS and for DASH.
But at the time, encryption was an issue. Namely, separate video streams would still be needed for the two incompatible encryption modes that CMAF supports: cipher block chaining (CBC) and counter mode (CTR). That’s because Apple’s HLS only supports CBC, and historically Google’s Widevine only supported CTR, Luther said.
“You had this logjam. But Widevine now supports both so that broke the logjam,” Luther said. “It wasn’t a fault of CMAF. It was a fault of the two biggest vendors of DRM technologies agreeing to disagree.”
Luther said there’s also a new API in the encrypted media extensions spec for detecting which encryption mode a browser supports, and that it should further help speed CMAF adoption.
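The selection logic that query enables might look like the sketch below. The key-system names and capability table are illustrative placeholders (real support is discovered at runtime in the browser via the encrypted media extensions), but the modes match what Luther describes: FairPlay is CBC-only (the “cbcs” scheme), while Widevine now handles both CBC and CTR (“cenc”).

```python
# Hypothetical capability table for illustration; in a browser, scheme
# support would be queried through the encrypted media extensions API.
SUPPORTED = {
    "FairPlay": {"cbcs"},           # Apple: CBC only
    "Widevine": {"cenc", "cbcs"},   # Google: now supports both modes
}

def pick_variant(key_system: str, packaged_variants: list) -> "str | None":
    """Return the first packaged stream variant the DRM system can decrypt."""
    schemes = SUPPORTED.get(key_system, set())
    for scheme in packaged_variants:
        if scheme in schemes:
            return scheme
    return None

print(pick_variant("FairPlay", ["cenc", "cbcs"]))   # -> cbcs
```

With Widevine supporting both schemes, a packager can in principle emit a single CBC-encrypted CMAF stream that plays everywhere, which is the logjam-breaking point Luther makes above.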
There’s still a ways to go before CMAF begins impacting the consumer experience. But Luther said that if CMAF gets implemented by all the content delivery networks, packaging vendors and everyone else, it has the potential to enable sub-one-second delivery of adaptive streaming.