Deeper Dive—Inside the streaming architecture CBS built for the Super Bowl

Super Bowl LIII has come and gone with another victory for the New England Patriots. Behind the scenes, CBS Interactive built a new streaming architecture to handle the crush of online viewers.

This year, CBS’s livestream of the game drew an average minute audience of 2.6 million viewers, up 31% year over year. CBS had been expecting a massive audience: Liz Carrasco, chief technology officer of CBS Interactive, and Chris Xiques, vice president of the video technology group at CBS Interactive, were part of the team that spent months preparing for it.

CBS Interactive used multiple CDN vendors and a common cloud origin approach for handling the livestream. Xiques said it’s an architecture he believes will be adopted by other media companies.

“Frankly, if the numbers grow for this event like we think they’re going to, then I’m not sure other media companies will have much choice but to pursue a strategy that’s like this,” Xiques said.

Carrasco said that no single CDN would sign up to handle the kind of traffic that CBS was forecasting for this year’s game. “So, we really had no choice but to do that.”

“We actually did go to a couple of the bigger CDNs and said, ‘Guys, we’re looking to get 30 or 35 terabits from you on game day,’ and it was just crickets,” Xiques added.

FierceVideo got a chance to talk with Carrasco and Xiques about the specific vendors and components CBS Interactive used in its Super Bowl LIII streaming workflow to build in redundancy at every step and ensure a high-quality experience for the livestream.

The following interview has been edited for clarity and length.

FierceVideo: What did CBS Interactive do differently in building its live streaming process for this year’s Super Bowl?

Liz Carrasco: Going into previous Super Bowls, we’ve known that there are certain vendors you need to use for an event of this scale; that was pretty standard practice. Last time, for example, we used one vendor for signal acquisition and encoding, and another vendor for origin storage, service delivery and security. That was Super Bowl 50, three years ago.

For this Super Bowl, we decided to take the whole stack in house: we did our own signal acquisition and encoding, and we took a new approach to handling origin and delivery. We essentially assembled a collection of vendor solutions to give us maximum control on game day, with primary, secondary and tertiary options for each of these components, so if there were any failures within the system we could handle them gracefully.
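The interview stays at the architecture level, but the primary/secondary/tertiary pattern Carrasco describes comes down to ordered failover across redundant components. Here is a minimal sketch of that idea in Python, assuming hypothetical origin URLs and a plain HTTP health probe; none of these endpoints or paths come from CBS Interactive.

```python
import urllib.request
import urllib.error

# Hypothetical redundant origins in priority order (primary, secondary, tertiary).
# CBSi's real endpoints and health checks were not disclosed.
ORIGINS = [
    "https://origin-primary.example.com",
    "https://origin-secondary.example.com",
    "https://origin-tertiary.example.com",
]

def is_healthy(origin: str, timeout: float = 2.0) -> bool:
    """Probe a lightweight health path; any error or timeout triggers failover."""
    try:
        with urllib.request.urlopen(f"{origin}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def pick_origin() -> str:
    """Return the highest-priority origin that responds, falling through on failure."""
    for origin in ORIGINS:
        if is_healthy(origin):
            return origin
    raise RuntimeError("all origins are unhealthy")
```

In a production system a probe like this would feed a steering layer rather than run per request, but the fall-through ordering is the same.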

FierceVideo: So, the goal this year was to give CBS more control?

Carrasco: That’s right, because that control inherently gives us a better customer experience. In the past, when it was one vendor for encoding and signal acquisition and another for origin, an issue with either vendor could take the entire system down. In this situation, we were able to have redundant, parallel systems in place.

Chris Xiques: Another thing that really drove this was the realization that with each Super Bowl we do, every three years, the audience is growing exponentially. At this point everyone’s got some sort of smart device they can watch sports on if they want to. When we started doing traffic estimations, we quickly realized that counting on one vendor to supply all the bandwidth we might need was unrealistic. From that premise, we decided we were going to have to use multiple CDN vendors, and that we had to have a way to control that usage for the benefit of our users and to make sure we had a really stable experience. That’s what started to drive the design of bringing this whole thing in house.

FierceVideo: What benefits did you see this year because of the new architecture?

Xiques: The biggest benefit we saw across the board on devices was less rebuffering and a smoother streaming experience. Anecdotally, some engineers on the Apple TV platform called us and said it was the smoothest Super Bowl they’d ever seen on the platform. That was echoed by a lot of users and a lot of the statistics we looked at.

FierceVideo: Can you break down the mix of first- and third-party solutions CBSi used within its tech stack this year?

Xiques: Yeah, sure. Starting with signal acquisition, we took advantage of the fact that we partner with the CBS corporate engineering team in New York City, so we located our encoders in that broadcast center and used all the smarts of the CBS engineering team, together with our CBS Interactive team, to really nail down the signal acquisition and encoding. We used Elemental encoders; I think that’s pretty standard across the industry for contribution feeds.

For origin, in order to use multi-CDN effectively you have to have a common origin that all the CDNs can go to, pull the segments from and do their delivery. The Amazon folks had a new suite of products, and one of them, called MediaStore, acts as an optimized S3 origin; that’s what we were trafficking all of these encoded segments to.
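MediaStore exposes a simple data-plane API for exactly this pattern: the encoders write segments into one container, and every CDN pulls from it. A sketch of pushing a single encoded HLS segment with boto3 follows; the container name, region, segment path and cache TTL are illustrative placeholders, not CBSi’s actual configuration.

```python
import boto3

# The MediaStore container name and region are placeholders; the data-plane
# endpoint is looked up from the control-plane API.
control = boto3.client("mediastore", region_name="us-east-1")
endpoint = control.describe_container(
    ContainerName="example-live"
)["Container"]["Endpoint"]

data = boto3.client("mediastore-data", endpoint_url=endpoint, region_name="us-east-1")

# Write one encoded HLS segment to the common origin that every CDN pulls from.
with open("segment00042.ts", "rb") as f:
    data.put_object(
        Path="/live/game/segment00042.ts",
        Body=f.read(),
        ContentType="video/MP2T",
        CacheControl="max-age=6",  # short TTL: live segments are replaced quickly
    )
```

The point of the design is that one writable, HTTP-readable container becomes the single source of truth that all four delivery CDNs draw from.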

We also used a couple of vendors to do origin shielding, so that one common origin didn’t get overwhelmed by requests from the edge CDNs. And then we used four different CDN vendors to do delivery, along with a CDN decisioning vendor whose service let us build a fairly sophisticated app to tailor exactly how much bandwidth we were using from each of the CDNs, making sure we didn’t overwhelm any of them or exceed any of the bandwidth commitments we’d arranged with them.
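Xiques doesn’t name the decisioning vendor or its algorithm, but the core idea, steering each new session to whichever CDN has the most headroom under its commitment, can be sketched in a few lines. The vendor names, capacities and usage figures below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Cdn:
    name: str
    committed_gbps: float  # bandwidth commitment negotiated with the vendor
    current_gbps: float    # live usage, e.g. fed by real-time telemetry

# Illustrative numbers only; the real vendors and commitments weren't disclosed.
cdns = [
    Cdn("cdn-a", committed_gbps=10_000, current_gbps=7_200),
    Cdn("cdn-b", committed_gbps=8_000, current_gbps=3_100),
    Cdn("cdn-c", committed_gbps=6_000, current_gbps=5_900),
    Cdn("cdn-d", committed_gbps=5_000, current_gbps=1_400),
]

def pick_cdn(cdns):
    """Send the next session to the CDN with the most committed headroom,
    skipping any vendor already at its cap."""
    candidates = [c for c in cdns if c.current_gbps < c.committed_gbps]
    if not candidates:
        raise RuntimeError("all CDNs are at their bandwidth commitments")
    return max(candidates, key=lambda c: c.committed_gbps - c.current_gbps)

print(pick_cdn(cdns).name)  # -> cdn-b (largest headroom in this snapshot)
```

A real decisioning service would presumably also weigh per-CDN quality signals such as rebuffer rate, not just headroom against commitments.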

At a high level, those are the big pieces of the stack.

FierceVideo: Do you think CBSi’s livestream model for this year’s Super Bowl can be replicated by other companies going forward?

Xiques: I imagine it could. When you break it down in terms of the storage, origin shielding, CDN decisioning and the edge services, we basically just came up with this scheme and got all of these vendors to participate. That’s certainly something that other folks could do.

I think the piece that is tough to replicate would be the signal acquisition. Part of the reason you pay a big-name vendor to do that at scale is that it can be pretty tricky to get just right. The way the signal acquisition feed reaches the encoders can be a bit brittle, and you definitely want to have multiple failovers with multiple encoders, and maybe even more than one facility, with backups to the backup.
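That failover discipline can be made concrete. One common approach, sketched below with invented encoder names and thresholds, is to track how recently each redundant encoder pipeline published a segment and drop any pipeline that goes stale:

```python
import time

# Invented example: the last time each redundant encoder pipeline published a
# segment, e.g. taken from the most recent object it wrote to the origin.
last_segment_at = {
    "encoder-primary": time.time() - 4,    # wrote a segment 4 seconds ago
    "encoder-backup-1": time.time() - 5,
    "encoder-backup-2": time.time() - 61,  # stalled: no output for a minute
}

STALE_AFTER_SECONDS = 15  # a few segment durations of silence means the feed is down

def live_encoders(stamps, now=None):
    """Return the encoders still producing segments, in priority order."""
    now = now if now is not None else time.time()
    return [name for name, t in stamps.items() if now - t < STALE_AFTER_SECONDS]

active = live_encoders(last_segment_at)
print(active[0] if active else "no healthy encoder; cut to slate")
```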

I think we were really fortunate to work with the CBS engineering team, which helped us smooth out some of that, getting the signal from the truck all the way to our encoders. I certainly think it’s possible for other media companies to do that, but I think that’s the hardest piece to get right, depending on the resources available.