Scalable Wowza Transcoding

I need a Wowza/AWS Engineer to reduce costs

Live transcoding is never easy. Doing it yourself from scratch may be far too complex, while running it off a cloud service can look like a rip-off. Middle-ground solutions still involve a lot of decision making.

It gets worse if you run non-24/7 streams and their number fluctuates. The optimal strategy depends a lot on how many streams you are running and on how that number varies over time. As streams are rarely predictable, yet tend to peak at certain times of the day or week, provisioning enough capacity for the peak means paying for idle hardware most of the time.

Wowza has long had transcoding built in, but as they’re promoting their cloud offering, they document no way of scaling the server product. And if you’ve customized it a bit, the cloud may no longer be compatible. Not to mention costly.

The customer was running a single Wowza server in AWS, capable of transcoding a handful of simultaneous streams. It was good enough at the time, but as the portal grew in popularity it would require more and more resources, up to the point where even the most powerful AWS instance would no longer suffice. Furthermore, given the nature of the business, most events ran at the same time of day or week, so the ever-pricier single server would sit idle most of the time.

The proposed solution was quite simple, and the fact that they knew exactly what they wanted helped a lot. The existing Wowza would stay in place and be turned into a “master”. It would do no transcoding on its own but instead spin up ephemeral transcoder workers on independent EC2 instances. Upon a new stream being published to the “master”, it would start a transcoder instance, push the unprocessed stream to it, and pull the transcoded renditions back. Very little would change for the application layer, as broadcasting and playback would stay almost the same.
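To make the moving parts concrete, here’s a minimal sketch of what the “master” side might look like as a Wowza module, assuming the AWS SDK for Java v1. The module name, AMI id and instance type are placeholders, not the client’s actual code, and the stream push/pull wiring is left out:

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.wowza.wms.module.ModuleBase;
import com.wowza.wms.stream.IMediaStream;
import com.wowza.wms.stream.IMediaStreamActionNotify;

public class TranscoderPoolModule extends ModuleBase {

    // Placeholder: a pre-baked image with Wowza + the customized transcoder config
    private static final String TRANSCODER_AMI = "ami-0123456789abcdef0";

    // Wowza calls this for every stream created on the application
    public void onStreamCreate(IMediaStream stream) {
        stream.addClientListener(new IMediaStreamActionNotify() {

            public void onPublish(IMediaStream s, String name, boolean record, boolean append) {
                // One ephemeral EC2 worker per published stream
                AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
                String workerId = ec2.runInstances(new RunInstancesRequest()
                                .withImageId(TRANSCODER_AMI)
                                .withInstanceType("c5.xlarge") // sized to the transcoding load
                                .withMinCount(1).withMaxCount(1))
                        .getReservation().getInstances().get(0).getInstanceId();
                getLogger().info("Started transcoder " + workerId + " for stream " + name);
                // Omitted: wait for the worker to boot (the 1-2 minute delay mentioned
                // below), push the raw stream to it, pull the renditions back.
            }

            public void onUnPublish(IMediaStream s, String name, boolean record, boolean append) {
                // Terminate the worker here so you stop paying for it
            }

            // Remaining callbacks are irrelevant to this sketch
            public void onPlay(IMediaStream s, String name, double start, double len, int reset) {}
            public void onPause(IMediaStream s, boolean isPause, double location) {}
            public void onSeek(IMediaStream s, double location) {}
            public void onStop(IMediaStream s) {}
        });
    }
}
```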

There’s a downside to the approach: the processed (transcoded) feed only becomes available some 1-2 minutes after the source starts, this being the time needed for the instance to initialize. Yet, given the scheduled nature of the streams, this was not a problem.

They kindly agreed to share the solution with the world, and here it is, complete with some instructions. Do make sure it fits your needs.

Does it scale?

Not fully. Although it does no transcoding itself, there’s a limit to how many streams the single “master” can handle. But rest assured that number’s pretty high: I have a client running more than 700 streams on a common Wowza configuration.

There are ways to programmatically turn this into a multiple-“master” setup so that it scales all the way. If ever needed.

Is it stable?

I would say so. It’s been running since January and I’ve had no complaints.

Is it worth it?

It depends 🙂 It sure has been for them, and I still can’t think of a better setup to fit their needs. But again, this was a very particular use case to start with.

One thing left unmentioned: they had customized the transcoder to display some nice scoreboard overlays. Unless they were ready to recreate that, the solution had to stay within the Wowza confines.

Management of the transcoder instances could have been set up at some other level (e.g. Lambda), but we decided to build it right into the “master” Wowza, as that would be compact to deploy and easy to grasp for the team in place, who already had some exposure to coding Java for Wowza.

Not least, they had already invested in Wowza licensing long term. And it would pay off, as they’d continue to use it on the “master” (which has to run 24/7 anyway, as it also serves VOD) while paying by the hour for the transcoders’ licensing.

Even if none of this matches your case, the approach may still be viable. Especially if you come from a Wowza background and don’t want to spend much time shopping around for options.

Simple Video Sharing Platform

I would love to allow my users to upload their own videos to [presumably] AWS S3. As usual on the web, we cannot assume much about the uploaded videos.

The customer actually came up with the architecture; they just wanted to know if it was feasible and if it could be done easily.
Sure thing! The diagram hopefully says it all.

Using Presigned URLs is a great way to let your users (or anybody else) securely upload content straight to your S3 bucket, saving you the bother of having to proxy or manipulate large files.
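For illustration, issuing such a URL takes only a few lines with the AWS SDK for Java v1; the bucket name and the 15-minute expiry below are arbitrary choices:

```java
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.net.URL;
import java.util.Date;

public class UploadUrlService {

    // Returns a URL the user's browser can PUT the video to directly,
    // without the file ever touching your servers
    public static URL uploadUrlFor(String objectKey) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        Date expiry = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
        return s3.generatePresignedUrl("my-uploads-bucket", objectKey, expiry, HttpMethod.PUT);
    }
}
```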

Every video upload triggers a Lambda function, which in turn asks MediaConvert to process that video. It gets transcoded for Adaptive Bitrate and properly packaged for segmented delivery. When the job completes (or fails), a notification fires to let your backend know the video can now be played (or not).
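A minimal sketch of that Lambda in Java, assuming the output settings (renditions, packaging, destination bucket) live in a pre-made MediaConvert job template; the endpoint, role ARN and template name are placeholders, and note that MediaConvert requires an account-specific endpoint:

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.mediaconvert.AWSMediaConvert;
import com.amazonaws.services.mediaconvert.AWSMediaConvertClientBuilder;
import com.amazonaws.services.mediaconvert.model.CreateJobRequest;
import com.amazonaws.services.mediaconvert.model.Input;
import com.amazonaws.services.mediaconvert.model.JobSettings;

public class TranscodeOnUpload implements RequestHandler<S3Event, String> {

    // Placeholders: MediaConvert endpoints are account-specific (see DescribeEndpoints)
    private static final String MC_ENDPOINT = "https://abcd1234.mediaconvert.us-east-1.amazonaws.com";
    private static final String ROLE_ARN    = "arn:aws:iam::123456789012:role/MediaConvertRole";
    private static final String TEMPLATE    = "cmaf-abr"; // job template holding the output settings

    @Override
    public String handleRequest(S3Event event, Context context) {
        // The uploaded object that fired this invocation
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key    = event.getRecords().get(0).getS3().getObject().getKey();

        AWSMediaConvert mc = AWSMediaConvertClientBuilder.standard()
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration(MC_ENDPOINT, "us-east-1"))
                .build();

        // Queue a transcode of the new upload against the template
        mc.createJob(new CreateJobRequest()
                .withRole(ROLE_ARN)
                .withJobTemplate(TEMPLATE)
                .withSettings(new JobSettings()
                        .withInputs(new Input().withFileInput("s3://" + bucket + "/" + key))));

        return "queued " + key;
    }
}
```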

Content is packaged as CMAF, which brings significant savings in transcoding, storage and bandwidth over traditional HLS and DASH, while remaining compatible with both.

Original videos (the ones users upload) get archived to low-cost storage. Since processed, play-ready copies are already in place, you may never need the originals again; still, you don’t really want to throw them away. Just in case…
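That archiving needs no code of its own; an S3 lifecycle rule does it. Here’s a sketch via the AWS SDK for Java, where the bucket, the originals/ prefix and the 30-day delay are again placeholder choices:

```java
import java.util.Arrays;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.StorageClass;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

public class ArchiveOriginals {

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Move everything under originals/ to Glacier 30 days after upload
        BucketLifecycleConfiguration.Rule rule = new BucketLifecycleConfiguration.Rule()
                .withId("archive-originals")
                .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("originals/")))
                .withTransitions(Arrays.asList(new BucketLifecycleConfiguration.Transition()
                        .withDays(30)
                        .withStorageClass(StorageClass.Glacier)))
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        s3.setBucketLifecycleConfiguration("my-uploads-bucket",
                new BucketLifecycleConfiguration(Arrays.asList(rule)));
    }
}
```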

The complete solution, less the URL signing, is available for grabs here. You should be able to set it up and transcode a few short videos in less than 20 minutes.

Does it scale?

Oh yes! There’s virtually no bottleneck in the whole AWS-driven part of the architecture. As long as you can keep up with the requests for signed URLs and the SNS notifications, sky’s truly the limit.
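The notification side really is just a small consumer. Here’s a sketch of a Java Lambda handling it, assuming MediaConvert’s job-state events are routed to an SNS topic; the field names follow the JSON those events carry:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PlaybackReadyNotifier implements RequestHandler<SNSEvent, Void> {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public Void handleRequest(SNSEvent event, Context context) {
        for (SNSEvent.SNSRecord record : event.getRecords()) {
            try {
                // MediaConvert job-state events arrive as JSON in the message body
                JsonNode detail = MAPPER.readTree(record.getSNS().getMessage()).path("detail");
                String jobId  = detail.path("jobId").asText();
                String status = detail.path("status").asText(); // e.g. COMPLETE or ERROR

                // Mark the video as playable (or failed) in your own datastore here
                context.getLogger().log("Job " + jobId + " ended with status " + status);
            } catch (java.io.IOException e) {
                context.getLogger().log("Unparseable notification: " + e.getMessage());
            }
        }
        return null;
    }
}
```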

Is it expensive?

It depends. 🙂
Transcoding alone will set you back some $7 for every hour of content. But the default preset in use is overkill (10 quality variants); you can easily duplicate and customize it to cut costs proportionally. Trim it to 5 variants, say, and you roughly halve the bill. You can save even more with reserved pricing for MediaConvert once your portal reaches a steady flow of uploads.

You’ll also be paying for S3 storage, CDN traffic and internal traffic. Finally, expect pennies or nothing for Lambda and notifications.

Is it worth it over other solutions?

You can hardly do better if you need it scalable and easily deployed. It’s also maintenance-free, which many tend to disregard when weighing alternatives.

The hot topic here is transcoding, and there are many different approaches to it, each with its pros and cons. Factor in the volume, size and fluctuation of video uploads, required readiness (how soon after upload a video must be available, at minimum), SaaS vs. cloud vs. on-premises, development and testing capabilities, etc.

Low Cost Adaptive Bitrate Streaming

My videos are buffering…

It almost comes as a reflex these days, when a client tells you their video website is buffering a lot, to let them know they need Adaptive Bitrate. That’s the easy part. As the next logical questions are how long it will take and how much it will cost, I’m sure they all love to hear that “Well, it depends…”

And it does, quite a bit: you need to decide where and how to transcode your content, where to store it, what format to store and distribute it in, what player to use, and so on.

If low cost is key, your video library is small, and you have a spare computer you can keep busy transcoding for a few days, then you may like the simple solution here. It has been deployed (with adjustments) for a handful of clients, one of whom, Pierre, was kind enough to let us share the code, implementation details and some usage statistics after almost 2 years of running it.

The setup is meant to be deployed in AWS; however, the components can be adapted to other clouds or to dedicated infrastructure. Setting up a CDN is optional.

Content will reach your viewers via the HLS protocol. Compared to its more agile friend MPEG-DASH, it has been around longer and enjoys better support among free and low-cost players.

The scripts here will transcode your videos to 6 quality variants. They work on both Windows and Mac/Linux and will also generate the HLS manifest. Just put all your files in the originals folder and run the .cmd or .sh script. And wait…

Rather than transcoding directly to HLS, the quality variants are kept as .mp4 files. That spares you from uploading (hundreds of) thousands of tiny files to S3, and also saves you the storage cost of the HLS overhead (up to 15%). Transmuxing to HLS is performed on the fly by an nginx-vod-module instance set up between S3 and the CDN. There’s a cost to this, of course, but it runs just fine on a nano(!) EC2 instance.
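For reference, a stripped-down nginx-vod-module configuration in remote mode might look like the sketch below; the bucket name, locations and cache sizes are placeholders, not Pierre’s actual settings:

```nginx
http {
    # Local caches so popular chunks are pulled from S3 only once
    vod_metadata_cache metadata_cache 512m;
    vod_response_cache response_cache 128m;

    server {
        listen 80;

        # HLS manifests and segments are generated on the fly from the .mp4 renditions
        location /hls/ {
            vod hls;
            vod_mode remote;
            vod_upstream_location /s3;
        }

        # Internal route to the bucket holding the mp4 files
        location /s3/ {
            internal;
            proxy_pass https://my-video-bucket.s3.amazonaws.com/;
        }
    }
}
```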

At a glance, the setup looks like this:

Does it scale?

Viewer-wise, yes. There’s no limit to how many can watch, and no limit to how many can watch at the same time. There are 4 levels of caching (CloudFront, the VOD module’s, and Nginx’s own 2 levels), which ensure that the tiny EC2 instance has little chance of getting overwhelmed.

Content-wise, not really. A 15-minute video takes some 3 hours to process on a regular computer(!). If you only need to upload 1-2 videos a week, that’s no big deal. If you need more, processing time can be improved with trade-offs, but only by a factor of 2-3; consider cloud transcoding if that’s not good enough for you.

Is it stable?

It looks like it. Pierre’s Nginx has been running nonstop since May 2017 with no restart or upgrade (although one may come in useful, I guess). His library is only some 200GB, but I dare say the setup could easily take ten times as much. Nginx elegantly caches the video files in chunks (i.e. via range requests), while CloudFront just caches the HLS manifests and segments. This ensures that the most popular parts of the most popular items get cached first, saving resources and protecting the system from overload.

The only thing that would bring it to a crawl is a scenario where a lot of people suddenly start watching a lot of different pieces of content at the same time. But as content gets added gradually, that’s highly unlikely. Just in case your site goes viral, you can upgrade the EC2 instance in 2 minutes or so, with only partial downtime for clients. 🙂 And if that’s not enough because you’re instantly being watched by many millions, you can clone it as part of an Auto Scaling Group.

How cheap is it exactly?

You’d be paying some $45/year for an on-demand t3.nano instance, and you can easily cut that in half by purchasing a reservation.

S3 storage costs some $0.28 per GB per year, and you’ll be storing about 3.5GB for every hour of content; a 100-hour library thus runs roughly $98/year.

CDN traffic will be the bulk of your cost, and that’s hard to estimate: it depends on viewer count, viewers’ locations and their bandwidth capabilities. You do get 50 GB/mo free for the first year though, as part of the AWS free tier.

There’s also some internal traffic to account for, but expect that to be negligible (say, under $1/year). It’s the traffic on the S3-to-EC2 and EC2-to-CloudFront routes, but both are cached, so you’re safe.

And that’s it.