Update: After I initially wrote this article, YouTube introduced a new codec called AV1 (aka AV01).
AV1 is not to be confused with the older AVC1 codec I write about in this article.
AV1 offers better quality than VP9, even at lower bitrates.
It doesn't look like you can force YouTube to serve your own videos with the AV1 codec.
It appears that YouTube is still testing the codec on a selection of videos.
But you can tell YouTube to prefer AV1 for playback whenever possible by going to the playback performance page under your YouTube account settings and choosing "Always prefer AV1".
Then you can go to the AV1 Beta playlist page and watch videos compressed with the AV1 format.
You can see if a video plays with AV1 by using “Stats for nerds,” as described under the “How to know which codec YouTube has used to compress your video” subheading further down in this article.
So you’ve just finished editing a video, and everything looks great when you watch it on your computer.
Then you upload it to YouTube, and the image quality looks horrible due to the compression YouTube applies to your footage. Sucks, right?
In this article, I’ll guide you through some tips and tricks to ensure you maintain the quality of the video after you’ve published it to YouTube.
It’s a simple step-by-step guide that covers all the best practices for exporting videos with the highest quality possible.
But first, let’s look at what can influence YouTube video quality…
Why is the image quality bad on YouTube?
No matter what codec you use when you export your video from your editing program of choice, YouTube will apply extra compression to your video content when you upload it to reduce the video file size.
That's great for keeping file sizes to a minimum on YouTube's tiny servers (irony detection alert!) and for ensuring playback is possible for users with a poor internet connection.
But it sucks if you're a YouTube content creator who has just spent weeks in the video editing cave creating a new video, because this extra compression decreases the image quality of your video.
When YouTube compresses your videos, it can cause digital artifacts.
You can experience anything from blockiness to banding, bad skin tones, and blurry YouTube videos.
Such artifacts are often seen in dark areas or shadows, in large areas of a single color, and in blurry, shallow depth-of-field backgrounds.
Digital artifacts are also common in footage containing film grain, smoke, or other random particles.
The reason for this lies in how interframe compression is designed.
Interframe compression doesn't like randomness in video footage
Interframe compression works by throwing away information (data) that appears similar across frames, e.g., a dark corner in a kitchen scene in your short film.
With only minor changes from frame to frame (such as in shadows), the codec might interpret the area as a single static dark region and throw away much of the information in the shadows.
You’ll end up with a lot of digital blockiness in the dark areas of your frame.
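To make this concrete, here's a toy Python sketch of the principle (my own illustration, not YouTube's actual algorithm): a nearly static dark region produces tiny frame-to-frame differences that quantize away to nothing, which is exactly where shadow detail disappears.

```python
import numpy as np

# Two consecutive "frames" of a dark, nearly static corner (grayscale pixel values).
frame1 = np.array([[12, 13, 12],
                   [11, 12, 13],
                   [12, 12, 11]])
frame2 = np.array([[12, 12, 13],
                   [11, 13, 12],
                   [12, 11, 12]])

# Interframe coding stores (roughly) the difference from the previous frame.
delta = frame2 - frame1

# Small differences quantize to zero and cost almost no bits to store.
quantized = np.where(np.abs(delta) < 3, 0, delta)
print(quantized)  # all zeros: the codec treats the corner as unchanged
```

Film grain, smoke, or rain breaks this assumption: the differences never quantize to zero, so the codec either spends bits it doesn't have or smears the detail away.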
YouTube's compression can also have difficulty with footage that contains a lot of randomness (such as film grain).
The arbitrary nature of film grain, smoke, rain, snow, and other random visible particles will make every frame contain vastly different information from the last.
Randomness will give a codec a hard time. That doesn’t keep the algorithm from trying, resulting in your nice clean video ending up looking like digital mayhem.
But don’t take it from me. Watch this excellent video by Tom Scott, which shows what compression can do to footage with many random elements.
I find it interesting that randomness causes bad image quality on YouTube, because manually adding random noise is a well-known trick to battle banding and compression artifacts in still photos you upload to, e.g., Facebook and Instagram.
But if you add heavy film grain (which is random) to your video to get a more organic look, or if your footage is underexposed and noisy, you're setting yourself up for failure.
The worst thing you can do in this regard is to manually add black bars plus film grain to your video.
That’s a big no-no! Have I done this when I first started? Oh yes… everyone wants that cinematic look, right?!
How to increase your image quality on YouTube
So what can you do about it?
When you upload a video to your YouTube channel, YouTube will reencode and recompress your footage. This will cause a loss of quality no matter what you do.
So it doesn't matter much which codec you use to export your video, as long as you use a container that YouTube accepts and a bitrate setting that doesn't degrade the footage too much.
YouTube has two codecs it automatically chooses between: AVC1 (the H.264 codec) and VP9. The VP9 codec offers much higher picture quality than AVC1.
How to know which codec YouTube has used to compress your video
But how do you know which codec YouTube has used to reencode your video?
To see which codec YouTube serves, right-click anywhere on the video player and select "Stats for nerds". The "Codecs" row shows the video codec in use, e.g., avc1, vp09, or av01.
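If you prefer the command line, the third-party yt-dlp tool can list every format YouTube has generated for a video, including the codec of each stream. Here's a minimal sketch using its Python API (assuming you've installed it with `pip install yt-dlp`; the URL is a placeholder):

```python
from yt_dlp import YoutubeDL

URL = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder: any YouTube video

with YoutubeDL({"quiet": True}) as ydl:
    info = ydl.extract_info(URL, download=False)

# Print the video codec YouTube serves at each resolution.
for f in info["formats"]:
    if f.get("vcodec") and f["vcodec"] != "none":
        print(f"{f.get('height')}p -> {f['vcodec']}")
# Typical output mixes avc1.* and vp09.* entries, plus av01.* on AV1-enabled videos.
```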
Force YouTube to always use VP9
I've read about several methods that supposedly force YouTube to always use VP9 whenever you upload a video.
One of them involves adding 1% saturation to your video with YouTube's video editor, which seemed to work for a while. But I've not found it to work any longer.
Another trick was to upload a single short video in 4K.
While it works for the 4K video itself, I've found that it doesn't automatically apply VP9 to later videos uploaded at lower resolutions like 1080p.
So how can you increase the quality of the YouTube video, then?
There seems to be a consensus among YouTube users online that if you run a big channel with millions of subscribers, VP9 will be the codec applied as standard.
Since I don’t have such a channel, I have no way of testing whether this is true. If you do, please share your experience in the comment section below.
You have to take another approach if you’re not a big YouTube star.
If you browse to YouTube's help page on recommended upload encoding settings, you'll find its recommended video bitrates for SDR and HDR uploads, presented in a couple of tables (one for SDR and one for HDR).
I’ve taken the liberty to reproduce the most commonly used video sizes for SDR (as HDR videos aren’t that common yet) in the table below:
| Video size | 2160p (4K) | 1440p (2K) | 1080p (Full HD) | 720p (HD) |
| --- | --- | --- | --- | --- |
| Video bitrate for 24, 25, and 30 fps | 35-45 Mbps | 16 Mbps | 8 Mbps | 5 Mbps |
| Video bitrate for 48, 50, and 60 fps | 53-68 Mbps | 24 Mbps | 12 Mbps | 7.5 Mbps |
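As a concrete example, here's how you might hit the table's 1080p target of 8 Mbps at 25 fps with the free ffmpeg tool, called from Python here (a sketch with placeholder filenames; the 384 kbps stereo audio follows YouTube's recommended audio bitrate):

```python
import subprocess

# Re-encode a 1080p 25 fps master at YouTube's recommended 8 Mbps:
# H.264 video plus AAC audio in an MP4 container, which YouTube accepts.
subprocess.run([
    "ffmpeg",
    "-i", "master.mov",       # placeholder input file
    "-c:v", "libx264",
    "-b:v", "8M",             # video bitrate from the SDR table above
    "-maxrate", "8M",
    "-bufsize", "16M",
    "-pix_fmt", "yuv420p",    # widely compatible chroma subsampling
    "-c:a", "aac",
    "-b:a", "384k",           # YouTube's recommended stereo audio bitrate
    "youtube_1080p.mp4",      # placeholder output file
], check=True)
```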
Now, this is interesting. The page says that any 4K video will be rendered with VP9.
So the first solution I’ve found is to export your 1080p Full HD videos as if they were 4K.
They may not look beautiful in 4K (or 2K, for that matter), but in Full HD, they will look normal. And because they were uploaded as 4K, YouTube will use the VP9 codec even when they're played back in 1080p.
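For instance, here's how a 1080p master could be upscaled to 4K with ffmpeg before uploading (again a sketch with placeholder filenames; the 40 Mbps bitrate sits inside the table's 35-45 Mbps range for 4K at 24-30 fps):

```python
import subprocess

# Upscale a 1080p master to 3840x2160 so YouTube treats the upload as 4K
# and, in my tests, serves it with VP9.
subprocess.run([
    "ffmpeg",
    "-i", "master_1080p.mov",                 # placeholder input file
    "-vf", "scale=3840:2160:flags=lanczos",   # Lanczos scaling keeps edges crisp
    "-c:v", "libx264",
    "-b:v", "40M",            # inside the 35-45 Mbps range for 4K at 24-30 fps
    "-pix_fmt", "yuv420p",
    "-c:a", "aac",
    "-b:a", "384k",
    "youtube_4k_upload.mp4",  # placeholder output file
], check=True)
```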
I then tried to upload a video in 1440p (2K) with a frame rate of 50 fps, and lo and behold, it also used the VP9 codec.
This suggests that high-resolution processing somehow triggers VP9 as the default setting. Nice!
But the same video in 1080p at 50 fps had the AVC1 codec applied instead.
I also tried uploading a 1440p 25 fps video with the recommended bitrate settings, and that video was served with the AVC1 codec as well.
So I thought that maybe if I fiddled with the bitrate, I might be able to force YouTube to use VP9.
I exported the video as Full HD but with the bitrate settings used for 4K.
But no luck! The bitrate settings alone don't decide which codec YouTube applies.
This suggests that what triggers VP9 is resolution, not bitrate: anything above 2K seems to do it by default, while 2K needs a high frame rate as well.
So what can we learn from this? Well, a couple of things:
- YouTube doesn't like randomness in video footage because of the interframe compression, so it is not wise to add grain to your film.
- Adding noise to reduce banding is only advisable for still photos, not video.
- Always remember to expose correctly, so your footage isn't underexposed and noisy. Check out my guide to shooting in the dark to reduce grain.
- The VP9 codec offers better image quality than the AVC1 codec.
- If your channel is big enough, you might be lucky and have VP9 as the default codec. I still haven't seen proof of this.
There seem to be two ways to manually trigger the VP9 codec for your footage during the upload process:
- Upload your video in 4K (2160p), or
- Upload your video in 2K (1440p) at 50 fps. 48 fps might also work; I haven't tested this.
In other words, lower-resolution uploads don't seem to trigger the VP9 codec, so for the best viewing experience, upload at the highest resolution possible.
Did I miss anything? Do you have a better way? Or did you spot a fault in my approach? Please let me know in the comments.
About the author:
Jan Sørup is a videographer and photographer from Denmark. He owns filmdaft.com and the Danish company Apertura, which produces video content for big companies in Denmark and Scandinavia. Jan has a background in music, has drawn webcomics, and is a former lecturer at the University of Copenhagen.