Video Bitrates and Export Myths

Video compression is vital. Getting the lowest file size with the best quality for the delivery medium doesn’t just save space, it also saves time and money.

Given the importance of video compression, and the complexity of the algorithms involved, it’s not surprising that there are plenty of myths and misconceptions surrounding this topic. So let’s start with a closer look at video compression in general, then we’ll break down some of these myths.

What is bitrate?

A term you’ll bump into almost immediately when exporting a file is bitrate. Video bitrate is simply the number of bits (ones and zeros) used per second to describe the video image. The bitrate times the duration (in seconds) gives the size of the file (in bits).

It’s measured in bits per second (bps), but since video files require a huge number of bits per second, video bitrate is most often referred to in megabits per second, one million bits each second, which is commonly written Mbps. A 10Mbps bitrate for 10 seconds makes a 100Mb file. A 20-second clip with the same bitrate would be 200Mb, and so on.
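In Python, that back-of-the-envelope math looks like this (a toy sketch of the arithmetic above, not any real API):

```python
def file_size_megabits(bitrate_mbps, duration_s):
    """Bitrate (Mbps) times duration (s) gives the file size in megabits."""
    return bitrate_mbps * duration_s

def file_size_megabytes(bitrate_mbps, duration_s):
    """Divide by 8 to convert megabits to megabytes."""
    return bitrate_mbps * duration_s / 8

print(file_size_megabits(10, 10))   # 100 Mb, as in the example above
print(file_size_megabits(10, 20))   # 200 Mb
print(file_size_megabytes(10, 10))  # 12.5 MB
```

Note the bits/bytes distinction: storage is usually quoted in bytes (MB), bitrates in bits (Mbps), which trips up many file-size estimates.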

Bitrate is the only thing that affects the file size.

Higher bitrates will often give you better quality, but not always. The choice of video codec will affect the quality a lot, since different codecs have very different compression algorithms. Some codecs are 8-bit only, some only do 4:2:0 color. Some can only create video files in certain frame sizes. Some compress each frame independently (intra-frame codecs) while others look at previous frames (inter-frame codecs), etc.

Some codecs are designed to perform best at high bitrates, while others are made to excel at lower bitrates. Different codecs also have individual thresholds beyond which raising the bitrate will not further improve the quality.

Given all of this, we can’t know much about the quality of the video from bitrate alone—and comparing the bitrates of different codecs doesn’t make much sense, since their ability to compress video with minimum quality loss may be very different.

Is 45Mbps a high bitrate for 1080p HD at 29.97fps? For a ProRes file it’s very low: it’s the bitrate of ProRes 422 Proxy, and you’ll see a lot of compression artifacts. For the XDCAM format, it was the standard for many broadcasters for more than a decade. For H.264, it’s high, with very few visual problems.

That’s why it’s not very informative when people say that their editing project is “huge, with more than 10TB of material attached.” If that footage is 6.5k Arri RAW Open Gate on ALEXA 65, the data rate is 5.89 Gbit/s at 24fps. 2.6TB per hour, so the ten terabytes is just short of four hours of footage, which is not much. If the footage is H.264 shot at 50Mbps, the ten terabytes will store hundreds of hours.
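The storage math behind those numbers is easy to sketch (using decimal units, so 1 TB = 8,000 gigabits):

```python
def hours_of_footage(storage_tb, bitrate_gbps):
    """How many hours of footage fit in a given amount of storage.

    storage_tb: storage in terabytes (decimal units, 1 TB = 8000 Gbit)
    bitrate_gbps: data rate in gigabits per second
    """
    total_gigabits = storage_tb * 8000
    seconds = total_gigabits / bitrate_gbps
    return seconds / 3600

# 6.5k ARRIRAW Open Gate at 24fps: 5.89 Gbit/s -> just short of 4 hours in 10 TB
print(round(hours_of_footage(10, 5.89), 1))  # ~3.8 hours

# H.264 at 50 Mbps = 0.05 Gbit/s -> hundreds of hours in the same 10 TB
print(round(hours_of_footage(10, 0.05)))     # ~444 hours
```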

Constant bitrate vs variable bitrate

Constant bitrate, or CBR, means exactly that; the bitrate is fairly constant, no matter what the content is. It’s useful for situations like broadcasting, where a consistent, predictable data stream is preferred. But CBR will sacrifice quality when the complexity of the scene pushes the encoder beyond its data limit. Variable bitrate, or VBR, uses the bits in a smarter way, increasing the bitrate when the image is complex, and reducing it when the image is simpler. VBR can be done in one or two passes. 1-pass VBR is faster, but it only analyzes a chunk of the video at a time. 2-pass VBR is slower, as it analyzes the whole video before allocating the bits.

The VBR algorithms may not always agree with you on what’s most important in the video, and may allocate too few bits to the areas you find most interesting. Maybe you don’t tolerate as much blocking or noise in the shadows, or as much smoothing, as the algorithms do. In these cases, try CBR instead; it may look better.

Unless you’re using CBR, the bitrates you set when exporting are not absolute. They’re target bitrates. If the footage is almost static, or if there’s very little detail in the subject, or a shallow depth of field, the bitrate will be lower because it’s easier for the algorithms to compress that image. If it’s a handheld shot from a phone, with a huge depth of field and lots of movement and detail—think clouds of confetti or a windy forest—the bitrate required will be higher.
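The idea behind 2-pass VBR can be sketched as a toy model: pass one measures how complex each second of the video is, and pass two hands out the total bit budget in proportion. The complexity scores below are made up for illustration; real encoders are vastly more sophisticated.

```python
def two_pass_allocate(complexity, target_avg_bitrate):
    """Toy 2-pass VBR: pass 1 has measured a per-second 'complexity' score;
    pass 2 spreads the total bit budget in proportion to that score."""
    total_budget = target_avg_bitrate * len(complexity)  # budget for the whole clip
    total_complexity = sum(complexity)
    return [total_budget * c / total_complexity for c in complexity]

# A talking-head intro (high scores) followed by static screen recording (low scores)
complexity = [8, 8, 2, 1, 1, 1, 1, 2]
bitrates = two_pass_allocate(complexity, target_avg_bitrate=3.0)  # Mbps average
print([round(b, 2) for b in bitrates])
# [8.0, 8.0, 2.0, 1.0, 1.0, 1.0, 1.0, 2.0] -- busy seconds get more bits,
# yet the average still lands exactly on the 3.0 Mbps target
```

This is why 2-pass needs to see the whole video first: without the full complexity picture, a 1-pass encoder can only guess how much budget to hold back for later scenes.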

A look at YouTube playback bitrates

It might surprise you to hear, but bitrates on YouTube videos are not created equal. The HD version of Metallica performing King Nothing (St. Louis, MO – November 5, 2023) has an average video bitrate of 1,367Kbps when watched in the AV1 format, while YouTube Premium users get up to 3,724Kbps in the VP9 format.

The Premium version is noticeably better (more detail, fewer artifacts, feels cleaner). There’s even a 1,762Kbps VP9 version, which is closer in quality to the AV1 version, and the 2,690Kbps AVC version also looks about the same as the AV1 version.

You can use Stats for nerds on YouTube to see what version you’re watching. Right-click on the video and choose Stats for nerds to see the codec and codec ID (typically a three-digit number in parentheses), then use this list to find out what that means.

In comparison, Mamoworld posted an HD tutorial on Getting Started with Automation Blocks for Premiere Pro which got a measly 439Kbps, six times lower than our Metallica example. And it must look awful, right?

No, it looks okay. But why is the bitrate so much lower?

A video tutorial with lots of almost static screen recordings of software doesn’t have as much movement and small details as the rock concert. Most of the screen is static for long periods at a time, with only the mouse and a few panels changing—there’s no image noise in screenshots, either. And when Mathias is on camera, it’s a locked down shot. This means there’s a lot of redundancy in consecutive frames, so the codec algorithms can compress the image very effectively without losing too much quality.

Of course, the Premium version of the Metallica concert was encoded with a different, more modern codec (VP9) than this tutorial example (H.264), so a direct bitrate comparison isn’t that useful. Mamoworld also has 20k subscribers, while Metallica has 10.1 million, so YouTube gives them higher priority. Hence, no VP9 version of the Mamoworld tutorial.

Note that the bitrate of 439Kbps in the Mamoworld video is an average. Bitrate Viewer shows that it varies between 17Kbps and 3,776Kbps, depending on how much movement and detail there is. The video starts with Mathias on screen, which demands a higher bitrate, while the middle part is mostly almost static screen recordings. This part has a much lower bitrate, and if you look carefully, you can spot the I-frames in the Group of Pictures, or GOP, showing as short spikes in bitrate.

Somewhere in between, 7 Years of Film Editing Knowledge in One Big Video, posted by This Guy Edits got an average of 1,645Kbps in the VP9 version.

Although this 96-minute piece has a lot of feature film snippets, they seem to be mostly low-resolution copies, which can be compressed more easily than full HD clips.

Video compression basics

Uncompressed video results in gigantic bitrates and therefore huge file sizes, so most video is compressed in some way. Different methods are used for different purposes.

A quick glance at the table here shows that uncompressed video will produce files that are prohibitively huge. High-quality versions of the most common editing codecs, AKA “mezzanine” codecs, give us significantly lower bitrates, but still large files.

The H.264 export presets in Premiere Pro deliver bitrates that are 60-70 times lower than the uncompressed ones—while YouTube streams may be more than a thousand times lower!

[Table: bitrates for uncompressed video, mezzanine codecs, and H.264 delivery formats]

Sources: Apple ProRes White Paper, Wikipedia, and Avid Knowledge Base. There may be some rounding errors.

Intra-frame codecs like ProRes and DNxHD compress each frame independently, much like JPEG. That makes them easy to edit, which is why they’re often used in post-production and for making proxies.

Inter-frame (long-GOP) codecs look at multiple frames at a time, using motion estimation to find repeating pixels and patterns, and can compress the signal much more effectively than intra-frame codecs, yielding much smaller file sizes. Without getting too deep into the weeds, they use a mixture of different frame types—Intra-coded (I), Predicted (P), and Bidirectional predicted (B)—which you’ll occasionally see listed as “IBBPBBPBB” or similar. This is the “group of pictures” that gives the term GOP its name.

Because of the smaller file size, they’re widely used in cameras, smart phones, and drones.

Inter-frame codecs like H.264 (AVC) and its successors are excellent for distribution and viewing purposes that just require playback. But they are harder to edit and scrub through, since displaying each frame (except for I-frames) depends on reading several other frames in a GOP.

Premiere Pro uses hardware acceleration to read some of these formats, so depending on your recording format, your CPU and your GPU, your system could struggle a lot, or scrub smoothly without dropping frames. Exports and renders too will be faster when the acceleration kicks in.

The codec choice depends on what your video is meant for. If you’re exporting a video file that will be edited further by you or someone else, or used as part of a film, you’ll mostly want to export to codecs like ProRes, AVC-Intra, or DNxHD. The files will be big but will withstand further adjustments like color grading very well. If the film is finished, and the file you export is for distribution only, you’ll likely be using H.264, or other highly compressed long-GOP codecs.

There are many other codecs available, such as VP9, AV1, etc., each with its own advantages and disadvantages. DCPs for cinema distribution and Netflix IMFs both use compression based on the JPEG2000 still image format.

Some codecs are protected by patents, which means NLE developers and streaming companies must pay to use them. This is why companies like YouTube are moving towards the AV1 and VP9 codecs, which are open and have no patent issues. Note that, while Apple ProRes is also a patented codec, the decoding is an SMPTE standard. So it’s safe to use as an archiving codec, as it will basically be readable forever.

Lossy and lossless compression

When encoders compress video, they’re removing redundant, repeating, or irrelevant data, such as consecutive frames that are similar or identical, or details that are not noticeable by the human eye. The objective is to make the file smaller without losing any significant information.

It’s certainly possible to do completely lossless compression, using algorithms like run-length encoding (RLE), Huffman encoding, etc. It’s a bit like compressing a folder into a ZIP file. Nothing is lost, even if the file size is smaller. The original files in the folder can be restored without any loss.
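Run-length encoding is simple enough to sketch in a few lines. Note how nothing is lost in the round trip:

```python
def rle_encode(data):
    """Run-length encoding: store each run as [value, count]. Lossless."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1       # extend the current run
        else:
            runs.append([value, 1])  # start a new run
    return runs

def rle_decode(runs):
    """Expand the runs back into the original sequence."""
    return [value for value, count in runs for _ in range(count)]

row = [255, 255, 255, 255, 0, 0, 255, 255]   # e.g. pixel values in one row
encoded = rle_encode(row)
print(encoded)                                # [[255, 4], [0, 2], [255, 2]]
assert rle_decode(encoded) == row             # nothing lost
```

Long runs of identical values compress very well; noisy data with no repeats can actually get bigger, which is one reason lossy codecs first transform the signal into something with lots of repeats.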

DCT (discrete cosine transform) is used in most of the video codecs you’ll encounter (and JPG images). In itself it’s not lossy, but the principle is to represent the signal in a way that’s easier to compress, and end up with small numbers that can be rounded off to make RLE-based compression algorithms happy. When we do such rounding, we introduce errors, and the original signal can no longer be recreated perfectly. We’re now doing lossy compression, and we’re also dramatically reducing the file size.
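Here’s a minimal sketch of that principle, using a hand-rolled 1-D DCT. Real codecs use 2-D DCTs on blocks of pixels with far more elaborate quantization, but the lossless-until-you-round behavior is the same:

```python
import math

def dct(x):
    """1-D DCT-II: represent the signal as cosine coefficients."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct(X):
    """1-D inverse DCT (DCT-III), scaled to undo dct() exactly."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

signal = [52, 55, 61, 66, 70, 61, 64, 73]      # eight luma samples
coeffs = dct(signal)

# Without rounding, the transform is lossless:
assert all(abs(a - b) < 1e-9 for a, b in zip(idct(coeffs), signal))

# Quantizing (rounding) the coefficients makes it lossy -- and compressible:
step = 10
quantized = [round(c / step) for c in coeffs]   # many become 0: easy to RLE
restored = idct([q * step for q in quantized])
print([round(v) for v in restored])             # close to, but not exactly, signal
```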

If we reduce the bit depth, we can save a lot of data. We also use math to convert the RGB image to YUV, where Y is the black and white info, and U and V contain the color info. Then we can use chroma subsampling, lowering the resolution in our color signals, to reduce the data by an additional 33% (4:2:2) or 50% (4:2:0), compared to 4:4:4. Don’t worry if you don’t thoroughly understand those chroma subsampling numbers. All you need to know is that higher numbers mean higher color resolution.
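You can derive those percentages from the J:a:b notation itself. This sketch assumes the common two-row reading of the notation (J luma samples per row, “a” chroma samples in the first row, “b” in the second):

```python
def samples_per_pixel(j, a, b):
    """Average samples per pixel for J:a:b chroma subsampling."""
    luma = 1.0                          # one Y sample per pixel
    chroma = 2 * (a + b) / (2 * j)      # U and V, averaged over the 2-row block
    return luma + chroma

full = samples_per_pixel(4, 4, 4)       # 3.0 samples per pixel
for name, (j, a, b) in {"4:2:2": (4, 2, 2), "4:2:0": (4, 2, 0)}.items():
    s = samples_per_pixel(j, a, b)
    saving = (1 - s / full) * 100
    print(f"{name}: {s} samples/px, {saving:.0f}% less data than 4:4:4")
```

This prints 2.0 samples/px (33% less) for 4:2:2 and 1.5 samples/px (50% less) for 4:2:0, matching the figures above.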

Encoders can also do things to the image before encoding long-GOP codecs, like noise reduction, blurring, skin smoothing, etc., to help the algorithms compress better.

YouTube Help says: “YouTube’s video processing may remove some noise to achieve streaming bitrates. You can get more control by denoising your video before rendering it for upload. Denoising can also help if your video looks too “compressed” when streamed.”

As you can see, there are many ways to reduce the bitrate. But we’re now throwing away a lot of info for each pixel, so we’re definitely getting lossy results. Surprisingly, you may not be able to see any difference in quality between an 8-bit and a 12-bit image, or between 4:4:4 color and 4:2:0 color. Since our eyes don’t see as much detail in color as we do in black and white (you know, rods and cones) the video may still look very good!

But when we do things like color grading or keying and compositing, then throwing away all that info keeps us from getting a clean result. Recordings from consumer and prosumer cameras are often heavily compressed, and may initially look good, but they can break down rapidly when doing post production work.

When we push the limits of the compression methods, we get compression artifacts like DCT blocking, ringing, posterizing and “mosquito noise”. The more we lower the bitrate, the more artifacts we get. For a visual explanation of how lossy compression looks, watch “Why Snow and Confetti Ruin YouTube Video Quality” by Tom Scott.

Tom Scott pushes the algorithms to their limits and beyond in this YouTube video.

If you want to really understand all the settings for H.264 in the Export tab, read about H.264 profiles and levels on Wikipedia.

Hardware vs. software decoding and encoding

Most contemporary CPUs and GPUs have hardware support for decoding and encoding video formats, ranging from ProRes to AV1. Apple and AMD aren’t all that open when it comes to listing what codecs are supported by which hardware, but you’ll find details for Intel and Nvidia on the links below.

Nvidia hardware codec support

Intel hardware codec support

Premiere Pro and other software can use this built-in hardware support to make decoding and encoding these codecs much easier and faster. If you choose a setting in Premiere Pro export that falls outside your hardware encoder’s abilities—which can be affected by frame size, frame rate, 2-pass VBR, bitrate, profile/level, etc.—you’ll see a message saying, “Your system’s hardware does not support hardware acceleration for the current settings.”

In these cases, encoding on the CPU (software encoding) will be the only option unless you change the settings to fall within the limits of your hardware.

Be aware that hardware encoding is primarily built for speed, not necessarily for quality. Hardware accelerated VBR exports are always 1-pass, and some only support 4:2:0 color.

There have been reports that hardware encoding delivers noticeably lower quality than software encoding at the same bitrate, especially compared to 2-pass software encoding. But you pay for that quality with time: software exports take much, much longer.

If your system has both Intel and Nvidia accelerated decoding available, Premiere Pro prioritizes the Intel decoding over the Nvidia decoding. If your timeline has a hardware decoded H.264 source clip, and you’re exporting to a hardware encoded H.264 video file, Premiere Pro will use the Intel acceleration for decoding, even if Nvidia is chosen. This avoids issues with decoding and encoding at the same time with Nvidia acceleration.

Interlacing? Please no!

Interlacing is harder to encode, since you really have two half-res image sequences, with no shared pixels, taken 1/50th or 1/60th of a second apart. In many areas, broadcast is still produced at 1080i50 or 1080i60, which means 50 or 60 half-res images per second—interlaced. The European Broadcasting Union did extensive testing of digital transmission of 1080i50 and 720p50 (50 progressive images at 1280×720 px). They found that 720p50 gave a much better perceived image quality than 1080i50 at a given bitrate.

Since bandwidth is a cost, most broadcasters have been transmitting 720p50 for many years, with very few complaints. Many of them are now switching to 1080p50 and even 4k for digital distribution. Avoid interlacing if you’re not specifically asked to deliver interlaced video.

Frame size and frame rate matter

UHD (3840x2160px) is four times as many pixels as standard HD (1920x1080px). That means you’ll have to use a lot more bits to describe the image without any loss in quality—but not four times as many. In an image with higher resolution, more neighboring pixels will have smaller differences, which helps the codecs describe the image using fewer bits. You don’t need as much as four times the bitrate for UHD compared to HD, but rather a little bit less than that, if you want to keep the same quality.

In Premiere Pro, the “High Quality 1080p HD” preset has a target bitrate of 20Mbps while the “High Quality 2160p 4k” preset has 80Mbps—exactly four times as high. You should therefore, according to theory, get a slightly better image quality with the “High Quality 2160p 4k” preset.
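You can verify that those two presets spend the same number of bits per pixel. This sketch assumes 29.97fps for both exports (the presets typically match the source frame rate):

```python
hd = 1920 * 1080          # 2,073,600 pixels
uhd = 3840 * 2160         # 8,294,400 pixels -- exactly 4x HD
print(uhd / hd)           # 4.0

def bits_per_pixel(bitrate_mbps, width, height, fps):
    """Average bits spent on each pixel of each frame."""
    return bitrate_mbps * 1_000_000 / (width * height * fps)

# Premiere Pro presets, assuming 29.97fps for both:
print(round(bits_per_pixel(20, 1920, 1080, 29.97), 3))  # "High Quality 1080p HD"
print(round(bits_per_pixel(80, 3840, 2160, 29.97), 3))  # "High Quality 2160p 4k"
```

Both presets work out to the same bits per pixel. Since larger frames compress slightly more efficiently, the 4k preset should come out marginally ahead in quality, as the theory above predicts.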

For many purposes, exporting a high-quality HD file can be a much better choice than exporting a low-quality UHD file. For the same reason, delivering a high-quality 25fps file instead of a lower quality 50fps file will also be better in many cases.

This is equally relevant when you’re exporting an audio podcast (yes, even audio-only podcasts are often edited in Premiere Pro, especially since we got speech to text). If the only stereo content is the jingles at the start and end, you may want to export mono audio, so the whole audio bitrate is spent on one channel instead of two. Even so, you can’t halve the bitrate and keep the same quality. But you can reduce it quite a bit, or get better sounding audio if you keep the bitrate the same.

Take advantage of Smart Rendering for faster exports

If you’re exporting long or complex timelines, you may benefit from exporting a ProRes master file, and use that file to create the H.264 export. This has the extra benefit of giving you a good quality file for the archives—maybe even with multichannel audio containing stereo mix, nat sound, dialogue, ambience, music, etc. on separate channels.

To use Smart Rendering—for super-fast exports—I recommend using ProRes 422 or ProRes 422 HQ as the preview codec in Premiere Pro, instead of the default ProRes LT. Use the flavor you want your master file to have. If you then render previews of parts of the timeline whenever you take a break, you can benefit from smart rendering by using these preview renders when you export. On large projects, you can even do overnight renders, just like we did in the old days.

Smart Rendering means your NLE is mainly copying frames, with no re-encoding. Avid DNxHR would also be a good choice for Smart Rendering. 

Apps for calculating and checking bitrate and file size

There are several apps and web pages that can be useful when calculating storage needs, file sizes, bitrates etc.

The AJA Data Rate Calculator, aka AJA DataCalc, computes storage requirements for professional video and audio media. Available for macOS, iOS, and Android.

The ARRI Formats and Data Rate Calculator is an online tool that calculates the recording durations and data rates for different Arri camera settings.

Wonder what’s inside your video file? Use MediaInfo, a free and open-source software that displays technical information about media files, as well as tag information. You’ll get info on format, codec, chroma subsampling, bitrate, bit depth, frame rate, etc.


OK, now that you understand how video compression codecs work, it’s time to have a look at the myths. See the end of this section for a summary.

Myth #1: MOV is better quality than MP4. (False)

MOV and MP4 are containers. Containers can have different codecs inside them, and it is the codec that describes the image (and audio). A MOV and an MP4 could have the exact same H.264 data stream describing the pixels in the video in exactly the same way.

There are other differences: you can’t have a timecode track in an MP4 like you can in a MOV, and a MOV can contain more codecs (and, in Premiere Pro, more audio tracks) than an MP4.

The container format does not directly affect video or audio quality.

Myth #2: Higher bitrate yields higher quality. (Not always true)

It’s commonly assumed that higher bitrates yield better video quality. Here’s a quote from an online article on video bitrates:

“The bitrate has a direct correlation to the quality of the video. Higher bitrates result in superior video quality but larger file sizes as well.”

One part of the statement is true: It’s a fact that a higher bitrate results in a larger file size. After all, bitrate is the only thing that affects file size. The problem with the quality aspect of the statement is that, without telling you so, it assumes that everything else stays the same: frame size, frame rate, codec, etc. Even keeping frame rate and frame size constant, you could encode to four different codecs with four different bitrates, and there’s no guarantee that the one with the highest bitrate will have the best quality.

As we’ve seen, differences in the footage also affect how efficient the compression will be. Modern codecs are more efficient and produce higher image quality than older codecs at the same bitrate. The good old QuickTime Animation codec results in a huge bitrate, but does it have great quality? No, it’s 8-bit only! It also won’t work in FCP, in Finder, on phones, or in many video player apps. If you need transparency, use the more modern ProRes 4444 and get smaller files (still large) with better quality.

The quality may even differ between encoders. Some encoders can outperform Adobe Media Encoder because they use more advanced algorithms. YouTube has some very good encoding algorithms. Try exporting a ProRes HQ master file from Premiere Pro and then use Handbrake or Shutter Encoder to convert it to H.264. They both use the same FFmpeg engine.

So, higher bitrates do not necessarily mean better image quality.

Myth #3: Export bitrate shouldn’t exceed the footage bitrate. (False)

No, this is just wrong. You may have shot 1080p AVCHD video at about 15Mbps. Then you add some text and graphics, and maybe some transitions, stabilizing and color grading. All of these calculations are done in 32-bit internally in your NLE.

Should you then deliberately degrade the quality of all these things when exporting an MP4 file, just because the video was shot at a low bitrate? Absolutely not!

For screen recordings it’s almost impossible to get down to the original bitrate, which can be extremely low if the content is a software tutorial or something similar, with very little movement. When we add transitions, graphics, zoom ins etc., we’ll have to export at a higher bitrate than our original screen recordings, or it’ll look bad.

Our output should definitely not be dictated by the bitrate of some video files used.

Myth #4: The YouTube HD preset is perfect for YouTube. (False)

We had a look at the bitrates we get when watching YouTube earlier in this article. What about the files we upload to YouTube? Since the bitrate we get from YouTube is so low, it seems logical that we don’t need to upload videos with a high bitrate. But that would be a mistake.

First, let’s make it clear that the file you upload to YouTube and other social media is never seen by the users. YouTube and all other social media will use that uploaded file to make copies at different frame sizes and bitrates. It’s these copies that are shown when you watch video on social media. Better originals make better copies!

The better your uploaded file is, the better the copies will be. Pre-compression to a low bitrate before uploading to YouTube often results in visual artifacts, especially in the shadows.

The YouTube export presets in Premiere Pro and Adobe Media Encoder are all H.264. The 1080p HD version has a target bitrate of 16Mbps. It’s not a terribly bad quality preset, but these presets are not made for best quality, they’re a compromise between file size/upload times and quality.

Since the file will be re-encoded by YouTube, you want to upload the best quality you can afford (time- and storage-wise) as long as you don’t exceed the maximum file size of 256GB.

Uploading a ProRes HQ file to YouTube yields a better result from YouTube’s re-compression than H.264, even if the resulting bitrate you get back is the same. Whatever you choose, don’t use the bitrate recommended by YouTube: a measly 8Mbps. Don’t trust advice from social media companies. Facebook recommends 3-6Mbps for 30fps HD video. Then again, they also recommend a 1280×720 frame size, so maybe their specs need a little updating.

For an extreme example of how re-compression degrades video, watch “This Is What Happens When You Re-Upload a YouTube Video 1000 Times!” by Marques Brownlee.

The YouTube HD presets are OK, but will not give you the best possible quality.

Myth #5: Just adjust your Target Bitrate to get your desired file size. (False)

In Facebook groups and forums, when someone asks, “I have edited a short video, and I need the file size to be less than 500MB” people will often answer, “Just set the bitrate, and Premiere Pro will calculate the file size accurately.”

Most codecs, including Apple ProRes, have a variable bitrate (VBR), so the actual bitrate for a ProRes file may be a bit lower, or up to 10% higher, than the “norm” for that flavor, depending on the complexity of the content.

As we have seen, the target bitrate you set is not necessarily the one you get, since screen recordings and action-filled scenes will result in different bitrates. The Estimated File Size in the Premiere Pro Export mode is just that: an estimate, based on “normal” content, and it may very well be wrong. Premiere Pro and Adobe Media Encoder calculate the exact size for CBR, and estimate this as the size for VBR. You won’t know until you’ve exported the file whether this was a good guess.

Myth #6: Video compression always reduces video quality. (False)

Lossy video compression removes some data from the video file permanently, making it impossible to recover it later. Lossless video compression preserves all the data from the video file intact. However, lossless video compression results in large file sizes, which may not be desirable for some purposes.

So video compression is not always lossy, but because of the large file sizes that lossless video compression creates, video compression will reduce quality in most practical cases—but not always.

Here’s a surprising fact: some lossy video compression techniques can actually improve the perceived quality of a video by removing noise, smoothing edges, etc.

Myth #7: 2-pass encoding is always better. (Partly true)

Yes, 2-pass encoding should theoretically always give you a better result than 1-pass encoding. But the difference mostly shows at low bitrates. Above about 10Mbps (for 1080p HD) you’d be hard pressed to spot it, though material with lots of fine detail and motion may still benefit.

Hardware accelerated export is always 1-pass encoding.

Myth #8: Exporting high quality video takes a long time. (False)

Smart Rendering and hardware accelerated decoding and encoding can help us export very high quality at super speed. So no, this is not necessarily true. Hardware accelerated encoding does not support every flavor of every codec, though, so if you need a special flavor of H.264 or some other codec, you may experience slow export times.

But, as an example, exporting a high quality ProRes HQ video file can be super-fast if you take advantage of Smart Rendering and/or Hardware Accelerated encoding (macOS on Apple Silicon only).

Myth #9: Upscaling to 4k gives you better HD quality. (Partly true)

Some users report that they get better quality on YouTube when they upscale their HD videos to 4k before uploading them. YouTube will use a more modern codec for 4k uploads, so this will help in some cases.

Will your viewers spot the difference? There’s a good chance they will if they get to see the “4k” version; if they see the HD version, they may or may not. With AI upscaling, the 4k video may turn out so well that you’ll see a difference in every version.

You’ll have to do a test to see if it’s worth the extra effort. 

Myth #10: Exporting at a high bitrate will make the video stutter. (Partly true)

In almost all cases this is wrong. No social media channel (YouTube, TikTok, Instagram, X, Facebook etc.) will ever show the file you upload. No one will ever see it! The viewers get to see a copy of the original, encoded in different sizes and bitrates for different viewing situations.

The original you upload may be stored on their servers, waiting for more efficient encoding algorithms in the future, though. In some places, like Vimeo, you can choose to let viewers download the original file that you uploaded, but they will not stream it.

Only if you’re exporting a file to a self-hosted video service that shows the original video file, or for local playback on a system, will the bitrate have any impact on playback performance.

Myth #11: You can’t export ProRes on a Windows system. (False)

For some reason, this is still a common misconception. You can definitely export ProRes .mov files from Premiere Pro on a Windows system. Premiere Pro supports importing, editing, rendering and exporting ProRes in most of its flavors.

Callout: Just a quick warning about exporting ProRes from Premiere Pro: The presets are all set to render in 8-bit. Always change this to 16-bit to ensure you get a true 10-bit output, to avoid banding and posterization.

I also recommend that you enable Render at Maximum Depth and Use Maximum Render Quality for all exports out of Premiere Pro. For an in-depth analysis of these settings, read my article on Maximum Render Quality in Premiere Pro.

Summary of myths

[Table: summary of myths]


If the only thing you know about a file is the bitrate, you don’t know much about the quality. Video compression algorithms are complicated and hugely different, so directly comparing the bitrates of two files with different codecs isn’t meaningful.

If you want the best possible quality on YouTube, Vimeo, etc. you should upload a high-quality 4k file in DNxHR or ProRes HQ format, or better, even when the original is HD.

No matter what you do, videos with high-speed particles like confetti, and videos with lots of moving small details, will fall apart and look bad on social media platforms, because the algorithms are pushed too far at the low bitrates.

Jarle Leirpoll

Jarle Leirpoll is a film maker based out of Norway, and author of "The Cool Stuff in Premiere Pro". He runs a website where he shares free templates, presets and projects. Jarle has trained people at top broadcasters and media production companies like Disney, BBC, NRK, DR, Swedish TV4, Warner Bros, Discovery Channel and countless others.