End of Codec Wars?

Can AV1 deliver and, together with FLAC and Opus, make the codec fights history? If AV1 is as good as they promise, then free, royalty-free codecs will be the best at everything: lossy and lossless audio, and now video with AV1. There should be no reason to use anything else in my book. Mux those into MKV or its imouto format WebM, both free as well.
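Remuxing into either container is a one-liner already, e.g. with ffmpeg (filenames are placeholders; WebM is restricted to VP8/VP9/AV1 video plus Vorbis/Opus audio, so the FLAC case stays in MKV):

ffmpeg -i video.mkv -i audio.flac -map 0:v -map 1:a -c copy muxed.mkv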


Even the typically tech-savvy anime scene is, to this day, mostly sticking with the h.264 + AAC combo. Will they be the first to switch to AV1 + Opus? They skipped the VP9 + Opus combo for some reason.

Attached: free-codecs-and-containers.png (768x569, 172.54K)

/thread

Attached: standards.png (500x283, 23.74K)

Epic, you must be quite intelligent, posting xkcd comics and all! Too bad you have to go when summer comes to an end, we'll surely miss your valuable input.

...

There was no (free and efficient) encoder for VP9.
By efficient I mean reaching at least the same ratio of file size, quality, and power draw as x264, but targeting the VP9 bitstream.
Will there be one for AV1? Can it really be called free and open if an efficient encoder is effectively not available to everybody?

If you don't fall for the 4K meme, x264 is already the best there is. The only reason to wait for something else would indeed be the royalty-free aspect of AV1 and HDR support (and hardware decoding for 10-12 bit, if you're a normie watching stuff on your jewphone).
I'm way more hyped for a replacement to JPEG and maybe PNG.

The AOMedia group behind AV1 is pushing hard to have hardware decoders ready in chipsets by 2020. It's a much bigger interest group than the one behind VP8 and VP9.


Do you even understand what topic is? Can you read? Are you that completely illiterate communist polish child? Fuck off.

It's called FLIF

Nope, because whatever happens, AV1 will be late to the market. h.265 is already out and already supported.
In 10 years, when we're talking about h.266, maybe whatever the free software community is working on by then will have a chance.

Sadly, it's kind of abandoned (all the last commits are build system stuff).

The problem is that who adopts the codec is what matters. Look at who's behind AV1 (en.wikipedia.org/wiki/Alliance_for_Open_Media).

How does it perform for animation? I've always felt 2D animation would benefit from a custom compressor vs live action video.
Maybe a PNG-like compression format?

That's what tuning is for. x264 has `-tune animation`, for example.
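A rough ffmpeg invocation as an example (filenames are placeholders; CRF and preset to taste):

ffmpeg -i episode.mkv -c:v libx264 -tune animation -preset slower -crf 18 -c:a copy episode_x264.mkv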

AV1 will be a thing because the companies involved are terrified of having to pay streaming royalties.

Reminder that x264 is the encoder software, and h.264 is the codec.
Reminder that Xvid produces better results than h.264 at Blu-ray rates (>2500 kbps).
Reminder that the .mkv container can hold virtually anything (including Xvid-encoded video with multiple subtitle/audio streams).

Your codec choice will ultimately be dictated by Google and Apple. Is Joe Phone user very concerned about the codec that plays his videos? No. He doesn't know what that is. What incentive would Apple have to switch to AV1?

Why don't you try to prove your bullshit, brainlet?

To help answer my own question, I found this.
cnet.com/news/apple-online-video-compression-av1/ Interesting.

WHY THE FUCK IS THIS STILL HAPPENING!? Everything is a real computer that can run software now, every device is capable of using reprogrammable DSPs or shader pipelines of some sort, OSs have had stuff like QuickTime (lol, no more 3rd-party component architecture for you since QTX!!), DirectShow/MF, and GStreamer forever. "Standard codecs" should be a thing of the past.

Mark my words, if this keeps up, we'll be back to the days of individual programs hard-linking drivers for individual models of printer.


No, for the majority of things PNG is used for, the replacement is SVG, because most PNGs originate as vector art.

Attached: this-is-fine-oc-im-okay-with-the-events-that-26943851.png (500x761, 201.43K)

There is plenty of reason to use other things. AV1 is great for ahead-of-time encoding, but the encoding performance is really fucking heavy. It's useless for real-time video (for streaming and video chat).

Flattened animated .SVG

But many animators are still using frame-by-frame bitmaps.

What you said has no value until the encoder is optimized. I mean, come on, the bitstream spec was frozen only 4 days ago.

x264 is not that good; it's the best because of compatibility.
4K is not a joke; human eyes can resolve up to about 7K IIRC, so 8K IS a joke and will be forever.
But yeah, I do admit the official encodes suck ass compared to some actual torrent pros like Grym or RARBG, who can keep the quality while cutting the size by 17%. Depending on the movie it starts to suck below that, but I've seen some good-looking 1080p BDRip movies at 4GB; they're usually light on special effects, or cartoons.
x265 is slightly better, though not as good as advertised; maybe 33% tops.
VP8 is a joke; it looks like ass even if you copy the freaking bitrate or go above it, which makes me paranoid about AV1.
VP9 I haven't tried; I'm a poorfag and I'm afraid my shitty PC will implode.

The real fucking joke here is Level 6.2 for h.264. Like, who thought of that? Or did they make it for h.265 and just realize they could reuse it for 264?

I honestly don't know why everything doesn't just use ffmpeg as its backend for playing video/audio. It's good, and it just works. Every unix-like OS tries to do its own snowflake thing when they could just use ffmpeg already.
This is where the real faggotry lies. If the ASICs were more versatile they could be adapted to a multitude of standards and do hybrid software/hardware decoding, which would be better overall.
What really happens is they either enforce a monopoly, or they're only useful for 2-3 years, after which either software is better or everyone is using a new level/profile/whatever that the ASIC can't touch, and it becomes unused hardware.
Hardware decoding ASICs are the graphics card version of "have a CPU with a trillion CISC instructions nobody ever uses and that you can't do away with because you need to maintain backwards compatibility for the nobodies"

Attached: butter lube.jpg (750x589, 48.39K)

how about encoders?

Encoders are a different story; they're actually useful.
There's a capture card market, but there isn't a decoder card market. Encoding is usually too heavy to be done well in real time in software, but the hardware encoders in GPUs nowadays can produce lossless video with decent compression, and they benefit from being integrated into the GPU.

Well, I use Microshit's Windows, and while it's true that ffmpeg can do pretty much all you need, it's really annoying to use; it's better to have three GUIs (XMedia Recode, MKVToolNix and HandBrake). It might seem like more of a hassle at first, but it's easier to spot and correct user mistakes.

You mean workstation GPUs?

The whole reason behind creating h.264 was to produce better results for low-bitrate video. Any other debatable improvement is just a bonus. h.265 at least adds something meaningful for high bitrates: HDR and 8K support. But h.264 is only good for collecting royalty fees if the video is used commercially.

SVG is a piece of shit format.

ffmpeg is also a library for encoding and decoding.
A lot of video players like mpv and vlc use it as a backend, and the user doesn't need to touch the terminal program. What I'm saying is that instead we have whatever faggoty way codecs work on Android, garbage like GStreamer, and developers manually implementing every single codec they want their program to have.
When using a single thing for everything is a really fucking dumb idea, like with systemd, it happens anyway; but when there's only one true answer in a sea of trash, as is the case with ffmpeg, it doesn't become the norm. It's a fucked-up world.

No I mean every AMD/Intel/nVidia GPU in the last 5 years.

...

The key sentence in your pic is

Really, I just wish the entire world could be like the anime rip scene: Fresh new codecs whenever computers get faster, new resolutions, chroma formats, etc.


Sure, SVG is corpulent XML "human readable" bloat that has to be compressed separately, and it has (OPTIONAL) stupid features like scripting and remote transclusion, but it's still the only widely supported format to kill off the cancer of gratuitous bitmap images, especially now that Flash is dead.

I guess because some of those codecs have royalties (like almost every MPEG codec). If you're doing some freeware app like ffmpeg itself there's no trouble, but if you want to be hosted in an app store, or run legit (not scammy) ads to win shekels and shit, then you have to invest and be careful with what you put in there, and that means adding codec by codec as you grow, making sure the most compatible ones are ticked off first. Besides, big companies love it, as they can be smug about their devices having more codecs than the competition. Buy a good Android box with VLC at $100, not that Roku bullshit.


This. Netflix already uses VP9 for their own shows in 4K and x265 for everything else, so I can see them swapping to AV1 the second it's out, if they haven't started already.

A few years ago I read about Sharp's Extended Vector Animation format (.eva), which had some very good properties SVG still lacks. I don't know what happened to it; I can't find anything about it today. Maybe some Japanese user here knows more about it...

Attached: Capture2.PNG (1710x439, 60.86K)

You just reminded me of something else.
Why does every retard encode anime in 10-bit color when not a single studio has ever put out 10-bit color anime? Does nobody know the Hi10P profile doesn't actually require 10-bit color?
I know the average person doesn't understand shit and loves to spread misinfo, and it's blatant when talking about video encoding/decoding, but I at least expect the people doing the encodes to know how to operate their encoder. They could just encode Hi10P in the 8-bit color the anime was actually put out in, and make things look better while being smaller.
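For reference, what the encodes being complained about actually do is feed the 8-bit source to a 10-bit x264 build, roughly like this (assuming an ffmpeg with a 10-bit-capable libx264; filenames are placeholders):

ffmpeg -i episode_8bit.mkv -c:v libx264 -pix_fmt yuv420p10le -preset slow -crf 16 -c:a copy episode_hi10p.mkv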

And when is 4:4:4 chroma subsampling becoming the standard? They had a shot with UHD Blu-rays, but they decided to go for 10-bit color + the HDR meme while keeping 4:2:0. HDR being the industry-standard name for "I don't know what this thing I just did actually is, but it's related to images". HDR supposedly tells the display what the black and white averages are and adjusts the screen's brightness to better display each frame. That's very nice, but the screen can do that completely on its own, so it's pointless. They should've dropped HDR entirely and forced everyone to use 4:4:4 chroma subsampling, or at the very least 4:2:2, instead of forcing everyone onto 4:2:0 + 10-bit color, which is what happened.

10 bit color is only supported by a tiny fraction of the market, 4:2:2 and 4:4:4 are more important and supported by 100% of the market, but nope they took the retard path.
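For what it's worth, x264 will already do 4:4:4 if you ask for it (High 4:4:4 Predictive profile; decoder support, especially in hardware, is another story). A rough sketch with placeholder filenames:

ffmpeg -i master.mkv -c:v libx264 -pix_fmt yuv444p -preset slow -crf 16 -c:a copy out_444.mkv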

Attached: real human bean.gif (450x432, 27.03K)

FUCKING THIS
Nothing fuels my insides like having to recode an episode for 70 minutes just because the retard put it in 10-bit and I can't watch it on my big screen.

gist.github.com/l4n9th4n9/4459997
tl;dr: Because it increases compression efficiency even for 8-bit content, especially flat/gradient-heavy/banding-sensitive content such as anime.
Couldn't agree more. Using simple chroma subsampling (just delete entire lines!) inside of a lossy image encoder is just as ridiculous as if we were still doing interlaced-scan. It made sense in the analog days, but it's totally superfluous today.

On a related note, though, even the preliminary step in a 4:4:4 YCbCr encoder of switching colorspace is typically done in a fashion that needlessly throws away heaps of bit-depth in all three channels, just because of a need to save RAM in encoders during the 1980s.
No, HDR actually dedicates additional bits to encode extra "whiter than white" and "blacker than black" tones/shades so that material can adjust to your postprocessing chain, display device, and viewing environment, without clipping.

Only problem is h.264 is too blurry to preserve sharp patterns, even at Blu-ray bitrates; any grain is generally compression artifacts. Leave the debanding to the player. This is misinfo spread by "golden-eyed" people with no psychovisual or visual-simulation benchmarks to back it up. Any and all compression improvements come from the encoder's limits being extended and features being added by the profile, and this is also why new profiles are not backwards compatible. At most they have a shitty video player that can't do debanding in real time, and they looked at the image they proudly smeared with vaseline and thought "Hey, I don't see banding!".
That's just a wider color gamut, which is an obligatory part of the "HDR10 Media Profile". The poo-in-da-loo insanity is the part where it needs to specifically tell the display the brightness average of each frame, when the screen is displaying the frame and can calculate it alone without requiring any adaptation on the film maker's side.
That's where "HDR+" by samsung comes in. They realized the obvious, that they can just let the monitor do that alone and play back any material ever in "HDR", and they released this new cool marketing babble that basically means the monitor is calculating brightness averages in real time instead of relying on some stupid metadata baked in.

Chances are even at 8 bit color if the profile is hi10p it would still be incompatible with your screen. The profile is more than just allowing people to use a higher bit depth.

You can disable deblocking without problems. You Xvid fanboys never fail to amuse me, thinking x264 is the same as ten years ago. Protip: it's not; it hasn't cared about retarded metrics like PSNR for a long time now (and it defaults to deblock=1:1 now).
You clearly don't know shit. The problem is that x264's internal precision is tied to the bit depth; that's the precision used when computing the motion vectors.

Honestly, all of your complaints are the ones x264 users say about x265 and its SAO retardation.

Kill yourself for being that ignorant on a technology board, please.

To add a bit more, this thread on doom9 is pretty good: forum.doom9.org/archive/index.php/t-163182.html (I used to use -3:-3, but these days I only force it to -1:-1 for cel anime).
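With the x264 CLI that's just the --deblock switch, e.g. (y4m input assumed, deblock values as per that doom9 discussion, everything else to taste):

x264 --preset slower --crf 16 --deblock -1:-1 -o episode.264 episode.y4m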

Thanks for the Wikipedia link, it was really, really helpful and answered everything about why it was discontinued...

No need to be a sarcastic prick I think.
No idea, external links are dead and there isn't a Japanese Wiki article despite the alleged popularity, not even jap Youtube videos.

Read the link I gave you again. The key point is that h.264 can store sharp fine patterns such as dithering, but it works best with smooth gradients and flat colors. By interpolating 8-bit dither into 10-bit gradients, it allows the encoder to produce smaller files of equivalent quality.
No, it's a nonlinear colorspace. Think of the way a meatspace HDR photograph works, taking multiple exposures of the same scene with the iris dilated to different f-stops (unless you're using a lightfield camera, but that's beside the point). That isn't just relatively "brighter" or "darker"; it corresponds precisely to certain absolute levels of illumination in the actual scene.
...is the color equivalent of HD blowups, 24Hz-120Hz smoothmotion, surround spatialization, bass boost, etc. Missing bits can't be magicked up from thin air.

Not an Xvid fanboy, but the situation for HIGH BITRATE* videos is something like this:
Buy disks and pick a lossless codec.
Pick Xvid if you are sane.
Pick h.264 if you want to require an ASIC for playback/encoding, want to use the advanced settings differently for every scene (which most users don't do), or want to pay royalty fees.
Pick VP9 if you want to hammer someone's hardware.
Pick VP8 never.

*By high bitrate I mean the quality loss from the original video source is imperceptible (not just small).

Sorry, pick VP8 and VP9 if you use your browser as a video player.

Now, I suggest you prove your claims, with a recent enough build of x264. What you said WAS true.

Baking support for certain codecs into SoCs and chipsets has been done for a long time; it's nothing new, and your GPU supports some formats already (most likely h.264). It helps with battery life by making decoding more efficient, which is why it's crucial on mobile chips. Expanding encode/decode to AV1 is not a bad thing at all.

It's a retarded practice that needs to die. Including hierarchical support for features common to various popular codecs, along with API access to build codecs that use it, is somewhat acceptable. But just dumping an entire bitstream in, getting a fully decompressed one out, and puking up your guts if something as tiny as a profile-level feature changes, dumping you on your ass back in pure software? That's unacceptable.

2x 1080p screens > 4k

Too late. The entire computing world is ebbing back to the bad ole days. Most people are already back to mainframe/thin client architecture on mobile.

That is good and inevitable. The problem is that user processes aren't truly isolated in current architecture.
The solution is to just own your own server you remotely log into. Maybe in the future there will be a provably secure cloud with homomorphic encryption tech so that the cloud providers can't snoop on your data/computations.

On my 4690K, it took slightly longer than two days to encode the twelve-minute 1080p Big Buck Bunny using --cpu-used 8, which is supposed to be the fastest option for encoding. Decoding it in real time on this machine with VLC 4.0 wasn't an issue at all. They need to heavily optimize their code, or this won't get much use until hardware encoders become available.
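For reference, a minimal aomenc run along those lines looks roughly like this (flag names as of the 1.0 tools, they may have shifted since; filenames are placeholders):

aomenc --passes=1 --cpu-used=8 --end-usage=q --cq-level=30 -o bbb_av1.webm big_buck_bunny_1080p.y4m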

There's no speed increase whatsoever past cpu-used 5 on an 8350, dunno about jewntel specific optimizations.

The actual solution is doing it on the GPU, and that requires hardware-level support. For example en.wikipedia.org/wiki/Nvidia_NVENC and its Intel and AMD equivalents. Doing h.264 conversion that way is ridiculously fast compared to software VP9. AV1 really needs that level of hardware support, and it really looks like it will get it: anandtech.com/show/12601/alliance-for-open-media-releases-royaltyfree-av1-10-codec-spec
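On the ffmpeg side it's basically just a codec swap, e.g. (placeholder filenames, bitrate to taste; assumes an NVENC-enabled build):

ffmpeg -i input.mkv -c:v h264_nvenc -preset slow -b:v 6M -c:a copy output.mp4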


Support for Opus is already there, with even iFag systems supporting it now.

You fags shit up every thread when codecs are discussed.
NVENC is shit and so is all baked-in hardware encoding. Most of the time it doesn't even support most of the standard.
The first-gen AMD Thunderbirds can decode standard-definition h.264 using just the CPU. Xvid is only useful for 90s/early-2000s set-top boxes and the Nintendo Wii. It's only kept around for fags with ancient hardware. If you need to support that, just set up a transcoding server on something more modern.

Better question: When are we going to see something with Vulkan support to offload more of this to the GPU?

Only gnewfags don't know that hardware encoding looks like dogshit, since it's simplified to the max compared to software.

You didn't read the thread, did you? The point of the Xvid fanboy is that it's better than x264; which is almost true: it WAS better, because x264 had too strong in-loop filtering, leading to loss of detail. This was before all the psy optimizations.
This is the exact same reason x265 is shit for now: it cares too much about metrics and low bitrate results.

better than x264 for transparent encodes*

Got any proof to back that up? Codec ICs are quite accepting of user-inputted parameters.

Yea you've obviously never done much video encoding.

I'm sure he totally believes he can see the difference on video running 24-30fps too. I can't stand faggots that use old stuff just because it's vintage. Shocker that he doesn't profess his love for DivX 3.11 ;-). Probably because no one encodes anything in it anymore. They'd drop Xvid just the same if they weren't concerned about street shitters being able to make bootlegs.

He is right, though. As long as there is a way to do something, someone else is going to want to do it differently.

Not proof. Show something concrete.

How much of a turbo-autist do you have to be to use Xvid in fucking 2018? Do you play all your videos on your PS2?

Why PNG?

Only a few (some might be taken down by copyright enforcers).
hooktube.com/watch?v=50qPT0h9Vr0
hooktube.com/watch?v=vzz30MB83Xg
hooktube.com/watch?v=47KRct-Lulw
>video link embedding is not available for this board

but that also gets (((them))) more money.
and pollutes the earth more, too.

Odd. I'll take your word for it, but I was able to find videos created with EVA Animator pretty easily on Niconico. They even have an article about EVA Animator on the part of their site that's kind of like Wikipedia: dic.nicovideo.jp/a/evaアニメータ

1) Because it's a bitmap format designed for flat color synthetic imagery that almost always originates in a vector editor, so a vector format would be infinitely more efficient.
2) Because it was written in 1996, around the DEFLATE algorithm used by contemporary zip files, and hasn't broken compatibility since then. As a result, it's fallen into the same trap as JPEG, where lossless compression with a modern archive format yields double-digit percentage savings on a single image.
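Easy to check for yourself: run a PNG through a modern lossless format and compare sizes, e.g. with cwebp from libwebp (filenames are placeholders):

cwebp -lossless -q 100 -m 6 screenshot.png -o screenshot.webp
ls -l screenshot.png screenshot.webp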


Tough to say which is better for the environment, honestly.

Don't have a jewvidia GPU to prove you wrong. Test it yourself.
trac.ffmpeg.org/wiki/HWAccelIntro states the same thing.

After messing around with this for a while, I'm not even sure it's passing the options given. This is what it says for every encode.

Stream #0:0: Video: av1 (libaom-av1) (AV01 / 0x31305641), yuv444p, 1920x800 [SAR 1:1 DAR 12:5], q=-1--1, 200 kb/s, 24 fps, 1k tbn, 24 tbc (default)

I can change crf and cpu-used between the minimum and maximum settings and the video still looks like shit.

Okay, I guess unless you specify -b:v it's going to use 200k every time. I think constant quality mode is not working in FFmpeg.
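IIRC it works the same way as libvpx-vp9: -crf only acts as true constant quality when you also pass -b:v 0 (plus -strict experimental on builds where the encoder is still flagged experimental). Something like this (placeholder filenames; crf/cpu-used to taste):

ffmpeg -i input.mkv -strict experimental -c:v libaom-av1 -crf 30 -b:v 0 -cpu-used 8 -c:a copy output.mkv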

I am not talking about using stupid shit GPUs. I am talking about ASICs.

Fuck AV1, I want my Daala.

Software encoders have too many branches to be implemented efficiently (performance:cost-wise) in hardware. They'll always be gimped.

Yeah, too bad about Daala. The "good" news is that AV2 will probably integrate more of it.

Prove it.

There's no point to replacing PNG as file size isn't an issue anymore. We'll only need to scrap it when we switch from RGB to a luminance-based model. IIRC, while PNG can be forced into working that way as it supports 16 bit luminance sampling with a customizable transfer function, none of that shit actually works correctly in any PNG library so it's like it doesn't exist.

And that's where you're wrong! A 7MB PNG isn't cool to have around.

Who cares? If you need 7MB, then you have the bandwidth.

Dunno, but programming against a regular hardware compute API sounds like a cool idea. Why isn't it done already? At least I haven't heard of such a thing. Why not make the encoder target OpenCL?

Are you in India or something? 7MB is nothing. I have a few thousand pics in Canon RAW format sitting around at about 50MB each and I still don't care.

file size isn't an issue only if it provides fast access, like wav
otherwise it's still an issue

See the lossless table here: wyohknott.github.io/image-formats-comparison/speed_results.html


PNG is a 21 year old format.

AV1 isn't in the same league as Opus. Xiph took a look at the audio codec landscape and how it was shaped by a triangle of needs: bandwidth, complexity, and latency. Anyone else would pick two, but they said "fuck it" and delivered a scalable, low-latency, low-bandwidth audio codec that can decode and encode in realtime on even the shittiest SBC. AV1 is anything but efficient, on both encode and decode.
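And encoding it is painless compared to AV1; e.g. with opusenc from opus-tools (bitrate and framesize values here are just examples):

opusenc --bitrate 96 music.wav music.opus
opusenc --bitrate 32 --framesize 10 voice.wav voice.opus   # shorter frames trade a bit of efficiency for lower latency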

Will AV1 ever be ready to use, or is it even more vaporware than VP9?

It's the only way for the big internet video companies involved to keep their balls out of an eventual licensing fee vice, but they're all largely webshit companies who have no experience in writing optimized code, so dunno. They might never get it fast enough to be useful.

(checked)
REEEEEEEEEEEEEEEEEEE that's worse than BMP
You sound like my Zig Forums illiterate family.
Just keeping them on a 500GB hard drive with no copy waiting for it to fuck up

If AVIF becomes a thing, we at least won't have to deal with GIF, jpg, WebP, APNG, MNG and all that other crap anymore.
I mean, having a cost-free standard that's better than JPEG would make me happy. I'm kind of tired of still seeing jpg everywhere.

Can some of you experts tell me why video codecs don't just use Vulkan when available?
Wouldn't that save us from producing new hardware decoders all the time?

It keeps them in the format of the data retrieved from the sensor. It's a good way to handle it. The conversion to a regular image format discards a lot of data. I have them backed up online via Amazon as they store an unlimited amount with prime and I have gigabit internet to upload them with. Be less of a peasant.

how does it feel to share everything with CIA?

If you convert them to 16-bit PNG and move the metadata to EXIF, which "DATA" are you even talking about?
Hi CIA

Attached: 4cd688b3e6117818f610e53761f2dc887bc70bf794269c3fac371f11f6913832.png (655x1116, 526.85K)

Answer to the bitstream freeze:
youtu.be/64FRE9L40oI?t=1118

Archive.org's Panoramio archive weighs 269,673 GB. Even jpegs add up at scale.

Not only that. They also don't look as good as a picture in, for example, H.265/BPG/HEIF.
Not to mention the eternal non-solutions for GIF
* converting to lossy video looks like shit
* converting to APNG means you can never tell that they actually contain an animation
* converting to MNG means nothing can play it
* converting to video means you'll have a fucking UI around it
I WANT AVIF

nope it doesn't. it's optional

However, it'll be on by default, and nobody detects whether it's soundless and how long it is and disables it based on that, which would be retarded too.

PNG doesn't support mosaiced data, or any sensible form of metadata which tells the color filter array layout, color primaries, etc.
You probably just don't know jack shit about digital photography.

It's trivial to detect whether it's soundless. It's always at the beginning of the file, in the headers.
If you use a shit browser, that's your problem.
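A one-liner with ffprobe is enough, for example (empty output means no audio stream; the filename is a placeholder):

ffprobe -v error -select_streams a -show_entries stream=codec_type -of csv=p=0 input.webm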