the webm is showing h265 vs av1
Webm vs AV1
which is the future?
When will technology advance enough that we have videos as a string of PNGs?
Also, how do I compress this video down to a 16MB webm?
Run ffmpeg with the fs parameter set to 16MiB.
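For reference, a minimal sketch of that invocation (filenames are placeholders; note that -fs truncates the output once the limit is hit rather than rate-controlling toward it, so the quality flags still matter):

```shell
# -fs caps the output file size; encoding simply stops when the cap is reached
ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 40 -b:v 0 -c:a libopus -fs 16M output.webm
```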
It's the full 1 hour 30 movie with audio.
Trying to keep the quality the same. I reduced it to 40MB but it's still too large to post.
Not possible bud. Just split it into multiple parts.
Fuck av1 where's daala?
AV1 is based on the work of Daala and VP10 if I remember correctly. Multiple companies working together instead of each making their own codec.
This demo makes me happy. Hopefully AV1 is good enough that there's no reason to implement hardware acceleration for whatever codec the MPEG group develops next.
I don't mean the same quality as the source, I mean the same quality as the 40MB one.
AV1 was supposed to have its bitstream frozen at the end of last year. They've removed bits of Daala so it's easier for current hardware to play back. It currently takes a year to encode one 90 minute film. I don't like h265 either, but at least it exists. h264 isn't going anywhere soon.
where do you think you fucking are?
So I tried 2-pass aomenc but it takes four hours to encode a single 3 minute video.
Anyone know some options to make it encode faster?
Not even that, it's just a subset of Matroska
Ditch your ThinkPad
Also it went up to 7 hours encode time with all cores at 100%, wtf is this codec.
If it's a whole movie, there's nothing wrong with splitting it into a few 16MB chunks.
Yeah, but I want to compete with this.
I can't seem to get it below 40MB.
Just drop the bitrate below double digits and downscale the shit out of it.
Also don't do 2-pass. I know that goes against what most people say, but using 2-pass will make your file larger.
You can't get a full movie down to 16mb without losing quality, it's not possible.
1) use VP9 in CRF mode (aka "constant quality"):
… -b:v 0 -crf X … where 20 ≤ X ≤ 40
larger X = worse quality
2) if size is too big even with X = 40, add downscaling:
… -vf scale=iw*0.75:-1 … keep reducing size until it fits into wanted size
you can add more threads to make it go faster and also produce more heat if you want.
other options are of little use, they won't change anything really much.
targeting average bitrate without 2-pass is a silly thing unless it's for realtime streaming. it will always be less efficient than CRF.
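Putting 1) and 2) together, something like this (input name, CRF value, and scale factor are placeholders to tune):

```shell
# -b:v 0 plus -crf puts libvpx-vp9 in constant-quality mode;
# the scale filter's -2 keeps the height divisible by 2, which the encoder needs
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 0 -crf 36 \
       -vf scale=iw*0.75:-2 -c:a libopus -b:a 48k output.webm
```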
It's called GIF.
Still don't understand why they even bothered with WebM. MKV is perfectly suitable for the web as it is. Even in the case of streaming it's better (youtu.be/hip7Vz3zJN0). Just another case of reinventing the wheel?
Not with VP9. Even if you give it 10 CPUs it won't matter; it's only going to use 1, at most 2, for anything encoding under 1080p.
If you want to multi-thread with VP9, you have to write a script that splits the video into N parts for N cores, runs N instances of ffmpeg on those N parts, and then merges them all back together at the end.
It sounds like a pain in the ass, but once you script it it really isn't bad at all. It's all well documented with ffmpeg, you can split at the exact frame so you shouldn't notice the file was split at all, and you'll get all your cores going with VP9, which is otherwise impossible.
Really? Could you share such a script?
I would, but I lost it for lack of a backup a few months ago. It's not as hard as you might think to write though; it only took me a day and I did it in Python. You might be able to do it in bash if you're good in bash.
If there's demand for it maybe I should rewrite it and post a thread or something.
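The flow being described can be sketched roughly like this, assuming ffmpeg's segment and concat muxers (filenames, chunk length, and CRF are made up; note stream-copy segmenting cuts at keyframes, not exact frames, so exact-frame splitting needs a re-encode or more careful seeking):

```shell
#!/bin/sh
set -e
# 1) split the video track into chunks without re-encoding
ffmpeg -i input.mp4 -an -c copy -f segment -segment_time 120 -reset_timestamps 1 part_%03d.mp4

# 2) encode every chunk in parallel, one ffmpeg instance per chunk
for f in part_*.mp4; do
  ffmpeg -i "$f" -c:v libvpx-vp9 -crf 33 -b:v 0 -an "${f%.mp4}.webm" &
done
wait   # let all background encodes finish

# 3) concatenate the encoded chunks back together
for f in part_*.webm; do printf "file '%s'\n" "$f"; done > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.webm
```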
If you rename a VP8/9+Vorbis/Opus .mkv to .webm it will be 100% standards-compliant; that's what I meant with
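Equivalently, a straight remux with no re-encode (names are placeholders):

```shell
# copy the existing VP9/Opus streams as-is into a WebM container
ffmpeg -i input.mkv -c copy output.webm
```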
But I'm encoding AV1 not VP9.
aomenc --codec=av1 --webm -p 2 --pass=1 --fpf=firstpass.txt -t 8 --lag-in-frames=25 --end-usage=cbr --target-bitrate= --bias-pct=0 --auto-alt-ref=1 --tile-columns=6 -w -h -o "NUL" "input"
aomenc --codec=av1 --webm -p 2 --pass=2 --fpf=firstpass.txt -t 8 --lag-in-frames=25 --end-usage=cbr --target-bitrate= --bias-pct=0 --auto-alt-ref=1 --tile-columns=6 -w -h -o "output" "input"
CQ mode doesn't werk, it won't open the input file for some reason.
ffmpeg will not always match the fs parameter. It will try, but a lot of other options are needed to get close.
The key is deciding the bit rate and screen size. The bit rate determines what's available for the screen size, so I'll look at the bit rate first, since it really is the deciding factor in all of this:
1.5hrs = 5400s = A
16MiB = 16777216bytes = 134217728bits = B
B/A = 24855 b/s ≈ 24.8 kb/s (b = bits; also, if you said 24.9 kb/s the file would end up being larger than 16MiB, so you have to round down to 24.8, not up to 24.9)
24.8kb/s is extremely low. This is the total bit rate for both audio and video!
Decent audio only with no video is 96kb/s upwards.
So in short it is possible, but it will look nothing like the original. (Probably more of an animated icon, with accompanying background hiss)
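The arithmetic above, checked in shell (integer division already rounds down for you):

```shell
DUR=$((90 * 60))                 # 1.5 hours = 5400 seconds
BITS=$((16 * 1024 * 1024 * 8))   # 16 MiB = 134217728 bits
echo $((BITS / DUR))             # total audio+video budget: prints 24855 (bits/s)
```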
Find more on encoding in >>>/webm/
You can search for such multi-thread two pass encoding scripts online, or have a look in >>>/webm/
The next version of the webm spec includes av1 as a codec option along with vp9 and vp8.
perhaps it only means that aomenc is trash atm.
unless you develop something with it, better to just wait.
Will AV1 content run on Raspi3, and with what resolution?
APNG and MNG
Both are supposed to be successors to GIF
Both fail horribly since no one can decide on which to use.
PNG itself is dated and only supports up to 8 bits per channel.
FLIF will support up to 16 bits, better compression, better interlacing with nearly no filesize increase, and animations (also interlaced).
It's just a matter of time and contributions until it's finished and can be implemented everywhere.
I think Netflix is using AI to do just that.
Forgot to mention that PNG eats CPU for the gamma correction used for CRTs.
What is up with every open source project having shit naming conventions? Hur dur, upload a flif.
for video, there are better codecs already (H.264, VP9) and yes they allow lossless mode as well.
There's no fucking reason to drop proper motion prediction as soon as you call it "animation". And most bloated GIFs are in fact not "animation" but slices of photorealistic content, which proper video codecs are specifically optimized for.
Who cares? 360p source footage will look like dogshit no matter how you encode it. Also 3D and 3DPD shit is a useless metric. I want to see 2D anime comparisons, 1080p, with lots of stark red on black coloration like Kill La Kill.
That's true but we're talking about the ones that are.
Those are completely different. They often even contain different frame rates for single layers.
They are either real animation or a static image with some elements animated.
And because GIF is the only widely supported option, great art gets ruined through a shitty 255 colors plus 1 for transparency.
I didn't bother to look because I've blocked all .gif URLs, but "real animation" doesn't take this much space unless it's overlong.
I'm sad that FLIF will realistically never take off. I tried it and it's much better than PNG but fucking nothing supports it.
But, since PNG just werks and normalfags are retarded nobody will care enough to support it.
I hope I'm wrong, but I'm not holding my breath.
GIF compression is just shit at bigger sizes
The developer is pretty overconfident
Look at the to do list
A format will only be widely implemented once it's clear that nothing more will change.
there are only a few cases when it's not total shit
(notably, GIF beats PNG at encoding a 1x1 transparent pixel, which is commonly used on the web for bullshit purposes)
wow, the absolute state of nu/tech/
There are noscript larpers who think they are safe with current day CSS+HTML. FLIF would be another nail into their paranoia coffin.
This is implementation dependent. You could disable it or change quality per size then it would no longer work.
I'm not sure that even if the format got support, that feature would also be implemented.
Pardon my stupidity, I'm still entirely new to encoding and I'm a little sleep deprived, so my reading comprehension might be lowered. But are you saying CRF without 2-pass is better than non-CRF 2-pass, or that 2-pass CRF is better than single pass?
Two-pass crf doesn't exist as far as I know
As far as my experience goes, two-pass encoding makes a difference even in lossless mode in VP9. So yeah, two-pass CRF not only exists but also makes a difference (at least in file size).
What software lets you do two-pass crf encoding?
Well fuck handbrake then, you can't pick both two-pass and crf
handbrake is just a shit gui front-end for ffmpeg. git gud with ffmpeg cli. it's not that hard.
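On the two-pass CRF question above: ffmpeg's libvpx-vp9 wrapper does accept -crf together with -pass, something like this (filenames and CRF are placeholders):

```shell
# pass 1 only writes the stats file; the video output is discarded
ffmpeg -y -i input.mp4 -c:v libvpx-vp9 -crf 31 -b:v 0 -pass 1 -an -f webm /dev/null
# pass 2 reuses those stats for better rate decisions at the same quality target
ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 31 -b:v 0 -pass 2 -c:a libopus output.webm
```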
Apparently gstreamer 1.14 has experimental support for mkv-contained AV1.
Does it werk?
Good or bad?
Have you ever tried to rip a track from DVD or Bluray with ffmpeg? If not, I suggest you STFU.
It would be pointless to encode anything and expect the file to be compatible with the final bitstream.
You mean over a week, or weeks. Encode times are absolutely abysmal, but to be fair they said they haven't optimized the encoder yet.
The first quarter of this year is almost gone and they're still not done. I really wanted to be excited for AV1, but they've pulled some of the best features like daala transforms and perceptual vector quantization. All because they were worried older hardware couldn't handle it.
Yes, and you're a fucking idiot if you cry about merging a few DVD files together and pumping them into ffmpeg, and insist on using a bloated GUI instead.
cat [dvd files] | ffmpeg -i - [encoder/options]
You have simply concatenated vob files together, not selected a title. Try again.
I would also like to point out that you would know this had you owned physical media and not dumpster dived like a filthy peasant.
Here we go. How is ffmpeg going to handle this, genius? I don't even know what fucking track the real one is.
how about you merge the files that are longer than 30 seconds like the gui bloat is saying it's going to do.
Not a solution
Way to avoid answering
Why don't you faggots pipe down the next time someone brings these programs up? They're not meant for you.
21 tracks tell me which ones to concatenate for the main title.
the one that you've opened and identified as being the start of the movie, then count up. it's not that hard.
MakeMKV is also proprietary. how about you just use windows instead and pay for a digital downloadable version since you seem to love giving Hollywood money.
You have successfully encoded a menu into the main title. Congratulations.
Because I can rip the DRM from it and supply you mouth breathers with more content.
AV1 is a codec, .webm is a container.
AV1 will be supported in .webm
AV1 is the future, no one wants to pay royalties to the HEVC guys anymore.
Also, AV1 is the best thing right now regarding quality.
have you ever tried to have brains?
I see some bags of money definitely changed hands
next time HEVC2 uses something similar they will convince everybody that hard decoding is okay
WTF are you talking about? looks like complete horse shit
FFmpeg isn’t really that hard to use (unless you want to replace After Effects with it but even then it doesn’t become as much “hard” as just “too verbose”).
Here's what my command looks like (if I have plenty of time on my hands):
ffmpeg -hwaccel auto -y \
  -i $INPUT -map 0:v:0 -map_chapters -1 -map_metadata -1 \
  -c:v libvpx-vp9 -pass 1 -threads $THREADS \
  -deadline best -cpu-used 0 -aq-mode 0 -auto-alt-ref 1 -frame-parallel 0 \
  -lag-in-frames 25 -tile-columns $COLUMNS -row-mt 1 -crf $CRF -b:v 0 \
  -f webm /dev/null
ffmpeg -hwaccel auto \
  -i $INPUT -map 0:v:0 -map_chapters -1 -map_metadata -1 \
  -c:v libvpx-vp9 -pass 2 -threads $THREADS \
  -deadline best -cpu-used 0 -aq-mode 0 -auto-alt-ref 1 -frame-parallel 0 \
  -lag-in-frames 25 -tile-columns $COLUMNS -row-mt 1 -crf $CRF -b:v 0 $OUTPUT.webm
I’ll try to explain the options I use in the attached image file.
Another thing to note is audio encoding. WebM container only allows Vorbis and Opus codecs. Opus beats Vorbis (and every other lossy audio format), therefore Vorbis is depreciated.
Here's my command:
ffmpeg -hwaccel auto \
  -i $INPUT -map 0:a:0 -map_chapters -1 -map_metadata -1 \
  -c:a libopus -compression_level 10 -frame_duration 60 -vbr on -b:a $BITRATE $OUTPUT.opus
Unlike VP9, there's really no reason to explain what all the options do because audio encoding is fast and these are the objectively best options for Opus. Bit rate is controlled with -b:a #k. Opus supports bit rates between 5 kbit/s and 510 kbit/s (e.g. -b:a 96k). According to Xiph.Org Foundation, bit rates between 64 kbit/s and 96 kbit/s are good for music, while 128 kbit/s and above give you transparent results. Bit rates below 64 kbit/s can still sound good, but the results are less predictable; they're recommended mostly for speech.
To merge the resulting files use this command:
ffmpeg -i $INPUT_VIDEO -i $INPUT_AUDIO -c copy $OUTPUT.webm
according to the ffmpeg manual -hwaccel is only for decoding, it's not going to automatically try to encode with hardware.
I had to look it up because I was going to say: watch it enabling hardware encoding automatically, because hardware encoding can be a lot worse depending on the GPU. The Intel hardware encoder, for example, with h264 at least, is shit. It's fast, but there are considerably fewer knobs to turn for quality/size, and no matter what you do you're going to get something that looks like shit and is twice the file size of the software encoder's output.
I'm sure it varies by GPU, even between Intel GPUs.
Yeah, I heard that as well, but all this information is pretty old and FFmpeg updates quite often so I just didn’t bother with removing it.
Lrn 2 spel gud
confirmed as useless by an Opus developer who obviously knows better.
I've heard that's only true for bitrates below ~160, and otherwise Vorbis is at least as good and requires less work to decode.
that is if you target 100% transparency, you may as well use Vorbis.
Oh, you also have mixed French and English for some strange reason. This is why concatenating VOB files is NOT A SOLUTION.
I had a feeling that I might make a typo here so I double-checked with Aspell. So yeah, suck a dick.
I did some tests and it slightly decreased the file sizes.
Probably, but since Opus is pretty much transparent at 128 kbit/s, I don’t see any reason to use higher bit rates. I’d just use FLAC in such cases.
I’m retarded and confused the post numbers.
nah, there are some killer samples.
Musepack, or even WavPack-hybrid can also guarantee transparency while using less bits.
FLAC makes sense if you are archiving or are planning to do further editing.
while producing exactly the same decoded audio stream? if not, you can't simply compare file size.
okay, not-a-solution guy, then what about you go and solve your special snowflake problem by yourself instead of bitching about your made up issues here
Don't really know. Not so much of an audio guy. Only know how to squeeze stuff under 16 MiB.
then you can do this by decreasing target bitrate or quality, you know
file size doesn't mean anything if you also don't know quality
I was going to argue with you, but you're right.
I even checked the Linux kernel source; they use "deprecate".
You can get decent audio with HE-AAC (mp4 container) at 8kbps. I've done it with the vegas audio and squeezed in two hours' worth of audio. I think it's better than Opus for that.
I think h264 is also more efficient than VP9, video-stream-wise, for sticking a single background image in (to merge with the above audio; 8ch won't let you post an mp4 without a video stream). There's a tune setting for it.
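For anyone wanting to reproduce that, the pieces being described look roughly like this, assuming an ffmpeg build with libfdk_aac (filenames are placeholders; the tune setting referred to is presumably x264's stillimage):

```shell
# HE-AACv2 at very low bitrates needs the Fraunhofer encoder (libfdk_aac)
ffmpeg -i input.wav -c:a libfdk_aac -profile:a aac_he_v2 -b:a 8k audio.m4a
# loop one still frame, tune x264 for static content, stop when the audio ends
ffmpeg -loop 1 -i cover.png -i audio.m4a \
       -c:v libx264 -tune stillimage -r 1 -pix_fmt yuv420p -c:a copy -shortest output.mp4
```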