Video codec wars

>Codemumkey still hasn't added AV1 support to Zig Forums
Who is likely to win?
When will proprietary software be outlawed?

Attached: cia_clusterfuck.webm (1120x720, 964.11K)

Other urls found in this thread:

archive.fo/fnnaH
archive.fo/3TUnY
llvm.org/devmtg/2017-03//assets/slide/spirv_infrastructure_and_its_place_in_the_llvm_ecosystem.pdf
archive.fo/RLpAS
github.com/OpenVisualCloud/SVT-AV1
github.com/OpenVisualCloud/SVT-VP9
cs231n.stanford.edu/reports/2017/pdfs/423.pdf
arxiv.org/pdf/1703.01467.pdf
en.wikipedia.org/wiki/List_of_Qualcomm_Snapdragon_systems-on-chip#Hardware_codec_supported
en.wikipedia.org/wiki/Video_Core_Next
en.wikipedia.org/wiki/Nvidia_PureVideo
en.wikipedia.org/wiki/Intel_Quick_Sync_Video
twitter.com/SFWRedditImages

It's not viable yet. You get better quality video at the same bitrate and encoding time with VP9 than with AV1 atm.
Let's wait until the war is over and there are actually solid encoders and decoders instead of rusted rav1e and so on.

who cares

I just want my Daala.

dav1d recently added some SSSE3 optimizations and can now decode 8-bit yuv420p at 1080p30 with relative ease on non-AVX2 CPUs.

Attached: body_positivity.jpg (915x611, 106.57K)
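If you want to check that dav1d claim on your own box, a quick and dirty timing harness in Python might look like the sketch below. It assumes an ffmpeg build with libdav1d enabled and a local AV1 sample file; both the build option and the filename are assumptions, not something from this thread.

import subprocess, time

SAMPLE = "av1_sample.mkv"  # hypothetical test clip

start = time.monotonic()
subprocess.run(
    ["ffmpeg", "-v", "error", "-c:v", "libdav1d", "-i", SAMPLE,
     "-f", "null", "-"],  # decode only, throw the frames away
    check=True,
)
print(f"decoded in {time.monotonic() - start:.1f}s")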

just ignore them. they use full botnet phones anyway so the autism with computers is pointless

This. It would probably be better to make a solid video codec decodable in software than to make some half-assed we-have-to-beat-x-until-xx/xx/xx.

AV1 4k+ is unplayable on my machine.
I hope it doesn't become a standard, I don't want to be forced into upgrading to botnet just to be able to watch videos.

AV1 already won this battle, it just needs hardware acceleration to start being deployed, as decoding (and encoding) it with software is simply prohibitive.

>(((hardware acceleration)))
I don't wanna buy new (((shit))).
VP9 and h264 10-bit decode just fine with software on my PC.
Software decoding is the future. Imagine what would have happened if hardware decoding had been done for images.
You would be buying image accelerators for all kinds of formats instead of people writing good, fast decoder implementations and inventing new formats.
Hardware decoding stops progress.

Because you're not decoding multiple keyframes per second when loading a single image. Though when displaying photos and images taken and stitched from NASA's telescopes I would rather have a much faster experience rendering millions of pixels. Not to mention the enormous power reduction from hardware decoders compared to software. Remember Apple's MacBook(?) draining battery because their 4k screens didn't come with a 4k VP9 decoder, because Intel didn't implement it in their processors? Apple's solution was to disable software rendering and call it a day. Hardware decoding doesn't stop progress either. There are plenty of people working on AV1 that will no doubt use non-meme machine learning and new algorithms to come up with the next version. And if you suggest that the adoption rate sucks because it will take a while before everyone has new hardware with the new decoder, then might I ask whether the old computers, laptops, tablets and phones were ever able to software-decode the old or new format without their users complaining about power draw/battery life or stuttering due to underpowered ARM SoCs.

Having 4k screens on a laptop drains the battery very quickly independent of software or hardware decoders for video.
I don't want to buy shit nor be stuck with half-assed AV1. We already know it's just VP10.
Old computers don't have hardware decoding and software decoding VP9 worked on a dual core centrino laptop.
Fuck them. Phones are for making calls. People misunderstood the concept.

>I don't wanna buy new (((shit))).
That's because you were retarded enough to buy things at the wrong time.
You can't even compare them with AV1.
Only that it isn't.
We would fix loads of problems, specially for professionals.
No, that's happening right now and still need them because things don't go in only one way.
No, it is a necessity of higher complexity.

You're retarded, it seems.

I guess we should make video files, images, games and the internet all stay compatible with those early 800x600 Windows 95 PCs then.

that's not the same thing tho. there's been a new video format almost every year now and none of them work with existing hardware decoders, so the only option to get support is buying a new gpu, or laptop if you use those

When was the right time? 'Cause I've been buying hardware for 3 decades now. Oh you mean I'm a retard for not updating to the botnet just to watch shitty hollywood propaganda?
99% of the population can't see the difference between DivX 3.11 ;-) in widescreen SD and H.264 in 720p. 99.9999999% can't tell the difference between H.264 in 1080p and AV1 in 4K.
Hardware decoders have always become obsolete pretty much overnight after release, while software decoding has been the go-to for pretty much all applications going back to the 90s. The only place where hardware decoding makes sense for the average consumer is set-top devices. 99% of people aren't going to notice any difference between AV1 and whatever they're streaming now. They might pretend to, but like audiophiles they just get the latest standard to brag about it to their friends.
I notice you didn't list them.
You okay user? 'Cause you're looking like a real baka right now.
Go on... I want to know how hardware decoding is more complex than software decoding when the past has shown that all hardware decoding does is produce a worse result with no way to update the software.

hmmmmmmmmmmmmmmmmmmm...


based

Don't strawman, the point I made was that hardware decoding is vastly more power efficient than software, hence software decode becoming a problem on a laptop, whether the 4k retardation exists or not.
Hardware decode ALWAYS saves power and CPU cycles for something people use on a daily basis.
Figured you would go for the low-hanging fruit. The market disagrees with you. People use their phones like they used to use PCs and laptops (which I am not advocating for), therefore their interests are important. Though being able to stream AV1 to low-powered SoCs will be useful somewhere other than phones.

It's almost like this was already well known. All compression schemes have their limits due to entropy (toy illustration below). AV1 only offers "marginally" better compression in relation to quality compared to the last codec. It's not a silver bullet. Increasing bandwidth is the only way you'll truly increase quality.
You're right that a hardware decoder is out of date the second it's released due to the massive amount of resources poured into the development of better codecs. Though hardware decoders ARE being updated as a result of new products saturating the market, which takes somewhere around 10 years depending on the product.
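To put a rough number on that entropy limit, here is a toy Python illustration (my own sketch, not anything from the thread): the Shannon entropy of a memoryless source is a hard floor on bits per symbol that no codec can beat, and real codecs only do better by modelling correlations between symbols, each model having its own floor.

import numpy as np

def entropy_bits_per_symbol(data):
    _, counts = np.unique(data, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = rng.integers(0, 256, 100_000)  # near-uniform bytes: ~8 bits each, barely compressible
skewed = rng.choice(256, 100_000, p=np.r_[0.9, np.full(255, 0.1 / 255)])  # very skewed, very compressible
print(entropy_bits_per_symbol(flat))    # ~8.0
print(entropy_bits_per_symbol(skewed))  # ~1.3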

user you replied to here.
Why? I have enough processing power and I can't watch multiple videos at once anyways.
GPU acceleration is also a possibility. Why does no one ever use that?
Why? They are all video codecs. What does AV1 have over VP9? Developers: MORE
They are pretty similar.
I'm no professional, so why should I care. Professionals should use a method specialized to their use case.
So where can I buy my PCIe JPG accelerator or a GPU/CPU with hardware decoding for JPG?
Why? There is no MS Windows accelerator and no Google Chrome accelerator even though they are the most complex clusterfucks in the software world. Thanks to modern CPU power they still run fine, though.
H264 10-bit and VP9 also decode just fine without any hardware accelerator.

The market isn't a person.
They don't do anything productive on there. Try using a spreadsheet program on a phone.
Do I look like a Youtube datacenter?
We are far away from the point where we have the most efficient solution.
Half truth. Depending on the content there are lots of artifacts that can simply be compressed away (H264 Hi10P and anime), and as above:
Current solutions are very rudimentary and only describe pixel changes and left/right/up/down movements (see the block-matching sketch below).
If something on the screen rotates, it can't be compressed well.
There are many other common things to compress but that one alone should suffice.
You can't make any fundamental changes to them. Only slight improvements can be added on.

Attached: b8t.gif (937x530 14.21 KB, 49.04K)
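For anyone who has never looked inside an encoder, here is a toy Python sketch of the translational block matching being complained about above. The function name, block size and search radius are made up for illustration; real encoders add sub-pixel search, and AV1 layers warped/global motion on top, but the basic tool is still "shift a block by (dy, dx) and pay for whatever residual is left", which is exactly why rotation matches poorly.

import numpy as np

def best_translation(prev, cur, by, bx, bs=16, radius=8):
    """Exhaustive SAD search for the best purely translational match of one block."""
    block = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    best = (0, 0, np.inf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
                continue
            cand = prev[y:y + bs, x:x + bs].astype(np.int32)
            sad = int(np.abs(cand - block).sum())
            if sad < best[2]:
                best = (dy, dx, sad)
    return best  # (dy, dx, SAD of the residual that still has to be coded)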

For several reasons. First and foremost, the codecs that were popular in the past were patent encumbered, which means NDAs and huge costs to get the copy+pastable library. Secondly, because GPUs are essentially CPUs of varying architectures heavily optimized for much fewer commands, a reduced instruction set if you will. So the codec library must be implemented either with special instructions in hardware or by having access to the ISA of the GPU. The problem with this is that unless you are talking about ancient GPUs, or nvidia GPUs on nouveau prior to the GTX 900 series, no one gets access to the GPU's ISA, except via intermediary layers that are blobs loaded at runtime, such as AMD's microcode or nvidia's signed firmware. So implementing the newer codecs requires knowledge few but the NDA signers have, who work for the companies, who get kickbacks from the codec producers they already implemented for, not to implement newer codecs.

Say you wanted MP4 acceleration on a GPU: you would just implement it at the ISA level, like on a modern CPU, via a library calling instructions. But you have to be able to program those instructions, which most modern GPUs will not let you do.

Third, the patent-encumbered codecs want to stranglehold the industry in order to get more licensing shekels. Jewgle allows VP8/9 only because it benefits them so much in electricity and bandwidth costs saved for youtube streaming.

The only way out of this mess is someone developing a codec that uses the best of the best algorithms, including the patent-encumbered ones, and just making the best software implementation possible. Then publishing and updating an implementation anonymously over p2p software so that patent trolls can't fuck them over with cease-and-desists. Then, on the GPUs whose ISA is accessible, GPGPU offload could be implemented.

But few have enough knowledge of the codecs to make a better codec. Even fewer would know how to upload the thing without getting a cease-and-desist. Far fewer than that would have the knowledge to implement GPGPU offload, and the few that do are already in the hands of corporations, or are blackmailed into doing nothing, or are too busy with irl survival.

Attached: 36695ebb045008e78be1a715a517cc38c516ed00a7d67f2178ca12e659206b42.jpg (600x625, 51.57K)

Case in point to my post: see, a user like this understands why an encoder/decoder needs to do what it does and doesn't give a shit about copyright. He could maybe implement a codec and upload it anonymously because scenefag, but then has little to no knowledge about gpgpu stuff.

I understand as much about gpgpu as is publicly available. As you said user, the problem lies in the fact that modern GPUs are so locked down. It's impossible to learn when you can't even make them say Hello World.

You can, just RTFM for AMD/ATI and intel here archive.fo/fnnaH and for nvidia/nouveau here archive.fo/3TUnY . It's just that there's a lot of fucking GPUs that would need an implementation, all using different ISAs and features, i.e. MMU or no MMU.

...4 years later...

transcoders

What about OpenCL? Is it a meme?

For sure, I guess I put it badly. The problem stems from a lack of a standard across the major GPUs and them being so closed off. I have found in my experience working with video that the GPU produces worse results or is very limited in what it can do in my tool chain. I would off-load what I could to the GPU but often would have to choose to do things on the CPU because the results weren't acceptable to my eye. In filter chains there were only specific things that could be off-loaded to the GPU without causing quality issues.

Also, as the years roll on I've found less need to off-load to the GPU while doing final encodes, simply because when the goal is quality, waiting a bit longer isn't as much of an issue. CPUs are really fast now compared to when I started doing this years ago. Gone are the days of waiting 24 hours for 25 minutes of video to finish encoding. Now in an area where time is crucial, say live streaming or moving lots of video around in real time like in a studio, I could see where the trade-off might be worth it. Even then the GPU path needs to be a lot faster than the CPU to make it matter enough to be worth the degradation in quality.

These days I'm sure the days of making filters through scripting and tuning the encoder by hand are long past. I get the impression that most people are copy/pasting ffmpeg commands or using a full-fledged GUI to do production work. Which is fine, I used plenty of GUI-based software too, but there was always a script there to glue the various tools I used together. I feel like the proper entry point to this stuff was lost in the last decade or so as well. I know in the various fansub/warez scenes it's rare to find groups doing anything but ripping a show off a streaming service or doing basic Blu-ray/DVD rips. Folks aren't working with old source material that needs to be cleaned up much anymore.

OpenCL is like C for GPUs. A developer writes an OpenCL library for a GPU and then a user can call OpenCL functions after importing the OpenCL header files, just like a developer writes a C library for a CPU and then a user can call C functions after importing the C header files. It's just middleware to the assembly code, but it is standardized middleware, making it easier to target a bunch of different GPUs. But OpenCL is an unoptimized piece of shit because AMD has paid professional solutions for OpenCL, so they don't optimize the FOSS solution. Nvidia is the same since they have CUDA. The nouveau/redhat developers haven't even implemented OpenCL publicly for nvidia GPUs anyways. Intel has a semi-decent OpenCL implementation, but it's not optimized as well as it could be, and of course intel IGPUs are slow unless you get the newer botnet/blobbed ones.
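For the curious, a minimal sketch of that "C for GPUs" idea using pyopencl; the kernel is a trivial per-pixel invert purely for illustration, not part of any codec, and it assumes pyopencl plus a working OpenCL runtime are installed.

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()   # pick whatever OpenCL device the driver exposes
queue = cl.CommandQueue(ctx)

frame = np.random.randint(0, 256, size=1920 * 1080, dtype=np.uint8)  # fake luma plane
mf = cl.mem_flags
src = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=frame)
dst = cl.Buffer(ctx, mf.WRITE_ONLY, frame.nbytes)

program = cl.Program(ctx, """
__kernel void invert(__global const uchar *src, __global uchar *dst) {
    size_t i = get_global_id(0);
    dst[i] = (uchar)(255 - src[i]);
}
""").build()   # the OpenCL C above is compiled at runtime for whatever device was picked

program.invert(queue, (frame.size,), None, src, dst)   # one work-item per pixel
out = np.empty_like(frame)
cl.enqueue_copy(queue, out, dst)
assert np.array_equal(out, 255 - frame)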

There's plenty of gigakikes into the transcoding scene, as youtube has ways of detecting the slightest changes in a banned video and banning it again. So it's a rat race between noobs discovering tricks, or the man page, of ffmpeg and various codecs. Or things like encoding a video in one format but making it turn into a completely different video encoded in another format. You're a scenefag, you should know this stuff.

Does Vulkan have any hope of offering a proper GPGPU compute standard with non-gimped implementations at some point in the future?
Are fully fledged video decoders running almost entirely on the GPU with no decoding ASIC present science fiction?

It depends on how well the assembly-to-library implementation takes advantage of the hardware. Technically there is already a way to go from assembly > opengl/vulkan > whatever. Just convert it to SPIR-V and then go wherever the fuck you want, as long as it goes back to hardware via something. See mesa's clover and LLVM's SPIR-V work for implementations: llvm.org/devmtg/2017-03//assets/slide/spirv_infrastructure_and_its_place_in_the_llvm_ecosystem.pdf archive.fo/RLpAS . Anything that has an assembly > gallium implementation in mesa can do opengl/vulkan > SPIR-V > whatever you want, which will make GPGPU offload for anything a reality for FOSS. You could do something obscure like opengl/vulkan > SPIR-V > javascript (why the fuck would you do this?) if you really wanted to, and it can go in reverse too.

Just make sure to use it correctly, as some things are better left to an x86/dedicated CPU, such as single-threaded apps with x86-specific math instructions. Some assembly to *insert library here* implementations are going to be better than others, like the botnet AMD GPU-to-vulkan implementations being better than their opengl ones.

Depends on who you believe. For encoding in particular, some parts of the process (motion compensation, quantization) multithread nicely on GPUs, while others (entropy encoding) are serial operations ill-suited to GPUs.

According to x264's devs, the bulk of CPU time in an encoder is spent on serialized workloads, so bothering to accelerate the other parts isn't worth their effort. But reading their invective on the subject, another explanation that leaps to mind is that they're just too assblasted about all the APIs Intel/AMD/nVidia have provided over the years to take advantage of them.
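To make the "serial in nature" point concrete, here is a toy adaptive coder in Python (my own simplification, with no renormalization or bit output, so it is not real CABAC): every symbol mutates the probability model, so symbol N cannot be coded until symbols 0..N-1 are done, which is exactly what keeps this stage off the GPU.

def encode_interval(bits, lr=0.05):
    p_one = 0.5            # adaptive estimate of P(bit == 1)
    low, span = 0.0, 1.0   # current arithmetic-coding interval
    for b in bits:
        split = span * (1.0 - p_one)
        if b:
            low, span = low + split, span - split
        else:
            span = split
        p_one += lr * (b - p_one)   # model update -> serial dependency chain
    return low, span   # any value in [low, low + span) identifies the whole bit sequence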


Open sores implementations of proprietary codecs have been a thing for a while
At least you eventually get around to admitting that SPIR-V exists


Gee, great insight grandpa. Upboat.


That's not what the post you replied to was talking about, namely GPGPU (which executes arbitrary code). What you're talking about here is fixed-function ASIC SIP cores like Quick Sync, NVENC, or VCE.

Attached: 1280px-AMD_VCE_hybrid_mode.svg.png (1280x720, 88.77K)

That picture tells you nothing of how it works. Picture goes into magical bullshit engine and something comes out as data on the other side.
What you neglect to mention is that those functions are not always using dedicated hardware and sometimes get generalized to the GPU, because the hardware dev wanted to save money and the customer can't see the difference as long as it's 1% faster than the last generation thanks to software optimization of the same exact hardware rebranded several times.
Now I know you are larping and a troll: system-in-package, also known as a spread-out SoC, has nothing to do with dedicated math functions in hardware. Just because you throw out a buzzword doesn't mean that buzzword is actually using dedicated hardware. You have to prove it does or doesn't, like you can with FOSS software.

Enter every GPU made past 2012, just rebranded shit that is the same hardware underneath with minor changes or with software changes for respective vendors.

It does if said functions are hidden behind proprietary firmware.

It tells you what can be offloaded to the GPU in a best-case scenario, and what can't, which was what I was talking about with GPGPU encoders.
That's kinda' exactly what that pic is an example of, y'know? Anyways, my point was that what you're talking about is (at least in part) a fixed-function encoder, which strictly MUST impose limits that don't exist for CPU encoders. Whereas a GPGPU-based encoder could function identically to any software encoder, because GPGPU can execute arbitrary code (although, of course, architectural differences mean that a GPGPU encoder may be faster or slower when using different settings and features compared to execution on a CPU).

Attached: FriezaOwnAttack_2703.png (244x350, 28.27K)

github.com/OpenVisualCloud/SVT-AV1
github.com/OpenVisualCloud/SVT-VP9

Nobody has mentioned Intel's SVT, I see.
Fortunately for you goys, it runs on AMD hardware too.

Attached: 1.png (2400x892, 194.79K)

How does it compare quality wise to the reference AV1 encoder?

How in the world did that happen?
9900k behind TR and the i9 three times as fast.

Netfux will adopt AV1 for livestreaming now that Jewntel's SVT-AV1 encoder has managed to encode AV1 at 1080p60 with abhorrent visual quality.
How would HEVC have performed as a general Interwebs streaming codec in an alt-timeline where it had a much less kiked patent licensing scheme?
It should've completely trashed VP9 for encoding 16MB mp4s given its better quality/speed tradeoffs at settings above placebo.

Does nobody use Divx or Xvid?

Attached: LwnVfyWs3UijKKKRT8hDx7-970-80.jpg (900x506, 31.49K)

People are working on applying deep learning techniques to video and image compression.
AV1 will be obsolete by 2020.
cs231n.stanford.edu/reports/2017/pdfs/423.pdf
arxiv.org/pdf/1703.01467.pdf

Haha very funny. How do you want to decompress that without a format that has strict rules?

Hi grandpa.

Who cares? They also require some DRM bullshit in browsers.

Either Mods are asleep or glow in the dark.

Attached: CIA niggers.mp4 (1280x720, 468.32K)

Only it is.
FPGAs require software to configure the hardware, therefore software decoding is the future.

They're way too busy bumplocking meaningful threads (such as ) the moment they appear. What did you think?

For cuckchan /g/ perhaps, not here.
...and don't say the thread doesn't fall under QTDDTOT, because it does when the OP makes zero effort.

You're an imbecile.
Investing in software decoding is asking the hardware to scream its guts out while performing badly, while native hardware decoding just makes it part of its nature.

There isn't a single instance where software decoding is superior or lighter on resources or battery than hardware.

Attached: (you).png (1671x80, 9.29K)

Except the part where you want to do something to the image after decoding. Unless your codec is extremely heavy, having to copy back to the main memory will nullify the performance gains.
In the end, it's only for laptop users and phone zombies that this is useful.

Install a codec pack and have both then; now stop arguing nonsense, because software decoding strains the hardware more and renders your arguments useless, childish, ignorant.

It's not native. GPUs are often connected via PCIe which you can hardly call anything but an extension.
And when not used for graphics there's more available for other computing tasks. Thus the hardware can be fully utilized.
Software decoding is artifact free. Are you trying to tell me that hardware decoding isn't?
Lighter on resources or battery?

Maximize utility and make a good fucking processor now!

What I'm trying to say is that by using general instructions for everything, you have more general resources you can use for everything, instead of just that one task which might even be surpassed one day.

How cute.

Meanwhile in mpv user manual...

How cute

>he needs a precompiled codec pack with a botnet binary (((installer)))
>he doesn't compile libvpx and libaom with PGO to get that 10% speed increase
Why the hell hasn't chodemonkey enabled AV1 support after all this time?
The codec is more than viable for basic webbming purposes at this stage, long encoding times be damned.

Attached: Terry_where_it_all_went_wrong.webm (1280x720, 1.39M)

I'm going to use h.264 High 4.2 profile with a 422p pixel format forever until literally everything supports something better and I can encode to it at greater than 1 frame per core-hour on a half decent CPU. h.264 just fucking works right now and the only people who give a single fuck about Hi10 are manchild Tiananmen Square documentary connoisseurs who throw an autistic panic over extremely mild banding in zoomed in screenshots. The only people who care about h.265 are fucking morons who want to gain an encumbered codec dependency in everything again. The only people who care about AV1 are people who have literal datacenters of compute they can throw at the few things which benefit from the reduced bandwidth i.e. Google and Netflix. h.264 master fuckin race, fight me.
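For reference, a roughly equivalent ffmpeg invocation for that High 4:2:2 Level 4.2 workflow might look like the sketch below; the filenames, CRF and preset are placeholders rather than the poster's actual settings, and it assumes an ffmpeg build whose libx264 supports 4:2:2 output.

import subprocess

subprocess.run(
    ["ffmpeg", "-i", "master.mov",
     "-c:v", "libx264", "-profile:v", "high422", "-level", "4.2",
     "-pix_fmt", "yuv422p", "-preset", "slow", "-crf", "18",
     "-c:a", "copy", "out.mkv"],
    check=True,
)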

based except for H265. Non muricans don't have to worry about patents.

I just want a codec efficient enough to livestream griefer attacks in minecraft at 90 kb/s.

test

(checked)
classic terry.. love it

Well, Xvid is still the only sane CPU-friendly (software en/decoding) libre codec option today. And it is a good choice for high-quality, high-bitrate video (where the newer codecs don't bring much improvement).
But for those who need low bitrate and "ok" quality, the other codecs surely provide better results.


They are either hardware accelerated or take up quite a large amount of CPU.
You don't notice because you are living in your mom's basement and not paying the electricity bills. On a global scale, a 10x increase in CPU clock cycles can really add up.

Seems like you just associate Xvid with AVI-container, CD-sized P2P releases and have never seen/tried/known about its advantages.

Some japanese websites still rely on Flash for video streaming, you shithead.
They're huge?
It always does. If it bothers you, you could just compress less lmao. Has nothing to do with the format. And if you invested the money going into hardware acceleration into CPU power, we'd have better performance for things other than watching movies too, you kike.
I'm not one of the people who would write: "muh your just poor" but the children of the people here in Thailand have no problems viewing videos on their Chinaphones, so you're really just poor.

I'm on vacation in Thailand, on an island, you faggot. Even if Xvid were streaming-friendly, your shitty compression wouldn't deliver.

As opposed to the additional network throughput and storage needed to shovel around bloated encodes in ancient codecs?

xvid/MPEG-4 ASP is as libre as H.264/MPEG-4 AVC to be completely honest - the tech is still patented. If you want a truly libre codec from the MPEG family, use MPEG-1 Part 2 or, to a lesser degree since it's still patented in irrelevant countries, MPEG-2 Part 2. Note the former will give you better performance at low bitrates, something people often forget.

Also, Theora is technically superior to MPEG-4 ASP in most use-cases, but the reference encoder and the one in ffmpeg suck tremendously as far as performance goes, worse even than xvidcore.
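For anyone who actually wants to try that advice, ffmpeg ships encoders for both; the flags and filenames below are illustrative placeholders, not tuned settings.

import subprocess

# MPEG-1 Part 2 video with MP2 audio, the period-correct pairing
subprocess.run(["ffmpeg", "-i", "in.mkv", "-c:v", "mpeg1video", "-q:v", "4",
                "-c:a", "mp2", "out_mpeg1.mpg"], check=True)

# Theora in an Ogg container via libtheora
subprocess.run(["ffmpeg", "-i", "in.mkv", "-c:v", "libtheora", "-q:v", "7",
                "-c:a", "libvorbis", "out_theora.ogv"], check=True)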

You realize that the cumulative effect of not compressing and not having hardware acceleration is indeed huge, right? That's exactly why big companies push for it, they have to stream that shit to billions of people.
To the individual, it's also beneficial to have them, as battery and disk space are saved. Oh, and you would be able to leave a video playing while doing something else.

By the way, every single computer these days accelerates H264, and that's why this format simply doesn't die and other alternatives like H265, VP9 and the new AV1 can't make a dent in it.

You can already do that, and if you invested the money in CPU power you'd be better off. Formats come and go.
Wrong. Anime torrent encodes use H265 (even 10- or 12-bit, because that somehow allows for stronger compression). The rest doesn't because of the patent issues. Google uses VP9 on the biggest video platform on the internet. Everyone here uses VP9 because it gives the best quality achievable in 16MB with the video formats currently supported by web browsers. AV1 isn't even finished. They finished the format but not the encoders and decoders, which are the most essential part.

He doesn't know about the end of Moore's law. He doesn't know people have better things to do with their money than throw it into highly obsolescence-prone products.
And that's precisely why they built AV1 to be a standard for all future codecs to be built on, so that they would be hardware accelerated. This is one of the key points of AV1.
Just because the fringe uses it doesn't mean it got adopted outside.
You see, you're just incredibly biased because you live in a bubble.

Attached: managing-transition-to-hevcvp9av1-with-multicodec-streaming-8-638.jpg (638x359 49.75 KB, 27.12K)

Not an issue, media codecs are embarrassingly parallel.

Doesn't mean anything, really. Same problems.

Yes, just see the situation we're in now.

MPEG4 ASP is a standard. Xvid is a libre codec that encodes compliant with that standard.
MPEG4 AVC is a standard. H.264 is a patented codec that encodes compliant with that standard, and includes heavy and idiotic restrictions such as: you are expected to pay licensing royalties if you want to monetize any of your videos encoded with it.

MPEG4 AVC and its algorithms are not CPU friendly; therefore they require acceleration by patented ASIC hardware.
AV1 may be libre, but because it'll most likely require hardware acceleration, it will not be usable on libre hardware for a long time.

AV1 is just barely past feature freeze, and all available software codecs are still completely unoptimized. Compare, for instance, the x265 HEVC software encoder, which gets similar compression ratios to AV1, but performs at hundreds of times the FPS.
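A rough way to sanity-check that speed claim yourself is to encode the same clip with ffmpeg's libaom-av1 and libx265 wrappers and compare wall-clock time (a fair comparison would also need VMAF/SSIM on top). The input file and encoder settings below are placeholders, and libaom has been getting faster since this was written.

import subprocess, time

def encode(codec, extra, out):
    cmd = ["ffmpeg", "-y", "-v", "error", "-i", "clip.y4m", "-c:v", codec, *extra, out]
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

t_av1 = encode("libaom-av1", ["-crf", "30", "-b:v", "0", "-cpu-used", "4"], "out_av1.mkv")
t_hevc = encode("libx265", ["-crf", "23", "-preset", "medium"], "out_hevc.mkv")
print(f"libaom-av1: {t_av1:.0f}s   libx265: {t_hevc:.0f}s")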

Except they are, completely so.
en.wikipedia.org/wiki/List_of_Qualcomm_Snapdragon_systems-on-chip#Hardware_codec_supported
en.wikipedia.org/wiki/Video_Core_Next
en.wikipedia.org/wiki/Nvidia_PureVideo
en.wikipedia.org/wiki/Intel_Quick_Sync_Video

imagine if all the mentally ill trannies learned to love and accept their god given penises

No, retard. They're both exactly the same and you're talking about x264.
Prove it. The biggest gain of AVC over ASP is CABAC, which is not easy on GPUs and ASICs since it's serial in nature (like most entropy coding).

Not what was being discussed

Kek, that's called dripfeeding, retard.
Acceleration for certain codecs or tasks is way more "obsolete-prone" than CPU power ever will be.
The fringe is me. I don't watch netflix or TV but anime etc.
I really don't care what normies use, and considering they were lucky to get 720p TV all these years, they don't care either.

This.
More cores, more efficiency, and less heat will be the future. Waiting for those ARM machines.

why would you?

...

You're an embarrassing redditor trying to fit in.

Has anything happened regarding the (((Sisvel))) VP9/AV1 patent pool?

I wouldn't be so sure of that should Jewntel discover the wonders of NVCTs.