AMD GPUs

Will we finally see a GOOD dedicated FreeSync GPU out of AMD?
Or will we have to wait for variable-programmable GPUs from Intel?

Attached: 1541364753411.png (479x427, 138.83K)

Next gen is AMD's, specifically with 7nm Zen 2 and Navi.
But then, will this shit have AV1 hardware acceleration?

Haha, no.

I'm expecting their Navi GPUs to match the current 2080Ti Nvidia GPUs. I'd love AMD to dethrone Nvidia but I can't remember a time (ever...) they have been better at the high-end. I'll continue to only buy AMD though, because they have FOSS drivers.

What do you mean by variable programmable GPU, OP?

I believe Intel was trying to make a GPU based on the x86 architecture.

Larrabee was abandoned years ago (not least because it was a retarded idea)

What's so bad about the RX 570, for example, if you take price into consideration?

some features are only available in closed source drivers though, so it's not completely fair

The only thing they lack is FreeSync and a control center, right? They're working on the former.

The open drivers aren't really open all the way down. What they did was essentially hide the important bits on the card, then interpret them in the kernel. Clever, but it still means running proprietary code in your kernel.
wiki.osdev.org/AMD_Atombios

You don't need a control center. This does all you need; Mesa and the drivers do the rest.
github.com/marazmista/radeon-profile

Why would anyone want hardware acceleration on a desktop machine? That's retarded; you'll have to copy it back into RAM to apply any filters (this includes debanding and good scaling) and wade through the sea of shit that is the VDPAU/VAAPI situation.

Because of NoVidia's kikery. It's fucking retarded that I get more FPS on a GTX 1050 than on an overclocked RX 480 in a game like GTA V.

I really hope AMD's 7nm stuff lives up to the hype, but I feel like Nvidia is just too far ahead when it comes to gaming.

A mid-generation filler chip a la GCN 1.2 might include a HW decoder, but only if current SW browser decoders get their performance on track for smooth 720p30 playback at minimum.

The concept lives on in Intel's manycore Xeon Phis, which are essentially GPUs without dedicated geometry hardware, used for supercomputers and shit, competing in the same market segment as Nvidia's Titans.

What does that have to do with variable-programmable CPUs?

*GPUs

Sorry I just don't know what variable programmable means ;_;

I think that was just the OP being retarded and not realizing all GPUs since at least 2005 are programmable

How can a GPU not be programmable?
That's retarded.

Most GPUs before 2005 had fixed shader pipelines; they were essentially ASICs, believe it or not. They weren't programmable at all: the geometry stages and the shading itself were basically hardwired into the GPU silicon. It wasn't until more recently that the fully programmable shader model became a thing, where instead of a fixed pipeline we have a general-purpose processor with hundreds to thousands of execution units, and shaders are now basically small programs that run on those units.
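To make that concrete, here's a minimal sketch (my own illustration, assuming an OpenGL context is already current and GL function pointers are loaded, e.g. via GLEW): the shader really is just a small program, a string of source code the driver compiles at runtime for the GPU's execution units.

#include <GL/glew.h>
#include <stdio.h>

/* A trivial fragment shader: one small program executed per pixel on the
 * GPU's general-purpose execution units instead of fixed-function hardware. */
static const char *frag_src =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLuint compile_fragment_shader(void)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &frag_src, NULL); /* hand the source string to the driver */
    glCompileShader(sh);                    /* driver compiles it for this particular GPU */

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(sh, sizeof log, NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return sh;
}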

Unnecessary if the bandwidth isn't even utilized to the full extent.
And no, there probably won't be performance gains, just like there weren't when moving from 2.0 to 3.0.

It's likely that PCIe 4.0 isn't even about speed at all, but purely about palatalization and bigger utilization of bandwidth. Right now the biggest bottleneck in large system integration isn't bus speed, but bus bandwidth and how many PCIe lanes a CPU supports. That's likely what this version will be addressing; in other words, the consumer will see zero benefit over 3.0.
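For a rough sense of the raw numbers (my own back-of-the-envelope sketch, protocol overhead ignored): PCIe 3.0 runs each lane at 8 GT/s with 128b/130b encoding, and 4.0 doubles that to 16 GT/s.

#include <stdio.h>

int main(void)
{
    /* Raw per-lane rates: PCIe 3.0 = 8 GT/s, PCIe 4.0 = 16 GT/s,
     * both using 128b/130b line encoding. Protocol overhead ignored. */
    double gen3 = 8.0  * 128.0 / 130.0 / 8.0;   /* GB/s per lane, per direction */
    double gen4 = 16.0 * 128.0 / 130.0 / 8.0;

    printf("PCIe 3.0 x16: ~%.1f GB/s per direction\n", gen3 * 16);  /* ~15.8 */
    printf("PCIe 4.0 x16: ~%.1f GB/s per direction\n", gen4 * 16);  /* ~31.5 */
    return 0;
}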

*parallelization

I think I kinda get you, but since I'm a noob at GPUs, I'd really appreciate it if you'd send me some links to Wikipedia or some shit for future reading.

*further fuck me

en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units
en.wikipedia.org/wiki/Shader
en.wikipedia.org/wiki/Compute_kernel

also one more if you want more information
en.wikipedia.org/wiki/Triangle_strip This is what modern fully programmable pipelines typically use to render out complex geometry
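As a quick illustration of why strips are cheap (my own sketch, not from the article): after the first two vertices, every additional vertex completes a new triangle, so N triangles need only N+2 vertices instead of 3N.

#include <stdio.h>

/* Four 2-D vertices submitted as one triangle strip. */
static const float strip[][2] = {
    {0.0f, 0.0f},  /* v0 */
    {1.0f, 0.0f},  /* v1 */
    {0.0f, 1.0f},  /* v2 -> completes triangle (v0, v1, v2) */
    {1.0f, 1.0f},  /* v3 -> completes triangle (v1, v2, v3) */
};

int main(void)
{
    /* Every vertex i >= 2 forms a triangle with the two vertices before it.
     * (Real GPUs also flip the winding of every other triangle so all faces
     * keep a consistent orientation.) */
    for (int i = 2; i < 4; i++)
        printf("triangle %d: (%.0f,%.0f) (%.0f,%.0f) (%.0f,%.0f)\n", i - 2,
               strip[i - 2][0], strip[i - 2][1],
               strip[i - 1][0], strip[i - 1][1],
               strip[i][0], strip[i][1]);
    return 0;
}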

I'd say that AMD has gained a shitload of traction in their high-end processors lately, but their GPUs still seem to be lagging.

What are you talking about, nigger? There's only libaom and rav1e.

WEW
Gee I wonder why they still need binary blobs


Really, Satan? Shilling for this?
This is why you are a faggot.


Power savings.


You're all faggots. AMD gained traction because they began making advertisement HYPE just like Intel does, and they also used the press shilling around the Intel ME to their advantage.
Intel is still a bigger botnet than AMD, but AMD will be a botnet as long as all the hardware/software resources stay closed off from their rightful buyers/owners.
They used the

Thanks user

Attached: 1540965364291.jpg (384x384, 96.71K)

Power savings only matter on portable devices, though. At least, they do when the task considered (video playback) isn't run all the time in the background.

Do you even pay bills?

I do, that's why I said unless it's run all the time. Consuming a little bit more to view a video won't change anything. Especially since CPU power consumption isn't that low unless you're in deep ACPI power states; something that won't happen when viewing a video, even with hardware decoding.

Just finished my shit and will read through it.
I am confused because I thought CUDA and shit was an instruction set for GPUs and that you'd write to the GPU via MMIO over PCIe.
But I guess that's totally wrong, which is why I want to dive into GPU internals.

You'd save more on your power bill by leaving the lights off.

Yeah, we should just let them enslave us totally? The reason I like the "FOSS" (in quotes so you won't go apoplectic this time) drivers is that they don't give me any problems and they have Windows-like performance. Have you forgotten how terrible the proprietary drivers were for AMD? They seriously had half the performance of the Windows drivers and had so many compatibility issues. AMD is aggressively marketing their PSP as more secure than the ME. I'm obviously not going to trust them, but the most secure you can be on a modern machine (Talos aside) is a Ryzen system with an ASRock motherboard. ASRock allows you to isolate the PSP from the OS using their BIOS.

ASRock gives a knob for the retards like you who don't understand that the BIOS simply can't do that.

Does nobody know the difference between drivers and firmware anymore? Not defending AMD's blobs, but the DRIVER is free.

WEW, nice fallacious interpretation to win the argument.
I'm a Trisquel user; I have never used proprietary software on a GNU/Linux OS. With AMD GPUs I never had 3D acceleration, because you never released the documentation needed for the software to be written, or didn't write the software at all. That's why I don't buy AMD. I only buy low-end Nvidia GPUs (30 freedombucks max), because those are the only cards that give me 3D acceleration with Nouveau.
Want me to buy AMD? Release the blobs under a license that perpetuates freedom and makes the buyers of the hardware their owners, and you've got a deal.
That is not the problem. The PSP or ME wouldn't be a problem if they were 100% controllable by the users/owners of the hardware, but they aren't. If someone buys a computer, they own it; they shouldn't be restricted in what they can do with it.
For you.

M8, he's paid to promote AMD, that's all. As long as AMD won't make their hardware freedom-respecting like in the 70s, there's nothing to trust.

It's on the die itself, I understand your point--it's not verifiable. But shitting your pants over it is pointless. Buy a Talos.

So I'm a paid shill? I like AMD more than Intel sure, but they're not seraphic in any sense of the word.
Really, you freetards are so out of touch with reality. You're acting like I've acquiesced to slave chains... there are BILLIONS of computers running with the IME/PSP/Spectre/Meltdown/Rowhammer vulnerabilities, but the world is still spinning. Serious question: How do you ever do anything productive?

Attached: PSP.png (723x501, 21.44K)

I just use a MacBook with Chrome; the Genius at the Apple Store told me that I wouldn't get any viruses this way. Jokes aside: if you want real security, your computer should be air-gapped, inside a Faraday cage, and running from an independent power source.

Attached: web spider.jpg (3264x3264, 1.22M)

Most GPUs have static (fixed-program) unit cores that only do one thing.
That means that 40% is always going towards geometry, 20% towards shaders, etc.

Intel's GPU is a pure generic CPU; everything is software, except I guess they make the software fast somehow. This allows you to program the cores to be 90% shaders, or 0% shaders, or to really prioritize a certain special effect.
There are some good Jewtube videos, but I'll have to find them.

Apparently you don't. Firmware runs on some sort of coprocessor. What AtomBIOS is is the card sending bytecode to your driver, which must be executed in the kernel, on the CPU, to complete the various commands.
I would hope (and assume) that you'd be sufficiently pissed if I produced a peripheral card for you with a free driver that was just the snippet:
while (1) *readaddr() = readbyte();  /* forever: write device-supplied bytes to whatever addresses the device names */
For all you know this is currently rewriting the entire kernel image before overwriting the loop itself with a jump to the new code.
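To spell out why that one-liner is scary (a toy illustration of my own; these are not real AtomBIOS opcodes): interpreting device-supplied bytecode in the kernel means the device, not the kernel, decides what gets executed and where the writes land.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical opcodes, for illustration only. */
enum { OP_END = 0, OP_WRITE32 = 1 };

/* A toy interpreter for card-supplied bytecode, running on the host CPU.
 * Whoever authored the bytecode chooses the addresses being written. */
void run_table(const uint8_t *code, size_t len)
{
    size_t pc = 0;
    while (pc < len) {
        switch (code[pc++]) {
        case OP_WRITE32: {
            uintptr_t addr;
            uint32_t val;
            if (pc + sizeof addr + sizeof val > len)
                return;
            memcpy(&addr, code + pc, sizeof addr); pc += sizeof addr;
            memcpy(&val,  code + pc, sizeof val);  pc += sizeof val;
            *(volatile uint32_t *)addr = val;  /* blind write wherever the blob says */
            break;
        }
        case OP_END:
        default:
            return;
        }
    }
}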

I have an ASRock board and a Ryzen; whenever I disable the PSP in the BIOS, sleep doesn't work.

Are a blatant stopgap solution
>I'd love AMD to dethrone Nvidia but I can't remember a time (ever...) they have been better at the high-end.
Radeon 9700, and to a lesser extent X850. GeForce FX & GeForce 6 were nVidia's Pentium 4 moment.


Yes, but their current "Arctic Sound"/"Jupiter Sound" dGPU architectures are more conventional designs.


Because current software AV1 is completely unoptimized. Having an ASIC on hand now would at least make AV1 usable while we wait for software to catch up. Really though, what we need is an FPGA in every device, a solid API to dynamically load layouts, and an intermediate recompiler to convert C into layouts. Fixed-function ASIC & GP CPUs alone aren't enough.
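Existing high-level synthesis tools already give a taste of that "C into layouts" step; here's a minimal sketch in the style of Xilinx Vivado HLS pragmas (the pragma syntax is toolchain-specific, so treat it as an assumption):

/* Plain C that an HLS tool can turn into a pipelined hardware block;
 * the pragma follows Vivado HLS style and varies between vendors. */
void saxpy(const float x[1024], const float y[1024], float out[1024], float a)
{
    for (int i = 0; i < 1024; i++) {
#pragma HLS PIPELINE II=1   /* request one result per clock cycle */
        out[i] = a * x[i] + y[i];
    }
}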


Really, if a shift to full RTRT (not the hybrid rasterization solutions nVidia is pushing) were done, incorporating non-polygonal geometry and non-bitmapped textures, the flexibility offered by such a design would've more than outweighed the decrease in overall computational power.

It's a shame the PS3's original design, using twice the number of Cell SPEs instead of an old-fashioned dedicated GPU, wasn't used, as that was probably the last point at which such a system would've been able to match GPUs in competing platforms (XB360) with a software renderer as a fallback solution for lazy devs & ports, while pushing wholly new types of graphics impossible on a GPU.


You realize screeching about "muh firmware blobs" is retarded, since the ASIC itself can encode identical arbitrary functionality, without any possible way to patch or analyze it, right?

If you actually care about "muh blobs" beyond kernelspace drivers, the only logically consistent position is to screech just as autistically about open sores VHDL files.

Are you me? I've been saying this for years, and I think (or rather hope) that Intel is planning something in regard to this. Intel isn't as dependent as Xilinx is on their synthesis toolchains for revenue, so hopefully they can make some progress there.

I've been following Intel's antics with Altera for a while. The presence of an on-die FPGA in future Xeons along with a suite of tools to encourage their exploitation could be a massive game changer for the design of software and hardware alike.

Attached: intel-xeon-sp-integrated-fpga-chip-logo-bw-678x381.jpg (678x381, 44.73K)

High-level synthesis is for fags; just write VHDL.

My ideal would be to have FPGA floorspace managed like CPU cycles, in such a way that if you lack an FPGA or don't have one that meets requirements, routines would be executed in software instead. This would mesh with the ability to swap out virtual SIP cores on the fly, as you run different programs.
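A rough sketch of that idea in plain C (every name here is hypothetical; no such API exists today): the runtime tries to claim FPGA floorspace for a virtual SIP core and silently falls back to software when it can't.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical runtime API -- nothing like this exists today. */
bool fpga_try_load(const char *core_name);               /* claim floorspace for a virtual SIP core */
void fpga_call(const char *core_name, void *buf, size_t n);
void sha256_software(void *buf, size_t n);               /* ordinary CPU fallback */

/* Dispatch: use the FPGA core if floorspace is available, otherwise run in
 * software, the same way a scheduler hands out CPU cycles. */
void hash_block(void *buf, size_t n)
{
    if (fpga_try_load("sha256"))
        fpga_call("sha256", buf, n);
    else
        sha256_software(buf, n);
}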

I've not seen any good-as-HDL HLS implementation yet, but let's not pretend that VHDL or SystemVerilog are good languages. They are truly god-awful, with every tool vendor supporting a swiss-cheese subset of each language's features. I hope that if a FOSS-revolution comes to FPGA development, it comes with a new language too.

Only newfags can say that. There's a reason hardware video _encoding_ never took off: even AVC is way too complex to be done as well as x264 at a reasonable price (and the really time-consuming stuff is already done in SIMD; SAD has a fucking dedicated instruction on amd64).

I asked about the RX 570, not the 480.
Also, GTA probably has Nvidia-specific optimizations; try some other tests to get less biased results.

Are you saying it like it's a problem?

Also OpenCL.
FreeSync is also a very important feature.

I'm not sure what you're really arguing here, but I'll just say that claiming there's no use for video encoding/decoding acceleration seems kinda lame. Try encoding HEVC with HandBrake vs. using ffmpeg with libva on my Raven Ridge system, for instance: the speed is something like 3-16 fps with HandBrake vs. 125 fps with hardware acceleration. There's no doubt that better encodes can be done with HandBrake, but sometimes, or even most of the time, it's just not worth it.
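For reference, the VAAPI path described above boils down to something like this (the render node path is an assumption and differs between systems):
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv -vf 'format=nv12,hwupload' -c:v hevc_vaapi output.mkv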

For the love of... IOMMUs exist. This is tantamount to saying any process can access all memory because your machine predates MMUs.

Explain to me why exploits in firmware are worth caring about, but exploits in the hardware that hosts that firmware aren't.

Both rely on you trusting the exact same entity.

That's what you don't get; it's not merely "better", hardware encoding really produces shit results.
And your use case doesn't exist: why would you use H265 if it's not to compress as well as possible? Using x264 would be better than HW HEVC.

Most people's smartphones are still not able to HW-decode HEVC.
Tell one of those people to, idk, torrent some HEVC movie and try to play it.
First, their phones wouldn't be able to pull this off on their own, so they would have to use MX Player Pro with codecs.
Then, once they have it with all the settings and speed tricks on, they would realize two things:
1. The phone runs like a toaster and the battery drains like the Dutch drainage system.
2. The video stutters and drops frames a lot with SW decoders, making the shit unwatchable.

Then people will start to realize why AV1 hasn't been adopted en masse yet, as services like YouTube or Netflix would be bringing the shittiest experience to people on mobile, and an ordinary video opened in, say, Chrome would simply freeze the whole computer, especially if it's a low-end one or one with many applications open.

It would be a nightmare, and AV1's sheer complexity makes it dead without this HW acceleration support.

en.wikipedia.org/wiki/Compartmentalization_(information_security)

The only difference between the two is that assuming you know the target ISA(s) you can run firmware through a decompiler to analyze it, whereas you need to x-ray an ASIC or blackbox it to probe for hardware vulns.

As I said, the people who write the firmware and the people who write the VHDL are the same.

You didn't read the thread, right? I said it only makes sense for decoding on portable devices. Goyphone users will eat the shit coming out of Google's or Netflix's kike ass anyway, so if they decide to use AV1, they'll just have to shut up and cry about it.

Hey, why not just give unfiltered DMA access through your Ethernet port while you're at it? I mean, you basically have to trust the other party with all the data you send explicitly, so why not simply trust them with all the data in your entire system?

Zen 2/Ryzen 3 is going to be fucking BONKERS!

Now if only RTG got their shit back on track, though... where's my GTX 1080 competitor for $300?

Maybe that's in the eye of the beholder or something. There have been times I gave hardware acceleration on Linux a go and did get shit results, but the latest code doesn't just output garbage, and the engine in the Ryzen 2400G doesn't have older limitations like missing B-frame support.
Maybe, maybe not.

VRR is a fucking meme. Why would you buy a $600 monitor to fix screen tearing when you have much worse problems on any LCD in the first place? Also, you don't need FreeSync if you can do beam racing, which your shitty nu-games can't do because they can't even reach 60 FPS in the first place.

I long for a future where my at least 8k QD-LED monitor maxes out at at least 500Hz and does VRR.

By the time OLED monitors exist, they'll probably be available in 8k.

You aren't proving me wrong, though.
Who said anything about purity?
Purity isn't the question here; it was never about purity, it's about having choices, and AMD doesn't give anybody that.
And you haven't? What was your choice again? Buy AMD and shill it? You didn't buy a Libreboot computer or an SBC? Or one of the few other solutions?
And your point is? That people buy hardware that isn't vulnerable to it out of fear? Is that a good reason? If it's just fear, then people should apply the patches Intel is giving out.
I don't play any games.

>And you haven't? What was your choice again? Buy AMD and shill it? You didn't buy a Libreboot computer or an SBC? Or one of the few other solutions?
The fact that you think this is hilarious. Trannyboot is a veritable failure; running a buggy x86 CPU without microcode updates is moronic. It also can't even offer HALF the operating power of my Ryzen system. And I need modern GPUs because I use a 4K monitor--not because I like wasting money, but because more pixels = more productivity. Why are we at the point where me saying AMD is a good deal (and investment, because AM4 will be supported a long time) is considered shilling?
The point is that you're incredibly out of touch and you won't get anything done if you mewl to your boss/bureaucratic authority like that. You seem like a student or a NEET; you're going to have a rude awakening soon.
I play like 3 games a year, and they're all arcade games that work with MAME.

No it isn't. You have no idea and have probably never experienced VRR in practice.

Beam racing and VRR aren't mutually exclusive.

Beam racing reduces lag, that's it.

VRR additionally deals with mismatches between the frame rate at the source and the (evenly divisible) refresh rates at the display.

You could theoretically do VRR on a CRT, but you would also have to adjust beam intensity as a function of scan speed, otherwise the display would visibly dim when FPS increased, and brighten when FPS slowed.

The microcode is here, just not updated.
Blame AMD and Intel, not the sane world.
Having a mostly good CPU with a bad GPU is a lot better than both bad. The GPU doesn't have something like ME/PSP nor access to your complete system.
It's spinning with the europeans being exterminated. And anyone with two neurons to rub together can see that computers play a big part in the brainwashing process, now.

Rationalizing opening your buttcheeks for the kikes is fine and dandy, but let's not pretend you have a real argument other than pleasure/comfort > freedom/efforts to get it.
Different user, by the way.

Most people buy computers to use them, not to make an ideological statement.
Libreboot is only ok at the latter.

FTFY

They've dethroned them before in performance, but regardless of how much cheaper or more powerful they make their GPUs, Nvidia always outsells them because the average consumer is retarded. Vids related:

hooktube.com/watch?v=uN7i1bViOkU
hooktube.com/watch?v=0dEyLoH3eTA
hooktube.com/watch?v=nVdUrUuDytQ

...