Brainlet gamer here, is RISC-V a viable alternative to x86?
Just saw the LTT video but I have more questions, and why is it stuck on a shitty 28nm process?

Attached: Screenshot_2018-08-23-11-46-00-24.png (1920x1080, 1.58M)

Other urls found in this thread:

cnet.com/news/four-years-later-why-did-apple-drop-powerpc/
nytimes.com/2005/06/11/technology/whats-really-behind-the-appleintel-alliance.html
venturebeat.com/2009/02/06/the-race-for-a-new-game-machine-book-chronicles-the-sony-microsoft-ibm-love-triangle/view-all/
www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-118.pdf
people.eecs.berkeley.edu/~krste/papers/waterman-ms.pdf
twitter.com/SFWRedditImages

No, RISC-V targets embedded applications, like ARM Thumb or Intel Quark. In theory, the ISA could be scaled up to match a lower-end ARMv8 or Intel Atom, but that would probably be a pretty significant effort.

That said, for something that isn't particularly demanding, like an RPi-type retrogaming device, it could be useful.

So is it gonna be used like a coprocessor for normal x86 anytime soon? Would be neat when Intel's new Larrabee-based GPU comes out.

Microcontroller is the word you're looking for. In its current state, RISC-V isn't up to par with ARM. It is free and open though. Nvidia and Western Digital are looking to work RISC-V microcontrollers into some of their products.

Wonder if the risc-v is gonna be used for gay tracing rtx meems

RISC-V and PowerPC are viable alternatives to x86 if and only if you only use emulators and open source games. No Wine, and no Steam for other platforms. It is possible to hack Wine to work in a way that lets you play on other architectures, e.g. RISC-V, but no one has implemented the QEMU functionality for it yet.

That's because RISC-V is light-years ahead of it, without the hardware backdoors and forced pay model for the same piece of silicon. Nvidia has already started replacing the Falcon microcontrollers in its GPUs with RISC-V cores.

It's a freetard ISA for companies to save money with and for the turd world. The ARM guys are scared as it has the potential to end their licensing business but I don't think anyone else feels threatened. It might take over phones some day.

Anyone used a SiFive board? I'd get one but they're like 50-60 bucks.

POORFAG DETECTED

BITCH I MADE 160K OFF LONG AMD AND INTEL SHORTS SO FAR JEWIN AINT EASY!

APOLOGIES, I MEANT:
(LONG AMD AND INTEL) AND TESLA SHORTS SO FAR

If it's open source they could still just write nasty code. I don't see how it's any better apart from not having as many instruction set extensions and no Management Engine.

I fucking cringed when I saw that video show up in my recommendations. Linus knows very little about tech beyond intermediate-level stuff and Windows.

inb4 we get flooded with more retards asking stupid questions about this.

Not right now. At the moment there are only 3 ISAs which people care about: x86, PPC64LE, and ARM. I can't be bothered going into details, but only hipsters care about RISC-V at the moment (from a use perspective, there are plenty of smart people who care about it from a development perspective). It's still lacking some really important things like vector extensions (currently being developed), which prevents it from gaining traction.


Because SiFive are virtue signalers rather than people who actually make products people want, and 28nm was the only thing they could afford. Even at 28nm it would have still cost hundreds of thousands to put that chip into production, the final product blows the development cost out even more, and given how niche it is it's unlikely to sell in huge volumes. People buy Raspberry Pis because of the community and software ecosystem around them, and the SiFive board doesn't have anything close to that.

They also screwed everyone over by licensing IP for critical parts of the chip, like the memory controllers, and as a result can't release source code pertaining to those parts, so the chip can never be properly libre. They literally told the community to reverse engineer the compiled binary blobs if they wanted to make libre alternatives to them. The POWER9 CPUs are more libre than the SiFive CPUs.

The cringiest part about the SiFive and Nvidia announcement is the retards going on about how we are finally going to get an open source GPU. Nvidia is literally the worst tech company in existence when it comes to supporting the open source community, and the lengths that the Nouveau developers have to go to are completely absurd. The chances of them actually releasing their current generation IP (or even older generation IP) are stupidly low, and what we will probably end up with is a RISC-V version of the Tegras, which are awful for so many reasons.

Thanks, I knew his video was shit but I wanted to know what he was skipping over. What a joke. Cheers, user.

You'll have to learn to love Nethack.

Just wait until LTT gets his hands on a Talos II.

It uses a BSD license.
To try and break it down for you, it means that any chip that comes close to matching or exceeding x86 in terms of gaming will likely be through a heavily-funded proprietary fork. Also LTT is a cringey faggot

He probably would have done it by now if he were ever going to do it; he doesn't really care about anything which can't run Windows, or isn't Apple, or isn't garbage of the month.

Linus knows how to build a PC and run Cinebench, Prime95 and Unigine Heaven. That's it.

Because if the company making your CPU is bad (like Intel with its pile of vulns and the ME), you have more than one alternative: the ISA is open, so anyone can make RISC-V CPUs without paying royalties.
With x86 you basically have two choices, Intel and AMD.

Hasn't x86 reached its limit besides bigger dies and more cores? Apparently 7nm and 5nm ain't that much better

Having fewer instructions is not outright better like LTT made it out to be. In theory having more instructions makes the CPU faster, but it depends on how they are implemented internally and whether they are pipelined. For power consumption and ease of programming, though, the more instructions the worse.

Any feature size shrink will result in lower power consumption and smaller dies (which makes the chips cheaper to produce, since you can fit more on a wafer). The problem that the pure-play fabs like GlobalFoundries and TSMC are having is that the yields still aren't acceptable; it's still better than Intel though, who can't even get 10nm sorted.
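To put a number on the "fit more on a wafer" part, here's the usual dies-per-wafer approximation; the die areas and the 300mm wafer are just example figures, not any specific product:

```python
import math

# Classic dies-per-wafer approximation: usable dies grow quickly as die area shrinks.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(200))  # ~306 candidate dies for a 200 mm^2 chip
print(dies_per_wafer(100))  # ~640 for the same design shrunk to 100 mm^2
```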


That still puts him ahead of many of his viewers. If you look at everyone who uses a computer on a regular basis, just being able to assemble a computer from parts puts you in the top 10% in terms of skill, sadly.


One of the problems with x86 is that only a subset of instructions are actually used regularly. If you were to analyze the compiled machine code from various programs, I would suspect that the result would look like a Pareto distribution, where 20% of the instruction set makes up 80% of the compiled binary. One of the arguments for RISC is that if you got rid of the 80% of x86 instructions which are only used 20% of the time, then you would need fewer transistors per core, which results in lower power consumption and less silicon area (which allows for more physical cores and/or cache); the instructions you cut out can be achieved by a combination of the remaining ones.
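If you want to check the Pareto claim yourself, something like this counts mnemonic frequency in a disassembly (assumes binutils' objdump is installed; /bin/ls is just an arbitrary example binary):

```python
import collections
import subprocess

# Count how often each mnemonic appears in a binary's disassembly.
def mnemonic_histogram(path):
    asm = subprocess.run(["objdump", "-d", path],
                         capture_output=True, text=True, check=True).stdout
    counts = collections.Counter()
    for line in asm.splitlines():
        parts = line.split("\t")
        if len(parts) >= 3 and parts[2].strip():   # addr, raw bytes, "mnemonic operands"
            counts[parts[2].split()[0]] += 1
    return counts

counts = mnemonic_histogram("/bin/ls")
total = sum(counts.values())
top20 = counts.most_common(max(1, len(counts) // 5))    # the most common ~20% of mnemonics
print(sum(n for _, n in top20) / total)                 # fraction of all instructions they cover
```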

CISC became popular when memory and storage were expensive; part of the reasoning was that the more work each instruction could do, the smaller the compiled binary could be, which allowed more memory to be devoted to data. RISC came about when memory and storage started becoming cheaper and researchers noticed that some of the more complex instructions on x86 could be done in fewer clock cycles with a combination of more primitive instructions.

If OpenPOWER couldn't kill x86, your shitty embedded RISC-V has a snowball's chance in hell.

Almost every ISA is a viable alternative; it's the CPU microarch design + compilation toolchain that makes AMD and Intel dominant.

Because that is the process that specific CPU is made on; it's a prototyping board for you to do bare-metal testing of hardware and software.
You already have 12nm RISC-V cores in Nvidia GPUs.


To more seriously answer your question and to correct some misunderstandings here: the goal of RISC-V is to replace all ISAs, which is why they have a 128-bit version of the spec. It can work just fine as a GPU or a CPU or in any other computational role requiring an ISA.
Given how many partners they have, as Linus said, you will have RISC-V everywhere in your rig soon. Western Digital have already pledged to use RISC-V, and it will be used for RAID, RAM, GPU, SSD, NIC and all the other peripheral goodies. CPUs will come in time, but that will be hard. Gaming? Never; if you make something that does x86 emulation too well, expect Intel to sue you into non-existence.

Personally I think they should just partner with PC Engines and make an APU2 with RISC-V; I'd buy and deploy it.

Attached: Screenshot-2018-8-25 Members at a Glance - RISC-V Foundation.png (1668x5197, 444.47K)

In time LLVM will make x86 emulation unnecessary.

OpenPOWER is only 5 years old and was meant to start off by bringing the architecture to data centers and the server market. They've given up on embedded Power for consumers ever since the Cell disaster in the mid-2000s.

Their new strategy of having several variants (or modules) of their POWER9/10 chips might pay off in the long run if they're able to get enough momentum going. Imagine it's 2024 and the POWER11 chip is coming out with 96 cores/384 threads and half a gig of cache; you'll be able to get a 16 or 32 core binned chip for next to nothing, with dozens of motherboards available for it.

It's not going to kill off x86 anytime soon, but they have a good shot at bringing it back to desktop PCs in the next decade among GNU/Linux users who are sick of Intel & AMD. Even gaming on it won't be that much of an issue, as all the major game engines are designed to be easily ported to different architectures; remember that the last generation of consoles were all based on Power.

Attached: unrealonpower.png (585x703, 204.21K)

How, user? How come?

Was it really that bad? Is there any reason why something like the e5500 or e6500 can't be used for consumer PCs or in something equivalent to an RPi?

...

And at a certain point you have enough performance anyway. So even if something like a 2007 (Open)SPARC T2 shrunk down to 14nm doesn't perform as well as an Intel or AMD chip, there are still lots of other fields to compete in besides raw single-thread performance: security, price, power, and threading being a few.

Seriously, about performance: I'm running a four-year-old passively cooled CPU which scores an order of magnitude lower than current high-end parts according to PassMark. It does everything I need, including running VMs, playing full-HD video and running a bloatware browser.
Even installing and running Gentoo/Funtoo on it is not really a problem.

Steve Jobs dropped Power for Intel x86 thanks to the Cell processor; Sony was begging him to use it. It was terrible at everything it was supposed to do and forced Sony to throw an Nvidia GPU into the PS3 at the last moment when they realized the 'Synergistic Processing Elements' weren't any good, which increased the price substantially. The 10-20% yield they were getting by going with a 221mm² die early on didn't help either.

And sure, you could use an e5500 or e6500, but aside from Amiga fanatics nobody would be interested in such a computer.


You can already buy one for $1,100.

IBM had no interest in getting the PPC970 into a laptop. Jobs' decision had nothing to do with Sony or Cell.

Do tinier dies always mean higher clocks?

cnet.com/news/four-years-later-why-did-apple-drop-powerpc/


nytimes.com/2005/06/11/technology/whats-really-behind-the-appleintel-alliance.html


venturebeat.com/2009/02/06/the-race-for-a-new-game-machine-book-chronicles-the-sony-microsoft-ibm-love-triangle/view-all/

He made the right call on the Cell. It was an expensive disaster that took years to make. It wasn't the primary reason why Apple switched from Power, but it was part of it.

Attached: shippy-cover.jpg (400x618, 38.34K)

No. Smaller dies mean less distance for the electrons to travel, meaning less heat. Less heat means higher clocks. That's why any RISC architecture has x86 beat in raw power. Fewer instructions mean less die space dedicated to those instructions on a reduced instruction set, i.e. RISC.

With a reduced die size you reduce heat, which means higher clocks. Of course there is ARM, which is RISC, but it wastes die space on useless shit like TrustZone and out-of-order schedulers on the die. You don't need that shit. Hence why RISC-V is the future: it doesn't necessarily need to waste die space on useless shit like TrustZone and out-of-order schedulers. Or even any schedulers on the die; let the software handle that.

There is an exception to this, however. You know how if you connect a wire to a battery and a lightbulb it produces power? Now say you attached another wire to the battery and stuck it in the ground, thereby wasting all your electricity. The same can happen with processors, which means a smaller processor with a short, or wire to ground, could produce more heat than a larger one. This is only if the processor is shittily designed like x86. A 12nm RISC-V processor will produce less heat at the same clock as a 12nm x86 processor, with no exceptions. This is because x86 has instructions in the ISA that force a short-to-ground scenario in order to maintain backwards compatibility with older x86 processors. RISC-V doesn't have to be backwards compatible at the hardware level; just do it in software like Wine or QEMU.
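For what it's worth, the textbook way to frame the heat question is dynamic (switching) power plus static (leakage) power, which is the "wire in the ground" part. A rough sketch with made-up numbers, just to show which knobs matter:

```python
# Very rough CPU power model: dynamic (switching) power plus static (leakage) power.
# All the numbers below are made up for illustration, not measurements of any real chip.
activity = 0.2          # fraction of transistors switching per cycle (assumption)
capacitance = 5.0e-8    # total switched capacitance in farads (assumption)
voltage = 1.0           # core voltage in volts (assumption)
frequency = 3.0e9       # clock in Hz (assumption)
leakage_current = 10.0  # total leakage current in amps (assumption)

dynamic_w = activity * capacitance * voltage**2 * frequency  # shrinks with C and V
static_w = leakage_current * voltage                         # the "short to ground" part
print(dynamic_w, static_w, dynamic_w + static_w)             # 30.0 W, 10.0 W, 40.0 W
```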

Thanks, great explanation.
If I could ask, how does making an x86 CPU (short) maintain backwards compatibility exactly, and what does shorting have to do with older CPUs in general? Why was it even done like that, and for what use?

I am comparing older instructions in the x86 ISA to a short because they are less energy efficient than newer instructions. I am sure you have heard of AVX2 or SSE. SSE2 and AVX-512 are more energy efficient/produce less heat than the previous instructions for certain things. Yet x86 keeps both SSE and SSE2, thereby wasting die space and creating more heat. Now multiply this out for every instruction added over the past 20 years and it creates a lot of unnecessary heat.

RISC-V has none of this bloat/heat/bullshit, as RISC-V doesn't need to maintain backwards compatibility at the hardware level; you can just emulate it with Wine/QEMU at the software level.

It's being used heavily for controllers because of cost, not quality. We won't ever see a RISC-V that competes with x86 because of the instruction length. It's no surprise, as it comes about 20 years after the industry admitted RISC was a dead end for the high end and started searching for other solutions like EPIC.

ARM also has this problem, as the ISA has to be backwards compatible from ARM11 back to ARM4 or so, which means supporting NEON instructions or some such crap across all their newer processors, which means more die space wasted on useless/never-used ISA, which means more heat/less energy efficiency.

On ARM energy efficiency is a huge deal, since they care so much about battery life for phones. So RISC-V would actually get better battery life than a comparable ARM processor at the same process node. Granted the increase is not going to be much, but then you get the benefits of RISC-V's security and FOSS-like nature.

RISC-V blows modern processors out of the water on everything; it's just that no one is producing them besides that shitty SiFive company that includes proprietary hardware/software on the motherboard. And the ones being produced need time to perfect the processor design: smaller feature sizes, more efficient usage of the ISA, the clock rate, and the memory transfer from processor to DRAM.

It goes like this: make the processor design with the ISA, make a motherboard with peripheral components like DRAM, send the design to a factory in China, and then wait for them to send the silicon back. This takes like four months, and after all that you have to test them, put software like GNU/Linux on them, and sell them.

Then after you use the processor, or after their testing, you can find out what you did wrong in the motherboard or processor design and start over again. Like say you had SSE in RISC-V for the first mobo/processor design; in the second design you could have SSE2 and so on, but with other instructions.

Your brain on marketing.

This.

Well of course RISC-V is less expensive. You don't have to pay a royalty fee to Intel or ARM to create/use it.
What does this mean? Do you mean how many instructions are processed per clock cycle? Do you mean the clock rate? Do you mean the dedicated RISC-V ISA doesn't have "enough" instructions, and if so which are missing?

Calling a post "marketing" is not an argument. Sure, the processor is slower right now. It is also royalty-free to produce and much, much more energy efficient than a comparable x86 or ARM processor, hence its heavy usage in controllers, alongside being less expensive to use. With production optimization and better motherboard designs that aren't patent-encumbered (the way DDR4 and modern MMUs are), it will be more efficient and libre/secure.

This doesn't make any sense. The G5 Power Mac (PPC970) came out in '03. The PPC portion of Cell was a stripped-down version of the PPC970. Why would Apple want something less than what they were already using? Steve had already made the decision to go with Intel before Ken Kutaragi got up on stage during Macworld. Unless plans for Cell dated back to the introduction of the PPC970, I don't think this is accurate. There were rumors that IBM didn't even want to do Cell and that the 360's Xenon was what was originally brought to Sony. Thanks to backdoor Japanese dick stroking, they insisted that Toshiba be involved.

Modern MMUs and DDR4 are patent-encumbered, meaning that for a libre hardware system, and to avoid paying royalties, you can't use them, or you will get shut down (sued) for using or producing them.

This is why only scummy companies like SiFive and Nvidia are using RISC-V right now: they are OK with paying royalties for non-libre hardware components on the motherboard. So even though RISC-V is libre, the GDDR5 in Nvidia GPUs and the clock on the SiFive boards are not libre. It's difficult to use libre hardware and not step on somebody's patents. Now if hardware designers just moved to China, said fuck all that patent nonsense and made awesome processors, that would be great. Of course then the Chinese government would shut them down on behalf of the patent holders, since motherboards/processors/DRAM/the entire computing hardware industry is a locked-up monopoly. And a secure computer that can't be hacked means the GCHQ/NSA can't steal from you as easily anymore.

It's all part of keeping everyone's computing environments insecure so that they can be stolen from with ease.

ARM11 (which was ARMv6) and ARM8 (which was ARMv4) never had NEON. The Cortex-A8, which is ARMv7-A, has NEON, as do most ARMv7-A cores. NEON didn't exist prior to ARMv7. Don't confuse the old ARM8 with the Cortex-A8 or with ARMv8 though; they are completely different.

Thank you for clarifying that. It doesn't change the fact, though, that ARM (as an example) has older/unused instructions in its ISA that take up die space and waste energy/produce heat.

ARM is the king of wasted die space. Nobody else does big.LITTLE CPUs. I don't think they give a shit fam.

How is AArch64 in that regard? It is a bit of a break with the old designs, right?

There is quite a bit of stupid in this post, but this is the worst part.
I am not saying that RISC-V won't be adopted, but to say that a single ISA can replace all others is fucking stupid. An ISA isn't just a set of instructions but also things like the overall processor architecture, and ASIC design in general is a never-ending set of tradeoffs: design decisions that make a processor good in one area ultimately make it weaker in another. Processors and ISAs which don't target a niche fall into the "jack of all trades, master of none" situation. For instance, the manycore PEZY chips use a barrel-threaded MIPS-like ISA because even RISC-V takes up too much silicon, but it wouldn't make any sense to use that for an application requiring good single-core performance.

< HURR DURR MORE BITS = BETTER !!1!!!111
This isn't the '90s anymore; the bit wars are over.


In time LLVM will make ISA fanboy wars pointless.

How can a CPU have security? It either is a real CPU (does arithmetic and shit) or is a piece of crap with a web browser embedded in it (x86).

DO THE MATH

They know there is no use for it now, and not much research is done on it. Their argument, looking at history, was that the bit size of an architecture was an obstacle and hard to change, so better to plan for it in advance.

They designed the ISA to be modular so people could build ASICs and special processors with it; it's better to have everyone be a RISC-V specialist at the cost of some silicon. But even if you did need a special ISA for something really unique, then so be it; a common base is better.

Nah, at current rates we're going to run into the 64-bit wall by 2030, see page 105:
www2.eecs.berkeley.edu/Pubs/TechRpts/2016/EECS-2016-118.pdf

You don't need anything more than 32 addressable bits unless you are running extremely memory-intensive rendering software and/or huge servers. Consumers don't need more than 16 bits unless they do video gaming. This is assuming perfectly or near-perfectly optimized software, which is never the case. With the bloat of software going on, 128 bits will become mandatory for the latest updates to Windows 10.

You don't need a modular ISA to do that; the logic people design to perform special functions is most of the time memory-mapped like a peripheral device rather than added to the instruction set as special instructions. The logic which WD implements in their chips to control their HDDs and perform LDPC is just memory-mapped, as it's more portable from a firmware perspective and higher performance than hooking it into the CPU as an instruction. The reason why they are switching to RISC-V isn't that it's free as in freedom but free as in free beer.
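This is roughly what "memory-mapped like a peripheral" looks like from the software side. A minimal Python sketch; the base address and register offsets are made up for illustration (real values come from the SoC's memory map), and it needs root plus actual hardware behind /dev/mem:

```python
import mmap
import os
import struct

CTRL_BASE = 0x4000_0000   # hypothetical peripheral base address (assumption)
STATUS_REG = 0x00         # hypothetical status register offset (assumption)
COMMAND_REG = 0x04        # hypothetical command register offset (assumption)
PAGE = mmap.PAGESIZE

fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)  # needs root and real hardware
try:
    regs = mmap.mmap(fd, PAGE, mmap.MAP_SHARED,
                     mmap.PROT_READ | mmap.PROT_WRITE, offset=CTRL_BASE)
    # Read a 32-bit status word, then poke a command, exactly like a driver would.
    (status,) = struct.unpack_from("<I", regs, STATUS_REG)
    print(f"status register: {status:#010x}")
    struct.pack_into("<I", regs, COMMAND_REG, 0x1)  # hypothetical 'start' command
    regs.close()
finally:
    os.close(fd)
```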

Their claim is pretty sketchy; huge systems like the ones they are referring to don't run a unified memory address space like they assume, and instead the overall system is made up of individual nodes, each with their own internal memory address space. The 48-bit addressing used in current generation 64-bit CPUs already allows for an address space of 256 TiB and is designed to be extendable without breaking backwards compatibility. At 64 bits the address space becomes 16 exbibytes, which to put into perspective is a pile of 64GiB DDR4 DIMMs with a mass of about 5,400 tons.
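Quick sanity check of those figures (the per-DIMM mass of ~20g is an assumption):

```python
addr_48 = 2**48            # bytes addressable with 48-bit addressing
addr_64 = 2**64            # bytes addressable with full 64-bit addressing
dimm_bytes = 64 * 2**30    # one 64 GiB DDR4 DIMM
dimm_grams = 20            # assumed mass of a single DIMM

print(addr_48 / 2**40)                  # 256.0  -> 256 TiB
print(addr_64 / 2**60)                  # 16.0   -> 16 EiB
dimms = addr_64 // dimm_bytes           # 268,435,456 DIMMs to fill the 64-bit space
print(dimms, dimms * dimm_grams / 1e6)  # ~5,369 metric tons of DIMMs
```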

Moore's law doesn't scale infinitely like they assume, and the concurrency issues facing even distributed systems like current supercomputers are huge. Having all the nodes share a single address space would be a nightmare just to keep running without doing anything, and the concurrency safeguards inherent in current operating systems due to things like POSIX would likely slow the entire system to a crawl.

Forgot to address this

Holy fucking shit, this has got to be one of the most retarded things I have ever read on Zig Forums. You are literally saying that most consumers don't need systems with more than 64KiB of address space, when even PCIe devices consume 4KiB just for their configuration space.

Nice comparison. Pretty sure back in the '70s and '80s the argument was something like this:
And yet here we are today, with storage at some companies already in exabyte territory.

Also

RISC-V can have compressed instructions.
people.eecs.berkeley.edu/~krste/papers/waterman-ms.pdf
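If you want to see what the compressed (C) extension actually buys in code size, here's a quick sketch that compiles the same function with and without it. It assumes a riscv64-linux-gnu cross toolchain (gcc + binutils) is installed; adjust the triple to whatever toolchain you actually have:

```python
import subprocess
import textwrap

SRC = "rvc_test.c"
with open(SRC, "w") as f:
    f.write(textwrap.dedent("""
        int sum(const int *a, int n) {
            int s = 0;
            for (int i = 0; i < n; i++)
                s += a[i];
            return s;
        }
    """))

for march, out in (("rv64g", "nocompress.o"), ("rv64gc", "compress.o")):
    subprocess.run(["riscv64-linux-gnu-gcc", "-O2", f"-march={march}",
                    "-mabi=lp64d", "-c", SRC, "-o", out], check=True)
    # 'size' prints the .text size, which is what the compressed encoding shrinks.
    subprocess.run(["riscv64-linux-gnu-size", out], check=True)
```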


DDR4 will be ancient by 2030. I doubt POSIX has a place in something that would be designed to have >64 bits of address space; it would probably be for extremely specific tasks, if used at all. But it certainly is possible, which is why it's worth doing that one step up while it's still easy to do so.

I haven't said these companies are switching due to some superior design aspect of RISC-V; a lot of them are going to do it because it is cheap. A lot more will come after that because more people are going to be learning it.

Yes, that is exactly what I am saying. Unless you are going to be doing video gaming or an equivalent, such as graphics rendering for video, you don't need more than 16 bits for text/document editing, web browsing, and song/video watching. This is assuming near-perfectly or perfectly optimized software. If you want an example look at DOS and Windows 98, and those were incredibly unoptimized pieces of shitware compared to what they could have been. You don't need PCIe with the integrated audio chips of today and SATA controllers on board. You don't need USB or an MMU unless you're worried about security. If you have large collections of videos and audio you might need 32 bits, like a server, to address them. But most normalfags don't, as they stream it all.

A single-buffered 4k frame requires 6.63MiB of VRAM, and even a hi-color 480p frame needs 600KiB. For comparison, the Apple IIɢꜱ maxed out at a 2-bit 640x200 mode whose framebuffer took about 32KiB.

Memory isn't going to magically become ~9 orders of magnitude more dense. Just because DDR4 will be gone doesn't mean whatever replaces it will be significantly more dense. Even if we reduce each DRAM cell down to 1nm (which means each cell is only about 10 atoms wide), that's still only about a 200x increase in density from what we have today, which barely puts us over the 48-bit space per node for the highest-tier systems.
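Rough check of that ~200x figure; the current DRAM cell pitch and the DIMM count per node here are assumptions:

```python
# Rough check of the ~200x density claim; both inputs are assumptions.
current_pitch_nm = 14        # assumed feature size of today's DRAM cells
future_pitch_nm = 1          # the hypothetical 1nm cell from the post
density_gain = (current_pitch_nm / future_pitch_nm) ** 2
print(density_gain)          # 196x, i.e. roughly the 200x mentioned above

# What that does to a big node: assume 24 DIMMs of 64 GiB each, scaled by ~200x.
node_bytes = 24 * 64 * 2**30 * density_gain
print(node_bytes / 2**40, "TiB vs the 256 TiB 48-bit limit")  # ~294 TiB, barely over
```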


Storage addressing isn't the same as system memory addressing; modern filesystems already support sizes so big that you would need a pile of hard drives with something on the order of 1/4 the mass of the Earth to fill them up.


Protip: Those are all memory mapped just like PCIe.

Wut?
4k has roughly 8 million pixels which at 4 bytes per pixel gives you about 32 megabytes of storage. Or am I being a brainlet?

He specifically said single-buffered, which is where you only have part of the screen rendered at a time: while that part is being sent to the screen, you render the next part. It's more memory-efficient and lower latency than rendering the entire screen at once, but it can introduce tearing.
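For reference, the raw per-frame arithmetic at a few common depths, as a quick check:

```python
# Framebuffer size for one full frame at various resolutions and bit depths.
def frame_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

print(frame_bytes(3840, 2160, 32) / 2**20)  # ~31.6 MiB  (4K at 32bpp, the "32 MB" figure)
print(frame_bytes(3840, 2160, 8)  / 2**20)  # ~7.9 MiB   (4K at 8bpp palette)
print(frame_bytes(640, 480, 16)   / 2**10)  # 600 KiB    (hi-color 480p)
print(frame_bytes(640, 200, 2)    / 2**10)  # ~31 KiB    (Apple IIGS 640 mode)
```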