People are always searching for the next performance improvement. The next tiny little speed boost in processors and systems. This is obviously a good thing on its surface, but it brings up a few issues.

Firstly, these innovations in hardware should have been used to allow for innovations in software, and give us functionality that wouldn't have been possible otherwise.
What actually happened is that people just took it as an opportunity to be lazier. We've got email clients and word processors that now use more resources than older computers even had! We've got browsers that use 10+ times that. And it's not like they're really doing much that's different from what they did in the past. Okay, maybe you could make a case for browsers doing way more with JavaScript, even though a majority of that is pretty unnecessary, like Google Analytics tracking or some bloated framework, but has email really changed that much? Word processing?

Secondly, it seems that performance gains are sometimes chased without regard for security and safety. Obviously that's not always the case, but it happens. I forget the source, but some user back during the Spectre/Meltdown freakout was talking about how the issues may have been caused by Intel building things in an unsafe way in an effort to increase performance, which would explain the performance drops that came with the patches. I'm not too sure how true that is, but if it is, it's an example of this.

A more long-running example of this would be OS design. It's no secret that microkernels have innate security benefits over monolithic kernels. However, we don't exactly see microkernel OSes used everywhere, do we?
When the disadvantages of microkernels are brought up, or arguments against them are discussed, the big talking point is always "Muh bad performance! Muh IPC overhead!". But two things. First, are you really so fucking desperate for that next performance high that you will sacrifice security for it? Second, the performance hits apparently aren't even that bad, at least on L4.
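Since the IPC point keeps coming up: here's a minimal sketch, purely illustrative and not real L4 code, of what the overhead argument boils down to. In a monolithic kernel a driver request is basically a function call; in a microkernel it's a message round trip to another address space. The pipe-based "IPC" below is an assumption for demonstration only, not how any real microkernel does it.

/* Illustrative only: in a monolithic kernel a driver request is roughly a
 * function call; in a microkernel it's a message round trip to another
 * address space. This fakes the round trip with POSIX pipes between two
 * processes; real L4-family IPC is a far cheaper kernel-mediated rendezvous,
 * so the gap shown here wildly overstates the real cost. */
#include <stdio.h>
#include <unistd.h>
#include <time.h>

#define ROUNDS 100000

static int direct_service(int x) { return x + 1; }  /* the "in-kernel" version */

static long elapsed_ns(struct timespec a, struct timespec b) {
    return (long)(b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void) {
    int to_srv[2], from_srv[2];
    if (pipe(to_srv) || pipe(from_srv)) { perror("pipe"); return 1; }

    if (fork() == 0) {            /* the "server": a user-space driver process */
        int req;
        while (read(to_srv[0], &req, sizeof req) == sizeof req) {
            int rep = req + 1;
            write(from_srv[1], &rep, sizeof rep);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    volatile int acc = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) acc += direct_service(i);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("direct calls:    %ld ns total\n", elapsed_ns(t0, t1));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        int rep;
        write(to_srv[1], &i, sizeof i);       /* "send request" */
        read(from_srv[0], &rep, sizeof rep);  /* "wait for reply" */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("IPC round trips: %ld ns total\n", elapsed_ns(t0, t1));

    (void)acc;
    close(to_srv[1]);
    return 0;
}

On a typical Linux box the pipe round trips come out orders of magnitude slower than the direct calls, which massively overstates the real gap (each fake round trip here is four syscalls plus scheduling), but it shows why people fixate on IPC cost, and also why L4's whole design obsession was making that path as short as possible.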

I don't really have an end to this. Just wanted to rant a bit.

Also, OwO

Attached: boy.jpg (480x613, 31.89K)

The next decade or so will be the decade of software optimization, as hardware will probably stall hard due to semiconductor features reaching their minimum feasible size.
If the solution is coming, it's not right now and not in the immediate future.
No, quantum computers are bullshit. They have severe problems with noise (math) and aren't even quantum; it's a marketing name.

This has been discussed here already.

Wait, was this post meant for me? I didn't mention quantum computers at all

Fuck Moore's law. Without it there would be no smartphone or Internet of Botnet. It's dead now, but the damage is done. I wish computers were slower.

Pretty sure he was just saying it preemptively.

Performance growth has been reduced to a patently phony drip-feed since around 2005 when we smashed our faces into the 4GHz barrier, eking out tiny IPC improvements in the ludicrously inefficient 80x86 architecture, and cranking up core count.

If there wasn't a monopoly in place on CPUs, especially with AMD totally somnambulant since the Athlon XP days, we'd have simply gone straight to 5 nanometers almost immediately and been working on other performance improvements the whole time.

The performance penalties of the patches only affect VM environments, and are basically irrelevant to 99.999% of applications.


Quantum computers aren't bullshit, they (and the optical analog quantum networks they're attached to) are good enough for any entity with deep pockets to smash any non-quantum form of crypto you can use right in the cunny. The problems you named are mainly just reasons why lowly plebs like us aren't going to get one anytime soon.

I don't wish they were slower at all. I just wish that their power was being used responsibly.

I remember there being an article where one of the industry guys was saying how pointless it was to keep chasing better and better processor speeds when it was so much easier to just keep throwing more cores and threads into the CPU instead.

Also, didn't one of the POWER processors hit 5GHz around ten years ago?

I know Oracle's latest SPARC servers are clocked at 5GHz, with Fujitsu's being slightly lower.

This Moore's law laziness isn't only for programmers. CPU companies are lazy too. There won't be any real improvements until there's a new architecture. Computers are too fast and have too much memory, so they don't care about instructions and encodings anymore. Assemblers are huge now because the x86 encoding doesn't follow any useful rules. Encodings used to be chosen to make code smaller and to make assemblers and compilers simpler and smaller (which also makes machine code easier to read). With x86, they just stick instructions wherever there's room. If Moore's law had ended at the 386, we would not be using x86 today, because it's too inefficient. They would have made much more optimal CPUs and software.

Haphazard branch prediction without much care for security concerns is what caused Spectre/Meltdown CPU vulnerabilities.
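For reference, the bounds-check-bypass gadget from the original Spectre paper (variant 1) is only a few lines; the array names below follow the paper's example:

/* Canonical Spectre v1 (bounds check bypass) gadget, per the original paper. */
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];

uint8_t temp;  /* keeps the compiler from optimizing the loads away */

void victim_function(size_t x) {
    if (x < array1_size) {                 /* architecturally safe bounds check... */
        temp &= array2[array1[x] * 512];   /* ...still executed speculatively for out-of-bounds x */
    }
}

int main(void) {
    victim_function(0);  /* a real PoC adds predictor training plus a cache-timing probe */
    return 0;
}

The check itself is correct; the problem is that once the predictor has been trained with in-bounds values of x, an out-of-bounds call still speculatively performs the secret-dependent load into array2, and which cache line got warmed can be recovered afterwards with a timing side channel, even though the mis-speculated work is architecturally rolled back.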

So Moore's law was a self-fulfilling prophecy? Once he formulated it, everyone felt compelled to keep increasing performance at all costs for it to stay in line with the law, so as not to "fall behind" the competition?

Who is that semen demon?

Are you the OP from the Wayland thread who made everyone question their sexuality?

*giggles* maybe!

Not going to happen here retard.

Instead of quantum computers, I wish people spent millions of dollars on the Pifs.

RISC-V?

Reminder: the 80x86 ISA handled by the decoder/microcode has zero bearing on the RISC internals of modern Intel/AMD CPUs that actually do the work.

It has a huge bearing on the entire CPU design and on all software. Assemblers and disassemblers are bigger because of the large number of irregular instructions, prefix bytes, and multiple encodings. Instructions like "add eax, ecx" can be encoded in multiple ways. Multiplication and division use specific registers. All the new instructions have huge and inefficient encodings with prefix bytes, which wastes instruction cache. Compilers are bloated because they have to choose between all these different instructions and work around all these irregularities like flag usage and partial registers.
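The redundancy is easy to show concretely. Even "add eax, ecx" has two different two-byte encodings, because the Intel manual defines both an ADD r/m32, r32 form (opcode 01) and an ADD r32, r/m32 form (opcode 03), and a register-register add fits either one:

/* Two byte-for-byte different but architecturally identical encodings of
 * "add eax, ecx". In the 01 form the ModRM reg field holds the source (ecx)
 * and r/m holds the destination (eax); in the 03 form it's the other way
 * around. Every assembler and disassembler has to handle both directions
 * for every register-register ALU op. */
#include <stdio.h>

static const unsigned char add_eax_ecx_a[] = { 0x01, 0xC8 };  /* ADD r/m32, r32 */
static const unsigned char add_eax_ecx_b[] = { 0x03, 0xC1 };  /* ADD r32, r/m32 */

int main(void) {
    printf("encoding A: %02X %02X\n", add_eax_ecx_a[0], add_eax_ecx_a[1]);
    printf("encoding B: %02X %02X\n", add_eax_ecx_b[0], add_eax_ecx_b[1]);
    return 0;
}

Assemblers and compilers get to pick between them arbitrarily, and that's before you get to prefix bytes, the multiple immediate forms, and the fixed-register multiply/divide instructions; every one of those choices is more code in the toolchain.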

How to spot a dumb crossboarding redditor.

Can't wait to listen to the new Tool album and play Half-Life 3 on muh shiny new RISC-V computer. Oh wait

binary
lol at proprietary game lusers.

There's already an (albeit expensive) SBC from SiFive, and lowRISC should be releasing theirs sometime this year.

But I guess if you're THAT fucking impatient, it looks like ARM is going to be entering new markets outside of its usual SBCs and smartphones in the near future. We're already seeing 48-core ARM server chips from Cavium and Qualcomm, with the Caviums already being sold by various companies (two Linux-friendly ones being PogoLinux and System76).
And the existence of higher-end ARM Chromebooks like the Samsung CB+ (6 cores at 2.0GHz) shows that ARM laptops could be feasible.

...

It's kind of hilarious when you consider that (as a result of sabotaging DEC) Intel owned the leading ARM technology in StrongARM during the rise of the smartphone, until they sold it off. They could've been playing both sides of the market by now.

I hate this mentality, especially when it shows up in security and privacy topics, but here as well.
Yes, RISC-V is under a permissive license. Yes, there are issues with that. Yes, GPLv3 would have been preferred. No, that doesn't mean it's not a good thing, because it's still a step above the other architectures. x86 is a fucking duopoly, ARM requires special licensing agreements to be able to make chips for it, OpenPOWER (at least going by the Wikipedia article on it) seems to be only "open" to IBM's business partners, and other open ISAs seem to be much older or inferior to RISC-V. Pic related.

Attached: picrelated.png (427x141, 21.32K)

Also, take note of that address part. RISC-V is future-proof as fuck.

No, that has to do with precalculation for muh potential speed increase (because of repetitive tasks; now it's useless), not with new calculation.
We should switch architectures and emulate the old stuff as soon as possible tbh.
I'd like to see us on something built for maximum throughput, like Cell.

64-bit ougtta be enuff for anybody for, you know, like, forever.

Retards will always invest in x86 just because.
If you want RISC-V/POWER to start rolling, you have to invest in it or make great software for it.

Attached: tumblr_inline_p1g32l0DWQ1r7ef8e_250.gif (225x305, 130.21K)

Without shrinking the size of components, you could never push the speed.

Clearly you've never read Black Magic.

Spotted the LARPer who does not understand how reality works

People will always invest in x86 because it's an established architecture that provides the performance required for its intended workloads. Switching to some exotic architecture would incur significant costs that, frankly, will never be worth it to most companies, because the majority of people don't concern themselves with autism of this level.

People will switch to RISC-V if and only if someone can produce a processor with superior thermal design and IPC at lower cost than comparable x86 machines.

Note, I said "superior", not "comparable". People simply won't care if you pull out a RISC-V chip with specs comparable to the latest Intel Colgate processors or whatever, because x86 will still have the advantage of superior support at that point. And "chip-level security" is a laughable selling point given the lack of security auditing associated with new, untested designs.

More on this?

Spotted the amerimutt
No shit.
The majority of people ARE autistic AKA vaccinated, including yourself.
Ok CIA/FSBnigger. I'll sure listen to you. Do you want me to poz myself too?
About fucking time.

As soon as more RISC-V SBCs come out and are fully supported by a distro, I'm buying that shit and making a NAS out of it.


seL4 is a "new design", and is untested in that it hasn't seen widespread use yet.
Would you ever in a million years say it has a lack of security?


meanies! >_<


pics related ^_^

Attached: Pockys.jpg (480x480 36.68 KB, 28.24K)

There is so much wrong with this post it hurts. Please kill yourself my man

this

Think more horizontally - people will switch from Intel once ME backdoors start being exploited widely.

seL4 is used in Genode, but yeah, it's quite untested. But L4 itself is massively used, especially OKL4.

Yeah, I have a feeling that these botnets will only be taken seriously by the masses when they're actually directly and obviously getting fucked by them.

Normalfags don't even care about Meltdown. You guys live in a fantasy world.

Speak for yourself.