People are always searching for the next performance improvement, the next tiny little speed boost in processors and systems. This is obviously a good thing on its surface, but it brings up a few issues.

Firstly, these innovations in hardware should have been used to enable innovations in software and give us functionality that wouldn't have been possible otherwise.
What actually happened is that people just took it as an opportunity to be lazier. We've got email clients and word processors that now use more resources than older computers even had. We've got browsers that use 10+ times as much. And it's not like they're really doing much different than they were in the past. Okay, maybe you could make a case for browsers doing way more with JavaScript, even though a lot of that is pretty unnecessary, like Google Analytics tracking or some bloated framework, but has email really changed that much? Word processing?

Secondly, it seems that performance gains are sometimes chased without regard for security and safety. It's obviously not always the case, but it happens. I forget the source, but some user back during the Spectre/Meltdown freakout was talking about how the issues may have been caused by Intel building things in an unsafe way in an effort to increase performance, which would also explain the performance drops that came with the patches. I'm not sure how true that is, but if it is, it's an example of this.
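For anyone who hasn't seen it, the usual illustration of what speculative execution buys (and costs) is the Spectre-v1-style bounds-check bypass. The snippet below is just a rough sketch, not a working exploit: the array names and sizes are made up, and a real attack also needs branch-predictor training plus cache-timing measurement on top of this.

/* Minimal Spectre-v1-style sketch (illustrative only, not a working exploit).
 * The branch predictor can be trained so the body runs speculatively even
 * when x is out of bounds; the out-of-bounds byte then leaves a cache
 * footprint an attacker can later measure. Speculation itself is the
 * performance feature being traded against safety here. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 64];

void victim(size_t x) {
    if (x < array1_size) {                          /* bounds check the CPU may speculate past */
        uint8_t secret = array1[x];                 /* speculative out-of-bounds read */
        volatile uint8_t tmp = array2[secret * 64]; /* which cache line gets touched depends on the secret */
        (void)tmp;
    }
}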

A longer-running example of this would be OS design. It's no secret that microkernels have innate security benefits over monolithic ones. However, we don't exactly see microkernel OSes used everywhere, do we?
When the disadvantages of microkernels are brought up, or arguments against them are discussed, the big talking point is always "muh bad performance, muh IPC overhead". But two things: first, are you really so fucking desperate for that next performance high that you'll sacrifice security for it? Second, the performance hit apparently isn't even that bad, at least on L4.
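To make the IPC-overhead talking point concrete, here's roughly what a read() looks like when the filesystem lives in a user-space server. Every name below (ipc_call, FS_SERVER, fs_msg_t, FS_READ) is invented for illustration; this is not the real L4 or seL4 API, just the shape of the extra round trip people complain about.

/* Hypothetical microkernel-style read(). On a monolithic kernel this would be
 * one trap straight into in-kernel filesystem code; here it's a kernel-
 * mediated message round trip to a user-space file server. */
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

#define FS_READ   1
#define FS_SERVER 7   /* port/capability of the file server (made up) */

typedef struct {
    int    op;
    int    fd;
    size_t len;
    char   buf[4096];
} fs_msg_t;

/* Provided by the (hypothetical) kernel: send a message and block for the reply. */
extern int ipc_call(int server, fs_msg_t *msg, size_t size);

ssize_t my_read(int fd, void *buf, size_t len) {
    fs_msg_t msg = { .op = FS_READ, .fd = fd,
                     .len = len > sizeof msg.buf ? sizeof msg.buf : len };
    if (ipc_call(FS_SERVER, &msg, sizeof msg) < 0)  /* the IPC round trip is the "overhead" */
        return -1;
    memcpy(buf, msg.buf, msg.len);
    return (ssize_t)msg.len;
}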

I don't really have an end to this. Just wanted to rant a bit.

Also, OwO

Attached: boy.jpg (480x613, 31.89K)

Next decade or so will be the decade of software optimization, as hardware will probably stall hard once semiconductors reach their minimum feature sizes.
If the solution is coming, it's not here right now and not in the immediate future.
No, quantum computers are bullshit: they have severe problems with noise (it's a math problem), and they aren't even really quantum, it's a marketing name.

This has been discussed here already.

Wait, was this post meant for me? I didn't mention quantum computers at all

Fuck Moore's law. Without it there would be no smartphone or Internet of Botnet. It's dead now, but the damage is done. I wish computers were slower.

Pretty sure he was just saying it preemptively.

Performance growth has been reduced to a patently phony drip-feed since around 2005 when we smashed our faces into the 4GHz barrier, eking out tiny IPC improvements in the ludicrously inefficient 80x86 architecture, and cranking up core count.

If there wasn't a monopoly in place on CPUs, especially with AMD totally somnambulant since the Athlon XP days, we'd have simply gone straight to 5 nanometers almost immediately and been working on other performance improvements the whole time.

The performance penalties of the patches only affect VM environments, and are basically irrelevant to 99.999% of applications.


Quantum computers aren't bullshit, they (and the optical analog quantum networks they're attached to) are good enough for any entity with deep pockets to smash any non-quantum form of crypto you can use right in the cunny. The problems you named are mainly just reasons why lowly plebs like us aren't going to get one anytime soon.

I don't wish they were slower at all. I just wish their power were being used responsibly.

I remember an article where one of the industry guys was saying how pointless it was to keep chasing better and better processor speeds when it was so much easier to keep throwing more cores and threads into the CPU instead.

Also, didn't one of the POWER processors hit 5GHz around ten years ago?

I know Oracle's latest SPARC servers are clocked at 5GHz, with Fujitsu's being slightly lower.

This Moore's law laziness isn't only for programmers; CPU companies are lazy too. There won't be any real improvements until there's a new architecture. Computers are too fast and have too much memory, so nobody cares about instructions and encodings anymore. Assemblers are huge now because the x86 encoding doesn't follow any useful rules. Encodings used to be chosen to make code smaller and to keep assemblers and compilers simple and small (which also made machine code easier to read). With x86, they just stick instructions wherever there's room. If Moore's law had ended at the 386, we would not be using x86 today, because it's too inefficient; they would have had to make much more optimal CPUs and software.
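To put some actual bytes behind the encoding complaint: the same "add 1 to a register" operation has multiple legal x86 encodings of different lengths, while a fixed-width ISA like RISC-V is always one regular 4-byte word. The bytes below are hand-encoded from the manuals and are only meant as an illustration.

/* Illustrative only: the same operation as raw machine code on x86 vs RISC-V,
 * showing how irregular x86 is next to a fixed-width encoding. */
#include <stdio.h>

/* x86 "add eax, 1": at least two legal encodings of different lengths. */
static const unsigned char x86_short[] = { 0x83, 0xC0, 0x01 };             /* 83 /0 ib (sign-extended imm8) */
static const unsigned char x86_long[]  = { 0x05, 0x01, 0x00, 0x00, 0x00 }; /* 05 id    (imm32)              */

/* RISC-V "addi a0, a0, 1": always one 4-byte word with fixed fields. */
static const unsigned char rv_addi[]   = { 0x13, 0x05, 0x15, 0x00 };       /* 0x00150513, little-endian     */

int main(void) {
    printf("x86 short form: %zu bytes\n", sizeof x86_short);
    printf("x86 long form:  %zu bytes\n", sizeof x86_long);
    printf("riscv addi:     %zu bytes\n", sizeof rv_addi);
    return 0;
}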