The Failure of Moore's Law

WTF are you talking about? Webservers, databases, all kinds of shit are many, many times faster. Next you say:

well no shit

Sorry, I conflated what Moore said with what David House said. But more transistors = more CPU power seemed like a tacit part of the law to me.

...

I'm sorry that I failed to explain it well (I've been drinking). I mean that while adding more threads/cores per die certainly improves performance (and is the "modern" way to optimize CPUs), that performance comes from the cores working together - the point being that single-core/single-thread performance has stalled. I still don't think I explained it well, whatever.

It sounds like you are trying to talk about Amdahl's law. Even raytracing eventually caps out above a certain number of cores, even though it is 95% parallelisable.
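If anyone wants to see where that cap actually sits, here's a rough back-of-the-envelope sketch of Amdahl's law (assuming the 95% figure above and an ideal, overhead-free split, which real hardware never gives you):

```python
# Minimal sketch of Amdahl's law. p = 0.95 is the parallelisable fraction
# claimed above; the core counts are just an arbitrary sweep.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup of a workload that is a fraction p parallelisable, run on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.95
    for n in (1, 2, 4, 8, 16, 64, 256, 1024):
        print(f"{n:>5} cores -> {amdahl_speedup(p, n):6.2f}x speedup")
    # The serial 5% dominates: speedup can never exceed 1 / (1 - p) = 20x,
    # no matter how many cores you throw at it.
```

With a 5% serial fraction you can never beat 20x, which is why piling on cores eventually stops paying off.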

There's a great video on youtube from an actual chip designer, google Sophie Wilson - The Future of Microprocessors

It does explain that 3D chips won't be a panacea, because they don't solve the underlying problem of heat generation (and can make it worse).

There's no need to apologize, I think I understand where you're coming from. To the pedants lurking this thread, don't read any further.

Sure, we can technically maintain Moore's law by making bigger dies with moar cores. Sure, performance for some problems will scale with the number of cores. But something about this feels wrong.

Don't any of you remember when computers were actually taking leaps and strides on a monthly basis? You could buy a new computer every year and it would blow your old one out of the water. And look where we are now. I'm using the same desktop I bought 5 years ago, and what's my incentive to upgrade? Better TDP? A couple more FPS for muh gaymen? A shiny new botnet?

Moore's law represented the idea of an ongoing technological revolution--now it's just a biannual opportunity for Intel to jerk themselves off. If Moore's law is still alive, its spirit is long gone.

No.

They still are. Just not for consumer shitware game tech. Look at servers. They get better and better all the fucking time. Four terabytes of ECC ram in a 1U rackmount server is not crazy anymore.

woah as if we have reached the physical limitations of raw speed of flipping transistors on silicon and we have to focus on optimizations instead

Well I guess that depends on what you mean by CPU power. If you mean instructions per second, then a multicore CPU would be plenty powerful. The problem is that all our single-threaded languages and code depend on IPS being driven on a single core by clock speed, and we hit the thermodynamic limit on that a while ago.
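To make that concrete, here's a toy sketch of my own (nothing from anyone's actual code, and the workload is made up): the same CPU-bound loop run once serially and once split across worker processes. The serial version only goes as fast as one core's clock; the parallel version actually uses the extra cores.

```python
# Toy example: serial vs. multi-process execution of a CPU-bound loop.
import os
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n: int) -> int:
    """A dumb CPU-bound loop standing in for 'instructions per second' work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    work = [5_000_000] * (os.cpu_count() or 4)

    t0 = time.perf_counter()
    for chunk in work:          # single-threaded: bound by one core's clock speed
        burn(chunk)
    serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with ProcessPoolExecutor() as pool:   # spread across cores instead
        list(pool.map(burn, work))
    parallel = time.perf_counter() - t0

    print(f"serial:   {serial:.2f}s")
    print(f"parallel: {parallel:.2f}s on {os.cpu_count()} cores")
```

Point being, more cores only help if the code is actually written to use them; a serial loop just sits on one core waiting for a clock-speed bump that isn't coming.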


I suspect that sounded better in your head.