The Failure of Moore's Law

We had CPUs clocked at 3.0GHz+ in the early 2000s. Since then, chipmakers have been masking the lack of clock-speed progress by increasing cache sizes, adding more cores, and adding more threads. At what point will progress stagnate completely? What will happen then?

CPU development will stagnate eventually and programmers will finally have to stop writing sloppy code and optimize their software.

Text editors will take 100 gigabytes of RAM in the next 20 years and you'll still be saying the same thing.

Moore's Law doesn't say anything about CPU clock rates.

Moore's law isn't a law, it's an observation about the miniaturization of components, and it only holds for as long as the physics allows.
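
For anyone who hasn't actually read it: the observation is that the number of transistors on a chip doubles roughly every two years. Here's a back-of-the-envelope sketch of that compounding (the two-year period and the Intel 4004 baseline are just the popularly quoted figures, nothing more precise):

# Moore's observation as pure compounding: transistor counts double
# roughly every two years. Baseline: Intel 4004 (1971, ~2,300 transistors).
# Both numbers are the commonly quoted ones, used here for illustration.

BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Transistor count predicted by a strict two-year doubling."""
    doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (1971, 1990, 2000, 2010, 2020):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")

Run it and 2020 lands around 50 billion, which is roughly where the biggest dies actually are. The observation held; it just stopped translating into clock speed.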

They'll find a way to make 3D transistors (which is already happening).
They'll use a material that costs $0.10 more than the one they usually use.
More botnet.

Sounds like progress to me. A higher clock speed is not the only technology that can improve a processor.

Transistor counts have kept increasing, but performance hasn't kept pace.

Attached: transistor_scaling.png (640x637, 84.31K)

Did you even look at the graph you posted? It measures clock speed and performance per clock cycle. We have many more cores now that can do way more processing than older CPUs.

Nothing in it about clock speed. It's about the number of transistors on a die, and going by the number of cores being squeezed onto CPUs, Moore's law is still in action.
It will fail when transistors become so small that electrons pass straight through them, i.e. quantum tunnelling, which we aren't too far away from.

Isn't that just an illusion of performance? Yes, it's possible to have a double-digit number of cores per die now (and multiple threads per core), but if Intel were to release a single-core processor without hyper-threading, built on today's process, it would probably barely outperform the Pentiums of the 2000s.

WTF are you talking about? Webservers, databases, all kinds of shit are many, many times faster. Next you'll say:

well no shit

Sorry, I conflated what Moore said (transistor counts doubling every couple of years) with what David House said (performance doubling every 18 months). But more transistors = more CPU power seemed like a tacit part of the law to me.

...

I'm sorry that I failed to explain it well (I've been drinking). I mean that while adding more threads/cores per die certainly improves performance (and is the "modern" way to optimize CPUs), that performance only shows up when the cores can actually work together on a problem. The point is that single-core, single-thread performance has stalled. I still don't think I explained it well, whatever.

It sounds like you're trying to describe Amdahl's law. Even raytracing eventually caps out above a certain number of cores, even though it is 95% parallelisable: the serial 5% limits the speedup to 20x no matter how much hardware you throw at it.
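
The arithmetic, for anyone who wants to check it (the 95% figure is the one quoted above, not a measurement):

# Amdahl's law: with a fraction p of the work parallelisable, the ideal
# speedup on n cores is 1 / ((1 - p) + p / n). As n grows without bound
# this approaches 1 / (1 - p), so p = 0.95 can never beat 20x.

def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup on n cores when a fraction p of the work parallelises."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95  # the "95% parallelisable" raytracer from the post above
for n in (1, 2, 8, 64, 1024):
    print(f"{n:>5} cores: {amdahl_speedup(p, n):6.2f}x")
print(f"  cap: {1.0 / (1.0 - p):.0f}x")

1024 cores gets you about 19.6x. The next thousand cores buy you the remaining 0.4x.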

There's a great video on YouTube from an actual chip designer; search for Sophie Wilson - The Future of Microprocessors.

She explains that 3D chips won't be a panacea, because stacking doesn't solve the underlying problem of heat generation (it can make it worse).

There's no need to apologize, I think I understand where you're coming from. To the pedants lurking this thread, don't read any further.

Sure, we can technically maintain Moore's law by making bigger dies with moar cores. Sure, performance for some problems will scale with the number of cores. But something about this feels wrong.

Don't any of you remember when computers were actually taking leaps and strides on a monthly basis? You could buy a new computer every year and it would blow your old one out of the water. And look where we are now. I'm using the same desktop I bought 5 years ago, and what's my incentive to upgrade? Better TDP? A couple more FPS for muh gaymen? A shiny new botnet?

Moore's law represented the idea of an ongoing technological revolution; now it's just a biannual opportunity for Intel to jerk themselves off. If Moore's law is still alive, its spirit is long gone.

No.

They still are. Just not for consumer shitware game tech. Look at servers. They get better and better all the fucking time. Four terabytes of ECC RAM in a 1U rackmount server is not crazy anymore.

Woah, it's as if we've hit the physical limits on how fast you can flip transistors on silicon and have to focus on optimizations instead.

Well, I guess that depends on what you mean by CPU power. If you mean instructions per second, then a multicore CPU is plenty powerful. The problem is that all our single-threaded languages and code depend on the IPS of one core, which is driven by clock speed, and we hit the thermal limit on that a while ago.
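
To make that concrete, here's a minimal sketch (Python, made-up workload; the point is only that the serial version uses one core no matter how many sit on the die):

# A deliberately dumb CPU-bound workload. Run serially it occupies exactly
# one core; spreading it across a process pool is the only way the other
# cores on a modern die contribute anything.
import time
from concurrent.futures import ProcessPoolExecutor

def busy_sum(n: int) -> int:
    """CPU-bound busywork: sum of squares below n."""
    return sum(i * i for i in range(n))

def main() -> None:
    chunks = [5_000_000] * 8

    start = time.perf_counter()
    serial = [busy_sum(n) for n in chunks]  # one core, however many exist
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:  # roughly one worker per core
        parallel = list(pool.map(busy_sum, chunks))
    print(f"parallel: {time.perf_counter() - start:.2f}s")

    assert serial == parallel

if __name__ == "__main__":
    main()

Nothing about the hardware forces the first version to be slow; it's the code that refuses to spread out.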


I suspect that sounded better in your head.

Look up the power wall. We can't get much faster without parts melting. That's why we began using multiprocessing.

edwardbosworth.com/My5155_Slides/Chapter01/ThePowerWall.htm
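
The rough arithmetic behind the wall, with illustrative constants (not numbers from the linked slides): dynamic power goes as activity x capacitance x voltage squared x frequency, and once supply voltage stopped scaling down with each process node, extra frequency became extra heat almost one-for-one.

# Dynamic CPU power roughly follows P = a * C * V^2 * f (activity factor,
# switched capacitance, supply voltage, clock frequency). The constants
# below are illustrative placeholders, not measurements of a real chip.

def dynamic_power(a: float, c: float, v: float, f: float) -> float:
    """Dynamic power in watts: activity * capacitance (F) * volts^2 * hertz."""
    return a * c * v * v * f

A, C = 0.2, 1e-7  # assumed activity factor and chip-level switched capacitance

# Dennard scaling used to drop V along with feature size, keeping power flat.
# Once V bottomed out around a volt, higher f just meant more watts:
for f_ghz in (1, 2, 3, 4, 6, 10):
    watts = dynamic_power(A, C, v=1.0, f=f_ghz * 1e9)
    print(f"{f_ghz:>2} GHz at 1.0V -> {watts:6.1f} W")

With these made-up constants, 3GHz is a coolable 60W and 10GHz is a 200W hotplate, which is more or less why clocks parked where they did.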

Oldfag here, they didn't bother to optimize shit when the average PC had 32MB of RAM, they're not gonna bother now

Well, we could go back to vacuum-channel transistor technology; that'd give us maybe another 10-50x clock speed boost.

Attached: sad pepe.jpg (864x406 50.88 KB, 57.41K)

Did you forget to check your nanoseconds?

Simple: go back to the 6502. Computers went wrong when they started making them for the lowest common denominator.