Get ready for Intel to get absolutely wrecked by 7nm Zen next year, and by 5nm very soon after in 2020

Attached: cbr15ipccomparisonscores.png (1278x959, 86.54K)

But luckily there's always some new tech on the horizon to excuse the unbroken string of failures since the Athlon.

boards.4chan.org/g/thread/67185781
Sorry to link cuckchan, but I made too many posts to copy-paste here. I'm the anon going on about how fucked Intel is from 2020 onwards.
Funny how the shills there literally have no response other than muh games and IPC, and AMD beats the shit out of Intel even in that plus streaming, with better thermals and better average low FPS.

Yeah, 2006-2018 was fucking brutal for AMD, but they finally turned it around late this year

...

IPC has peaked across the board and has been largely the same for both team red and team blue since Ivy Bridge. At this point it's clear smaller nodes won't increase performance, only make CPUs more power efficient, and even that is starting to see diminishing returns.
And every other CPU architecture designer is struggling to keep up because, for raw performance, CISC is just better than RISC: you can keep tacking on feature sets to improve the efficiency of software ad infinitum. You can't really do that with RISC, otherwise it just becomes CISC.
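For anyone wondering why flat IPC matters, the usual back-of-the-envelope (standard definition, nothing vendor-specific) is

\[
\text{single-thread performance} \approx \text{IPC} \times f_{\text{clock}}
\]

so with IPC flat since Ivy Bridge and clocks stuck in the 4-5 GHz range, per-core speed has nowhere left to go.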

Attached: 1430751364016.jpg (550x412, 68.58K)

So what do we do? Hit the wall in ten years waiting for 1nm lithography or smaller? Move to RISC / away from x86? /g/ on cuckchan couldn't give me an answer

Seeing as performance is not getting any better, and freetards are moving in the direction of RISC, where performance will never be as good as CISC but is now adequate for most shit, I think it's safe to say the enthusiast market is just dead now. Do whatever you want, because it's not going to get better or worse in terms of performance

Sweet, looks like I'll be keeping my 2700X for 5 years like I did with my 4690 and i7 920 before it
Get comfy

I am. In many ways computers have finally gotten to where the automobile industry got in the 90s. A 90s shitbox is still a solid daily driver today, and a late-2010s computer is going to be a solid daily driver for the next 20 years at the very least. It was hyperbolic to say the enthusiast scene is dead; it's just stagnant. They will likely be the only ones driving hardware growth, the same way the enthusiast scene does for cars nowadays

Kek, funny you mention that: my 2001 shitbox has an insane aftermarket even today (based LS1 T56 + gm v) and a stock gen 3 ECU, so I know exactly what you mean, fam.
Didn't even have to turbo the thing; it only needed 350rwhp, detuned, to spin the rear tyres up to 4th gear because of the torque.
Anyway, back to tech: the only things I see really taking off are multi-chip-module GPUs, and hardware-accelerated raytracing is already here with RTX.
The 2020s are gonna be nuts: dual/quad/octa-die GPUs, woo

Attached: 1476904657471.png (413x350, 133.73K)

I didn't say that, I meant back to PC tech. Learn some reading comprehension.

Right now it seems we've reached the current physical barrier
kit.edu/kit/english/pi_2018_097_smallest-transistor-worldwide-switches-current-with-a-single-atom-in-solid-electrolyte.php
It doesn't seem to be feasible at large scale right now, but there might be one last spurt of performance/energy savings "soon"(TM)

We can go much smaller.

lol

Meanwhile I'm just using a core2duo still and patiently saving for a Talos II.

Why are you bringing up Talos II in a thread about AMD/Intel?

POWER10 is the future. IBM will be back to its 80s glory soon.

I really don't understand the IBM fangirlism among Linux users--you do realize they are "patent bullies" just like Apple?
Go ahead and denigrate me, call me an x86 "botnet" "shill" or whatever--the point is that IBM is not worthy of your trust; _please_ stop fellating them. IBM just wants people who will work for free (Open Sores). Freetards who go apoplectic about Apple and then jump up and down like children when IBM endorses the penguin are retarded.

He's a retard, go easy on him. Next he'll call Intel/AMD Jews even though IBM also has an R&D Center in Israel

Very little effectively uses AMD's MOAR CORES approach. Rendering does, but semi-pro 3D rendering these days just rents time on a render farm. That they continue to get shit on in single-core is a problem, since that's what really matters to most people.

IBM was tremendously influential in making Linux a serious business OS in the '90s, provided a huge amount of the earliest commercial kernel work, and spent a fortune over a decade defending Linux against Microsoft, Novell, and SCO's "Linux licenses". They earned their love more than anyone.

I read that as "America gets to rape Europe" and as an American all I've got to say is: buckle up.

ibm is pajeet approved ibm has more indian employees than american. india will be the #1 superpower by 2025

brb selling my Haswell i5 setup for a 2700X

Bought an AMD CPU thinking that inter-CCX latency wouldn't be an issue. Boy was I wrong. Monolithic CPUs are more expensive but are worth the extra money.

It'd be interesting to benchmark Factorio, as it's possibly the only game optimized deeply enough for this to be a big factor. Overclocking RAM actually gives that game a huge boost, since the inner loops grind through arrays so tightly.
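For anyone wondering what "tightly grinding arrays" looks like, here's a minimal sketch (not Factorio's actual code, just a strawman memory-bound loop): once the working set is far bigger than the caches, the loop spends its time waiting on RAM, so memory clocks matter more than core clocks.

/* Minimal sketch of a memory-bound inner loop. With a ~1 GiB working set the
 * caches are useless and runtime is dominated by RAM, which is why memory
 * speed moves the needle. Build with something like: gcc -O2 membench.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (256 * 1024 * 1024)   /* 256M ints = 1 GiB, far past any cache */

int main(void) {
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = (int)i;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long long sum = 0;
    for (size_t i = 0; i < N; i++) sum += a[i];   /* streams straight through RAM */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("sum=%lld  %.1f ms  ~%.1f GB/s\n", sum, ms,
           (double)N * sizeof *a / (ms * 1e6));
    free(a);
    return 0;
}

Run it with RAM at stock and then overclocked and the delta is basically all memory, not core.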

They only did that to kill Sun and other UNIX competition.

Who cares? I'd rather have processors that I can actually trust.

...

Then don't make them out of atoms.

Do more basic research on quarks. There's no reason to stop at atom scale.

At those scales the uncertainty principle starts becoming a real issue

...

India will poo in the loo by 2225

>not Pooperpower
missed opportunity fam

Sure, quantum computing is next, but the current state of it doesn't seem stable or usable yet.
There's a good chance it'll get there somewhere down the line, since research in quantum physics is still ongoing.

The point is we're nowhere near a physical limit. We'll need to do things differently to progress but that's how it's always been with technology.

Attached: Triody_var.jpg (1200x854, 605.46K)

Since you posted a different technology, you just undermined your own point. Tubes were unscalable for switching, and were quickly obsoleted in favor of transistors (and soon enough, even those transistors were obsoleted by field-effect transistors).

Now, for our trusty VLSI MOS gates, the width of the lattice at room temperature is the limit. The diamond lattice won't hold together in a thinner silicon wafer, which makes it impossible to suspend boron and phosphorus dopants since the lattice isn't even there. This is the end of the line for shrinking CMOS.

lol
We've been through gears, vacuum tubes, transistors, and silicon transistors in the past 100 years to solve the same problem. Next will be subatomic. The fatalism about there being nothing smaller to go to is ridiculous, as we're fully aware today of constructs smaller than atoms. It will take research, but so did everything else. Man up.

Good luck with that.

Nigger, everything we have is made of those building blocks. There are surely many fun things we can do with them.

There is no Intel 7nm.
It's 10nm that's BROKEN and LOW YIELD
and delayed for 3 years

what's all this shit about IPC? is that when i run a "frontend" for gnupg and it tries to talk to gnupg in english and then it fails to verify a single message and my OS crashes?

this thread is about the unprecedented level of botnet they will have in the next generations right?

CCX latency doesn't only impact FPS in games, but also the responsiveness of the entire system. I disabled a CCX just to get the responsiveness of my old Intel system, and it still isn't quite there yet. I wouldn't really recommend Ryzen to anyone except amateurs. The Zen shills on /g/ are a fucking cancer, as well as their underage following. Now this board is dickriding Zen.

Attached: 2ccx.PNG (602x95 8.19 KB, 8.05K)

Are you implying Intelel is any better than AyyMD? I'd take AMD over Intel nowadays because of the 8 cores, soldered IHS, and fewer """"""exploits""""""

Both are shit, it's just that Intel is the better buy for anyone doing realtime work.

Intel may score better on perf than AMD, but when you think about all the hidden shit they put, and will continue to put, into their CPUs, you may realize AMD is the preferable alternative.
You might even go further and switch to ARM/Z80.

Attached: outserv2.gif (40x40, 1.25K)

There's always some sort of impossible to prove, "just trust me bro" thing wrong with AMD when they have a competitive product.

I still remember Anandtech telling readers that core counts that weren't a power of two hurt performance and caused weird latency issues when the 6-core Phenom came out. Then Intel released a 6-core desktop part and suddenly the power of two no longer mattered.


This is a troll or marketing thread. The only benchmark they use is single-thread rendering, which has no real-world use case and negates the benefit of any additional cores. Not to mention they're all overclocked. Most people fall for this type of bullshit benchmark; you can tell by the people in this thread thinking it means anything beyond single-thread performance. An unlocked Pentium could easily win this benchmark, and that wouldn't make it the fastest CPU on the list.

Isn't IBM Power a RISC architecture?

Is that even true any more, with all the mitigations Intel's had to do?

yes, because people only run one (1) program on their computer at a time and OS schedulers that split work across cpu cores don't exist :^)

goddamn you Intlel shills are getting desperate lmao

Intel(R) Meltdown(R) and the new Spectre-NG exploits got you working overtime to salvage your reputation, kek

When there are hundreds or thousands of interrupts per second, those microseconds become milliseconds. So yes, system responsiveness does change.

Is your mouse polling at 1 GHz or something?

Basically, yeah. How often does a desktop have two programs simultaneously pushing the CPU as hard as they can over a few seconds or more? Roughly never. What do you hope to improve, the user experience of systems infected with bitcoin malware?

Face it man, your 4c4t quad-core is obsolete if you do anything else at the same time as gaming.

Shit's irrelevant. Benchmarks show that the desktop experience is all about single core performance. And I'm sure if AMD ever surpasses Intel in single core performance you'll be repeating my words to me.

"benchmarks"

Not representative of the real world most of the time.

Because real world users are rendering a project in blender while they play Witcher 3, right?

No, real users are checking email while writing some shit in Word, at the same time as they download shitloads of documents from the internet and open window after window after window without ever closing any of them "for faster access". Meanwhile there's the spyware the boss installed on the machines to keep an eye on the employees, or the remote desktop software from the IT guy who needs it to fix the users' fuckups as quickly as he can.

IBM is inherently anti-jewish because they were instrumental during the holocaust
try again

intel is only for gaymers and literal uphill gardeners

Unless they have multiple programs that cap out the CPU for sustained periods it doesn't matter. Desktop users only have one program at a time that does that. Even the "power user" types don't have more than one. It's not until you hit developers and workstations that you get those workloads.

I don't know much about gaymen workloads, but I upgraded my dead Xeon E3-1265Lv2 earlier this year. My options were basically a system priced similarly to what I paid for the old one 5 years ago or however long it was, which was basically no better in any meaningful way, or something interesting. So I upgraded to an EPYC 7351P.
Gotta say, AMD is on to something here. It's an order of magnitude cheaper than the same power from Intel. Eight-channel memory, it never seems to get bottlenecked on RAM, and when you saturate all the threads it just powers through at 60-80% of single-core performance per thread, like a beast.
For a dev system/server, it's truly a modern marvel. I can see it giving Intel some serious trouble in the server hosting market, even though the single-core performance is a tad lower than Intel chips from several generations ago.
There's also just something magical about compiling a fairly hefty Linux config in 40 seconds.

1 MHz? Yes, and my graphics card, and my audio device, and my keyboard, and I/O, etc. Optimizing the OS and hardware for low latency starts with the CPU, and I've concluded that Ryzen can't be used in realtime systems. Sure, workstations won't be affected, but audio and gaming won't benefit from more cores if they are extremely far apart.

Yeahhh no. You're way off.
The others aren't causing frequent interrupts, either. Check /proc/interrupts. A system spamming you with interrupts is broken.
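If anyone wants to actually check instead of guessing, here's a rough sketch (Linux only, since that's where /proc/interrupts lives; it naively sums every per-CPU counter, so treat the output as a ballpark, not exact accounting):

/* Rough interrupt-rate check: sum all the per-CPU counters in
 * /proc/interrupts, sample twice a second apart, print the delta. */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static unsigned long long total_interrupts(void) {
    FILE *f = fopen("/proc/interrupts", "r");
    if (!f) { perror("/proc/interrupts"); exit(1); }
    unsigned long long total = 0;
    char line[4096];
    if (!fgets(line, sizeof line, f)) { fclose(f); return 0; }  /* skip CPU0 CPU1 ... header */
    while (fgets(line, sizeof line, f)) {
        char *p = line;
        while (*p && *p != ':') p++;              /* counts start after "IRQn:" */
        if (*p == ':') p++;
        while (*p) {
            while (isspace((unsigned char)*p)) p++;
            if (!isdigit((unsigned char)*p)) break;   /* reached the chip/device name */
            total += strtoull(p, &p, 10);
        }
    }
    fclose(f);
    return total;
}

int main(void) {
    unsigned long long before = total_interrupts();
    sleep(1);
    unsigned long long after = total_interrupts();
    printf("~%llu interrupts/sec across all CPUs\n", after - before);
    return 0;
}

On a sane desktop the number is typically in the hundreds to low thousands per second, nowhere near millions.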

Well, I'm on a Windows system. In the span of 10 seconds I got 6169 DPCs and 18102 ISRs. So 617 and 1810 per second respectively, on average. That's with audio playing and me not moving the mouse.

Now, with a game in the background and me moving the mouse, I got 93068 DPCs and 34749 ISRs over 20 seconds. So 4653 DPCs and 1737 ISRs per second. That's a lot, and that's where the microseconds become milliseconds. In gaming, it is critical that latency stays low for smooth frame delivery and responsive input. For audio production, that means being able to monitor audio. I don't feel like restarting my system and enabling a CCX, but usually the average interrupt to process latency is 1.25-2x larger. Not worth the extra cores.

Irrelevant.
No. It's also considerably less than a million per second as in

Are you trying to count total time spent in a second? Why? What matters for latency is how long the longest interrupt takes to service, and whether the rate is so high that interrupts arrive while interrupts are disabled, since that also adds latency. With something pathetically low like 1737 per second that isn't happening, and anything slowing down a single interrupt is going to be a difference of nanoseconds.
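Back-of-the-envelope with the numbers above, assuming a generous ~1 µs of handler time per ISR (an assumed round figure, not a measurement):

\[
1737~\text{ISRs/s} \times 1~\mu\text{s} \approx 1.7~\text{ms of handler time per second} \approx 0.17\%~\text{of one core}
\]

Each individual event still completes in microseconds; the rate only starts to matter once interrupts arrive faster than they can be serviced.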

who cares, I just want to install gentoo faster

Real Zig Forums users are. The rest can die in a gas chamber as far as I'm concerned.

When you guys say real-time, do you actually just mean low latency? Because I have a hard time imagining anyone doing actual real-time stuff would use Windows and a modern x86 processor and not, you know, an RTOS on an architecture either built for it or at least analyzed well enough that guarantees can actually be made.

Good luck doing that on Windows without either manually setting thread affinity of both programs or suffering stutter.
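For reference, pinning a process by hand is only a few lines. This is a minimal sketch, and the mask is a made-up example (0x0F = logical CPUs 0-3, 0xF0 = 4-7; check which logical CPUs actually sit on which CCX for your chip). `start /affinity` in cmd does the same thing without code.

/* Minimal sketch: pin the current process to a fixed set of cores on Windows,
 * e.g. to keep two heavy programs on separate CCXs by hand.
 * The mask below is an example value, not a recommendation. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    DWORD_PTR mask = 0x0F;   /* example: logical CPUs 0-3 */
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("pinned to affinity mask 0x%llx\n", (unsigned long long)mask);
    /* ...do the actual work / spawn the real program here... */
    return 0;
}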

Yes low-latency.

why?

Where do you see a million per second?

Yes, the average interrupt to process latency significantly declines with a CCX disabled. The average can be multiplied by the number of DPCs to give an estimate of driver latency.
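Plugging in rough numbers from the earlier post (~4653 DPCs/s) and assuming the average is on the order of 1 µs (an assumed figure for illustration):

\[
4653~\text{DPCs/s} \times 1~\mu\text{s} \approx 4.7~\text{ms of DPC time per second}
\]

which is total time spent in DPCs per second, while each individual event stays in the microsecond range.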

DPCs don't run while masked; they're like Linux's tasklets. They aren't contributing to interrupt latency, and the number of them is meaningless in this context.
I asked if you think your hardware is generating a billion interrupts per second, and you thought it generates a million interrupts per second. I'm well aware the real number is usually in the thousands on a desktop.
I doubt it affects interrupt latency at all, as it shouldn't need to communicate with the other cores. Even if it did, you'd be talking about a difference of nanoseconds. It also wouldn't matter how many interrupts per second you have, as the latency is independent of the rate unless you have so many that they're getting blocked, in which case your computer is likely unresponsive.

Oops, I meant 1 KHz mouse polling.

Well, that's not the case if you look at the interrupt to process here

It shows there's possibly ~200 nanoseconds of difference in the handling of the interrupt before it's offloaded. That's so small that I'm sure the result will differ each time you reset it and recheck those numbers. The rest is purely inside of Windows, and at most about 1 microsecond of difference. The "highest" results will be heavily affected by luck as DPCs can be interrupted (same as tasklets) and are irrelevant in this context.
1 microsecond of difference, if there actually is one (reboot, play for a bit, recheck the numbers, and see how stable they are), is way below what you can perceive and way below other sources of latency like the scheduler.

Exactly

They do differ. I cycle the test to try to get the lowest result, which is what I work off of. The highs vary too wildly, whereas the lows I can get consistently.

Actually, I was able to shave off about 2 µs by optimizing Windows and UEFI settings. There are many guides out there. Most of the savings come from increasing the timer resolution to 0.5 ms and disabling the power-saving bullshit. Windows 8+ added a tickless kernel, which can be disabled.
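If you'd rather do the timer part from inside a program than follow a guide, the documented knob looks like this; note timeBeginPeriod only goes down to 1 ms, and the 0.5 ms figure needs the undocumented NtSetTimerResolution, which isn't shown here (sketch only):

/* Raise the Windows timer resolution for the lifetime of this process.
 * timeBeginPeriod is the documented API and bottoms out at 1 ms; 0.5 ms
 * needs the undocumented NtSetTimerResolution (not shown). Link: winmm.lib */
#include <windows.h>
#include <stdio.h>

int main(void) {
    TIMECAPS tc;
    if (timeGetDevCaps(&tc, sizeof tc) != MMSYSERR_NOERROR) return 1;

    UINT period = tc.wPeriodMin;            /* typically 1 ms */
    if (timeBeginPeriod(period) != MMSYSERR_NOERROR) return 1;
    printf("timer period set to %u ms (min %u, max %u)\n",
           period, tc.wPeriodMin, tc.wPeriodMax);

    Sleep(10000);                           /* resolution stays raised while we run */

    timeEndPeriod(period);                  /* always pair begin/end */
    return 0;
}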

I'm not really arguing with you that this should be imperceptible, but for me it is perceptible. The system is a lot more responsive on one CCX than two. The average lowest DPC latency on 1 CCX is 0.5 µs, whereas on 2 CCXs it's 0.7 µs. The interrupt-to-process latency varies a lot more: 1.1-2.3 µs on 1 CCX and 1.6-3.1 µs on 2 CCXs, usually around 1.6 µs for 1 CCX and 2.7 µs for 2 CCXs. It makes tracking targets a lot easier.

Obviously microseconds aren't perceptible, or even a few milliseconds to most people, but somewhere down the line they add up. Even in a blind test I would notice the difference between 500 and 1000 Hz mouse polling, as far as latency goes (stability issues are another thing).

No fucking shit, how big is the fraction of lost time?

Why are you so hostile? It takes roughly 6/10 of the time, i.e. about 1.67x faster.

Ask someone to help you test if you can tell if a system has it enabled or not.

Same here, fam. Just got myself a fresh SSD since my old one still works but is worn out (6 years old). Ubuntu + Xfce is running amazingly fast on my C2D machine. I wouldn't even call the ~300 MB of RAM after boot too bloated. Seems like I'll be getting comfy again after considering the new ThinkPad A485.

Would be nice if I knew someone with a slow-mo camera, 10000 FPS for ease of math.

...

It's a bit late, but I think I figured out what might be happening to this guy and why his statement is likely true in a way. What could be happening is that the NT kernel can behave differently depending on the CPU features it finds at boot time. So what he might be experiencing are not microseconds of lag from the CPU but milliseconds of lag caused by the kernel employing slower scheduling methods, or just the scheduler getting bogged down by the extra threads.

Attached: dc-comics-swamp-thing-statue-prime1-studio-feature-903174-1-990x575.jpg (990x575, 166.95K)

Tell me more about this Talos shit. Is it really free from proprietary code?

It's free from all code as no one uses it.

Remove the Ethernet controller and yes, it is completely free of any sort of proprietary code. It's fucking awesome.

unlike ur mom, whos seen plenty of DNA code in her time

Absolute kek

Attached: (you).png (239x90, 34.01K)

Perhaps a switch to magnonics instead of electronics if anyone manages to make something that can facilitate them without blowing up the second it gets exposed to oxygen.

You're describing a RAM problem, not a thread problem. All of those tasks use very little CPU time; they benefit from a couple of threads, but there is no normal situation where you actually need 16 threads. They are quite limited by RAM though, particularly the browser tabs.