What exactly does the future hold for Intel?

Intel is having a rough time of it lately. AMD finally appears to be reaping the benefits of its MOAR CORES strategy, while Intel's planned 10nm die shrink looks stillborn; and as if being stuck on the same 14nm process from 2015 weren't bad enough, they can't even move enough parts to fill demand.

AMD's Zen 2 line is likely no more than six months away from release. The odds that it surpasses Intel in raw IPC are slim, and the OC headroom simply doesn't reach as high, but if Intel don't get their shit in order it's really just a matter of time.

At this point, what's the likelihood that 10nm frustrates Intel to the point where they simply try their luck with 7nm instead?

Attached: cargo.jpg (722x799 168.19 KB, 61.43K)

Other urls found in this thread:

semiaccurate.com/2018/10/22/intel-kills-off-the-10nm-process/
youtube.com/watch?v=r5Doo-zgyQs
overclock.net/forum/11-amd-motherboards/1624603-rog-crosshair-vi-overclocking-thread-3897.html#post27694588

ARM is the future, x86 is on life support at this point

They are getting fucked, hard. 7nm Ryzen processors will (in all likelihood) be able to clock at ~4.8GHz, or at least the equivalent of the 3700X will. Zen+ processors with 3200 memory actually have an IPC advantage over Intel; what makes Intel better is clock speed, and that advantage is going away. Too big to fail? Not at all.

Total undocumented proprietary garbage. Even when we get RISC-V, x86 will still be the only option for workstations. Yes, POWER9 exists, but you can't do much with it.

web browsers and servers in the fabric

ARM has its own issues. Sure, you can get a low-clocked many-core processor (I'm talking 16-96 cores here, idk if it scales more than that right now), but those cores generally clock low (~2GHz), and then you have to deal with kernel support. I know of at least one company that supports UEFI, so that's pretty cool.

Intel spent way too much of its energy on cramming more and more spyware and "oopsie totally not intentional" vulnerabilities into its CPUs and not enough on making them faster.

This is a corporate culture problem and it can't be solved because they spend hundreds of millions on shitty diversity programs to hire underqualified niggers. Heck they spent $100 million on Will.I.AM's promotional campaign so black people would buy computers instead of getting free Obamaphones.

I blame China, whom Intel is obviously in bed with.

You are conflating architectures with the rest of the ecosystem. There are also plenty of different ARMs; I have all the documentation I need for tinkering with every ARM-based system I own except one of my Odroids. It's the SoC makers that are the cancer in ARM, I'm afraid. There is nothing more wrong with any of the POWER ISAs than with probably any other RISC, save maybe for paired singles living too long. I'd love to roll my own shit for one of those Talos boxes, but I can't afford to plop down that kind of cash for hobby shit.

Meanwhile IBM is selling 5.2GHz 14nm CPUs right now with the z14 for mainframes, and the next version of POWER will probably clock in at around 5GHz using 10nm. And that would be the base clock rate, not turbo or boost speed; the POWER8 from 2013 was already able to go up to 5GHz. This is thanks to their silicon-on-insulator design, which is very different from how typical CPUs are made.

There's a lot you can do with POWER9; most of the issues with getting Chrome and Firefox to work on it have been dealt with, and compiler support has been improving. It's still years away from a novice being able to replace his x86 desktop with one without running into major problems, but progress is being made.

You are a fucking idiot.

You fucking what? POWER9 is great for AI/servers.

Not exactly "workstation" grade when modern browsers are still buggy. Also, why spend that much money and then use it as a server? I'll buy it only when it can give me the same experience x86 can.

...

phoneposter pls go

My understanding of all this bullshit: the reason 10nm has problems is that it's not much better than 7nm, and it NEEDS TO BE for Intel to succeed, but they can't get the silicon lottery to a good enough level. Everything else seems to be a smokescreen around this goal.

You mean their womyn outreach program didn't pay off?

Attached: 77448e6c2987265a0c9c587061d0fc8f62c29a451b2b021fd71a9528a8f7c374.png (680x410, 164.79K)

...

True, their 10nm products won't be able to touch their 14nm+++++++ products; and they've pretty much pushed 14nm as far as it can go with the 9900k. No idea what they'll do next.

If AMD's® Intel® x86™ succeeds, Intel® still succeeds.

Wrong. The future is POWER. Feel the POWER!

Just placed an order for a Crosshair VII and some RAM, can't wait!

Still on the fence about the CPU, though; should I get a 2600 or go balls to the wall and get a 2700X?

Depends on your board (among other things). If you're buying something with an X470 chipset, it'd be a waste not to get the 2700X.

Went with a 2600, as I'm planning on upgrading to Zen 2 late next year anyway.

semiaccurate.com/2018/10/22/intel-kills-off-the-10nm-process/

Everyone's favorite roach claims Intel's 10nm node has suffered SIDS

Attached: 1368736345579.jpg (425x312, 64.92K)

Probably the right choice, just make sure to buy an aftermarket cooler so you get the most out of XFR2.

Even if they deliver, it's not going to perform much better than 14nm, because that is a very mature node. They're fucked.

Already have an H100i, so that's no issue.

That's the beauty of capitalism.
They get shitty from hiring incompetent fools; we stop buying their CPUs.

Got my stuff in, and god damn RAM compatibility is AIDS with Ryzen.

Guess that's what I get for not using B-die.

If you're a Linux user, mind telling me how much your CPU boosts? My 2700X only seems to hit 4.1GHz maximum, and I have the high-end Noctua cooler. I'm wondering if it could hit the full 4.3GHz if I was a Windows user.
Here's a command that'll help you:
watch -n0.1 "lscpu | grep MHz"
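If your kernel exposes cpufreq in sysfs, you can also watch per-core clocks directly (a rough sketch assuming the usual scaling_cur_freq layout; values are reported in kHz, so 4100000 means 4.1GHz):
watch -n0.1 "grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"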

Not a Loonix user, but with Performance Enhancer Level 2 it boosts to 3.9 all core.

And I just unfucked my RAM, it needed to be in different slots.

What speeds are they running at? I shelled out for Flare X, but I don't know if I should have; Zen+ is supposed to support most 3200 modules.

Currently at 3000 CL16, might try getting it to 3200 CL15.

Corsair Vengeance LPX 3200 CL16 here btw.

Also, ASUS AI Suite's auto overclock function is fucking AWESOME. It managed to OC my 2600 to 4.025 GHz.

They hired Jim Keller (main architect behind Ryzen) to fix 10nm and so on.
Even if Intel misses the mark for a year or two, it's coming back strong.

I hear people on Zig Forums saying "x86 is dangerous"; well, more that Intel is dangerous, but a lot of people say x86 itself is the issue.
Please explain to a softwarefag why other architectures such as ARM are more secure than x86 for a desktop/laptop machine.
security through obscurity isn't enough :^^^)

Final timings for now, got 3200 working.

Attached: 74tOScM.png (768x182, 12.02K)

they aren't optimized for windows DRM

I think you might be seeing the effects of using a proper server-level memory controller across the entire lineup. Intel servers are also exceptionally picky about the RAM they can use. With my old Xeon server I had to jump through all sorts of hoops with BIOS flashing to be able to use my faster RAM and newer model CPU, and with my recent upgrade to EPYC I had to use a shittier, slower stick of RAM for it to be able to self-flash the updated AGESA firmware.
I don't know what it is about adding support for ECC that creates a nightmare maze of compatibility issues.

Found some timings made by Ryzen guru The Stilt; they seem to be stable.

Attached: EnqsZLC.png (789x175, 11.88K)

Very interesting that this kernel does not keep a fixed clock on my 1700 despite it being OC'd to 3.9GHz

Attached: screenshot_20181027_120539.png (995x457, 54.63K)
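
Could just be the cpufreq governor downclocking idle cores rather than the OC not sticking. A quick sanity check, assuming the standard cpufreq sysfs paths (they can differ depending on the driver):
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
With "ondemand" or "schedutil" the reported MHz will drop whenever cores go idle, even if the multiplier is fixed in the BIOS.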

See SMM and hidden ("undocumented") x86 instructions.

Tried his Hynix MFR Fast timings; they seem stable.

My CPU voltages are pegged at 1.4V though, and that's something I don't like.

Fuck forgot pic.

Attached: eUV7LcA.png (1920x965, 99.91K)

Surely you mean dreamer instructions.

Attached: spics.jpeg (1000x667, 661.79K)

2018 was a great year.

Well user, even the worst of the P4s could be kept down to reasonable temps (as in sub-70°C) with a good air cooler.
The 9900K will still exceed 80°C with a liquid cooling setup, direct die contact, and a die grind.

"Surely the 9900K can't be that bad," I thought. But I've never before seen a review where, in order to reach temperatures below 90°C, they unironically brought out an industrial chiller.
I mean, props to Intel's arch for being able to handle 5GHz sustained with proper cooling, but what the actual fuck are they doing?

He is referencing der8auer's video:
youtube.com/watch?v=r5Doo-zgyQs

If you don't know who der8auer is, he is one of the best extreme overclockers in the world.

I just googled "9900k review" and the first one I got was tomshardware, which seriously used an industrial chiller as if that was somehow a perfectly normal thing to do.

who wants to Nanking those POS?

Attached: 9900k.png (1000x746, 235.49K)

Attached: Intel_Cooler.jpg (515x424, 39.13K)

Reminder

Attached: cryocool.png (651x487, 31.58K)

i lol'd

Fucking kill me: overclock.net/forum/11-amd-motherboards/1624603-rog-crosshair-vi-overclocking-thread-3897.html#post27694588

Had to turn down my OC because I was getting hardlocks and bluescreens, so I'm now at 4.065 GHz.

Finally. That shithead was the sole person responsible for the "ASUS is superior" circlejerk on OCN.

It's weird that ASRock (which started as the chink rip-off) is now the "premier" brand of sorts when it comes to AM4. Their VRMs and support for Linux/Windows 7 are great too.

15nm