Intel's 10nm processor development encounters more roadblocks, delayed into 2019

tomshardware.com/news/intel-cpu-10nm-earnings-amd,36967.html
archive.li/xmpE9

>We continue to make progress on our 10-nanometer process. We are shipping in low volume and yields are improving, but the rate of improvement is slower than we anticipated. As a result, volume production is moving from the second half of 2018 into 2019. We understand the yield issues and have defined improvements for them, but they will take time to implement and qualify. We have leadership products on the roadmap that continue to take advantage of 14-nanometer, with Whiskey Lake for clients and Cascade Lake for the data center coming later this year.

Will the defection of Jim Keller to Intel mitigate the 10nm problem?

Attached: intel-thumb[1].jpg (770x578, 52.19K)

They're still preparing to open their new Arizona-based chip fab that will ultimately produce 10nm processors.

Yes, not having to deal with cheap Chinese shit and actually supporting American manufacturing takes some time, boo hoo hoo.

...

Fuck off 50 Cent Party and the rest of you bugmen chinks

Attached: AAAAAAAAAAAA.jpg (292x292, 13.14K)

CPUs appear to be racing towards a dead end as far as Moore's law is concerned.
Do HDDs/SSDs have some headroom left, or will their development also stall with no survivors sometime after CY+10?

Based Trump bringing jobs back to the US. He will get my vote again.

smugloli.jpg

Good. Hopefully now other architectures can reach that dead end (ARM and RISC-V)

This. It's the only thing that still matters. We have to get rid of x86. High electricity costs, being bugged by the three-letter agencies, and inefficiency are the only things it still has.
I DON'T think the emulation costs on other architectures outweigh the struggle with x86 or amd64.

Why do you hate x86-64 if you value efficiency? Given the fact x86-64 has significantly higher IPC than ARM?

Don't answer that. I know it's because you're a retarded LARPer. Shoo, shoo now, retard, this isn't your role-playing safe space.

Go away CIA! >_

Attached: LennyKittenMeow.jpg (480x480, 28.24K)

That's true, but not in the long term, when my PC has no heavy application running.
What you mean is throughput.
x86 is also like a clusterfuck held together with tape: overly complicated, filled with legacy shit, and full of bugs.

This.

Attached: x86.png (698x792, 189.63K)

...

Exactly how new are you?

Sure thing you glow in the dark

Attached: u_fucked_up.jpg (800x367, 26.95K)

Nobody in the entire world uses ARM for speed. If it were an issue of efficiency, surely they would use ARM or RISC-V to keep electricity bills down. Perhaps somewhere inside that pea brain of yours, you think the CIA got to all of these other countries and forced them to use x86?

There is still POWER.
No, the operating systems are already manufactured there. And Windows and OS X (the main consumer OSs) use x86.
Apple switched to it, and Microsoft had a long partnership with Intel.
Since most applications are on Windows, naturally most people use Windows, and thus x86.
It was a historical development. Do you think that because there are tons of Apple or Java users, that makes them good?

Naw, you're retarded. Intel Atoms have comparable TDP to ARM but significantly higher IPC. The only real reason all our smartphones aren't x86 right now is the fact that Intel is a bunch of Jews, not only in their designs but also in their pricing compared to ARM. That, and Qualcomm's anti-competitive policies basically playing Intel at their own game. ARM is cheap junk garbage all around, though.

I never said anything about ARM. I hoped for a great future with POWER or the advancement of RISC-V, but x86 and amd64 are irredeemably broken.
I know you don't like to hear that; you told me already.

Will RISC-V use standard form factor motherboards with a standardized EFI/BIOS and configuration interfaces? All I ask is that anything that wants to succeed the IBM PC standard not take any steps backwards in this regard. I have a deep-seated hatred for ARMshit, in case you couldn't already tell, for various reasons.

I share your pain.
At the moment nearly nobody produces them, they're fucking $2000, and they can only be interacted with over a CLI since you can't even attach a monitor yet, but they can run Linux and GCC.
SiFive and others are working on it. It will take its time. Yes, I know it will make you sad.

Attached: mysadnessverybig.jpg (392x495, 37.53K)

See I don't really care whether it's 100% "standardized", but I do hope that any platform that rises up has some variation of this installation process that's common on traditional computers:
It doesn't have to be exactly those steps, or even have to have a standard form factor, but it just needs to allow the user to freely change OS and wipe the one it ships with off of the storage.

Hello your picture is very uwu. Are u a cute?

Attached: image.jpg (480x613, 31.1K)

Maybe, maybe not.

Attached: __fujiwara_no_mokou_touhou….jpg (396x494, 45.02K)

God damn you are pathetic.

Attached: Screenshot_2018-04-27_17-14-02.png (729x787, 83.12K)

Can you fuck off with your server lists taken from random websites with no source?
I highly doubt chink spyware CentOS makes up 21.8% of all systems let alone all server systems.
WAY TO PROVE YOU'RE A FAGGOT

I am not sure what the ramifications are for CPUs, but it could be revolutionary for RAM/storage.
nature.com/articles/ncomms15434
TL;DR: storage that's not ferromagnetic, is fast (sub-ns latency), is resistant to electromagnetic fields, has low power requirements, and AFAIK can hold more than a binary value.

Attached: 4bd04050c818d5da45601a5d67040a5412bf7458dc9033c79420a8c38e0b6012.jpg (600x450, 54.36K)

umm... do u know what CentOS is?
It's the gratis, community version of RHEL.
centos.org/
en.wikipedia.org/wiki/CentOS
That said, his list is kinda bad. Like, Linux is listed as an OS, but a bunch of its distros are listed right below it.

a-are u a boy? OwO

Attached: 6bdacb6d987640c2c6f6f7518f78a520.jpg (600x670, 61.58K)

Damn it.
I thought China had some equivalent of Red Star OS, but maybe I was just imagining shit.

Attached: ARM.jpg (1181x1181, 96.98K)

No, they're not that bad.
They do have their own distros, though. Three of them, actually!

There's Deepin OS, which is a Debian-based distro with their own DE. It was pretty much confirmed to be spyware by some people on jewtube recently, although I would HIGHLY suggest trying it in a VM just for the sake of laughing at the translations in their app store. Last time I checked, they were really hilarious.
deepin.org/en/

Then there's Ubuntu Kylin, a special version of Ubuntu with, once again, a custom DE (UKUI).
ubuntukylin.com/index.php?lang=en

And lastly, there's Kylin OS. I can't read the moonrunes, so I have no idea what it's based on, but it looks like it's meant for servers and cloud stuff.
kylinos.com.cn/

gizmodo.com/-top-500-smart-appliances

Attached: 1524866664893.jpg (400x386, 24.8K)

??

Attached: ClipboardImage.png (877x312, 14.64K)

A cornerstone of intellectual integrity is being able to cite your sources.

Attached: ItsSoHardToPutAURLIntoAnImage.png (734x399, 28.81K)

Another cornerstone of intellectual integrity is not being so new that you've never heard of LINPACK or the TOP500 list of the world's supercomputers, updated biannually.

Attached: Processor_families_in_TOP500_supercomputers.svg.png (600x426, 228.11K)

Ignorance isn't an integrity issue, unless you have intentionally kept yourself ignorant.

And you still can't manage to put a fucking URL in an image that you lazily stole from Wikipedia.

Attached: StillCantPutAFuckingURLInAnImage.png (665x364, 27.04K)

Sorted by Rpeak per core

Attached: Meh.png (1657x884, 152.83K)

I'm too lazy to google it apparently.

I don't want to support (((Amerimutt))) Israel manufacturing though.

Attached: mutt.png (400x400, 37.57K)

That's because nearly no one wants them; supply is predicated on demand.

Only a few thousand Talos systems have been ordered (my order was in the low one-thousands, and I placed it only a few months ago), and the demand is largely coming from people who need a Power system for development reasons more than from people who are security-minded. RISC-V is aimed at a completely different market than Power and as such has even less demand.


Probably not; that takes more effort, and most people who make ARM shit are lazy fucks.

You're on a website hosted in Reno, Nevada and founded by Americans living in the Philippines with Flip escorts.

This entire website is a "mutt" website

It doesn't take effort so much as it takes an industry consortium coming together to draft formal standards for something like this, and these industry types are very conservative when it comes to these things (mostly for good reason: appeal-to-novelty fallacies, etc.). And the reality is that there is simply no good reason to switch from x86-64. It's a fast, solid, well-supported, and proven architecture for its workloads. RISC-V would have to be SIGNIFICANTLY faster before industry types take notice.

He wasn't talking about drafting new standards; he was talking about ARM boards being things like ITX/ATX form factor compliant and using things like a BIOS, the way x86 and Power systems work.

You used false pretenses to shit up this entire thread when you could have just said you used ARM because it wasn't Israeli. Are you proud of yourself?

en.wikipedia.org/wiki/List_of_multinational_companies_with_research_and_development_centres_in_Israel
ARM is Israeli, as is Intel, and AMD, and IBM, and pretty much every major architecture design firm

Just because a company has an R&D office in Israel doesn't mean it's an Israeli company. A lot of tech companies have been setting up offices outside the US for quite a while now, since they worked out that in many cases it's cheaper and easier to have an overseas office than it is to import the talent to the US, even more so since Trump has been tightening down the H1B program. You just don't hear much about those offices, since they usually operate under the radar in terms of media, so that competitors don't also set up offices and start competing for talent.

ARM is actually Japanese, since it's owned entirely by SoftBank, which bought it using money from the Saudis.

No one cares. We have an ample supply of intelligent and skilled workers to fill those jobs. H1B just needlessly lowered the wages for those positions.

Not only do you have no idea how the US tech industry works when it comes to its workforce, you also thought I was criticising Trump's actions in regard to the H1B program. I don't live in the US and as a result don't give a fuck, since I don't want to live there; I was just mentioning the link.

Attached: trashtruck.jpg (650x429, 57.9K)

k dude

Attached: Trash01.png (680x697, 778.58K)

What's the advantage of 10nm?

The TL;DR is that a smaller feature size means lower power consumption and less silicon area for any given design; theoretically, going from 14nm to 10nm while keeping everything else the same would yield a ~50% decrease in power consumption, but in practice it's less because of other factors.
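Here's a back-of-the-envelope sketch in C of where that ~50% figure comes from, assuming ideal (Dennard-style) scaling and treating the node names as literal linear dimensions (a simplification; the numbers here are illustrative, not process data):

#include <stdio.h>

int main(void) {
    double old_node = 14.0, new_node = 10.0;  /* nm, taken at face value */
    double linear = new_node / old_node;      /* ~0.71 linear shrink */
    double area   = linear * linear;          /* ~0.51: same design in about half the silicon */
    /* Under ideal scaling, dynamic power at a fixed clock tracks area,
       hence the rough ~50% figure; leakage and wire effects eat into
       this in practice, as the post above notes. */
    printf("linear shrink: %.2f, area/power factor: %.2f\n", linear, area);
    return 0;
}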

Intel follows the 'tick-tock' approach, where every 'tock' is an architecture update and every 'tick' is a feature-size decrease (and sometimes a bugfix).

Moore's law has been dead for years, user.

>>>Zig Forums

Attached: 43bcacdebe15e6772942382966aee3a0177c9baeb495a8bbc47e95dcd386f0f5.jpg (1080x1068, 172.74K)

Laughably wrong. x86 has never been able to match ARM in mobile power efficiency; ARM is even creeping into the low-end server market thanks to its performance per watt:
blog.cloudflare.com/arm-takes-wing/


Depending on the priorities of the chip's designers, increased headroom for some combination of:
>lower power consumption and heat output
OR

...

so the big autobot in your PSU can't send as many angry bees into a smaller CPU, and so the CPU doesn't get stung as many times

If you're only going to run C and UNIX programs, there's no reason to switch, but there are other CPU architectures and operating systems that don't make your computer into a faster PDP-11, like mainframes, segmented architectures, and tagged architectures like Lisp machines. This is where a new architecture has an advantage.

multicians.org/multics-vm.html

The fundamental design flaw in Unix is the asinine belief that "programs are written to be executed by computers rather than read by humans." [Now that statement may be true in the statistical sense in that it applies to most programs. But it is totally, absolutely wrong in the moral sense.] That's why we have C -- a language designed to make every machine emulate a PDP-11. That's why we have a file system that forces every file to be viewed as a sequence of bytes (after all, that's what they are, right?). That's why "protocols" depend on byte-order. They have never separated the program from the machine. It never entered their tiny, pocket-protectored with a calculator-hanging-from-the-belt mind.

Maybe the idea of a portable operating system is a good thing, but unfortunately, they only invented the idea, but still haven't come up with an implementation. Multics was written in a high-level language first. ITS ran on the PDP-6 and PDP-10. Sure they came up with an implementation. You just make a machine that looks just like a PDP-11 and you can port unix to it. No problem! The latest idea is to build machines (RISC machines with register windows) which are designed specifically for C programs and unix (just check out the original Berkeley RISC papers if you don't believe me: it was a specific design goal). Now, people tell me that the advantage of a Sun over a Lisp machine is that it's a general-purpose machine ("Of course it's general purpose." they say. "Why it even runs unix."). Hmm, well this example shows that at least the weenix unies know how to USE recursion!

Hello, mister!
where can I buy a new Multics or lisp system today?
...oh wait

also I'm not really a programmer, but I disagree with this part.
When your language of choice has a more confusing array of parentheses than a Zig Forums poster, I don't think it's really written to be read by humans, UwU

Attached: HeSezWoofButHasKittyWiskersOwO.png (500x595, 144.49K)

Attached: go_pub_key_1_core.png (1347x939, 429.2K)

I don't think u know what the point of those ARM servers is.
u do realize they have 48 cores, right?
and the rackmounts, at least with Cavium's stuff, usually have 2 of them, for a total of 96 CORES OwO!

Attached: Kagamine.Len.full.1126670.jpg (1200x1600, 284.56K)

Wrong. You get used to the parentheses fairly quickly.

Is having (){{{([(([][]),())])}}} found in C-based languages any better, though?

WE'VE GOT A REDDITOR ON OUR HANDS

Shitting on your keyboard doesn't produce idiomatic C code.

and just typing parenthesis doesn't produce idiomatic Lisp code.

So... they're less powerful than dual-core x86 chips?

I don't think you understand what "efficient" means

More cores only matter when programs can use them. If the programs can't be multithreaded for whatever reason, those cores end up being wasted.
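A quick Amdahl's-law sketch of that point, with made-up parallel fractions purely for illustration:

#include <stdio.h>

/* Amdahl's law: speedup on n cores when a fraction p of the work
   can run in parallel; the serial remainder (1 - p) caps the gain. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void) {
    printf("50%% parallel, 96 cores: %.2fx\n", amdahl(0.50, 96)); /* ~1.98x */
    printf("95%% parallel, 96 cores: %.2fx\n", amdahl(0.95, 96)); /* ~16.7x */
    return 0;
}

Even a 96-core box barely doubles the speed of a half-serial program.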

True, but that's a problem even Intel/AMD/IBM/Sun have been struggling with ever since they slammed into the 5GHz barrier back in 2005, instead eking out tiny IPC improvements through careful architectural tweaks.

Game devs are the only programmers who've still been able to get away with slacking on multithreading, even with console/mobile hardware vendors leaning on them to cut it out since 7th gen.

Attached: intel-xeon-ipc-chart.jpg (411x460, 38.05K)

How do they get more than 1 instruction per clock? Isn't that the theoretical maximum for any single core?

Only with a scalar (single execution unit), non-pipelined core. Basically all modern CPUs have a substantial amount of instruction-level parallelism, even within single cores.

Actually a non-pipelined core will achieve less than 1 IPC since many operations take multiple clock cycles to perform.


To get higher than 1 IPC you simply add more ALUs, more load/store units, etc. Since a single scheduler can queue up work for the compute units faster than the respective parts of the core can actually do it, you can achieve more than 1 IPC. Pic is the architecture of the 4-way SMT Power 9 core, which shows this.

Attached: p9smt4core.png (1746x1131, 240.98K)
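A minimal C illustration of the idea (hypothetical code, not from the thread): the two accumulators below form independent dependency chains, so a superscalar core with two or more ALUs can issue both adds in the same cycle, pushing IPC above 1 on a single core running a single thread.

#include <stddef.h>

/* Sum an array with two independent accumulators. */
long sum_two_chains(const long *a, size_t n) {
    long s0 = 0, s1 = 0;
    size_t i;
    for (i = 0; i + 1 < n; i += 2) {
        s0 += a[i];     /* chain 0 */
        s1 += a[i + 1]; /* chain 1: no data dependence on chain 0 */
    }
    if (i < n)          /* odd leftover element */
        s0 += a[i];
    return s0 + s1;
}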

Wasn't it rather a 4GHz barrier that manifested around 2004? Intel's planned 4.0GHz P4 housefire never made it into production.

Yes, though SMT is a little beyond what I was talking about, since it requires multiple explicit threads. By "modern", what I meant was since the late '80s or mid '90s in the case of 80x86.


Yeah. Reading a little to refresh my memory, I was amused to see some articles from 2000 about Intel's plans at the time to scale to 10GHz by 2011, presumably using bazillion-stage pipelines:
web.archive.org/web/20000919185204/http://www.zdnet.com/zdnn/stories/news/0,4586,2601717,00.html

How do we know that these barriers aren't completely staged to force us onto the cloud? All the corporations are owned by the same handful of satanists.

Attached: capitalist pig.png (500x514, 34.35K)

4 times the number of cores × 1/30th the performance per core = fucking TERRIBLE performance.

The ARM implementation of Go clearly wasn't up to snuff, as that article noted. In all other tests, ARM single-thread single-core performance was 40-80% of 80x86.
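For concreteness, here's the aggregate-throughput arithmetic behind both readings, using only the ratios quoted in the posts above (illustrative, not new benchmark data):

#include <stdio.h>

int main(void) {
    double core_ratio = 4.0;        /* the ARM box has ~4x the cores */
    double go_case = 1.0 / 30.0;    /* pathological Go result: ~1/30 per-core speed */
    double low = 0.40, high = 0.80; /* other tests: 40-80% per-core speed */
    printf("Go workload:  %.2fx aggregate\n", core_ratio * go_case);   /* ~0.13x */
    printf("other tests:  %.1fx to %.1fx aggregate\n",
           core_ratio * low, core_ratio * high);                       /* 1.6x to 3.2x */
    return 0;
}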

The problem with going faster is kind of a catch-22 situation.

The faster you clock the CPU, the higher the power draw and therefore the higher the current density, so you go to a smaller feature size to reduce the transistor gate capacitance, and your current drops. But now the wires that connect all the transistors together are also smaller, so they have higher resistance, and since the transistors are smaller they also have higher on-resistance, so in the end you can't push as much current around and you can't clock your chip any faster. Not to mention that as the wires in your chip get smaller, the effects of electromigration start becoming significant, so you can't just safely bump up the supply voltage and clock it higher, since then the chips don't last as long. People seem to forget that an aluminium wire 14nm wide is only 80 or so atoms wide.

There are also other effects going on, like signals taking a finite amount of time to travel around the chip, meaning there is an upper limit to the clock determined by how long data takes to travel along the internal busses of the chip.
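A rough sanity check on that last point, with assumed numbers (a 5GHz clock, signal propagation at half the speed of light) purely for illustration:

#include <stdio.h>

int main(void) {
    double f = 5e9;              /* assumed clock, Hz */
    double period = 1.0 / f;     /* 200 ps per cycle */
    double v = 0.5 * 3.0e8;      /* assumed signal speed: half of c, in m/s */
    printf("period: %.0f ps, max distance per cycle: %.0f mm\n",
           period * 1e12, v * period * 1e3);
    return 0;
}

That gives ~30mm per cycle, which sounds like plenty next to a die a couple of centimetres across; the real killer is the RC delay on narrow wires described in the post above, which makes this estimate wildly optimistic.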

Every new fab node I can recall resulted in a linear increase in clockspeed at a similar power/dissipation requirement, even for totally unmodified optical shrinks.
I'm aware of all the quantum fuckery happening at the smallest nodes (7nm and below), but keep in mind that we've already made single-atom transistors in the lab. The main barrier to smaller fab nodes is simply the ability to economically mass-manufacture devices with acceptable defect rates per wafer.
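The defect-rate point can be made concrete with the classic Poisson yield model, yield = e^(-D·A), where D is defect density per cm² and A is die area; the numbers below are assumptions for illustration, not Intel data:

#include <math.h>
#include <stdio.h>

int main(void) {
    double die_area = 1.0;  /* cm^2, assumed large die */
    /* Yield falls off exponentially as defect density rises,
       which is why an immature process is so expensive per good die. */
    for (int i = 1; i <= 5; i++) {
        double d = 0.1 * i; /* defects per cm^2 */
        printf("defects/cm^2 = %.1f -> yield = %.0f%%\n",
               d, 100.0 * exp(-d * die_area));
    }
    return 0;
}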

Reminds me a little of a paper about space I read back in the late 90s, arguing that even with then-current technology the benefits (zero-g, cheap vacuum) would be enough to outweigh the obvious additional expenses and make orbital factories financially viable in the electronics sector.

they should revisit the Cell architecture...
Cores start big and get smaller and slower.
Some processes need the big GHz, crunching some decompression or whatever.
Others just need a few cycles to respond to the user and stay alive: a UI button highlights itself on a mouseover callback.

Attached: square.png (500x500, 10.81K)

sad state of affairs

That's cool and all, but NO OS or software will EVER make enough use of such a system. It's bad enough trying to get shit to work with just multi-core, period, much less multi-core with different specs per core.

GF and TSMC have fabs in the US; GF has their most advanced ones in the US.
There's literally no 14nm fab in China from anyone. The chinks were preparing 14nm nodes, but IIRC they still aren't online.

Intel won't hit 10nm until 2020 at the earliest. Cap this.

blogpost:

I fucking hate Intel's naming scheme. This gay, completely nondescriptive "-Lake" bullshit has no fucking rhyme or reason. Its prefix words "coffee, cascade, etc." have no theme or pattern, they're just fucking retarded.

Is this some new kind of D&C? You fucking idiots are so goddamn new it's pathetic. If you paid shill kikes had ever built a computer you could answer this question: When was the last time you installed a cpu that was made in the USA? You should be able to give me a year. I ask the same question of the other idiot shill in the first post - when was the last time you installed a cpu that was made in China?

You are pathetic disgraces even to shills, but beneath contempt for any of the competent individuals left on this board. Go back to /g/ and fucking stay there.

Are you done looking like an idiot? Can you post your argument now or do we both have to keep wasting our time?

Intel is cianiggers, so they use the same kind of stupid names. Except they're not in all caps (that's enough to fool the goyim).

Pathetic.

Attached: 1405913599550.jpg (500x500, 51.37K)