How is AMD doing so well?

How can this be? Please explain.

Attached: 20180827_081236.jpg (1080x1269, 123.62K)

Other urls found in this thread:

en.globes.co.il/en/article-1000651071
wiki.raptorcs.com/wiki/Talos_II/Hardware_Compatibility_List#Graphics_Cards
phoronix.com/scan.php?page=news_item&px=AMD-PSP-Disable-Option
en.wikipedia.org/wiki/Binary_blob
man.openbsd.org/radeon.4
bugs.freedesktop.org/show_bug.cgi?id=106258
cpu.userbenchmark.com/Compare/Intel-Core-i9-7900X-vs-AMD-Ryzen-TR-1950X/3936vs3932
youtube.com/watch?v=7KQnrOEoWEY
youtube.com/watch?v=QGw-cy0ylCc
phoronix.com/scan.php?page=news_item&px=Raptor-Prepping-New-POWER
phoronix.com/scan.php?page=article&item=nouveau-410-blob&num=1
en.wikichip.org/wiki/ibm/microarchitectures/power9#Memory_Hierarchy
phoronix.com/scan.php?page=article&item=nouveau-summer-2018&num=2
devever.net/~hl/intelme
youtube.com/watch?v=BDByiRhMjVA

Have you been living under a rock lately?

DELET THIS

Attached: delet.jpg (466x432, 69.18K)

because anti-semite

Not that I think AMD actually competing, instead of being maintained as an antitrust fig leaf for Intel/nVidia, is bad at all, but try to be pragmatic, Zig Forumsyps:
en.globes.co.il/en/article-1000651071

Because Intel is shitting the bed, and they're going to get a huge boost from speculation that Intel will end up paying out billions for the performance lost to its microcode fixes.
AMD still needs to produce a desktop processor that isn't shit, but people are starting to think that due to Intel strangling itself with $300M of diversity that they might do it.

So Threadripper is not a good product? Why would people invest when the company doesn't even have a good product / track record?

Most likely due to China devaluing the yuan recently, so Chinese investors are flooding into whatever assets they can. The timing seems to line up.

Only retards think transitory news moves markets to that degree. Capital flows are everything.

Attached: cnyusd.png (1174x595, 56.01K)

Mindshare and Intel jewery. Bulldozer was a massive fuckup and people are still not happy about it.

Epyc and Ryzen on the other hand are competitive with Intel.

I'm perfectly happy with my 8320. Not everyone is a gaymer.

Is there a way to disable the PSP in AMD processors? No? Then into the trash it goes until then.

Because Intel is shitting the bed left and right, AMD processors are actually hitting really close single-core performance levels while still having heavily superior multiprocessing capabilities for far better prices, and AMD is making a strong push toward good FOSS video card drivers, making them the only viable option in the Linux FOSS GPU space.

So it's not that they're really doing well. It's that their competitors are fucking up in specific ways and they're the only big alternative in either space.

This is bullshit. The only viable libre AMD GPUs are the R300 series from over a decade ago, as they have everything FOSS. The AMD GPUs after that are locked down in the sense that you need a blob from AMD, loaded from the kernel/userspace, to boot the GPU. AMD is making their OpenGL stack FOSS so that others will maintain it for them in the future, and nothing else. If you wanted to use your AMD GPU on some unsupported architecture like RISC-V, you are out of luck, since the microcode blob loaded by the kernel doesn't support RISC-V or other non-pozzed architectures.

If you use Nvidia GPUs with nouveau, you get entirely libre software up to the GTX 800 series, meaning you could use a GTX 7** on RISC-V with no problems. If you use Intel HD GPUs, you get entirely libre software up to the Haswell series, after which a blob is required. But I somehow doubt you are going to put an Intel GPU in another CPU architecture, even if it would work properly.

wiki.raptorcs.com/wiki/Talos_II/Hardware_Compatibility_List#Graphics_Cards
AMD works fine with the Talos II

That's because it is supported by the blob. That, and most Talos II customers would be enterprise users who would pay big money for the blob to work on PowerPC. Now try getting a SiFive RISC-V board and putting an AMD GPU on it. You will fail. Also see the early Itanium processors with no out-of-order scheduler on the die, which means non-pozzed. Won't work with those either.

I always thought those blobs were firmware that ran on the graphics card itself, and not the CPU?
Hence why OpenBSD and such say it is not a security risk, because it doesn't run on the CPU.
That would explain why it works with Talos II.
The reason it wouldn't work off the bat for RISCV is because the drivers need to be ported.

Pick one retard.
There are indeed blobs that run on the hardware itself that you can never change, firmware, which are essentially part of the hardware; hence their acceptance on OpenBSD. But AMD goes a step further and makes it so the GPU's firmware doesn't work unless you upload a separate blob from userland/the kernel, one that can be changed on demand by AMD, something Nvidia didn't do until their GTX 8** series of GPUs and later.

Hence why it won't work on architectures AMD doesn't support. If you had the source code, you could just compile it for the architecture you want and it would just work. But the blob loaded by the kernel, which is separate from the on-GPU firmware that is part of the hardware, doesn't support all architectures, because it is a blob.

Nvidia got backlash for doing this with their GTX 800 and later series of GPUs, requiring a blob loaded by the kernel, that is. AMD has somehow avoided backlash for doing this for every GPU they have created since the R300 series from over a decade ago! Boycott Intel, AMD, and Nvidia GPUs that can't be made libre at the software level. That means all GPUs after Haswell for Intel, all GPUs after the R300 series for AMD, and all GPUs after the GTX 800 series for Nvidia. You will never be able to use them on unsupported architectures unless the GPU software maker lets you. Even then, you are taking security lightly by using an interceptable blob, created by a company, that you download over the internet! Anything could be in the blob, or placed in it as you download it! Take for example the MINIX install on all recent Intel CPUs for the ME microcode. You could be doing the same thing with your GPU for all you know.

phoronix.com/scan.php?page=news_item&px=AMD-PSP-Disable-Option

GPUs are the worst because you just know they include everything a computer needs. Graphics cards nowadays have volatile and non-volatile memory + a general purpose CPU, and they're behind multiple layers of proprietary malware.

How would something like LLVMpipe work on a Threadripper 2990X?

en.wikipedia.org/wiki/Binary_blob
And up to GCN1 cards work on OpenBSD (man.openbsd.org/radeon.4), so the blobs must not run on the CPU.
Since no binary blob is running on the CPU itself, it must not matter what architecture the CPU has. Also, you imply that the drivers in the Linux kernel would require no architecture-dependent changes, which is strictly incorrect.
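The driver-porting point is easy to demonstrate: a kernel driver is compiled host code, and a compiled ELF object records exactly which machine it targets in its header. A minimal sketch of reading that field (the header bytes below are fabricated purely for illustration, not taken from any real driver):

```python
import struct

# Common e_machine values from the ELF specification.
EM_NAMES = {62: "x86-64", 21: "ppc64", 183: "aarch64", 243: "riscv"}

def elf_machine(header: bytes) -> str:
    """Return the target architecture encoded in an ELF header."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # Byte 5 (EI_DATA) says whether the file is little- or big-endian.
    endian = "<" if header[5] == 1 else ">"
    # e_machine is the 16-bit field at offset 18.
    (machine,) = struct.unpack_from(endian + "H", header, 18)
    return EM_NAMES.get(machine, f"unknown ({machine:#x})")

# A minimal fabricated little-endian ELF header targeting x86-64:
fake = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8 + struct.pack("<HH", 2, 62)
print(elf_machine(fake))  # x86-64
```

A GPU firmware blob, by contrast, is code for the GPU's own processor, so it doesn't carry a host e_machine at all; only the driver that uploads it has to be rebuilt per architecture.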

I wouldn't call that 'fine'. Most of their clients probably don't use a graphics card, so tackling such a headache is not a priority for them. Right now you've got to use patches to get it to work properly:
bugs.freedesktop.org/show_bug.cgi?id=106258

Are we talking about what's possible to do or what the company actually did in order to get these results?
AMD's FOSS efforts are mostly their own doing nowadays. AMD bought out the good contributors to the FOSS drivers and made them work on them full time. This is a move that can be attributed to them.

Nouveau, on the other hand, works in spite of Nvidia, not because of them. Nvidia gave them almost nothing to work with. And the performance is pretty bad for 3D.

Because Zen is good and Intel hasn't done anything worth shit since Sandy Bridge.

lulwut. The nouveau drivers on the latest Mesa and kernel git outperform the blob now if you use the right generation of Nvidia hardware and the new auto-reclocking kernel options. The nouveau OpenGL stack is more buggy for newer OpenGL functions, though, making it less than useful for the newest games. But for older games and emulation it either matches or beats the blob. Or at least it does for me. Granted, I haven't used the blob in years now, so maybe the blob's performance has improved for the same hardware years down the line, though that's unlikely. Now it just needs OpenCL 2.0 support and the Nvidia blob is useless garbage henceforth.

They aren't fully libre, but you can't deny that making more of the code FOSS is a good thing.

Duh. That's one of the major advantages of FOSS, that other people can maintain things. That's one of the driving arguments for it and was the reason the whole fucking movement was birthed. You act like it's a bad thing that a company is opening things up so the community can do the work better for them and have better drivers for themselves.
That said, they have staff working on AMDGPU, so they haven't abandoned it to the community yet. Your fears are unfounded as of yet.

Everybody still wants the firmware blobs to be opened up, and that is a valid argument against AMD, but with arguments like "they just want other people to do the work for them", you sound like an anti-FOSS shitter. Their motivation is less important than the result, and more FOSS is better.
Nvidia has been deblobbed up to the GTX 800, which is 4 years old, and AMDGPU has been deblobbed up to Fiji, which is 3 years old. It's not as bad as you imply. AMD isn't great, but they're better than Nvidia when it comes to actually supporting FOSS drivers. AMD has FOSS drivers that are better than the proprietary ones, they release technical papers to the public, and they have proprietary firmware blobs. Nvidia gives no whitepapers, doesn't have any official FOSS drivers, and has mandatory signed firmware blobs. Neither is ideal, but one is obviously a better choice than the other if you want modern graphics hardware and care about freedom.

Where's the source code to the GPU initialization firmware that has been made FOSS for up to Fiji, then? I thought that was in a blob that AMD hadn't made FOSS and will never make FOSS. I might switch to AMD hardware in the future if you can find it for me.

Intel has no answer.

Attached: threadripper 2.png (913x494 1011.32 KB, 388.43K)

There's nothing stopping Intel from just slapping 4 dies on a single chip package and making a fuckhuge heat spreader for it, from a technical standpoint; they don't because it's retarded.

Intel isn't releasing substantially improved products, meanwhile they take hit after hit from security vulnerabilities.

cpu.userbenchmark.com/Compare/Intel-Core-i9-7900X-vs-AMD-Ryzen-TR-1950X/3936vs3932

Ultimately AMD offers 60% more cores and a 27% lower price in exchange for 12% less clockspeed per core, compared to Intel's offering. That's a good summary of the matchup across both Intel and AMD's product lines.
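Plugging those headline percentages into a naive cores-times-clock throughput model (a back-of-the-envelope illustration only; real multithreaded scaling is workload-dependent):

```python
# Relative figures from the comparison above, AMD vs Intel = 1.0.
cores = 1.60   # 60% more cores
clock = 0.88   # 12% less clockspeed per core
price = 0.73   # 27% lower price

throughput = cores * clock   # ~1.41x Intel's multithreaded throughput
value = throughput / price   # ~1.93x the performance per dollar
print(f"{throughput:.2f}x throughput, {value:.2f}x perf per dollar")
```

Which is why the trade reads as lopsided for anything that can actually use the cores.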

If it's retarded, then why is AMD winning? Multithreading is the future.

The numbers and truth don't lie. Explain to me why it's retarded (you can't).

Attached: wrong.gif (480x287, 1.21M)

I don't understand how you haven't caught on at this point.

- New graphics cards targeting more specialized markets increasing market diversity for AMD
- Amazing CPU lineup that has cost Intel millions in profit
- Amazing CEO pushing growth
- Short squeeze to bring it all together

tl;dr buy AMD processors, they're the best; buy AMD shares, they're the best

This alongside the fact that China is failing to accomplish much with internet-related/tech-related stocks right now.

Attached: photo_2018-08-29_12-27-04.jpg (639x463, 36.4K)

what do you expect from a country which can't even access Netflix.

Unintended side-effect of information control. On another note China has high morals, it's quite nice.

Not sure if sarcasm or retarded.

Don't buy x86 processors, you dumbass. Buy PowerPC or RISC-V for hardware-level safety.

Attached: 17d5f0091819a7e88cbc845bdc091655c2815902e1060570f1262a62f6f483a9.jpg (554x428, 42.74K)

Hopefully they pants nvidia next. I hate them even more than intel.
At least intel contributes to FOSS stuff.

I was going to buy a Talos II, but then I heard about the Mill architecture and have decided to wait for that.
youtube.com/watch?v=7KQnrOEoWEY
Xeon level performance at 1/10th the power.

AMD has succeeded in every market except GPUs, and even that was correctly planned: they didn't fall for Bitcoin so fast, they took their time, and now the competitor is overstocked while they sold most of their inventory at a massive profit.

Oops, should start from beginning.
youtube.com/watch?v=QGw-cy0ylCc

Eh. Outside of gaming, Piledriver wasn't a bad deal for the money. It's just a shame Vulkan and DX12 were so late to the party; we never got to see what these CPUs were actually capable of outside of Nu DOOM.

Attached: 1471749751087.jpg (576x1024, 226.01K)

Because the onslaught of retard articles Intel paid companies like Forbes to put out, to press their collective thumb on the scale, stopped working; investors finally stopped listening, and the reality that actually matters for consumers is setting in.
If you make something that performs/outperforms your competition at a significantly lower cost, you win.

The mill has been in development since 2003, it's vaporware.

So then I should just buy a Talos II? It would cost a third of my savings at least.

Talos II is a meme machine, the DEC Alpha for a new generation. Don't get caught up in hype, ask why you really need it.

To avoid x86_64 botnet. Also, security by obscurity of architecture.

Also, completely open specs and firmware and everything.

Wrong. It's literally just another PowerPC machine. Wait for RISC-V at the very least.

Ryzen
It really is an awesome processor. Buy one.

are you a moron?

disturbing amount of Jewtel shills on Zig Forums. You'd think we'd know better by now.

Attached: jewtel.png (568x612, 186.8K)

Depends on what your use cases are. Even if it's expensive now it'll be a decent computer for the next 5-6 years at the very least. I'd be mining crypto with it to help pay it off.


I don't know of any PowerPC machines that can use up to 2 terabytes of DDR4 RAM or have 44 cores with 176 threads.

Have you not noticed that despite the hype they're not even close to being relevant on the desktop? It's "team red" marketing you're sucking down.

Attached: amd revolution according to tech.png (1300x571, 325.45K)

The desktop market is mostly gaymers these days--they're being told that Intel is the better choice for their use case. For Gentoo, Ryzen is great.

Intel literally is the better choice for their use case, due to single-core performance. So there goes the desktop segment. And Intel is better for servers, as their cores don't starve for RAM (as much) and stall. I don't think Gentoo users are a large market. They ended up only providing value to the modern workstation.
But confidence is growing that they might make a legitimately good desktop processor soon.

Really? Do you have experience with Epyc? My Xeon server works well, but I'm curious about your experience.

I own a Ryzen cpu, I have first hand experience, friendo.

check dsogaming and see the utilization, every game that had proper development is optimized

What would you consider relevant? 90%?

Yeah, except EPYCs have 8 memory channels and Xeons have 6. So the only way EPYCs would starve for RAM is if they needed at least 33% more bandwidth.

They're getting ready to release a newer cheaper model in October.

phoronix.com/scan.php?page=news_item&px=Raptor-Prepping-New-POWER

It's probably an ATX or smaller single-socket mobo, which will drive the cost down even further while making it more convenient, since dual-socket E-ATX cases are rare.

They are reliable. My processor is from 2013 and works fine. I'm not anti-Nvidia; my graphics card is Nvidia.

I've not seen large-set TPC benchmarks that suggest they can translate on-paper performance to the real world. Does anyone have any? Meaning not Phoronix or AnandTech testing how fast Quake 3 runs or some stupid shit; an actual test of a dataset large enough to dwarf the L3 cache.

They could do a soldered CPU to save money as well. The thread for the article on their forum has their head of engineering giving hints about it.

They'd save quite a bit of money by getting rid of the socket and the special IBM heatsink. If they go with the defective CPUs that can't do virtualization and don't have the Spectre mitigations, it'll be even cheaper, maybe $500 for a 4-core desktop box.

I'd still go with a dual-socket board, starting off with 4 cores and adding in 22 cores when the price goes down.

Depends on your use case. In case you didn't know, there's already a 'Lite' version as a stop-gap that uses the same motherboard but has half the components unpopulated. There's also a Micro-ATX version in the works that should support a single 4/8-core CPU and should be ready around October.

Did they even make enough of the DD1.1/2.1 CPUs to profit off that? I was under the impression that they already sold out.

Bro don't get something that costs so much of your savings unless it makes you money.
Financial security first.

Jesus Christ. Either way, I find that unlikely unless you mean a card older than the GTX 600 series.
phoronix.com/scan.php?page=article&item=nouveau-410-blob&num=1

And soldered RAM.

That's because IBM makes things with the intention of them being run at near capacity for years on end. I seem to remember someone complaining about the POWER8 CPUs having poor power management, which resulted in excessive power consumption at idle, to which an IBM engineer responded with something along the lines of "why are you idling the CPU?".

Just remember that you don't gain anything in terms of L3 cache once you go past 12 cores, in fact the 16 core CPUs have less cache than the 12 core CPUs. 12 core and less have 10MB L3 per core while anything above that has 5MB L3 per core.


It would also need to be heavily threaded to properly test the Epyc and Threadripper CPUs since there is non-uniform memory access due to it being a multi die design.

Naa, use Kepler with the latest git and not year-old shit like pozzorinix uses. Mesa 13.2 is much, much slower than Mesa 18.*.* git.

You sure about all that? In the last thread we had about the Talos 2 one guy said every CPU has 120 MiB eDRAM for L3 even the cheapest 4 core. Looking at wikichip this seems to be correct as only the L1 and L2 caches have the 'per core' description.
en.wikichip.org/wiki/ibm/microarchitectures/power9#Memory_Hierarchy

Ok. These are the results for Mesa 18.2-devel with a kepler card and newer. Similar results.
phoronix.com/scan.php?page=article&item=nouveau-summer-2018&num=2
Still awful in comparison.

Yes, the Power9 CPU is divided into 12 chiplets each containing 10MB of L3 cache and two SMT4 cores. For the 4, 8, and 12 core versions one of the cores on each active chiplet (of which there are 4, 8, or 12 respectively) is disabled.

Calling "lshw -C memory" on my dual 8 core system shows a total of 16 10MB L3 caches.

That's because L3 is shared between 2 cores.
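If that layout is right, the numbers in this thread all line up. A sketch assuming the Sforza figures quoted above (10 MB of shared L3 per chiplet, two SMT4 cores per chiplet, and one core per chiplet fused off on the 12-core-and-under parts):

```python
L3_PER_CHIPLET_MB = 10  # shared between the chiplet's two SMT4 cores

def l3_total_mb(cores):
    """Total L3 for a Sforza part, under the chiplet layout described above."""
    # <=12 cores: one active core per chiplet, so 'cores' active chiplets.
    # Above that: both cores per chiplet active, so cores/2 chiplets.
    chiplets = cores if cores <= 12 else cores // 2
    return chiplets * L3_PER_CHIPLET_MB

print(l3_total_mb(12))     # 120 -> 10 MB per core
print(l3_total_mb(16))     # 80  -> 5 MB per core
print(2 * l3_total_mb(8))  # 160 -> matches lshw on a dual 8-core system
```

So both claims in the thread are consistent: the full die carries 120 MB, but only the cache on active chiplets is usable on the cut-down parts.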

Cache size is how they artificially partition the gamer gear and the server hardware. A small cache is fine for games, but a server with a working set of 1TB would be fucked. They're likely only a few percent more expensive to produce due to the physical size increase.

VMs also like cache.

Big caches kill yields.

I doubt they kill yields. Even monster caches aren't increasing the physical size by monstrous amounts. They definitely reduce yields, but not in a way that matches the increased price.

Attached: 16 giga bees.jpeg (640x354, 94.01K)
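That intuition can be checked against the standard Poisson defect-yield model. All numbers here are hypothetical, just to show that a cache-sized area bump dents yield by single-digit percentages rather than killing it:

```python
import math

def poisson_yield(area_cm2, defects_per_cm2=0.2):
    """Fraction of dies with zero fatal defects: exp(-D * A)."""
    return math.exp(-defects_per_cm2 * area_cm2)

base = poisson_yield(3.0)    # hypothetical base die
bigger = poisson_yield(3.3)  # +10% area from extra cache
print(f"{base:.1%} -> {bigger:.1%} yield "
      f"({1 - bigger / base:.1%} relative loss)")
```

With these made-up inputs a 10% area increase costs under 6% of yield, and cache is also the easiest structure to add redundancy to, so real losses tend to be smaller still.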

As a holder of 15 shares of AMD and a major AMD stakeholder, explain to me why I should let you just take everything you want instead of paying for my hard work?

The goyim are waking up. AMD stock will be over $100 by the end of 2019. You're welcome.

Attached: Superior AMD Ruby.jpg (2100x1200, 388.53K)

WHY IS TERRY A. DAVIS'S DEATH BEING CENSORED?

Checking out Raptor's store, this seems to be the way they describe it. Not that disappointing, considering that a 32-core Threadripper has 64MB and costs $1800. You can buy two 8-core CPUs with 160MB total and still have $600 left over. Both setups have 64 threads.

From now on I'm going to get a single 8 core for starting out. Add in either another 8 core or 18/22 core later depending on what would benefit my work the most.


The thread about that has been stickied. I thought the same thing; this board almost never has any stickies, so I didn't think to look for it there in the catalog.

Because then you increase your market share for hardware sales, first of all. As more users see the security vulnerabilities in x86 (and eventually GPUs), having everything open source creates user trust. Secondly, it reduces dev time, as anyone from the community can contribute improvements. This also acts as free advertising, since users will advertise their improvements to other users, and thereby your hardware sales increase further. Third, it improves the user feedback process for the managers, as users can give concrete examples of what is wrong with the product and how to fix it in the future, thereby improving sales.

Open source is better in every way for future sales. Switch now.

Not if they don't open source the PSP and the GPU firmware blob. Otherwise you are asking for an Intel-style shitshow if anyone finds a bug in it in the future, which someone eventually will. If it was open source to begin with, this wouldn't be such a shitshow; it would just get fixed and few, if any, would care.

AMD & Intel have contractual obligations that prevent them from doing so
devever.net/~hl/intelme

There's no point in asking them to free it or make chips that don't have it.

This is for x86, remember. AMD can keep making an effort to open source, or even outright remove, troublesome parts of their hardware like the PSP for consumers. But the future is RISC-based chips like RISC-V. When, or if, AMD starts developing their own RISC-V chips for embedded, supercomputers, and/or consumers, then open-sourcing the rest of the motherboard, such as the clocksource, would make them more trustworthy.

Sure, for the next 5-10 years corporations and consumers might keep using x86/trash, but investing in a successful RISC startup such as SiFive is a good alternative for when all the hardware goes to shit. Or better yet, AMD, should they start developing their own chips. I'd buy a RISC-V chip and board if it were open-sourced by AMD. But the heads of the board would never let that happen, because it ruins the monopoly they have on computing hardware.

Wait, they're contracted by NDA to backdoor every chip? Well, all the more reason for AMD to expand their hardware lineup into something they are not contractually obligated to backdoor/make defective by design.

RISC-V will only make it to where ARM is now, and that will be years from now. x86 will always be the king of the desktop. RISC-V/ARM will maybe replace x86 in servers and laptops soon, though.

Attached: Tray inside.jpg (1178x922, 1.03M)

RISC-V does not need nor have ARM TrustZone, legacy instructions, or unnecessary on-die schedulers, unlike ARM. So heat death is not an issue for RISC-V, unlike x86, which has already hit the performance bottleneck, with ARM approaching it. It might take years for RISC-V to catch up, but after that it will piledrive straight through the hard performance limit you have with x86 right now.

You can thank me. I bought a Ryzen 2600.

Attached: 1310155890001.png (634x768, 16.98K)

Thank you


Yeah, but then if you had waited and sold them at $24.86, you would have regretted not waiting a couple of months/years when it got to $40. You never know. One can only work with the current information one has at hand.

I pulled some ether already and already got back my 1800, so no major loss, but I think about it even though I try not to.

There's a saying in the stock world:
Bulls make money.
Bears make money.
Pigs get slaughtered.

You did better than someone who didn't even buy any AMD stock. If I weren't such an extreme poorfag I'd get some AMD and just DRIP it.

Attached: 1rtnfu4d.wizardchan.feel_crying.png (645x773, 58.3K)

This thread is full of idiots...

AMD is rising in stock because they have been following their release schedule. It doesn't matter if their products are 100% better than Intel's, because the huge companies that make them most of their money don't care. What they DO care about is the fact that AMD has been meeting their chip release schedule, while as of late Intel has been pushing their new chips back further and further.

What this means is that when the huge data centers come around to purchase mass amounts of chips for new machines, they're going to look at Intel and see that they're being inconsistent, making them not a good upgrade path (because there haven't been any real new chips in so long), and that by comparison AMD is working at a steady and accurately published pace, making them the easy buy. Because of this, investors are seeing that data centers are leaning towards buying bulk AMD chips for the immediate future and are starting to gobble up AMD stock, because when Amazon, for example, decides to order 30,000 Ryzen chips for a new data center, it's going to drive AMD's income through the roof, and that will spill back onto investors.

so it goes

This is right on the money, investors care about future growth more so than earnings per share and a company hitting its milestones is a good indicator of future growth.

I was going to upgrade to a 2600, but then I decided to wait for 7nm.

Goodbye 40 thousand dollars. Never knew ya.

the just reward of traitors. no strategy can outmatch buy and never sell! idiots.

This.

This shows naivete about business and the consumer target.

We're looking at a bell curve here. The folks who can even parse what you wrote are such a small fraction of the consumer base that the energy you spent writing the words could have better been spent jerking off.

The subset of people who are intelligent enough to understand the benefits of freedom in this space is probably large enough to make a moderate impact to the status quo if it acts in unison. The subset of people who are intelligent enough, knowledgeable enough, and personally/professionally incentivized enough to make a major change still isn't large enough yet. I know so many normies who, if they had a reason or inclination to look into this stuff, would agree completely. Their biological strategy is closer to "long term stuff will sort itself out", so any Stallman prophecies are pretty much non starters. Nevermind the future dystopia my children will face. Tbh probably not a bad strat.

The reasons you provided for AMvidia to open up their systems aren't wrong--but those who recognize them won't shift the course for a while.

Maybe too blackpill. Progress has been made and continues, so there's that.

Why are Intel shills such mongrel mongoloids? Remember this?

youtube.com/watch?v=BDByiRhMjVA