phoronix.com
Anyone else disappointed?
Not really. I'm more disappointed in Intel and AMD for including nonfree backdoors on their systems.
fsf.org
fsf.org
libreboot.org
Pretty much this.
/thread
x264 is expected (AltiVec doesn't get as much love as SSE/AVX) but 7zip and timed compilations are pretty bad. Almost looks like a compiler and/or kernel problem.
some commenters said as much
...
honestly not that bad when you consider the security advantage.
it's far more than that. Even though it was below average when compared to the high-end Intel/AMD server stuff, it's still a million miles away from "Faceberg machine"
On top of that, nobody who buys this thing is going to be on Facebook.
How about the entire Debian repo?
How about the entire Fedora repo?
How about the entire OpenSUSE repo?
How's the weather in Jerusalem?
Not too shabby
Yeah no fuck off
The software is of course better optimised for x86; the architecture has been around for years and developers have been working with it the whole time.
raptorcs.com
There is also a 22-core CPU that will be available in the future
It's actually quite cheap for a workstation
It really doesn't, this isn't even the highest-end CPU they will release
raptorcs.com
Yes, optimisation is a thing that exists, and it does make performance considerably better
You are not going to run Windows on it, you are going to run Linux, and most Linux software is available on many architectures.
let me know when they make freedom affordable for the 99%
Workstations are not affordable by default
Also fuck off faggot if you knew anything about not spending all your money on dragon dildos you'd afford one easily
Also
Fucking February of 2016.
Of course it's unoptimised, they are probably running Debian stable for stability's sake (although I don't know what software versions the Debian stable repo has, Debian users please correct me).
yeah, no
I'm currently using Debian 9 with 7zip 16.02
The x86 parts are way more expensive and better spec'd. Why are you spreading FUD? Seems like you have some kind of agenda....
massive OPsexuality, like other mental disabilities, is not an agenda
Seems like you have an agenda. Rust must run like shit on POWER
WEW
Compare what's comparable.
This
Intel/Amd please GTFO!
it's like runny shit everywhere
How butt blasted do you have to be to confuse the messenger with the message? Did your brain melt when you saw the results?
A little bit:
4 core 90W TDP
8 core 160W TDP
I hope there's a patch for GCC floating in the deep depths of IBM that could allow its power to be properly utilized.
nice try
It would be nice if they released the clock speeds for each CPU option.
This, and the AltiVec unit on POWER9 is likely underpowered anyway, since this is a processor made explicitly for datacenters, not HPC.
likely due to the spinning rust HDD
If the benchmarks aren't made in tmpfs, this guy is a massive retard.
This, it's nearly as dumb as that thread where OP points to one outlier test out of an entire benchmark suite to suggest POWER is swamped by a cheaper ARM server system, when the overall benchmark results say quite the opposite.
OP, trolling is the national sport of imageboards, but you have to try harder, 4/10.
Maybe things have changed since the PPC G4/5 days, but wasn't AltiVec always MASSIVELY overengineered compared to every other ISA's vector coprocessors, eating up something like 1/3rd of the die floorplan?
GCC is shit, and that's even truer in the case of fringe ISAs like POWER. IBM maintains its own proprietary compiler, XL, that is MUCH better.
Does anyone know of any GPLv3-compatible compilers with potential for being good that are NOT written in C++?
It doesn't have to be complete, only have a sane design.
(sage for self-reply)
Do you realize that what you're asking for is meaningless?
I pointed to it because ARM is shit. You fags are so sensitive to a fucking CPU architecture. You need to grow a pair of balls and get over it. The fact that the AMD and Intel CPUs cost about as much as this machine would suggest it did okay, but some of you can't even handle that and are crying because of a fucking benchmark. Don't attach yourself to hardware and you can see how silly some of you sound.
I want a C compiler that:
* is licensed under the GPLv3 or later with the runtime exception, or under a GPLv3 compatible license, such as GPLv2+, MIT, Apache License v2, 3 clause BSD[1] …
* is written in a language that doesn't obscure what's really happening, such as C++, Perl, Rust …
* is much easier to grasp than both GCC and Clang
* produces good code
* transpiles other high-level languages into C
* compiles C to machine code in many stages, to make the architecture of the compiler easier to understand
* has documentation under the same license as the software (or if it uses the GNU FDL, it must not use any of the options)
* is not mainly developed and sustained by a corporation with monetary interests in tivoization and walled gardens (looking at LLVM and Apple)
[1] gnu.org
...
Your thread was clickbait, attempting to stir shit by twisting an article that said "POWER is the fastest RISC, ARM's server efforts are respectable" into "POWER outperformed by cheapo ARM server".
No, they cost MORE than this machine.
A fair comparison would be either similarly priced rigs (for current value), or similarly configured rigs (for future value once economies of scale settle), both using decent compilers like XL for POWER and ICC for Intel.
That ARM server board was $3k dumbshit. It should have lost in every benchmark too.
read a book nigger
One out of 7 otherwise lopsided tests isn't a loss, it's a fluke obviously caused by poor implementation.
Way to defeat the purpose
Security versus performance has always been a tradeoff, now more than ever with the need for speculative execution to keep IPC growing.
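To make that tradeoff concrete, here is a minimal C sketch of the well-known Spectre-v1 bounds-check-bypass pattern; the array and function names are purely illustrative, not taken from any real codebase. The point is that the same speculation that keeps IPC growing can leave measurable cache side effects behind a bounds check, which is why the mitigations cost performance:

#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];  /* probe array: one page per possible byte value */
uint8_t temp;                /* keeps the compiler from discarding the access */

void victim(size_t x)
{
    /* Architecturally the bounds check is always respected, but the CPU may
     * speculatively run the body with an out-of-bounds x, pulling a line of
     * array2 into the cache. A later timing probe of array2 recovers the
     * out-of-bounds byte. */
    if (x < array1_size)
        temp &= array2[array1[x] * 4096];
}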
t. Pajeet
Talos II Basic Bundle - Dual CPU 8 Core: $3,365
AMD EPYC 7551 CPU ONLY: $3,700
AMD EPYC 7601 CPU ONLY: $4,500
Considering this mobo + CPU bundle competes with a $3,700 CPU, this is actually a great price.
Media encoding falls short, someone will have to tune the compiler like x86 has.
I am not disappointed at all, this is actually way better than I thought; unless you're doing media encoding, perf per dollar BTFOs x86.
And according to the website yes that is the latest stable.
They should have used unstable to see what would happen with a newer kernel and software.
Yea basically this, I look forward to moving most of my stuff over to a completely free platform when mine arrives next month, since I ordered quite late.
It's a bargain compared to buying an OpenPOWER server, since the only decent ones come with a bunch of P100/V100 GPUs which jacks the price up astronomically, and I don't think POWER9 systems are available on the IBM cloud yet.
Also this, holy fuck these comparison benchmarks are a joke.
Any word on whether the pricing will go down in the future? I'd love to show my support but as a poor college student I can't justify spending over $2k on a desktop...
Why does a college student need that much POWER? Just use one of the ARM boards that comes free in some boxes of cereal.
I have a RPi3 but doesn't that have the ARM flavor of ME/TrustZone?
There's no such thing the fuck are you talking about?
I was under the impression that basically any CPU you can buy apart from Loongson or whatever has some form of backdoored coprocessor, thus the hype surrounding POWER9
Just remembered it's called the PSP module on AMD
The RasPi requires nonfree binaries to boot, and has locked-down media decoders as well.
POWER9 has an open coprocessor that you can program.
Phoronix only had remote access to the machine. I'd assume once he gets one in house they'll be testing with more up-to-date software
Why do you care? He said he wants to support them, which I agree with.
Why did you post that video?
tcc
Why are people calling this expensive? Are we getting Intel running damage control? A 7551 with the cheapest piece of shit motherboard that doesn't have PCIe 4 still costs $1,000 more.
Sure they don't have an i3 variant but this appears to be better value than x86 if you're looking in the same ballpark.
I expected to buy a Talos II with the idea that I'd be spending more for freedom, but it looks like I will be spending less.
I thought one of its primary features was being open-source hardware?
Intel is hard at work shilling on a Bhutanese-Mongolian oven modding collective i see.
None of these variations on "japanese animation" have ever been funny
4u
Any word on OpenCL/Vulkan compute on these?
I haven't paid that much since the 80's (adjusted for inflation). No need to, since I'm not running Internet servers or playing modern games. An old Thinkpad, or a sub-$100 ARM board can do everything I need.
PowerPC processors were consumer-oriented, this shit is for massive datacenter servers. Not exactly the place where you crunch uniform data streams amenable to vectorization.
That's modern x86 for you. Why do you think Intel chips clock down when you start using AVX instructions?
pick one
pick one
Your breath stinks
arm.com
I did not know this, do you have proof? I might consider compiling out AVX instructions for my gentoo install in such a case.
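Somewhat related: rather than stripping AVX out of the whole install, a lot of software just picks the code path at runtime, so the AVX units (and their clock penalty) only come into play when the fast path is actually taken. A minimal sketch using GCC/Clang's x86-only __builtin_cpu_supports; the function names are made up for illustration:

#include <stdio.h>

/* illustrative stand-ins for a scalar and an AVX-tuned routine */
void transform_scalar(void) { puts("scalar path"); }
void transform_avx(void)    { puts("AVX path"); }  /* would contain AVX intrinsics */

int main(void)
{
    /* true only if the running CPU reports AVX support */
    if (__builtin_cpu_supports("avx"))
        transform_avx();
    else
        transform_scalar();
    return 0;
}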
Think of it this way: a corporation that buys a Talos for things like their datacenters might spend a lot compared to cheap off-the-shelf x86/ARM/AMD shitware, but they save on security costs down the road that could affect various aspects of their company, like the bad name they would get from being hacked and having their data shared with the world. Who would buy their product or service if they can't be trusted? People are starting to stop using free services for reasons like that; why would they pay to be scammed or have their data stolen? And if you are a company that sells a service like a VPN, you would lose business, even if you keep the CIA/NSA bribe money, if you got hacked or were reported to be using shitware like AMD/Intel. In the long term you save a lot of money on security just by making it more secure by default.
I don't get what they think they'll accomplish by shilling on a mongolian finger painting forum.
Can't use AMD GPUs because of the proprietary VBIOS. Can't use anything except pre-Kepler Nvidia GPUs because nouveau support is non-existent beyond that. But with nouveau you can just compile the support for your architecture and use the up-and-coming OpenCL support in Mesa, which has had work started on it recently.
If media de/encoding is your thing, just stick some Fermi nouveau GPUs on it and go nuts. Maybe contribute to their OpenCL support while you are at it.
These >>893074
Not the compiler, but the encoders. Media codecs generally have a lot of hand-tuned SSE/AVX code. The same has to be done for AltiVec.
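For the curious, this is roughly what that hand-tuning looks like. A minimal sketch of a 16x16 sum-of-absolute-differences kernel, the kind of primitive encoders like x264 ship one copy of per ISA, first in plain C and then with AltiVec/VSX intrinsics. It assumes a VSX-capable chip (POWER7 and later) for the unaligned loads; the function names are illustrative, not x264's actual code:

#include <stdint.h>

/* plain-C reference */
int sad_16x16_c(const uint8_t *a, const uint8_t *b, int stride)
{
    int sum = 0;
    for (int y = 0; y < 16; y++, a += stride, b += stride)
        for (int x = 0; x < 16; x++)
            sum += a[x] > b[x] ? a[x] - b[x] : b[x] - a[x];
    return sum;
}

#if defined(__ALTIVEC__) && defined(__VSX__)
#include <altivec.h>

/* AltiVec/VSX version: 16 absolute differences per instruction */
int sad_16x16_vmx(const uint8_t *a, const uint8_t *b, int stride)
{
    vector unsigned int acc = vec_splats(0u);
    for (int y = 0; y < 16; y++, a += stride, b += stride) {
        vector unsigned char va = vec_vsx_ld(0, a);   /* unaligned 16-byte load */
        vector unsigned char vb = vec_vsx_ld(0, b);
        vector unsigned char d  = vec_sub(vec_max(va, vb), vec_min(va, vb));
        acc = vec_sum4s(d, acc);                      /* accumulate into 4 word lanes */
    }
    /* fold the four partial sums */
    return (int)(vec_extract(acc, 0) + vec_extract(acc, 1) +
                 vec_extract(acc, 2) + vec_extract(acc, 3));
}
#endif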
Is that why they're getting code mainlined into the most recent versions of Linux? Oh wait
Talos II doesn't have NVLink, but I would imagine the CUDA toolkit for ppc64le works on a PCIE card.
Nope, the RISC-V ISA is going to become the industry standard as a basis, with the possibility of major corporations adding their own DRM blobs as variations, but the basic ISA would still function the same [e.g. the speed]. So any user could flash back to the open source ISA without bricking the system. The industry itself has proven to be utter shit at it.
AMD is getting the majority of their kernel-space and all of their userspace code open-sourced to reduce maintenance burdens in the future. But their closed-source video BIOS still needs to be loaded at runtime as a blob that only works on architectures that AMD supports, such as x86, and not whatever you want to compile it for.
You don't need any closed-source crap with nouveau, as you can use entirely open-source solutions as long as the PCI/e space is exposed for the nouveau driver. If it has PCI/e, you can use a nouveau card on it if you compile the driver for the processor that is on the board. This is because nouveau communicates with the various co-processors on the GPU directly over the PCI/e space in their respective assembly languages.
Just use the upcoming OpenCL support in Mesa instead. Why are you going back to closed source with a platform built for security? Sure, open source /=/ security, but you can at least fix it yourself, compared to a black-box program such as the CUDA toolkits.
I'm not the one who wanted GPGPU on a meme machine. IBM and Nvidia have already worked together to bring the best performance to this platform. AFAIK OpenCL will run on CPUs as well, so he could use a decade-old GPU and get really slow performance if he wants to.
No, that was a reference to PPC. Seriously, Apple/Motorola dedicated 1/3rd of the transistor budget in every new midrange/high-end Mac's CPU to their first-ever vector coprocessor, first pic related.
Even ignoring the fact that many such Macs were just being used for stuff like word processing that didn't use the FPU, let alone vector units, there were no Mac dev tools for it initially, and as a result no Mac apps with vector optimizations. This was back when practically everybody outside the HPC market ignored vector instructions, because x86 vector coprocessors like MMX/3DNow! were complete jokes (look at that little smudge in the middle of pic 2) until SSE3 at the very earliest, so even ports had to add AltiVec optimization basically from scratch.
Any rational company that wanted to add a vector unit to prosumer PCs, one that outdid the vector performance of HPC/workstations costing ten times as much, for software that didn't exist on its platform, with a resulting 1/3rd penalty to silicon yields/prices/clock binning, would've perhaps put it on a separate OPTIONAL module, like FPUs were initially. But not Motorola/Apple! That's far from the only cockup, of course. There was also the failure to use the "MPX" FSB's 128-bit option to leapfrog Intel/AMD's FSB bottleneck years before DDR/Rambus caught up, because Apple refused to make new mobos; lengthening the G4's branch-friendly pipeline with the 7450 to increase clockspeed for marketing reasons; eliminating the quad-CPU options as soon as Daystar was killed, right in the middle of pushing dual-CPU to the mainstream; antagonizing Motorola by killing their Starmax clones; and countless more...
You sound like you're confusing vector coprocessors with iGPUs (pic 3 related). If so, then yes, forcing desktop customers to sacrifice enough die space to a GPU they'll never use that the chip could instead have gone from 6 to 10 cores is absolute madness, in an era when MS has included the WARP software renderer since Win 7, removing the only possible excuse (non-gayman office drones) for Intel/AMD's waste of silicon.
That is not true, the blob mostly runs independently of the CPU arch.
Talos II works with Polaris and above AMD GPUs
You stupid fuck read phoronix.com
There are architecture-dependent blobs/microcode needed for every AMD GPU since the HD2400-HD4290/r600 series, see x.org
But then you could use the r600 series of AMD/ATI GPUs on any platform? Since they would be entirely open source, including the blob they now require to communicate on the PCIe bus, just like the newest Nvidia GPUs require signed firmware to communicate on that space, which is also firmware/architecture dependent. But the newest nouveau-supported GPUs are newer than the r600 AMD GPUs.
Read your own source, the architecture dependent blobs are for the GPU architecture, not the CPU.
Linux on POWER just loads those blobs like any other arch and uses the open source drivers.
>You stupid fuck read phoronix.com
I don't understand what you are trying to prove, the binary blob doesn't run on the CPU, it's code which is loaded onto the GPU at runtime. RaptorCS even lists AMD GPUs as compatible on their wiki.
>not a security risk at all, trust (((us))) at AMD
I'll eat my words if they ever opensource the microcode. But we know that will never happen intentionally.
I agree that the firmware should be opened but what possible vulnerability do you foresee?
The GPU still only interacts with the kernel via an open source driver, it has no control over the RAM, MMU or network card.
What can it actually do?
You realize every single ASIC in nearly every component of any computer you'd care to name is the hardware equivalent of a "binary blob", since you can't read the VHDL files they're made from, let alone verify the fab's fidelity to those files, right?
Most modern PCIe devices run their own microcode: GPUs, NVMe SSDs, HBAs and RAID controllers, etc. Even some NICs have microcode from what I understand.
Technically the GPU can read and write to any system memory address it wants via its DMA engines.
If a GPU is behind a PCIe switch (or on a root complex which supports P2P transactions) along with the NIC, then theoretically it could communicate with the outside world by interacting with it directly. However, the microcode would have to contain an entire copy of that NIC's driver, which in practice would mean a driver for every single NIC ever made for it to work reliably, and the GPU would have to work out where the NIC is and what model it is, etc...
Similar attacks could be done by a userspace application via a filesystem running on top of an NVMe SSD. NVMe SSDs have DMA engines and many common ones run firmware.
IOMMUs protect against this type of attack though.
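To illustrate that last point, here is a minimal sketch of the kernel-side pattern that makes the IOMMU effective. It's a fragment that would sit inside a hypothetical PCI driver, so it isn't a standalone program, and everything other than the standard Linux DMA-mapping API calls is made up for illustration. The device is only ever handed an IOVA that the IOMMU restricts to the mapped buffer, so stray DMA faults instead of reaching arbitrary RAM:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/slab.h>

static int example_map_buffer(struct device *dev)
{
    void *cpu_buf = kmalloc(4096, GFP_KERNEL);
    dma_addr_t iova;

    if (!cpu_buf)
        return -ENOMEM;

    /* With an IOMMU enabled, 'iova' is a translated address valid only for
     * this 4 KiB region; DMA outside it faults instead of hitting memory. */
    iova = dma_map_single(dev, cpu_buf, 4096, DMA_BIDIRECTIONAL);
    if (dma_mapping_error(dev, iova)) {
        kfree(cpu_buf);
        return -EIO;
    }

    /* ... program the device with 'iova', never with virtual/physical addresses ... */

    dma_unmap_single(dev, iova, 4096, DMA_BIDIRECTIONAL);
    kfree(cpu_buf);
    return 0;
}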
Well, it could house a entire operating system+rootkit like minix that is only loaded while it is on. Kinda like jewtel's ME or AMD's PSP or ARM's (((kikzone))) but more effecient.
Open source does not mean secure. The possibly independent operating system running entirely on the GPU could start sending errors down the command stream of instructions as to implement persistent rootkit like functionality. For one example your CPU directs the buffer to the screen but in that moment when it does so you could implement your rootkit with faulty GPU code hidden as bugs in the kernel. And considering amd contributes to the kernel in large amounts it would be very easy to do. This is just one example of a few that spring to my mind. Yet more reasons not to use newer AMD GPU's and the newest nvidia GPU's with their blobs. Stick to the old r300 series or the nouveau supported GPU's you insecure faggots.
No you moron, show me a single use case of AMD's newest GPUs on another architecture without specialized blobs. The only thing I could find that's even related is archive.fo
Yes I am aware of that. But just like GNU's libre OS list, I treat non-writeable blobs that are burned onto the hardware as part of the hardware. Loadable blobs can be manipulated, however; non-loadable ones not so much, unless they were malicious from the factory. But if you care about it being malicious from the factory you are stepping into true security-type areas, at which point, why aren't you designing the hardware from the ground up, as you point out? Why bother with a Talos II other than that it's easier to verify its functionality, being entirely open source or able to be so?
And this is why people buy devices like the Talos II, because you don't have to deal with any of this shit. Get all of it FOSS and begone, blobs. There are even open-source SSDs now for your open-source/security autism.
Or it could just call linux kernel functions in memory. Get creative.
CPUs hold special privileges compared to every other component (except maybe your NICs, I guess). None of what you speculated is any likelier than malicious SIPs being hidden inside your retro hardware.
How? If you verify checksums before flashing ROMs with each upgrade, that's impossible unless your entire system has already been compromised to the extent counterfeit ROMs could be flashed without your knowledge.
If you're so paranoid you don't trust that the ROM file you're downloading is genuine, what about the OS, kernel, all the "open" firmware ROMs, and every single piece of software, from your package manager's repo?
If you don't trust checksums, your only possible alternative would be to personally visit AMD/nVidia/whoever, pick up a freshly burned BD-r fresh from their vault, and carry it back to your PC in one of those locked briefcases handcuffed to you.
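For what it's worth, the "verify checksums before flashing" step mentioned above is trivial to do yourself. A minimal sketch in C using OpenSSL's EVP digest API (link with -lcrypto; the firmware filename is just a placeholder), printing a SHA-256 you can compare against the vendor's published hash:

#include <stdio.h>
#include <openssl/evp.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <firmware.rom>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    fclose(f);

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    EVP_DigestFinal_ex(ctx, digest, &len);
    EVP_MD_CTX_free(ctx);

    /* print the digest in the usual lowercase hex form */
    for (unsigned int i = 0; i < len; i++)
        printf("%02x", digest[i]);
    putchar('\n');
    return 0;
}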
Jesus fucking christ, there are ZERO practical reasons to care whether or not ROM files are "blobs", unless the entire device's VHDL files are also available, and even then, open-source anything is meaningless from a security perspective unless someone has used that source access to ACTUALLY CONDUCT A FULL FORMAL AUDIT. The only reason we care about ME/PSP/TZ/etc isn't because of inane conspiritardation revolving around their being "closed", but because the manufacturers proudly tout their publicly intended featuresets as doing things we don't want.
...
Where did you get that CD?
Why would you trust somebody to make "blobbed" hardware that could be packed with untold numbers of malign SIPs, but not to make firmware, firmware that ships flashed on every newer part from the factory, replacing the earlier version of that very same firmware that's already sitting in the ROMs of your mint unboxed GPU?
Unless every ASIC on that board has been x-rayed and audited, it is not one bit more trustworthy than the ROM images its manufacturer puts on your repo, "open" or not.
checked
Never said I trusted them one bit. At least it is easier to begin half-assedly verifying their claims, both the FOSS software and the supposed behaviour of the hardware, with a few computers of similar hardware compared against each other with fully FOSS code and an oscilloscope. None of which is mathematically verified, but hence the half-assed part, and not perfect security.
Wew the autism.
But yet indeed, i'm not going for perfect security here, nor am I using pre-compiled packages. Now install gentoo faggot. inb4 goes into the insecurity of downloading packages off the internet, source code or not.
The pro-AMD blob shilling here is real.
Including computers with the same firmware blobs?
Except all the undocumented closed-source mystery meat on the silicon
Obviously not every individual copy, but at least random factory line pulls to ensure compliance with the VHDL. It's sometimes done with military hardware and the like.
Gay
I never cared for 3D GPUs to begin with. Before I started using laptops (which was about 10 years ago), I only had simple SVGA cards, none of that 3D crap. Now the mobos have those GPUs built-in, so I just don't load the drivers.