Are professional GPUs (Quadro/FirePro) worth it?

I'm a hobbyist CAD designer. Is the improvement worth the investment?

Attached: w8100-vs-k5000-688.jpg (688x426, 127.65K)

Other urls found in this thread:

stackoverflow.com/a/23587649

Certified drivers, and in Nvidia's case better double-precision performance.

Is the better perf worth the price difference?

If it were worth the price difference the coin miners would be all over it, but they have no interest.

I don't know how mining works, but I'm guessing it doesn't need double precision.

You would think OpenCL and general GPU compute would be faster on these things, but that's exactly what Bitcoin mining is, and nobody buys these for Bitcoin. Clearly, whatever mining hits hard, these don't perform well enough at to justify the price.

No

GPUs in general are one of the biggest scams of the computer world. The difference between the chips in a "pro" and a "normal" card is nil; they are identical, because running another assembly line would cost a lot more. Instead they just DRM the whole thing via PCI hardware and software limitations and sell the same silicon at different prices.

To answer your question:
It depends on what software you are going to use.
The proprietary drivers and proprietary software (SolidWorks, games, etc.) need to be compatible with each other to get past the artificial limitations of the GPUs, so before you buy any GPU, RTFM and check the driver and software compatibility lists the manufacturers publish.

Do you even get double precision on consumer-grade cards? You can look up benchmarks; they'll give you some estimate of how much faster (or slower) it'll be. If you can rent a machine with one, that'd be best I think, but I'm not sure if you can get one for just an hour or so.
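If you'd rather measure it than guess, here's a rough sketch of that kind of benchmark with PyOpenCL (assumptions: pyopencl is installed, the card's driver reports the cl_khr_fp64 extension, and the 256-iteration multiply-add loop is just an arbitrary way to keep the ALUs busy, not any standard test):

import time
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
n = 1 << 24

src = """
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
__kernel void fma_f32(__global float *x) {
    int i = get_global_id(0);
    float a = x[i];
    for (int k = 0; k < 256; k++) a = a * 1.000001f + 0.5f;
    x[i] = a;
}
__kernel void fma_f64(__global double *x) {
    int i = get_global_id(0);
    double a = x[i];
    for (int k = 0; k < 256; k++) a = a * 1.000001 + 0.5;
    x[i] = a;
}
"""
prg = cl.Program(ctx, src).build()

def gflops(kernel, dtype):
    host = np.ones(n, dtype=dtype)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=host)
    kernel(queue, (n,), None, buf)          # warm-up run
    queue.finish()
    t0 = time.time()
    kernel(queue, (n,), None, buf)
    queue.finish()
    return n * 256 * 2 / (time.time() - t0) / 1e9   # one mul + one add per iteration

print("FP32:", round(gflops(prg.fma_f32, np.float32)), "GFLOP/s")
print("FP64:", round(gflops(prg.fma_f64, np.float64)), "GFLOP/s")

The FP32/FP64 ratio is the number you care about; on most consumer GeForce cards of this era it's around 1/32, while only the compute-oriented chips keep 1/2 or 1/3.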

Doesn't mining use integer operations?

Tbh, they use larger ECC memory, which costs a lot.
Also, those cards are made by Nvidia/AMD themselves, not by a board partner (MSI, Asus, ...); they cost more because fewer are produced, and they're also tested.
It goes beyond just hardware/software limitations.
They really put the effort in, including the drivers.

mostly CAD with SolidWorks and some FEA with Ansys

I have no proof of that.

If you're a hobbyist? Nope, 10x the price just to get official support for what you can get with hacked software is most definitely not worth it.

The point of these retuned driver/firmware cards is to fund support for professional creative software, especially for turnkey systems, separately from consumer software like games and DEs. For such buyers, the "pro GPUs" are generally negotiated as part of a larger package including high-tier support, so their outrageous retail price is effectively eliminated.


Proof? Last I heard, this was also down to the drivers/firmware, and completely equal performance can be obtained by flashing a card so it runs the other drivers.


ECC is the only physical difference I'm aware of.

Since the Pascal series, Nvidia has stopped putting ECC on all of its Quadros; the P5000 is now the cheapest Nvidia workstation GPU with ECC.

I've never understood the point of ECC. I've never had ECC and I've never seen a warning saying "sorry, this software fucked itself because your memory is shit."

Thermal memory corruption and the rare gamma-ray/cosmic-ray memory corruption. The software would just shit itself: segfault, produce erroneous output. It's about continuous operation with at least some redundancy against errors.

When a DRAM bit is read or refreshed there is a chance that it will flip. In enterprise or research environments, where money or lives are staked on the integrity of the data in memory, it's an extra safeguard: ECC allows a single bit flip in a 64-bit word to be corrected. ECC memory is also manufactured to higher tolerances than consumer DRAM, as it's made to be operated continuously for years like other 'enterprise grade' hardware.
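To make that concrete, here's a toy single-error-correcting Hamming code in Python. Real ECC DIMMs do the same thing in hardware with 8 check bits over each 64-bit word; this just shrinks it down so the numbers are easy to follow (the data bits below are made up):

def encode(data_bits):
    # how many parity bits r do we need? 2**r must cover m data bits + r parity bits + 1
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:
        r += 1
    n = m + r
    code = [0] * (n + 1)                     # 1-indexed, positions 1..n
    bits = iter(data_bits)
    for pos in range(1, n + 1):              # data goes in the non-power-of-two positions
        if pos & (pos - 1):
            code[pos] = next(bits)
    for i in range(r):                       # parity bit 2**i makes its group even
        p = 1 << i
        code[p] = sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2
    return code[1:]

def syndrome(codeword):
    # recompute each parity group; a non-zero result is the 1-indexed position of a single flipped bit
    bits = [0] + list(codeword)
    n = len(codeword)
    s, i = 0, 0
    while (1 << i) <= n:
        p = 1 << i
        if sum(bits[pos] for pos in range(1, n + 1) if pos & p) % 2:
            s |= p
        i += 1
    return s

data = [1, 0, 1, 1, 0, 0, 1, 0]              # pretend this is a slice of a 64-bit word
cw = encode(data)
cw[4] ^= 1                                    # simulate a flip at position 5 (1-indexed)
pos = syndrome(cw)
cw[pos - 1] ^= 1                              # flip it back: single error corrected
assert pos == 5 and cw == encode(data)
print("corrected a flipped bit at position", pos)

Real DIMMs add one more overall parity bit (SECDED), so a double flip inside a word is at least detected instead of silently miscorrected.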

What's really concerning is that ECC isn't considered reliable enough by some people in the HPC domain.

You don't need ECC. It's more expensive and slower. Worst case a few pixels will look wrong for a while.

Not only that, you have cosmic rays that can flip bits. Not a problem for a game, but it can completely fuck things up if you're using the hardware for anything serious. The odds of it happening are low, but it happens every day, and if you have a large machine, run it for a long time, or fly a lot, you're guaranteed to get bit flips. Get more than two in the same machine and you're fucked, since you don't even know it happened.

Shouldn't there be software to detect and fix this? It seems like this should be in the kernel somewhere: double everything in memory and check both copies for bit flips. This could be solved in software, even if it uses twice as much RAM and cuts memory performance in half.

You have software methods to recover. Not as costly as duplication (or triplication if you don't want checkpoints...) but they aren't free.
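A toy Python sketch of what those two options look like (assuming you control the data structures yourself, which is the catch; nothing here protects the rest of RAM): a checksum only detects a flip, triplication with a majority vote can actually correct one.

import zlib

def store_checked(buf: bytes):
    return buf, zlib.crc32(buf)                  # cheap: detection only

def load_checked(buf: bytes, crc: int) -> bytes:
    if zlib.crc32(buf) != crc:
        raise RuntimeError("bit flip detected, roll back to the last checkpoint")
    return buf

def store_triplicated(buf: bytes):
    return [bytearray(buf) for _ in range(3)]    # expensive: 3x the memory

def load_triplicated(copies) -> bytes:
    # majority vote per byte corrects a flip in any single copy
    return bytes(sorted(col)[1] for col in zip(*copies))

data = b"stiffness matrix chunk"
copies = store_triplicated(data)
copies[1][4] ^= 0x20                             # simulate a flip in one copy
assert load_triplicated(copies) == data          # still recovered

In practice the checksum-plus-checkpoint approach is what long-running jobs tend to use, since tripling your working set is rarely acceptable.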

I have a Radeon Pro WX5100. I use it for Photoshop, because I do a lot of work in higher bit depths and Lab colour space using tools and filters that can be accelerated with OpenCL. At the time I bought it, it was cheaper than buying an RX480. It does have a few benefits over a gaming card: temperatures are lower and more stable, and it makes a lot less noise. AMD provide a ten-year warranty and official Linux support, which is a pleasant bonus.

Found the shill.

When companies use lots of them at once for more important things, it starts to matter.

How about used cards for Linux?

No. The only improvement they offer is certified drivers and consistency of output. Otherwise they are literally just very expensive obsolete hardware.

When you need one, you'll know because your company will buy one for you.

ECC (unbuffered, and even buffered/registered) memory is not inherently slower. Benchmarks comparing ECC and non-ECC RAM at the same speed and timings are very inconsistent. The only reasons it is slower in practice are:
-There are a limited number of validated specifications available, and the typical people buying ECC are all about those proper specs and never stray from the path.
-The best-binned chips go into ultra-gamer XXXtreme RAM instead.

There is a cost premium, but if you're patient and use price-tracking scripts you can pick up perfectly viable modules on clearance at close to, or sometimes below, normal RAM prices. Usually you can find RAM at a 10-20% premium, which isn't that unreasonable given there's an extra chip for the ECC parity.

You can also overclock ECC just fine. I'm doing it right now. I have a load of Super Talent F24EA8GS DDR4-2400 17-17-17-36 sitting rock solid at 3050 (2933, but at 104 BCLK) with 16-15-15-33* timings. The trick is to get something with Samsung B-die, and make sure the RAM is at 1.30-1.35v and [email protected] for AMD Ryzen/TR4 systems.
*I can actually go a lot lower with tRAS, but I've seen mention of it technically needing to be tRCD + tCAS + 2 (which for these timings is 15 + 16 + 2 = 33, exactly what I'm running), so I'm not sure what's up with this. Going lower doesn't seem to help performance any.
As it turns out, overclocking ECC RAM is worlds easier than usual. No more wondering whether you are truly stable or just "crashes every few weeks" stable. ECC fixes the occasional error and you just need to read the logs (see the sketch after this post); you also get a feel for the rate at which errors occur.

So my feeling is the opposite. I can't imagine why the fuck anyone would want RAM without ECC. The only reason RAM that reaches 4000 MHz doesn't have ECC is deliberate consumer/server market segmentation, plus some knuckleheads who insist on asking why they would ever want fantastic features.

Attached: ram.png (835x507, 1.02M)
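For what it's worth, "reading the logs" on Linux can be as simple as polling the EDAC counters in sysfs. A minimal sketch, assuming a kernel with the EDAC driver for your memory controller loaded (paths are the standard mainline layout):

from pathlib import Path

# each memory controller shows up as mc0, mc1, ... under the EDAC sysfs tree
for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc[0-9]*")):
    ce = (mc / "ce_count").read_text().strip()   # corrected errors (ECC fixed them)
    ue = (mc / "ue_count").read_text().strip()   # uncorrected errors (bad news)
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")

A slowly ticking ce_count during an overclock test tells you you're near the edge long before anything would have visibly crashed.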

Is it possible for RAM to have a bit flipped and the computer not completely shit itself with kernel panic or blue screen?

Yes. You would have data corruption (assuming the data structure remained intact).

The computer only shits itself when it breaks its own rules, i.e. the kernel or kernel config is borked.
A few stray flips in your MPEGs are fine, and the computer will not stop to save your precious cartoons.

Why does everyone seem to be okay with that?
What went wrong?

niggers

I live in Russia and we have no niggers. Still it's the same.

It is a bit bewildering, especially considering all the other engineering effort that goes into beating awe-inspiring consistency out of the clusterfuck of physics that is a computer.

The biggest cause of errors in RAM isn't cosmic rays, though. It's heat, and traces of radioactive materials decaying in the RAM material itself. Companies go to great lengths to make sure that shit is as pure as possible.

potato niggers are still niggers

mass production and the associated race to the bottom as a response to market forces is bewildering?

bump

It's not.
The fact that ray tracing needs Quadros is embarrassing.

How is the kernel supposed to know that a bit has been flipped? Unless you want it to use checksums for every word of RAM it accesses, but I'm not sure that would be particularly fast or cheap to implement.

It doesn't need to know. One of its important routines can have its machine code fucked up into doing something entirely different from what was intended; the kernel will just execute it and fuck something up. Then there's a good chance it will panic, but it can also just freeze or hard crash.

Well, sure, but it could also be a trivial bit change, for instance, a string somewhere goes from "9K4J" to "8K4J". Even if it fucks up, the kernel doesn't know why.

Kernel-based checksumming would be worth the performance hit.

Hell no. The chances of such a bit flip affecting the kernel badly enough are so small it's not worth bothering about, at least not on planet Earth, where computers are pretty safe thanks to our magnetosphere. If anything, the checksum has to be implemented in hardware (trivial to do), and when an error is detected it raises an interrupt.

Just wrong.

stackoverflow.com/a/23587649

Many good studies cited here

Well, I learned something today.