Windows Storage Spaces vs FakeRAID

On the left are 4 drives set to "two-way mirror" in Windows Storage Spaces; on the right is motherboard RAID10 on the same drives. The FakeRAID is somehow faster.
WHY??

To be clear: I'm trying to do a RAID10 via Windows Storage Spaces, which people tell me "it's like that zfs thing man".

The FakeRAID in this case is AMD-RAID on a CrossHair VII Hero / 2700X. Mind you, just installing the damn drivers was a pain, because the installer tells you "it can't be installed if your OS is installed on an NVMe, install your OS on a different drive" (as if it's an acceptable thing for a driver installer to ask). Drives are 4x 4TB IronWolf NAS drives. My intention is to RAID10 them for performance and a bit of fault-tolerance.

Is there a proper way to RAID10 in Windows? I tried Disk Management but it just lets me stripe or mirror, not stripe and mirror. Even the RAID5 is grayed out, so maybe they gimped it in Win10.

Don't get me wrong, Win10 is awful in general, but I need this machine to run Windows for more than a year, and Win7 dies next January.

Attached: Winblows vs fakeRAID.png (1136x549, 981.13K)

For reference, here are the individual drives' performance numbers. I expected better sequential reads out of "two-way mirror", which by all measures seems to be RAID10 internally.

Attached: individual drives.png (2001x1725, 1.69M)

Which doesn't mean it will stop working, just that it won't get more updates. And if you care about security... well, you shouldn't be using windows at all.

Hardware raid is just an excuse for either an extremely outdated MINIX or linux install running software raid on a dedicated SOC on your board. Use software raid unless it's a FOSS dedicated SOC.

For a work machine it's fine. There's security against common malware and security against glow-in-the-darkies. The former is more pragmatic and just requires not leaving the OS unpatched.


There isn't a FOSS dedicated SOC. I wish there was, but it's all botnet. That said, I just want something that doesn't cost an arm and a leg. I wouldn't even mind running some ARM board with a FreeNAS server inside the machine, would be cool and all, but I don't think any of them have 4x SATA ports + GbE.

Is all hardware RAID shit? I hope not.

if you've been installing any windows updates at all over the past 5 years you've been doing it wrong, the updates are nothing but spyware
fresh windows 7 iso -> disable windows update -> done

raid 10 can be configured in different ways
remember there's no standard for raid modes; any combination of stripe+mirror is a "raid 10", but that doesn't mean it's required to actually read from or write to multiple drives at once.

raid 1 in linux for example will not read from multiple drives for a single sequential read. (in raid 10 far mode it will).

in short who knows what the fuck it's doing
there is very little actual raid "standard". If it's striping and mirroring they can call it raid 10 no matter what is going on behind the scenes. If there's N copies of the data spread across N drives they can call it raid 1 no matter what it's doing behind the scenes.

compare the windows performance to linux raid performance to judge how shitty or not shitty windows storage performance is.
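
To illustrate the far-mode point above, here's a rough Python sketch of how the chunk copies get laid out. This is a simplified model of mdadm's near/far raid10 layouts, not the actual md driver code: the point is that with the near layout a single sequential stream only ever touches half the spindles, while the far layout stripes the primary copy raid0-style across all of them.

[code]
# Simplified model of mdadm raid10 "near" vs "far" chunk placement.
# Not the real md logic, just enough to show which disks a single
# sequential read ends up touching.

DISKS = 4
COPIES = 2

def near_primary(chunk):
    # near layout: the two copies of a chunk sit on adjacent disks in
    # the same stripe, so the primary copy of consecutive chunks only
    # rotates through DISKS // COPIES distinct disks.
    return (chunk * COPIES) % DISKS

def far_primary(chunk):
    # far layout: the primary copy is striped raid0-style across all
    # disks; the mirror copy lives in a later zone on a rotated disk.
    return chunk % DISKS

sequential_chunks = range(8)  # 8 consecutive chunks of one big file
print("near layout touches disks:", sorted({near_primary(c) for c in sequential_chunks}))
print("far  layout touches disks:", sorted({far_primary(c) for c in sequential_chunks}))
# near layout touches disks: [0, 2]
# far  layout touches disks: [0, 1, 2, 3]
[/code]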

the fact that it needs drivers is retarded. most motherboards that support a fake raid feature will do it at the motherboard level, even though it's not actually hardware raid. it must be using the cpu somehow, i don't know the internals, but it'll be transparent to windows or anything else; it'll just show up as a single drive instead of 4.

they are both fake raid. the fact that you had to install a driver for the motherboard's fake raid (which is especially shitty) should make it clear. you're just using the motherboard vendor's drivers to do it instead of windows 10's, and to the surprise of nobody, some random vendor software is faster.

there is a reason not to use software raid: even if the SOC is nothing but an ancient linux, the motherboard caps out on the speed of its sata bus.

the PCI bus is faster.

That's only true and good if the dedicated SOC actually uses the full bandwidth of the PCI bus, which it probably won't if it's a cheap card running ancient linux. If it doesn't, then not only do you risk your data going into a black hole via bugs in the implementation, you're also wasting power on a black-box SOC.

That's pretty confusing to my intuition of stripe and mirror. Any striped read/write should involve 2+ drives, right? That's the whole point, is it not? And mirroring duplicates that, I'd assume?

Seemingly all motherboard RAID offloads the "heavy lifting" to the CPU, which baffles me because an ARM Cortex on the motherboard should have been enough. Intel Rapid Storage also implements RAID via the driver; someone correct me if I'm wrong.

But that's the surprising part!
How come software RAID running on a decent processor is slower than some random vendor driver?

>On the left are 4 drives set to "two-way mirror" in Windows Storage Spaces; on the right is motherboard RAID10 on the same drives.
Reminds me of that faggot that did 6 drives with RAID0 and was on here asking what to do when 1 died. Oh how we laughed. Good times.

A true soft-RAID solution like Storage Spaces or whatever it's called has to account for variance in the way different disk controllers and other things work, whereas the vendor driver only has to account for one controller on one motherboard. The Windows soft-RAID also does checksumming, if I recall, which will always be slower than non-checksummed RAID.

To be clear, the right one in the OP, which is faster, is the fakeRAID. That's what I cannot understand.
That one had to be a troll though.


None of that makes sense. Any variance between disk controllers is irrelevant if they're all using the same protocol (e.g., SATA). Even if it were checksumming the contents of files, which I don't think it is (isn't that the ReFS you're talking about? which seems to be deprecated), that would still not explain a slowdown, since it's a CPU operation and this barely consumes 1% of the CPU, if that (I checked while benchmarking the storage).

Pic related: two pairs of disks as two-way mirrors done in Storage Spaces, striped via Disk Management. The performance is almost as good as motherboard RAID10, except for unqueued random IO and random writes. This is not CPU-bound, so none of this makes sense.

Attached: WSS WDM RAID10-like.png (1002x864, 1.4M)

Software RAID generally surpassed hardware RAID some years ago.

(that is, unless you fork out for a high end RAID controller)

Sure, in Linux. Windows Storage Spaces is slow as shit and I have the benchmarks to prove it. You have the same fucking data on two separate drives, and it will read off of just one; it's a glorified RAID 10 that performs worse than a single fucking drive. And it doesn't do integrity checking or anything, it's just NTFS, which is about as complex as ext3.

After some testing I'm going to just use the shitty motherboard RAID 10, because it's like twice as fast and just as resilient. In fact, did I mention that the stupid Windows "two-way mirror" across four drives can die if any two drives die? So it's even worse than actual RAID 10, which survives two failures as long as they aren't the same mirrored pair.
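
To put a number on that resilience point, here's a quick Python sketch counting which two-drive failures kill the array. It assumes classic RAID10 uses two fixed mirror pairs, and assumes the Storage Spaces two-way mirror spreads slab copies across every possible pair of the four drives; that spread is an assumption about its allocator (the worst case), not a documented guarantee.

[code]
from itertools import combinations

DRIVES = range(4)

# Classic RAID10: two fixed mirror pairs, striped together.
raid10_mirrors = [{0, 1}, {2, 3}]

# Two-way mirror over 4 drives, worst case: some slab has its two
# copies on every possible pair of drives (assumed allocator behavior).
twoway_mirrors = [set(p) for p in combinations(DRIVES, 2)]

def fatal(failed, mirrors):
    # The array is lost if both copies of some mirror are in the failed set.
    return any(m <= failed for m in mirrors)

for name, mirrors in [("classic RAID10", raid10_mirrors),
                      ("2-way mirror (spread slabs)", twoway_mirrors)]:
    lost = sum(fatal(set(f), mirrors) for f in combinations(DRIVES, 2))
    print(f"{name}: {lost} of 6 possible two-drive failures are fatal")
# classic RAID10: 2 of 6
# 2-way mirror (spread slabs): 6 of 6
[/code]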


Just tell me what that is. What is a good 4-port SATA RAID controller?

mdadm
ZFS
Don't buy raid hardware.

You can get fast as fuck hardware RAID by buying an LSI 92xx series card with a BBU (battery backup unit). The BBU will allow you to use card-level write caching (it is disabled otherwise). The best way to purchase this is through eBay; there are a lot of Chinese vendors selling these as OEM. You'll probably spend 350-400 USD.

Or get one refurbished for extremely cheap from Newegg: newegg.com/Product/Product.aspx?Item=9SIABGC6C90984&Description=raid bbu&cm_re=raid_bbu-_-9SIABGC6C90984-_-Product
(only fucking $120, wow)

Also
Stop listening to those people. Windows storage spaces is a garbage fire and you are asking for data loss, not because a drive dies, but because windows decides it doesn't like your array anymore and fuck you.

If you need RAID10, it's really about time to learn how to put together a NAS and share those files with samba. If you're willing to burn money on fucking hardware raid, then the following will end up saving you money.
-Don't fall for the tiny NAS meme. You want as many PCI-E slots as possible, and never get a chassis with fewer than 8 bays for HDDs. Old computers, undervolted if possible, are great for this; otherwise bargain-bin Ryzen hardware is the way to go.
-FreeNAS or OpenMediaVault (debian) for the OS
-10G SFP+ hardware from ebay is amazingly cheap. Mellanox connectx-3 single or dual port cards are preferable. Connectx-2 is too old. If you want a switch, look into what mikrotik offers.
-If you need more sata ports, use cheap used LSI 9211-8i HBAs, also from ebay. A tiny fan on the heatsink helps: take the heatsink off, drill two holes in the corners, use twisty ties, done.
-Take the time to learn whatever file system you use, and learn what it does and does not protect against. Outside of hardware failure/windows storage spaces/btrfs, the reason for data loss is realistically just you being a dumbass.
-6TB, 8TB and 10TB Easystores and WD Essentials go on sale regularly, dropping down to about $110, $140 and $170 respectively.
-3-2-1 backups faggot. Come here crying you lost everything and we'll laugh at you.
-Set up the system so if a drive dies or starts throwing errors, you get an email.

Attached: 1324304542523.jpg (1353x1001, 238.39K)

Now that's an enticing price range, but it seems to only have SAS connectors.


I've never dealt with SFP+, thanks for your post. (Neat pic too)

Man, I don't even know what to tell you. All the info I need is in your post. I'll save it and give a NAS some hard thought. Still, just using a NAS for backups and living with an unreliable software RAID seems like a viable solution as well.
I wonder how anons deal with the "1" (offsite) part. The usual solution is trusting your data to some big botnet like Amazon, and I find that so unsavory.

Lol
I bet you browse reddit.

Same shit with NTFS. If there is a mismatch in the NTFS partition data itself, it will also decide to stop functioning altogether. You could recover everything, though, if the tooling weren't exclusively Windows-only.

Literally this.
boards.4channel.org/g/thread/69988697
>Be in California
>earthquake
>Destroys RAID1 array on NAS due to shake

linux raid 1 will only read off one drive for a sequential read no matter how many drives you have mirrored. you will only see a performance boost if you have multiple simultaneous reads, which is why you always use raid 10 far 2 in every single use case.


that's fine for backup, but the performance is shit unless you have a 10 gigabit network. max throughput on 1 gigabit is only 125MB/s
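
For anyone wondering where that 125MB/s number comes from, it's just the line rate divided by 8. The ~6% overhead figure in the sketch below is a ballpark assumption for TCP over Ethernet framing, not a measurement.

[code]
# Back-of-envelope ceiling for file transfers over gigabit ethernet.
line_rate_bits = 1_000_000_000          # 1 Gbit/s
raw = line_rate_bits / 8 / 1_000_000    # -> 125 MB/s theoretical
usable = raw * 0.94                     # rough TCP/ethernet overhead (assumed ~6%)
print(f"theoretical: {raw:.0f} MB/s, realistic: ~{usable:.0f} MB/s")
[/code]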

you're going to need a hardware raid card no matter what you do if you want to raid SSDs and get the performance you should be getting. 2 SSDs in raid 10 will be gimped by the sata bus.

I don't get it. Why doesn't RAID 10 read off of 4 disks at once? If you're only reading a single large file, it should do this. This makes no sense.

Attached: chinese ssd.jpg (610x409, 70.41K)

(For the record, this isn't a deficiency of the SATA bus, since I can get 750 MB/s just fine on 4 hard drives RAID0'd)

Attached: CrystalDiskMark Striped (Disk Manager).png (1002x864, 1.63M)