Ubuntu is going to make it easier than ever to use ZFS!

What does Zig Forums think about this? How does this impact the current 'filesystem war' of ZFSonLinux, Btrfs, and Stratis?

Attached: zol.png (760x563, 168.52K)

Other urls found in this thread:

ceph.com/
slickdeals.net/f/12838102-my-best-buy-members-10tb-wd-easystore-external-usb-3-0-hard-drive-32gb-usb-flash-drive-170-free-shipping?src=frontpage

Except that Red Hat's Stratis is much younger

Who gives a fuck? If you're hoarding data, you should move to object storage.


Learning and installing CEPH is the best thing I've ever done.

Good stuff. The less reason we have to keep FreeBSD around, the better.

...

wtf even IS ceph?

ceph.com/
*inhales marijuana*
*gains 23 points of corruption*
it's the cloud, maaaan.
it's files but they're on clouds instead of disks, man.
you'll never have to deal with a single server urgently needing a disk swap in its single dedicated RAID, maaaan.
a few disks fail on the 20th of December? you can just wait until after Christmas to swap 'em out.
this well-known, easy to deal with, easy to understand problem is gone :)

zfs sucks. can't have more than a few terabytes of storage because it requires enterprise hardware past that. you can't have 100+gb ram on consumer hardware that often comes with 4 or fewer ram slots.

That's just not true. And you can turn off the RAM-hungry features and still get functionality comparable to what other filesystems offer.
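For the curious, a minimal sketch of that kind of tuning, assuming a pool named "tank" (pool name and settings are placeholders, not gospel):

    # dedup is the one real RAM hog; keep it off
    zfs set dedup=off tank
    # cache only metadata in RAM instead of file data
    zfs set primarycache=metadata tank
    # lz4 is cheap enough to leave on even on weak hardware
    zfs set compression=lz4 tank

With primarycache=metadata you give up data caching in exchange for a small memory footprint.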

t. 4GB zfs setup.

If you can afford many TBs of storage, the RAM is not a problem.
No, stacking hard disks with no RAID and no backups does not count.

hdds are cheap. you can get over 10tb in a single drive, and even the worst motherboards have at least 4 sata ports. most consumer hardware supports maybe 32gb ram, and some meme matx/itx boards even less. the server parts would cost at least as much as the drives would.

The more platters a disk has, the higher the risk of failure: getting 10 TB disks is irresponsible without some highly redundant scheme such as RAID.
Also, double that cost to account for the space backups need, plus their own redundancy.
Average consumer hardware has 4 RAM slots, and 16 GB sticks are not rare, so you can easily get to 64 GB total.

what

It's almost as if you WANT to lose your files.

Attached: JkBFUCt8to.jpg (645x480, 46.23K)

I wish people would stop being so scared of btrfs. The RAID 5 and 6 modes are still flagged as "experimental", but I've been running btrfs in RAID 5 mode for about 6 years now without issue. It's been through an HDD failure, several expansions/replacements, and one drive removal, and has run on kernels from 3.2 up to 4.14 with not one corrupted file in 5.5TB of both frequently read/written and archival data. The only thing I'd worry about is frequent small writes, like databases, or living in an area with very inconsistent power; in those cases you should either disable CoW or get a UPS.
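For anyone wanting to try it, a rough sketch of that setup; device names and paths are placeholders, and note that the +C flag only affects files created after it's set:

    # RAID5 for data, RAID1 for metadata is the usual arrangement
    mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd
    # disable CoW for a database directory before any files land in it
    mkdir -p /mnt/pool/db
    chattr +C /mnt/pool/db
    lsattr -d /mnt/pool/db   # should show the 'C' flag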

if you don't know anything at all, you'll need to ask better questions than that.

ext4 is enough for me so I don't need ZFS.

I don't really expect a file system to do much other than these:
Does ZFS satisfy all these use cases? I'm on ext4 right now though mostly because it came by default with my distribution of choice.

ZFS
ext4

There you go.
ZFS is incredibly fast for reads if you give it enough RAM, because hot data stays cached in the ARC and you're essentially reading off a ramdisk.
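You can check whether that's actually happening with the stat tools that ship with ZFS on Linux:

    # one-shot summary of ARC size, hit rates, and tunables
    arc_summary
    # live view: reads, hits, misses, hit% refreshed every second
    arcstat 1

A hit ratio in the high 90s means almost no read ever touches a disk.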


Is what everyone should be doing. It's cheap, and commies are purging all wrongthink from the internet.
how retarded

it's the other way around. ext4 for long-term storage so you don't need some crazy expensive hardware, and zfs for data you need to access often.

Wrong. The overhead it requires is so small it's irrelevant on anything Core 2 Duo or newer.


You don't. What makes you say that? Any computer in the last 13 years will run it fine.


I'm considering moving my 100TB ZFS server over to Ceph because I now have a second server with 30TB and I don't want to use an overlay system on top to combine both directory structures.

Yeah I'll get there eventually...


How else would you expand storage easily? All the filesystem-level stuff has inane limitations like "you can only add another drive of the same size", "you can't just add more drives to a volume, you gotta gradually replace existing drives with bigger ones", and "since you can't easily expand volumes you end up creating new ones and having to manually keep track of which volume your files are in". It's a pain in the ass to deal with.
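For comparison, this is what growing a ZFS pool looks like, with placeholder device names; both paths are exactly the limitations being complained about:

    # option 1: bolt on a whole new vdev (you can't add single disks to an existing raidz)
    zpool add tank raidz1 /dev/sde /dev/sdf /dev/sdg
    # option 2: swap every disk in a vdev for a bigger one, then the vdev grows
    zpool set autoexpand=on tank
    zpool replace tank /dev/sdb /dev/sdh

Ceph sidesteps this by placing object replicas across whatever mix of disks you give it, so adding one odd-sized drive to the cluster just works.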

What I want is:


If it's not CEPH then what is it?


If you do please share your experiences. Would like more discussion on this subject

I've already switched to btrfs, and with the write hole getting fixed there's no reason not to do the same, except that ZFS has name recognition among people who've been in the industry for a long time.

Unless it relicenses I couldn't care less. Only place I might use ZFS would be on a dedicated NAS rig.

Attached: 2ced59a6acb1ffc7e148692ae8d2c9ee5619507774aeae2dfa49fc974e1aa352.png (500x700, 207.84K)

ZFS

ZoL runs on 512MB of RAM and a P4-equivalent CPU.
Most NAS appliances from the past few years could actually use it, but for whatever reason they don't.

Why do lincucks always resort to lying?

you sure that has happened?
seems like they're definitely working on it, but it's not actually fully fixed yet.

Attached: writehole.png (867x591, 116.95K)

IIRC the problems with RAID 5/6 are also problems with actual hardware RAID 5/6.

I host a 20TB mirrored ZFS samba share for my LAN inside a VMware machine with 512MB of memory, running on my desktop. It saturates the disks and the network under load from multiple programs/users.
People need to stop parroting this garbage. I put this off for years because of remarks like this.
Stop this.

And you know what's hilarious? L2ARC has its own memory demands and rarely even gets used, yet they parrot the idea that you need it.
SLOG devices are only used for sync writes.
Dedup is only good in a few edge cases, none of which you'll ever run into.
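For the record, this is how those devices get attached, with placeholder device names; neither helps a typical home box:

    # L2ARC: second-level read cache, only useful once the RAM ARC is constantly full
    zpool add tank cache /dev/nvme0n1
    # SLOG: separate intent log, touched only by synchronous writes (NFS, databases)
    zpool add tank log /dev/nvme1n1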

ZFS is the best around if the data is sitting on disk for more than a few days.

botnet

In what way? ZFS has been Free Software for many years now.

ZFS eats up RAM with deduplication enabled

I personally don't go below 8GB, since that's what it's fully tested against, and I've seen rare reports of people having issues below 4GB (though those issues were due to their OS not playing nice and could likely be resolved with some tuning). It's nice to know it can go that low, though.

To reiterate for those that don't know, the only time ZFS NEEDS RAM is when using deduplication.
Don't fucking use deduplication.
It's off by default for a reason: if you aren't running an enterprise system you don't need it, and it will only waste your resources and cause problems. Otherwise, when running ZFS normally, it'll just make use of free RAM for caching, but it doesn't NEED that RAM. You can reduce how much it tries to cache with some tuning variables, sketched below.
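Concretely, the main tuning variable on ZoL is zfs_arc_max, in bytes; a sketch capping the cache at 1GiB:

    # runtime change, takes effect immediately
    echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max
    # persist it across reboots
    echo "options zfs zfs_arc_max=1073741824" >> /etc/modprobe.d/zfs.conf

And if you're still tempted by dedup, zdb -S <pool> simulates the dedup table on existing data so you can see what it would cost before turning it on.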

Likewise (as others have said) SSD L2ARC caches do NOT do what people think they do and only benefit specific use cases.

Hopefully the idea that it needs ECC is also thoroughly debunked by now. ECC is nice, but no more needed than with any other filesystem.

The only thing that's expensive about ZFS is stupidity and hard disks. And as it happens, here's some 10TB $170 easystores: slickdeals.net/f/12838102-my-best-buy-members-10tb-wd-easystore-external-usb-3-0-hard-drive-32gb-usb-flash-drive-170-free-shipping?src=frontpage
My first ZFS/samba NAS was literally pic related for $50 and a bunch of spare parts/thrift store junk. The most expensive parts were the disks and a decent PSU. Which reminds me, you can cheap out on a lot, but don't cheap out on the PSU.

t. paranoid datahoarder with 24TB NAS + backups.

Attached: ASRock QC5000ITX.jpg (404x500, 78.9K)

ZFS > XFS > ext4 > ReiserFS >= Reiser4 > btrfs > JFS > UFS

t. garage tech enthusiast

I don't see any FAT32 in there.