Opening the filesystems royale

Long time lurker. Never read a discussion on filesystems on /tech. Does it make any difference at all? I've been rolling with LUKS-encrypted BTRFS raid-1 for storage and XFS for everything else. Quite uncomfortable. We know they got taken over by Facecuck and (((Red Hat))), but how fucked up are these projects at the moment? What do you folks suggest?

Attached: 6f0.gif (388x356, 2.22M)

Other urls found in this thread:

phoronix.com/scan.php?page=article&item=btrfs_raid_mdadm&num=2
securitypitfalls.wordpress.com/2018/05/08/raid-doesnt-work/
gist.github.com/MawKKe/caa2bbf7edcc072129d73b61ae7815fb
wiki.gentoo.org/wiki/Device-mapper#Integrity
btrfs.wiki.kernel.org/index.php/Status
bcachefs.org/
en.wikipedia.org/wiki/ZFS
twitter.com/SFWRedditVideos

i just use lvm + ext4 for everything, and i turn journaling off for database partitions / volumes.
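e.g. something like this (a sketch; the VG/LV names are examples, and the volume has to be unmounted before you drop the journal):

# carve out an LV for the database and put ext4 on it
lvcreate -L 50G -n dbdata vg0
mkfs.ext4 /dev/vg0/dbdata

# turn the journal off (fs must be unmounted)
umount /dev/vg0/dbdata
tune2fs -O ^has_journal /dev/vg0/dbdata
e2fsck -f /dev/vg0/dbdata    # sanity check after the feature change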

I looked at ZFS, but it seems better to just use Linux raid10 if you need the RAID functionality, plus LVM for snapshots and easier partition resizing. The data-consistency checksumming and all that would be great with ZFS, but the horror stories of kernel panics and ZFS maxing out system RAM, crashing the box, and losing all your data keep me from using it.
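For reference, the setup I mean is roughly this (a sketch; disk and volume names are made up):

# 4-disk RAID10 with mdadm
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# LVM on top for snapshots and easy resizing
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 200G -n data vg0
mkfs.ext4 /dev/vg0/data

# CoW snapshot of the LV, with 10G reserved for changed blocks
lvcreate -s -L 10G -n data_snap /dev/vg0/data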

Why?
phoronix.com/scan.php?page=article&item=btrfs_raid_mdadm&num=2
raid 10 far 2 is better than raid 1 in every circumstance and mdadm is faster than btrfs.
if you want the btrfs features why not just put it on top of mdadm raid.
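e.g. (a sketch; two-disk example, device names are placeholders):

# 2-disk RAID10 with the "far 2" layout
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
    /dev/sda /dev/sdb

# btrfs as a plain single-device filesystem on top of the array
mkfs.btrfs /dev/md0

One caveat with stacking it like this: btrfs checksums will still detect corruption, but since btrfs only sees one logical device it generally can't repair damaged data blocks itself, and the md layer underneath doesn't know which mirror holds the good copy either.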

See securitypitfalls.wordpress.com/2018/05/08/raid-doesnt-work/ and gist.github.com/MawKKe/caa2bbf7edcc072129d73b61ae7815fb
tl;dr Linux already has generic self-healing RAID with dm-integrity + dm-raid and there's dm-vdo getting worked on if you want compression/dedup
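On recent lvm2 the integrity layer is a single flag (a sketch; needs a kernel and lvm2 new enough to have --raidintegrity, and the VG/LV names are examples):

# RAID1 LV with dm-integrity under each leg; checksum failures
# surface as read errors, which the RAID layer then repairs from
# the other mirror
lvcreate --type raid1 --raidintegrity y -L 100G -n safe vg0
mkfs.ext4 /dev/vg0/safe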

ext4 for HDD, f2fs for NAND/SSD
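i.e. (a sketch; partitions are examples):

# ext4 on the spinning disk
mkfs.ext4 /dev/sda1

# f2fs on the SSD, built for flash's write characteristics
mkfs.f2fs -l data /dev/nvme0n1p1
mount -t f2fs /dev/nvme0n1p1 /mnt/data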

i didn't even know this existed, thanks
wiki.gentoo.org/wiki/Device-mapper#Integrity

I have a couple disks on BTRFS, and while the subvolume approach can be kinda cool for, say, keeping cruft out of your home folder, the performance goes downhill really fast.
Root and home are on LUKS+LVM with ext4, but if I get enough spare space I'd like to switch my storage over to ZFS, since I already work with it on a FreeNAS system, and variable block size plus lz4 can have fantastic performance gains.
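The subvolume trick is just (a sketch; assumes /home itself is on btrfs, and the paths are examples):

# give ~/.cache its own subvolume so snapshots of home skip it
# (nested subvolumes are excluded from snapshots)
btrfs subvolume create /home/user/.cache

# read-only snapshot of the home subvolume
btrfs subvolume snapshot -r /home /snapshots/home-backup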

As in for data? What's that good for? I assume it would be practically useless for pretty much all audio-visual media.

Kinda. I mentioned variable block size because compression then lets you fit the same data into fewer blocks, improving disk I/O.
And since lz4 is cheap and bails out early on incompressible data, files that are already compressed won't cost you much CPU time either.
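On ZFS it's a single dataset property (the pool/dataset names here are examples):

# only blocks written after this are compressed
zfs set compression=lz4 tank/media

# see what it's actually buying you
zfs get compressratio tank/media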

Every major OS dev either plans to move to a CoW-based FS sometime in the future or already has, to a degree:
Apple has APFS, introduced in 2017 and already deployed.
Microsoft has ReFS, intended to replace NTFS.
Oracle/Sun have btrfs and ZFS.
But Linux is stuck with ext4 for the foreseeable future, thanks to a combination of incompetence and, in the case of ZFS, licensing autism.

Shit's fucked. Install ReiserFS. That's supported by the Linux kernel and was developed by some roastie killer

Where's the beef? Just don't use RAID56 at all, and use kernel 5.0+: btrfs.wiki.kernel.org/index.php/Status
There's also bcachefs, but it's not upstream yet: bcachefs.org/
I haven't tried bcachefs myself; I'm very interested in hearing what its users think of it.
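For anyone who wants to poke at it anyway, the bcachefs-tools workflow is supposedly just this (a sketch; I haven't run it, check bcachefs.org for current syntax):

bcachefs format /dev/sdb
mount -t bcachefs /dev/sdb /mnt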

Lol just use ext4

Attached: 1555610681.jpg (752x460, 70.37K)

...

ZFS on FreeBSD is great, but on Linux I just resort to using ext4, like the lazy fuck I am.
Doing regular backups to the FreeBSD fileserver, which should be safe enough and spare me from using raid on the work machines.
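The backup run itself is nothing fancy (a sketch; host and dataset names are made up, and the snapshot step assumes the server end is ZFS):

# push from the ext4 workstation
rsync -aHAX --delete /home/ backupbox:/tank/backups/work/

# snapshot on the FreeBSD side so each run stays recoverable
ssh backupbox zfs snapshot tank/backups@$(date +%F)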

Not sure on the general status of ZFS on Linux, but as a package the filesystem and tools are the best I've used so far (didn't try HAMMER, but HAMMER2 has a good chance of being the general fs to end all). So it could be worth trying out.

both are vulnerable to data corruption issues.

NTFS

Attached: Spongebob.jpg (731x731, 84.08K)

Why should anyone care? If you want other filesystems, then just use those filesystems. There's no need to recreate their work if their work is perfectly usable.

it is? seems way more active than the others. windows still has ntfs and the bsds have the same ones they've always had

Is there any advantage to using ZFS over a hardware RAID card?

HAMMER2 also looks interesting, but it doesn't have any Linux implementation, so one would need to install DragonFly to test it.


btrfs is in mainline last I checked and a lot of people consider the ZFS situation to be moot since no one has enough standing.


You'd likely need an HBA (or a RAID card flashed to IT mode) anyway, but the main advantage is that you can ditch a separate piece of hardware you'd need to actively set up and monitor to keep data safe, and instead let ZFS yell at you when the SMART data comes back wrong.

I'll throw my opinion in the ring on this one. If you are (or work for) a company with a budget, buy a commercial filer. If this is for personal use and you really care about the data you store, I'd recommend a ZFS file system, specifically one spanning multiple hard drives in a RAID configuration with parity. There are SO many RAID configurations you can do with ZFS that I'm not going to suggest a specific layout, since it depends on the size of your drives and how many you've got.

Why ZFS? ZFS has an abundance of high-level features you can easily take advantage of, and personally, features that I care about. One of the greatest, in my opinion: when you write data to the file system, ZFS computes and stores a checksum of the data (at the block level). When you go back to read data from the disk(s), the checksum is verified against the data, so you and the file system know if that data has been corrupted. If it has been, and you configured your pool with parity or mirroring, ZFS will fix the data for you on the fly. Alternatively, you can run a scheduled check of the whole pool, or specific volumes, at a regular interval to catch corruption. Please note, this does not protect against corruption caused by buggy software or inadvertent file overwrites by a user. To clarify, it protects against corruption at the physical disk (disk failure, bad blocks, etc.) and against bitrot. You can argue whether bitrot is a thing or not; with MY data, I lean on the side of caution.
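That scheduled whole-pool check is a scrub (the pool name is an example):

# verify every checksum in the pool, repairing from parity/mirrors
zpool scrub tank

# the CKSUM column shows blocks that failed verification
zpool status -v tank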

Another great, and super cool, feature is snapshots. ZFS snapshots work at the volume level, and they are super cheap and super fast. When you tell ZFS to create a snapshot, it's almost instantaneous. And it's cheap because creating a snapshot doesn't cost you any disk space up front. I know that sounds strange if you haven't dealt with snapshots before, but it's only the changes and additions made to the file system after the snapshot that consume space. I'm not explaining it perfectly; just google how file system snapshots work.
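In practice (dataset names are examples):

# near-instant snapshot that costs no space up front
zfs snapshot tank/home@before-upgrade

# the USED column grows only as the live data diverges
zfs list -t snapshot

# roll the dataset back to the most recent snapshot
zfs rollback tank/home@before-upgrade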

I would answer this with yet ANOTHER great ZFS feature. ZFS does not need, and most of the time does not want, a RAID controller, especially a "smart" one (fuckin HP P410s). Hardware RAID can be great, but ZFS is all about software RAID. The software part is what gives you so much more control over your file system, which is cool. One upside of NOT requiring a RAID controller is that your data is still safe if the controller dies. I know people, personally, who relied on hardware RAID controllers to store their data, and then the controller crapped out. Guess what: without that controller, the computer doesn't know how to read the data from the disks. You can't just replace it with any old RAID controller; you generally need the exact model, often with the SAME firmware version that was running on the card that died. So that's FUN! With ZFS, you can pull your drives out of a system, put them in some other system, hooked up in a different order, even with internal disks slaved over USB, and mount them with ZFS. Your redundancy is not dependent on a hardware controller.
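Moving a pool between boxes really is just this (a sketch; the pool name is an example):

# on the old box, if it's still alive
zpool export tank

# on the new box: scan the attached disks, port order irrelevant
zpool import tank

# if the old host died without exporting, force it
zpool import -f tank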

CONT

Which version of ZFS? There isn't just one; there are a couple of choices, and this is where you should check the wiki and read about the history of ZFS. What I personally run, because I give a shit about my data and want all the bells and whistles, is the Oracle Solaris operating system hosting my ZFS file system. I know, it's proprietary, fuckin sue me. You can download Solaris from Oracle's site (after creating an account, I KNOW) and install it for personal use. Please note, Solaris is NOT Linux; it's a Unix beast. Sliding the topic a bit: Oracle has GREAT documentation for the zfs and zpool CLI commands. Oracle has kept improving ZFS, and some nice features come natively with their newer versions; native ZFS encryption is one, as is inline LZ4 compression. If you don't want to use proprietary shit, FreeBSD has great native ZFS support. There are awesome open-source distros built just for file servers that leverage ZFS. Not to get off topic, but it is my opinion that the large storage manufacturer NetApp basically took the same ideas, reworked the CLI and interface, closed-sourced it, and called it Data ONTAP with the WAFL file system (though in fairness, WAFL actually predates ZFS).
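That native encryption looks like this on newer ZFS (a sketch; OpenZFS 0.8+ syntax, dataset names are examples):

# passphrase-encrypted dataset, with compression stacked on top
zfs create -o encryption=on -o keyformat=passphrase \
    -o compression=lz4 tank/secret

# after a reboot/import, load the key and mount
zfs load-key tank/secret
zfs mount tank/secret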

I have personally tested this: I created a ZFS array on FreeBSD, moved it over to different hardware running Arch Linux with ZFS installed, and was able to mount and read my data.

Here, just read the Wiki about ZFS: en.wikipedia.org/wiki/ZFS

NTFS when I have that option and ext4 when I don't. When people here say something is good, it's guaranteed to be garbage, so I just keep using what I know. I don't trust any of this shit. Especially if it's related to Linux or even worse, the BSDs. Anything new is also guaranteed to be trash. Just look at everything else. Everything sucks, so why would that be any different?

nilfs2. Lets me go back in time in a scriptable way, which is cool for live backups of the filesystem and also for reverting to earlier versions of individual files. It's also very simple to use and a good file system for SSDs because of how it spreads its writes. Not the fastest, but who cares.
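The time-travel workflow, for the curious (a sketch; the checkpoint number comes from lscp on your own system, device/paths are examples):

# list the checkpoints nilfs2 has been taking automatically
lscp

# pin checkpoint 1234 as a snapshot so the GC won't reclaim it
chcp ss 1234

# mount the snapshot read-only elsewhere and copy files out
mount -t nilfs2 -r -o cp=1234 /dev/sdb1 /mnt/snap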

ZFS sounds like the systemd of filesystems, both in how unparseable its feature set is at first look and in how everyone using it sounds like a Jehovah's Witness. Has it also got a DNS resolver?

The big appeal of ZFS is the RAID-Z functionality, followed closely by the self-healing and snapshotting.
Basically, getting as much storage out of your disks as possible without the performance loss of RAID5/6/(7?).
That is also what makes ZFS a RAM whore for most people if it's not tuned correctly, but it can be mostly alleviated with zoom-zoom SSDs as cache, along with tuning.
ZFS as a whole makes more sense than BTRFS, but has fallen foul of the good ol' licensing issue.
That, and people not understanding their filesystem needs.
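The usual tuning knobs on Linux (a sketch; sizes and devices are examples):

# cap the ARC at 4 GiB instead of the default of half your RAM
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

# add a zoom-zoom SSD as an L2ARC read cache
zpool add tank cache /dev/nvme0n1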

based.
NTFS is too new for me though. I use FAT32.

they can be good, but it's a Zig Forums thing that everything has to be hard to use, so you have to spend hours learning everything, and possibly learn some programming language too, just to configure the program.

This guy knows what's up. I forgot about the RAM whoring ZFS does; I consider that a feature. But my use case is that the information system hosting my files does exactly that, hosts my files, and nothing more. You do raise a good point though: you can add an SSD or other super fast storage and use it as a cache, which is another great feature!


Sounds interesting, might have to give that a try in the future and see how it works. The ability to fall back on a snapshot easily is like magic. I checked their site but didn't see how you can natively or easily restore a single file from a snapshot; is there other documentation on that? With ZFS you at least get the hidden, read-only .zfs/snapshot directory, so you can copy a single file from a snapshot back to the live volume without manually mounting anything.
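e.g. (dataset/file names are examples):

# snapshots are auto-mounted on access under the hidden .zfs dir
ls /tank/home/.zfs/snapshot/before-upgrade/

# copy one file back without touching anything else
cp /tank/home/.zfs/snapshot/before-upgrade/notes.txt /tank/home/

# make the .zfs directory show up in ls if you prefer
zfs set snapdir=visible tank/home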


Jesus christ man, don't try anything new!! How the fuck do you learn about new tech if you don't go try it for yourself? Especially if you don't trust the fucks on here?

To my knowledge, ZFS is unable to resolve host names to IP addresses. However, if you were hosting a primary DNS server, or really any DNS server you rely on to resolve name lookups, you may consider storing the named zone files on an underlying ZFS file system for safe keeping.

NTFS is complete dogshit by filesystem standards and the only reason it's still around is because Microsoft autistically refuses to support anything else (aside from the FAT family and recently ReFS).

Attached: 0ab7e06a00cf7114cd579682bd3391eb8663aeb55060711d4f41c2b94ef0a0ec.jpg (800x1233, 102.87K)

I hear you, but it won't be my data testing out how good ReFS is, spanning multiple disks, hoping the filesystem is still able to mount in case of a bad Windows update / BSOD / the regular system reboots, because it's Windows...

No, it's really not; it's comparable to ext4, not really better or worse. The future of file systems is CoW, and that's the direction we're all going. Microsoft really wants to make sure ReFS is thoroughly tested in the real world before deploying it on consumer-level shit, which is why it's currently only for Windows Server. Windows Server users, unlike consumerfags, actually pay Microsoft money for continuous support programs, so if a Windows Server drive on ReFS fails, Microsoft would be more than happy to send their data-recovery guys over to a paying enterprise customer, and it helps them gather real-world testing data before pushing it to everyone else.

I read about bcachefs some years back and became convinced it's the future. I check on it periodically from my ext4 lignux systems.

It's not comparable to ext4 because it doesn't have ANY of the permissions features.

penis

nothing supports refs tho. might as well not exist.

no that was because you didn't have backups

Attached: nobrain.gif (500x493, 631.3K)

Jesus christ my sides, are you serious? Hey if you believe that, Microsoft will also call you if your computer has a virus and offer to help fix it over the phone for you.


HAHAHA, you thought my greentext was a true story, HAHAHA
I'll tell you this though: if the company was an M$ shop and wanted to host their data on a Windows Server OS, I'd get two identical hardware RAID controllers (one as a spare, just in case) and host their data on an NTFS partition before I hosted it on ReFS. At least NTFS has been around and is proven.

Forgot to mention though: you're not wrong, in that made-up scenario they should have had a backup. I'm not a total jerk; you were right about that. It should be an offline and offsite backup, a separate PHYSICAL backup, not just somewhere else online.

I know it must be hard being, you know, special needs and all, but actual companies are paying Microsoft for official support and said support doesn’t come with nothing

Attached: 82D783FD-ABCC-45F5-8D1B-B07EA719E2EA.jpeg (493x517, 42.09K)

I should have been less of a jerk. Yes, you can buy official support from Microsoft. They will sell you all the support you can afford. If you've got the money, you can pay for an onsite, 24-hours-a-day, official certified Microsoft Professional. I've met one, once, at a company that could afford one.

To get to my actual point: in my limited experience with different vendors (including M$), a paid, licensed support contract does NOT get you as much support as you would think, depending on how much you pay them.

General "support" troubleshooting (from the vendor's side):

Also, if it's a software company, generally all you get with support is patches / updates / security fixes, or the ability to upgrade to the next version. So if something simply stops working for some unknown reason, see above for how they start troubleshooting.

It also highly depends on the problem. A hardware company is much more likely to simply ship a replacement part, and even send a guy to install it, if that's part of your service agreement (just from my experience). My laughter (the hahaha in my post) came from the thought of M$ in ANY WAY sending someone from their company to help in the described scenario of an ReFS drive failing and the data being unrecoverable, as if M$ sends someone out because they care.

Also, I think we are getting off topic. Let me distill this down:

ZFS = good, if you care about your data
Everything else = I don't care, it's not MY data, you store your data on whatever you want

If it's an enterprise customer on the phone, then fucking hell, Microsoft had better send someone over there, unless it's some third-world country outside their support area

bump