Which are your favorites? In my experience, ReiserFS and ext4 are the best for HDDs, and XFS for SSDs. But, I have not tried Btrfs. ZFS is great, although I have limited experience with it. Have any of you tried F2FS on SSDs? I have heard that it is meant for USBs/SD cards. I'd also like to hear about some of the more esoteric filesystems, like NILFS2 and HAMMER. For those of you NVMe owners, what are your experiences?

Attached: Reiser.jpg (920x613, 39.28K)

Sorry, I forgot to do the captcha, so it ate the subject without me noticing. This is a thread for discussing filesystems.

FAT32.

I use XFS everywhere. What does your "experience" amount to? Feelz?

btrfs on top of LUKS
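In case anyone wants the shape of that setup, a rough sketch (the device, mapper name, and mount point are placeholders):

cryptsetup luksFormat /dev/sdb2            # create the LUKS container
cryptsetup open /dev/sdb2 cryptroot        # unlock it as /dev/mapper/cryptroot
mkfs.btrfs /dev/mapper/cryptroot           # btrfs goes on top of the mapped device
mount /dev/mapper/cryptroot /mnt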

As you probably know, XFS was created by SGI as the primary file system for $50,000+ MIPS workstations and servers in the 90s, and then carried over into the early 2000s for use in exorbitantly expensive clustered supercomputers and such. My reasoning is as follows: there's no point using a high-performance filesystem on an HDD. Ext4 supports full data journaling, not just metadata journaling. ReiserFS has also never made me lose data. I would use ReiserFS on my SSD (I use Gentoo, so efficiently managing small files would be a godsend), but it doesn't support TRIM.
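If you want to sanity-check whether TRIM actually works on a given setup, roughly this (device name is a placeholder):

lsblk --discard /dev/sda    # non-zero DISC-GRAN / DISC-MAX means the device supports discard
fstrim -v /                 # works on ext4/XFS/F2FS/Btrfs; fails on filesystems without TRIM support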

1. ZFS will eat all of your RAM. It's supposed to give it back when something requests it, but it doesn't, and your system will crash, which leads to 2.

2. ZFS will cause kernel panics constantly
github.com/zfsonlinux/zfs/issues?q=panic is:open
127 open issues for ZFS causing kernel panics

ZFS might work if you have a dedicated box and all it's doing is running ZFS (you set it up, get it working, and don't fucking touch it again; a network file server or something), but you're retarded if you use it on anything else.

Also, raid10 far2 + ext4 still performs better than any other combination.
Disable journaling for performance.
Enable full data journaling for reliability, but your write performance will be cut in half, because it's literally writing the data twice.

Linux will not double read performance with raid1, which is why it's suggested to use raid10 far2 for fucking everything, even if you have 10 drives.

You also didn't mention how you have to paravirtualize half the Solaris kernel.

Easy to prevent; in fact, it's one of the first things you do. If things still go crazy, you can flush your caches every few minutes:
echo 3 > /proc/sys/vm/drop_caches
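Presumably the "first thing you do" is capping the ARC via the zfs_arc_max module parameter (mentioned further down the thread); a sketch, with 2 GiB as an arbitrary example value:

echo 'options zfs zfs_arc_max=2147483648' > /etc/modprobe.d/zfs.conf    # cap the ARC at 2 GiB at module load
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max                # or change it at runtime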

I know you're full of shit because I ran zfsonlinux for several years with only 512 MB of RAM on a P3-equivalent box alongside other services.

I've heard RAID-Z2 (basically RAID6 with ZFS) is the best in terms of safety/efficient use of space.
It allows for any two drives to fail, whereas mirrored setups are fucked if the two drives that failed are mirrors of each other.
I put the stuff I need quick on SSDs, and use my platter boys for storage/video files etc., so extreme performance isn't an issue.
May I ask what tasks you do that benefit from using RAID10?
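For reference, building a RAID-Z2 pool is a one-liner; a rough sketch with a placeholder pool name and disks:

zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd    # any two of the four disks can fail
zpool status tank                                               # check pool health and layout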

In my experience ext4 crashes and fucks my data. Went back to ext3 lol.

raid 10 far 2 has superior performance for literally everything and also mirrors the data; that's what the "2" in far 2 means. "far" means the data copies are guaranteed to be on separate drives, and "2" is the number of data copies.

Raid 1 will not double your read performance. Raid 10 far 2 will. Raid 1 will only show read performance improvements if you have so many separate reads that they saturate a single drive, but a single read (reading a 10 GB file, a massive sqlite database, etc.) will not be improved. That's just the way Linux does raid 1; if you have a raid card, then who knows how it handles it. Raid 10 + near/far/offset is specific to software raid in Linux; it's not a standard raid 10.

here's the data layout for raid 10 far 2
2 drives   3 drives
--------   --------
A1 A2      A1 A2 A3
A3 A4      A4 A5 A6
A5 A6      A7 A8 A9
.. ..      .. .. ..
A2 A1      A3 A1 A2
A4 A3      A6 A4 A5
A6 A5      A9 A7 A8
.. ..      .. .. ..
as you can see, the copies of the data are kept at the back of the drives, allowing raid 0 levels of performance (while cutting your available space in half, same as raid 1)
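Building such an array is straightforward with mdadm; a sketch, assuming four placeholder partitions:

mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sd[bcde]1    # f2 = far layout, 2 copies
mkfs.ext4 /dev/md0
cat /proc/mdstat    # shows the array with the far=2 layout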

Currently using f2fs on Ubuntu 18.04. Kind of a pain to set up, but it works really well.
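For anyone curious, a non-root f2fs volume amounts to roughly this (device and mount point are placeholders):

apt install f2fs-tools        # provides mkfs.f2fs and fsck.f2fs
mkfs.f2fs -l data /dev/sdb1   # -l sets the volume label
echo '/dev/sdb1 /mnt/data f2fs defaults,noatime 0 2' >> /etc/fstab
mount /mnt/data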

I keep fucking up this post.

As for writes, obviously it has to write the data twice, same as raid 1, so you cut write performance in half too. It does have the advantage over raid 1 + raid 0 setups that you can use an odd number of drives, and the read speed is much faster.

In the pic-related diagram, a traditional raid 1 + 0, you have 4 drives, but the read speed will only be boosted to 2x a single drive.

In raid 10 far 2, with 4 drives, your read speed will be 4x a single drive.

it will look like this

4 drives / linux raid 10 far2
-----------------------------
A1 A2 A3 A4
A5 A6 A7 A8
.. .. .. ..
A4 A3 A2 A1
A8 A7 A6 A5
.. .. .. ..

Attached: 15179003008_e48806b3ef_o.png (577x401, 43.36K)

raid 10 will also tolerate a number of failed drives equal to the number of copies minus one.
raid 10 far 2 means 2 copies of the data, so any one drive can fail.

If you want 4 copies of the data, you can do raid 10 far 4, at which point any 3 drives can fail.

I use XFS for everything since I don't need any advanced file system features, just an FS that's reliable, low on resources, and decently fast. I tried Btrfs the last time I was reinstalling GNU/Linux, and every time I installed Intel microcode updates the system became unbootable. From what I hear, the ext4 code is old and haphazard, but ext4 is still a good FS.

With mirrored systems, say you have 3x3 drives, you could theoretically lose 3 and still be fine, but you could also lose 2 and be fucked if one is the copy of the other.
Parity-striped systems like RAID6 don't have that problem. Any 2 drives can fail. It therefore has a better worst-case scenario.
It is also more efficient with space usage. RAID10 is only 50% efficient for any number of drives. RAID6 is 1 - 2/n efficient for n drives (e.g. with 6 drives you get 1 - 2/6 ≈ 67% usable space), and you need at least 4 drives.

On less than 1 GB.
Module settings for arc_max work fine on machines with 2 GB or more RAM.

arc_max not being honored hasn't been an issue since the 0.4 days.

Pretty shit.
ZFS stripes and mirrors perform as expected.
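A striped set of mirrors (the ZFS equivalent of raid 10) is roughly this, with placeholder disks and pool name:

zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd    # two mirrored pairs, striped together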

This whole thread reads like someone who just discovered RAID and can only think one way.

It's 2x slower than raid 10. ZFS does not guarantee that all drives will be used, and in practice they do not all get used for reads.

I use ext4 with data=journal and metadata_csum on all my partitions, including / on an ancient Samsung 830 SSD. The write speeds are as slow as your average /g/ poster, but it also has a retard-protection hat. The write speeds are actually acceptable on my three-disk raid10.
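A sketch of how that gets set up, with a placeholder device and mount point:

mkfs.ext4 -O metadata_csum /dev/sdc1    # checksummed metadata at format time
tune2fs -O metadata_csum /dev/sdc1      # or enable it later, on an unmounted filesystem
echo '/dev/sdc1 /data ext4 defaults,data=journal 0 2' >> /etc/fstab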

NTFS and ext4 have the 1980s design of indexing free space using bitmaps. Ext* simply cannot be good until they ditch the bedrock (the ext architecture itself).

ext4 and squashfs cover all needs.

HAMMER2 on Linux when?

Combination of ext2, ext4, and NTFS

I don't understand the point of data=ordered, which is the default.

data=journal
-files can be recovered if the write fucks up

data=ordered
-the system can detect that a write fucked up but can't repair it; the file is zeroed out, your file is gone

data=writeback
-essentially no data journaling (only metadata is journaled); if a write fucks up, the system won't know it and can't do shit anyway. Max performance. Good for postgresql or any DB that does its own protection.

What's the point of knowing a file is fucked up if the file cannot be salvaged and the system zeros it out after it detects it's fucked?

It keeps the metadata intact at a low performance cost and allows programs that are designed to recover from failed writes to do so.
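For concreteness, the three modes are just mount options picked per filesystem; a sketch with placeholder UUIDs and mount points:

UUID=aaaa /home               ext4 defaults,data=ordered   0 2    # the default: metadata journaled, data written before the metadata commits
UUID=bbbb /important          ext4 defaults,data=journal   0 2    # data goes through the journal too, so it's written twice
UUID=cccc /var/lib/postgresql ext4 defaults,data=writeback 0 2    # metadata-only journaling, no ordering; the DB handles its own integrity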