I’d been using ZFS with Void Linux on both my laptop and desktop for a couple of months. And ZFS is cool! But I’m thinking it’s not great for my use case, especially on my laptop with its more constrained resources. Memory usage was a real problem, even after imposing low ARC limits. And the kernel module compile time was long enough to be a bit annoying, especially when building for several kernels (I like to keep the last few around, to be safe), which happens fairly often on a rolling release.

I switched the laptop to LUKS/btrfs a couple of days ago, and I think that was the right choice for it. Now I’m considering doing the same for my desktop, since the two seem comparable but btrfs is in-kernel and seemingly friendlier on system resources. Before doing so, I figured I’d ask the community about it; maybe there are important factors or features of either setup that I’m not considering.

Here’s the stuff I care about. Both offer all of it, but I’m not an expert in either and I don’t know how equal they are.

  • Disk encryption. With ZFS everything (except the EFI partition) is encrypted; I use ZFSBootMenu in that scenario. For the btrfs setup I have the kernel/initramfs on an ext2 partition, and I don’t store any decryption keys in the initramfs. I know GRUB can decrypt LUKS with limitations, but I prefer this setup, and it feels secure enough to me. Any pitfalls I’m missing? (Rough sketch of the layout after this list.)
  • Pools/subvolumes
  • Snapshots. ZFSBootMenu has an option to boot from a snapshot. With btrfs it looks like I’d need to create a subvolume from a snapshot, which in a recovery situation might mean doing it from recovery media. That’s OK, given how unlikely it is I’ll need to, but if anyone knows of an easier way, I’d love to hear it. (See the restore sketch after this list.)
  • CoW
  • RAID 1
  • Compression is nice, especially for the laptop
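
To make the encryption bullet a bit more concrete, here’s roughly the layout I ended up with on the laptop. Device names and the label are just examples, adjust for your own disk; the initramfs simply prompts for the passphrase at boot, since no keyfile is embedded in it:

    # /dev/sda1 - EFI system partition (FAT32, unencrypted)
    # /dev/sda2 - /boot (ext2, unencrypted kernel + initramfs, no keys stored here)
    # /dev/sda3 - LUKS container with btrfs inside
    cryptsetup luksFormat /dev/sda3
    cryptsetup open /dev/sda3 cryptroot
    mkfs.btrfs -L voidroot /dev/mapper/cryptroot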
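
And for the snapshot bullet, the restore procedure I’d expect to run from recovery media is roughly the following. This assumes a flat layout with the root in a subvolume called @ and snapshots under @snapshots, which is just my convention, not something btrfs requires; the snapshot name is an example too:

    cryptsetup open /dev/sda3 cryptroot
    mount -o subvolid=5 /dev/mapper/cryptroot /mnt             # mount the top-level subvolume
    mv /mnt/@ /mnt/@.broken                                    # set the broken root aside
    btrfs subvolume snapshot /mnt/@snapshots/root-good /mnt/@  # writable copy of the snapshot becomes the new root
    umount /mnt
    reboot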

Edit: typo in title.

  • JovialSodiumOP · 6 hours ago

    Hearing roughly a decade of successful use, especially on systems with constrained resources, certainly makes me lean further towards btrfs.

    Its RAID levels other than 0/1/10 are buggy, but 0/1/10 are considered reliable.

    btrfs has been solid and done everything I could want. It was a huge upgrade from mdadm and lvm.

    @ikidd@lemmy.world said that btrfs is poor at software RAID. I’ll do a little research into how it fares for RAID 1 versus mdadm. I don’t see any reason I couldn’t do mdadm>luks>btrfs if that’s the better choice, but if btrfs is reliable and performs comparably, I’d certainly rather do that.
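
    For my own notes, here’s my rough understanding of what the two options would look like on the desktop’s two disks. Device names are just examples and I haven’t tested either yet, so correct me if I have this wrong:

        # btrfs-native mirror: one LUKS container per disk, btrfs RAID 1 on top
        cryptsetup luksFormat /dev/sda2
        cryptsetup luksFormat /dev/sdb2
        cryptsetup open /dev/sda2 crypta
        cryptsetup open /dev/sdb2 cryptb
        mkfs.btrfs -d raid1 -m raid1 /dev/mapper/crypta /dev/mapper/cryptb

        # mdadm>luks>btrfs: mirror first, encrypt the array, single-device btrfs on top
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
        cryptsetup luksFormat /dev/md0
        cryptsetup open /dev/md0 cryptroot
        mkfs.btrfs /dev/mapper/cryptroot

    The btrfs-native route presumably means unlocking two LUKS containers at boot, unless I reuse a key, which is one point in favor of the mdadm stack.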

    • I have no doubt ZFS is solid, too, FWIW. I leaned toward btrfs because it’s simple: the commands are straightforward and clear, and nothing requires more than one step. That’s all super valuable to me, because there are other things I’d rather spend my time on than fiddling with the filesystem.

      @ikidd@lemmy.world said that btrfs is poor at software RAID.

      You should check for yourself. I haven’t used software RAID in years (RAID 0+1 gives me no value), but the btrfs team and the Arch wiki say 0, 1, and 10 are solid. You should not use 5 or 6; they’re known to be buggy, and even the btrfs man page tells you not to use them. So, yeah: btrfs is poor at RAID 5/6, but to my understanding it’s good at 0/1/10.
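
      If you want to double-check what profile a filesystem is actually using, or convert an existing one, something like this should do it (the path is just an example):

          btrfs filesystem df /mnt                                    # shows the data/metadata profiles in use, e.g. RAID1
          btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt    # convert data and metadata to RAID 1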

      A btrfs setup can cover encryption (via LUKS), compression, snapshots, and some RAID. I found that combining mdadm, lvm, and a filesystem built a Jenga tower: if one part failed, the entire thing was borked. I once did an OS upgrade, lost the mdadm config, and spent two days recreating it. I never used it on a new machine after that. Separation of concerns is great, but having an all-in-one that can self-repair and boot into snapshots is better.
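
      By self-repair I mean things like scrub: with checksums plus a RAID 1 profile, a bad copy gets rewritten from the good one. Roughly:

          btrfs scrub start /      # read everything and verify checksums, repairing from the mirror where possible
          btrfs scrub status /     # check progress and error counts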

      I can’t speak to performance. No doubt Tom’s Hardware or someone like that has looked into it in detail.

      • ikidd@lemmy.world · 4 hours ago

        The issue, last time I looked into btrfs mirrors, was its poor reporting of disk problems and the fact that it would happily boot with a borked drive. That might be fixed by now, but at the time it was a ten-year-old unresolved bug. It seemed like a WONTFIX, and I didn’t need that on a server OS drive.
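
        If I were relying on it today, I’d at least want the error counters checked from a cron job rather than trusting the filesystem to complain, something along these lines (flags from memory, check the man page):

            btrfs device stats --check /     # exits non-zero if any device error counter is non-zero
            dmesg | grep -i 'btrfs.*error'   # catch anything the kernel logged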