Poll: Which type of filesystem do you prefer on a Proxmox host node?

Which type of filesystem do you prefer to use or recommend on a Proxmox host node?

If providers could share their recommendations on this in the comments, I would be very grateful.

Anyone who can describe the reasons for their choice, with a few words on why they recommend a given option, is also welcome.

Thanks for all responses in advance!

Which type of filesystem do you prefer to use or recommend on a Proxmox host node? (48 votes)
  1. ZFS: 37.50%
  2. ext4: 41.67%
  3. BtrFS: 10.42%
  4. XFS: 8.33%
  5. any other (please mention in a comment if you can): 2.08%

Good day and Goodbye

Comments

  • I think in general the comparison is a bit... misleading.

    While ext4/xfs/btrfs are rather classical filesystems as such (and might have their benefits or not), ZFS is not. It is rather comparable to using md-raid underneath, or LVM ... which, btw, you should have put in here as well. At least thin-LVM as a storage type is something that people might use to provide the guests' storage.

    In general, for the question of choosing between a folder on ext4/xfs/btrfs to provide a disk image vs. ZFS vs. thin-LVM (vs. ... Ceph?), I'd say you probably cannot get a single good recommendation, as it will heavily depend on the hardware you are using and maybe also on the size and use case of your guest VMs, and so on.

    My recommendation: define your use case and available resources and try to find the best fit for that ;-)
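
    For context, a minimal sketch of how those options end up as different storage types in /etc/pve/storage.cfg; the storage, VG and pool names below are just placeholders, not a recommendation:

        # /etc/pve/storage.cfg -- illustrative entries only, names are examples
        dir: local
            path /var/lib/vz
            content iso,vztmpl,backup

        lvmthin: local-lvm
            vgname pve
            thinpool data
            content images,rootdir

        zfspool: tank-vm
            pool tank/vmdata
            content images,rootdir
            sparse 1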

    Thanked by (1)miu
  • ext4 works fine; I've never had an issue with it since I started using Proxmox about 6 years ago.
    Clustering is another matter.

    Thanked by (1)miu
  • Depends on the storage configuration:
    fixed = xfs/ext4
    expandable = zfs
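
    To illustrate the "expandable" part, a hedged sketch of growing a ZFS pool by adding another mirror vdev (pool and device names are made up):

        zpool status tank                        # check the current layout first
        zpool add tank mirror /dev/sdc /dev/sdd  # grow the pool with another mirror vdev
        zpool list tank                          # the extra capacity shows up immediately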

    Thanked by (2)miu shallow
  • miu
    edited April 2021

    @Falzo said:
    I think in general the comparison is a bit... misleading.

    While ext4/xfs/btrfs are rather classical filesystems as such (and might have their benefits or not), ZFS is not. It is rather comparable to using md-raid underneath, or LVM ... which, btw, you should have put in here as well. At least thin-LVM as a storage type is something that people might use to provide the guests' storage.

    In general, for the question of choosing between a folder on ext4/xfs/btrfs to provide a disk image vs. ZFS vs. thin-LVM (vs. ... Ceph?), I'd say you probably cannot get a single good recommendation, as it will heavily depend on the hardware you are using and maybe also on the size and use case of your guest VMs, and so on.

    My recommendation: define your use case and available resources and try to find the best fit for that ;-)

    Good note, thanks. I didn't word it very precisely,
    because I of course had in mind ext4 or XFS in an md-RAID config vs. ZFS RAID (at an equivalent RAID level).

    Good day and Goodbye

  • @miu said: because I of course had in mind ext4 or XFS in an md-RAID config vs. ZFS RAID (at an equivalent RAID level)

    To add to the confusion: one could actually use btrfs to act as some sort of RAID ;-)
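
    Purely for illustration (not a recommendation, as the next sentence says), that would look roughly like this; device names are made up:

        # mirror both data and metadata across two disks with btrfs' own RAID profiles
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
        btrfs filesystem show      # verify the multi-device filesystem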

    I would not use btrfs for that anyway, and probably not XFS either.

    As @Neoon pointed out, ext4 (on your mdraid) works just fine; there the discussion is rather about raw vs. qcow2 images, which driver to use (virtio, scsi, ...) and if/how to preallocate metadata, etc.
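
    As a hedged example of that kind of per-VM tuning with Proxmox's qm tool (the VM ID 100 and the storage/volume name are placeholders):

        # use the virtio-scsi controller and pass discard/TRIM through to the storage
        qm set 100 --scsihw virtio-scsi-pci
        qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1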

    ZFS also has its uses and can speed things up a lot if you want to involve a caching device and/or have enough RAM. The latter can also be the culprit with ZFS, because it tends to be greedy in that regard, and you might need that memory elsewhere.
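
    A common way to rein in that greediness is capping the ARC via a module option; the 8 GiB value below is only an example, tune it to your host:

        # /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB (value in bytes)
        options zfs zfs_arc_max=8589934592

    Then run update-initramfs -u and reboot, or write the value to /sys/module/zfs/parameters/zfs_arc_max to apply it at runtime.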

    Thin-LVM also works great in my experience and was for a long time the better option for handling thin provisioning. Nowadays, with fstrim/discard, the right driver and sparse qcow2 images, you can achieve nearly the same without thin-LVM as well.
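
    A minimal sketch of both routes (VG, pool name and size are examples): a thin pool for Proxmox to carve guest volumes from, and trimming from inside a guest whose virtual disk has discard enabled:

        # thin-LVM route: create a thin pool "data" in the VG "pve"
        lvcreate -L 200G -T pve/data

        # discard route: inside the guest, release unused blocks back to the host
        fstrim -av                         # or enable the fstrim.timer systemd unit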

    Then there are snapshots...

    My recommendation: define your use case and available resources and try to find the best fit for that ;-)

    Thanked by (1)miu
  • Depending on the space in question, I typically end up using both ext4 (on lvm/mdadm) and zfs (directly over raw disks).

    I usually use ext4 on the root (OS) volume along with some space for VMs (that can be run on lvm/ext4). I also have a separate zfs pool for either additional storage or VMs running on zfs (for snapshots).

    ZFS is very useful for compression (and of course longer term storage with bit-rot prevention, despite backups). ARC needs some tuning to extract more performance and also leave room for VMs depending on the use case.
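
    On the compression side, a hedged example (the pool/dataset name is mine, and lz4 vs. zstd is a trade-off, not a prescription):

        zfs set compression=lz4 tank/vmdata             # applies to newly written data
        zfs get compression,compressratio tank/vmdata   # see what it actually saves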

    Having access to both ext4/lvm as well as zfs sort of nets the best of both - when I don't need access to ZFS I can also completely shut down the ZFS system if required (of course that will require a rebuild of the ARC, but for write heavy loads it is a non-issue).

    There are also lots of other benefits of replication (zfs send/recv) over rsync, especially when you have a very large number of files/inodes, as rsync can be particularly slow for such use cases (even when detecting small changes). I'm not even going to harp on snapshots - that is one of the fundamental reasons to even use ZFS.
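
    A minimal sketch of the send/recv flow being compared to rsync here; the host, pool and snapshot names are examples:

        # initial full replication
        zfs snapshot tank/data@2021-04-10
        zfs send tank/data@2021-04-10 | ssh backuphost zfs receive backuppool/data

        # later: incremental send ships only the blocks changed between snapshots
        zfs snapshot tank/data@2021-04-17
        zfs send -i tank/data@2021-04-10 tank/data@2021-04-17 | ssh backuphost zfs receive backuppool/data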

    LUKS and ZFS encryption are both useful in their own right (and ZFS is really useful to do things like scrubbing without even decrypting the pool as such).
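
    As a hedged illustration of that point (dataset names are examples): a natively encrypted dataset can be scrubbed with its key unloaded, because scrub only verifies checksums on the raw blocks:

        # create an encrypted dataset (prompts for a passphrase)
        zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure

        # later: scrub the pool without loading the key
        zfs unmount tank/secure
        zfs unload-key tank/secure
        zpool scrub tank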

    For most of my (hosted) dedis, the non-shrinkable aspect of ZFS isn't really a concern - the configuration/disks are typically static for the life of the server, so there's not too much to worry about. Avoiding LVM resizes and having dynamic datasets with quotas is very useful in the ZFS realm.
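
    The quota point in practice, with made-up dataset names; "resizing" is just a property change, no LVM-style resize dance:

        zfs create tank/vms
        zfs set quota=200G tank/vms    # cap the dataset (and its children)
        zfs set quota=300G tank/vms    # grow it later by simply changing the property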

    All in all, they both have their use cases and depending on your needs pick one over the other (or have both!).

    I'm still very hesitant to put ZFS on the root (OS) data just because rescue becomes a pain (very limited options to get access to zfs on a rescue system via providers) and frankly zfs isn't very good outside of the "safe" territory. There are hardly any recovery options and one has to go quite deep into zdb land for any serious recovery (and it is NOT easy even when directly connected, let alone over a rescue system).

    ZFS (on Linux) v2.1 (now in the RC stage) is going to be LTS - that will be a huge plus, because I'd expect a few more distros to have it available as part of the base OS, which means root on ZFS will get much easier, and I might consider it. Though frankly, for a measly <50GB, it's not really worth the time to deal with the complexity (despite the benefits of snapshots et al.).

    Separately, ext4 has been rock solid for the better part of a decade (or more). ZFS (on Linux) is relatively fast evolving and that comes with a fair share of pain and incompatibilities (and efforts to keep upgrading all your ZFS pools to allow compatible features/replication etc.).

    I recently ran into a lot of (fairly low-level, deeply technical - it's a story by itself) difficulty when migrating a ZFS pool from a FreeBSD system to a Linux system (because I am much more comfortable with Linux than FreeBSD), and I eventually just had to recreate the pool from scratch (which was not nice time-wise, as I had to replicate back a lot of data). All this over releases less than 2 years apart - so you can see it is not very simple when the fundamental FS itself is in relatively high flux/evolution. I'm not complaining - just mentioning that there is effort required to stay reasonably recent with ZFS, otherwise you will run into issues.

    Longish rant... I've been guilty of forum neglect and this is my way of making up.

    Thanked by (4)Falzo Not_Oles miu flips
  • havoc (OG, Content Writer)

    Using ext4, mostly because it's a single drive, so I didn't really see the point of extra complications.

    Might try ZFS on the next complete rebuild. That'll hopefully be a way off though, and by then I'll hopefully have added more drives.

    Thanked by (1)miu
  • XFS/mdadm, since I always had problems with ZFS when set up on NVMe drives. Too much I/O eats your CPU alive and freezes your entire node.
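
    For completeness, a hedged sketch of that xfs-on-mdadm layout (device names and mount point are examples):

        # mirror two NVMe drives with md-raid and put XFS on top
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
        mkfs.xfs /dev/md0
        mount /dev/md0 /var/lib/vz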

    Thanked by (3)Not_Oles quicksilver03 miu
  • Ceph; my main goal is storage and backups, so Ceph was the logical choice.

    Thanked by (1)miu
  • I like btrfs. The snapshots are handy and it has been just as stable as ext4 in the last two years for me.
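
    A minimal example of that snapshot convenience, assuming /data is a btrfs subvolume (paths are made up):

        mkdir -p /data/.snapshots
        btrfs subvolume snapshot -r /data /data/.snapshots/2021-04-10   # instant, CoW, read-only
        btrfs subvolume list /data                                      # list subvolumes/snapshots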

    Thanked by (2)miu rajprakash
  • Not_Oles (Hosting Provider, Content Writer)

    I do not know much about filesystems. So I read this interesting thread, but didn't post. Now, somebody actually sent me a PM to ask what I was using. So. . . .

    I am using ext4.

    Over the years I occasionally have heard adventure stories about zfs. :) In contrast to the zfs adventure stories, my mind keeps revisiting a qmail FAQ answer by DJB about "relying on certain guarantees provided by the UNIX filesystem." Nevertheless, I do think zfs is very interesting, and one of these days I expect to try using zfs. Also btrfs.

    I noticed btrfs mentioned prominently in Hetzner's semi-secret, unsupported-by-Hetzner, cool, fast Proxmox install. :) Made me wonder if btrfs was favored at Hetzner. Just wondering. No idea what the actual facts are.

    Best wishes from New York City and Mexico! πŸ—½πŸ‡ΊπŸ‡ΈπŸ‡²πŸ‡½πŸœοΈ

    Thanked by (1)miu

    I hope everyone gets the servers they want!

  • flips (OG)
    edited April 2021

    Hopefully the days of ugly btrfs bugs are long gone. (I almost lost quite a bit of data some years ago.) ZFS on Linux has been stable for me (I've been using it for some years on plain Debian for data volumes, not root, as it's DKMS-based).

    Thanked by (1)miu
  • I've been using single-device btrfs for many years without issue. Config management (ansible) makes it very easy to blow away and recreate root, so I no longer worry about drive failure for root. Application data are either on single disks with automated backups, or on ceph.

    Thanked by (1)miu
  • @Not_Oles said:
    I noticed btrfs mentioned prominently in Hetzner's semi-secret, unsupported-by-Hetzner, cool, fast Proxmox install. :) Made me wonder if btrfs was favored at Hetzner. Just wondering. No idea what the actual facts are.

    That looks interesting, can you share some more information about this install? I'd like to test it and perhaps move some non-critical stuff there.

  • Not_Oles (Hosting Provider, Content Writer)

    @quicksilver03 said:

    @Not_Oles said:
    I noticed btrfs mentioned prominently in Hetzner's semi-secret, unsupported-by-Hetzner, cool, fast Proxmox install. :) Made me wonder if btrfs was favored at Hetzner. Just wondering. No idea what the actual facts are.

    That looks interesting, can you share some more information about this install? I'd like to test it and perhaps move some non-critical stuff there.

    Hi @quicksilver03!

    Hetzner has their installimage script (see also on GitHub) within their Rescue System.

    It's been a while, so I do not remember exactly, but maybe hiding under "Other" when selecting the Debian version there are various Proxmox-over-Debian "unsupported-by-Hetzner" install options shown.

    Post-install, expect to probably hand-configure your server's internal networking for the VMs, and also the apt repositories for updating both Debian and Proxmox. Finally, please do not forget to use https when first attempting to log in to your newly installed server's web GUI on port 8006; plain http returns a blank page. Haha, the first time I tried to log in I thought my install had borked. :)
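
    As a hedged example of the repository part (the codename depends on the Debian/Proxmox release the script installed, so adjust "buster" accordingly):

        # Proxmox no-subscription repository
        echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
            > /etc/apt/sources.list.d/pve-no-subscription.list
        apt update && apt full-upgrade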

    If you try this install and run into trouble, just post. If I can't help you, we have both Newton and Leibniz :)

    I hope everyone gets the servers they want!

  • With networked storage I don't care much as long as it's raid1 or equal to that.

    I wonder why you would use Proxmox without networked storage. Might as well go with Virtualizor or SolusVM. Well, the licensing costs are probably the reason.

  • @martijnk said:
    With networked storage I don't care much as long as it's raid1 or equal to that.

    I wonder why you would use Proxmox without networked storage. Might as well go with Virtualizor or SolusVM. Well, the licensing costs are probably the reason.

    Maybe because of the network card?
    I don't think 1 Gbps is good enough for decent network storage, though I've never tried it.

    Also, maybe Proxmox is just installed on a single machine, so the network storage would just add latency without adding HA.

    Thanked by (1)miu
  • edited January 2023

    It is 2023 in the meantime, but I just joined LowEndSpirit, so I feel free to comment :tongue:

    My current system uses ext4 on all drives, and I have not had any issues so far. I have grouped the drives together in LVs according to their characteristics: 2x1TB NVMe as a 2TB volume, a 4TB SSD as an LV of its own (but new SSDs can be added), and a bunch of spinning HDDs together in a 5TB volume (1x1TB + 2x2TB).
    I am considering reconfiguring the system, but the more I read about filesystems, the more confused I get. Some are enthusiastic about ZFS, others use Btrfs everywhere, and then you have the XFS and ext4 fans. And each time I read such posts I think: yes, you're right.

    My main concern about LVM is that it starts filling up the first physical disk before using the next. That means the first disk will always be the busiest one. That's why I thought: let me use LVM striping. The only thing is that, ideally, you then have to have equally sized disks.
    The current setup is:
    2x1TB NVMe: Proxmox OS and local-lvm, holding the majority of the VMs and LXCs.
    4TB SSD: a few large VMs
    5TB HDD: ISOs, and data directories that are mounted into the LXCs (documents, films, etc.)

    I could start from scratch and introduce redundancy by using ZFS or some other kind of RAID, but that would cost me disk space. I have Ansible scripts to easily redeploy most of my LXCs and VMs, and my data is mirrored to a 2TB Dropbox space. So why lose disk space on additional redundancy...
    So I think I will stick with ext4... or use Btrfs... to benefit from the CoW feature... or... you see, that's my problem :/

    Anyway, the Proxmox OS can be moved to a small old 2.5" drive; it does not need the NVMe speed. Then I can combine the NVMe SSDs into a striped LV, which should give really good performance.
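
    A hedged sketch of that striped LV, assuming a dedicated VG over the two NVMe drives (names, stripe size and filesystem choice are just examples):

        # one VG over both NVMe drives, then a 2-way striped LV across them
        vgcreate vg_nvme /dev/nvme0n1 /dev/nvme1n1
        lvcreate --type striped -i 2 -I 64 -l 100%FREE -n vmdata vg_nvme
        mkfs.ext4 /dev/vg_nvme/vmdata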

    Oh well... my plans change by the day... the more I read, the more I doubt and reconsider everything...
