Need some opinions on ZFS resilvering

I am new to ZFS, so I need some insight on resilvering. I just recovered a degraded RAIDZ1 pool after a faulty disk, and it took around 4-5 days to resilver. During the whole process, all my applications on the pool were extremely slow. If I replace a drive in a healthy pool, with all disks working properly, will the resilver take the same amount of time, and will my applications slow down again?

Comments

  • edited July 17

    It will more than likely take less time if you are resilvering in a healthy pool because read operations can happen across all disks.

    I/O operations will still be happening though, so there will be an impact on your system performance. It's hard to judge, as it depends on your system specs, the type of disks used, and the type of data. You can change the resilver priority on your system with the 'zfs_resilver_delay' tunable; have a read of the documentation around that.

    Is this a system that is only typically used during certain hours? You could set up a cron job to lower the resilver priority during busy hours, and then raise it again once the busy period has passed.
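    A rough sketch of what that could look like on Linux with OpenZFS. Note the tunable names vary by version: 'zfs_resilver_delay' only exists on older releases, while newer OpenZFS (0.8+) uses 'zfs_resilver_min_time_ms' instead, so check what your module actually exposes first. The cron times below are placeholders.

    ```shell
    # Sketch only -- tunable names differ between OpenZFS versions; verify with:
    #   ls /sys/module/zfs/parameters/ | grep resilver

    # Older OpenZFS (pre-0.8): raise the delay to deprioritise resilver I/O
    echo 4 > /sys/module/zfs/parameters/zfs_resilver_delay

    # Newer OpenZFS (0.8+): shrink the per-txg resilver time slice instead
    # (default is 3000 ms; a smaller value yields more time to application I/O)
    echo 1000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms

    # Example cron entries: throttle during the day, restore in the evening
    # 0 8  * * * echo 1000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms
    # 0 20 * * * echo 3000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms
    ```
    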

    SharedGrid | Fast, secure, and reliable UK, USA and Singapore web, reseller and VPS Hosting
    Litespeed, Redis Cache, NVMe Drives, Daily Backups, 24x7 Support, Wordpress Optimised.

  • host_c Hosting Provider

    @nestap said:
    I am new to ZFS, so I need some insight on resilvering. I just recovered a degraded RAIDZ1 pool after a faulty disk, and it took around 4-5 days to resilver. During the whole process, all my applications on the pool were extremely slow. If I replace a drive in a healthy pool, with all disks working properly, will the resilver take the same amount of time, and will my applications slow down again?

    Rule 1 - Use either RAID 10 or Z2 (Z1/RAID 5 have been considered EOL for two decades; google why).
    Rule 2 - Use RAID 10 if you run VMs on that pool; everything else is for storage, not I/O-intensive applications.
    Rule 3 - Obey the 1 GB of RAM per TB of raw storage guideline in ZFS, and use only HBA cards.

    Rule 0 - Spend at least 12 hours googling around ZFS before you dive into it. Search for problems and performance issues rather than for how awesome it is; you will find more useful information on ZFS that way.
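    To illustrate the two layouts from Rule 1 and Rule 2, a sketch of the corresponding `zpool create` commands (pool and device names here are placeholders; substitute your own):

    ```shell
    # Striped mirrors ("RAID 10" in ZFS terms): best IOPS and fastest
    # resilver, suited to VM workloads -- rebuilds only copy one mirror
    zpool create tank mirror sda sdb mirror sdc sdd

    # RAIDZ2: survives any two disk failures, better suited to bulk storage
    zpool create tank raidz2 sda sdb sdc sdd sde sdf
    ```
    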

    Resilvering speed is dictated purely by the RAID type you use, the number of disks, the speed of the disks, the speed of the HBA, and the I/O hitting the pool at that time (and I assume your CPU is not a Core 2 Duo).

    Ah yes, you must stay below 80% space usage in ZFS, otherwise things will go slow. If you are at 90%, you are better off with a USB drive in terms of speed and I/O.
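    Checking how full a pool is takes one command (the pool name `tank` below is a placeholder):

    ```shell
    # The CAP column shows the percentage used; keep it under ~80%
    zpool list

    # Or print just the number for one pool, script-friendly
    zpool list -H -o capacity tank
    ```
    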

    If you have a 4-drive Z1 array (I presume you do), next time this happens, pray that no other drive leaves the pool (aka dies) until the resilver is finished.

    If you have 4 drives, the only logical RAID type would be RAID 10 (striped mirrors in ZFS). Some could argue that if the drives are old-ish or have a lot of hours on them, you should do Z2 (or RAID 6).


    Host-C - VPS Services Provider - AS211462

    "If there is no struggle there is no progress"

  • Ext4 on mdraid RAID 1 takes 6 hours to add one disk to a 2 TB pool. I tried ZFS on Linux several years ago, but performance was pathetic.

  • host_c Hosting Provider
    edited July 19

    @davide said:
    Ext4 on mdraid RAID 1 takes 6 hours to add one disk to a 2 TB pool. I tried ZFS on Linux several years ago, but performance was pathetic.

    SATA drives? 7.2K RPM, 64 MB cache?


  • edited July 20

    @host_c said:
    SATA drives?

    yes

    7.2K RPM

    yes

    64 MB cache?

    No idea. Why do you ask?

  • host_c Hosting Provider
    edited July 20

    @davide

    As I remember, those drives were slow at rebuild; whether software RAID or hardware RAID, neither made a difference. =)


  • @host_c: last time I tried ZFS on Linux with spinning disks, it had about a third of the throughput of mdraid during regular use. But I may start using it anyway in place of mdraid, as its "built-in" device management simplifies administration.
