Need to choose: RAID 1 Mirror (Software or Hardware)


Comments

  • edited October 2

    Newer versions of mdadm (for some years now) recognize arrays by the metadata on the block devices and no longer rely on device names, so even if the disk order or names change, the array is assembled automatically. ZFS, on the other hand, depends on the device names, which can be any of:

    /dev/disk/by-id
    /dev/disk/by-label  # not available for ZFS component disks
    /dev/disk/by-partuuid
    /dev/disk/by-path
    /dev/disk/by-uuid  # not available for ZFS component disks
    

    The immutable ones are /dev/disk/by-id/wwn-* and /dev/disk/by-partuuid. But there are problems:

    • Under /dev/disk/by-id, only the names starting with wwn- are immutable, and not all SATA storage devices provide them.
    • The UUIDs under /dev/disk/by-partuuid belong, in theory, to the partitions themselves and not to the filesystems within them, yet mkfs may still change these UUIDs when formatting. It's a caveat worth keeping in mind (a quick way to check what a disk exposes is sketched below).
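
    A minimal sketch of how one might verify this on a given box (the pool name tank and the wwn-* serials below are placeholders, not real devices):

    ls -l /dev/disk/by-id/ /dev/disk/by-partuuid/   # see which persistent names each disk actually exposes

    # mdadm finds members by the superblock metadata, not by device names:
    mdadm --examine --scan     # prints ARRAY lines reconstructed from on-disk metadata
    mdadm --assemble --scan    # assembles any arrays found by that scan

    # For ZFS, building/importing the pool via by-id names keeps it stable if the /dev/sdX order changes:
    zpool create tank mirror /dev/disk/by-id/wwn-0x5000c500aaaaaaaa /dev/disk/by-id/wwn-0x5000c500bbbbbbbb
    zpool import -d /dev/disk/by-id tank
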
  • host_c Hosting Provider

    THX @davide

    I was not up to date on MDADM.


  • Shakib Hosting Provider

    @SashkaPro said:

    @host_c said: Need some details on this, or maybe I have not understood something?

    Nah, in that part I meant that if Shakib can successfully install Proxmox with ZFS (I have never tried Proxmox), then the Proxmox installer apparently offers ZFS RAID during setup. So my advice was:
    1. Install Proxmox on the bare metal.
    2. Spin up a VM with maxed-out resources minus the RAM reserved for ZFS and the host (for example, if our server has 128 GB RAM and 2x4 TB NVMe, leave 8-10 GB RAM for the host and give the remaining ~119 GB to the VM).
    3. Inside that VM, install cPanel/DA and sell shared hosting.
    That way the task of "a shared hosting server on ZFS RAID" is completed.

    This setup will work perfectly. A rough sketch of the VM-creation step is below.
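
    As a rough illustration of step 2 on the Proxmox side (everything here is a placeholder: VM ID 100, a ZFS-backed storage called local-zfs, a ~3.5 TB disk, 32 cores, ~119 GB RAM, and an installer ISO already uploaded to the "local" storage; adjust to taste):

    # Create the big "hosting" VM; qm takes memory in MB (~119 GB here)
    qm create 100 --name shared-hosting \
      --memory 121856 --cores 32 --cpu host \
      --scsihw virtio-scsi-pci --scsi0 local-zfs:3500,discard=on \
      --net0 virtio,bridge=vmbr0 --ostype l26

    # Attach the installer ISO and boot from it, then start the VM
    qm set 100 --ide2 local:iso/almalinux-9.iso,media=cdrom --boot 'order=ide2;scsi0'
    qm start 100
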



  • host_c Hosting Provider
    edited October 3

    @HostMayo

    As a side note, since we have dived into storage:

    Here is a comparison of storage technologies, their general use cases, and their general strengths and weaknesses (a small creation sketch for the two local software options follows the table):

    Feature | NFS | iSCSI | iSCSI over Fibre Channel | Ceph | Local RAID 6 (12 Drives) | Local RAID 10 (12 Drives) | ZFS (RAID-Z2, 12 Drives) | MDADM (RAID 6, 12 Drives)
    --- | --- | --- | --- | --- | --- | --- | --- | ---
    Protocol Type | File-level | Block-level | Block-level over Fibre Channel | Distributed block, file, object | Local block-level | Local block-level | Local block-level with ZFS | Local block-level with MDADM
    Latency | High (file system overhead) | Low (direct block access) | Very low (high-speed Fibre Channel) | Moderate (network replication) | Low (local disk access) | Very low (local disk access) | Low (local disk, but ZFS adds checksumming overhead) | Low (local disk access)
    IOPS | Low to moderate | High (better random I/O) | Very high (optimized for low-latency storage) | Scalable (depends on cluster size) | Moderate (RAID 6 parity overhead) | High (good random I/O performance) | Moderate (ZFS parity overhead) | Moderate (RAID 6 parity overhead)
    Throughput | Moderate | High | Very high (Fibre Channel bandwidth) | High (scales with nodes) | High (parity-based, sequential writes are slower) | Very high (optimized for both sequential and random I/O) | High (with features like ZFS compression) | High (parity-based, sequential writes are slower)
    Scalability | Limited | Limited (single storage node) | Moderate (Fibre Channel fabrics can scale) | Very high (multi-node cluster) | Limited (per server/disk bays) | Limited (per server/disk bays) | Limited (scales up per pool) | Limited (single-machine RAID)
    Fault Tolerance | Limited (relies on server HA) | Limited (without extra setup) | High (redundancy with Fibre Channel fabrics) | Very high (automatic replication) | Moderate (tolerates 2 disk failures) | Moderate (tolerates 1 disk failure per RAID 1 pair) | High (tolerates 2 disk failures, strong data integrity with checksumming) | Moderate (tolerates 2 disk failures)
    Setup Complexity | Easy | Moderate | Complex (Fibre Channel infrastructure required) | Complex | Easy to moderate | Easy to moderate | Moderate (ZFS requires some tuning) | Easy to moderate (MDADM setup)
    Best For | File sharing, small VMs | VM disks, databases | High-performance applications needing ultra-low latency | Large-scale storage, HA, cloud | Balanced workloads, cost-effective redundancy | High-performance workloads, databases | Data integrity, large storage pools, backup and archiving | Cost-effective redundancy for local storage
    Cost Efficiency | Low | Moderate | High (Fibre Channel infrastructure is costly) | High (more hardware needed) | Moderate (less efficient use of space) | Low (50% space efficiency, higher disk cost) | Moderate (high efficiency, but requires more resources) | Moderate (space efficient but slower rebuilds)
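
    For the two local software options above, creation looks roughly like this (a sketch only; the names tank and /dev/md0 and the wwn-* device names are placeholders for the 12 drives):

    # ZFS RAID-Z2 across 12 drives (any two can fail); ashift=12 assumes 4K-sector disks
    zpool create -o ashift=12 tank raidz2 /dev/disk/by-id/wwn-0x5000c500000000{01..12}

    # MDADM RAID 6 across the same 12 drives, then a filesystem on top
    mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/disk/by-id/wwn-0x5000c500000000{01..12}
    mkfs.ext4 /dev/md0
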


  • host_c Hosting Provider

    NFS, iSCSI, and iSCSI over Fibre Channel are storage transport protocols: they "share" out whatever RAID type you have on the server/storage box.

    I included them in the comparison above just to give an idea of the performance/cost factors.
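
    For instance, a local array already mounted at /srv/storage (the path and client subnet here are purely illustrative) could be shared over NFS with something like:

    # /etc/exports on the storage server
    /srv/storage 10.0.0.0/24(rw,sync,no_subtree_check)

    # reload the export table
    exportfs -ra

    Clients then mount it over the network, while the redundancy is still whatever RAID level the exporting server runs underneath.
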

