2019 Personal Top VPS Providers

cybertech (OG, Benchmark King)

After a year on LET and finally LES, it's been a good learning journey on VPS in general. Even better to have found a few enthusiastic hobbyists from whom I can keep picking up new information.

This list is formed predominantly by quad-benchmarking (bench, nench, fio, Geekbench), with a personal evaluation of the performance triangle: CPU, I/O, and network.

So apart from number crunching, the VPS needs better-than-average connectivity (also evaluated from personal experience).

The list is in no particular order of ranking; they are all preferred and (mostly) in use.

       **Avoro**
        vServer Winter Promotion 2019
        https://browser.geekbench.com/v4/cpu/14993051
        ----------------------------------------------------------------------
        CPU model            : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
        Number of cores      : 2
        CPU frequency        : 2299.998 MHz
        Total size of Disk   : 30.5 GB (8.7 GB Used)
        Total amount of Mem  : 5949 MB (830 MB Used)
        Total amount of Swap : 1023 MB (0 MB Used)
        System uptime        : 6 days, 11 hour 44 min
        Load average         : 0.19, 0.12, 0.04
        OS                   : CentOS 7.7.1908
        Arch                 : x86_64 (64 Bit)
        Kernel               : 5.4.5-1.el7.elrepo.x86_64
        ----------------------------------------------------------------------
        I/O speed(1st run)   : 723 MB/s
        I/O speed(2nd run)   : 958 MB/s
        I/O speed(3rd run)   : 964 MB/s
        Average I/O speed    : 881.7 MB/s
        ----------------------------------------------------------------------
        Node Name                       IPv4 address            Download Speed
        CacheFly                        205.234.175.175         106MB/s       
        Linode, Tokyo2, JP              139.162.65.37           8.86MB/s      
        Linode, Singapore, SG           139.162.23.4            13.9MB/s      
        Linode, London, UK              176.58.107.39           134MB/s       
        Linode, Frankfurt, DE           139.162.130.8           304MB/s       
        Linode, Fremont, CA             50.116.14.9             15.4MB/s      
        Softlayer, Dallas, TX           173.192.68.18           15.3MB/s      
        Softlayer, Seattle, WA          67.228.112.250          12.1MB/s      
        Softlayer, Frankfurt, DE        159.122.69.4            107MB/s       
        Softlayer, Singapore, SG        119.81.28.170           11.0MB/s      
        Softlayer, HongKong, CN         119.81.130.170          6.96MB/s      
        ----------------------------------------------------------------------
        CPU: SHA256-hashing 500 MB
            1.678 seconds
        CPU: bzip2-compressing 500 MB
            5.680 seconds
        CPU: AES-encrypting 500 MB
            1.167 seconds

    ioping: seek rate
        min/avg/max/mdev = 84.9 us / 293.3 us / 16.5 ms / 466.6 us
    ioping: sequential read speed
        generated 8.41 k requests in 5.00 s, 2.05 GiB, 1.68 k iops, 420.4 MiB/s
----------------------------------------------------------------------
[root@cybertech fio-2.0.9]# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [100.0% done] [217.2M/74645K /s] [55.9K/18.7K iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=279327: Thu Dec 26 10:13:02 2019
  read : io=3071.2MB, bw=240058KB/s, iops=60014, runt= 13104msec
  write: io=1024.4MB, bw=80020KB/s, iops=20004, runt= 13104msec
  cpu          : usr=12.32%, sys=46.46%, ctx=37221, majf=0, minf=5
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786431/w=262145/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=3071.2MB, aggrb=240058KB/s, minb=240058KB/s, maxb=240058KB/s, mint=13104msec, maxt=13104msec
  WRITE: io=1024.4MB, aggrb=80019KB/s, minb=80019KB/s, maxb=80019KB/s, mint=13104msec, maxt=13104msec

Disk stats (read/write):
  vda: ios=777603/259222, merge=0/0, ticks=418387/184379, in_queue=123965, util=93.89%
----------------------------------------------------------------------


    **PHP-Friends**
    vserver schnupperspecial 2019
    https://browser.geekbench.com/v4/cpu/14900606
    ----------------------------------------------------------------------
    CPU model            : Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
    Number of cores      : 2
    CPU frequency        : 2199.998 MHz
    Total size of Disk   : 65.0 GB (10.0 GB Used)
    Total amount of Mem  : 5948 MB (756 MB Used)
    Total amount of Swap : 0 MB (0 MB Used)
    System uptime        : 1 days, 15 hour 32 min
    Load average         : 0.25, 0.08, 0.02
    OS                   : CentOS 7.7.1908
    Arch                 : x86_64 (64 Bit)
    Kernel               : 5.4.6-1.el7.elrepo.x86_64
    ----------------------------------------------------------------------
    I/O speed(1st run)   : 1.5 GB/s
    I/O speed(2nd run)   : 1.4 GB/s
    I/O speed(3rd run)   : 1.4 GB/s
    Average I/O speed    : 1467.7 MB/s
    ----------------------------------------------------------------------
    Node Name                       IPv4 address            Download Speed
    CacheFly                        205.234.175.175         105MB/s       
    Linode, Tokyo2, JP              139.162.65.37           6.83MB/s      
    Linode, Singapore, SG           139.162.23.4            8.00MB/s      
    Linode, London, UK              176.58.107.39           81.2MB/s      
    Linode, Frankfurt, DE           139.162.130.8           96.6MB/s      
    Linode, Fremont, CA             50.116.14.9             8.28MB/s      
    Softlayer, Dallas, TX           173.192.68.18           13.7MB/s      
    Softlayer, Seattle, WA          67.228.112.250          10.9MB/s      
    Softlayer, Frankfurt, DE        159.122.69.4            88.6MB/s      
    Softlayer, Singapore, SG        119.81.28.170           9.04MB/s      
    Softlayer, HongKong, CN         119.81.130.170          6.70MB/s      
    ----------------------------------------------------------------------
    CPU: SHA256-hashing 500 MB
        2.211 seconds
    CPU: bzip2-compressing 500 MB
        7.179 seconds
    CPU: AES-encrypting 500 MB
        1.978 seconds

    ioping: seek rate
        min/avg/max/mdev = 108.9 us / 252.1 us / 23.3 ms / 487.3 us
    ioping: sequential read speed
        generated 8.96 k requests in 5.00 s, 2.19 GiB, 1.79 k iops, 447.8 MiB/s
    ----------------------------------------------------------------------
[root@cybertech fio-2.0.9]# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [100.0% done] [248.2M/85426K /s] [63.6K/21.4K iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=17219: Thu Dec 26 10:29:14 2019
  read : io=3070.1MB, bw=218726KB/s, iops=54681, runt= 14377msec
  write: io=1025.8MB, bw=73011KB/s, iops=18252, runt= 14377msec
  cpu          : usr=10.67%, sys=58.35%, ctx=130288, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786156/w=262420/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=3070.1MB, aggrb=218726KB/s, minb=218726KB/s, maxb=218726KB/s, mint=14377msec, maxt=14377msec
  WRITE: io=1025.8MB, aggrb=73011KB/s, minb=73011KB/s, maxb=73011KB/s, mint=14377msec, maxt=14377msec

Disk stats (read/write):
  vda: ios=778402/259815, merge=0/0, ticks=377079/125283, in_queue=170719, util=92.46%
----------------------------------------------------------------------


**Letbox**
BBox NVMe2
https://browser.geekbench.com/v4/cpu/13188969
----------------------------------------------------------------------
CPU model            : Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
Number of cores      : 2
CPU frequency        : 2899.998 MHz
Total size of Disk   : 523.0 GB (201.9 GB Used)
Total amount of Mem  : 1993 MB (1184 MB Used)
Total amount of Swap : 92 MB (92 MB Used)
System uptime        : 52 days, 15 hour 34 min
Load average         : 0.03, 0.01, 0.00
OS                   : Ubuntu 18.04.3 LTS
Arch                 : x86_64 (64 Bit)
Kernel               : 4.15.0-65-generic
----------------------------------------------------------------------
I/O speed(1st run)   : 1.1 GB/s
I/O speed(2nd run)   : 1.2 GB/s
I/O speed(3rd run)   : 1.1 GB/s
Average I/O speed    : 1160.5 MB/s
----------------------------------------------------------------------
Node Name                       IPv4 address            Download Speed
CacheFly                        205.234.175.175         101MB/s       
Linode, Tokyo2, JP              139.162.65.37           19.7MB/s      
Linode, Singapore, SG           139.162.23.4            14.0MB/s      
Linode, London, UK              176.58.107.39           18.0MB/s      
Linode, Frankfurt, DE           139.162.130.8           15.9MB/s      
Linode, Fremont, CA             50.116.14.9             112MB/s       
Softlayer, Dallas, TX           173.192.68.18           69.0MB/s      
Softlayer, Seattle, WA          67.228.112.250          67.7MB/s      
Softlayer, Frankfurt, DE        159.122.69.4            10.8MB/s      
Softlayer, Singapore, SG        119.81.28.170           10.6MB/s      
Softlayer, HongKong, CN         119.81.130.170          13.7MB/s      
----------------------------------------------------------------------
CPU: SHA256-hashing 500 MB
    3.496 seconds
CPU: bzip2-compressing 500 MB
    5.533 seconds
CPU: AES-encrypting 500 MB
    1.596 seconds

ioping: seek rate
    min/avg/max/mdev = 91.7 us / 179.7 us / 3.02 ms / 40.7 us
ioping: sequential read speed
    generated 12.1 k requests in 5.00 s, 2.94 GiB, 2.41 k iops, 602.8 MiB/s
----------------------------------------------------------------------
root@labox:~/fio-2.0.9# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.0.9
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m] [100.0% done] [289.2M/98180K /s] [74.3K/24.6K iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=6387: Wed Dec 25 19:37:29 2019
  read : io=3073.4MB, bw=245028KB/s, iops=61257, runt= 12844msec
  write: io=1022.7MB, bw=81529KB/s, iops=20382, runt= 12844msec
  cpu          : usr=14.97%, sys=59.54%, ctx=17696, majf=0, minf=4
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786786/w=261790/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=3073.4MB, aggrb=245028KB/s, minb=245028KB/s, maxb=245028KB/s, mint=12844msec, maxt=12844msec
  WRITE: io=1022.7MB, aggrb=81529KB/s, minb=81529KB/s, maxb=81529KB/s, mint=12844msec, maxt=12844msec

Disk stats (read/write):
  vda: ios=786616/261632, merge=0/8, ticks=461844/121448, in_queue=576704, util=99.24%
----------------------------------------------------------------------
Thanked by (3): isunbejo, seriesn, uptime

I bench YABS 24/7/365 unless it's a leap year.

Comments

  • cybertech (OG, Benchmark King)
        **Inception Hosting**
        UK-SSD-KVM-2048
        https://browser.geekbench.com/v4/cpu/15073785
        ----------------------------------------------------------------------
        CPU model            : Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz
        Number of cores      : 2
        CPU frequency        : 3792.008 MHz
        Total size of Disk   : 14.0 GB (12.0 GB Used)
        Total amount of Mem  : 1990 MB (722 MB Used)
        Total amount of Swap : 2035 MB (1 MB Used)
        System uptime        : 6 days, 20 hour 22 min
        Load average         : 0.73, 0.70, 0.34
        OS                   : CentOS 7.7.1908
        Arch                 : x86_64 (64 Bit)
        Kernel               : 5.4.5-1.el7.elrepo.x86_64
        ----------------------------------------------------------------------
        I/O speed(1st run)   : 95.8 MB/s
        I/O speed(2nd run)   : 156 MB/s
        I/O speed(3rd run)   : 162 MB/s
        Average I/O speed    : 137.9 MB/s
        ----------------------------------------------------------------------
        Node Name                       IPv4 address            Download Speed
        CacheFly                        205.234.175.175         98.2MB/s      
        Linode, Tokyo2, JP              139.162.65.37           5.13MB/s      
        Linode, Singapore, SG           139.162.23.4            2.35MB/s      
        Linode, London, UK              176.58.107.39           83.7MB/s      
        Linode, Frankfurt, DE           139.162.130.8           39.5MB/s      
        Linode, Fremont, CA             50.116.14.9             5.95MB/s      
        Softlayer, Dallas, TX           173.192.68.18           4.64MB/s      
        Softlayer, Seattle, WA          67.228.112.250          8.97MB/s      
        Softlayer, Frankfurt, DE        159.122.69.4            39.2MB/s      
        Softlayer, Singapore, SG        119.81.28.170           2.22MB/s      
        Softlayer, HongKong, CN         119.81.130.170          7.49MB/s      
        ----------------------------------------------------------------------
        CPU: SHA256-hashing 500 MB
            1.603 seconds
        CPU: bzip2-compressing 500 MB
            6.091 seconds
        CPU: AES-encrypting 500 MB
            1.313 seconds
    
        ioping: seek rate
            min/avg/max/mdev = 27.9 us / 104.8 us / 20.9 ms / 284.4 us
        ioping: sequential read speed
            generated 20.8 k requests in 5.00 s, 5.07 GiB, 4.16 k iops, 1.01 GiB/s
        ----------------------------------------------------------------------
        [root@cybertech fio-2.0.9]# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
        test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
        fio-2.0.9
        Starting 1 process
        test: Laying out IO file(s) (1 file(s) / 4096MB)
        Jobs: 1 (f=1): [m] [100.0% done] [130.4M/45030K /s] [33.4K/11.3K iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=1976: Thu Dec 26 11:22:29 2019
          read : io=3068.3MB, bw=126712KB/s, iops=31678, runt= 24795msec
          write: io=1027.9MB, bw=42447KB/s, iops=10611, runt= 24795msec
          cpu          : usr=6.88%, sys=26.93%, ctx=82311, majf=0, minf=4
          IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
             submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
             issued    : total=r=785459/w=263117/d=0, short=r=0/w=0/d=0
    
        Run status group 0 (all jobs):
           READ: io=3068.3MB, aggrb=126712KB/s, minb=126712KB/s, maxb=126712KB/s, mint=24795msec, maxt=24795msec
          WRITE: io=1027.9MB, aggrb=42446KB/s, minb=42446KB/s, maxb=42446KB/s, mint=24795msec, maxt=24795msec
    
        Disk stats (read/write):
          vda: ios=779068/261081, merge=0/3, ticks=910371/227293, in_queue=638894, util=86.29%
        ----------------------------------------------------------------------
    
        **NexusBytes**
        MegaDeals Thursday - EU-Germany-Ultra-NVME-MT
        https://browser.geekbench.com/v4/cpu/14873654
        ----------------------------------------------------------------------
        CPU model            : AMD Ryzen 7 3700X 8-Core Processor
        Number of cores      : 2
        CPU frequency        : 3593.248 MHz
        Total size of Disk   : 18.0 GB (13.0 GB Used)
        Total amount of Mem  : 1990 MB (610 MB Used)
        Total amount of Swap : 2047 MB (0 MB Used)
        System uptime        : 6 days, 1 hour 23 min
        Load average         : 0.22, 0.06, 0.02
        OS                   : CentOS 7.7.1908
        Arch                 : x86_64 (64 Bit)
        Kernel               : 5.4.5-1.el7.elrepo.x86_64
        ----------------------------------------------------------------------
        I/O speed(1st run)   : 2.7 GB/s
        I/O speed(2nd run)   : 2.8 GB/s
        I/O speed(3rd run)   : 2.7 GB/s
        Average I/O speed    : 2798.9 MB/s
        ----------------------------------------------------------------------
        Node Name                       IPv4 address            Download Speed
        CacheFly                        205.234.175.175         107MB/s       
        Linode, Tokyo2, JP              139.162.65.37           5.53MB/s      
        Linode, Singapore, SG           139.162.23.4            7.35MB/s      
        Linode, London, UK              176.58.107.39           64.2MB/s      
        Linode, Frankfurt, DE           139.162.130.8           108MB/s       
        Linode, Fremont, CA             50.116.14.9             7.51MB/s      
        Softlayer, Dallas, TX           173.192.68.18           12.0MB/s      
        Softlayer, Seattle, WA          67.228.112.250          10.9MB/s      
        Softlayer, Frankfurt, DE        159.122.69.4            76.3MB/s      
        Softlayer, Singapore, SG        119.81.28.170           10.2MB/s      
        Softlayer, HongKong, CN         119.81.130.170          7.65MB/s      
        ----------------------------------------------------------------------
        CPU: SHA256-hashing 500 MB
            0.394 seconds
        CPU: bzip2-compressing 500 MB
            3.887 seconds
        CPU: AES-encrypting 500 MB
            0.705 seconds
    
        ioping: seek rate
            min/avg/max/mdev = 72.8 us / 159.6 us / 2.27 ms / 46.8 us
        ioping: sequential read speed
            generated 18.2 k requests in 5.00 s, 4.45 GiB, 3.65 k iops, 912.1 MiB/s
        ----------------------------------------------------------------------
        [root@cybertech fio-2.0.9]# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
        test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
        fio-2.0.9
        Starting 1 process
        test: Laying out IO file(s) (1 file(s) / 4096MB)
        Jobs: 1 (f=1): [m] [100.0% done] [108.4M/36444K /s] [27.8K/9111  iops] [eta 00m:00s]
        test: (groupid=0, jobs=1): err= 0: pid=23976: Thu Dec 26 11:50:35 2019
          read : io=3072.7MB, bw=109720KB/s, iops=27429, runt= 28677msec
          write: io=1023.4MB, bw=36541KB/s, iops=9135, runt= 28677msec
          cpu          : usr=2.24%, sys=11.56%, ctx=70231, majf=0, minf=5
          IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
             submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
             complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
             issued    : total=r=786607/w=261969/d=0, short=r=0/w=0/d=0
    
        Run status group 0 (all jobs):
           READ: io=3072.7MB, aggrb=109719KB/s, minb=109719KB/s, maxb=109719KB/s, mint=28677msec, maxt=28677msec
          WRITE: io=1023.4MB, aggrb=36540KB/s, minb=36540KB/s, maxb=36540KB/s, mint=28677msec, maxt=28677msec
    
        Disk stats (read/write):
          vda: ios=783400/260901, merge=0/0, ticks=1363974/307433, in_queue=1426117, util=24.65%
        ----------------------------------------------------------------------
    
    Thanked by (2): isunbejo, seriesn


  • That NexusBytes one.... Damn <3

    And why does the Inception Hosting one report such low I/O? Is it cache? RAID?

    Thanked by (1): seriesn
  • Your results on NexusBytes are similar to mine, so it is good to have confirmation of my results (I am not releasing mine yet because I intend to collect about 30 days of data, and I am only halfway there).

    For fio, I suggest you add higher block sizes to compare. The IOPS drop can be very dramatic at higher block sizes, which in my view gives a better view of disk performance because it is quite easy to have inflated IOPS with caching at 4k block sizes. In addition to 4k, I do 64k and 256k (basically 16 and 64 times the data load that has to be processed).

    Thanked by (3): isunbejo, seriesn, cybertech

    Deals and Reviews: LowEndBoxes Review | Avoid dodgy providers with The LEBRE Whitelist | Free hosting (with conditions): Evolution-Host, NanoKVM, FreeMach, ServedEZ | Get expert copyediting and copywriting help at The Write Flow

  • seriesn (OG)
    edited December 2019

    CyberDuck runs bench on cron! The Benchie (bench junkie Haha)

    Thanks dude. Appreciate you even putting us next to all the big boys <3

    @cybertech Boss, on a completely unrelated note, how many servers/VPS are in your possession right now?

    Thanked by (1): cybertech
  • Avoro seems to be the best :0

  • cybertech (OG, Benchmark King)

    @PHP_Backend said:
    That NexusBytes one.... Damn <3

    And why does the Inception Hosting one report such low I/O? Is it cache? RAID?

    They have pretty good write IOPS, actually. The dd numbers, not so much, though it is often debated whether dd is the right way to test I/O, especially on NVMe. That said, I do intend to open a ticket about it once the holidays are over, which may or may not affect these benches.

    @poisson said:
    Your results on NexusBytes are similar to mine, so it is good to have confirmation of my results (I am not releasing mine yet because I intend to collect about 30 days of data, and I am only halfway there).

    For fio, I suggest you add higher block sizes to compare. The IOPS drop can be very dramatic at higher block sizes, which in my view gives a better view of disk performance because it is quite easy to have inflated IOPS with caching at 4k block sizes. In addition to 4k, I do 64k and 256k (basically 16 and 64 times the data load that has to be processed).

    I am not proficient in how fio works; are 64k/256k block sizes also relevant in actual usage? If so, I'll run those tests too :smiley:

    @seriesn said:
    CyberDuck runs bench on cron! The Benchie (bench junkie Haha)

    Thanks dude. Appreciate you even putting us next to all the big boys <3

    @cybertech Boss, on a completely unrelated note, how many servers/VPS are in your possession right now?

    Quack quack! Boss, including one that has not been provisioned yet, I have 14, including yours :)

    @dev said:
    Avoro seems to be the best :0

    Overall on my list, yes :) It also depends on what you want; speed-wise, nothing beats the Ryzens.


  • @cybertech said:

    @poisson said:
    Your results on NexusBytes are similar to mine, so it is good to have confirmation of my results (I am not releasing mine yet because I intend to collect about 30 days of data, and I am only halfway there).

    For fio, I suggest you add higher block sizes to compare. The IOPS drop can be very dramatic at higher block sizes, which in my view gives a better view of disk performance because it is quite easy to have inflated IOPS with caching at 4k block sizes. In addition to 4k, I do 64k and 256k (basically 16 and 64 times the data load that has to be processed).

    I am not proficient in how fio works; are 64k/256k block sizes also relevant in actual usage? If so, I'll run those tests too :smiley:

    Well, 4k block size is quite small, and if you really want to stress the disk, a higher block size will show its capacity to transfer large blocks of data quickly.

    An imperfect analogy but it is like testing how fast a person can move bricks. Being able to move small bricks very quickly is good, but how much does the person slow down when the brick becomes 16 times the weight of the original or 64 times? If the drop in speed is much lower than the increase in weight, you can have more confidence that the person is a higher performer.

    This is why I also test with much larger block size to get an idea of how well the disk performs under heavy load situations. Most SSDs and NVMes have no problems performing with 4k block sizes. All the numbers are usually impressive. The difference starts coming in with higher block sizes. You can try them to see the difference. NVMe generally maintains very good IOPS at 256k compared to SSDs.


  • cybertech (OG, Benchmark King)

    @poisson said:

    @cybertech said:

    @poisson said:
    Your results on NexusBytes are similar to mine, so it is good to have confirmation of my results (I am not releasing mine yet because I intend to collect about 30 days of data, and I am only halfway there).

    For fio, I suggest you add higher block sizes to compare. The IOPS drop can be very dramatic at higher block sizes, which in my view gives a better view of disk performance because it is quite easy to have inflated IOPS with caching at 4k block sizes. In addition to 4k, I do 64k and 256k (basically 16 and 64 times the data load that has to be processed).

    I am not proficient in how fio works; are 64k/256k block sizes also relevant in actual usage? If so, I'll run those tests too :smiley:

    Well, 4k block size is quite small, and if you really want to stress the disk, a higher block size will show its capacity to transfer large blocks of data quickly.

    An imperfect analogy but it is like testing how fast a person can move bricks. Being able to move small bricks very quickly is good, but how much does the person slow down when the brick becomes 16 times the weight of the original or 64 times? If the drop in speed is much lower than the increase in weight, you can have more confidence that the person is a higher performer.

    This is why I also test with much larger block size to get an idea of how well the disk performs under heavy load situations. Most SSDs and NVMes have no problems performing with 4k block sizes. All the numbers are usually impressive. The difference starts coming in with higher block sizes. You can try them to see the difference. NVMe generally maintains very good IOPS at 256k compared to SSDs.

    TBH I copied the test parameters from BinaryLane, which I trusted to give good insight into how to test with fio.

    And thanks for using the brick analogy; being a brick myself, it is now much easier to understand.

    My question is: what is the most common brick size usually being moved? I would then base my future tests on that.


  • @cybertech this is the command I use:

    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50

    You can use the same command but instead of --bs=4k, you can change it to either --bs=64k or --bs=256k. If you want to torture the disk, use 512k or even 1024k.

    The BinaryLane command uses mixed random read-write with 75% reads and 25% writes (--rwmixread=75). This is somewhat real-world, but my command uses a 50-50 mixed random read-write (--rwmixread=50).

    The other flags you don't really need to touch, but --bs and --rwmixread you can tweak to match your scenario. Hope this helps you understand fio better and build tests for your use cases.
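    Putting those flags together, one way to do the whole sweep is to generate the three commands in a loop. This is only a sketch: it prints the commands for review rather than running them (pipe the output to sh to execute), and the ./fio path assumes the fio 2.0.9 build used elsewhere in this thread.

```shell
# Sketch: build the 50-50 random read-write test at increasing block sizes.
# Prints the commands; pipe to sh to actually run them. "./fio" assumes the
# fio 2.0.9 build used elsewhere in this thread.
cmds=$(for bs in 4k 64k 256k; do
  echo "./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
--name=test --filename=test --bs=$bs --iodepth=64 --size=4G \
--readwrite=randrw --rwmixread=50"
done)
printf '%s\n' "$cmds"
```

    Deleting the 4G test file between runs (rm -f test) keeps each run independent, since fio will reuse an existing file.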

    Thanked by (2): cybertech, uptime


  • @cybertech said: TBH I copied the test parameters from BinaryLane, which I trusted to give good insight into how to test with fio.

    You can also check the commands that ServerScope uses. It does several fio tests.

    Thanked by (2): cybertech, uptime
  • cybertech (OG, Benchmark King)
    edited December 2019

    @poisson said:
    @cybertech this is the command I use:

    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50

    You can use the same command but instead of --bs=4k, you can change it to either --bs=64k or --bs=256k. If you want to torture the disk, use 512k or even 1024k.

    The BinaryLane command uses mixed random read-write with 75% reads and 25% writes (--rwmixread=75). This is somewhat real-world, but my command uses a 50-50 mixed random read-write (--rwmixread=50).

    The other flags you don't really need to touch, but --bs and --rwmixread you can tweak to match your scenario. Hope this helps you understand fio better and build tests for your use cases.

    Just tried this on the Inception Hosting box:

    [root@cybertech fio-2.0.9]# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50
    -bash: fio: command not found
    [root@cybertech fio-2.0.9]# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=64k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=50
    test: (g=0): rw=randrw, bs=64K-64K/64K-64K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    test: Laying out IO file(s) (1 file(s) / 4096MB)
    Jobs: 1 (f=1): [m] [100.0% done] [128.9M/130.6M /s] [2061 /2088 iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=2924: Thu Dec 26 16:19:39 2019
    read : io=2043.7MB, bw=116467KB/s, iops=1819 , runt= 17968msec
    write: io=2052.4MB, bw=116965KB/s, iops=1827 , runt= 17968msec
    cpu : usr=1.42%, sys=5.68%, ctx=29418, majf=0, minf=4
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
    issued : total=r=32698/w=32838/d=0, short=r=0/w=0/d=0

    Run status group 0 (all jobs):
    READ: io=2043.7MB, aggrb=116466KB/s, minb=116466KB/s, maxb=116466KB/s, mint=17968msec, maxt=17968msec
    WRITE: io=2052.4MB, aggrb=116965KB/s, minb=116965KB/s, maxb=116965KB/s, mint=17968msec, maxt=17968msec

    Disk stats (read/write):
    vda: ios=32554/32727, merge=0/0, ticks=1073261/56669, in_queue=1102262, util=45.49%

    Thanked by (2)uptime seriesn

    I bench YABS 24/7/365 unless it's a leap year.

  • @cybertech it is a bit messy to read, so I will extract the lines to look out for from your earlier Inception KVM and the one you just posted:

    These were from your previously posted results, using a 4k block size and 75% read / 25% write parameters:

    read : io=3068.3MB, bw=126712KB/s, iops=31678, runt= 24795msec
    write: io=1027.9MB, bw=42447KB/s, iops=10611, runt= 24795msec
    

    And these are from the results you just posted, using a 64k block size and 50-50 read-write operations:

    read : io=2043.7MB, bw=116467KB/s, iops=1819 , runt= 17968msec
    write: io=2052.4MB, bw=116965KB/s, iops=1827 , runt= 17968msec
    

    You can now see that the impressive IOPS numbers dropped drastically when the block size went up 16 times to 64k.
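    A quick back-of-the-envelope check (my own arithmetic, using the read lines quoted above) shows why: bandwidth is roughly IOPS × block size, so similar bandwidth at a 16× larger block size necessarily means roughly 1/16 the IOPS:

    ```shell
    # bandwidth (KB/s) ~= IOPS x block size (KB), from the two read lines above
    echo "4k run:  $((31678 * 4)) KB/s"   # 126712 KB/s, matching the quoted bw
    echo "64k run: $((1819 * 64)) KB/s"   # 116416 KB/s, close to the quoted 116467 KB/s
    ```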

    I will post one of my results from @seriesn's Germany NVMe, run with the same fio commands, showing the drop in IOPS as the block size increases from 4k to 64k to 256k, so that you get a better idea of what the numbers mean.

    NexusBytes Germany NVMe fio 4k random read-write (50-50)

    read : io=2046.9MB, bw=344567KB/s, iops=86141, runt=  6083msec
    write: io=2049.2MB, bw=344945KB/s, iops=86236, runt=  6083msec
    

    NexusBytes Germany NVMe fio 64k random read-write (50-50)

    read : io=2043.3MB, bw=2359.5MB/s, iops=37750, runt=   866msec
    write: io=2052.8MB, bw=2370.4MB/s, iops=37926, runt=   866msec
    

    NexusBytes Germany NVMe fio 256k random read-write (50-50)

    read : io=2064.0MB, bw=3026.5MB/s, iops=12105, runt=   682msec
    write: io=2032.0MB, bw=2979.5MB/s, iops=11917, runt=   682msec
    

    See how the IOPS dropped only by a factor of about 7 from 4k to 256k, although the block size increased 64 times? If you look at the drop from 4k to 64k, it is only a factor of about 2.

    This is expected of NVMe (otherwise you should ask for your money back), although this is one of the more impressive NVMe drives I have tested (others were excellent, but @seriesn's stuff is mind-blowing).
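    To put rough numbers on that scaling (my own integer arithmetic from the quoted read lines):

    ```shell
    # IOPS drop factors as block size grows (read IOPS from the three runs)
    echo "4k -> 64k:  ~$((86141 / 37750))x fewer IOPS"   # ~2x
    echo "4k -> 256k: ~$((86141 / 12105))x fewer IOPS"   # ~7x
    # ...while total bandwidth keeps climbing (IOPS x block size in KB -> MB/s)
    echo "64k bw:  ~$((37750 * 64 / 1024)) MB/s"         # ~2359 MB/s
    echo "256k bw: ~$((12105 * 256 / 1024)) MB/s"        # ~3026 MB/s
    ```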

    Thanked by (2)cybertech uptime

    Deals and Reviews: LowEndBoxes Review | Avoid dodgy providers with The LEBRE Whitelist | Free hosting (with conditions): Evolution-Host, NanoKVM, FreeMach, ServedEZ | Get expert copyediting and copywriting help at The Write Flow

  • @cybertech said:

    @PHP_Backend said:
    That NexusBytes one.... Damn <3

    And why does the Inception Hosting one report such low IO? Is it cache? RAID?

    They have pretty good write IOPS, actually. However, the DD numbers are not so good, although it is often debated whether DD is the correct way to test I/O, especially for NVMe. That being said, I do intend to open a ticket about it once the holidays are over, which may or may not affect these benches.

    @poisson said:
    Your results on Nexus Bytes are similar to mine, so it is good to have confirmation of my results (I am not releasing mine yet because I intend to collect about 30 days of data, and I am only halfway there).

    For fio, I suggest you add higher block sizes to compare. The IOPS drop can be very dramatic at higher block sizes, which in my view gives a better view of disk performance because it is quite easy to have inflated IOPS with caching at 4k block sizes. In addition to 4k, I do 64k and 256k (basically 16 and 64 times the data load that has to be processed).

    I am not proficient in how fio works; are 64k/256k block sizes also relevant in actual usage? If so, I'll run those tests too :smiley:

    @seriesn said:
    CyberDuck runs bench on cron! The Benchie (bench junkie Haha)

    Thanks dude. Appreciate you even putting us next to all the big boys <3

    @cybertech Boss, on a completely unrelated note, how many servers/VPS are in your possession right now?

    quack quack! Boss, counting one that has not been provisioned yet, I have 14, including yours :)

    @dev said:
    Avoro seems to be the best :0

    Overall on my list, yes :) It also depends on what you want; speed-wise, nothing beats the Ryzens.

    Jesus! Dial 1800-Server-Addiction. You are a benchie?

    Thanked by (1)cybertech
  • cybertech OGBenchmark King

    @seriesn said:
    [earlier quotes snipped]

    Jesus! Dial 1800-Server-Addiction. You are a benchie?

    To justify my addiction, I have: one production, one hot backup, one test environment, one direct-play Plex, and one transcoding Plex.

    OK, I've run out of ideas for the 9 remaining.

    Thanked by (1)poisson

    I bench YABS 24/7/365 unless it's a leap year.

  • @cybertech said:

    [earlier quotes snipped]

    To justify my addiction, I have: one production, one hot backup, one test environment, one direct-play Plex, and one transcoding Plex.

    OK, I've run out of ideas for the 9 remaining.

    Backup of production. Backup of that. Backup configuration of that. Replicate with anycast DNS and rsync, and you get a poor man's cloud with the rest :D

  • cybertech OGBenchmark King

    @PHP_Backend said:
    That NexusBytes one.... Damn <3

    And, why Inception Hosting one reports so much low IO? Is it cache? raid?

    Just did a fresh reinstall of the Inception Hosting VPS, and it seems the NVMe is back up to speed:

    ---------------------------------------------------------------------
    I/O speed(1st run)   : 387 MB/s
    I/O speed(2nd run)   : 447 MB/s
    I/O speed(3rd run)   : 496 MB/s
    Average I/O speed    : 443.3 MB/s
    ----------------------------------------------------------------------
    
    Thanked by (1)PHP_Backend

    I bench YABS 24/7/365 unless it's a leap year.

  • @cybertech said:

    [earlier quotes snipped]

    To justify my addiction, I have: one production, one hot backup, one test environment, one direct-play Plex, and one transcoding Plex.

    OK, I've run out of ideas for the 9 remaining.

    Looks like you and I are on the same level of addiction. I have put most of my idlers to work to justify their existence, even if that work is just running tmux and htop :-)

    Neat idea with two Plex servers, although that would be a nightmare to keep in sync.

    Thanked by (1)cybertech

    Get the best deal on your next VPS or Shared/Reseller hosting from RacknerdTracker.com - The original aff garden.

  •     CPU: SHA256-hashing 500 MB
            0.394 seconds
        CPU: bzip2-compressing 500 MB
            3.887 seconds
        CPU: AES-encrypting 500 MB
            0.705 seconds
    

    Thanked by (1)cybertech