HostHatch - New non-sale pricing

Daniel OG
edited April 17 in General

I just noticed that HostHatch have revamped their standard (non-sale) pricing and moved the plans to their new platform with AMD EPYC CPUs and Samsung Gen4 NVMe drives.

Old pricing vs new pricing.

There are definitely tradeoffs: the newer plans come with less disk space and therefore a worse memory-to-disk ratio, which is my main pain point with HostHatch's plans. It's mostly a nice improvement though.

4 dedicated cores + 16GB RAM + 100GB storage used to be $40/month, but a kinda similar plan with 4 cores (2 dedicated and 2 fair share) + 16GB RAM + 75GB storage is now only $15/month. That's pretty competitive if you look at what other providers with higher-end AMD Ryzen or EPYC processors (like BuyVM, Nexusbytes, etc) offer for that price.

The old pricing started at $5/month for one core (50% dedicated), 2GB RAM and 20GB space, going up to $160/month for 16 dedicated cores, 64GB RAM and 340GB space. The new pricing starts at $4/month for one core (fair share), 2GB RAM and 10GB space, going up to $69/month (nice) for 16 cores (8 dedicated, 8 fair share), 96GB RAM and 250GB space.

I think that's actually some of the best pricing I've seen for a server with AMD EPYC, 96GB RAM, and NVMe disks in countries like the USA and Australia. If only the disk space was a bit larger! Disk only being 2.5x the size of the RAM is an unusual ratio.
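A quick calculation of the disk-per-RAM ratios from the plan sizes quoted above (plan labels are mine, numbers are from the post; purely illustrative):

```python
# Rough disk-per-RAM ratios for the plan sizes quoted above.
# Plan labels are mine; the GB figures come from the post.
plans = {
    "old entry ($5/m)": {"ram_gb": 2,  "disk_gb": 20},
    "old top ($160/m)": {"ram_gb": 64, "disk_gb": 340},
    "new entry ($4/m)": {"ram_gb": 2,  "disk_gb": 10},
    "new top ($69/m)":  {"ram_gb": 96, "disk_gb": 250},
}

for name, p in plans.items():
    print(f"{name}: {p['disk_gb'] / p['ram_gb']:.1f}x disk per GB of RAM")
```

The top new plan works out to ~2.6x disk per GB of RAM, versus ~5.3x on the old top plan, which is why the disk feels tight.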

Their support is still pretty slow/unresponsive at times. Sometimes my tickets take over a month to get a response. That's probably one of their tradeoffs to save money, but I hope it gets a bit better over time - it's the one thing that seems to hold them back, based on other customers' comments on OGF. The plans are really great when they work, and I personally haven't had any major issues (knock on wood).

(no I'm not affiliated with them in any way - these are just my thoughts as a customer)

Comments

  • Their standard prices are good, and their sales have been amazing. Slow support is a seemingly necessary tradeoff for the mad deals.

    I just fear that with prices of everything increasing, we won't be seeing deals like the last few years anymore.

    I use storage boxes or dedis when I need storage, so the less storage the better for me, especially since storage costs increased. My favorite HH sales have been those with double ram, stackable so I can get 4x ram with a few cores and minimal storage.

    Hope they won't be getting rid of their old hardware. I'd take an older Intel CPU over the EPYC ones if there are sales for those in future.

  • Does HH prioritise their regular priced customers for support?

  • Wonder what the fair share policy is now for even the lowest plan? Still 50% you think?

  • corbpie OG
    edited April 18

    The only thing I see HostHatch being good for currently is storage (if you got a good BF/CM deal). Their support is non-existent and the performance on their NVMe line has deteriorated.

  • @corbpie said:
    The only thing I see HostHatch being good for currently is storage (if you got a good BF/CM deal). Their support is non-existent and the performance on their NVMe line has deteriorated.

    So you've tried their non-LET plans and their support is just as bad?

  • @skorous said:

    @corbpie said:
    The only thing I see HostHatch being good for currently is storage (if you got a good BF/CM deal). Their support is non-existent and the performance on their NVMe line has deteriorated.

    So you've tried their non-LET plans and their support is just as bad?

    Assuming they'd respond quicker to a $4 p/m standard VPS?

  • Daniel OG
    edited April 18

    @caracal said:
    Does HH prioritise their regular priced customers for support?

    I think this is the case but I don't have a regular priced plan at the moment so I'm not sure. I'm thinking of getting one though.

    @Cybr said: I just fear that with prices of everything increasing, we won't be seeing deals like the last few years anymore.

    Prices have to go up eventually - they can't remain the same forever. Hardware is more expensive due to the shortages, electricity has gone up quite a bit in price, IPv4 addresses are hard to find and cost way more now.

    My guess would be that their new servers were at least $3000-3500 per server, plus the cost of labour for building the servers, plus colo costs... not cheap.

    Hope they won't be getting rid of their old hardware. I'd take an older Intel CPU over the EPYC ones if there are sales for those in future.

    They said somewhere on this forum that there's a higher profit margin on the older servers vs the new ones, so they don't mind if people stick with the old Intels.

    @Salomon123 said:
    Wonder what the fair share policy is now for even the lowest plan? Still 50% you think?

    I'm not sure, and their Acceptable Use Policy doesn't even have the words "fair share" in it, so it seems like it's currently undefined 🤔

    @corbpie said: Their support is non existent

    Yeah and I mentioned this in my post. Their deals are good, but support is definitely lacking. In my experience it hasn't been too bad, but I haven't had any major issues.

    @corbpie said: the performance on their nvm line has deteriorated

    FWIW I haven't had issues with my NVMe VPSes, and I've got a bunch of them.

    BF2020 in LA:

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 85.42 MB/s   (21.3k) | 721.50 MB/s  (11.2k)
    Write      | 85.65 MB/s   (21.4k) | 725.30 MB/s  (11.3k)
    Total      | 171.08 MB/s  (42.7k) | 1.44 GB/s    (22.6k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 832.80 MB/s   (1.6k) | 1.11 GB/s     (1.0k)
    Write      | 877.05 MB/s   (1.7k) | 1.18 GB/s     (1.1k)
    Total      | 1.70 GB/s     (3.3k) | 2.30 GB/s     (2.2k)
    

    BF2021 in LA:

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 272.49 MB/s  (68.1k) | 2.97 GB/s    (46.4k)
    Write      | 273.21 MB/s  (68.3k) | 2.99 GB/s    (46.7k)
    Total      | 545.70 MB/s (136.4k) | 5.96 GB/s    (93.2k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 9.54 GB/s    (18.6k) | 8.71 GB/s     (8.5k)
    Write      | 10.05 GB/s   (19.6k) | 9.29 GB/s     (9.0k)
    Total      | 19.59 GB/s   (38.2k) | 18.00 GB/s   (17.5k)
    

    Both are far beyond what most use cases ever need. It's expected that the BF2021 is faster, since it costs more and uses an AMD EPYC processor plus Gen4 NVMe drives. I imagine the disk performance of their standard-priced plans is similar now too.
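As a sanity check on numbers like these, the throughput and IOPS columns of a fio report should roughly agree via throughput ≈ IOPS × block size. A small sketch using the BF2020 LA read figures above (the helper name is mine):

```python
# Cross-check a fio line: throughput should roughly equal IOPS × block size.
# Figures are from the BF2020 LA result above; the helper name is mine.
def iops_to_mbs(iops: float, block_bytes: int) -> float:
    """Approximate throughput in MB/s (decimal, as YABS reports it)."""
    return iops * block_bytes / 1e6

print(f"{iops_to_mbs(21_300, 4096):.1f} MB/s")   # 4k reads: ~87 vs reported 85.42
print(f"{iops_to_mbs(11_200, 65536):.1f} MB/s")  # 64k reads: ~734 vs reported 721.50
```

The small discrepancies are just rounding in the reported IOPS.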

    Thanked by (1): Cybr
  • @corbpie said:

    @skorous said:

    @corbpie said:
    The only thing I see HostHatch being good for currently is storage (if you got a good BF/CM deal). Their support is non-existent and the performance on their NVMe line has deteriorated.

    So you've tried their non-LET plans and their support is just as bad?

    Assuming they'd respond quicker to a $4 p/m standard VPS?

    So that's a no then?

  • I've got something like a dozen-plus services and have mixed feelings. Most of them are very reliable and suitable for what I use them for. I've had 3 storage servers with them, one had the RAID card die and they were good about getting it set up again. Pricing is hard to beat.

    Main issues for me are the network and support. They've got a network misconfiguration at one site and that has gone on for months with no resolution and the ticket still open. Network a bit flaky at some locations (not sure if they are single-homed or not). And if you care about it, IPv6 still seems to be broken in various locations with no prospect of rectification. Support is hit and miss depending on who picks it up - sometimes it is really good and fast, other times it seems to go into a black hole.

    While the new pricing/equipment is nice, I don't see it fixing the issues in the above paragraph. So probably my attitude is unchanged.

  • @Daniel said:
    They said somewhere on this forum that there's a higher profit margin on the older servers vs the new ones, so they don't mind if people stick with the old Intels.

    Will be looking out for some amazing sales when they have a surplus of those old nodes again.

    Doubt they're making much profit on the 32GB ram servers I have in Australia and Asia for under $12/mo. Those ones idle really well.

    Past HH sales have spoiled me, which makes all deals these days seem expensive. Need some more of those double stacked double ram double discounts!

  • @Daniel said: FWIW I haven't had issues with my NVMe VPSes, and I've got a bunch of them.

    That's fast, is that the new nodes?

  • Daniel OG
    edited April 18

    @Cybr said: Past HH sales have spoiled me, which makes all deals these days seem expensive.

    It's definitely expensive compared to their sale prices, but it's a pretty decent price if you compare to other providers.

    ~4GB RAM, ~60-80GB space for ~$15-20 is a common price point for regular (non-sale) pricing. I was using BuyVM for close to 10 years, and paying $15/month for 1 dedicated core, 4GB RAM and 80GB disk. Nexusbytes is similar: 4 shared cores, 4GB RAM, 60GB space for $12.80/month. DigitalOcean (their "Premium" nodes) and Vultr are around $24/m for something similar.

    Now HostHatch have 4 cores (2 dedicated + 2 fair share), 16GB RAM and 75GB space for the same price point of $15/month!

    Sure, Contabo is a bit cheaper and you get more disk space (6 shared cores, 16GB RAM, 100GB NVMe for $12/month), but they oversell more and in my experience their performance isn't as good as HostHatch's AMD nodes.

    @corbpie said:

    @Daniel said: FWIW I haven't had issues with my NVMe VPSes, and I've got a bunch of them.

    That's fast, is that the new nodes?

    Sorry, I should have mentioned that. BF2020 is an old Intel node. BF2021 is a new AMD node.

    I ran a YABS on a VPS in Sydney Australia (old BF2020 node) and the disk is definitely slower on that one, but still not toooo bad (fine for my use cases):

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 78.43 MB/s   (19.6k) | 192.10 MB/s   (3.0k)
    Write      | 78.63 MB/s   (19.6k) | 193.11 MB/s   (3.0k)
    Total      | 157.07 MB/s  (39.2k) | 385.22 MB/s   (6.0k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 335.39 MB/s    (655) | 373.28 MB/s    (364)
    Write      | 353.21 MB/s    (689) | 398.14 MB/s    (388)
    Total      | 688.60 MB/s   (1.3k) | 771.43 MB/s    (752)
    
  • cybertech OG (Benchmark King)

    @Daniel said:
    I ran a YABS on a VPS in Sydney Australia (old BF2020 node) and the disk is definitely slower on that one, but still not toooo bad (fine for my use cases):

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 78.43 MB/s   (19.6k) | 192.10 MB/s   (3.0k)
    Write      | 78.63 MB/s   (19.6k) | 193.11 MB/s   (3.0k)
    Total      | 157.07 MB/s  (39.2k) | 385.22 MB/s   (6.0k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 335.39 MB/s    (655) | 373.28 MB/s    (364)
    Write      | 353.21 MB/s    (689) | 398.14 MB/s    (388)
    Total      | 688.60 MB/s   (1.3k) | 771.43 MB/s    (752)
    

    This looks like either the NVMe is filling up, or it's being limited by CPU. It was considered one of the better results just 3-4 years ago, though.

    I bench YABS 24/7/365 unless it's a leap year.

  • @Daniel said:
    I ran a YABS on a VPS in Sydney Australia (old BF2020 node) and the disk is definitely slower on that one, but still not toooo bad (fine for my use cases):

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 78.43 MB/s   (19.6k) | 192.10 MB/s   (3.0k)
    Write      | 78.63 MB/s   (19.6k) | 193.11 MB/s   (3.0k)
    Total      | 157.07 MB/s  (39.2k) | 385.22 MB/s   (6.0k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 335.39 MB/s    (655) | 373.28 MB/s    (364)
    Write      | 353.21 MB/s    (689) | 398.14 MB/s    (388)
    Total      | 688.60 MB/s   (1.3k) | 771.43 MB/s    (752)
    

    My Intel NVMe IO in Sydney is even worse:

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 101.67 MB/s  (25.4k) | 249.80 MB/s   (3.9k)
    Write      | 101.94 MB/s  (25.4k) | 251.12 MB/s   (3.9k)
    Total      | 203.61 MB/s  (50.9k) | 500.93 MB/s   (7.8k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 249.46 MB/s    (487) | 250.31 MB/s    (244)
    Write      | 262.72 MB/s    (513) | 266.98 MB/s    (260)
    Total      | 512.18 MB/s   (1.0k) | 517.30 MB/s    (504)
    

    My Intel NVMe IO in Hong Kong looks better for big blocks, but worse for small since the node seems to have more IO load currently (bigger IO latency spikes too).

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 34.42 MB/s    (8.6k) | 532.36 MB/s   (8.3k)
    Write      | 34.52 MB/s    (8.6k) | 535.16 MB/s   (8.3k)
    Total      | 68.95 MB/s   (17.2k) | 1.06 GB/s    (16.6k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 917.32 MB/s   (1.7k) | 729.01 MB/s    (711)
    Write      | 966.06 MB/s   (1.8k) | 777.56 MB/s    (759)
    Total      | 1.88 GB/s     (3.6k) | 1.50 GB/s     (1.4k)
    

    Minimum NVMe latency in Sydney is also 566us, compared to 220us on the same package in Hong Kong.

    Too bad I never benched the IO in Sydney over a year ago, so I don't know if it has degraded or has always had that bottleneck.

  • Meanwhile HostHatch Chicago...

    Processor  : AMD EPYC 7502 32-Core Processor
    CPU cores  : 3 @ 2495.312 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 15.7 GiB
    Swap       : 1024.0 MiB
    Disk       : 38.4 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 14.03 MB/s    (3.5k) | 196.95 MB/s   (3.0k)
    Write      | 14.04 MB/s    (3.5k) | 197.99 MB/s   (3.0k)
    Total      | 28.07 MB/s    (7.0k) | 394.94 MB/s   (6.1k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 23.65 MB/s      (46) | 16.92 MB/s      (16)
    Write      | 24.91 MB/s      (48) | 18.62 MB/s      (18)
    Total      | 48.57 MB/s      (94) | 35.54 MB/s      (34)
    
  • @Cybr said: Minimum NVMe latency in Sydney is also 566us, compared to 220us on the same package in Hong Kong.

    Do you actually notice this difference in day-to-day usage of your VPS though? Sure, the numbers differ, but there are very few use cases that will notice an extra 300µs of latency. The extra latency is likely due to a longer queue depth (that is, more requests hitting the disk at the same time).

    Some people obsess over benchmarks but just end up hosting small sites on their VPSes, where disk speed is irrelevant since the entire site fits into RAM cache.
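The queue-depth point can be made concrete with Little's law (average in-flight requests ≈ IOPS × latency). A very rough sketch using the Sydney and Hong Kong 4k figures quoted in this thread; note it pairs sustained IOPS with the *minimum* latency, so treat it as an illustration only:

```python
# Little's law: average in-flight requests = arrival rate x time in system.
# Very rough sketch using figures quoted in this thread; it pairs sustained
# total 4k IOPS with the *minimum* latency, so it's illustrative only.
def inflight(iops: float, latency_s: float) -> float:
    return iops * latency_s

sydney = inflight(39_200, 566e-6)      # total 4k IOPS x min latency
hong_kong = inflight(17_200, 220e-6)

print(f"Sydney: ~{sydney:.0f} IOs in flight, Hong Kong: ~{hong_kong:.0f}")
```

Roughly ~22 in-flight IOs versus ~4, which is consistent with a more loaded node rather than slower drives.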

    Thanked by (1): skorous
  • Daniel OG
    edited April 18

    @Cybr said:
    Meanwhile HostHatch Chicago...

    Processor  : AMD EPYC 7502 32-Core Processor
    CPU cores  : 3 @ 2495.312 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 15.7 GiB
    Swap       : 1024.0 MiB
    Disk       : 38.4 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 14.03 MB/s    (3.5k) | 196.95 MB/s   (3.0k)
    Write      | 14.04 MB/s    (3.5k) | 197.99 MB/s   (3.0k)
    Total      | 28.07 MB/s    (7.0k) | 394.94 MB/s   (6.1k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 23.65 MB/s      (46) | 16.92 MB/s      (16)
    Write      | 24.91 MB/s      (48) | 18.62 MB/s      (18)
    Total      | 48.57 MB/s      (94) | 35.54 MB/s      (34)
    

    Oh... oh no.

    I mean, it kinda looks like either NVMe or SATA SSDs (the 4k IOPS are too high for it to be HDDs), but why are the speeds so slow, and why do they get slower at larger block sizes?

    Seems like Chicago is not a good location :(

  • cybertech OG (Benchmark King)

    @Cybr said:
    Meanwhile HostHatch Chicago...

    Processor  : AMD EPYC 7502 32-Core Processor
    CPU cores  : 3 @ 2495.312 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 15.7 GiB
    Swap       : 1024.0 MiB
    Disk       : 38.4 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 14.03 MB/s    (3.5k) | 196.95 MB/s   (3.0k)
    Write      | 14.04 MB/s    (3.5k) | 197.99 MB/s   (3.0k)
    Total      | 28.07 MB/s    (7.0k) | 394.94 MB/s   (6.1k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 23.65 MB/s      (46) | 16.92 MB/s      (16)
    Write      | 24.91 MB/s      (48) | 18.62 MB/s      (18)
    Total      | 48.57 MB/s      (94) | 35.54 MB/s      (34)
    

    This will be problematic when running backup jobs or importing databases.


  • @Daniel said:

    @Cybr said: Minimum NVMe latency in Sydney is also 566us, compared to 220us on the same package in Hong Kong.

    Do you actually notice this difference in day-to-day usage of your VPS though? Sure, the numbers differ, but there are very few use cases that will notice an extra 300µs of latency. The extra latency is likely due to a longer queue depth (that is, more requests hitting the disk at the same time).

    Some people obsess over benchmarks but just end up hosting small sites on their VPSes, where disk speed is irrelevant since the entire site fits into RAM cache.

    I would for some of the platforms I host, but I do use dedis with NVMe for most serious production applications.

    For real-time low latency applications, any time the disk has to be hit, the IO request needs to be completed as fast as possible. So both throughput and request latency matters. Any IO is done from background threads but it still needs to keep up with deadlines. Even just 1µs can be an eternity in real-time.

    You're right of course that most people wouldn't notice these kinds of differences for most applications.

    @Daniel said:
    Oh... oh no.

    I mean, it kinda looks like either NVMe or SATA SSDs (the 4k IOPS are too high for it to be HDDs), but why are the speeds so slow, and why do they get slower at larger block sizes?

    Seems like Chicago is not a good location :(

    I know they've been having issues in Chicago recently with hardware failures. It might be that they had to migrate too many people to some nodes due to that.

    @cybertech said:
    this will be problematic when running backups jobs and import database

    That server hasn't had any issues with idling over the last year. Perhaps it was born for that purpose.

  • bdl OG
    edited April 18

    @Daniel said:
    Seems like Chicago is not a good location :(

    It was really good about two years ago, but I've found the network especially has been getting increasingly laggy and slow - and for the past 3-4 months or so it's been worse than my HH LA service :(

  • HostHatch replied on OGF:

    @hosthatch said:
    Thank you for the feedback on the new plans. Note that we just "soft-launched" them for now. We're going to launch the new website soon, and then start migrating legacy customers into the new platform. There are still a few basics that need to be completed for the new panel though (like IPv6 reverse DNS).

    As for the plans - they are not yet the final plans, and we might make a few changes to them in the coming days and weeks. But the RAM/disk ratios - doing higher storage for each plan would raise the pricing by a considerable percentage, because you can only fit a certain number of hot-swap NVMe drives into a single server, and it starts getting way more expensive if you try to go above a certain number.

    So instead we will build a high-performance pure NVMe block storage product, that you can attach to these VMs if you really do need more storage.

    We've been super impressed with Hetzner Cloud, and wanted to do something similar - and for it to be available in 16 (and more) locations - with the same, predictable performance and price.

    @pbx said: Would be interested as well to know if there is a difference. I would assume that they are on the same nodes, though. At least on some locations.

    The new plans are deployed on new hardware, and it's the same exact Dell EMC servers, with AMD Milan CPUs and Samsung enterprise NVMes in all locations. We haven't cut any corners on these servers. They do not share nodes with the older plans. We also don't plan to do any crazy promotions for them (as was the case on BF21), as we've already priced them very well compared to what everyone else on the market offers.

    We might still do some promotions here and there with "legacy plans" on the older E5 servers, but the performance will be significantly different for obvious reasons.

  • @tetech said:
    Network a bit flaky at some locations (not sure if they are single-homed or not). And if you care about it, IPv6 still seems to be broken in various locations with no prospect of rectification. Support is hit and miss depending on who picks it up - sometimes it is really good and fast, other times it seems to go into a black hole.

    They don't run their own network at any location from what I recall, so in most locations they're single-homed, but to a multihomed provider. LAX and Chicago are Psychz, so I'd recommend avoiding them like the plague. I actually made a list of all their upstreams a while back:

    HKG: M247
    Amsterdam: HostCircle (and Psychz)
    Los Angeles: Psychz
    Stockholm: Obenetwork, GleSYS
    Australia: Host Universal
    Vienna: M247
    Oslo: Obenetwork
    New York: M247
    Chicago: Psychz
    London: M247
    Zurich: M247
    Warsaw: M247
    Milan: M247
    Madrid: M247

    I've seen the best network performance out of Stockholm and Amsterdam; LAX has all the issues inherent with Psychz. I mostly have the storage VMs, but I'm also idling a compute product in each location. Overall, all locations besides Chicago are usable, but some are lacking IPv6 support. I've given up on a few and just used tunnels from Route48.

    Thanked by (3): tetech, nick_, webcraft
  • I have 3 VMs and they work well.
    I hope to see more deals like those of the last few years.

  • @fluttershy said:

    @tetech said:
    Network a bit flaky at some locations (not sure if they are single-homed or not). And if you care about it, IPv6 still seems to be broken in various locations with no prospect of rectification. Support is hit and miss depending on who picks it up - sometimes it is really good and fast, other times it seems to go into a black hole.

    They don't run their own network at any location from what I recall, so in most locations they're single-homed, but to a multihomed provider. LAX and Chicago are Psychz, so I'd recommend avoiding them like the plague. I actually made a list of all their upstreams a while back:

    HKG: M247
    Amsterdam: HostCircle (and Psychz)
    Los Angeles: Psychz
    Stockholm: Obenetwork, GleSYS
    Australia: Host Universal
    Vienna: M247
    Oslo: Obenetwork
    New York: M247
    Chicago: Psychz
    London: M247
    Zurich: M247
    Warsaw: M247
    Milan: M247
    Madrid: M247

    I've seen the best network performance out of Stockholm and Amsterdam; LAX has all the issues inherent with Psychz. I mostly have the storage VMs, but I'm also idling a compute product in each location. Overall, all locations besides Chicago are usable, but some are lacking IPv6 support. I've given up on a few and just used tunnels from Route48.

    Also, Singapore: M247.

    Thanked by (1): webcraft
  • cybertech OG (Benchmark King)
    edited April 19

    Stockholm has really high bandwidth.

    [email protected]:~# wget https://speed.hetzner.de/10GB.bin
    --2022-04-19 17:06:23--  https://speed.hetzner.de/10GB.bin
    Resolving speed.hetzner.de (speed.hetzner.de)... 2a01:4f8:0:59ed::2, 88.198.248.254
    Connecting to speed.hetzner.de (speed.hetzner.de)|2a01:4f8:0:59ed::2|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 10485760000 (9.8G) [application/octet-stream]
    Saving to: ‘10GB.bin’
    
    10GB.bin            100%[================>]   9.77G  99.2MB/s    in 1m 41s
    
    2022-04-19 17:08:05 (98.7 MB/s) - ‘10GB.bin’ saved [10485760000/10485760000]
    
    
    [email protected]:~# scp 10* [email protected]:
    10GB.bin                                   100%   10GB  57.5MB/s   02:53
    
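Converting that wget run into a line rate: note that wget's "9.77G" and "98.7 MB/s" are base-1024 (GiB/MiB), and the values below are read straight off the output above (variable names are mine):

```python
# wget reports base-1024 units ("9.77G" is 10485760000 / 2**30 GiB).
size_bytes = 10_485_760_000   # file size from the wget output above
seconds = 101                 # "in 1m 41s"

mib_per_s = size_bytes / seconds / 2**20     # matches wget's ~98.7 "MB/s"
mbit_per_s = size_bytes * 8 / seconds / 1e6  # line rate in Mbit/s

print(f"{mib_per_s:.1f} MiB/s = {mbit_per_s:.0f} Mbit/s")
```

That works out to roughly 830 Mbit/s sustained, i.e. most of a gigabit port.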
    Thanked by (1): stevewatson301


  • @hosthatch please consider updating https://hosthatch.com/benchmarks m_ _m (5 years in this industry is a really long time)

    Contribute your idling VPS/dedi, Android or iOS devices to fight COVID
