@Shot² said:
VM shut down on me, and I did not get the email about a maintenance... for it was indeed a maintenance. Weird.
(For a moment I thought the whole of Eastern Yourop had just been sizzled to a crisp by some random nuclear strike.)
Should've been sent out yesterday (I got mine around 10:25 AM UTC on December 9th); caught in some spam filters, perhaps?
Strangely, I got every email from Host-C lately (my renewal, the Nov 27 maintenance, the 'look back at 2025') but nothing about a maintenance on my VM (and it does not show in the control panel's 'Email history' either).
Very amateurish, I'm this close 🤏 to opening a PayPal dispute because I lost $bazillions during the 5 min downtime. /s
I think I know why that happened.
I hope you got back the trillions, @Shot² - mail stuff should be fixed by now.
Sorry for that, by the way; that was on me.
Moving forward:
Over the next few days, your Fiat 500 Xmass Promo Deal VPS will be upgraded for free to a Xeon Scale Gen 2 CPU.
To avoid any data corruption due to the different CPU generations, this will be done as an offline migration.
During this process, your service may be temporarily offline for about 5–10 minutes. If you notice your VPS is offline during this window, please wait a few minutes for the migration to complete.
Thank you for your understanding, and happy holidays! 🎄
Others will follow, like the $1–$10/year Xmass and 2024 BF deals (LES and OGF also).
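For those wondering why this is an offline move rather than a live migration: live migration is only comfortable when both nodes expose the same CPU features to the guest, and a guest already running on one feature set can misbehave if that set changes under it. The snippet below is only a toy sketch of that check, with invented flag sets rather than anything read from the actual E5-V4 or Scale Gen 2 nodes.

```python
# Toy illustration, not Host-C's tooling: flag sets are invented for the example.
source_flags = {"sse4_2", "aes", "avx", "avx2"}             # pretend old node
target_flags = {"sse4_2", "aes", "avx", "avx2", "avx512f"}  # pretend new node

missing_on_target = source_flags - target_flags
if missing_on_target:
    print("live migration risky; flags missing on target:", sorted(missing_on_target))
else:
    print("flag sets line up, but an offline move across generations is still the simple, safe option")
```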
Shit happens, man, it is all good. We all make mistakes, and I do not hold anything against anyone.
OK, just found out the cause of the random power-off for the customers that did not get the email.
It seems, oh boy........
that the fella in the DC is a bit color-blind and powered off the wrong fucking node, even though the correct one was off and blinking like a bitch on a street corner holding a "pick me, pick me" sign.
I swear, DC staff these days are like.... never mind.
So there you have it, folks: "JOE" took down the wrong node.
Thank GOD these guys do not work in a nuclear power station - or do their relatives work there? Didn't quite figure that one out yet.
Minion status revoked!!!! Nah, be kind to the guy - I am colorblind as well, so I know what he is going through.
Here is the problem: JOE was not my minion.
Stu, the resident minion, did his task correctly - I know because I checked the logs.
JOE from the DC did the fuckup.
Either way, I am happy this was not an upgrade done on a cooling system in a nuclear power station.
I just love 💕 this job..... Sometimes 🤣
Still, man, I have been where he is. Joe is the deckhand that everyone forgets. Be kind to them; they usually run on caffeine and no sleep. Anyways, it is okay by me and all good. Shit happens.
@root said: My VPS backup feature is not working in the client area - @host_c - is this related to the guy pulling cables?
NO, that is related to SW fuckery: your backup gets done, and it is visible on the backup server, but since the latest and greatest SW update it does not get shown in the user panel.
We have a fix for it, but we are waiting for a maintenance window and trying to figure out how to apply it without taking the whole infrastructure offline for 2 hours again. It is in progress; we will find a way.
Trying to document the location of my servers using LOC DNS records... Where (approximately) is the famous "DIGI (RCS-RDS) DC2" in Oradea/Bihor located? Datacenter-specialized websites show nothing there (!), while "DIGI" shops are all over the damned city.
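Side note for anyone documenting locations the same way: a minimal sketch of reading a LOC record, assuming dnspython is installed; the hostname is a placeholder, and the coordinates in the comment are rough Oradea city-centre values, not the DC2 address (which is exactly what nobody publishes).

```python
# Minimal sketch, assuming dnspython (pip install dnspython) and a placeholder name.
# The published zone entry would look roughly like:
#   vps1.example.net. IN LOC 47 4 0.000 N 21 56 0.000 E 150m
import dns.resolver

answers = dns.resolver.resolve("vps1.example.net", "LOC")
for rdata in answers:
    print(rdata.to_text())  # degrees/minutes/seconds, hemisphere and altitude as published
```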
They never will, as it is a closed datacenter owned by RCS-RDS (now DIGI).
You can get in only if you are a long-time customer and pass their "standards" for what you bring there. They do not advertise colocation services - fuck, at the price they ask, no wonder.
It is expensive as shit compared to what is in Bucharest (NXDATA 1 & 2, M247 or Voxility).
It does, however, have a ton of extras for us, especially for the B2B customers we have that are 99.9% on DIGI (for example, latency of 1–3 ms up to the Carpathian mountains and 5–15 ms to the part of Romania farthest from us, Constanta).
We will probably split the infra into 2 locations soon for you fellas, and only keep the B2B plus a very limited set of services in Oradea, but more on that much later (not that late, as 2026 is right around the corner).
Now back to the better part of things:
As previously promised, we have started upgrading selected services from the decommissioned E5-V4 platform to our newer Scale Gen 2 infrastructure.
Today, we successfully migrated the following services to Scale Gen 2:
BF – Revuelto NVMe VPS
Nemesis – IPv6-Only, 2 TB Storage
V8-ing ’til Judgment Day
Promo VPS – $7–$10 / Year (New Trend 2024)
HostSolutions Legendary Deal from the Grave – In Search for Cociu
If you noticed — or are currently noticing — a brief VPS offline state, please don’t worry. This is a normal part of the migration process, and services should now be fully operational on the new platform.
👉 Additional services will be migrated over the next few days, following the same process.
Thank you for your patience and for being part of our journey as we continue to improve performance and reliability.
🎄 Merry Christmas to you all!
HOST-C
@host_c said: Will probably split the infra into 2 locations soon for you fellas, and only keep the B2B in Oradea plus a very limited set of services, but more on that much later (not that late, as 2026 is right around the corner)
Can't wait for this! Hopefully with some new upstreams too!
Thank you, merry Christmas! Will there be any Christmas promos?
@lowendmeow said: Thank you, merry Christmas! Will there be any Christmas promos?
Thank you, and Merry Christmas!
At the moment, we’re not running any Christmas promotions. We decided to focus our time on resolving ongoing backup API issues, as ensuring service reliability is more important to us than announcing new offers that could be affected by the same problems.
Once everything is fully stable, we’ll revisit new promotions and service announcements.
That makes sense, I'm glad you're focusing on sustainability!! That's what I thought, but it didn't hurt to check. Best of luck with the migrations.
Thanks 🙂
By the time we’re done, we’ll probably call this phase the “December 2025 HOST-C PURGE” 😄
Stuart actually started calling it that, so credit where credit is due — and honestly, he’s kinda right.
Small update on the backup situation
Backups are getting done — they always were — but they weren’t showing up in the panel. We’ve now traced this back to the core issue.
When we upgraded the provisioning module from v3.x to v4.x (about 6 months ago), everything went by the book except for one thing:
in the database, some backup locations (destination servers) ended up duplicated or even triplicated.
As a result, every backup job was physically executed correctly, but when it tried to write the record back to the database, it hit the classic
“duplicate shit exists, aborting” error — so nothing appeared in the UI.
We’re currently fixing this without doing a full wipe of settings or causing downtime, and I’m confident we’ll have it sorted shortly.
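To make that failure mode concrete, here is a small sketch using sqlite3 and an invented schema (the real panel database, table names and constraints are not known to me): with the same destination listed twice, the second insert of the backup record trips a uniqueness check, the whole transaction rolls back, and the panel never sees the backup even though it ran fine.

```python
# Sketch only: invented schema, not the actual panel database.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE backup_destinations (id INTEGER PRIMARY KEY, host TEXT);
CREATE TABLE backup_records (
    vm_id INTEGER,
    destination TEXT,
    finished_at TEXT,
    UNIQUE (vm_id, destination, finished_at)
);
-- the v3 -> v4 upgrade left the same destination in the table twice
INSERT INTO backup_destinations (host) VALUES ('bk01.example'), ('bk01.example');
""")

def record_backup(vm_id: int, finished_at: str) -> None:
    """Write one panel record per configured destination, all in one transaction."""
    with db:  # commits on success, rolls everything back on the first error
        for (host,) in db.execute("SELECT host FROM backup_destinations"):
            db.execute(
                "INSERT INTO backup_records (vm_id, destination, finished_at) "
                "VALUES (?, ?, ?)",
                (vm_id, host, finished_at),
            )

try:
    record_backup(101, "2025-12-20 03:00")
except sqlite3.IntegrityError as exc:
    # the duplicated destination row makes the second INSERT collide, so the
    # panel ends up with no record at all even though the backup itself ran
    print("panel write aborted:", exc)
```

In a layout like this sketch, the fix described above (deduplicating the destination rows without a full wipe) amounts to deleting the extra rows and then enforcing uniqueness on the destination list so it cannot reappear.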
And yes… right after that, we move on to the next fuckup 😅
The IPv6 migration — from stateless to fully routed IPv6.
What we have now works, but it doesn't scale anymore — at least not in my vision — so that change is unavoidable. (As Thanos said: "inevitable".)
Fun times ahead 🚀
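For context on what "fully routed" buys (reading "stateless" as the usual shared on-link /64 setup): instead of every VM answering neighbour discovery on one big segment, each service gets its own prefix routed toward it, which is the part that scales. Below is a stdlib-only sketch of the addressing side, using the 2001:db8::/32 documentation range rather than any real allocation.

```python
# Addressing side of a routed design, sketched with the standard library only.
# 2001:db8::/32 is the documentation range; real pools and prefix sizes are
# the provider's choice.
import ipaddress
from itertools import islice

pool = ipaddress.ip_network("2001:db8:1000::/48")
routed_prefixes = pool.subnets(new_prefix=56)  # 256 delegable /56s

for vm_id, prefix in enumerate(islice(routed_prefixes, 3), start=1):
    # each VPS gets a whole prefix routed to it, not individual on-link addresses
    print(f"vm{vm_id}: route {prefix} toward that VM's next-hop")
```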
Updates
Backups for some services are already visible in the user panel.
While not all services are fully fixed yet, progress is ongoing and we are steadily getting there.
During debugging, we also identified additional issues which have already been resolved. However, some remaining fixes will require a complete power-off and power-on, not only of individual services, but also of entire nodes.
Unfortunately, there is no safe way around a full shutdown for these changes. We attempted yesterday to apply part of the fixes without taking services fully offline; however, this approach proved unreliable and ultimately forced us to reboot a node. Based on this outcome, we have decided to proceed with a properly planned and controlled maintenance window to avoid unplanned disruptions.
📅 Scheduled Full Maintenance – January 31, 2026
Given the requirement for a full power-off, we will also take this opportunity to perform a major platform upgrade. As a result, we are scheduling a complete infrastructure maintenance on January 31, 2026.
This maintenance will include the following major items (and a few smaller related fixes):
CPU configuration adjustment to resolve a performance-impacting flag
(issue discovered on 02-01-2026)
Final implementation of API Panel communication
(work started in December 2025)
Upgrade of Proxmox VE from version 8.4 to 9.1
Firmware updates on hardware controllers within the Scale Gen 2 platform
Various firmware updates on networking devices
⏱️ Expected duration
This will be a long maintenance window:
Proxmox upgrade alone: approximately 4–6 hours
Remaining work (controller and networking firmware updates) involves slow, sequential processes
Total estimated duration: 8–12 hours
Most of the smaller fixes are already prepared and only require node reboots. The firmware updates on controllers and networking equipment are the most time-consuming part of this maintenance.
🔒 Client Area Access
To avoid accidental power-on or management actions during the maintenance window, access to the client area will be temporarily restricted for the duration of the maintenance.
Further updates will be posted as we get closer to the maintenance window.
HOST-C Team
PS: "We appreciate your patience and understanding" - I know I have said this before a few lot of times, I know downtime is not something you want, and it is something that we as a hosting company definitely do not wish fore, unfortunately there are processes that require to take offline the server/appliance/device.
@Neoon, @AuroraZero and others that have services with us, please notify your customers as well.
Thank you for the heads up, man. I have added it to the queue to go out next week.
Comments
You are forgetting my 0 USD deal.
Want free vps ? https://microlxc.net
😂😂🤗
Want free vps ? https://microlxc.net
I want, i want...
ROFL a bunch of people lost more billions again
... and I still did not get my free reboot yet.
Maybe it's just the hair that's covering your eyes.
Not likely, man. Wish it was, sometimes.
I think it is time we move from billions to trillions; billions are getting pretty common these days.
I have no idea what this deal was, but I kinda want it just for the name. Well done.
It's the $7/year 1 TB one.
😉
I knew that would get your attention.