According to https://status.virm.ac/, LAXA008, LAXA009, and LAXA010 have recently been experiencing synchronized bursts of high occupancy (peaks close to 100%) almost every day, for about 15 minutes at a time. During those bursts my VPS on one of the nodes becomes nearly unavailable and triggers a bunch of anomaly alarms.
Should I ask @VirMach to look into this, or should I just silently turn off the anomaly alerts?
Have the honor of being the crybaby who pays $20 for a 128MB VPS at VirMach in 2023.
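(Aside, not from the thread: one way to choose between filing a ticket and muting the alarms is to log CPU steal from inside the guest, so the ~15-minute bursts show up with timestamps you can attach as evidence. A minimal sketch, assuming a standard Linux /proc/stat layout; the 60-second interval and log path are arbitrary choices, not anything VirMach provides.)
#!/bin/sh
# Log the guest's CPU steal percentage once a minute; spikes here that line up
# with the status-page occupancy bursts point at node contention rather than the VPS itself.
prev_steal=0; prev_total=0
while true; do
    set -- $(grep '^cpu ' /proc/stat)    # aggregate line: cpu user nice system idle iowait irq softirq steal ...
    shift                                # drop the "cpu" label
    steal=$8
    total=$(( $1 + $2 + $3 + $4 + $5 + $6 + $7 + $8 ))
    if [ "$prev_total" -gt 0 ]; then
        echo "$(date -Is) steal=$(( 100 * (steal - prev_steal) / (total - prev_total) ))%" >> /var/log/steal-sample.log
    fi
    prev_steal=$steal
    prev_total=$total
    sleep 60
done
If steal climbs during the windows shown on the status page, that is reasonable evidence to attach to a ticket instead of just silencing the alerts.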
Mon, 20 May 2024 19:48:00 -0700: FFME001, FFME002, FFME003, FFME004, FFME005, FFME006, and FFME007 will be physically migrated beginning at 10PM local time on 05/24/2025 and set up in Amsterdam the next day (05/25/2025). This migration may take approximately 12 to 18 hours. Your service will return online with the same data and the same IP address, but network connectivity may have intermittent issues initially. Since the downtime may be extended, we will enable a feature that lets you activate a second, temporary service. It will appear on the left sidebar; you can activate a service in any other available location and then choose to keep either your old service or the new one once maintenance is completed. You will have access to this tool for approximately two weeks. It has not yet been activated, but we're working hard to enable it for you; once it is ready, you will receive an email from us. In any case, we'll attempt to email notifications at least 72 hours in advance of the maintenance/migration. We are providing this update as a "reported" incident to ensure it is more easily seen.
Rest in peace Frankfurt.
Also holy shit @VirMach please - 12 to 18 hours for a physical migration when the drive alone is gonna take at least 6 hours?!
Just curious--did you receive an email about this, or notice it on the network status page?
I'm seeing the new option to activate a secondary service! Trying to decide if I want to take advantage of the situation to permanently move my $3/year VPS back to the USA. (Looks like Miami, Chicago, NYC, Atlanta, LA, Amsterdam, and San Jose are the options.)
Welp, 30 minutes later nothing has changed and the primary service is still down.
35 minutes in, clicked the "Refresh" button... WHMCS loaded a -secondary server for this service ID... the first one is gone from WHMCS, but it still shows up in the SolusVM panel.
Booted the primary from SolusVM; it still seems to be working.
Booted the secondary from SolusVM - "no disk" error. I guess the data transfer failed.
Trying a different service, and the progress bar is still not moving. @VirMach, the JavaScript expects an element with id dataprogress and there is no such thing in the popup - there is one progress element, but with no id attached.
clientarea.php?actio…etails&id=23941:250 Uncaught TypeError: Cannot set properties of null (setting 'value')
    at clientarea.php?actio…ils&id=23941:250:32
// Animate progress bar
var intervalId = setInterval(function() {
    value += increment;
    progress.value = value;
document.getElementById('dataprogress')
null
Okay, clicking Refresh detected that migration and showed a nice(r) popup.
2-3 hours... holy moly macaroni cheese.
I created a secondary with no data migration and it was replying to pings after a few minutes. Had to use the rescue image to install my keys and change the password. The SolusVM panel shows the secondary service has 5 TB of transfer quota instead of the 100 GB the old one had!
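(For anyone repeating that rescue-image step, a minimal sketch of what it can look like; /dev/vda1, the /mnt mount point, and the key filename are assumptions rather than the poster's exact commands, so check lsblk first:)
# From the rescue image: mount the VPS disk, drop in an SSH key, reset the root password.
lsblk                                      # confirm the real disk/partition name
mount /dev/vda1 /mnt                       # assumed root partition
mkdir -p /mnt/root/.ssh
cat my_key.pub >> /mnt/root/.ssh/authorized_keys
chmod 700 /mnt/root/.ssh
chmod 600 /mnt/root/.ssh/authorized_keys
chroot /mnt passwd root                    # change the password inside the installed system
umount /mnt                                # then boot back into the normal image from the panel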
Starting to think that AMSD031T36 is broken - I've tried to migrate two servers with data, but both ended with "no disk", fuck off. Tried reinstalling one: no disk, fuck off.
I've migrated one more with data that didn't end up on AMSD031T36 and everything is working - bah, I even see the "Secondary service" button in WHMCS, and it loads a nice page showing -primary and -secondary, the time left, and a switch button.
Why do I have a feeling in 6 days I am gonna get rekt - it will delete my failed-to-migrate primary and I will be left with two broken -secondary
This also totally nukes my plans to semi-manually dd migrate data from Frankfurt to Amsterdam after the failed automatic migration - can't really migrate if the -secondary has no disk, lmao?
EDIT: a correct-size 'disk' shows up in rescue mode and in fdisk, so a dd attempt it is! ;')
37547409408 bytes (38 GB, 35 GiB) copied, 500.026 s, 75.1 MB/s
1120+0 records in
1120+0 records out
37580963840 bytes (38 GB, 35 GiB) copied, 500.411 s, 75.1 MB/s
73400320+0 records in
73400320+0 records out
37580963840 bytes (38 GB, 35 GiB) copied, 489.096 s, 76.8 MB/s
EDIT2: "This was a triumph. I'm making a note here: HUGE SUCCESS."
Only needed to rename eth0 to ens3 in the interfaces file, thanks Debian & VirMach network auto-reconfig!
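(A rough sketch of the semi-manual dd route described above, including the eth0 to ens3 rename from the EDIT2; the OLD_FRANKFURT_IP placeholder, the /dev/vda device names, and bs=32M are assumptions rather than the exact commands used, and both ends are assumed to be booted into rescue mode:)
# On the Amsterdam (destination) rescue system: pull the raw disk over SSH from the old node.
ssh root@OLD_FRANKFURT_IP "dd if=/dev/vda bs=32M status=progress" | dd of=/dev/vda bs=32M
sync
# Debian's predictable interface naming on the new host may call the NIC ens3 instead of eth0,
# so adjust the interfaces file before rebooting into the copied system.
mount /dev/vda1 /mnt
sed -i 's/eth0/ens3/g' /mnt/etc/network/interfaces
umount /mnt
At roughly 75 MB/s this matches the ~500-second copy times in the output above for a ~35 GiB disk.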
@Jab said:
> Why do I have a feeling in 6 days I am gonna get rekt - it will delete my failed-to-migrate primary and I will be left with two broken -secondary
I don't trust that either; that's why I'm pretty sure I left that part of the code disabled for now so I can process them manually. But I don't trust me either. Hmm.
@Jab said:
> Starting to think that AMSD031T36 is broken - I've tried to migrate two servers with data, but both ended with "no disk", fuck off. Tried reinstalling one: no disk, fuck off.
> I've migrated one more with data that didn't end up on AMSD031T36 and everything is working - bah, I even see the "Secondary service" button in WHMCS, and it loads a nice page showing -primary and -secondary, the time left, and a switch button.
I hate that server. I was going to have a look but I've mostly just been driving.
Frankfurt has been successfully destroyed. More driving; currently ahead of schedule in terms of the 12 to 18 hours of downtime. I'll make sure to break something so we're back on track.
Wow, I tried to create a secondary service for one VM with data transfer, and now I can't access it in the billing panel anymore 🥲
Cloudflare Timeout: Error code 524
Hey @Jab, did you get an email notification about this, or are you just monitoring the Network Status page all the time?
You need to ... "Let it go...." "Let it goo..."
something's happening!
Oh, nooo We're all gonna die!
Brace for impact!!
Amazing how my $8/yr VPS can fund an automated VPS Control Panel
Anyone experiencing serious network problems from LAXA018?
Secondary service is generating. Please do not interrupt the process with any actions.
I love the rave effect :-D
Oh noooeeeeee
REST IN PEACE FRANKFURT
Love how there was no communication prior to the downtime. Haven't heard anything since the attempt in January.
Proper low-end!
From the sounds of it, Virmach has been driving aimlessly around the world's DCs, so maybe he doesn't have roaming on his phone?
> Hey @Jab, did you get an email notification about this, or are you just monitoring the Network Status page all the time?
No e-mail.
Yeah, I also didn't get any e-mails about the FFME downtime/migrations. I loved Frankfurt.
I also tried to use that "Secondary Service" button on 1 of my 3 in FFME without success.
First it gave: {"status":"error","message":"Not eligible to activate secondary service."}
And now that panel keeps giving a CloudFlare timeout.
I think you guys were supposed to use the Secondary Service before shit went down - aka now WHMCS is gonna shit itself because the nodes are offline.
Yeah, I know, I am being very helpful.
> This migration may take approximately 12 to 18 hours.
SOON™
Mere minutes left for everything to come back online, right? Right? RIGHT?!
Hey, what's up?
NOT Frankfurt.
It's a very long 12-18 hours.
C'mon @VirMach, deliver the bad news already :-D