Comments
Two client reports confirmed the issues I was seeing. Netplan reverted. Chisinau node rebooted.
Great news! Chisinau node came back up without needing a power bump! Also, I can ssh into the node and into my test VPS via IPv4 and via IPv6.
Hopefully we are back to where we left off before the netplan adjustment!
I need to sleep now. Anybody still having problems please post, and I will try to fix.
Thanks to Alexhost!
Best wishes!
I have received several congratulations about Chisinau VPSes which are working great. However, there have also been several reports of VPSes which are offline, which I confirm. Additionally, there are multiple VPSes running very high CPU. . . .
They are just mining LES coin and creating AI generated content for posting here
Mystery partially solved!
I missed one space in a yaml configuration file when testing the first of the Chisinau node's newly assigned IP addresses!

root@alexhost:~/vm-seed/vmXX# yamllint network-config
network-config
1:1 warning missing document start "---" (document-start)
11:8 error wrong indentation: expected 8 but found 7 (indentation)
12:14 error syntax error: mapping values are not allowed here (syntax)

After adding one space on line 11:

root@alexhost:~/vm-seed/vmXX# yamllint network-config
network-config
1:1 warning missing document start "---" (document-start)

Looks like the newly assigned IP addresses work just fine with the previous netplan (now reverted back into place) once the yaml configuration is correct.
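For the record, the slip was of this shape (a made-up snippet, not the real file; in the real network-config the list sits deeper, hence yamllint expecting 8 spaces):

# before: the dash sits one space short of the block's indent
addresses:
   - 203.0.113.10/24

# after: one more space, and only the document-start warning remains
addresses:
    - 203.0.113.10/24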
Now I have to figure out why some (but definitely not all!) of the previously made VPSes lost WAN network access and picked up high %CPU after the netplan change and reversion, which turned out not to have been necessary.
My fault!
Sorry! 
Thanks again to Alexhost Support for trying to help! Thanks again to Alexhost for your kind donation!
Here's a Yabs from inside the new Chisinau VPS that I just made. Its IPv4 is one from the newly assigned group. The results seem pretty good, especially considering that several of the existing VMs currently are running 100% CPU. The results are -- if memory serves -- consistent with earlier Yabs when the node was not stressed.
This one is for @AuroraZero if he wants it. Note the terms from FOSSVPS.org: "Ephemeral. Could disappear at any moment!" So, not "stable."
Sounds perfect for the Yeti, who is also ephemeral; people have spent decades searching for him
Given the lack of good ideas about the high %CPU VPSes, and in the absence of objection, I propose to shut them down, perhaps as soon as tomorrow.
One thing we might try is reverting one of the high CPU VPSes to its fresh snapshot. Maybe they all will work again if reverted. Otherwise I guess they could be reinstalled.
In the future, we could make snapshots after users finish their basic installs. With "basic-install" snapshots we could revert to an installed state.
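If we go that route, it is only a couple of virsh commands per VM; a minimal sketch, assuming the VMs stay libvirt domains with qcow2 disks (vm21 is a made-up name):

# take a named snapshot once the user's basic install is done
virsh snapshot-create-as vm21 basic-install "state after basic install"

# list the snapshots a VM has
virsh snapshot-list vm21

# later, roll back to the installed state
virsh snapshot-revert vm21 basic-install

Internal snapshots like these are stored inside the qcow2 file itself, so there are no extra files to keep track of.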
I made a second new Ubuntu 24.04 VPS with another of the newly assigned IPv4s. It works great and will be assigned to someone, along with more, soon. You can see a list of the requests at https://fossvps.org/list.html.
Thanks everyone!
Thanks Alexhost! 
2 of 4 assigned? So 2 more to go?
Grab them while they're hot!
2 of 5 assigned? So 3 more to go?
I have to update the numbers on fossvps.org.
Sorry man, busy trying to fix my truck and deal with doc appointments. Should be able to get to this this weekend.
Best of luck with both doc and truck!
Hello!
At the moment there are 27 VPSes configured on the Chisinau server.
I don't know why, but 8 VPSes seemed to be running continuously at high %CPU following the recent update and revert of the netplan. As far as I am aware, the remaining 19 VPSes are working fine.
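From the host they stand out right away; roughly this, with domain names as virsh reports them:

# per-domain CPU counters from libvirt
virsh domstats --cpu-total

# or just look at which qemu processes are pegged
ps -eo pid,pcpu,etime,args --sort=-pcpu | grep [q]emu | head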
All 8 of the overperforming VPSes have been shut down. From here at LES, @jmaxwell is down.
@jmaxwell Do you want me to attempt restoring your VPS from your "fresh" snapshot? Attempting the restore will destroy whatever happened subsequent to the snapshot. Alternatively, we could rebuild your VPS from scratch. Please let me know.
By the way, if you restore or reinstall, we could make another snapshot after your basic install is completed. That way, if a revert happens again, we could revert to your basic installed state.
Or, maybe, someone has a better idea about how to adjust the overperforming VPSes?
Thanks! Sorry for the issue!
Thanks to Alexhost! 
Tom
Update
The FOSSVPS website has been updated to show the total of 47 previously assigned VPSes (up from the September 12 count of 40) and the total of 5 VPSes currently available for assignment.
The FOSSVPS Request List has been updated to move Assigned VPSes to the bottom so that the list of Pending and Accepted Requests at the top of the page is easier to read. If you have a Pending or an Accepted Request, please double check that your Request appears on the List.
Thanks everyone!
@Not_Oles Thank you for all the assistance and the great VPS!!
Maintenance at Alexhost
Alexhost says:
Dear customers.
We would like to inform you that our Data Center will undergo scheduled maintenance on the power infrastructure in order to enhance reliability and future-proof our systems.
Maintenance Window:
Date: 06 October 2025
Time: 14:00 - 16:00 UTC+3
Expected Impact: Your dedicated server(s) may experience downtime of up to 2 hours during this period.
Our engineering team will work to minimize service interruption and restore operations as quickly as possible. All critical systems will be closely monitored before, during, and after the maintenance.
We recommend that you take any necessary precautions (such as notifying your users or scheduling tasks outside of this window) to mitigate potential disruption.
If you have any questions or require assistance, please do not hesitate to contact our support team.
Regards,
AlexHost Team.
Thank you for the heads up, man!
Yeah, please try restoring. If it works, okay; otherwise I'll just reconfigure from scratch again.
Restoring from the snapshot did not work. The VPS went right back to high CPU.
Next is to rebuild it from scratch.
All of the several others which had simultaneous, similar issues seem now to have been happy since they were rebuilt. So I have good hopes for your VPS too!
I will post again after rebuilding.
Best wishes!
Tom
@Not_Oles just made a spectacular noob mistake!
During the process of rebuilding vm2 for @jmaxwell I went to remove the qcow2 image and the vm-seed.iso for vm2.
Here's what went wrong: the delete didn't hit just vm2's files. The qcow2 images and iso files for all of the vm2* VPSes got removed.
It is not too hard to rebuild all the VPSes numbered beginning with 2. There is a directory with scripts for each VPS, so I have to go through that directory, rerun the scripts, and double check.
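In shell terms the chore is roughly the loop below (the vm-seed path matches the prompts shown earlier, but the script name here is made up):

for d in ~/vm-seed/vm2*/; do
    echo "rebuilding $d"
    (cd "$d" && sh ./make-vm.sh)   # hypothetical script name
done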
I will post more as events progress.
But it will take a while.
Sorry!
Wow!
Tom
@jmaxwell
Your VPS is rebuilt from scratch since restoring from the snapshot did not work. The new version is not using backing store, which means everything needed is actually within your qcow2 file.
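For anyone wondering what that means in practice, roughly this (file names made up):

# an overlay with a backing store: the VM's file only holds changes on top of a shared base image
qemu-img create -f qcow2 -b ubuntu-24.04-base.qcow2 -F qcow2 vm2X.qcow2

# a standalone image: a full copy with everything inside one file
qemu-img convert -O qcow2 ubuntu-24.04-base.qcow2 vm2X.qcow2

# check: a standalone image shows no "backing file:" line
qemu-img info vm2X.qcow2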
Everything should work as before. I have tested that I can get in on IPv4 and on IPv6. Please remove my keys if you wish. Password login is disabled. Yabs from inside your VPS is shown below.
Please post here to let us know if you get in and if everything looks okay.
Sorry about continuing problems!
Best wishes!
Tom
Unexpected discovery!
virsh seems to think that some of the VPSes whose qcow2 files were deleted are still running.
I logged in to one VPS where the client apparently had not deleted my ssh key. I did nothing except log out immediately, but it seems people can still log in to their VPSes even though the .qcow2 files are gone.
I don't know what use logging in would be, but maybe I ought not to rebuild VPSes until clients have had a chance to log in if they want to do so.
Clients who are affected are those with Alexhost VPSes numbered vm2*. If you are in the vm2* group, please let me know if I should go ahead and rebuild your VPS.
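If anyone wants to see it for themselves on the node, it goes roughly like this (the pid and VM name are illustrative):

# virsh still lists the domains as running
virsh list --all

# find the qemu process for one of them
pgrep -af 'guest=vm21'

# the deleted image is still open as one of its file descriptors
ls -l /proc/12345/fd | grep -i deleted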
Sorry for the inconvenience!
Tom
Maybe it's time to start using a control panel to manage VMs...
I would recommend Proxmox, but there was a shell-based VM control panel(?) that someone made, which would also work fine.
Just saying, as long as a process still has that file open, the file is still there, just not visible any more in the filesystem. Unfortunately, there is no easy way to relink the file into the filesystem again, but you could copy the contents; see https://serverfault.com/questions/168909/relinking-a-deleted-file
Or to give you a simpler example:

$ echo Hello >test.txt
$ python3
Python 3.13.3 (main, Aug 14 2025, 11:53:40) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> fd = open('test.txt', 'r')
>>>

And while that is still running, do a rm test.txt in another terminal. Then get the pid of the python3 process and have a look in the /proc filesystem:

$ ls -lF /proc/34864/fd/
total 0
lrwx------ 1 cmeerw cmeerw 64 Oct 3 08:34 0 -> /dev/pts/12
lrwx------ 1 cmeerw cmeerw 64 Oct 3 08:34 1 -> /dev/pts/12
lrwx------ 1 cmeerw cmeerw 64 Oct 3 08:34 2 -> /dev/pts/12
lrwx------ 1 cmeerw cmeerw 64 Oct 3 08:34 3 -> /dev/pts/12
lrwx------ 1 cmeerw cmeerw 64 Oct 3 08:34 4 -> /dev/pts/12
lr-x------ 1 cmeerw cmeerw 64 Oct 3 08:34 5 -> '/home/cmeerw/test.txt (deleted)'

You can still do something like

$ cat /proc/34864/fd/5
Hello

to get the file contents (and maybe copy that to a new file), but as explained in the serverfault link, you can't do ln -L /proc/34864/fd/5 test-recovered.txt directly.

@cmeerw What happens if the file is a 100 GB qcow2 file instead of just Hello? Surely catting the file out of /proc can't be a practical method of recovery when the file is that big?
Inside /proc, there is more, in addition to exactly what you already mentioned:
Shouldn't really be an issue, except that it will take a bit longer - it's just copying the file contents. Although maybe just using cp would be better in this case (maybe even with --sparse=always, but be careful to not just copy the symlink).

Of course, you'll need to have the extra disk space, so maybe copy the VM images one by one, and shut the VM down once you have copied the image for that VM.
Quite a lot of eventfds... but still looks fine.
@cmeerw Assuming I can use the -L option with cp (the -L option differs between cp(1), where it dereferences sources which are symlinks, and ln(1), where it dereferences targets which are symlinks), I should try something like
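(The pid and fd number below are placeholders; the real ones come from listing /proc/<pid>/fd as in @cmeerw's example. The -L makes cp copy the file the /proc symlink points at rather than the symlink itself.)

cp -L --sparse=always /proc/PID/fd/N /root/recovered/vm2X.qcow2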
Thanks so much, as always!
Reminder about Chisinau maintenance coming up soon! If the power to our server is interrupted during the maintenance, we might lose any vm2* qcow2 and iso files still alive in /proc. So, if anyone wants to copy their files out of /proc, it might be best to do that before the maintenance.
One of the Nodeseek guys reported that he can log into his VPS and can install software.
But his qcow2 file is gone!
Amazing! As @cmeerw says, the VPS persists inside /proc. Wow!
In case anyone might be interested, I posted the steps I took to possibly recover a VPS from /proc at https://fossvps.org/recover.html. A few comments are posted at https://www.nodeseek.com/post-431916-36#359.
Best wishes!