[FOSSVPS - Alexhost] Free Chisinau 2 x E5-2680 v2, 4 vCores, 4 GB DDR3 ECC RAM, 100 GB RAID 10 Disk!

Comments

  • Not_Oles (Hosting Provider, Content Writer)
    edited September 22

    Two client reports confirmed the issues I was seeing. Netplan reverted. Chisinau node rebooted.

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)
    edited September 22

    Great news! The Chisinau node came back up without needing a power bump! Also, I can ssh into the node and into my test VPS via IPv4 and via IPv6.

    Hopefully we are back to where we left off before the netplan adjustment!

    I need to sleep now. Anybody still having problems please post, and I will try to fix.

    Thanks to Alexhost! <3

    Best wishes!

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    I have received several congratulations about Chisinau VPSes which are working great. However, there have also been several reports of VPSes which are offline, which I have confirmed. Additionally, there are multiple VPSes running at very high CPU...

    I hope everyone gets the servers they want!

  • @Not_Oles said:
    I have received several congratulations about Chisinau VPSes which are working great. However, there have also been several reports of VPSes which are offline, which I have confirmed. Additionally, there are multiple VPSes running at very high CPU...

    They are just mining LES coin and creating AI-generated content for posting here :lol:

    Thanked by (1)Not_Oles

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • Not_Oles (Hosting Provider, Content Writer)

    Mystery partially solved!

    I missed one space in a YAML configuration file when testing the first of the Chisinau node's newly assigned IP addresses!

    root@alexhost:~/vm-seed/vmXX# yamllint network-config 
    network-config
      1:1       warning  missing document start "---"  (document-start)
      11:8      error    wrong indentation: expected 8 but found 7  (indentation)
      12:14     error    syntax error: mapping values are not allowed here (syntax)
    
    root@alexhost:~/vm-seed/vmXX# 
    
    --------
    
    After adding one space on line 11:
    
    
    root@alexhost:~/vm-seed/vmXX# yamllint network-config 
    network-config
      1:1       warning  missing document start "---"  (document-start)
    
    root@alexhost:~/vm-seed/vmXX# 
    

    Looks like the newly assigned IP addresses work just fine with the previous (and now restored) netplan when the YAML configuration is correct.
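
    For illustration only, here is a hypothetical cloud-init network-config fragment with the same kind of off-by-one indentation (the interface name and addresses below are made up, not the node's real ones):

    version: 2
    ethernets:
      enp1s0:                        # hypothetical interface name
        addresses:
          - 203.0.113.10/24          # documentation-range address, not a real one
       nameservers:                  # BUG: indented one space too little
          addresses: [9.9.9.9]
    # yamllint flags the short-indented line with a wrong-indentation error and
    # reports a syntax error on the following line; adding the single missing
    # space before "nameservers" clears both.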

    Now I have to figure out why some (but definitely not all!) of the previously made VPSes lost WAN network access and picked up high %CPU after the netplan change and reversion, which turned out not to have been necessary.

    My fault! :) Sorry! <3

    Thanks again to Alexhost Support for trying to help! Thanks again to Alexhost for your kind donation! <3

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)
    edited September 23

    Here's a Yabs from inside the new Chisinau VPS that I just made. Its IPv4 is one from the newly assigned group. The results seem pretty good, especially considering that several of the existing VMs are currently running at 100% CPU. The results are, if memory serves, consistent with earlier Yabs runs when the node was not stressed.

    This one is for @AuroraZero if he wants it. Note the terms from FOSSVPS.org: "Ephemeral. Could disappear at any moment!" So, not "stable." <3

    ubuntu@vmXX:~$ curl -sL yabs.sh | bash
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2025-04-20                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Tue Sep 23 17:56:42 UTC 2025
    
    Basic System Information:
    ---------------------------------
    Uptime     : 0 days, 0 hours, 31 minutes
    Processor  : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    CPU cores  : 4 @ 2800.002 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 3.8 GiB
    Swap       : 0.0 KiB
    Disk       : 96.8 GiB
    Distro     : Ubuntu 24.04.3 LTS
    Kernel     : 6.8.0-71-generic
    VM Type    : KVM
    IPv4/IPv6  : ✔ Online / ✔ Online
    
    IPv6 Network Information:
    ---------------------------------
    ISP        : Alexhost SRL
    ASN        : AS200019 ALEXHOST SRL
    Host       : Alexhost SRL
    Location   : Chisinau, Chișinău Municipality (CU)
    Country    : Moldova
    
    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 235.13 MB/s  (58.7k) | 1.79 GB/s    (28.0k)
    Write      | 235.76 MB/s  (58.9k) | 1.80 GB/s    (28.1k)
    Total      | 470.89 MB/s (117.7k) | 3.59 GB/s    (56.2k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 2.01 GB/s     (3.9k) | 2.01 GB/s     (1.9k)
    Write      | 2.11 GB/s     (4.1k) | 2.15 GB/s     (2.1k)
    Total      | 4.12 GB/s     (8.0k) | 4.16 GB/s     (4.0k)
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
    -----           | -----                     | ----            | ----            | ----           
    Clouvider       | London, UK (10G)          | 274 Mbits/sec   | 280 Mbits/sec   | 51.7 ms        
    Eranium         | Amsterdam, NL (100G)      | 277 Mbits/sec   | 286 Mbits/sec   | 39.9 ms        
    Uztelecom       | Tashkent, UZ (10G)        | 270 Mbits/sec   | 168 Mbits/sec   | 104 ms         
    Leaseweb        | Singapore, SG (10G)       | 240 Mbits/sec   | 237 Mbits/sec   | 153 ms         
    Clouvider       | Los Angeles, CA, US (10G) | 248 Mbits/sec   | 128 Mbits/sec   | 183 ms         
    Leaseweb        | NYC, NY, US (10G)         | 88.9 Mbits/sec  | 252 Mbits/sec   | 123 ms         
    Edgoo           | Sao Paulo, BR (1G)        | 42.6 Mbits/sec  | 136 Mbits/sec   | 227 ms         
    
    iperf3 Network Speed Tests (IPv6):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
    -----           | -----                     | ----            | ----            | ----           
    Clouvider       | London, UK (10G)          | 264 Mbits/sec   | 283 Mbits/sec   | 51.7 ms        
    Eranium         | Amsterdam, NL (100G)      | 264 Mbits/sec   | 283 Mbits/sec   | 39.8 ms        
    Uztelecom       | Tashkent, UZ (10G)        | 265 Mbits/sec   | 111 Mbits/sec   | 104 ms         
    Leaseweb        | Singapore, SG (10G)       | 239 Mbits/sec   | 227 Mbits/sec   | 153 ms         
    Clouvider       | Los Angeles, CA, US (10G) | 239 Mbits/sec   | 125 Mbits/sec   | 183 ms         
    Leaseweb        | NYC, NY, US (10G)         | 242 Mbits/sec   | 262 Mbits/sec   | 123 ms         
    Edgoo           | Sao Paulo, BR (1G)        | 226 Mbits/sec   | 79.2 Mbits/sec  | 227 ms         
    
    Geekbench 6 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 545                           
    Multi Core      | 1828                          
    Full Test       | https://browser.geekbench.com/v6/cpu/14019774
    
    YABS completed in 18 min 52 sec
    ubuntu@vmXX:~$ 
    

    I hope everyone gets the servers they want!

  • @Not_Oles said:
    This one is for @AuroraZero if he wants it. Note the terms from FOSSVPS.org: "Ephemeral. Could disappear at any moment!" So, not "stable." <3

    Sounds perfect for the Yeti, who is also Ephemeral and whom people have spent decades searching for :lol:

    Thanked by (1)Not_Oles

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • Not_Oles (Hosting Provider, Content Writer)

    Given the lack of good ideas about the high %CPU VPSes, and in the absence of objection, I propose to shut them down, perhaps as soon as tomorrow.

    One thing we might try is reverting one of the high CPU VPSes to its fresh snapshot. Maybe they all will work again if reverted. Otherwise I guess they could be reinstalled.

    In the future, we could make snapshots after users finish their basic installs. With "basic-install" snapshots we could revert to an installed state.
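
    As a sketch of what that could look like (assuming qcow2 disks and libvirt internal snapshots; vmXX is a placeholder domain name):

    # on the host: take an internal snapshot once the user's basic install is done
    virsh snapshot-create-as vmXX basic-install "state after basic install"
    # list the snapshots libvirt knows about for the domain
    virsh snapshot-list vmXX
    # later, roll the domain back to that point
    virsh snapshot-revert vmXX basic-install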

    I made a second new Ubuntu 24.04 VPS with another of the newly assigned IPv4s. It works great and will be assigned to someone, along with more, soon. You can see a list of the requests at https://fossvps.org/list.html.

    Thanks everyone! :) Thanks Alexhost! :)

    I hope everyone gets the servers they want!

  • @Not_Oles said:
    I made a second new Ubuntu 24.04 VPS with another of the newly assigned IPv4s. It works great and will be assigned to someone, along with more, soon. You can see a list of the requests at https://fossvps.org/list.html.

    Thanks everyone! :) Thanks Alexhost! :)

    2 of 4 assigned? So 2 more to go?

    Grab them while they're hot!

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • Not_Oles (Hosting Provider, Content Writer)
    edited September 24

    @somik said:

    2 of 4 assigned? So 2 more to go?

    2 of 5 assigned? So 3 more to go? :)

    I have to update the numbers on fossvps.org. :)

    Thanked by (1)somik

    I hope everyone gets the servers they want!

  • AuroraZero (Hosting Provider, Retired)

    @Not_Oles said:

    @somik said:

    2 of 4 assigned? So 2 more to go?

    2 of 5 assigned? So 3 more to go? :)

    I have to update the numbers on fossvps.org. :)

    Sorry man, busy trying to fix my truck and dealing with doc appointments. Should be able to get to this this weekend.

    Thanked by (1)Not_Oles
  • Not_Oles (Hosting Provider, Content Writer)

    Best of luck with both doc and truck!

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    Hello!

    At the moment there are 27 VPSes configured on the Chisinau server.

    I don't know why, but 8 VPSes seemed to be running continuously at high %CPU following the recent update and reversion of the netplan. As far as I am aware, the remaining 19 VPSes are working fine.

    All 8 of the overperforming VPSes have been shut down. From here at LES, @jmaxwell's VPS is down.

    @jmaxwell Do you want me to attempt restoring your VPS from your "fresh" snapshot? Attempting the restore will destroy whatever happened subsequent to the snapshot. Alternatively, we could rebuild your VPS from scratch. Please let me know.

    By the way, if you restore or reinstall, we could make another snapshot after your basic install is completed. That way, if a revert happens again, we could revert to your basic installed state.

    Or, maybe, someone has a better idea about how to adjust the overperforming VPSes?

    Thanks! Sorry for the issue! :) Thanks to Alexhost! <3

    Tom

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    Update

    The FOSSVPS website has been updated to show the total of 47 previously assigned VPSes (up from the September 12 count of 40) and the total of 5 VPSes currently available for assignment.

    The FOSSVPS Request List has been updated to move Assigned VPSes to the bottom so that the list of Pending and Accepted Requests at the top of the page is easier to read. If you have a Pending or an Accepted Request, please double check that your Request appears on the List.

    Thanks everyone!

    I hope everyone gets the servers they want!

  • AuroraZero (Hosting Provider, Retired)

    @Not_Oles Thank you for all the assistance and the great VPS!!

    Thanked by (1)Not_Oles
  • Not_Oles (Hosting Provider, Content Writer)

    Maintenance at Alexhost

    Alexhost says:

    Dear customers.

    We would like to inform you that our Data Center will undergo scheduled maintenance on the power infrastructure in order to enhance reliability and future-proof our systems.

    Maintenance Window:
    Date: 06 October 2025

    Time: 14:00 - 16:00 UTC+3

    Expected Impact: Your dedicated server(s) may experience downtime of up to 2 hours during this period.

    Our engineering team will work to minimize service interruption and restore operations as quickly as possible. All critical systems will be closely monitored before, during, and after the maintenance.

    We recommend that you take any necessary precautions (such as notifying your users or scheduling tasks outside of this window) to mitigate potential disruption.

    If you have any questions or require assistance, please do not hesitate to contact our support team.

    Regards,
    AlexHost Team.

    I hope everyone gets the servers they want!

  • AuroraZero (Hosting Provider, Retired)

    @Not_Oles said:
    Maintenance at Alexhost

    Alexhost says:

    Dear customers.

    We would like to inform you that our Data Center will undergo scheduled maintenance on the power infrastructure in order to enhance reliability and future-proof our systems.

    Maintenance Window:
    Date: 06 October 2025

    Time: 14:00 - 16:00 UTC+3

    Expected Impact: Your dedicated server(s) may experience downtime of up to 2 hours during this period.

    Our engineering team will work to minimize service interruption and restore operations as quickly as possible. All critical systems will be closely monitored before, during, and after the maintenance.

    We recommend that you take any necessary precautions (such as notifying your users or scheduling tasks outside of this window) to mitigate potential disruption.

    If you have any questions or require assistance, please do not hesitate to contact our support team.

    Regards,
    AlexHost Team.

    Thank you for the heads up, man!

    Thanked by (1)Not_Oles
  • @Not_Oles said:
    Hello!

    At the moment there are 27 VPSes configured on the Chisinau server.

    I don't know why, but 8 VPSes seemed to be running continuously at high %CPU following the recent update and reversion of the netplan. As far as I am aware, the remaining 19 VPSes are working fine.

    All 8 of the overperforming VPSes have been shut down. From here at LES, @jmaxwell's VPS is down.

    @jmaxwell Do you want me to attempt restoring your VPS from your "fresh" snapshot? Attempting the restore will destroy whatever happened subsequent to the snapshot. Alternatively, we could rebuild your VPS from scratch. Please let me know.

    By the way, if you restore or reinstall, we could make another snapshot after your basic install is completed. That way, if a revert happens again, we could revert to your basic installed state.

    Or, maybe, someone has a better idea about how to adjust the overperforming VPSes?

    Thanks! Sorry for the issue! :) Thanks to Alexhost! <3

    Tom

    Yeah, please try restoring. If it works, okay. Otherwise I'll just reconfigure from scratch again.

    Thanked by (1)Not_Oles

    Why?

  • Not_Oles (Hosting Provider, Content Writer)

    @jmaxwell said: Yeah, please try restoring. If it works, okay. Otherwise I'll just reconfigure from scratch again.

    Restoring from the snapshot did not work. The VPS went right back to high CPU.

    Next is to rebuild it from scratch.

    All of the several others that had similar, simultaneous issues have seemed happy since they were rebuilt. So I have high hopes for your VPS too!

    I will post again after rebuilding.

    Best wishes!

    Tom

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)
    edited October 2

    @Not_Oles just made a spectacular noob mistake!

    During the process of rebuilding vm2 for @jmaxwell, I went to remove the qcow2 image and the vm-seed.iso for vm2.

    Here's what went wrong:

    root@alexhost:~/vm-seed/vm2 # rm -v /var/lib/libvirt/images/vm2*
    removed '/var/lib/libvirt/images/vm2-seed.iso'
    removed '/var/lib/libvirt/images/vm2.qcow2'
    removed '/var/lib/libvirt/images/vm20-seed.iso'
    removed '/var/lib/libvirt/images/vm20.qcow2'
    removed '/var/lib/libvirt/images/vm21-seed.iso'
    removed '/var/lib/libvirt/images/vm21.qcow2'
    removed '/var/lib/libvirt/images/vm22-seed.iso'
    removed '/var/lib/libvirt/images/vm22.qcow2'
    removed '/var/lib/libvirt/images/vm23-seed.iso'
    removed '/var/lib/libvirt/images/vm23.qcow2'
    removed '/var/lib/libvirt/images/vm24-seed.iso'
    removed '/var/lib/libvirt/images/vm24.qcow2'
    removed '/var/lib/libvirt/images/vm26-seed.iso'
    removed '/var/lib/libvirt/images/vm26.qcow2'
    removed '/var/lib/libvirt/images/vm27-seed.iso'
    removed '/var/lib/libvirt/images/vm27.qcow2'
    removed '/var/lib/libvirt/images/vm28-seed.iso'
    removed '/var/lib/libvirt/images/vm28.qcow2'
    removed '/var/lib/libvirt/images/vm29-seed.iso'
    removed '/var/lib/libvirt/images/vm29.qcow2'
    root@alexhost:~/vm-seed/vm2 #
    

    It is not too hard to rebuild all the VPSes whose numbers begin with 2. There is a directory with scripts for each VPS, so I have to go through each directory, rerun the scripts, and double-check.
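
    For what it's worth, the trap was that the glob vm2* also matches vm20 through vm29. Something along these lines would have limited the damage (a sketch, not what was actually run):

    # name the two files explicitly...
    rm -v /var/lib/libvirt/images/vm2.qcow2 /var/lib/libvirt/images/vm2-seed.iso
    # ...or use a pattern that cannot match vm20-vm29...
    rm -v /var/lib/libvirt/images/vm2[.-]*
    # ...and, in any case, list the matches first as a dry run
    ls /var/lib/libvirt/images/vm2*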

    I will post more as events progress.

    But it will take a while.

    Sorry!

    Wow!

    Tom

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    @jmaxwell

    Your VPS has been rebuilt from scratch since restoring from the snapshot did not work. The new version is not using a backing store, which means everything needed is actually contained within your qcow2 file.
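
    For the record, here is how the absence of a backing store can be verified on the host (the path is illustrative):

    # if the output contains no "backing file:" line, the image is self-contained
    qemu-img info /var/lib/libvirt/images/vm2.qcow2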

    Everything should work as before. I have tested that I can get in on IPv4 and on IPv6. Please remove my keys if you wish. Password login is disabled. Yabs from inside your VPS is shown below.

    Please post here to let us know if you get in and if everything looks okay.

    Sorry about the continuing problems!

    Best wishes!

    Tom

    ubuntu@vm2:~$ curl -sL yabs.sh | bash
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2025-04-20                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Fri Oct  3 00:36:13 UTC 2025
    
    Basic System Information:
    ---------------------------------
    Uptime     : 0 days, 0 hours, 10 minutes
    Processor  : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    CPU cores  : 4 @ 2800.002 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 3.8 GiB
    Swap       : 0.0 KiB
    Disk       : 96.8 GiB
    Distro     : Ubuntu 24.04.3 LTS
    Kernel     : 6.8.0-71-generic
    VM Type    : KVM
    IPv4/IPv6  : ✔ Online / ✔ Online
    
    IPv6 Network Information:
    ---------------------------------
    ISP        : Alexhost SRL
    ASN        : AS200019 ALEXHOST SRL
    Host       : Alexhost SRL
    Location   : Chisinau, Chișinău Municipality (CU)
    Country    : Moldova
    
    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda1):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 208.60 MB/s  (52.1k) | 1.91 GB/s    (29.8k)
    Write      | 209.15 MB/s  (52.2k) | 1.92 GB/s    (30.0k)
    Total      | 417.75 MB/s (104.4k) | 3.83 GB/s    (59.8k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 2.04 GB/s     (3.9k) | 2.05 GB/s     (2.0k)
    Write      | 2.15 GB/s     (4.2k) | 2.19 GB/s     (2.1k)
    Total      | 4.19 GB/s     (8.1k) | 4.24 GB/s     (4.1k)
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
    -----           | -----                     | ----            | ----            | ----           
    Clouvider       | London, UK (10G)          | 282 Mbits/sec   | 262 Mbits/sec   | 49.6 ms        
    Eranium         | Amsterdam, NL (100G)      | busy            | busy            | 44.4 ms        
    Uztelecom       | Tashkent, UZ (10G)        | 257 Mbits/sec   | 138 Mbits/sec   | 102 ms         
    Leaseweb        | Singapore, SG (10G)       | 247 Mbits/sec   | 241 Mbits/sec   | 153 ms         
    Clouvider       | Los Angeles, CA, US (10G) | 250 Mbits/sec   | 116 Mbits/sec   | 174 ms         
    Leaseweb        | NYC, NY, US (10G)         | 263 Mbits/sec   | 244 Mbits/sec   | 122 ms         
    Edgoo           | Sao Paulo, BR (1G)        | 243 Mbits/sec   | 87.8 Mbits/sec  | 226 ms         
    
    iperf3 Network Speed Tests (IPv6):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
    -----           | -----                     | ----            | ----            | ----           
    Clouvider       | London, UK (10G)          | 277 Mbits/sec   | 281 Mbits/sec   | 49.5 ms        
    Eranium         | Amsterdam, NL (100G)      | 278 Mbits/sec   | 283 Mbits/sec   | 44.4 ms        
    Uztelecom       | Tashkent, UZ (10G)        | 243 Mbits/sec   | 98.7 Mbits/sec  | 102 ms         
    Leaseweb        | Singapore, SG (10G)       | 248 Mbits/sec   | 235 Mbits/sec   | 153 ms         
    Clouvider       | Los Angeles, CA, US (10G) | 239 Mbits/sec   | 145 Mbits/sec   | 174 ms         
    Leaseweb        | NYC, NY, US (10G)         | 250 Mbits/sec   | 258 Mbits/sec   | 122 ms         
    Edgoo           | Sao Paulo, BR (1G)        | 232 Mbits/sec   | 107 Mbits/sec   | 227 ms         
    
    Geekbench 6 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 513                           
    Multi Core      | 1754                          
    Full Test       | https://browser.geekbench.com/v6/cpu/14226500
    
    YABS completed in 20 min 0 sec
    ubuntu@vm2:~$ 
    
    Thanked by (1)jmaxwell

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    Unexpected discovery!

    virsh seems to think that some of the VPSes whose qcow2 files were deleted are still running.

    I logged in to one VPS where the client apparently had not deleted my ssh key. I did nothing except log out immediately, but it seems people still could log in to their VPSes even though the .qcow2 files are gone.

    I don't know what use it would be to log in, but maybe I ought not to rebuild VPSes until clients have had a chance to log in if they want to do so.

    Clients who are affected are those with Alexhost VPSes numbered vm2*. If you are in the vm2* group, please let me know if I should go ahead and rebuild your VPS.
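
    On the host, the mismatch is easy to see with something like this (a sketch):

    # domains libvirt still considers running
    virsh list --state-running
    # vm2* images actually remaining on disk
    ls -l /var/lib/libvirt/images/vm2* 2>/dev/null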

    Sorry for the inconvenience!

    Tom

    I hope everyone gets the servers they want!

  • @Not_Oles said:
    Unexpected discovery!

    virsh seems to think that some of the VPSes whose qcow2 files were deleted are still running.

    I logged in to one VPS where the client apparently had not deleted my ssh key. I did nothing except log out immediately, but it seems people still could log in to their VPSes even though the .qcow2 files are gone.

    I don't know what use it would be to log in, but maybe I ought not to rebuild VPSes until clients have had a chance to log in if they want to do so.

    Maybe it's time to start using a control panel to manage VMs...

    I would recommend Proxmox, but there was a shell-based VM control panel(?) that someone made, which would also work fine.

    Thanked by (1)Not_Oles

    Never make the same mistake twice. There are so many new ones to make.
    It’s OK if you disagree with me. I can’t force you to be right.

  • @Not_Oles said: I did nothing except log out immediately, but it seems people still could log in to their VPSes even though the .qcow2 files are gone.

    Just saying, as long as a process still has that file open, the file is still there, just not visible any more in the filesystem. Unfortunately, there is no easy way to relink the file into the filesystem again, but you could copy the contents; see https://serverfault.com/questions/168909/relinking-a-deleted-file

    Or to give you a simpler example:

    $ echo Hello >test.txt
    $ python3
    Python 3.13.3 (main, Aug 14 2025, 11:53:40) [GCC 14.2.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> fd = open('test.txt', 'r')
    >>> 
    

    And while that is still running, do a rm test.txt in another terminal.

    Then get the pid of the python3 process and have a look in the /proc filesystem:

    $ ls -lF /proc/34864/fd/
    total 0
    lrwx------ 1 cmeerw cmeerw 64 Oct  3 08:34 0 -> /dev/pts/12
    lrwx------ 1 cmeerw cmeerw 64 Oct  3 08:34 1 -> /dev/pts/12
    lrwx------ 1 cmeerw cmeerw 64 Oct  3 08:34 2 -> /dev/pts/12
    lrwx------ 1 cmeerw cmeerw 64 Oct  3 08:34 3 -> /dev/pts/12
    lrwx------ 1 cmeerw cmeerw 64 Oct  3 08:34 4 -> /dev/pts/12
    lr-x------ 1 cmeerw cmeerw 64 Oct  3 08:34 5 -> '/home/cmeerw/test.txt (deleted)'
    

    You can still do something like

    $ cat /proc/34864/fd/5
    Hello
    

    to get the file contents (and maybe copy that to a new file), but as explained in the serverfault link, you can't do ln -L /proc/34864/fd/5 test-recovered.txt directly.

    Thanked by (2)Not_Oles tmntwitw
  • Not_Oles (Hosting Provider, Content Writer)

    @cmeerw said: Unfortunately, there is no easy way to relink the file into the filesystem again, but you could copy the contents

    @cmeerw What happens if the file is a 100 GB qcow2 file instead of just Hello? Surely catting the file out of /proc can't be a practical method of recovery when the file is that big?

    Inside /proc, there is more, in addition to exactly what you already mentioned:

    root@alexhost:~/vm-seed/vm22-ubuntu-v6-only-NS-@XX# ps aux | grep vm22
    libvirt+    2129  1.1  0.9 10694508 2409132 ?    Sl   Sep22 190:45 /usr/bin/qemu-system-x86_64 -name guest=vm22 [ . . . . ]
    root@alexhost:~/vm-seed/vm22-ubuntu-v6-only-NS-@XX# cd /proc/2129/fd
    root@alexhost:/proc/2129/fd# ls -lF
    total 0
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 0 -> /dev/null
    l-wx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 1 -> 'pipe:[39095]'|
    lr-x------ 1 libvirt-qemu kvm 64 Oct  3 17:16 10 -> /var/lib/libvirt/images/noble-server-cloudimg-amd64.img
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 100 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 101 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 102 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 103 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 104 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 105 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 106 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 107 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 108 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 109 -> 'anon_inode:[eventfd]'
    lr-x------ 1 libvirt-qemu kvm 64 Oct  3 17:16 11 -> '/var/lib/libvirt/images/vm22-seed.iso (deleted)'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 110 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 111 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 112 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 113 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 12 -> '/var/lib/libvirt/images/vm22.qcow2 (deleted)'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 13 -> /dev/kvm
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 14 -> anon_inode:kvm-vm
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 15 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 16 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 17 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 18 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 19 -> 'socket:[237024]'=
    l-wx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 2 -> 'pipe:[39095]'|
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 20 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 21 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 22 -> anon_inode:kvm-vcpu:0
    lr-x------ 1 libvirt-qemu kvm 64 Oct  3 17:16 23 -> anon_inode:kvm-vcpu-stats:0
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 24 -> anon_inode:kvm-vcpu:1
    lr-x------ 1 libvirt-qemu kvm 64 Oct  3 17:16 25 -> anon_inode:kvm-vcpu-stats:1
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 26 -> anon_inode:kvm-vcpu:2
    lr-x------ 1 libvirt-qemu kvm 64 Oct  3 17:16 27 -> anon_inode:kvm-vcpu-stats:2
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 28 -> anon_inode:kvm-vcpu:3
    lr-x------ 1 libvirt-qemu kvm 64 Oct  3 17:16 29 -> anon_inode:kvm-vcpu-stats:3
    lr-x------ 1 libvirt-qemu kvm 64 Oct  3 17:16 3 -> /dev/urandom
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 30 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 31 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 32 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 33 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 34 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 35 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 36 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 37 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 38 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 39 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 4 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 40 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 41 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 42 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 43 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 44 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 45 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 46 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 47 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 48 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 49 -> 'anon_inode:[eventfd]'
    l-wx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 5 -> /run/libvirt/qemu/vm22.pid
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 50 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 51 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 52 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 53 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 54 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 55 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 56 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 57 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 58 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 59 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 6 -> 'anon_inode:[signalfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 60 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 61 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 62 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 63 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 64 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 65 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 66 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 67 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 68 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 69 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 7 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 70 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 71 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 72 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 73 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 74 -> 'socket:[9500]'=
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 75 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 76 -> 'socket:[9501]'=
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 77 -> /dev/net/tun
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 78 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 79 -> /dev/vhost-net
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 8 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 80 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 81 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 82 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 83 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 84 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 85 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 86 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 87 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 88 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 89 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 9 -> /dev/ptmx
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 90 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 91 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 92 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 93 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 94 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 95 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 96 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 97 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 98 -> 'anon_inode:[eventfd]'
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 99 -> 'anon_inode:[eventfd]'
    root@alexhost:/proc/2129/fd# 
    
    Thanked by (1)cmeerw

    I hope everyone gets the servers they want!

  • @Not_Oles said:

    @cmeerw said: Unfortunately, there is no easy way to relink the file into the filesystem again, but you could copy the contents

    @cmeerw What happens if the file is a 100 GB qcow2 file instead of just Hello? Surely catting the file out of /proc can't be a practical method of recovery when the file is that big?

    Shouldn't really be an issue, except that it will take a bit longer - it's just copying the file contents. Although maybe just using cp would be better in this case (maybe even with --sparse=always, but be careful not to just copy the symlink).

    Of course, you'll need to have the extra disk space, so maybe copy the VM images one by one, and shut the VM down once you have copied the image for that VM.

    Inside /proc, there is more, in addition to exactly what you already mentioned:

    Quite a lot of eventfds... but still looks fine.

  • Not_Oles (Hosting Provider, Content Writer)

    @cmeerw

    Given

    root@alexhost:/proc/2129/fd# ls -lF
      [ . . . ]
    lr-x------ 1 libvirt-qemu kvm 64 Oct  3 17:16 11 -> '/var/lib/libvirt/images/vm22-seed.iso (deleted)'
      [ . . .  ]
    lrwx------ 1 libvirt-qemu kvm 64 Oct  3 17:16 12 -> '/var/lib/libvirt/images/vm22.qcow2 (deleted)'
    

    Assuming I can use the -L option with cp (the -L option means different things in cp(1), where it dereferences sources that are symlinks, and in ln(1), where it dereferences targets that are symlinks), I should try something like

    root@alexhost:/proc/2129/fd# cp -L --sparse=always 11 /root/vm22-seed-recovered.iso
    root@alexhost:/proc/2129/fd# cp -L --sparse=always 12 /root/vm22-recovered.qcow2
    
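    One caveat worth adding: the guest keeps writing to the deleted qcow2 while the copy runs, so the copy can come out internally inconsistent. Pausing the guest for the duration (not shutting it down, which would drop the last reference to the file) and then checking the copy seems safer, e.g.:

    virsh suspend vm22      # stop guest writes but keep qemu, and the open fd, alive
    cp -L --sparse=always /proc/2129/fd/12 /root/vm22-recovered.qcow2
    virsh resume vm22
    qemu-img check /root/vm22-recovered.qcow2   # basic consistency check of the copy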

    Thanks so much, as always!

    Thanked by (1)cmeerw

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    @Not_Oles said:
    Maintenance at Alexhost

    Alexhost says:

    Dear customers.

    We would like to inform you that our Data Center will undergo scheduled maintenance on the power infrastructure in order to enhance reliability and future-proof our systems.

    Maintenance Window:
    Date: 06 October 2025

    Time: 14:00 - 16:00 UTC+3

    Expected Impact: Your dedicated server(s) may experience downtime of up to 2 hours during this period.

    Our engineering team will work to minimize service interruption and restore operations as quickly as possible. All critical systems will be closely monitored before, during, and after the maintenance.

    We recommend that you take any necessary precautions (such as notifying your users or scheduling tasks outside of this window) to mitigate potential disruption.

    If you have any questions or require assistance, please do not hesitate to contact our support team.

    Regards,
    AlexHost Team.

    Reminder about Chisinau maintenance coming up soon! If the power to our server is interrupted during the maintenance, we might lose any vm2* qcow2 and iso files still alive in /proc. So, if anyone wants to copy their files out of /proc, it might be best to do that before the maintenance.

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    One of the Nodeseek guys reported that he can log into his VPS and can install software.

    But his qcow2 file is gone!

    Amazing! As @cmeerw says, the VPS's disk image persists inside /proc while qemu keeps it open. Wow!

    Thanked by (1)cmeerw

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    In case anyone might be interested, I posted the steps I took to possibly recover a VPS from /proc at https://fossvps.org/recover.html. A few comments are posted at https://www.nodeseek.com/post-431916-36#359.

    Best wishes!

    Thanked by (2)cmeerw tmntwitw

    I hope everyone gets the servers they want!
