@VirMach thank you for keeping this thread updated. I had a VPS on SEA005 which, after a couple of months of downtime, is finally up on LAX1Z012, but I am missing two IPv4 addresses on the network. Also, I thought, based on what I read, that these are eventually moving to San Jose; is that correct? Or do I request that via the ticketing system? Should I wait for the two missing IPv4 addresses to appear on their own as you work through things, or should I raise a ticket? Thanks again.
@xyz said: @VirMach thank you for keeping this thread updated. I had a VPS on SEA005 which, after a couple of months of downtime, is finally up on LAX1Z012, but I am missing two IPv4 addresses on the network. Also, I thought, based on what I read, that these are eventually moving to San Jose; is that correct? Or do I request that via the ticketing system? Should I wait for the two missing IPv4 addresses to appear on their own as you work through things, or should I raise a ticket? Thanks again.
If you didn't get your additional IPs, it means there were none left at the time it was fixed, so it had to be done that way. Maybe a dozen or so people were affected, if I recall correctly. Wait a little bit until we have extra IPs available and then contact us, or if you need them immediately, feel free to contact us sooner, but let us know, and we can move you anywhere with available IPs.
@xTom said: We have PNI with Cloudflare in Frankfurt; we didn't receive any reports from other customers.
Thanks for the answer; I guess it's something VirMach-related then, as the same Debian packages/versions running on other hosts don't drop the Cloudflared/Argo tunnel this often.
Dear @VirMach, should the network in Frankfurt look like this? I know jackshit about how this should look, but getting 1-97 ms ICMP pings from the gateway is not good, right?
# ip -4 r s | grep "default"
default via 213.232.115.1 dev ens3 proto dhcp src 213.232.115.X metric 100
# ping 213.232.115.1
PING 213.232.115.1 (213.232.115.1) 56(84) bytes of data.
64 bytes from 213.232.115.1: icmp_seq=1 ttl=64 time=1.27 ms
64 bytes from 213.232.115.1: icmp_seq=2 ttl=64 time=22.4 ms
64 bytes from 213.232.115.1: icmp_seq=3 ttl=64 time=8.10 ms
64 bytes from 213.232.115.1: icmp_seq=4 ttl=64 time=8.46 ms
64 bytes from 213.232.115.1: icmp_seq=5 ttl=64 time=28.3 ms
64 bytes from 213.232.115.1: icmp_seq=6 ttl=64 time=21.5 ms
64 bytes from 213.232.115.1: icmp_seq=7 ttl=64 time=7.74 ms
64 bytes from 213.232.115.1: icmp_seq=8 ttl=64 time=8.19 ms
64 bytes from 213.232.115.1: icmp_seq=9 ttl=64 time=5.82 ms
64 bytes from 213.232.115.1: icmp_seq=10 ttl=64 time=30.2 ms
64 bytes from 213.232.115.1: icmp_seq=11 ttl=64 time=6.61 ms
64 bytes from 213.232.115.1: icmp_seq=12 ttl=64 time=2.38 ms
64 bytes from 213.232.115.1: icmp_seq=13 ttl=64 time=2.31 ms
64 bytes from 213.232.115.1: icmp_seq=14 ttl=64 time=1.02 ms
64 bytes from 213.232.115.1: icmp_seq=15 ttl=64 time=32.8 ms
64 bytes from 213.232.115.1: icmp_seq=16 ttl=64 time=60.1 ms
64 bytes from 213.232.115.1: icmp_seq=17 ttl=64 time=4.55 ms
64 bytes from 213.232.115.1: icmp_seq=18 ttl=64 time=6.74 ms
64 bytes from 213.232.115.1: icmp_seq=19 ttl=64 time=40.1 ms
64 bytes from 213.232.115.1: icmp_seq=20 ttl=64 time=2.35 ms
64 bytes from 213.232.115.1: icmp_seq=21 ttl=64 time=8.35 ms
64 bytes from 213.232.115.1: icmp_seq=22 ttl=64 time=2.29 ms
64 bytes from 213.232.115.1: icmp_seq=23 ttl=64 time=42.5 ms
64 bytes from 213.232.115.1: icmp_seq=24 ttl=64 time=2.44 ms
64 bytes from 213.232.115.1: icmp_seq=25 ttl=64 time=5.41 ms
64 bytes from 213.232.115.1: icmp_seq=26 ttl=64 time=1.08 ms
64 bytes from 213.232.115.1: icmp_seq=27 ttl=64 time=2.41 ms
64 bytes from 213.232.115.1: icmp_seq=28 ttl=64 time=77.2 ms
64 bytes from 213.232.115.1: icmp_seq=29 ttl=64 time=3.92 ms
64 bytes from 213.232.115.1: icmp_seq=30 ttl=64 time=1.31 ms
64 bytes from 213.232.115.1: icmp_seq=31 ttl=64 time=21.9 ms
64 bytes from 213.232.115.1: icmp_seq=32 ttl=64 time=51.1 ms
64 bytes from 213.232.115.1: icmp_seq=33 ttl=64 time=3.41 ms
64 bytes from 213.232.115.1: icmp_seq=34 ttl=64 time=8.07 ms
64 bytes from 213.232.115.1: icmp_seq=35 ttl=64 time=8.30 ms
64 bytes from 213.232.115.1: icmp_seq=36 ttl=64 time=97.9 ms
64 bytes from 213.232.115.1: icmp_seq=37 ttl=64 time=3.61 ms
64 bytes from 213.232.115.1: icmp_seq=38 ttl=64 time=5.39 ms
64 bytes from 213.232.115.1: icmp_seq=39 ttl=64 time=8.02 ms
64 bytes from 213.232.115.1: icmp_seq=40 ttl=64 time=1.94 ms
64 bytes from 213.232.115.1: icmp_seq=41 ttl=64 time=49.6 ms
64 bytes from 213.232.115.1: icmp_seq=42 ttl=64 time=2.17 ms
64 bytes from 213.232.115.1: icmp_seq=43 ttl=64 time=9.16 ms
64 bytes from 213.232.115.1: icmp_seq=44 ttl=64 time=1.07 ms
64 bytes from 213.232.115.1: icmp_seq=45 ttl=64 time=2.32 ms
64 bytes from 213.232.115.1: icmp_seq=46 ttl=64 time=22.4 ms
64 bytes from 213.232.115.1: icmp_seq=47 ttl=64 time=2.19 ms
^C
--- 213.232.115.1 ping statistics ---
47 packets transmitted, 47 received, 0% packet loss, time 46068ms
rtt min/avg/max/mdev = 1.020/15.883/97.910/21.362 ms
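A side note on reading captures like this: a short awk filter can summarize the per-probe RTTs so two runs are easier to compare. This is a sketch assuming the standard iputils output format; `ping.log` is a hypothetical saved copy of output like the above:

```shell
# Summarize per-probe RTTs from a saved iputils ping log.
# Expects lines like: "64 bytes from 1.2.3.4: icmp_seq=1 ttl=64 time=1.27 ms"
awk -F'time=' '/icmp_seq/ {
    split($2, a, " ")             # a[1] is the RTT in milliseconds
    sum += a[1]; n++
    if (a[1] > max) max = a[1]
} END {
    printf "probes=%d avg=%.2f max=%.2f\n", n, sum / n, max
}' ping.log
```

Worth keeping in mind when judging the numbers: routers often rate-limit or deprioritize ICMP addressed to themselves, so a jittery ping to the gateway combined with a clean ping through it to a host beyond usually points at control-plane policing rather than a real forwarding problem.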
Okay, I am even more confused: it seems there are Google DNS routes pushed via DHCP through the same gateway...
# ip -4 r s | grep "8.8"
8.8.4.4 via 213.232.115.1 dev ens3 proto dhcp src 213.232.115.X metric 10
8.8.8.8 via 213.232.115.1 dev ens3 proto dhcp src 213.232.115.X metric 100
# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=61 time=0.533 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=61 time=0.732 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=61 time=0.979 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=61 time=0.879 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=61 time=0.579 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=61 time=0.558 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=61 time=0.568 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=61 time=0.662 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=61 time=0.776 ms
64 bytes from 8.8.8.8: icmp_seq=10 ttl=61 time=0.890 ms
64 bytes from 8.8.8.8: icmp_seq=11 ttl=61 time=0.658 ms
64 bytes from 8.8.8.8: icmp_seq=12 ttl=61 time=0.746 ms
64 bytes from 8.8.8.8: icmp_seq=13 ttl=61 time=0.591 ms
64 bytes from 8.8.8.8: icmp_seq=14 ttl=61 time=0.569 ms
64 bytes from 8.8.8.8: icmp_seq=15 ttl=61 time=0.719 ms
64 bytes from 8.8.8.8: icmp_seq=16 ttl=61 time=0.493 ms
64 bytes from 8.8.8.8: icmp_seq=17 ttl=61 time=0.673 ms
64 bytes from 8.8.8.8: icmp_seq=18 ttl=61 time=0.836 ms
64 bytes from 8.8.8.8: icmp_seq=19 ttl=61 time=0.544 ms
64 bytes from 8.8.8.8: icmp_seq=20 ttl=61 time=0.814 ms
64 bytes from 8.8.8.8: icmp_seq=21 ttl=61 time=0.680 ms
64 bytes from 8.8.8.8: icmp_seq=22 ttl=61 time=0.781 ms
64 bytes from 8.8.8.8: icmp_seq=23 ttl=61 time=0.625 ms
64 bytes from 8.8.8.8: icmp_seq=24 ttl=61 time=0.730 ms
64 bytes from 8.8.8.8: icmp_seq=25 ttl=61 time=0.770 ms
64 bytes from 8.8.8.8: icmp_seq=26 ttl=61 time=0.825 ms
64 bytes from 8.8.8.8: icmp_seq=27 ttl=61 time=0.507 ms
64 bytes from 8.8.8.8: icmp_seq=28 ttl=61 time=0.704 ms
64 bytes from 8.8.8.8: icmp_seq=29 ttl=61 time=0.672 ms
64 bytes from 8.8.8.8: icmp_seq=30 ttl=61 time=0.866 ms
64 bytes from 8.8.8.8: icmp_seq=31 ttl=61 time=0.783 ms
64 bytes from 8.8.8.8: icmp_seq=32 ttl=61 time=0.686 ms
64 bytes from 8.8.8.8: icmp_seq=33 ttl=61 time=0.627 ms
64 bytes from 8.8.8.8: icmp_seq=34 ttl=61 time=0.659 ms
^C
--- 8.8.8.8 ping statistics ---
34 packets transmitted, 34 received, 0% packet loss, time 33747ms
rtt min/avg/max/mdev = 0.493/0.697/0.979/0.118 ms
rDNS has been added in SolusVM for all new blocks, but I've seen reports of it not working. I haven't had time to go through these specifically, so it would be helpful if anyone here could try the feature and let me know what issues you're seeing and on which node, so we can tell whether it's all broken or just clusters of problems.
Receiving "Could not connect to DNS server. Please try again later" for a NY server in the IP range 141.11.22x.xxx.
Also, I installed Debian 12 on a RYZE.NYC-B013.VMS VM and I'm getting a segfault when I run apt upgrade. (Not happening with other templates, so I assume it's a problem with the template.)
@JoeMerit said: Also, I installed Debian 12 on a RYZE.NYC-B013.VMS VM and I'm getting a segfault when I run apt upgrade. (Not happening with other templates, so I assume it's a problem with the template.)
This feels like déjà vu. We added Debian 12 in the past and it had the same problem, didn't it? Then we removed it because we were going to build our own version rather than use SolusVM's; that never happened, I forgot about it, and we went back to adding the SolusVM version that's still broken.
@FrankZ said: Could not connect to DNS server. Please try again later
Yeah, I forgot an important step for all of these: I have to go one by one, mapping each to the specific primary ID under which it was added to PowerDNS. It's a SolusVM feature, and there's no way to do it that's less labor-intensive.
@imok said:
Why the $30 setup fee for Tokyo VPSes?
Is it a good location for a VPN to connect from China?
We still get a good number of purchases, it helps balance out stock levels, and it balances out abuse. It will likely be changed again, as we usually do, based on other factors.
@imok said:
Given that $30 is too expensive, is there any other recommended location to set up a private VPN?
From China or anywhere?
If anywhere, just select the nearest location to you.
From China, this is the order of preference I've usually seen:
Tokyo
Hong Kong
Vietnam (Greencloud)
Singapore
US West Los Angeles
US West Seattle
@imok said:
Given that $30 is too expensive, is there any other recommended location to set up a private VPN?
Novosibirsk, Russia
Test IP: 91.188.223.6
I really wanted to do this location a while back, as I was looking at all the global routes and thought it would be an interesting location, but it definitely got complicated after 2022.
@imok said:
Given that $30 is too expensive, is there any other recommended location to set up a private VPN?
Novosibirsk, Russia
Test IP: 91.188.223.6
I really wanted to do this location a while back, as I was looking at all the global routes and thought it would be an interesting location, but it definitely got complicated after 2022.
Hey, there are no forever friends or enemies. Russia will be back in one way or another.
@imok said:
I started working with my Phoenix Ryzen VPS again. I installed Debian 11, upgraded it, and it stopped working.
Reinstalling Debian 11... I suppose I should not upgrade the OS.
Hmm, I do see a history of things like this happening with Debian: an update, then a similar bug gets reported, it gets "fixed," and then it repeats. It seems like it could be caused by almost any package. It gets even more fun when you add in virtualization, Ryzen, and specific motherboards.
It sure is going to be fun to see where we're at with everything in 10, 20, 30 years.
So the minimal ISO is confirmed working? It must be missing whatever's causing this. If someone can confirm the exact setup with the ISO that works best, I should be able to just make a template.
@imok said:
I started working with my Phoenix Ryzen VPS again. I installed Debian 11, upgraded it, and it stopped working.
Reinstalling Debian 11... I suppose I should not upgrade the OS.
Hmm, I do see a history of things like this happening with Debian: an update, then a similar bug gets reported, it gets "fixed," and then it repeats. It seems like it could be caused by almost any package. It gets even more fun when you add in virtualization, Ryzen, and specific motherboards.
It sure is going to be fun to see where we're at with everything in 10, 20, 30 years.
So the minimal ISO is confirmed working? It must be missing whatever's causing this. If someone can confirm the exact setup with the ISO that works best, I should be able to just make a template.
I think so, yes. It wasn't working for a while, but now it is.
@mbk said: @VirMach Four of my servers were moved to NYC: two from Dallas and two from Atlanta. The network status page said that nodes in ATLZ010 were not moving?
I don't want all those servers in NYC. Is it possible to get them migrated away from NYC?
Very long answer, but some variation of this will occur, yes.
Welp, this is even more scary.
Okay, it's gone after 2 minutes.
STOP BREAKING THE DATABASE!
And those have proper LAN timings, less than 1 ms?
@VirMach rDNS works fine on:
When VirMach SG
I can be remote hands
I'll buy 10 of those sir
Sorry, 10 of my hands or 10 VPSes?
Is it too much to ask for both?
I think it's best to use the ISO for the latest version now.
Alright, thanks for the reply. I guess I'll wait on renewing these for now, but I hope something good happens before BF/CM.
My node was migrated from NYCB027 to NYCB035, and there is still no network connectivity.
What if we had a bunch of servers migrated to NY and we actually like it that way? Can they stay?