@Kaito said:
I also got the IP replacement mail, but I don't really want my server location to change from Amsterdam/Frankfurt to the US (an IP change is fine, just not the location).
I have already mentioned this in the tickets; hopefully it gets noticed by @VirMach.
Location won't change. Right now the IP addresses aren't ours yet until they're assigned, so initially the geolocation will be wrong.
Some of these Virmach folks need to find a new line of work. I've been doing this for over 30 years, and I've never seen critical service outages anywhere close to what Virmach is reporting IN REAL TIME AGAIN. Our server which weathered the last fiasco wasn't quite so lucky this time around. Here's a tip: SAVE YOUR MONEY AND GO ELSEWHERE!
@NerdUno said:
Some of these Virmach folks need to find a new line of work. I've been doing this for over 30 years, and I've never seen critical service outages anywhere close to what Virmach is reporting IN REAL TIME AGAIN. Our server which weathered the last fiasco wasn't quite so lucky this time around. Here's a tip: SAVE YOUR MONEY AND GO ELSEWHERE!
SEAZ009 is offline for 2-3 days and being worked on.
SEAZ010 is offline for 1-2 days and waiting on datacenter.
Nothing else is offline. Some nodes have partial issues, but those have already been addressed. Out of all the times you could've made this comment, right now is the least "critical," and I'm assuming it just means you were recently affected on one of these nodes.
After it's back online and you're able to get your data off, create a ticket, click cancel, and provide the ticket # here, and I'll provide you a refund so you can follow your own advice: save your money and go elsewhere.
The usage is extremely spiky in that location for almost every user. If that's how most customers in that region are going to use it, then obviously we can't just start suspending everyone during peak times to stop it; it'd be better to first reach out and try to smooth out the usage where we can, to at least keep the server from locking up on Saturdays.
It's not just Saturday for me. And are you sure it's the location rather than this node? I have no problem with my Tokyo NVMe VPSes, just the storage one.
As someone mentioned here earlier, maybe there are some abusers mining or torrenting all the time.
The usage is extremely spiky in that location for almost every user. If that's how most customers in that region are going to use it, then obviously we can't just start suspending everyone during peak times to stop it; it'd be better to first reach out and try to smooth out the usage where we can, to at least keep the server from locking up on Saturdays.
It's not just Saturday for me. And are you sure it's the location rather than this node? I have no problem with my Tokyo NVMe VPSes, just the storage one.
As someone mentioned here earlier, maybe there are some abusers mining or torrenting all the time.
Yes, I'll quote again:
The usage is extremely spiky in that location for almost every user
This means every day. The usage in that location is not like the others: 90% of the people have extremely spiky traffic/disk usage. As in, it appears most users are torrenting, using it as a VPN, or running a private media server. Even the bandwidth usage is nothing like the other locations. So if you are someone who is not doing that, you're in the minority in Tokyo.
@foitin said: I have no problem with my Tokyo NVMe VPSes, just the storage one.
That's because of the "NVMe" part. Bandwidth usage will naturally be higher in proportion to the disk. We're seeing higher network usage on Tokyo NVMe versus other locations as well, but since those nodes don't have 130TB of disk it still ends up lower.
Anyway, we'll most likely be contacting the highest users today and asking them what they're running and how they can reduce usage, to at least make sure the people who were spiking the most on Saturday don't do it again next week; then we'll go from there and see how it improves. I've been monitoring everything for a couple of days now, so we should at least be able to make some level of improvement, but it'll still be nowhere close to NYC or LAX.
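For illustration only, here's a rough sketch of how that kind of "spiky vs. steady" screening could work. The per-VM daily transfer numbers, VM ids, and thresholds below are made up, not VirMach's actual tooling:

```python
# Hypothetical sketch: flag VMs whose daily traffic is "spiky" (high peak-to-mean
# ratio) or constantly heavy. Data, ids, and thresholds are illustrative assumptions.
from statistics import mean

# Assumed input: GB transferred per day over one week, keyed by VM id.
daily_gb = {
    "vm101": [5, 6, 4, 5, 7, 6, 5],         # steady light user
    "vm202": [2, 3, 2, 1, 2, 3, 180],       # quiet all week, huge Saturday spike
    "vm303": [40, 42, 39, 41, 43, 40, 44],  # constant heavy user
}

SPIKE_RATIO = 5.0   # peak day more than 5x the weekly average
HEAVY_DAILY = 30.0  # or consistently above 30 GB/day

for vm, days in daily_gb.items():
    avg = mean(days)
    peak = max(days)
    spiky = avg > 0 and peak / avg >= SPIKE_RATIO
    heavy = avg >= HEAVY_DAILY
    if spiky or heavy:
        label = "spiky" if spiky else "constant-heavy"
        print(f"{vm}: avg {avg:.1f} GB/day, peak {peak:.1f} GB ({label}) -> contact user")
```

A peak-to-mean ratio catches the Saturday spikers, while a plain daily average catches the constant heavy users.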
I wanted to comment again that I have no ticket for the FFME01 IP migration, but I do for FFME02 and FFME04... I see the secondary IP added in WHMCS, but no ticket at all.
No idea if you want to check that. id=613983
[And there is no new IP in the panel and/or a ticket for NYCB035 and NYCB036 (Dedipath), but there is an IP for NYCB014 (PSINet/Cogent) - I'll assume that's on purpose.
Nothing for DENZ001 either, but I think you mentioned that's a later batch.]
@NerdUno said:
Some of these Virmach folks need to find a new line of work. I've been doing this for over 30 years, and I've never seen critical service outages anywhere close to what Virmach is reporting IN REAL TIME AGAIN. Our server which weathered the last fiasco wasn't quite so lucky this time around. Here's a tip: SAVE YOUR MONEY AND GO ELSEWHERE!
SEAZ009 is offline for 2-3 days and being worked on.
SEAZ010 is offline for 1-2 days and waiting on datacenter.
Nothing else is offline. Some nodes have partial issues, but those have already been addressed. Out of all the times you could've made this comment, right now is the least "critical," and I'm assuming it just means you were recently affected on one of these nodes.
After it's back online and you're able to get your data off, create a ticket, click cancel, and provide the ticket # here, and I'll provide you a refund so you can follow your own advice: save your money and go elsewhere.
I'm on SEAZ002, have opened a ticket, and networking is dead as a doornail.
@NerdUno said:
I'm on SEAZ002, have opened a ticket, and networking is dead as a doornail.
SEAZ002 isn't even down. About 2% of its VMs are offline, but that's a normal amount, as some people power down their services when they're not using them. Disks are all healthy. Uptime is 81 days. CPU is at 25% or so. Active memory usage is around 50%, and someone has been bursting I/O over the last 30 minutes, but otherwise usage is low there too. Load is at 8-10, which is low.
I see one ticket about SEAZ002, so I'll assume it's yours. For some reason SolusVM assigned the IP incorrectly here, so when the anti-IP-stealing/ARP-attack protection feature was turned on it broke your connection. I'm handling the ticket now.
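As a rough illustration (not VirMach's procedure), an affected user on a Linux guest with iproute2 could confirm this kind of mismatch from inside the VM by comparing the configured address against what the panel shows and checking whether the gateway answers. The IPs below are placeholders:

```python
# Rough diagnostic sketch (assumes a Linux guest with iproute2 and ping installed):
# compare the address actually configured inside the VM against the address the
# control panel says you should have, then check whether the gateway responds.
import json
import subprocess

EXPECTED_IP = "192.0.2.10"  # placeholder: the main IP the panel says this VM should have
GATEWAY = "192.0.2.1"       # placeholder: the gateway from the panel's network info

def configured_ipv4_addresses():
    """Return globally scoped IPv4 addresses configured inside the guest."""
    out = subprocess.run(["ip", "-json", "-4", "addr", "show"],
                         capture_output=True, text=True, check=True).stdout
    addrs = []
    for iface in json.loads(out):
        for a in iface.get("addr_info", []):
            if a.get("scope") == "global":
                addrs.append(a["local"])
    return addrs

addrs = configured_ipv4_addresses()
print("configured:", addrs)
if EXPECTED_IP not in addrs:
    print(f"mismatch: panel says {EXPECTED_IP}, but the guest has {addrs}")

gw = subprocess.run(["ping", "-c", "3", "-W", "2", GATEWAY], capture_output=True)
print("gateway reachable" if gw.returncode == 0 else "gateway unreachable")
```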
@Jab said: [And there is no new IP in the panel and/or a ticket for NYCB035 and NYCB036 (Dedipath), but there is an IP for NYCB014 (PSINet/Cogent) - I'll assume that's on purpose.
Nothing for DENZ001 either, but I think you mentioned that's a later batch.]
These aren't changing.
@Jab said: I wanted to comment again that I have no ticket for the FFME01 IP migration, but I do for FFME02 and FFME04... I see the secondary IP added in WHMCS, but no ticket at all.
I'll see what happened with FFME.
@DanSummer said: @VirMach DALZ008 seems to be having some problem (not booting).
Routing issue. Trying to look into it and see what we can do; same with QN LAX. It's already been reported to the DCs as well. I don't know what specifically happened outside of the IRR issue, but it looks like a lot of routes have had problems since the day we discovered it, including even some routing from our office in LA.
Hopefully it can be escalated to the appropriate carriers. DALZ008 seems to have problems reaching NYC, Virginia, Turkey, Saudi Arabia, and most of China.
We were able to get some people off SJCZ005, but it went offline pretty quickly. I've already asked the DC to try to bring it back up, and we're monitoring so we can continue the emergency migrations.
Always interesting when people come in full of energy and ready to shit on Virmach for what they think the issue was... but then don't have the energy afterwards to even apologize or thank them for working on their issue, which was not as they portrayed it.
Must be that 30+ years of "experience" making them so very tired that they need to go take a nap.
LAX QN is back to normal. I've investigated this and it 100% looks like an internet backbone issue. Some carrier between LAX and Dallas, and earlier between LAX and LAX, was having issues.
It went from Charter LAX --> QN and INAP having speed issues, to some issues between Cloudflare LAX and OVH Virginia, and a few others in between, such as INAP NJ, having issues to varying degrees. DALZ008 and SolusVM couldn't reach each other at all; they can finally reach each other now with no changes on our end, except they still refuse to connect and DALZ008 is having its previous issues as well.
I'll look into it further but it seems like we just have to wait until the carriers involved figure it out. Seems like we were the only ones to notice early on as OVH just posted a vague network issue tonight and then quickly removed it.
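For anyone wanting to check this sort of thing from their own VM, a minimal sketch along these lines can show roughly where each path stops answering. It assumes the system traceroute binary is installed, and the destination IPs are placeholders, not real VirMach targets:

```python
# Illustrative only: run a numeric traceroute from this host to a few destinations
# and report where each trace stops responding, which is roughly how you'd narrow
# down a carrier-level problem like the one described above.
import subprocess

DESTINATIONS = {
    "nyc-test": "198.51.100.7",      # placeholder NYC target
    "virginia-test": "203.0.113.9",  # placeholder Virginia target
    "mgmt-server": "192.0.2.50",     # placeholder management server
}

for name, host in DESTINATIONS.items():
    proc = subprocess.run(
        ["traceroute", "-n", "-w", "2", "-q", "1", "-m", "20", host],
        capture_output=True, text=True,
    )
    hops = proc.stdout.splitlines()[1:]  # drop the "traceroute to ..." header line
    dead = [h for h in hops if h.strip().endswith("*")]
    print(f"{name}: {len(hops)} hops, {len(dead)} unanswered")
    if dead:
        print("  first unanswered hop:", dead[0].strip())
```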
Great, my VPS is finally off the SJCZ005 node after two months down.
@VirMach said:
We were able to get some people off SJCZ005, but it went offline pretty quickly. I've already asked the DC to try to bring it back up, and we're monitoring so we can continue the emergency migrations.
@yoursunny said:
It's two days before the IP change.
New IP subnet is not yet announced in BGP.
I predict the IP change will be delayed.
Switch configuration for most locations has been completed. The IP addresses were finally assigned to us and are getting announced. There's a small chance some locations may have to be delayed to Friday, but I think we're finally in a position where I can say that any major delay has been avoided.
Phoenix has been fully completed and is just waiting for the official switch-over on Thursday (or you can do it early).
I'll keep the network status page updated throughout the day.
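A quick way to check @yoursunny's point yourself is to ask a public route collector whether the new prefix is visible yet. Here's a minimal sketch, assuming RIPEstat's public routing-status endpoint (my suggestion, not something VirMach mentioned) and a placeholder prefix; the exact response fields may vary, so it just prints whatever comes back:

```python
# Minimal sketch: query RIPEstat's public routing-status endpoint for a prefix.
# The prefix below is a documentation placeholder, not a real VirMach block, and
# the response schema isn't relied on -- we simply print the returned data.
import json
from urllib.parse import quote
from urllib.request import urlopen

PREFIX = "192.0.2.0/24"  # placeholder: substitute the newly assigned subnet

url = f"https://stat.ripe.net/data/routing-status/data.json?resource={quote(PREFIX)}"
with urlopen(url, timeout=10) as resp:
    payload = json.load(resp)

data = payload.get("data", {})
print(f"Routing status for {PREFIX}:")
print(json.dumps(data, indent=2)[:800])  # trimmed; an unannounced prefix shows little or no visibility
```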
@Daevien said:
Always interesting when people come in full of energy and ready to shit on Virmach for what they think the issue was... but then don't have the energy afterwards to even apologize or thank them for working on their issue, which was not as they portrayed it.
Must be that 30+ years of "experience" making them so very tired that they need to go take a nap.
Actually, @Daevien, the issue was exactly as I described. All network connectivity was down FOR DAYS. I fully appreciate that isn't that long in Virmach time, but it's intolerable in most businesses including mine. By @VirMach's own admission, the misconfiguration was solely the result of actions taken by them which blew the networking out of the water. @VirMach clearly had no clue there was even a problem until I opened a ticket which went into the pit with a thousand other open tickets. Only when I later complained about it here did the matter get addressed. And I'm supposed to apologize because they restored service that I'm paying for?? You may recall that the Soup Nazi got away with bad behavior because the soup was good. Wish I could say the same for Virmach.
@yoursunny said:
It's two days before the IP change.
New IP subnet is not yet announced in BGP.
I predict the IP change will be delayed.
Switch configuration for most locations has been completed. The IP addresses were finally assigned to us and are getting announced. There's a small chance some locations may have to be delayed to Friday, but I think we're finally in a position where I can say that any major delay has been avoided.
Phoenix has been fully completed and is just waiting for the official switch-over on Thursday (or you can do it early).
I'll keep the network status page updated throughout the day.
That's good news; is Chicago on the "small chance of delay" list, or should I start obsessively hitting F5 on the network status page? :-)
@NerdUno said: Actually, @Daevien, the issue was exactly as I described. All network connectivity was down FOR DAYS. I fully appreciate that isn't that long in Virmach time, but it's intolerable in most businesses including mine. By @VirMach's own admission, the misconfiguration was solely the result of actions taken by them which blew the networking out of the water. @VirMach clearly had no clue there was even a problem until I opened a ticket which went into the pit with a thousand other open tickets. Only when I later complained about it here did the matter get addressed. And I'm supposed to apologize because they restored service that I'm paying for?? You may recall that the Soup Nazi got away with bad behavior because the soup was good. Wish I could say the same for Virmach.
There's a difference between coming here to immediately tell us to find a new line of work, mention you've been doing "this" for 30 years, and mention critical service outages while implying that your VPS, on a server not included in any outages, is part of them, all just to tell everyone else to follow advice you clearly weren't interested in yourself, and coming here to report the actual issue you were facing: that your nominally "online" service was having connectivity issues and your priority ticket had gone unanswered for 18 hours.
So congratulations, you did get it addressed a few hours sooner.
@NerdUno said: By @VirMach's own admission, the misconfiguration was solely the result of actions taken by them which blew the networking out of the water.
I'll write it out again, as you've misinterpreted it. This was due to a bug in the third-party software we use, which is not open source and has somehow been the industry standard for affordable virtual servers for a decade. I'm sure you can twist our words to make it seem like it's somehow our fault in the end.
@NerdUno said: @VirMach clearly had no clue there was even a problem until I opened a ticket which went into the pit with a thousand other open tickets
But yes, this is a case where it would have been difficult for us to identify it quickly on our own without a ticket being created. Your ticket didn't go into a pit of a thousand other tickets; we have the numbers down, it was actually in the top-20 tickets view, and we most likely would have gotten to it a few hours after your comment.
@NerdUno said: Only when I later complained about it here did the matter get addressed. And I'm supposed to apologize because they restored service that I'm paying for?? You may recall that the Soup Nazi got away with bad behavior because the soup was good. Wish I could say the same for Virmach.
Once again, congratulations. By making false claims, you grabbed my attention and were able to skip a few tickets in line, putting yourself ahead of others. All it took was my genuine concern that what you initially said might be true, unknown to us, and affecting everyone on your node, so you got me to investigate it in an inefficient manner.
The best part of this is that you stated you believed we were having many catastrophic failures, and did not take that as an opportunity to perhaps empathize. Instead you saw it as us having, according to you, the HIGHEST number of catastrophic issues you've EVER seen, and even then you could not wait more than 18 hours for your ticket to be answered before coming here to exaggerate the situation and care only about yourself. In your own words, this was something you've never seen before, yet you expected us to still get back to you about your individual issue, while everything else was on fire, and to assist you and only you first.
Hey, at least you're revealing your true motives. It doesn't seem like your goal was to help anyone else by getting them to avoid us; it was to get your service online. Which is fine.
@NerdUno said: Here's a tip: SAVE YOUR MONEY AND GO ELSEWHERE!
I still highly recommend you contact us, per your own advice as someone who has been doing "this" for 30 years. I am offering you a full refund. It's obviously your choice to make, but remember that it was offered. In this case I'll even go through and refund any store credits involved in the purchase, so that's money back to your card/PayPal.
Based on your continued comments you clearly do not trust our abilities, so I do not understand why you'd want to continue with us under any circumstances when I'm offering you a policy-exception refund.
@yoursunny said:
It's two days before the IP change.
New IP subnet is not yet announced in BGP.
I predict the IP change will be delayed.
Switch configuration for most locations has been completed. The IP addresses were finally assigned to us and are getting announced. There's a small chance some locations may have to be delayed to Friday, but I think we're finally in a position where I can say that any major delay has been avoided.
Phoenix has been fully completed and is just waiting for the official switch-over on Thursday (or you can do it early).
I'll keep the network status page updated throughout the day.
That's good news; is Chicago on the "small chance of delay" list, or should I start obsessively hitting F5 on the network status page? :-)
Chicago has two Chicagos. I'm fairly confident both will be completed soon, but QN Chicago has a higher likelihood of being completed first, based on how quickly they've previously responded to networking requests.
@yoursunny said:
It's two days before the IP change.
New IP subnet is not yet announced in BGP.
I predict the IP change will be delayed.
Switch configuration for most locations has been completed. The IP addresses were finally assigned to us and are getting announced. There's a small chance some locations may have to be delayed to Friday, but I think we're finally in a position where I can say that any major delay has been avoided.
Phoenix has been fully completed and is just waiting for the official switch-over on Thursday (or you can do it early).
I'll keep the network status page updated throughout the day.
That's good news; is Chicago on the "small chance of delay" list, or should I start obsessively hitting F5 on the network status page? :-)
Chicago has two Chicagos. I'm fairly confident both will be completed soon, but QN Chicago has a higher likelihood of being completed first, based on how quickly they've previously responded to networking requests.
I'm on CHIZ001, but I have no idea which Chicago that is (although traceroute would suggest QN, so hopefully yay!).
(Side note against all the tales of woe: aside from a brief outage caused by an unexpected IP change a few months ago, this VPS has been quietly and reliably rumbling on, so it's not all doom and gloom.)
LOL @ IP change location
good morning
^ Why?
On second thoughts, don't bother.
You can migrate users with high long-term traffic usage to other locations.
@yoursunny said: I predict the IP change will be delayed.
I really hope not, but it seems like a possibility with how long they're taking on everything.
shame, SJCZ005 offline again.
@VirMach: When will the JP location be available for ordering new VPSes?
If there's any sense, it'll be unavailable for the foreseeable future. Try Chicago instead; on second thoughts, don't!