[2022] ★ VirMach ★ RYZEN ★ NVMe ★★ The Epic Sales Offer Thread ★★


Comments

  • ATLZ007 was one of the ones I had down, so that's 14/14 up now.
    Still have an issue on 2 of them where some processes crash; not sure why.

  • AlwaysSkint OG, Senpai
    edited August 2022

    @FrankZ said: ATL just came back up

    Thanks for the heads up - hopefully it will stay up this time. ;)
    That's 12/12 VPS now (- likely due to distance from China).

    That just leaves:

    • @vyas to get his Transfer
    • Missing an additional two IP addresses on one VPS <-- have to keep shutting it down 'cos it gets started up by 'maintenance'.
    • rDNS

    Have been trying to migrate outta Dallas, with no luck, but this is small fry stuff. No big deal.

    Thanked by (2)FrankZ vyas

    It wisnae me! A big boy done it and ran away.
    NVMe2G for life! until death (the end is nigh)

  • edited August 2022

    @tridinebandim said:
    i have 4 vps with virmach which didn't get a hiccup during the transition. it seems one of them is hiccuping now :)

    what is your success score like? mine is 3/4

    edit: did a last check =)

    5/6, but I would have traded all 5 (and more!) of the working ones for the one that has been broken for 50 days.

    Amusingly, one of the 5 was also migrated to the wrong data center, but I hardly care; that one was probably the second most important to me, so I'd rather just have it working...

    EDIT: Wait, what do you mean by hiccup? My other VPS that I cared about was down for >24 hours; I dunno if that counts as a hiccup. Honestly I'm just happy to have that one back...

  • willie OG
    edited August 2022

    I guess I'm currently 2/4, depending. 2 of 3 VPS up (one has been up and down at random, one is partly blacklisted by Comcast), 1 VPS on a node that's been disconnected for at least a month, and vpsshared down, but idk if that "counts".

  • @willie said:
    I guess I'm currently 2/4, depending. 2 of 3 VPS up (one has been up and down at random, one is partly blacklisted by Comcast), 1 VPS on a node that's been disconnected for at least a month, and vpsshared down, but idk if that "counts".

    I was about to report 3 VMs down and a dedi re-install I'd been awaiting for about 10 days, until I looked again and found another 2 VMs have since gone unreachable. The network status page was updated today, so that's a good sign, but it's less helpful if you don't have a record of which node each VM is on. Here's to hoping all will be forgiven with some super-duper BF deals. :-)
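
    Not VirMach-specific, but for keeping that record yourself, here is a minimal sketch of one way to do it: a hand-maintained CSV mapping each VM to its node, plus a quick TCP check against the SSH port. The file name, column layout, and port are illustrative assumptions, not anything the panel exports.

    ```python
    #!/usr/bin/env python3
    """Minimal sketch: keep a local VM -> node record and check which VMs still answer.
    The CSV layout (label,ip,node) and the SSH-port check are illustrative assumptions."""
    import csv
    import socket

    INVENTORY = "vms.csv"  # first line: label,ip,node ; then one VM per line, e.g. web1,203.0.113.10,LAXA031


    def is_up(ip: str, port: int = 22, timeout: float = 3.0) -> bool:
        """Treat a VM as 'up' if its SSH port accepts a TCP connection."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False


    def main() -> None:
        with open(INVENTORY, newline="") as f:
            for row in csv.DictReader(f):
                state = "up" if is_up(row["ip"]) else "DOWN"
                print(f"{row['label']:<12} {row['node']:<10} {row['ip']:<16} {state}")


    if __name__ == "__main__":
        main()
    ```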

    Thanked by (1)Ademan
  • meanwhile invoices are coming. should i pay them even with no answer from support and the servers down?...

    100 USD/mo is not millions, but I would like to know if my servers are coming back =S

    Thanked by (1)Ademan
  • edited August 2022

    My biggest issue is I got 2 dedicated replacement servers that have CentOS 7 on them and no way to install another OS.

    I have had these going on 10 days now and can't do anything with them.

    Appreciate the effort, but they're worthless if I can't install my OS on them.

    Thanked by (1)dedicados
  • skorous OG, Senpai

    @acidpuke said:
    My biggest issue is I got 2 dedicated replacement servers that have CentOS 7 on them and no way to install another OS.

    I have had these going on 10 days now and can't do anything with them.

    Appreciate the effort, but they're worthless if I can't install my OS on them.

    What OS do you want on it?

  • vyas OG, Senpai
    edited August 2022

    Renewed my VPS on the FFME001.VIRM.AC node,
    updated the OS, rebooted, and poof! Server offline.
    Have raised a (non-priority) ticket - at this time, I do not want to risk even a $15 charge, even though I know it is not an error on my part.

    Wait and watch.


    p.s: where can I find the SolusVM login credentials (email probably deleted, oops!)

  • edited August 2022

    Ryzen migrate is not working.

    I am trying to migrate from Tokyo to Frankfurt.

    Anyone have the same migration issue?

  • Mason Administrator, OG

    @acidpuke said:
    My biggest issue is I got 2 dedicated replacement servers that have CentOS 7 on them and no way to install another OS.

    I have had these going on 10 days now and can't do anything with them.

    Appreciate the effort, but they're worthless if I can't install my OS on them.

    Same here. I may attempt what @AlwaysSkint suggested a little while back and just install Debian over top of CentOS. What's the worst that could happen -- I break it and have to wait for dedi controls anyway? Worth a shot, I guess.
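
    For anyone else weighing the same move, the usual route for "Debian over CentOS" without an ISO mount is to stage the Debian netboot installer in /boot and add a GRUB entry for it, then drive the installer over the server's console (VNC/IPMI). Below is a rough sketch, not a tested procedure: the suite, mirror, BIOS/grub2 assumption, and the /boot layout are all guesses to adjust.

    ```python
    #!/usr/bin/env python3
    """Rough sketch: stage the Debian netboot installer on a CentOS 7 box and add a
    GRUB2 entry so the next reboot can drop into it. Assumes BIOS boot, grub2, and
    console access to drive the installer; adjust SUITE/paths to taste."""
    import subprocess
    import urllib.request

    SUITE = "bullseye"
    BASE = (f"http://deb.debian.org/debian/dists/{SUITE}/main/"
            "installer-amd64/current/images/netboot/debian-installer/amd64/")

    # Kernel/initrd paths inside the menuentry are relative to the partition that
    # holds /boot (the common CentOS 7 layout); if /boot is NOT a separate
    # partition, prefix them with /boot instead.
    MENUENTRY = """
    menuentry 'Debian netboot installer' {
        linux /debian-installer-linux
        initrd /debian-installer-initrd.gz
    }
    """


    def fetch(name: str, dest: str) -> None:
        print(f"fetching {BASE + name} -> {dest}")
        urllib.request.urlretrieve(BASE + name, dest)


    def main() -> None:
        fetch("linux", "/boot/debian-installer-linux")
        fetch("initrd.gz", "/boot/debian-installer-initrd.gz")

        # Append the custom entry and regenerate the config (CentOS 7, BIOS layout).
        with open("/etc/grub.d/40_custom", "a") as f:
            f.write(MENUENTRY)
        subprocess.run(["grub2-mkconfig", "-o", "/boot/grub2/grub.cfg"], check=True)
        print("Reboot and pick 'Debian netboot installer' from the GRUB menu.")


    if __name__ == "__main__":
        main()
    ```

    If it goes sideways, you're no worse off than waiting for dedi controls, which is more or less the bet described above.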

    @FrankZ said: One VM on ATLZ007 that is down, (EDIT: ATL just came back up)

    Ugh, finally! Was worried it'd be a while before getting resolved. They seem to be playing whack-a-mole, though. Had 2 up and 2 down for a while, then ATLZ007 came back, but now I'm noticing another down, so still 2 up, 2 down. Currently affected nodes are LAXA031 and TPAZ002, neither of which appear on the network issues/announcements page.

    Thanked by (1)FrankZ

    Head Janitor @ LES • About • Rules • Support

  • cybertech OG, Benchmark King

    @add_iT said:
    Ryzen migrate is not working.

    I am trying to migrate from Tokyo to Frankfurt.

    Anyone have the same migration issue?

    yeah, doesn't work; I think it maybe won't if the current VPS is offline.

    I bench YABS 24/7/365 unless it's a leap year.

  • @dedicados said:
    VirMach

    About:
    Last Active August 15

    :'( :s

    The Last Active tracker doesn't actually work properly on LES/LET (maybe just if you use an adblocker?). As I type this, mine says "August 16". Maybe it will change once I post.

  • @MallocVoidstar said:

    @dedicados said:
    VirMach

    About:
    Last Active August 15

    :'( :s

    The Last Active tracker doesn't actually work properly on LES/LET (maybe just if you use an adblocker?). As I type this, mine says "August 16". Maybe it will change once I post.

    Might be displaying in your local time zones?

  • Hope FFME006 is fixed soon

  • @Ademan said:

    @MallocVoidstar said:

    @dedicados said:
    VirMach

    About:
    Last Active August 15

    :'( :s

    The Last Active tracker doesn't actually work properly on LES/LET (maybe just if you use an adblocker?). As I type this, mine says "August 16". Maybe it will change once I post.

    Might be displaying in your local time zones?

    Not sure which timezone is over a week behind UTC ;)

  • VirMach Hosting Provider
    edited August 2022

    Sorry guys, I didn't really provide an update for a while and I'm sure it doesn't look good while there are also several outages. We're aware of all of them and working on permanent solutions while balancing everything else. Here's some information on what we have achieved so far since my last message; it's possible I may repeat a few things as I haven't looked.

    • Stuck VMs: These were part of the 0.6% I mentioned a few times. We've made great progress on these and also located some others facing different strange issues and began fixing those as well. For example there were some VMs that migrated properly but the database wasn't properly updated to reflect the new server they were on (a few dozen of these cases slipped through.)
    • Templates: All template options should have been corrected for ALL services, meaning at least on SolusVM you should now have access to all the proper templates (now whether they have other issues is another matter.) We're also going to normalize all services and template options finally on WHMCS (it's on the list.)
    • Dedicated servers: We've continued adding these and delivering them where possible. I know there's still people waiting if they had more specific requests and we're currently still working on getting more 32GB options in certain locations requested. I'd say we're somewhere around 60% right now if not higher in terms of what we've sent out and close to done when it comes to getting all the servers we require to finish them out.
    • More Ryzen: Quietly in the background we've been getting the last dozen or two servers completed and these will be going out to locations that need them (recurring issues) as well as locations low in stock, which should potentially give people a chance to roll their service into a new highly desired location in the upcoming weeks.
    • Storage: We fell behind again on these but they're pretty much ready. Once we close out some of the issues that took longer than expected, these are essentially ready to send out. We're still working on figuring out the problem with the big storage node in NYC so if that situation regresses any further we may have to send Amsterdam's server to NYC.
    • Tickets/Organization: A lot of work has gone in to organize and segment the tickets where we resolve a problem and then begin mass replying for those issues so we can get back to a normal ticket workflow. A lot of work has gone in with organizing everything else as we prepare to return to normalcy. This also includes cleaning up the office from server parts scattered everywhere so we can try getting in serious new hires to the office.

    • Other/Attention to detail: I guess this should have its own bullet point. I've personally done a lot of work on the things we've put off for some time because we were focusing on the bigger issues. This includes a lot of things on the frontend and backend (things you don't normally hear about like the non-tech aspects of running a business.) We had to focus on cleaning up and organizing some stuff like anti-fraud, chargebacks, abuse reports, balancing expenses, etc. I also had to go back and pick out some smaller but important tasks from the list and had to sink a good amount of time into organizing our websites, and gearing up to go back to being a functional company that also has to market and sell its products. Some of the stuff is already discussed above such as making sure template options work. I've also personally started going through every single VM again that is in an offline state and began re-organizing a new list to make sure everyone's having a good time (being able to use their service) and digging into logs and everything at a higher level than just deferring it due to it being time-consuming. This also means looking into things like IPv6, rDNS, and trying to have a more solid plan for them instead of just saying "coming soon." Something else worth mentioning is that this also... (how many times have I said also?) includes things like looking at template issues like Windows and trying to fix them. (Edit: Also,) things like looking at nodes more specifically and fixing scripts, handling more abuse affecting certain nodes, and so on. Basically, we've been moving back to micro instead of just macro in terms of what we handle to some degree, as it was beginning to collectively become a big issue.

    All the maintenance is going very poorly but I hope we've done a semi-decent job of trying to communicate it. I haven't updated the pages personally but relayed information. We got ATLZ007 to have a heartbeat yesterday but it looks like it's gone right back down. Atlanta is not in a good state. It's difficult to get the techs to do anything and they keep misplacing our stuff. The plan was to move everyone to Tampa but that's having its own issues.

    Tampa and Chicago issues have mostly been delayed on our end and I keep trying to get to them; hopefully I'll have some meaningful plan set in place by today, but I keep saying that to myself every day. We're still getting acclimated to the unexpected additional work as a result of what happened near the beginning of the month, and I have a lot more meetings and other work/paperwork as a result. I've gone back to working all day again, so I'm just playing catch-up now on the week or so that I cut back hours for health reasons.

    San Jose, we're trying to get button presses there again but obviously this is a terrible situation to constantly be in so we're trying to move those to Los Angeles once we get them back online until we can ship them out and fix the issues causing button presses to be required.

    We still have around 9 servers we didn't expect to go down that have been down to some degree, or have at the very least had SolusVM issues, for quite some time now, and I apologize.

    Any questions, feel free to ask but I'm going to disappear again from here for at least the next couple days, sorry.

  • @VirMach said: Storage: We fell behind again on these but they're pretty much ready. Once we close out some of the issues that took longer than expected, these are essentially ready to send out. We're still working on figuring out the problem with the big storage node in NYC so if that situation regresses any further we may have to send Amsterdam's server to NYC.

    well i am stuck in there.

    thanks for the info

  • Jab Senpai

    @VirMach said: Any questions, feel free to ask but I'm going to disappear again from here for at least the next couple days, sorry.

    Do you sleep enough!?

    Haven't bought a single service in VirMach Great Ryzen 2022 - 2023 Flash Sale.
    https://lowendspirit.com/uploads/editor/gi/ippw0lcmqowk.png

  • VirMach Hosting Provider

    We were able to get button presses on San Jose 4 & 8, which were having problems, but an hour or two later San Jose 5 decided to go offline. San Jose cursed? I need to get more servers up at LAX and then we'll send out migration emails. I didn't realize how much LAX has filled up.

    @dedicados said:

    @VirMach said: Storage: We fell behind again on these but they're pretty much ready. Once we close out some of the issues that took longer than expected, these are essentially ready to send out. We're still working on figuring out the problem with the big storage node in NYC so if that situation regresses any further we may have to send Amsterdam's server to NYC.

    well i am stuck in there.

    thanks for the info

    Making progress on NYCB004S (storage) today. It should hopefully be online soon.

    @Jab said:

    @VirMach said: Any questions, feel free to ask but I'm going to disappear again from here for at least the next couple days, sorry.

    Do you sleep enough!?

    I got a good few days, I think I'm ready to go back to not getting enough =)

  • imok OG
    edited August 2022

    My SSD4G is back online :)

    And it seems it's in Tampa now.

    And Ryzen?

    And it's free.

    Nice.

  • @acidpuke said:
    My biggest issue is I got 2 dedicated replacement servers that have CentOS 7 on them and no way to install another OS.

    I have had these going on 10 days now and can't do anything with them.

    Appreciate the effort, but they're worthless if I can't install my OS on them.

    I'm in the exact same boat, unlucky but it is what it is. At least most of my VPS are up.

  • I have a dedi too.

    Unlike you, mine won't come back :(

  • edited August 2022

    @VirMach said:
    San Jose cursed? I need to get more servers up at LAX and then we'll send out migration emails.

    I have working service in San Jose.
    Would it be force-migrated to Los Angeles, or can it stay there?

  • @VirMach said: Tickets/Organization: A lot of work has gone in to organize and segment the tickets where we resolve a problem and then begin mass replying for those issues so we can get back to a normal ticket workflow. A lot of work has gone in with organizing everything else as we prepare to return to normalcy. This also includes cleaning up the office from server parts scattered everywhere so we can try getting in serious new hires to the office.

    Hi @VirMach, everything will get better. I hope you can update at least once a week. By the way, the VPSes on LAXA031 and LAXA032 are both offline.

  • @VirMach said:
    Sorry guys, I didn't really provide an update for a while and I'm sure it doesn't look good while there are also several outages. We're aware of all of them and working on permanent solutions while balancing everything else. Here's some information on what we have achieved so far since my last message; it's possible I may repeat a few things as I haven't looked.

    • Stuck VMs: These were part of the 0.6% I mentioned a few times. We've made great progress on these and also located some others facing different strange issues and began fixing those as well. For example there were some VMs that migrated properly but the database wasn't properly updated to reflect the new server they were on (a few dozen of these cases slipped through.)
    • Templates: All template options should have been corrected for ALL services, meaning at least on SolusVM you should now have access to all the proper templates (now whether they have other issues is another matter.) We're also going to normalize all services and template options finally on WHMCS (it's on the list.)
    • Dedicated servers: We've continued adding these and delivering them where possible. I know there's still people waiting if they had more specific requests and we're currently still working on getting more 32GB options in certain locations requested. I'd say we're somewhere around 60% right now if not higher in terms of what we've sent out and close to done when it comes to getting all the servers we require to finish them out.
    • More Ryzen: Quietly in the background we've been getting the last dozen or two servers completed and these will be going out to locations that need them (recurring issues) as well as locations low in stock, which should potentially give people a chance to roll their service into a new highly desired location in the upcoming weeks.
    • Storage: We fell behind again on these but they're pretty much ready. Once we close out some of the issues that took longer than expected, these are essentially ready to send out. We're still working on figuring out the problem with the big storage node in NYC so if that situation regresses any further we may have to send Amsterdam's server to NYC.
    • Tickets/Organization: A lot of work has gone in to organize and segment the tickets where we resolve a problem and then begin mass replying for those issues so we can get back to a normal ticket workflow. A lot of work has gone in with organizing everything else as we prepare to return to normalcy. This also includes cleaning up the office from server parts scattered everywhere so we can try getting in serious new hires to the office.

    • Other/Attention to detail: I guess this should have its own bullet point. I've personally done a lot of work on the things we've put off for some time because we were focusing on the bigger issues. This includes a lot of things on the frontend and backend (things you don't normally hear about like the non-tech aspects of running a business.) We had to focus on cleaning up and organizing some stuff like anti-fraud, chargebacks, abuse reports, balancing expenses, etc. I also had to go back and pick out some smaller but important tasks from the list and had to sink a good amount of time into organizing our websites, and gearing up to go back to being a functional company that also has to market and sell its products. Some of the stuff is already discussed above such as making sure template options work. I've also personally started going through every single VM again that is in an offline state and began re-organizing a new list to make sure everyone's having a good time (being able to use their service) and digging into logs and everything at a higher level than just deferring it due to it being time-consuming. This also means looking into things like IPv6, rDNS, and trying to have a more solid plan for them instead of just saying "coming soon." Something else worth mentioning is that this also... (how many times have I said also?) includes things like looking at template issues like Windows and trying to fix them. (Edit: Also,) things like looking at nodes more specifically and fixing scripts, handling more abuse affecting certain nodes, and so on. Basically, we've been moving back to micro instead of just macro in terms of what we handle to some degree, as it was beginning to collectively become a big issue.

    All the maintenance is going very poorly but I hope we've done a semi-decent job of trying to communicate it. I haven't updated the pages personally but relayed information. We got ATLZ007 to have a heartbeat yesterday but it looks like it's gone right back down. Atlanta is not in a good state. It's difficult to get the techs to do anything and they keep misplacing our stuff. The plan was to move everyone to Tampa but that's having its own issues.

    Tampa and Chicago issues have mostly been delayed on our end and I keep trying to get to them; hopefully I'll have some meaningful plan set in place by today, but I keep saying that to myself every day. We're still getting acclimated to the unexpected additional work as a result of what happened near the beginning of the month, and I have a lot more meetings and other work/paperwork as a result. I've gone back to working all day again, so I'm just playing catch-up now on the week or so that I cut back hours for health reasons.

    San Jose, we're trying to get button presses there again but obviously this is a terrible situation to constantly be in so we're trying to move those to Los Angeles once we get them back online until we can ship them out and fix the issues causing button presses to be required.

    We still have around 9 servers we didn't expect to go down that have been down to some degree, or have at the very least had SolusVM issues, for quite some time now, and I apologize.

    Any questions, feel free to ask but I'm going to disappear again from here for at least the next couple days, sorry.

    TYOC40 is still down even now; pls at least give us a chance to migrate it to another location as well.

  • ^ Did you really have to quote the whole thing? :|

    It wisnae me! A big boy done it and ran away.
    NVMe2G for life! until death (the end is nigh)

  • cybertech OG, Benchmark King

    @AlwaysSkint said:
    ^ Did you really have to quote the whole thing? :|

    yes

    Thanked by (2)kheng86 rockinmusicgv

    I bench YABS 24/7/365 unless it's a leap year.

  • edited August 2022

    @VirMach said: Any questions, feel free to ask but I'm going to disappear again from here for at least the next couple days, sorry.

    I really, desperately want my data back from my VPS that's been down for 52 days, but as I described in another post, though my VPS is now booting again, /dev/vda is extremely munged. I've downloaded an image of /dev/vda and tried to recover it, but even though (with a lot of work) I was able to mount it, none of the relevant data was accessible.
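
    (For anyone attempting the same thing: the fiddly part of mounting a raw /dev/vda image is usually finding the partition offset. Below is a rough sketch that reads a classic MBR partition table and prints a read-only loop-mount command per partition. It is illustrative only; a GPT image or LVM inside the image needs different tooling.)

    ```python
    #!/usr/bin/env python3
    """Rough sketch: parse the MBR of a raw disk image and print read-only loop-mount
    commands for each partition. Handles classic MBR only (no GPT, no LVM inside)."""
    import struct
    import sys

    SECTOR = 512


    def mbr_partitions(path: str):
        """Yield (index, type, byte_offset, byte_size) for each non-empty MBR entry."""
        with open(path, "rb") as f:
            mbr = f.read(SECTOR)
        if mbr[510:512] != b"\x55\xaa":
            raise SystemExit("no MBR boot signature found (GPT image, or truncated download?)")
        for i in range(4):
            entry = mbr[446 + 16 * i: 446 + 16 * (i + 1)]
            ptype = entry[4]
            start_lba, num_sectors = struct.unpack("<II", entry[8:16])
            if ptype != 0 and num_sectors:
                yield i + 1, ptype, start_lba * SECTOR, num_sectors * SECTOR


    def main() -> None:
        image = sys.argv[1]
        for idx, ptype, offset, size in mbr_partitions(image):
            print(f"partition {idx}: type 0x{ptype:02x}, offset {offset}, size {size} bytes")
            print(f"  sudo mount -o ro,loop,offset={offset} {image} /mnt/recovered")


    if __name__ == "__main__":
        main()
    ```

    Usage would be something like python3 mbr_offsets.py vda.img; if the mount still fails at the printed offset, running a filesystem check or a recovery tool against that same offset is the usual next step.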

    Is there any hope you guys have a good backup of my VPS? I don't want to wait another 50 days to find out my data's been long gone...

    EDIT: hehe page 69

  • @VirMach said: Any questions, feel free to ask but I'm going to disappear again from here for at least the next couple days, sorry.

    Any progress between VirMach and CC?
    Can you disclose what happened that led CC to shut down all the servers?

This discussion has been closed.