microLXC Public Test


Comments

  • @Neoon said:

    @Ganonk said:
    how about SGP and LA in the future?

    I am working on getting a new Sponsor, no update from NexusBytes yet.

    @VirMach =)

  • Shout-out for potential sponsors: I did recently buy services from a MicroLXC sponsor entirely because I tried it on MicroLXC and liked it.

    Sponsorship is also the only reason that I know Zappie has offerings in South Africa, New Zealand, and Chile. I don't know of any other providers in those regions. So, I think this is a pretty good marketing success. Similar situation with Flow...

    Thanked by (3): yoursunny, Neoon, adly
  • Neoon (OG Senpai)

    Some emails got rejected due to an invalid SPF record; this has been fixed (a quick way to check a domain's published SPF record is sketched below).

    Please monitor your VM and don't rely 100% on the email notifications to remind you to log in to keep the VM.

    If you exceed 60 days without activity, the VM will be stopped, not deleted.
    If for whatever reason the email does not get delivered, the monitoring should alert you at this point; you then have another 7 days to log in and start your VM.

    Thanked by (3): atomi, Ganonk, adly
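    For anyone who wants to double-check deliverability on their own sending domain, here is a minimal sketch of an SPF lookup using Python with dnspython; the domain and the example record in the comments are placeholders, not microLXC's actual values.

        # spf_check.py -- print a domain's SPF policy, if it publishes one.
        # Requires: pip install dnspython
        import dns.resolver

        DOMAIN = "example.com"  # placeholder; replace with the sending domain to check

        def spf_records(domain: str) -> list[str]:
            """Return every TXT record that looks like an SPF policy."""
            answers = dns.resolver.resolve(domain, "TXT")
            records = [rdata.to_text().strip('"') for rdata in answers]
            return [r for r in records if r.startswith("v=spf1")]

        if __name__ == "__main__":
            found = spf_records(DOMAIN)
            if found:
                # A typical valid record looks like:
                #   v=spf1 ip4:203.0.113.10 include:_spf.example.net -all
                print("SPF:", *found, sep="\n  ")
            else:
                print("No SPF record found; strict receivers may reject mail.")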
  • Neoon (OG Senpai)

    Groningen has been moved to local storage due to I/O issues today while running on the SAN.
    I/O performance should be back to normal.

    Thanked by (1): Ganonk
  • @Neoon said:
    Groningen has been moved to local storage due to I/O issues today while running on the SAN.
    I/O performance should be back to normal.

    Looks like my Dronten container is still running.

    Should we move to Groningen now?

    You said Dronten would be wiped out.

  • Neoon (OG Senpai)

    @add_iT said:

    Looks like my Dronten container is still running.

    Should we move to Groningen now?

    You said Dronten would be wiped out.

    Yeah, Groningen is a replacement for Dronten.
    New deployments have been disabled for months.

    I will announce the exact date later, somewhere around October.

  • @Neoon said:

    Yeah, Groningen is a replacement for Dronten.
    New deployments have been disabled for months.

    I will announce the exact date later, somewhere around October.

    I see Melbourne is not a favorite location at all, because it has always been available since the very start. I suggest you make a larger plan (1GB) there, so I will choose it B)

  • Neoon (OG Senpai)

    @add_iT said:

    I see Melbourne is not a favorite location at all, because it has always been available since the very start. I suggest you make a larger plan (1GB) there, so I will choose it B)

    No.

    Thanked by (5): ralf, Brueggus, adly, bdl, Ganonk
  • @Neoon said:

    @add_iT said:
    I see Melbourne is not a favorite location at all, because it has always been available since the very start. I suggest you make a larger plan (1GB) there, so I will choose it B)

    No.

    It's one of the best locations... Zappie Host's locations are also pretty fantastic and frequently available. Especially South Africa; that's an awesome location.

    Most of the people who snatched up all of the US/EU nodes probably already have idling nodes nearby. AU, NZ, JP, CL, and ZA are where it's at.

    PS: I just terminated in JP and relaunched in AU. So, there's one JP node available now.

  • 64d0-1603-bd0c-69f3

  • Mason (Administrator, OG)

    @Neoon said:

    @lindy54 said:
    64d0-1603-bd0c-69f3

    Dude really signed up with multiple accounts to 'thank' himself 50 times. The things people do for free shit...

    Head Janitor @ LES • About • Rules • Support

  • @Mason said:
    Dude really signed up with multiple accounts to 'thank' himself 50 times. The things people do for free shit...

    I wish I had 2 loyal friends who'd always thank every post I make


  • @Mason said:

    Dude really signed up with multiple accounts to 'thank' himself 50 times. The things people do for free shit...

    Well, there's no place for justifications, but I must say it didn't happen the way it seems.

    Sorry, and thanks for your efforts.

  • Neoon (OG Senpai)

    @lindy54 said:

    Well, there's no place for justifications, but I must say it didn't happen the way it seems.

  • @pgpg said:
    i come, i see, i leave

    i don't care

  • Neoon (OG Senpai)

    The backend was down for about 2 minutes today due to planned maintenance.
    However, the scripts that sync the HAProxy entries had a bug, which resulted in HAProxy dying for 30 minutes in all locations.

    This had already been fixed on NanoKVM; however, I forgot to port that fix, my bad.
    The DNS proxy wasn't affected by this, since it uses a different script to sync.
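
    For context, one common way to keep a sync bug like that from killing the proxy everywhere is to validate the freshly generated config before it replaces the live one, and only then reload. Below is a minimal sketch of that idea in Python; the file paths and the systemd reload command are assumptions for illustration, not microLXC's actual sync script.

        # sync_haproxy.py -- apply a regenerated haproxy.cfg only if it validates.
        import shutil
        import subprocess
        import sys

        CANDIDATE = "/tmp/haproxy.cfg.candidate"   # freshly generated config (assumed path)
        LIVE = "/etc/haproxy/haproxy.cfg"          # config HAProxy actually loads (assumed path)

        def apply_if_valid() -> None:
            # 'haproxy -c -f FILE' only checks the configuration; a non-zero exit means errors.
            check = subprocess.run(["haproxy", "-c", "-f", CANDIDATE])
            if check.returncode != 0:
                # Keep the currently running config instead of reloading into a broken one.
                sys.exit("candidate config is invalid, keeping the current one")

            shutil.copy2(CANDIDATE, LIVE)
            # A graceful reload keeps existing connections; the old process stays up if it fails.
            subprocess.run(["systemctl", "reload", "haproxy"], check=True)

        if __name__ == "__main__":
            apply_if_valid()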

  • Neoon (OG Senpai)

    Reminder: SG (NexusBytes) has been set out of stock for months; however, we still have a bunch of containers running.
    This location could go offline this week, according to sources.

  • edited January 2023

    @Neoon said:
    Reminder: SG (NexusBytes) has been set out of stock for months; however, we still have a bunch of containers running.
    This location could go offline this week, according to sources.

    Ooooh, I have one. Do I need to do anything, or just wait and see what happens to it? I can happily terminate my container if that'd be useful as I only use it occasionally for checking connectivity to elsewhere.

  • Neoon (OG Senpai)

    @ralf said:

    Ooooh, I have one. Do I need to do anything, or just wait and see what happens to it? I can happily terminate my container if that'd be useful as I only use it occasionally for checking connectivity to elsewhere.

    You can watch it burn if you want.

    Thanked by (3): Shot², Ganonk, ralf
  • @Neoon said: You can watch it burn if you want.

    (Lyrics from Eurotrip 2004) The roof, the roof, the roof is on fire...

    MicroLXC is lovable. Uptime of C1V

  • SG Location is down on my end.
    Is it final?

  • Neoon (OG Senpai)

    @Fritz said:
    SG Location is down on my end.
    Is it final?

    You did follow the LET thread, right? Every single NexusBytes location is going offline, one by one.
    Today it was most of APAC.

    Thanked by (1): Ganonk
  • The problem is I can't manage the LXC from the dashboard:
    "Failed to contact Node, please try again later."

  • Neoon (OG Senpai)

    @Fritz said:
    The problem is I can't manage the LXC from the dashboard:
    "Failed to contact Node, please try again later."

    That is normal if the node goes down.
    I told people before that they can migrate if they wish.

    Despite the news, I'll wait a bit before clearing the node from the database by hand.

    Thanked by (1): Asim
  • ^ OK, noted.

    NB: I want to sink with the ship till the end; that's why I didn't migrate earlier. :)

    Thanked by (2): yoursunny, Asim
  • thank you Jay 🥲

  • @Ganonk said:
    thank you Jay 🥲

    thanks for your info, bro

    Thanked by (1): Ganonk
  • Neoon (OG Senpai)

    SG has been deconsecrated.

    Thanked by (2): Ganonk, crunchbits