microLXC Public Test


Comments

  • Abdullah (Hosting Provider, OG)

    @Ganonk said:

    @Neoon said:
    The SG VM is getting moved to another node on Friday, 10 AM GMT, 7th July 2023.
    Expect some downtime around that time.

    haven't migrated yet?

    (now 12:30 pm GMT)

    Uh, I forgot microlxc. Starting in 15 mins from now.

    Thanked by (1)Neoon
  • Abdullah (Hosting Provider, OG)

    @Abdullah said:

    @Ganonk said:

    @Neoon said:
    The SG VM is getting moved to another node on Friday, 10 AM GMT, 7th July 2023.
    Expect some downtime around that time.

    haven't migrated yet?

    (now 12:30 pm GMT)

    Uh, I forgot microlxc. Starting in 15 mins from now.

    Completed.

    Thanked by (4)sh97 Ganonk ElonBezos Neoon
  • @Abdullah said:

    Completed.

    Thank you 🥰

  • I am going to reboot Norway tomorrow, around 21:00 CET.
    Afterwards the Node will have more stock available.

    Thanked by (2)carlin0 Ganonk
  • @Neoon said:
    I am going to reboot Norway tomorrow, around 21:00 CET.
    Afterwards the Node will have more stock available.

    Well, the Maintenance was done on time; however, I found a new glitch in the matrix.
    For some reason I can't explain yet, Deployments are getting marked as Failed despite being successful.

    Until I get this bug fixed, no restock on Norway.

    Thanked by (1)Ganonk
  • The recent downtime in SG, around 15 minutes, was due to a Patch on the Node itself because of Zenbleed.
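
    For reference, the publicly documented Zenbleed (CVE-2023-20593) workaround for Zen 2, until fixed microcode is in place, is flipping a chicken bit in the DE_CFG MSR; a sketch using msr-tools, not necessarily what was done on this Node:

    # Set bit 9 of MSR 0xc0011029 on all cores (lost on reboot); needs msr-tools
    modprobe msr
    wrmsr -a 0xc0011029 $(($(rdmsr -c 0xc0011029) | (1<<9)))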

    Thanked by (1)jmaxwell
  • Abdullah (Hosting Provider, OG)

    For MicroLXC Singapore

    Hardware upgrade underway for hypervisor ID SGP-K4 (Singapore) - we are migrating services to a new hypervisor.
    Expected disruption: 5 minutes

    Thanked by (3)zgato Ganonk Neoon
  • Abdullah (Hosting Provider, OG)

    @Abdullah said:
    For MicroLXC Singapore

    Hardware upgrade underway for hypervisor ID SGP-K4 (Singapore) - we are migrating services to a new hypervisor.
    Expected disruption: 5 minutes

    Completed

    Thanked by (3)Ganonk ElonBezos Neoon
  • Maintenance Announcement
    Currently microLXC is still running on an old PHP version that is going to be EOL in about 1 month.
    Since the Codebase is already tested on 8.1 and I am using the latest PHP version for development, it should go smoothly.

    This will be done this Friday, the 20th, at around 23:00 UTC.
    I expect roughly 20 minutes of downtime or less.

    The Maintenance won't affect any running containers or virtual machines.
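
    For anyone curious, a rough pre-flight check for such an upgrade (my own sketch, not the actual procedure) is linting the codebase under the target PHP binary; the php8.1 name assumes a Debian-style install:

    # Syntax-lint every file under the target PHP version; only errors are printed
    find . -name '*.php' -print0 | xargs -0 -n1 php8.1 -l | grep -v 'No syntax errors'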

  • Neoon (OG)
    edited October 2023

    Restocked Norway

    The allocated Ports have been increased too, from 20 to 99.
    The first port, as usual, is reserved for SSH (a rough sketch of the forwarding follows at the end of this post).

    This is not yet reflected in the Panel, since it's the first Node.
    A few small GUI changes have also been made: Terminate now uses a Lightbox instead of a Button, which should integrate better into the Panel.

    Maintenance Announcement
    I plan to reboot Singapore on Tuesday so I can restock it.
    Around 19:00 UTC; expected downtime is about 5 minutes.
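
    The per-container forwards are presumably plain DNAT rules on the host; a minimal sketch of what such a mapping could look like (addresses and ports are made up, not microLXC's actual config):

    # Hypothetical: forward public port 10100 on the node to SSH (22) in the container
    iptables -t nat -A PREROUTING -p tcp -d 192.0.2.10 --dport 10100 -j DNAT --to-destination 10.0.3.5:22
    # The remaining ports map straight through to the same ports in the container
    iptables -t nat -A PREROUTING -p tcp -d 192.0.2.10 --dport 10101:10199 -j DNAT --to-destination 10.0.3.5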

  • Restocked Singapore

  • Neoon (OG)
    edited October 2023

    Patchnotes
    - Nested Support for Docker and LXC (sketch below)
    - Less Strict VPN Check
    - Reworked Tasks
    - A few smol UI changes
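
    If the stack is LXD-based (an assumption, not confirmed here), nesting is typically a single per-container flag; the container name is made up:

    # Enable nesting so Docker or LXC can run inside the container (LXD syntax)
    lxc config set mycontainer security.nesting true
    lxc restart mycontainer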

    Thanked by (4)Ganonk balloon carlin0 bliss
  • Thank you for opening up Singapore!

    c999-a757-eb0b-06e6

    Thanked by (1)Neoon
  • Neoon (OG)
    edited October 2023

    Forgot to mention that the current numproc limit is 150.
    I reckon that could be an issue if you run Docker or another LXC container.

    I set it to 500 for new deployments.
    If anyone hits the limit, lemme know and I'll up it (how to check is shown below).
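
    To check where you stand against that limit from inside the container, these files should show it on a cgroup v2 host (paths differ under cgroup v1):

    # Current process count vs. the configured pids limit (cgroup v2)
    cat /sys/fs/cgroup/pids.current
    cat /sys/fs/cgroup/pids.max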

    Thanked by (1)tuc
    Likely next week, I am going to add another JP node, thanks to WebHorizon.
    The current JP node is not getting removed or anything; however, it is not getting upgraded either.

    It has been out of stock most of the time, hence the extra node.
    Probably doing 512MB, same as SG.

    Johannesburg had an unexpected reboot, which resulted in the API being offline.
    Containers were running, but you could not control them via the dashboard. Fixed.

  • Patchnotes
    - Fixed: if a host system had less than 256MB free, the stock system still displayed the 256MB Package despite it not being available (sketch below)
    - Tasks now show the correct information if you have more than one Container or Virtual Machine; this was broken since the last update
    - Packages are finally sorted by Memory
    - Packages with limited quantity are now available

    To test this feature, I added a new 777 Package for Singapore.
    It's limited to 2 concurrent deployments and will restock if a user decides to terminate.
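
    The fixed check presumably boils down to comparing the host's available memory with the package size; a stand-in illustration, not the panel's actual code:

    # Hide a package when the host no longer has room for it
    avail=$(free -m | awk '/^Mem:/ {print $7}')
    pkg=256
    if [ "$avail" -ge "$pkg" ]; then
        echo "${pkg}MB package: in stock"
    else
        echo "${pkg}MB package: hidden"
    fi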

  • https://hetrixtools.com/r/ea411498ed8e0e14c9fe35ccc972f982/

    Forgot to mention: the status page has been moved to HetrixTools.
    Thanks to @Andrei

    Thanked by (3)carlin0 Ganonk Andrei
  • Would anyone be interested to see microLXC switching to a resource pool based allocation instead of the current slot allocation?

  • yes please 😃

  • @Neoon said:
    Would anyone be interested to see microLXC switching to a resource pool based allocation instead of the current slot allocation?

    I checked the changes that need to be done; only 2 places to edit to make it happen.
    However, the slot limits on most Nodes are currently low, which means they would probably run out of stock pretty quickly.

    That is fixable by updating the current network configuration of the Nodes, which needs a reboot.
    Since the Network allocation happens dynamically, it should scale fine after that.

    I would bring back the 128MB Package too; it was OOS due to low slot limits.
    64MB could be added if it makes @yoursunny happy.

    Thanked by (1)yoursunny
  • Neoon (OG)
    edited November 2023

    It's done, the buildserver just passed all tests with the resource pool change.
    If you have your quota set to 1, it would be converted to 1024MB.

    You would be able to spend the allocation freely, without any limitations, right now, as long as the location has stock.
    However, we have a few small nodes, which raises the concern that these may get hammered.

    Hence I will probably make them more expensive, so your cost would be times 2 in rare locations.
    This would not impact or worsen the current slot limit, and it would incentivize people not to deploy everything in these locations (a worked example follows below).

    I have also considered a cost increase if you deploy more than 1 container in a location, but I will leave that out for now.
    Ideas?
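
    To make the proposed math concrete (illustrative numbers only, not confirmed pricing): a 2-slot quota would convert to 2048MB. In standard locations that could cover, say, one 1024MB container plus three 256MB ones (1792MB used); in a rare ×2 location, a single 256MB container would instead count as 512MB against the same pool.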

  • Neoon (OG)
    edited December 2023

    I actually wanted to do more; however, it has been delayed already and the patch has been tested, sooooo here we go.

    Patchnotes
    Switching from slot-based to resource-based: instead of 1 Slot, you get 1GB of available memory you can use as you want.
    Currently there are no rules or limits, besides your allocation and whether the location has stock.
    I will see how things develop and add rules or limits based on that.

    Due to switching to resource-based pools, the 50GB Plan in Norway has been limited to 10 slots, which is about half of the SSD's storage; the Package only exists because Terrahost decided to throw in a TB SSD.

    Enabled the 128MB Package again; I will probably add more Packages, already at roughly 20 right now, counting disabled and enabled ones.
    May do regional 128MB Packages with more Storage.

    The current 128MB Package is Global.
    I have had no time yet to test the templates on 64MB, so that is not enabled yet; the 128MB Package already does not support all operating systems.

    Increased the minimum Uplink to 100Mbit on all Packages, no matter the size. Same goes for IPv6: you get a routed /64 on everything, except Tokyo (sketch below).
    Also increased the CPU caps to a minimum of 50%; Norway has been unleashed to 200%, since it's a Dedi.
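
    Since the /64 is routed everywhere except Tokyo, extra addresses should come up inside the container without any NDP proxying; a quick sketch, using a documentation prefix in place of the real one:

    # Add a second address out of the routed /64 (2001:db8::/64 is a placeholder)
    ip -6 addr add 2001:db8:0:4::100/64 dev eth0
    ping -6 -c 3 2606:4700:4700::1111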

  • Deployment of a 256M machine fails in Auckland NZ

  • @Shot² said:
    Deployment of a 256M machine fails in Auckland NZ

    NZ just went down, error handling is working.

  • Oh, everything's fine then :p

  • Looks like something's weird in ZA too: ssh access (via key + high port) over ipv6 is ok, but not over ipv4.

  • Neoon (OG)
    edited December 2023

    @Shot² said:
    Looks like something's weird in ZA too: ssh access (via key + high port) over ipv6 is ok, but not over ipv4.

    The Port Forwarding is configured for your Container, I just checked.
    Are you sure the port is closed? Did you try from a different ISP/VPN?

    We had some rare cases where the ISP blocked certain Ports.
    Otherwise we will have to troubleshoot it.

  • Shot²
    edited December 2023

    No luck either from a different ISP/IPv4. :(
    ssh -i mykey [email protected] -pBLAH does not work.
    ssh -i mykey root@2c0f:f530:20:4::BL:AB:LA -pBLAH works.
    "No route to host". Weird. Maybe some internet pipe clogged somewhere in between.

  • Neoon (OG)
    edited December 2023

    @Shot² said:
    No luck either from a different ISP/IPv4. :(
    ssh -i mykey [email protected] -pBLAH does not work.
    ssh -i mykey root@2c0f:f530:20:4::BL:AB:LA -pBLAH works.
    "No route to host". Weird. Maybe some internet pipe clogged somewhere inbetween.

    You changed the default Port 22, and after that it wasn't working, right?
    Because changing it breaks the NAT Port that is supposed to be your SSH Port.

    You can of course do that; see the sketch below.
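
    If you do want sshd on a custom port without breaking the forward, it can listen on both; a minimal sketch inside the container:

    # /etc/ssh/sshd_config -- keep 22 for the NAT forward, add the custom port
    Port 22
    Port 2222
    # then restart sshd, e.g.: systemctl restart sshd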

    Thanked by (1)Wonder_Woman