@Neoon said:
Melbourne is back online after suffering a total data loss on Saturday.
It took a bit longer than expected due to network issues, which have now been solved.
Anyone who has been affected has been notified.
Happy deploying.
NL will be moved into another rack on Tuesday. It will be de-racked and racked up again, so it should not take long; there is no exact time window for that.
LA, SG and NO currently have issues with the LVM backend, which result in poor I/O performance, and in SG, failed deployments.
Comments
Thanks for the work!
54fb-39f1-f65f-a0e7
The all seeing eye sees everything...
Well, the deployment did not go well.
Currently images are not preloaded, meaning they get fetched from the LXD image server.
Due to APAC and slow mirrors, it can happen that the fetch does not finish in time and the deployment is considered a failure.
I need to preload all images to fix that issue, which I hope to also do this week.
I removed the request, so if you want, you can deploy it again immediately; the image should now be preloaded.
Sorry for the issue.
edit: Looks like SG is currently facing network issues: https://stats.uptimerobot.com/J6jQDu8Kry
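The preloading mentioned above could be scripted, for example by copying each template into the node's local image store ahead of time. This is only a sketch of the idea; the image names and the use of the `lxc` CLI here are my assumptions, not the actual MicroLXC setup:

```python
import subprocess

# Illustrative image list -- the actual templates offered may differ.
IMAGES = ["debian/11", "ubuntu/20.04", "alpine/3.14"]

def preload(image):
    """Copy an image from the public image server into the local store,
    so a deployment never has to wait on a slow remote fetch."""
    subprocess.run(
        ["lxc", "image", "copy", f"images:{image}", "local:",
         "--alias", image, "--auto-update"],
        check=True,
    )

if __name__ == "__main__":
    for image in IMAGES:
        preload(image)
```

With `--auto-update`, the local copies refresh themselves when upstream publishes a new build, so the preload only has to be done once per node.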
Free NAT KVM | Free NAT LXC | Bobr
ITS WEDNESDAY MY DUDES
Singapore IPv6 has been fixed, enjoy.
0009-7cf5-d7c0-b464
In about 2 weeks, the system will start checking for certain activity on the containers.
For you, this means you should have logged into the panel or the container before this update is released.
After this update, you have to log in every 60 days, either by SSHing into the container or by logging into the panel.
If you fail to meet this requirement, your container goes into a 7-day grace period, after which it will be stopped; 7 more days and your container will be deleted.
The reason I chose login-based activity is that, depending on your use case, you may have low traffic, memory, or CPU needs, so we cannot really pin activity down by those metrics.
We will see how this works and it might be changed later, but for now it should be fine.
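For illustration, the 60/7/7-day policy above could be modeled like this. A minimal sketch; the function name and the exact boundary handling are my assumptions:

```python
from datetime import date

LOGIN_INTERVAL = 60   # days allowed between logins (panel or SSH)
GRACE_PERIOD = 7      # further days before the container is stopped
STOP_PERIOD = 7       # further days before the container is deleted

def container_state(last_login: date, today: date) -> str:
    """Classify a container by how long its owner has been inactive."""
    idle = (today - last_login).days
    if idle <= LOGIN_INTERVAL:
        return "active"
    if idle <= LOGIN_INTERVAL + GRACE_PERIOD:
        return "grace"      # owner is warned, container still runs
    if idle <= LOGIN_INTERVAL + GRACE_PERIOD + STOP_PERIOD:
        return "stopped"    # container is powered off
    return "deleted"        # container is removed

# 65 days idle -> still inside the grace period
print(container_state(date(2021, 1, 1), date(2021, 3, 7)))  # -> grace
```

Any SSH or panel login simply resets `last_login`, which puts the container back to "active" as long as it has not been deleted yet.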
Token:
a1ce-2b30-c6a7-6254
Cheap dedis are my drug, and I'm too far gone to turn back.
ouch, does not meet any of those requirements
NL has been rebooted for network maintenance; SR-IOV has been enabled for better network performance.
NO will be down for a short period on Monday morning (around 8:00 GMT+1) due to a memory upgrade and a microcode update, so we can add a few more slots.
Today I also dropped a few updates:
a few design changes, for example a new account page
Existing accounts are now considered verified, meaning you can delete & deploy without verification
However, the notifications are not fully in production yet: you can already add your email and confirm it, but you won't get any emails yet. We will likely handle this by hand for now, until we enable automatic suspension.
Regarding the ongoing issue with failed deployments:
I managed to narrow it down; sadly, it seems LXD is at fault here.
If you send LXD a container creation request, it does not always reply,
but it creates the container in the background anyway, which is bad because it means reinstalls can also be affected.
However, you can start another reinstall if the first one fails, since it seems to work on the second try for some reason.
It does not look like a timeout problem, as it does not matter if you set the timeout to 60s; LXD just won't answer.
There is a theoretical fix that would duct-tape over it, but I would rather fix it at the core.
I will post again once I have fixed it; since it is essential, I have put everything else for this project on hold.
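The "duct tape" approach hinted at above could look something like this: after a timed-out creation request, check whether the container showed up in the background before retrying, since a blind retry would fail with "already exists". This is purely a sketch; `create` and `exists` are placeholders for whatever actually talks to the LXD API, not real functions from it:

```python
import time

def create_with_retry(create, exists, name, attempts=2, delay=1.0):
    """Work around a creation request that times out without a reply.

    `create(name)` sends the creation request (e.g. POST /1.0/instances)
    and raises TimeoutError when LXD does not answer; `exists(name)`
    checks whether the container is present anyway."""
    for _ in range(attempts):
        try:
            create(name)
            return True
        except TimeoutError:
            time.sleep(delay)    # give LXD a moment to finish in the background
            if exists(name):     # created despite the missing reply
                return True
    return False
```

It works, but it only papers over the missing reply, which is why fixing it at the core is preferable.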
Maintenance announcement:
SG will be rebooted tomorrow night to help troubleshoot the ongoing deployment issues.
All containers will be started automatically after the reboot; it should not take longer than 5 minutes.
NL will be physically moved to another data room next week; more information will follow.
Another maintenance announcement:
The plan for the locations with LVM backend issues (LA, SG and NO) is to migrate existing containers to a new LVM pool, which should solve these issues.
However, the operation could lead to data loss, so if you have a container in one of these locations, I advise you to take a backup before it is migrated.
This will take place on the following days:
LA: Friday, 16:00 GMT, approximately 1 hour
SG: Friday, 19:00 GMT, approximately 1 hour
NO: Sunday, 19:00 GMT, approximately 2 hours
The downtime will likely be shorter for each individual container, as long as nothing comes in between.
During the maintenance, you won't be able to control the container via microlxc.net.
After the maintenance, stock will be available again in these locations, including NO.
Tokyo is not affected by this; however, if we upgrade NL and AU, we will likely perform this maintenance there as well, or whenever it becomes necessary.
CH will not be moved to another LVM backend, since I plan to discontinue it due to the network-related issues.
However, I don't plan to discontinue CH until we have a replacement; I am still looking for one.
I'll keep you updated on this.
Thanks for the update announcements.
"Thursday" not Tuesday, I am sorry for that mistake.
NL maintenance is starting in the next few hours; new deployments have been suspended.
LA maintenance is done; total downtime was 3 minutes, plus about 1 minute for each container.
Sadly IPv6 died again. We are working on it, so the downtime was a bit longer, plus a few reboots.
I/O feels a lot faster now, looks good.
Is MicroLXC the same as LXC in LXD, or a mini version? Or did you brand it?
Is there a GUI tool available for LXC (other than LXDUI) for complete cluster management for newbies (free/paid)?
SG Maintenance is done, LA v6 works again, thanks Virtualizor and thanks @seriesn
@gks No and no.
No idea. I mean, LXD has a REST API, which is what I use, and if you use that, you have nearly endless possibilities.
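For anyone curious, talking to that REST API locally is simple, since LXD listens on a unix socket with no auth needed for root. A minimal sketch in Python; the snap socket path below is an assumption (a deb install would use /var/lib/lxd/unix.socket):

```python
import http.client
import json
import socket

class LXDUnixConnection(http.client.HTTPConnection):
    """HTTP over LXD's local unix socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("lxd")  # dummy host; unused for unix sockets
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def get(path, socket_path="/var/snap/lxd/common/lxd/unix.socket"):
    """GET an LXD API endpoint, e.g. get('/1.0/instances')."""
    conn = LXDUnixConnection(socket_path)
    conn.request("GET", path)
    return json.loads(conn.getresponse().read())

if __name__ == "__main__":
    # Requires a running LXD on this host; lists instance URLs.
    print(get("/1.0/instances"))
```

The same pattern works for POST/PUT/DELETE, which is how container creation, reinstalls, and state changes can be driven from a panel.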
Thanks for the awesome work. I had written a short review of MicroLXC on my blog a few days ago. Added this discussion to keep track of updates.
blog | exploring visually |
Please tell me what is the Control Panel being used in this project, which is pictured on @vyas' blog page. Thanks!
I hope everyone gets the servers they want!
It's a custom built one.
Is it open source? If yes, where are the sources? Thanks!
the issue I face "Configure instance: Failed to mount LVM logical volume: Failed to mount '/dev/secondary/containers_lxccef844a4' on '/var/snap/lxd/common/lxd/storage-pools/secondary/containers/lxccef844a4': no such device or address"
Maybe it needs some manual review.
Cannot deploy first container, I get "You recently deployed a container, please wait."
Every time I stop visiting your website, all of a sudden there’s a content tsunami. When I do visit regularly, it feels like my fridge. Mostly empty.
Y U do dis
Nexus Bytes Ryzen Powered NVMe VPS | NYC|Miami|LA|London|Netherlands| Singapore|Tokyo
Storage VPS | LiteSpeed Powered Web Hosting + SSH access | Switcher Special |
Just for you my friend, just for you. I publish multiple times to irritate you
Now the RSS feed is working again.
Aiming for one new post every day for the first month. As a moving target, I might keep extending it.
I disabled the LVM host tools again; they seem to be causing these issues.
When I was working on the LVM volumes, it was suggested to enable the LVM host tools to prevent issues, but instead they were causing them.
Please try again; I tried 4 deploys on SG in a row and it seems to work fine now.
Additionally, I switched the image source to local, which should be faster and more reliable for deployments.
The Image server dislikes APAC.
Sadly, it's not.