Uses for a few short-term VPSes


Comments

  • I kind of decided I should whip up some type of control panel. Seems nothing works well/properly on the low-end stuff I'm running. Proxmox certainly isn't going to be suitable.

    Thanked by (1)uptime
  • I'm the first to admit that it won't win prizes for UI, but this is what I've got so far. Stats are obviously faked; I haven't done the Prometheus connector yet.



    Thanked by (2)ehab chimichurri
  • @tetech I encourage you to continue adding mocks of the main important functions, then start the backend API immediately. That way you can be 70% complete, which is better than nothing at all... so don't worry about how it looks for now.

  • tetech OG
    edited January 2022

    @ehab said:
    @tetech I encourage you to continue adding mocks of the main important functions, then start the backend API immediately. That way you can be 70% complete, which is better than nothing at all... so don't worry about how it looks for now.

    Oh, the important functionality is mostly done. Here's noVNC.

    To be clear, my time budget for this is around 20 hours and I've already burned a third of it, so it won't get too fancy.

    Thanked by (1)Not_Oles
  • Status this morning:

    • Minor changes to UI from the screenshots. A few new things like SSH key download.
    • Control (start/stop/restart/noVNC) is complete.
    • Security is mostly finished - login/out, forgot/change password, verifying permissions in API, etc.
    • API is mostly complete (exceptions below), so it is displaying live data.

    To be finished:

    • The remaining things in the API are related to profile: changing avatar, updating preferences
    • Reinstall/reset network functionality not done yet
    • Stats are reported but not recorded (integration with Prometheus not done yet)

    I came up with a longer list for the future (like 2FA), but the main goal of this is to get something super lightweight that might make mini containers feasible.
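
    For the curious, start/stop/restart control on an LXC host generally boils down to the standard lxc-* tooling; a panel backend would wrap something roughly like this (container name is illustrative, and not necessarily how this panel invokes it):

    # illustrative only -- standard LXC commands a control API could wrap
    lxc-start -n debtest2             # start
    lxc-stop -n debtest2              # clean shutdown
    lxc-stop -r -n debtest2           # restart (reboot the container)
    lxc-info -n debtest2 -s           # state check for the panel to poll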

    Thanked by (2)bdl Not_Oles
  • Prometheus integration is working, plus the profile/preferences. Fiddled with the UI a bit, but still don't claim it is good.

    I'll set up a new node and optimize the memory usage a bit, then maybe it is time for someone else to take it for a test-drive. Memory is already not bad:

    # lxc-ls --running
    debtest2 eZxs82th
    # free -m
                  total        used        free      shared  buff/cache   available
    Mem:            984          78         190           0         715         886
    Swap:          1024           0        1024
    # ps -o comm,rss | grep m$
    systemd-journal   10m
    node_exporter     19m
    

    So under 100MB of RAM used for two LXC containers, and that includes 20MB for Prometheus monitoring.
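
    For anyone wondering about the Prometheus part, pulling numbers like these is just the standard Prometheus HTTP API; an instant query looks roughly like this (default port 9090 and metric name shown for illustration, not necessarily what the panel queries):

    # query node_exporter's available-memory metric via the Prometheus HTTP API
    curl -s 'http://127.0.0.1:9090/api/v1/query' \
        --data-urlencode 'query=node_memory_MemAvailable_bytes'
    # response is JSON: {"status":"success","data":{"resultType":"vector","result":[...]}}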



    Thanked by (3)Not_Oles _MS_ Bochi
  • Not_Oles Hosting Provider, Content Writer
    edited January 2022

    @tetech said:
    To be clear, my time budget for this is around 20 hours

    Seems like a lot has been accomplished in a short time! Congrats! 🎉

    I hope everyone gets the servers they want!

  • @Not_Oles said:

    @tetech said:
    To be clear, my time budget for this is around 20 hours

    Seems like a lot has been accomplished in a short time! Congrats! 🎉

    It blew out to at least 30 :( I had to learn the Prometheus API from scratch and after that I refactored some things.

    Thanked by (2)Not_Oles uptime
  • Not_Oles Hosting Provider, Content Writer

    @tetech said:

    @Not_Oles said:

    @tetech said:
    To be clear, my time budget for this is around 20 hours

    Seems like a lot has been accomplished in a short time! Congrats! 🎉

    It blew out to at least 30 :( I had to learn the Prometheus API from scratch and after that I refactored some things.

    It's okay! Lots of folks like Prometheus so you learned something useful! 🆗

    Thanked by (1)tetech

    I hope everyone gets the servers they want!

  • Neoon OG, Senpai
    edited January 2022

    Quite possible. I think the initial NanoKVM Panel was 40 hours including documentation.
    If you know the stuff you are going to work with, it's easily possible in that time.

    The big time eater is solving issues; if I hadn't run into a few, I likely would have been done faster.

    Thanked by (3)tetech uptime yoursunny
  • (off-topic, but I remember finding a nice "night sky" monitor service from @Neoon on the og lowendspirit forum when I was learning the ropes on my first little ovz nat from deepnet solutions)

    HS4LIFE (+ (* 3 4) (* 5 6))

  • @Neoon said:
    The big time eater is solving issues; if I hadn't run into a few, I likely would have been done faster.

    Yeah, agreed.

  • tetech OG
    edited January 2022

    Looking for a volunteer/sucker/guinea pig to do an initial test. Same requirements as @Neoon:

    • Your account needs to be 6 months old
    • You need to have at least 50 Posts
    • You need to have at least 50 Likes

    But I don't have a fancy bot so the invite/provisioning is manual at the moment. In terms of the actual LXC container, it would be 128 MB RAM in Chicago, most notably IPv6 only. You'll get a routed /64 from tunnelbroker. Anyone want to kick the tyres and share some thoughts? https://www.lxcbox.cloud

    root@FG74Tru9:/# ping6 -c 10 cloudflare.com
    PING cloudflare.com(2606:4700::6810:85e5 (2606:4700::6810:85e5)) 56 data bytes
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=1 ttl=58 time=2.43 ms
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=2 ttl=58 time=2.59 ms
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=3 ttl=58 time=2.59 ms
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=4 ttl=58 time=2.70 ms
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=5 ttl=58 time=2.47 ms
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=6 ttl=58 time=2.39 ms
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=7 ttl=58 time=2.47 ms
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=8 ttl=58 time=2.63 ms
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=9 ttl=58 time=2.48 ms
    64 bytes from 2606:4700::6810:85e5 (2606:4700::6810:85e5): icmp_seq=10 ttl=58 time=2.66 ms
    
    --- cloudflare.com ping statistics ---
    10 packets transmitted, 10 received, 0% packet loss, time 9097ms
    rtt min/avg/max/mdev = 2.393/2.540/2.697/0.099 ms
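
    For context, "128 MB, IPv6-only with a routed /64" maps onto a fairly plain LXC config; an excerpt might look roughly like this (cgroup-v1 memory key and documentation-prefix addresses are purely illustrative, not the actual node's setup):

    # /var/lib/lxc/<name>/config (illustrative excerpt)
    lxc.cgroup.memory.limit_in_bytes = 128M
    lxc.net.0.type = veth
    lxc.net.0.link = br0
    lxc.net.0.ipv6.address = 2001:db8:abcd::2/64    # out of the routed tunnelbroker /64
    lxc.net.0.ipv6.gateway = 2001:db8:abcd::1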
    
    Thanked by (1)Mason
  • Mason Administrator, OG
    edited January 2022

    @tetech said: Looking for a volunteer/sucker/guinea pig to do an initial test.

    May be worth opening a new thread with all this info depending on how many users you want to bring on since it's pretty well buried in this thread :)

    Good luck!! Panel looks great

    Thanked by (1)yoursunny

    Head Janitor @ LES • About • Rules • Support

  • Not_Oles Hosting Provider, Content Writer
    edited January 2022

    @tetech said: volunteer/sucker/guinea pig to do an initial test.

    Me. If you want. I don't need it for anything, but delighted to help you test! 🤩

    Thanked by (1)tetech

    I hope everyone gets the servers they want!

  • @Mason said:

    @tetech said: Looking for a volunteer/sucker/guinea pig to do an initial test.

    May be worth opening a new thread with all this info depending on how many users you want to bring on since it's pretty well buried in this thread :)

    Good luck!! Panel looks great

    Thanks for the encouragement. I don't consider being inconspicuous a bad thing at the moment ;)

    That may change, but at first I'd like to do a sanity check that the containers work and that I haven't made a major screw-up!

    @Not_Oles said:

    @tetech said: volunteer/sucker/guinea pig to do an initial test.

    Me. If you want. I don't need it for anything, but delighted to help you test! 🤩

    Thanks! Happy for you to take it for a test drive for however long you want. Preference for template? Debian bullseye? (can't choose that yourself yet... probably next week)

    Thanked by (2)Not_Oles Mason
  • Not_Oles Hosting Provider, Content Writer

    @tetech said: Preference for template? Debian bullseye?

    Sure, Debian, please. Will be fun! 🎉

    I expect to be busy Monday, but other days next week should be fine.

    Have a nice weekend! :)

    Thanked by (1)tetech

    I hope everyone gets the servers they want!

  • First, a big thank you to @Not_Oles for testing! He's picked up several dumb errors and made good suggestions.

    The ability to reset networking and reinstall the LXC container (with the same or a new template) has been added to the control panel. It's still a bit fragile, but improving. You can (in theory) choose any of the available LXC templates.
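
    For a sense of what that involves on the host side, a reinstall is presumably not much more than tearing the container down and recreating it from the chosen template, along these lines (names and flags are illustrative, not the panel's actual commands):

    lxc-stop -k -n debtest2
    lxc-destroy -n debtest2
    lxc-create -n debtest2 -t download -- -d debian -r bullseye -a amd64
    lxc-start -n debtest2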

    Next, when time permits, I'll let people provision their own containers. It has a lot in common with reinstalling, so it shouldn't be too much work.

    Thanked by (1)Not_Oles
  • Not_Oles Hosting Provider, Content Writer

    @tetech said: First, a big thank you to @Not_Oles for testing!

    Hi @tetech! You're welcome! It's fun to work with you! I appreciate your prompt replies and kind suggestions! :)

    He's picked up several dumb errors and made good suggestions.

    I didn't see any dumb errors! :) It all looks pretty good to me! ✅

    Best wishes and kindest regards! :)

    I hope everyone gets the servers they want!

  • @Not_Oles said: I didn't see any dumb errors!

    I would say changing the firewall and killing network access to the container was a dumb error! 😞 But easily fixed.

    The panel now allows users to provision their own containers. The way I'll do it (at least initially) is to give each user a total "RAM budget", e.g. 128MB. You can allocate it on whichever node has free RAM, split it into two 64MB containers, etc.

    The amount of disk/bandwidth is given as a ratio "per 64MB of RAM". In the example below, it is 1GB disk and 0.5TB BW per 64MB RAM, so if you create a container with 128MB of RAM you'd get 2GB disk and 1TB BW. The reason for doing it like this is that the plans of the KVM "hosts" are pretty different - one is 1GB RAM / 5GB NVMe, another is 0.5GB RAM / 250GB HDD. But RAM is almost always the most severe constraint.
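
    To make the ratio arithmetic concrete, here's a sketch of the calculation using those example numbers (not the panel's actual code):

    # 1 GB disk and 0.5 TB bandwidth per 64 MB of RAM
    ram_mb=128
    units=$((ram_mb / 64))                        # 2
    echo "disk: $((units * 1)) GB"                # disk: 2 GB
    echo "bw:   $(echo "$units * 0.5" | bc) TB"   # bw:   1.0 TB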


    Thanks again to @Not_Oles for valiant testing. I'm probably going to stop adding panel features and add more hosts to the pool, then take up @Mason's suggestion of a new thread.

    Thanked by (2)Not_Oles Mason
  • Not_Oles Hosting Provider, Content Writer
    edited January 2022

    @tetech said: I would say changing the firewall and killing network access to the container was a dumb error! But easily fixed.

    Not to be a picky jerk, but, really, testing a firewall seems like a smart idea! I always try a little testing to make sure my firewall seems to be doing what I think it's supposed to be doing. So I think this was a super excellent mistake to make! 🤩

    Thanks to @tetech for letting me help test! 💖

    I hope everyone gets the servers they want!

  • Neoon OG, Senpai

    I kinda like the pool idea doh.

  • @Neoon said:
    I kinda like the pool idea doh.

    Yeah, the basic principle is to share the free resource fairly but give users flexibility on how/where to allocate it.

    Another thing in the "idea bank" is to allow users to choose their balloon memory. So you can over-allocate beyond your 128MB, and the OOM killer will penalize you more based on how much/how long you've been above the allocation. This becomes more like accounting for actual resource usage rather than provisioned resources.
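
    Mechanically, most of that already exists at the cgroup level: with cgroup v2 a container can be given a soft ceiling separate from its hard OOM limit, e.g. the LXC keys below (values illustrative; the over-time accounting and penalty logic would have to live in the panel itself):

    # reclaim/throttling kicks in above the soft target
    lxc.cgroup2.memory.high = 128M
    # hard cap; the OOM killer fires here
    lxc.cgroup2.memory.max = 192M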
