LES BSD Thread!


Comments

  • Not_Oles (Hosting Provider, Content Writer)

    @Crab What's up in the FreeBSD world?

    @FrankCastle And in the OpenBSD world?

    I hope everyone gets the servers they want!

  • @Not_Oles said: Before I do anything, though, I just want to double check with @cmeerw because he's been working on the NetBSD images. If there is anything that @cmeerw might want to do or check or ask me to do or check, I want to make sure that whatever @cmeerw wants is done.

    I believe your version of the image might actually just resize the partition and file system on the next reboot (that's because in the first version I didn't mount the filesystem with the "log" option).

    Later versions now use the "log" option - this is preferable, but it means that resizing (after the first boot) gets more complicated (or even dangerous).

    So I think, just try a restart of the machine and see if the filesystem gets resized.

    After that, I think it still makes sense to move to the new image to get some more useful block sizes and inode counts for the filesystem (it should get resized to the full capacity on the first boot, but further resizes won't be automatic).
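
    For reference, a rough sketch of what a manual resize looks like on NetBSD. This is an assumption-laden outline, not the image's actual mechanism: ld0/dk2 are taken from the df output later in the thread, and the GPT index of the root partition is a guess.

    # Check whether / is mounted with the "log" option first
    mount | grep ' on / '
    # Grow the GPT to match the enlarged virtual disk, then grow the root partition
    # (index 2 is assumed; check "gpt show ld0" before touching anything)
    gpt resizedisk ld0
    gpt resize -i 2 ld0
    dkctl ld0 listwedges    # wedges are rebuilt at boot, so a reboot may be needed here
    # Grow the FFS itself; this is the step that gets hairy on a log-enabled filesystem
    resize_ffs -v /dev/rdk2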

    Thanked by (1)Not_Oles
  • @Not_Oles said: @FrankCastle And in the OpenBSD world?

    BTW, I actually tried booting into the OpenBSD 7.6 installer (via netboot.xyz) when it was released, but it did hang just before getting to user space.

    Thanked by (1)Not_Oles
  • @Not_Oles said:
    @FrankCastle And in the OpenBSD world?

    A new version just came out a couple of weeks ago, so I've been slowly upgrading all of my VMs to this latest version.

    Thanked by (1)Not_Oles
  • @cmeerw said:

    @Not_Oles said: @FrankCastle And in the OpenBSD world?

    BTW, I actually tried booting into the OpenBSD 7.6 installer (via netboot.xyz) when it was released, but it did hang just before getting to user space.

    I've upgraded a handful of my 7.5 instances to 7.6 so far with no issues. I haven't done my Linveo VM yet, but when I do, I'll let you know if I run into any problems. I didn't have any trouble getting 7.5 installed via netboot.xyz, so I don't expect any issues with 7.6 either.
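
    For anyone following along at home, a minimal sketch of the usual in-place upgrade path (standard OpenBSD tooling, nothing Linveo-specific assumed):

    # 7.5 -> 7.6: fetch the new sets and reboot into the automatic upgrade
    sysupgrade
    # after the machine comes back up on 7.6:
    syspatch      # apply any errata already released for 7.6
    pkg_add -u    # update installed packages to the 7.6 versions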

    Thanked by (1)Not_Oles
  • Not_Oles (Hosting Provider, Content Writer)
    linveo# df -h
    Filesystem     Size   Used  Avail %Cap Mounted on
    /dev/dk2        20G    19G   1.5M  99% /
    kernfs         1.0K   1.0K     0B 100% /kern
    ptyfs          1.0K   1.0K     0B 100% /dev/pts
    procfs         4.0K   4.0K     0B 100% /proc
    tmpfs          1.0G   4.0K   1.0G   0% /tmp
    tmpfs          1.0G   4.0K   1.0G   0% /var/shm
    linveo# uptime
     7:42PM  up 2 mins, 1 user, load averages: 0.02, 0.02, 0.00
    linveo# 
    

    Rebooting from the Linveo panel worked!

    linveo# date
    Wed Oct 16 19:47:49 UTC 2024
    linveo# df -h 
    Filesystem     Size   Used  Avail %Cap Mounted on
    /dev/dk2        40G    19G    19G  49% /
    kernfs         1.0K   1.0K     0B 100% /kern
    ptyfs          1.0K   1.0K     0B 100% /dev/pts
    procfs         4.0K   4.0K     0B 100% /proc
    tmpfs          1.0G   4.0K   1.0G   0% /tmp
    tmpfs          1.0G   4.0K   1.0G   0% /var/shm
    linveo# 
    

    Inode check for @cmeerw :)

    linveo# df -mi /
    Filesystem   1M-blocks      Used     Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2         40880     19218     19617  49%     795510   39114712    1% /
    linveo# 
    

    @linveo said: I upped your VM to 50GB

    How come only 40880 1M-blocks total when @linveo increased the size to 50GB? Does using so many extra inodes somehow take up a substantial portion of the "missing" 9120 blocks?


    Since there is now more disk space and the system still seems to be working okay, it might be fun to continue the system build just to see whether it breaks, and, if so, where, how, and why. Of course, after retrying the build, we can go ahead and reinstall. :)

    I hope everyone gets the servers they want!

  • @Not_Oles said: How come only 40880 1M-blocks total when @linveo increased the size to 50GB? Does using so many extra inodes somehow take up a substantial portion of the "missing" 9120 blocks?

    Yes, that's the inodes/filesystem overhead (and there is a 512 MB swap partition).

    BTW, you can also look at the partitions with gpt show ld0 and dkctl ld0 listwedges
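
    Rough back-of-the-envelope, for anyone who wants to check the overhead (assuming "50GB" means 50 GiB and the usual 256-byte FFSv2 inodes):

    # 50 GiB                                    =  51200 MiB
    # - 512 MiB swap                            =  50688 MiB
    # - inode tables: ~39.9M inodes * 256 B     ~   9744 MiB
    # - cylinder-group metadata, superblocks    ~  a bit more
    # ~= the 40880 MiB that df reports
    gpt show ld0            # raw GPT partition sizes
    dkctl ld0 listwedges    # the dk* wedges NetBSD built from them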

    Thanked by (1)Not_Oles
  • Not_Oles (Hosting Provider, Content Writer)

    @Not_Oles said: Uh . . . File system is full.

    I finally got around to retrying build.sh after the file system filled up, as shown above.

    linveo# su - tom
    linveo$ cd src
    linveo$ export CVSROOT="[email protected]:/cvsroot"
    linveo$ export CVS_RSH="ssh"
    linveo$ date
    Sat Oct 19 23:45:48 UTC 2024
    linveo$ time ./build.sh -U -u -j 2 -m amd64 -O ~/obj release
    ===> build.sh command:    ./build.sh -U -u -j 2 -m amd64 -O /home/tom/obj release
    ===> build.sh started:    Sat Oct 19 23:45:56 UTC 2024
    ===> NetBSD version:      10.99.12
    ===> MACHINE:             amd64
    ===> MACHINE_ARCH:        x86_64
    ===> Build platform:      NetBSD 10.99.12 amd64
    ===> HOST_SH:             /bin/sh
    ===> share/mk MAKECONF:   /etc/mk.conf
    ===> MAKECONF file:       /etc/mk.conf (File not found)
    ===> TOOLDIR path:        /home/tom/obj/tooldir.NetBSD-10.99.12-amd64
    ===> DESTDIR path:        /home/tom/obj/destdir.amd64
    ===> RELEASEDIR path:     /home/tom/obj/releasedir
      [ . . . ]
    

    The rebuild seems to be running okay. I will post again when it either breaks or completes.
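
    In case it fills up again, a trivial way to keep an eye on space from a second SSH session while the build runs:

    # Poll free space and inode usage on / once a minute
    while sleep 60; do df -hi /; done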

    Thanks to @linveo for a nice VPS and for additional disk space! <3 Thanks to @cmeerw for his NetBSD images, help, and encouragement! <3

    Thanked by (1)linveo

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    Wow!

    ===> Successful make release
    ===> build.sh ended:      Sun Oct 20 00:10:23 UTC 2024
    ===> Summary of results:
             build.sh command:    ./build.sh -U -u -j 2 -m amd64 -O /home/tom/obj release
             build.sh started:    Sat Oct 19 23:45:56 UTC 2024
             NetBSD version:      10.99.12
             MACHINE:             amd64
             MACHINE_ARCH:        x86_64
             Build platform:      NetBSD 10.99.12 amd64
             HOST_SH:             /bin/sh
             share/mk MAKECONF:   /etc/mk.conf
             MAKECONF file:       /etc/mk.conf (File not found)
             TOOLDIR path:        /home/tom/obj/tooldir.NetBSD-10.99.12-amd64
             DESTDIR path:        /home/tom/obj/destdir.amd64
             RELEASEDIR path:     /home/tom/obj/releasedir
             Updated makewrapper: /home/tom/obj/tooldir.NetBSD-10.99.12-amd64/bin/nbmake-amd64
             Successful make release
             build.sh ended:      Sun Oct 20 00:10:23 UTC 2024
    ===> .
         1467.24 real       715.16 user       150.22 sys
    linveo$ 
    

    Um . . . what now?
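
    (The usual answer, going by the NetBSD build documentation: install the new kernel, reboot onto it, then let build.sh install the userland over the running system. The paths below match the build output above; run as root, and treat it as a sketch rather than gospel.)

    # 1. install the freshly built GENERIC kernel from ~/obj and reboot (see below)
    # 2. then install the new userland in place:
    cd /home/tom/src
    ./build.sh -U -u -O /home/tom/obj install=/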

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)
    edited October 20

    It looks like we only needed a little more space than we had.

    When we ran out of file space before the size increase:

    linveo# df -hi /
    Filesystem     Size   Used  Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2        20G    19G   1.5M  99%     795510   18952264    4% /
    linveo# 
    

    Now, after the filesystem size increase and after finishing the build:

    linveo# df -hi /
    Filesystem     Size   Used  Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2        40G    21G    17G  54%     809416   39100806    2% /
    linveo# 
    

    It's probably very simple, but I have to figure out how to install the newly built distribution and then see if the VM will reboot.

    Then we can reinstall from @cmeerw's new image to fix the inode issue.

    Hmm. I wonder if there might be a way to fix the inode issue without reinstalling the VM from the new image. Like, for example: create and move critical parts of the OS into a memory-resident filesystem, then wipe and re-create the disk-based filesystem, download a full NetBSD onto the new disk-based filesystem, get the sources, and rebuild. :)
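
    The awkward part is that the key step, re-creating the root filesystem with a saner inode density, can't be done while that filesystem is mounted, hence the need for some temporary root to work from. The newfs itself would look roughly like this (parameter values invented purely for illustration):

    # Re-create the FFS with fewer, larger-spaced inodes -- DESTROYS everything on dk2!
    # -b/-f: block/fragment sizes, -i: bytes of data per inode (bigger = fewer inodes)
    newfs -O 2 -b 32768 -f 4096 -i 65536 /dev/rdk2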

    I hope everyone gets the servers they want!

  • edited October 20

    @Not_Oles said:
    @Crab What's up in the FreeBSD world?

    Since everything "just works" every day and night, there's not much to report but 100% uptime across the board!

    FreeBSD 14.2 is going to arrive in early December, so that'll bring some new excitement and a bunch of work to get all the systems updated, but I'm not expecting any hiccups there.
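
    For reference, the binary upgrade should be the usual freebsd-update dance (sketched from the Handbook procedure; adjust for your setup):

    # fetch and stage 14.2, install the kernel, reboot, then finish the userland
    freebsd-update -r 14.2-RELEASE upgrade
    freebsd-update install
    shutdown -r now
    # after the reboot:
    freebsd-update install
    pkg upgrade        # refresh packages against the new release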

    People have been very busy with this thread and I am super happy to see that. Thank you @linveo for providing these great resources to the community and @cmeerw for the excellent work on NetBSD!

    Also, a tip of the day: check out this great command-line utility with an awesome modern approach: https://github.com/aristocratos/bpytop

    Thanked by (1)Not_Oles
  • edited October 20

    @Not_Oles said: .. we only needed a little more space..

    Risky:

    tunefs -m 1 /dev/dk2

    ;)

    Thanked by (1)Not_Oles

    It wisnae me! A big boy done it and ran away.
    NVMe2G for life! until death (the end is nigh)

  • @Not_Oles said: It looks like we only needed a little more space than we had.

    So it would have worked with the new filesystem settings even on the original disk space allocation.

    @Not_Oles said: Hmm. I wonder if there might be a way to fix the inode issue without reinstalling the VM from the new image. Like, for example: create and move critical parts of the OS into a memory-resident filesystem, then wipe and re-create the disk-based filesystem, download a full NetBSD onto the new disk-based filesystem, get the sources, and rebuild.

    Well... there is a 512 MB swap partition you could use as a temporary alternative root filesystem.
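
    A very rough outline of that route, untested and easy to get wrong, so take it as a thought experiment rather than a recipe:

    # DANGER: 512 MB is tight, and a slip here loses the whole system
    swapctl -d /dev/dk1    # stop swapping on the small wedge
    newfs /dev/rdk1        # put a throwaway FFS on it
    mount /dev/dk1 /mnt
    # copy a minimal root onto /mnt (kernel, /rescue, /etc, a populated /dev),
    # reboot with "boot -a" and point the kernel at dk1 as root, then re-newfs
    # dk2 with better parameters and copy everything back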

    Thanked by (1)Not_Oles
  • linveo (Hosting Provider, OG)

    One of our awesome customers put together a guide on installing OpenBSD via ISO on our services. Take a look!

    https://btxx.org/posts/openbsd-linveo/

    Thanked by (2)AlwaysSkint Not_Oles

    linveo.com | Shared Hosting | KVM VPS | Dedicated Servers

  • Not_Oles (Hosting Provider, Content Writer)

    Thanks @linveo! The OpenBSD install post looks great! The overview linked above leads to a step-by-step walkthrough, which in turn links to a page about setting up a desktop environment.

    I hope to try the OpenBSD install after I get NetBSD to rebuild itself as -current (done), install the newly built kernel (upcoming), reboot (upcoming), and then install (also upcoming) the already-built -current userland.

    @linveo If you haven't already, maybe you could kindly pass on to your customer our invitation to join us here in the LES BSD thread? Thanks again!

    I hope everyone gets the servers they want!

  • @linveo said:
    One of our awesome customers put together a guide on installing OpenBSD via ISO on our services. Take a look!

    https://btxx.org/posts/openbsd-linveo/

    Interesting, I only ever get as far as this with OpenBSD:

    scsibus6 at softraid0: 256 targets

    Not that I really want to use OpenBSD, was just curious...

    Thanked by (1)Not_Oles
  • Not_Oles (Hosting Provider, Content Writer)

    Here we move our faithful, hard-working previous kernel to "retired" status, copy the newly compiled kernel from the obj directory to /, and adjust its mode to match the previous kernel's.

    linveo# date
    Tue Oct 22 18:55:26 UTC 2024
    linveo# pwd
    /home/tom/obj/sys/arch/amd64/compile/GENERIC
    linveo# ls -l netbsd
    -rwxr-xr-x  1 tom  users  29572928 Oct 19 23:56 netbsd
    linveo# ls -l /netbsd
    -rw-r--r--  1 root  wheel  29572760 Oct  5 00:39 /netbsd
    linveo# mv /netbsd /netbsd-old
    linveo# cp netbsd /
    linveo# ls -l /netbsd*
    -rwxr-xr-x  1 root  wheel  29572928 Oct 22 18:59 /netbsd
    -rw-r--r--  1 root  wheel  29572760 Oct  5 00:39 /netbsd-old
    linveo# chmod 644 /netbsd
    linveo# ls -l /netbsd*
    -rw-r--r--  1 root  wheel  29572928 Oct 22 18:59 /netbsd
    -rw-r--r--  1 root  wheel  29572760 Oct  5 00:39 /netbsd-old
    linveo# date; shutdown -r now
    Tue Oct 22 19:07:28 UTC 2024
    Shutdown NOW!
    shutdown: [pid 5108]
    linveo#                                                                                
    *** FINAL System shutdown message from [email protected] ***          
    System going down IMMEDIATELY                                                  
                                                                                   
                                                                                   
    
    System shutdown time has arrived
    
    About to run shutdown hooks...
    Stopping cron.
    Stopping inetd.
    Saved entropy to /var/db/entropy-file.
    Forcibly unmounting /tmp
    Forcibly unmounting /var/shm
    Removing block-type swap devices
    swapctl: removing /dev/dk1 as swap device
    Tue Oct 22 19:07:33 UTC 2024
    
    Done running shutdown hooks.
    Connection to xxx.xxx.xxx.xxx closed by remote host.
    Connection to xxx.xxx.xxx.xxx closed.
    chronos@penguin:~/servers/linveo$ 
    

    Now to reboot and hopefully come back up running our self-compiled NetBSD-current GENERIC kernel.

    Did it work? :)

    chronos@penguin:~/servers/linveo$ `head -n 1 login`
    Last login: Tue Oct 22 18:17:38 2024 from xxx.xxx.xxx.xxx
    NetBSD 10.99.12 (GENERIC) #1: Sat Oct 19 23:52:58 UTC 2024
    
    Welcome to NetBSD!
    
    We recommend that you create a non-root account and use su(1) for root access.
    linveo# date
    Tue Oct 22 19:09:54 UTC 2024
    linveo# 
    

    Next up is to install our newly compiled NetBSD-current userland and, if changes are needed, adjust its configuration. Once the new userland is installed, we will have reached our goal of running self-compiled NetBSD-current! Having all the sources around will be really handy -- we can look at or change anything. If we get the userland up, then maybe we can try @AlwaysSkint's risky method of restructuring the filesystem to fix the inode issue, or just reinstall from an ISO where the inode issue is already fixed.
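
    Once the userland is in (e.g. via "build.sh ... install=/" as sketched earlier), the configuration adjustment usually comes down to postinstall plus etcupdate, roughly:

    # compare the running /etc against the new sources and fix what gets flagged
    postinstall -s /home/tom/src check
    postinstall -s /home/tom/src fix
    # interactively merge the remaining /etc changes
    etcupdate -s /home/tom/src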

    Thanks to @linveo for the really nice VPS! <3 Thanks to @cmeerw, @AlwaysSkint and the other guys here for suggestions and for catching my mistakes. <3

    I hope everyone gets the servers they want!

  • @Not_Oles said: If we get the userland up, then maybe we can try @AlwaysSkint's risky method of restructuring the filesystem to fix the inodes issue

    You mean the tunefs -m? That won't do anything to fix the inodes issue - it only allows non-root users to use a bit more of the filesystem space.
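
    For the curious, that knob is the reserved-space percentage in the superblock; a quick look on NetBSD (dk2 being the root wedge, as above):

    # inspect the superblock (minfree, bytes per inode, block sizes, ...)
    dumpfs /dev/rdk2 | head -n 24
    # drop the reserve from the default 5% to 1%; best done with the filesystem
    # unmounted or read-only, e.g. from single-user mode
    tunefs -m 1 /dev/rdk2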

    Thanked by (1)Not_Oles