LES BSD Thread!


Comments

  • Not_Oles (Hosting Provider, Content Writer)

    Looks like the info on how to build NetBSD from source is in Part VI of the NetBSD Guide.

    I hope everyone gets the servers they want!

  • OpenBSD ftw!

  • Made another update to the NetBSD 10.0 image to use more useful parameters during filesystem creation (only realised over the weekend that the scripts I based my image creation on used some rather questionable filesystem settings that result in an almost insane number of inodes after resizing).

    As a bonus I then tried to also make an image for NetBSD 9.4. This required an additional reboot when resizing the partition/filesystem, but the end result should be very similar to the 10.0 image. (Note: 9.4 doesn't have any SSL certificates, so I am setting the default PKG_PATH to the http URL instead of https).
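
    As a sketch of that http fallback, a PKG_PATH along these lines could be set on a 9.4 install (the exact mirror URL here is my assumption, not taken from the image):

```shell
# Hypothetical PKG_PATH for a NetBSD 9.4 box that lacks SSL certificates:
# plain http sidesteps the certificate verification that https would need.
# The cdn.NetBSD.org path is an assumed mirror layout, not from the image.
export PKG_PATH="http://cdn.NetBSD.org/pub/pkgsrc/packages/NetBSD/$(uname -p)/9.4/All"
echo "$PKG_PATH"
```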

    Hopefully, these are now stable enough and I can wait for NetBSD 10.1 for the next update.

    @linveo could you update the NetBSD 10.0 image and add the NetBSD 9.4 image please?

    BTW, for those interested in some of the technical details, I noticed that you can end up with a screwed up filesystem that might silently eat some of your data if you are not careful with all the resize stuff (but without ever seeing any warning from the tools) - that's PR #58723. A cosmetic bug in the df output I noticed has already been fixed on HEAD - PR #58718

  • Not_Oles (Hosting Provider, Content Writer)

    @cmeerw said: some rather questionable filesystem settings that result in an almost insane number of inodes after resizing

    With your two new images and with the NetBSD PRs that you mentioned, it looks like both you and NetBSD itself have successfully moved beyond the inode issue.

    Normally I would try to read up on the issue before asking a question, but, if you don't mind, could you please tell me whether you think the inode issue might exist on my install from your older image, and how to run a quick check?

    Thanks @cmeerw! :)


  • @Not_Oles said: Normally I would try to read up on the issue before asking a question, but, maybe, if you don't mind, you could please tell me whether you think the inode issue might exist on my install from your older image, and how to run a quick check.

    Yes, your image will be affected. A quick way to tell is to run df -mi /

    In the iAvail column it will likely show something like 20 million inodes (instead of maybe 3 million), but in the 1M-blocks column you'll likely only see around 20000 blocks (instead of maybe 24300).
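
    A scripted version of that check might look like the following (the awk field numbers assume NetBSD's df -mi column order; the sample line mirrors the kind of output an affected install produces, and the 10-million threshold is just an arbitrary midpoint between the two templates' inode counts):

```shell
# Sample df -mi output from an affected (old-template) install:
sample='Filesystem   1M-blocks      Used     Avail %Cap      iUsed     iAvail %iCap Mounted on
/dev/dk2         20231      1920     17299   9%     307359   19440415    1% /'

# iUsed ($6) + iAvail ($7) gives the total inode count for the filesystem.
echo "$sample" | awk 'NR == 2 { print (($6 + $7 > 10000000) ? "old template" : "new template") }'
# -> old template
```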

  • Not_Oles (Hosting Provider, Content Writer)

    @cmeerw

    I still have to study up, but here is the output of the df -mi / command you suggested, showing results as you predicted:

    linveo# date
    Mon Oct  7 23:27:04 UTC 2024
    linveo# df -mi /
    Filesystem   1M-blocks      Used     Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2         20231      1920     17299   9%     307359   19440415    1% /
    linveo# 
    

    I still need to try the rebuild from source. When I get a chance, I will do that and post about what happens. Trying the compile will be a lot of fun even if it doesn't work for any reason, including maybe the filesystem. :)

    As always, thanks for your kind help! <3


  • OpenBSD 7.6 has been released.

  • @cmeerw said:
    OpenBSD 7.6 has been released.

    I wish they'd release drivers for the Raspberry Pi Zero 2 W - or at least that FreeBSD would.

  • linveo (Hosting Provider, OG)

    @cmeerw said:
    Made another update to the NetBSD 10.0 image to use more useful parameters during filesystem creation (only realised over the weekend that the scripts I based my image creation on used some rather questionable filesystem settings that result in an almost insane number of inodes after resizing).

    As a bonus I then tried to also make an image for NetBSD 9.4. This required an additional reboot when resizing the partition/filesystem, but the end result should be very similar to the 10.0 image. (Note: 9.4 doesn't have any SSL certificates, so I am setting the default PKG_PATH to the http URL instead of https).

    Hopefully, these are now stable enough and I can wait for NetBSD 10.1 for the next update.

    @linveo could you update the NetBSD 10.0 image and add the NetBSD 9.4 image please?

    BTW, for those interested in some of the technical details, I noticed that you can end up with a screwed up filesystem that might silently eat some of your data if you are not careful with all the resize stuff (but without ever seeing any warning from the tools) - that's PR #58723. A cosmetic bug in the df output I noticed has already been fixed on HEAD - PR #58718

    I have loaded up the latest NetBSD 10 template and added a new one for 9.4. Thanks again!

    If you have any instructions I can follow to create qcow2 templates, I can try as well. I was given some documentation from VF to create my own, but not sure how well it will work.


    linveo.com | Shared Hosting | KVM VPS | Dedicated Servers

  • @Not_Oles said: I still have to study up, but here is the output of the df -mi / command you suggested, showing results as you predicted:

    This is what you get with a new NetBSD 10 install:

    # df -mi /
    Filesystem   1M-blocks      Used     Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2         24316       441     22659   1%      15723    3101587    0% /
    
  •

    @linveo said: I have loaded up the latest NetBSD 10 template and added a new one for 9.4. Thanks again!

    Thanks so much. The NetBSD 10 image works as expected.

    The NetBSD 9.4 image works, except that it doesn't see any configuration data, so no network or SSH keys are configured. I am not sure if this is an issue with NetBSD 9.4 or if that config data is missing from the VirtFusion side (on my local tests, NetBSD 9.4 seemed to see the same drives as NetBSD 10).

    If you have any instructions I can follow to create qcow2 templates, I can try as well. I was given some documentation from VF to create my own, but not sure how well it will work.

    I have made my scripts available here - this script just needs to be run as root on a NetBSD 10 host (it should work without any additional dependencies on a minimal NetBSD 10 installation)

    ./build.sh -e -v 10.0
    

    will create a netbsd-10.0.raw file.

    That can then be converted to a qcow2 image with qemu-img convert -c -O qcow2 netbsd-10.0.raw netbsd-10.0.qcow2 (qemu-img is not installed on the minimal NetBSD 10 image, but you could do that step on Linux).

    (For NetBSD 9.4 I use ./build.sh -v 9.4 without the -e (EFI boot) option, as that didn't seem to work, so the 9.4 image is BIOS boot only - which is what VirtFusion is using anyway.)

  • Not_Oles (Hosting Provider, Content Writer)

    @Not_Oles said:
    Looks like the info on how to build NetBSD from source is in Part VI of the NetBSD Guide.

    Finally getting around to trying this. :)

    The above linked guide says to make an unprivileged user account to use for building.

    Documentation for how to make a user account is in Part III, Section 5.6 of the NetBSD Guide.

    I followed the above linked steps in the Guide. Now I have my unprivileged user, tom, who can successfully connect via ssh and use su.

    I hope that my posting reference links and steps I followed might encourage others who haven't yet tried building their entire NetBSD system to actually try it.

    Next up will be making the build directories and downloading and updating the sources.

    Thanks to @linveo for providing the fun test VM for free and to the BSD guys here for watching my back and catching my mistakes. <3



  • Not_Oles (Hosting Provider, Content Writer)

    @Not_Oles said:
    Next up will be making the build directories and downloading and updating the sources.

    Now we have the directory for the sources. Reference: https://www.netbsd.org/docs/guide/en/chap-fetch.html#chap-fetch-dirs

    linveo# cd /usr
    linveo# pwd
    /usr
    linveo# ls /home
    tom
    linveo# mkdir /usr/src
    linveo# chown tom /usr/src
    linveo# ls -l | grep src
    drwxrwxr-x 53 600 125 1024 Sep 28 01:01 pkgsrc
    -rw-r--r-- 1 root wheel 84995861 Sep 28 01:15 pkgsrc.tar.gz
    -rw-r--r-- 1 root wheel 64 Sep 28 01:15 pkgsrc.tar.gz.SHA1
    drwxr-xr-x 2 tom wheel 512 Oct 14 00:11 src
    linveo#


  • Not_Oles (Hosting Provider, Content Writer)

    Setting the CVS environment variables as instructed in https://www.netbsd.org/docs/guide/en/chap-fetch.html#chap-fetch-cvs

    linveo# pwd
    /usr
    linveo# export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
    linveo# echo $CVSROOT
    anoncvs@anoncvs.NetBSD.org:/cvsroot
    linveo# export CVS_RSH="ssh"
    linveo# echo $CVS_RSH
    ssh
    linveo#


  • Not_Oles (Hosting Provider, Content Writer)

    We have options to download tarballs of the sources, but, even though it is slower, let's try CVS as specified at https://www.netbsd.org/docs/guide/en/chap-fetch.html#chap-fetch-cvs-netbsd-current

    Switching to tom with su, then double-checking the environment variables. Skipping the X Window System sources for now.

    linveo# su tom
    linveo$ whoami
    tom
    linveo$ echo $CVSROOT
    anoncvs@anoncvs.NetBSD.org:/cvsroot
    linveo$ echo $CVS_RSH
    ssh
    linveo$ whoami
    tom
    linveo$ 
    

    Now checking out the sources with CVS as per https://www.netbsd.org/docs/guide/en/chap-fetch.html#chap-fetch-cvs-netbsd-current

    linveo$ date
    Mon Oct 14 00:40:25 UTC 2024
    linveo$ cvs checkout -A -P src
    
      [ . . . Lots of fun output lines to watch! . . . ]
    
    


  • Not_Oles (Hosting Provider, Content Writer)

    Haha, since it wasn't running inside tmux, I borked the above checkout by shutting my Chromebook when I wanted to go to sleep. Probably all that was necessary was to rerun the checkout command, which I did, and the results seemed okay.

    For the sake of completeness, I moved src to src-old, made a new src directory owned by tom, and reran the checkout from the beginning, inside tmux. A few lines of terminal output from the new checkout are shown below. It looks like the full checkout took around 3.5 hours.

    I think that the final checkout "Updating" lines which mention X are not a problem. If I remember right, NetBSD together with pkgsrc offers X.Org and XFree86 versions of X. There also seem to be some X libraries in the main NetBSD sources.

    Next up might be to see whether the newly checked out sources build successfully. In the newly checked out README.md file, it says:

    Building

    You can cross-build NetBSD from most UNIX-like operating systems.
    To build for amd64 (x86_64), in the src directory:

    ./build.sh -U -u -j4 -m amd64 -O ~/obj release

    I think I'd have to use -j 2 on my two vCore VPS. I'm not sure if ~/obj needs to exist. I should skim build.sh. :)
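
    For picking the -j value, something like this could be used instead of hard-coding it (a sketch; getconf _NPROCESSORS_ONLN is available on NetBSD as well as Linux):

```shell
# Derive the build parallelism from the number of online CPUs,
# then show the build.sh invocation that would use it.
NCPU=$(getconf _NPROCESSORS_ONLN)
echo "./build.sh -U -u -j ${NCPU} -m amd64 -O ~/obj release"
```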

    linveo$ pwd
    /usr
    linveo$ whoami
    tom
    linveo$ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
    linveo$ export CVS_RSH="ssh"
    linveo$ date
    Mon Oct 14 05:44:35 UTC 2024
    linveo$ time cvs checkout -A -P src
    
      [ . . . ]
    
    cvs checkout: Updating src/x11/tools
    cvs checkout: Updating src/x11/tools/bdftopcf
    cvs checkout: Updating src/x11/tools/fc-cache
    cvs checkout: Updating src/x11/tools/gen_matypes
    cvs checkout: Updating src/x11/tools/makekeys
    cvs checkout: Updating src/x11/tools/makestrs
    cvs checkout: Updating src/x11/tools/mkfontdir
    cvs checkout: Updating src/x11/tools/mkfontscale
    cvs checkout: Updating src/x11/tools/mkg3states
    cvs checkout: Updating src/x11/tools/pswrap
    cvs checkout: Updating src/x11/tools/rgb
    cvs checkout: Updating src/x11/tools/ucs2any
    cvs checkout: Updating src/x11/tools/xkbcomp
        12656.60 real        56.42 user       297.86 sys
    linveo$ echo $?
    0
    linveo$ 
    


  • Not_Oles (Hosting Provider, Content Writer)

    I made a tar archive of the newly checked out /src so it's easy to revert if / when the build fails or for any other reason.

    linveo# pwd
    /usr
    linveo# tar cvf src-new-checkout-20241014.tar src
    linveo# ls -lh src-new-checkout-20241014.tar 
    -rw-r--r--  1 root  wheel  3.1G Oct 14 20:19 src-new-checkout-20241014.tar
    linveo# 
    


  • Not_Oles (Hosting Provider, Content Writer)

    Building NetBSD-current on a Linveo 2 vCore 2 GB RAM VPS

    Reference: https://www.netbsd.org/docs/guide/en/chap-build.html

    Wondering whether the VPS has enough memory. Wondering about the effect of the inode issue mentioned above. Wondering how long the build will take, if it completes.

    Decided to try a compile, because, why not? Following nia's procedure in the above linked page, I extracted the tar file into /home/tom and made a /home/tom/obj directory.

    Ran this build.sh command inside tmux: time ./build.sh -U -u -j 2 -m amd64 -O ~/obj release

    Since starting the build, it's been maybe half an hour while I have been writing this post, and the build seems to be still running. :)

    @linveo Linveo control panel shows CPU utilization ranging from about 88% to about 102.3%. :)

    linveo# su - tom
    linveo$ pwd
    /home/tom
    linveo$ whoami
    tom
    linveo$ mkdir obj
    linveo$ tar xf /usr/src-new-checkout-20241014.tar 
    linveo$ ls -l
    total 3
    drwxr-xr-x   2 tom  users  512 Oct 14 21:11 obj
    -rw-------   1 tom  users   14 Oct 11 02:24 password
    drwxr-xr-x  25 tom  users  512 Oct 14 09:15 src
    linveo$ cd src
    linveo$ ls
    BUILDING      UPDATING      crypto        external      regress       tests
    CVS           bin           dist          games         rescue        tools
    Makefile      build.sh      distrib       include       sbin          usr.bin
    Makefile.inc  common        doc           lib           share         usr.sbin
    README.md     compat        etc           libexec       sys
    linveo$ ls -l
    total 373
    -rw-r--r--    1 tom  users  42791 Apr 26 17:38 BUILDING
    drwxr-xr-x    2 tom  users    512 Oct 14 09:15 CVS
    -rw-r--r--    1 tom  users  16367 Sep  8  2023 Makefile
    -rw-r--r--    1 tom  users    355 May  2  2018 Makefile.inc
    -rw-r--r--    1 tom  users   1748 Sep  5  2021 README.md
    -rw-r--r--    1 tom  users  20004 Sep 26 20:08 UPDATING
    drwxr-xr-x   37 tom  users   1024 Oct 14 09:15 bin
    -rwxr-xr-x    1 tom  users  73189 Jul 23 20:46 build.sh
    drwxr-xr-x    6 tom  users    512 Oct 14 05:45 common
    drwxr-xr-x   10 tom  users    512 Oct 14 09:15 compat
    drwxr-xr-x    5 tom  users    512 Oct 14 05:47 crypto
    drwxr-xr-x    4 tom  users    512 Oct 14 09:14 dist
    drwxr-xr-x   61 tom  users   1536 Oct 14 09:14 distrib
    drwxr-xr-x    4 tom  users    512 Oct 14 06:05 doc
    drwxr-xr-x   77 tom  users   3072 Oct 14 09:13 etc
    drwxr-xr-x   23 tom  users    512 Oct 14 09:02 external
    drwxr-xr-x   54 tom  users   1024 Oct 14 09:01 games
    drwxr-xr-x    8 tom  users   2048 Oct 14 08:59 include
    drwxr-xr-x   67 tom  users   1536 Oct 14 08:59 lib
    drwxr-xr-x   29 tom  users   1024 Oct 14 08:59 libexec
    drwxr-xr-x    6 tom  users    512 Oct 14 08:59 regress
    drwxr-xr-x    3 tom  users    512 Oct 14 08:14 rescue
    drwxr-xr-x  116 tom  users   2560 Oct 14 08:59 sbin
    drwxr-xr-x   20 tom  users    512 Oct 14 08:59 share
    drwxr-xr-x   37 tom  users   1024 Oct 14 08:59 sys
    drwxr-xr-x   22 tom  users    512 Oct 14 08:57 tests
    drwxr-xr-x  120 tom  users   2560 Oct 14 08:57 tools
    drwxr-xr-x  260 tom  users   4608 Oct 14 08:57 usr.bin
    drwxr-xr-x  171 tom  users   3072 Oct 14 08:57 usr.sbin
    linveo$ date
    Tue Oct 15 00:32:09 UTC 2024
    linveo$ time ./build.sh -U -u -j 2 -m amd64 -O ~/obj release
    ===> build.sh command:    ./build.sh -U -u -j 2 -m amd64 -O /home/tom/obj release
    ===> build.sh started:    Tue Oct 15 00:32:27 UTC 2024
    ===> NetBSD version:      10.99.12
    ===> MACHINE:             amd64
    ===> MACHINE_ARCH:        x86_64
    ===> Build platform:      NetBSD 10.99.12 amd64
    ===> HOST_SH:             /bin/sh
    ===> No $TOOLDIR/bin/nbmake, needs building.
    ===> Bootstrapping nbmake
    
      [ . . . ]
    


  • Not_Oles (Hosting Provider, Content Writer)

    Here's the current output of top in case anybody might be interested.

    load averages:  2.28,  2.30,  2.19;               up 9+22:58:50                          01:52:57
    47 processes: 45 sleeping, 2 on CPU
    CPU states: 31.5% user,  0.0% nice, 17.0% system,  0.1% interrupt, 51.4% idle
    Memory: 1557M Act, 728M Inact, 51M Exec, 2145M File, 615M Free
    Swap: 512M Total, 676K Used, 511M Free / Pools: 1003M Used / Network: 25K In, 160K Out
    
      PID USERNAME PRI NICE   SIZE   RES STATE       TIME   WCPU    CPU COMMAND
    21341 root      85    0    22M 4948K poll/0      0:45  2.98%  2.98% sshd-session
     4566 tom       85    0    23M   12M kqueue/0    1:43  0.63%  0.63% tmux
     3310 tom       28    0    68M   33M CPU/1       0:00  4.00%  0.20% cc1
        0 root     124    0     0K   41M syncer/1   18:35  0.00%  0.00% [system]
    12783 root      85    0    22M 4800K poll/0      0:24  0.00%  0.00% sshd-session
      602 tom       85    0    13M 2764K poll/1      0:07  0.00%  0.00% nbmake
    18609 tom       85    0    13M 2772K poll/1      0:06  0.00%  0.00% nbmake
    24109 tom       85    0    13M 2768K poll/1      0:06  0.00%  0.00% nbmake
    25675 tom       85    0    15M 4364K poll/1      0:05  0.00%  0.00% nbmake
      595 root      85    0    18M 2324K kqueue/1    0:05  0.00%  0.00% syslogd
     1723 tom       85    0    13M 2704K poll/1      0:04  0.00%  0.00% nbmake
     1109 root      85    0    21M 2664K kqueue/0    0:02  0.00%  0.00% master
     1065 root      85    0    12M 1576K nanosl/0    0:01  0.00%  0.00% cron
     8836 tom       43    0    13M 2252K CPU/0       0:00  0.00%  0.00% top
    12326 root      86   -2    12M 2092K wait/0      0:00  0.00%  0.00% su
    


  • Not_Oles (Hosting Provider, Content Writer)

    The build still seems to be going strong.

    I just grabbed the following output lines as they flashed by:

    make distribution started at:  Tue Oct 15 00:32:39 UTC 2024
    make distribution finished at: Tue Oct 15 04:45:35 UTC 2024
    [ . . . ]
    /home/tom/obj/tooldir.NetBSD-10.99.12-amd64/bin/nbmake -C /home/tom/obj/sys/arch/amd64/compile/GENERIC depend &&  /home/tom/obj/tooldir.NetBSD-10.99.12-amd64/bin/nbmake -C /home/tom/obj/sys/arch/amd64/compile/GENERIC &&  /home/tom/obj/tooldir.NetBSD-10.99.12-amd64/bin/nbmake -C /home/tom/obj/sys/arch/amd64/compile/GENERIC debuginstall
    

    Time for sleep soon.

    I wonder if we are going to run out of disk space.

    linveo# df -h .
    Filesystem     Size   Used  Avail %Cap Mounted on
    /dev/dk2        20G    15G   4.0G  78% /
    linveo# 
    


  • Not_Oles (Hosting Provider, Content Writer)

    Not out of disk space yet, but it's getting close.

    linveo# df -h .
    Filesystem     Size   Used  Avail %Cap Mounted on
    /dev/dk2        20G    17G   1.7G  91% /
    linveo# 
    


  • Not_Oles (Hosting Provider, Content Writer)

    Uh . . . File system is full.

    [ 874397.7878684] /: write failed, file system is full
    
    [ 874397.7878684] /: write failed, file system is full
    
    [ 874397.7878684] /: write failed, file system is full
    
    [ 874397.7878684] /: write failed, file system is full
    
    [ 874397.7878684] /: write failed, file system is full
    
    [ 874397.7878684] /: write failed, file system is full
    
    [ 874397.7878684] /: write failed, file system is full
    
    [ 874397.7878684] /: write failed, file system is full
    you have mail
    linveo$ df -h .
    Filesystem     Size   Used  Avail %Cap Mounted on
    /dev/dk2        20G    19G   540K  99% /
    linveo$ 
    

    Sleep now for me. . . .

    Anybody have a genius idea? Or two or three? :star:


  • @Not_Oles said: Anybody have a genius idea? Or two or three?

    Looks like you didn't start with the new NetBSD template, which would have given you maybe 4 GB more space on the file system (not sure if that would have been enough space).

    Just out of interest, how many inodes are you using? df -hi /

    Filesystem     Size   Used  Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2        24G   1.3G    21G   5%      16853    3100457    0% /
    
  • Not_Oles (Hosting Provider, Content Writer)

    @cmeerw said: Looks like you didn't start with the new NetBSD template

    Right. I went with what I already had, just for fun, to see if it would work. Now I have had some fun, and learned a little. So, all good!

    @cmeerw said: which would have given you maybe 4 GB more space on the file system (not sure if that would have been enough space).

    Yes.

    @cmeerw said: Just out of interest, how many inodes are you using? df -hi /

    Filesystem     Size   Used  Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2        24G   1.3G    21G   5%      16853    3100457    0% /

    linveo# df -hi /
    Filesystem     Size   Used  Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2        20G    19G   1.5M  99%     795510   18952264    4% /
    linveo# 
    

    16853 vs 795510

    One of the reasons why I went ahead with the old image was that I wondered whether the large number of inodes would cause any kind of a problem. Looks like, until we ran out of space, everything went fine despite the large number of inodes.

    When things started looking like space was going to get tight, I deleted pkgsrc and also my previously mentioned initial src checkout which had been moved to src-old.

    If @linveo wants to bump my VM's disk size, I am happy to rebuild NetBSD, this time beginning by reinstalling with @cmeerw's latest image. @linveo Rather than increasing disk size, if it's easier to wipe the existing VM and provision a replacement, that's fine. If increasing my VM's disk size causes any issues, it's also fine to leave it as is. I really like this VM! It has a fast processor, and it has only 23 ms ping from my current location in Sonora. :)


  • @Not_Oles said:

    @cmeerw said: Just out of interest, how many inodes are you using? df -hi /

    Filesystem     Size   Used  Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2        24G   1.3G    21G   5%      16853    3100457    0% /

    linveo# df -hi /
    Filesystem     Size   Used  Avail %Cap      iUsed     iAvail %iCap Mounted on
    /dev/dk2        20G    19G   1.5M  99%     795510   18952264    4% /
    linveo# 
    

    16853 vs 795510

    I was actually wondering how many inodes you had used compared to the new maximum of 3100457 - your 795510 will still comfortably fit under that maximum.
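
    Putting a number on that headroom (plain shell arithmetic on the figures quoted above):

```shell
# 795510 inodes used on the old image vs 3100457 available on the new one:
echo $(( 795510 * 100 / 3100457 ))   # -> 25 (percent of the new maximum in use)
```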

  • Not_Oles (Hosting Provider, Content Writer)

    @cmeerw said: new max of 3100457

    Google ""NetBSD" inode max 3100457" says "Your search did not match any documents". :)

    @cmeerw If you do not mind, may I please ask, what is the best way to find out where, when, and why the inode max was increased? Why was 3100457 selected? :)


  • @Not_Oles said:

    @cmeerw said: new max of 3100457

    Google ""NetBSD" inode max 3100457" says "Your search did not match any documents". :)

    @cmeerw If you do not mind, may I please ask, what is the best way to find out where, when, and why the inode max was increased? Why was 3100457 selected? :)

    Sorry... that's just the (max) number of inodes you will get when re-installing NetBSD from my template on that particular VM.

    See the newfs man page:

    -i bytes-per-inode
                     This specifies the density of inodes in the file system.  If
                     fewer inodes are desired, a larger number should be used; to
                     create more inodes a smaller number should be given.  The
                     default is to create an inode for every (4 * frag-size) bytes
                     of data space:
    

    Note that when creating the NetBSD image I am creating a 512 MB root filesystem, but as I am expecting the file system to be resized during first boot, I am setting "-b 16384 -f 2048" (so we get more reasonable defaults for the final filesystem size).

    So the initial 512 MB file system will get approximately 65 thousand inodes, and when the filesystem gets resized to 24.5 GB, the inode density stays the same and you'll end up with roughly 3 million inodes.
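
    Those figures can be sanity-checked against the default density formula from the man page (one inode per 4 * 2048 = 8192 bytes here; plain shell arithmetic, no filesystem involved):

```shell
# bytes-per-inode = 4 * frag-size = 4 * 2048 = 8192
echo $(( 512 * 1024 * 1024 / 8192 ))     # inodes on the initial 512 MB fs -> 65536
echo $(( 24316 * 1024 * 1024 / 8192 ))   # after resizing to ~24 GB -> 3112448
```

    The second figure matches the roughly 3 million inodes described above; the exact count on a real filesystem differs slightly because of metadata layout and rounding.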

  • BTW, the actual maximum number of inodes is 3117310 (3100457 was the number of available inodes on my system), but I think df only shows that number when using df -G / (and the values are currently swapped around on 10.0).

  • linveo (Hosting Provider, OG)

    @Not_Oles said:
    If @linveo wants to bump my VM's disk size, I am happy to rebuild NetBSD, this time beginning by reinstalling with @cmeerw's latest image. @linveo Rather than increasing disk size, if it's easier to wipe the existing VM and provision a replacement, that's fine. If increasing my VM's disk size causes any issues, it's also fine to leave it as is. I really like this VM! It has a fast processor, and it has only 23 ms ping from my current location in Sonora. :)

    You got it! I upped your VM to 50GB; it just needs a reboot. You might need to manually grow the filesystem too if cloud-init isn't able to handle it.


