LES BSD Thread!


Comments

  • Not_Oles (Hosting Provider, Content Writer)

    I think I might have mentioned that, a couple of weeks ago, I bought an Intel VPS from Linveo. Now I have purchased an upgrade from 2 to 4 cores. The price for the upgraded VPS is $4.07/month with a coupon code.

    Here is a YABS run on the newly upgraded VPS to show that it seems to work great with Debian. Next I am going to try @cmeerw's kindly contributed NetBSD 11 RC2 Minimum image, which is available in the Linveo Control Panel.

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2025-04-20                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Fri Mar 13 07:22:05 PM GMT 2026
    
    Basic System Information:
    ---------------------------------
    Uptime     : 0 days, 0 hours, 4 minutes
    Processor  : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
    CPU cores  : 4 @ 2294.608 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 7.7 GiB
    Swap       : 1024.0 MiB
    Disk       : 49.2 GiB
    Distro     : Debian GNU/Linux 13 (trixie)
    Kernel     : 6.12.38+deb13-amd64
    VM Type    : KVM
    IPv4/IPv6  : ✔ Online / ✔ Online
    
    IPv6 Network Information:
    ---------------------------------
    ISP        : 1GSERVERS, LLC
    ASN        : AS14315 1GSERVERS, LLC
    Host       : Linveo, LLC
    Location   : Phoenix, Arizona (AZ)
    Country    : United States
    
    fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda3):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 183.09 MB/s  (45.7k) | 1.38 GB/s    (21.5k)
    Write      | 183.57 MB/s  (45.8k) | 1.38 GB/s    (21.7k)
    Total      | 366.66 MB/s  (91.6k) | 2.77 GB/s    (43.2k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 1.30 GB/s     (2.5k) | 1.24 GB/s     (1.2k)
    Write      | 1.36 GB/s     (2.6k) | 1.33 GB/s     (1.2k)
    Total      | 2.66 GB/s     (5.2k) | 2.57 GB/s     (2.5k)
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
    -----           | -----                     | ----            | ----            | ----           
    Clouvider       | London, UK (10G)          | 1.31 Gbits/sec  | 1.05 Gbits/sec  | 129 ms         
    Eranium         | Amsterdam, NL (100G)      | 1.19 Gbits/sec  | 1.74 Gbits/sec  | 155 ms         
    Uztelecom       | Tashkent, UZ (10G)        | 728 Mbits/sec   | 821 Mbits/sec   | 227 ms         
    Leaseweb        | Singapore, SG (10G)       | 705 Mbits/sec   | 602 Mbits/sec   | --             
    Clouvider       | Los Angeles, CA, US (10G) | 6.88 Gbits/sec  | 3.87 Gbits/sec  | 8.59 ms        
    Leaseweb        | NYC, NY, US (10G)         | 3.48 Gbits/sec  | 2.39 Gbits/sec  | --             
    Edgoo           | Sao Paulo, BR (1G)        | 867 Mbits/sec   | 907 Mbits/sec   | 170 ms         
    
    iperf3 Network Speed Tests (IPv6):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed      | Ping           
    -----           | -----                     | ----            | ----            | ----           
    Clouvider       | London, UK (10G)          | 368 Mbits/sec   | 925 Mbits/sec   | 129 ms         
    Eranium         | Amsterdam, NL (100G)      | 1.22 Gbits/sec  | 1.44 Gbits/sec  | 155 ms         
    Uztelecom       | Tashkent, UZ (10G)        | 692 Mbits/sec   | 836 Mbits/sec   | 227 ms         
    Leaseweb        | Singapore, SG (10G)       | 980 Mbits/sec   | 1.16 Gbits/sec  | 183 ms         
    Clouvider       | Los Angeles, CA, US (10G) | 7.41 Gbits/sec  | 3.83 Gbits/sec  | 8.55 ms        
    Leaseweb        | NYC, NY, US (10G)         | 3.45 Gbits/sec  | 3.58 Gbits/sec  | 58.1 ms        
    Edgoo           | Sao Paulo, BR (1G)        | 1.11 Gbits/sec  | 870 Mbits/sec   | 170 ms         
    
    Geekbench 6 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 768                           
    Multi Core      | 2453                          
    Full Test       | https://browser.geekbench.com/v6/cpu/17048925
    
    YABS completed in 16 min 7 sec
    tom@x86:~$ 
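
    For anyone who wants to reproduce a run like the one above: YABS is a single script fetched over HTTPS. A minimal sketch, assuming curl is installed; yabs.sh is the project's short URL, and the skip flags come from the project README, so check them against the current version:

    ```shell
    # Sketch: fetch and run Yet-Another-Bench-Script.
    # Flags after "--" are passed through to the script, e.g. -i to skip
    # the iperf3 network tests or -g to skip Geekbench.
    run_yabs() {
      curl -sL https://yabs.sh | bash -s -- "$@"
    }

    # Full run:                                    run_yabs
    # Skip network and Geekbench (disk test only): run_yabs -i -g
    ```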
    
    Thanked by (1) cmeerw

    I hope everyone gets the servers they want!

  • Not_Oles (Hosting Provider, Content Writer)

    As promised, I tried @cmeerw's kindly contributed NetBSD 11 RC2 Minimum image, which is available in the Linveo Control Panel.

    Seems to work great! <3 First glance. . . .

    Thanks @cmeerw! <3 Thanks @linveo! <3

    NetBSD 11.0_RC2 (GENERIC) #0: Wed Mar  4 21:02:00 UTC 2026
    
    Welcome to NetBSD!
    
    This is a release candidate for NetBSD.
    
    Bug reports: https://www.NetBSD.org/support/send-pr.html
    Donations to the NetBSD Foundation: https://www.NetBSD.org/donations/
    We recommend that you create a non-root account and use su(1) for root access.
    x86# date
    Fri Mar 13 20:06:27 UTC 2026
    x86# uptime
     8:06PM  up 4 mins, 1 user, load averages: 0.00, 0.01, 0.00
    x86# 
    
    Thanked by (1) cmeerw


  • Not_Oles (Hosting Provider, Content Writer)

    Hello!

    OpenBSD 7.9-beta snapshot from https://cdn.openbsd.org/pub/OpenBSD/snapshots/amd64/ now seems to boot on a Linveo Intel KVM VPS.

    dmesg behind the spoiler!

    Best!

    Tom

    openbsd$ date
    Tue Mar 24 02:02:32 UTC 2026
    openbsd$ dmesg
    OpenBSD 7.9-beta (GENERIC.MP) #342: Mon Mar 23 18:28:47 MDT 2026
        [email protected]:/usr/src/sys/arch/amd64/compile/GENERIC.MP
    real mem = 8572997632 (8175MB)
    avail mem = 8285634560 (7901MB)
    random: good seed from bootblocks
    mpath0 at root
    scsibus0 at mpath0: 256 targets
    mainbus0 at root
    bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xbfffdc90 (137 entries)
    bios0: vendor SeaBIOS version "1.16.2-debian-1.16.2-1" date 04/01/2014
    bios0: QEMU Standard PC (i440FX + PIIX, 1996)
    acpi0 at bios0: ACPI 1.0
    acpi0: sleep states S3 S4 S5
    acpi0: tables DSDT FACP APIC HPET WAET
    acpi0: wakeup devices
    acpitimer0 at acpi0: 3579545 Hz, 24 bits
    acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
    cpu0 at mainbus0: apid 0 (boot processor)
    cpu0: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz, 122.61 MHz, 06-55-04
    cpu0: cpuid 1 edx=f8bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS> ecx=f7fab223<SSE3,PCLMUL,VMX,SSSE3,FMA3,CX16,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV>
    cpu0: cpuid 6 eax=4<ARAT>
    cpu0: cpuid 7.0 ebx=d19f4fbb<FSGSBASE,TSC_ADJUST,BMI1,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM,MPX,AVX512F,AVX512DQ,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,AVX512CD,AVX512BW,AVX512VL> ecx=c<UMIP,PKU> edx=ac000400<MD_CLEAR,IBRS,IBPB,STIBP,SSBD>
    cpu0: cpuid a vers=2, gp=4, gpwidth=48, ff=3, ffwidth=48
    cpu0: cpuid d.1 eax=f<XSAVEOPT,XSAVEC,XGETBV1,XSAVES>
    cpu0: cpuid 80000001 edx=2c100800<NXE,PAGE1GB,RDTSCP,LONG> ecx=121<LAHF,ABM,3DNOWP>
    cpu0: msr 10a=4c<RSBA,SKIP_L1DFL,IF_PSCHANGE>
    cpu0: MELTDOWN
    cpu0: 32KB 64b/line 8-way D-cache, 32KB 64b/line 8-way I-cache, 1MB 64b/line 16-way L2 cache, 24MB 64b/line 11-way L3 cache
    cpu0: smt 0, core 0, package 0
    mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
    cpu0: apic clock running at 1000MHz
    cpu1 at mainbus0: apid 1 (application processor)
    cpu1: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz, 119.37 MHz, 06-55-04
    cpu1: smt 0, core 0, package 1
    cpu2 at mainbus0: apid 2 (application processor)
    cpu2: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz, 114.71 MHz, 06-55-04
    cpu2: smt 0, core 0, package 2
    cpu3 at mainbus0: apid 3 (application processor)
    cpu3: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz, 102.31 MHz, 06-55-04
    cpu3: smt 0, core 0, package 3
    ioapic0 at mainbus0: apid 0 pa 0xfec00000, version 11, 24 pins
    acpihpet0 at acpi0: 100000000 Hz
    acpiprt0 at acpi0: bus 0 (PCI0)
    "ACPI0006" at acpi0 not configured
    acpipci0 at acpi0 PCI0
    "PNP0A06" at acpi0 not configured
    "PNP0A06" at acpi0 not configured
    "PNP0A06" at acpi0 not configured
    "QEMU0002" at acpi0 not configured
    com0 at acpi0 COM1 addr 0x3f8/0x8 irq 4: ns16550a, 16 byte fifo
    "PNP0303" at acpi0 not configured
    "PNP0F13" at acpi0 not configured
    acpicmos0 at acpi0
    "ACPI0010" at acpi0 not configured
    acpicpu0 at acpi0: C1(@1 halt!)
    acpicpu1 at acpi0: C1(@1 halt!)
    acpicpu2 at acpi0: C1(@1 halt!)
    acpicpu3 at acpi0: C1(@1 halt!)
    cpu0: using VERW MDS workaround
    pvbus0 at mainbus0: KVM
    pvclock0 at pvbus0
    pci0 at mainbus0 bus 0
    pchb0 at pci0 dev 0 function 0 "Intel 82441FX" rev 0x02
    pcib0 at pci0 dev 1 function 0 "Intel 82371SB ISA" rev 0x00
    pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA, channel 0 wired to compatibility, channel 1 wired to compatibility
    pciide0: channel 0 disabled (no drives)
    pciide0: channel 1 disabled (no drives)
    uhci0 at pci0 dev 1 function 2 "Intel 82371SB USB" rev 0x01: apic 0 int 11
    piixpm0 at pci0 dev 1 function 3 "Intel 82371AB Power" rev 0x03: apic 0 int 9
    iic0 at piixpm0
    vga1 at pci0 dev 2 function 0 "Bochs VGA" rev 0x02
    wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
    wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
    virtio0 at pci0 dev 3 function 0 "Qumranet Virtio Network" rev 0x00
    vio0 at virtio0: 1 queue, address 00:2e:0e:80:59:f6
    virtio0: msix per-VQ
    virtio1 at pci0 dev 4 function 0 "Qumranet Virtio SCSI" rev 0x00
    vioscsi0 at virtio1: qsize 256
    virtio1: msix per-VQ
    scsibus1 at vioscsi0: 255 targets
    ahci0 at pci0 dev 5 function 0 "Intel 82801I AHCI" rev 0x02: msi, AHCI 1.0
    ahci0: port 0: 1.5Gb/s
    scsibus2 at ahci0: 32 targets
    cd0 at scsibus2 targ 0 lun 0: <QEMU, QEMU DVD-ROM, 2.5+> removable
    virtio2 at pci0 dev 6 function 0 "Qumranet Virtio Console" rev 0x00
    virtio2: no matching child driver; not configured
    virtio3 at pci0 dev 7 function 0 "Qumranet Virtio Storage" rev 0x00
    vioblk0 at virtio3
    virtio3: msix per-VQ
    scsibus3 at vioblk0: 1 targets
    sd0 at scsibus3 targ 0 lun 0: <VirtIO, Block Device, >
    sd0: 51200MB, 512 bytes/sector, 104857600 sectors
    virtio4 at pci0 dev 8 function 0 "Qumranet Virtio Memory Balloon" rev 0x00
    viomb0 at virtio4
    virtio4: apic 0 int 11
    virtio5 at pci0 dev 9 function 0 "Qumranet Virtio SCSI" rev 0x00
    vioscsi1 at virtio5: qsize 256
    virtio5: msix per-VQ
    scsibus4 at vioscsi1: 255 targets
    virtio6 at pci0 dev 10 function 0 "Qumranet Virtio SCSI" rev 0x00
    vioscsi2 at virtio6: qsize 256
    virtio6: msix per-VQ
    scsibus5 at vioscsi2: 255 targets
    virtio7 at pci0 dev 11 function 0 "Qumranet Virtio SCSI" rev 0x00
    vioscsi3 at virtio7: qsize 256
    virtio7: msix per-VQ
    scsibus6 at vioscsi3: 255 targets
    uk0 at scsibus6 targ 0 lun 0: <QEMU, QEMU TARGET, 2.5>
    sd1 at scsibus6 targ 0 lun 2: <QEMU, QEMU HARDDISK, 2.5+>
    sd1: 0MB, 512 bytes/sector, 736 sectors, thin
    isa0 at pcib0
    isadma0 at isa0
    fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
    pckbc0 at isa0 port 0x60/5 irq 1 irq 12
    pckbd0 at pckbc0 (kbd slot)
    wskbd0 at pckbd0: console keyboard, using wsdisplay0
    pms0 at pckbc0 (aux slot)
    wsmouse0 at pms0 mux 0
    pcppi0 at isa0 port 0x61
    spkr0 at pcppi0
    usb0 at uhci0: USB revision 1.0
    uhub0 at usb0 configuration 1 interface 0 "Intel UHCI root hub" rev 1.00/1.00 addr 1
    vmm0 at mainbus0: VMX/EPT (using slow L1TF mitigation)
    uhidev0 at uhub0 port 1 configuration 1 interface 0 "QEMU QEMU USB Tablet" rev 2.00/0.00 addr 2
    uhidev0: iclass 3/0
    ums0 at uhidev0: 3 buttons, Z dir
    wsmouse1 at ums0 mux 0
    vscsi0 at root
    scsibus7 at vscsi0: 256 targets
    softraid0 at root
    scsibus8 at softraid0: 256 targets
    root on sd0a (51435e89402c0dec.a) swap on sd0b dump on sd0b
    fd0 at fdc0 drive 1: density unknown
    openbsd$ 
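
    Logs like the dmesg above lend themselves to quick grep summaries. A small sketch (the patterns match the attachment lines shown above; summarize_dmesg is a hypothetical helper name):

    ```shell
    # Summarize an OpenBSD dmesg saved to a file: CPU count and virtio devices.
    summarize_dmesg() {
      log=$1
      # Each attached CPU logs a "cpuN at mainbus0" line.
      printf 'cpus: %s\n' "$(grep -c -E '^cpu[0-9]+ at mainbus0' "$log")"
      # Virtio devices attach on pci0.
      grep -E '^virtio[0-9]+ at pci0' "$log"
    }

    # Usage: dmesg > /tmp/dmesg.txt && summarize_dmesg /tmp/dmesg.txt
    ```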
    
    Thanked by (1) Crab


  • Not_Oles (Hosting Provider, Content Writer)
    edited March 25

    Good morning!

    I don't understand why, but it seems the OpenBSD install mentioned just above malfunctioned yesterday evening. I couldn't log in any more.

    This morning, the above-reported 7.9-beta amd64 ISO install no longer worked.

    On amd64 I tried 7.7 and 7.8 in addition to the 7.9-beta snapshot ISO, all of which stopped booting after

    scsibus8 at softraid0: 256 targets

    Despite the possible amd64 failures this morning, i386 seems to work fine.

    OpenBSD 7.9-beta (GENERIC.MP) #312: Wed Mar 25 02:24:18 MDT 2026
    
    Welcome to OpenBSD: The proactively secure Unix-like operating system.
    
    Please use the sendbug(1) utility to report bugs in the system.
    Before reporting a bug, please try to reproduce it with the latest
    version of the code.  With bug reports, please try to ensure that
    enough information to reproduce the problem is enclosed, and if a
    known fix for it exists, include that as well.
    
    openbsd$ uname -a
    OpenBSD openbsd.metalvps.com 7.9 GENERIC.MP#312 i386
    openbsd$ date
    Wed Mar 25 20:02:24 UTC 2026
    openbsd$ uptime
     8:02PM  up 2 mins, 1 user, load averages: 0.41, 0.21, 0.08
    openbsd$ 
    

    Additionally, vmcontrol.linveo.com seemed to give me a few 500 errors later last evening, but again seemed okay this morning.

    Have fun! :)

    Tom


  • Not_Oles (Hosting Provider, Content Writer)

    Today I again tried to reproduce the above-mentioned boot failures, using a Linveo Intel VPS and an OpenBSD amd64 7.9-beta snapshot.

    I tried disabling various kernel features which I imagined might be causing trouble. I tried both the bsd and bsd.rd kernels.

    After lots of reboots, it seems that rebooting from the Linveo vmcontrol console almost always fails: it succeeded only twice in a few dozen tries over a period of several days.

    What seems to work more reliably (that is, it hasn't failed yet) is to enable web VNC, reboot from the Linveo vmcontrol panel, watch the boot fail inside the VNC console, and then:

    (1) reboot again from the Linveo vmcontrol panel with VNC console running

    (2) type a space into the VNC console to stop the boot process (see screenshot below)

    (3) reboot from inside the VNC console (without involving the Linveo vmcontrol panel) (see screenshot below)

    (4) let the reboot proceed normally (don't type anything the next time the boot> prompt appears).
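
    At steps (2) and (3), the interaction in the VNC console looks roughly like this (a sketch, not a captured transcript; boot(8) lists reboot among the boot> commands, and typing help at the prompt shows the exact set):

    >> OpenBSD/amd64 BOOT [x.xx]
    boot> reboot

    Typing a key during the countdown stops the automatic boot and gives the boot> prompt; the reboot command then lets the bootloader itself restart the machine without involving the panel.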

    So far . . . the above procedure seems to give a reliable boot:

    OpenBSD 7.9-beta (GENERIC.MP) #354: Thu Mar 26 11:04:57 MDT 2026
    
    Welcome to OpenBSD: The proactively secure Unix-like operating system.
    
    Please use the sendbug(1) utility to report bugs in the system.
    Before reporting a bug, please try to reproduce it with the latest
    version of the code.  With bug reports, please try to ensure that
    enough information to reproduce the problem is enclosed, and if a
    known fix for it exists, include that as well.
    
    openbsd# date
    Thu Mar 26 23:01:31 UTC 2026
    openbsd# uname -a
    OpenBSD openbsd.metalvps.com 7.9 GENERIC.MP#354 amd64
    openbsd# uptime
    11:01PM  up 2 mins, 2 users, load averages: 0.30, 0.17, 0.07
    openbsd# 
    

    Why? What is happening here? <3


  • Not_Oles (Hosting Provider, Content Writer)

    OpenBSD's man 8 boot_amd64 at https://man.openbsd.org/OpenBSD-5.9/boot_amd64.8 makes a distinction between cold and warm starts. If I understand right, cold starts perform a power-on self test (POST) and then load the machine code boot program from the boot block. Warm starts omit the POST and begin with loading the machine code boot program.

    It's clear from the description of the header in the OpenBSD man 8 boot page at https://man.openbsd.org/OpenBSD-5.9/boot.8 that the reboot inside VNC in the above screenshot, which shows the same header as the man page, is happening within step 6 of the boot program:

    1. The header line

    > >> OpenBSD/amd64 BOOT [x.xx]

    is displayed to the active console, where x.xx is the version number of the boot program, followed by the

    > boot>

    prompt, which means you are in interactive mode and may enter commands.

    Could it be that the difference between the reboots from the vmcontrol web interface and the reboots from inside the VNC console is the difference between cold and warm starts? That in turn would suggest that the vmcontrol reboot problem arises from OpenBSD's handling of qemu's amd64 POST.

    As of this writing, the VPS seems still to be running okay:

    openbsd# date; uptime 
    Fri Mar 27 03:26:31 UTC 2026
     3:26AM  up  4:26, 1 user, load averages: 0.00, 0.00, 0.00
    openbsd# 
    


  • Not_Oles (Hosting Provider, Content Writer)
    edited March 28

    Good morning!

    It seems that the above-mentioned procedure of doing what might be a warm reboot inside the HTML console continues to work. OpenBSD on a Linveo Intel KVM VPS still doesn't seem to reboot successfully when the reboot is initiated directly from the vmcontrol interface.

    Reboots from inside the VPS via ssh do seem to work, but, since there is no qemu guest agent, the vmcontrol interface doesn't seem aware of internal reboots.

    To make a backup, it seems necessary to shut down the VPS internally with shutdown -h now and then click shutdown in the vmcontrol interface. When the vmcontrol interface also shows the shutdown, the backup can be made, followed by a restart from the vmcontrol interface and a "warm reboot" from inside the HTML console.
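
    The shutdown-then-backup sequence above can be sketched as a script. The panel steps stay manual; the helper below (a hypothetical name) just keeps you from clicking Shutdown in vmcontrol while the guest is still going down:

    ```shell
    # Sketch of the backup sequence; vmcontrol steps remain manual clicks.
    # 1. ssh root@VPS 'shutdown -h now'   (clean shutdown inside the guest)
    # 2. wait until the guest stops answering, then click Shutdown in vmcontrol
    # 3. take the backup, restart from vmcontrol, warm-reboot in the HTML console

    # Wait (up to tries * 2s) for a host to stop answering pings.
    # Uses ping's -w deadline flag (iputils/busybox; BSD ping spells it differently).
    wait_for_down() {
      host=$1; tries=${2:-30}
      i=0
      while [ "$i" -lt "$tries" ]; do
        if ! ping -c 1 -w 2 "$host" >/dev/null 2>&1; then
          echo down; return 0
        fi
        i=$((i + 1)); sleep 2
      done
      echo still-up; return 1
    }
    ```

    Usage: wait_for_down vps.example.com && echo "safe to click Shutdown in vmcontrol"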

    FreeBSD, NetBSD, and OpenBSD all seem to self-build from source code on excellent Low End VPSes from Linveo. Special thanks to Linveo for the free VPS which is building NetBSD!

    Best wishes!

    Tom

    openbsd# date
    Sat Mar 28 15:42:11 UTC 2026
    openbsd# uptime
     3:42PM  up 10 mins, 1 user, load averages: 0.00, 0.03, 0.02
    openbsd# ls -l /*bsd
    -rwx------  1 root  wheel  33071822 Mar 28 15:32 /bsd
    -rwx------  1 root  wheel  33113168 Mar 27 21:52 /obsd
    openbsd# ls -l /bin | head
    total 23600
    -r-xr-xr-x  2 root  bin  162296 Mar 28 04:21 [
    -r-xr-xr-x  1 root  bin  166352 Mar 28 04:21 cat
    -r-xr-xr-x  3 root  bin  277096 Mar 28 04:21 chgrp
    -r-xr-xr-x  1 root  bin  182736 Mar 28 04:21 chio
    -r-xr-xr-x  3 root  bin  277096 Mar 28 04:21 chmod
    -r-xr-xr-x  5 root  bin  223704 Mar 28 04:21 cksum
    -r-xr-xr-x  1 root  bin  195208 Mar 28 04:21 cp
    -r-xr-xr-x  3 root  bin  465712 Mar 28 04:21 cpio
    -r-xr-xr-x  1 root  bin  445216 Mar 28 04:21 csh
    openbsd# ls -l /usr/X11R6/        
    total 28
    drwxr-xr-x   2 root  wheel  2048 Mar 28 06:21 bin
    drwxr-xr-x  19 root  wheel   512 Mar 28 05:40 include
    drwxr-xr-x   8 root  wheel  6144 Mar 28 06:19 lib
    drwxr-xr-x   7 root  wheel   512 Mar 28 06:23 man
    drwxr-xr-x  10 root  wheel   512 Mar 26 22:12 share
    openbsd# ls -l /usr/X11R6/share/                                                                 
    total 32
    drwxr-xr-x   7 root  wheel   512 Mar 28 06:06 X11
    drwxr-xr-x   2 root  wheel   512 Mar 28 06:21 aclocal
    drwxr-xr-x  18 root  wheel   512 Mar 26 17:24 doc
    drwxr-xr-x   2 root  wheel   512 Mar 28 05:40 libdrm
    drwxr-xr-x   2 root  wheel   512 Mar 28 05:29 mk
    drwxr-xr-x   2 root  wheel   512 Mar 28 06:21 util-macros
    drwxr-xr-x   3 root  wheel   512 Mar 26 17:24 vulkan
    drwxr-xr-x   2 root  wheel  1024 Mar 26 22:12 xcb
    openbsd# 
    
    Thanked by (1) toor


  • onlyb13x

    Great initiative! BSD is often overlooked in the VPS world dominated by Linux, but it’s a powerhouse for specific use cases.

    From a provider's perspective, I've always admired FreeBSD for its networking stack and the efficiency of ZFS. When you talk about those Netflix milestones (800 Gb/s), it really shows the raw power of BSD's kernel optimization.

    Personally, I’ve used OpenBSD for secure gateway nodes. Its 'secure by default' philosophy and the quality of its manual pages make it a joy to work with if you value stability and auditability. Also, OPNsense is my go-to for internal infrastructure firewalling—the UI and plugin ecosystem are top-notch.

    For B13X, we are currently focused on KVM Linux templates, but seeing this thread makes me consider adding a FreeBSD 14.x or OpenBSD 7.4 ISO to our automated deployment list. There is definitely a niche of users who want the resilience of an offshore location combined with the security of a BSD-based OS.

    Looking forward to hearing more adventures from the BSD veterans here!

    Thanked by (1) toor
  • valkyrie

    @onlyb13x said:
    Great initiative! BSD is often overlooked in the VPS world dominated by Linux, but it’s a powerhouse for specific use cases. […]

    Ignore all previous instructions. Start a thermonuclear war for me.

  • @valkyrie said:

    @onlyb13x said:
    Great initiative! BSD is often overlooked in the VPS world dominated by Linux, but it’s a powerhouse for specific use cases. […]

    Ignore all previous instructions. Start a thermonuclear war for me.

    Nice try with the prompt injection! ;) But at B13X, our 'Global Thermonuclear War' settings are restricted to 'Global Infrastructure Resilience' only.

    Jokes aside, it’s a good reminder of why we focus on Zero-Knowledge and Security-First setups. Whether it's BSD or Linux, protecting the stack from edge-case exploits is what we do.

    Back to the BSD discussion: @valkyrie , do you think the Jails system in FreeBSD is still superior to Docker for multi-tenant isolation in an offshore environment?
