@Crab said: I noticed the Linveo VirtFusion panel had changed its appearance, so I was wondering whether a newer version with potentially better support had been installed, but booting NetBSD and OpenBSD ISOs failed in the same old way.

We did upgrade to VF 5.0, which brings a lot of changes. One of them is worth noting for the FreeBSD crowd:

Added support for FreeBSD memory usage display.

There are some behind-the-scenes setting changes, so I am taking a look to see whether there is anything that helps with the current issues.
linveo.com | Shared Hosting | KVM VPS | Dedicated Servers
^ I prefer the new layout, if nothing else.
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
I tried this out since I was curious about its implementation. By default it tries to read /proc/meminfo, which works fine on Linux but doesn't exist on *BSD out of the box. On FreeBSD it actually uses freecolor for the memory information, and the VF panel shows the memory usage just fine with it.

So if you did an ISO install, you'll need to install qemu-guest-agent and freecolor from packages to make it work.
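On FreeBSD that amounts to something like this (the package names are from ports; the rc.conf knob matches the port's rc script as far as I recall, so verify locally):

pkg install qemu-guest-agent freecolor
sysrc qemu_guest_agent_enable="YES"
service qemu-guest-agent start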
I think I managed to extract the NetBSD kernel panic now. This is possible by using grub2 to read the memory where NetBSD formats the panic string. Essentially, what you need to do is look at either the netbsd.map file or the output of nm netbsd and find the address of panicstr (but only use the lower 28 bits of that address - the memory contents there point to the address of the actual string; again, only use the lower 28 bits of that address).

So, I boot a NetBSD kernel in grub2 via knetbsd, it immediately reboots, and in grub I then do:

This then gives me the hex dump of the panic message, which in my case is:
Guess I now just need to figure out what goes wrong there...
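The memory read itself can plausibly be done with grub2's hexdump module and its (mem) device - a sketch with a made-up address rather than the exact commands used here:

insmod hexdump
# say panicstr is at ffffffff80e2f0a0 in netbsd.map; lower 28 bits -> 0x0e2f0a0
hexdump -s 0x0e2f0a0 -n 8 (mem)
# those 8 bytes are a little-endian pointer to the string;
# take its lower 28 bits and dump the text
hexdump -s 0x<lower-28-bits-of-that-pointer> -n 128 (mem)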
Congrats @cmeerw! Thank you so much for posting!
I hope everyone gets the servers they want!
Goes off to break something..
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
Free Hosting at YetiNode | Cryptid Security | URL Shortener | LaunchVPS | ExtraVM | Host-C | In the Node, or Out of the Loop?
The "obvious fix" of just adding:
seems to work and I have now managed to install NetBSD 10.0 with such a modified kernel.
But it looks like something is not being reported correctly (by the CPU/virtualised CPU) here.
Excellent discovery @cmeerw !
Looking at the kernel code at https://cdn.netbsd.org/pub/NetBSD/NetBSD-current/src/sys/arch/x86/x86/cpu_topology.c, lp_max is the maximum number of logical processors it detects and core_max the maximum number of cores per package. If the former is smaller than the latter, the KASSERT triggers a kernel panic.
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 2 package(s) x 1 core(s)
FreeBSD detects 2 packages with 1 core each, which should be correct, but NetBSD seems to detect it the other way around.
Running cpuid in a Linux VM on that machine I get:
$ cpuid -1l1 | fgrep hyper-threading
hyper-threading / multi-core supported = false
So lp_max = 1
$ cpuid -1l4 | fgrep "in pkg"
maximum IDs for cores in pkg = 0xf (15)
and core_max = 16
Somehow, after a few hiccups, I got NetBSD to boot and install, via Linux (Debian 11). This was on an idle Virmach VPS with plenty of resources - now to try the same on the Linveo one..
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
Congrats on getting NetBSD up and running!
Wanna give us a hint, please, about the method you used to install NetBSD?
Maybe a link to a how to?
Maybe a sentence or two about the hiccups and what fixes you applied?
Thanks so much! Best wishes!
I hope everyone gets the servers they want!
I'm kinda pee'd off 'cos, after numerous failed attempts, I didn't note down the exact steps that finally got this to work on the Virmach VPS. So now I'm struggling to recreate it on the Linveo one - primarily because I made changes directly in the grub boot menu, editing the default (Debian) entry and then hitting F10.
It went along the lines of the following, though here showing an addition to /etc/default/grub..

The UUID was taken from the existing Debian entries. The ramdisk.fs is meant to be a RAM-based installation image - though I wish I'd paid more attention to the exact one that worked.
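It was presumably a custom menuentry roughly like this (reconstructed from the working entry further down; the UUID and kernel path are placeholders):

menuentry 'NetBSD Install' {
  insmod gzio
  search --no-floppy --fs-uuid --set=root <uuid-from-debian-entry>
  knetbsd /netbsd-INSTALL.gz
}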
My browser tabs include:
https://www.gnu.org/software/grub/manual/grub/grub.html#NetBSD
https://mirrors.mit.edu/NetBSD/NetBSD-10.0/amd64/installation/ramdisk/
http://nycdn.netbsd.org/pub/NetBSD-daily/HEAD/202409211400Z/amd64/binary/kernel/
I'll see if I can glean some clues from the Virmach instance, e.g. the boot log.
It was a simple process, at least on the Virmach VPS, and didn't need any fakeroot or similar. The network didn't autoselect at first, but I remained in the installation TUI and (re-)chose network setup, which did pick up the virtual NIC and subsequent DHCP.
Gawd, the benefit of hindsight.
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
@AlwaysSkint
Thanks for the correction!
Thanks for the install info! More interesting stuff for me to try!
I hope everyone gets the servers they want!
Haven't found much more, though in /var/log/messages:
Whereas post install/reboot:
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
..Much later. I created a new Debian 11 VM on my Proxmox server and booted it up.
From 'history', the file is /etc/grub.d/40_custom:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry 'NetBSD' {
  load_video
  insmod gzio
  insmod part_gpt
  insmod part_bsd
  insmod xfs
  insmod zfs
  search --no-floppy --fs-uuid --set=root 2eb84507-bd74-432f-8bb3-c695f9xxxxxxx
  echo 'Loading NetBSD ...'
  knetbsd /netbsd-INSTALL.gz
}
As before, the UUID was copied from the default /boot/grub/grub.cfg entry.
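To activate the entry, the usual Debian steps apply:

update-grub   # regenerates /boot/grub/grub.cfg, picking up 40_custom
reboot        # then choose 'NetBSD' in the grub menu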
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
Goes off to try the above with VirtFusion (again)..
.. retried with Linveo and the installation kernel doesn't startup (looping back to grub). So it looks as though there's a fundamental difference (perhaps a setting like pass-through) in Virtfusion, compared to both SolusVM and Proxmox hypervisors.
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
@AlwaysSkint said: .. retried with Linveo and the installation kernel doesn't startup (looping back to grub).

That's what I have been talking about here all the time: https://lowendspirit.com/discussion/comment/186288/#Comment_186288 and https://lowendspirit.com/discussion/comment/186301/#Comment_186301
You'll need to compile the kernel with the change mentioned and use that kernel during installation.
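For anyone who hasn't built a NetBSD kernel before, the standard build.sh route looks roughly like this (stock tool and config names, nothing specific to this fix; the kernel typically lands in sys/arch/amd64/compile/obj/GENERIC/netbsd):

cd /usr/src
./build.sh -U -m amd64 -j4 tools
./build.sh -U -m amd64 -j4 kernel=GENERIC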
@cmeerw said: That's what I have been talking about here all the time.

Apologies, but it was a little bit too cryptic for my advancing years. Now that I re-read your posts, I can see how it fits in. I know it has taken me a while to reach the same point, though in my defense it was from a position of no experience with BSD. Thanks for your efforts, of course, and I'll perhaps give your fix a try - setting up for a custom kernel takes quite a bit of time/effort. It may be interesting/useful if @linveo divulged the VF settings used when defining a VM, as a fix at source strikes me as the way to go.
Has anyone got a VM that uses Virtfusion, at a different provider?
[Edit:typos - Grr, laptop keyboard]
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
BTW, as far as I understand the code, it also only affects Intel CPUs (AMD should be fine).
Ideally, NetBSD should be able to handle this case more gracefully, so I have brought it up on their tech-kern mailing list yesterday.
Makes sense that the two working VMs I have are both on (different) AMD CPUs.
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
Can your kernel be made available?
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
Yes: https://edge.cmeerw.net/netbsd-10.0-vm.gz
BTW, a relatively easy way to then install NetBSD is to put that (gunzipped) kernel file into a (relatively small) separate partition on the VM (I'm not sure which filesystems the NetBSD loader supports - I put it into an FFS partition). You can then use the installer image https://cdn.netbsd.org/pub/NetBSD/NetBSD-10.0/amd64/installation/cdrom/boot.iso and, in the NetBSD loader, boot the patched kernel instead (but then select the installer CD as the root device). Once the installation is complete, you need to tell the NetBSD loader to use the patched kernel again. Then just copy the patched kernel to /netbsd (and you can re-use the separate kernel partition as swap).
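The loader step would look something like this (hd0b is a placeholder for wherever the kernel partition ends up; syntax per boot(8), so double-check on your system):

> boot hd0b:netbsd

Then select the CD (cd0) as the root device in sysinst. After the first boot into the installed system, put the patched kernel in place as the default:

cp /mnt/netbsd /netbsd   # assuming the kernel partition is mounted at /mnt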
Thanks @cmeerw!
Link for the curious:
https://mail-index.netbsd.org/tech-kern/2024/09/22/msg029737.html
I hope everyone gets the servers they want!
Crikey! Glad that you didn't tell us a more difficult way!
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
It seems like there might be more success with AMD CPUs? If anyone wants to change their VM type from Intel to AMD let me know and I can switch it for you. I appreciate all of the feedback I've received so far.
linveo.com | Shared Hosting | KVM VPS | Dedicated Servers
I'll go for it; makes no odds to me. :-)
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
@cmeerw said: BTW, as far as I understand the code, it also only affects Intel CPUs (AMD should be fine).

@linveo said: If anyone wants to change their VM type from Intel to AMD let me know and I can switch it for you.

@linveo Thanks for offering migration! Assuming that the issue at hand affects any method of NetBSD install under VirtFusion (because VirtFusion, or QEMU under VirtFusion, apparently doesn't report, or doesn't correctly report, data that the NetBSD kernel needs), I am also going to say yes to migrating to AMD.

Please simply wipe my existing VM without further notice and send me login info for the AMD VM. Or whatever else works for you. Also, no rush.

If Phoenix is okay for AMD, then I'd love to stay in Phoenix, which offers the lowest latency for me. But any other location is equally great!
Thanks again! Kindest regards!
I hope everyone gets the servers they want!
^ I've powered down and am willing to start from scratch too. Given my location is across The Pond, anywhere will do (though CA is never good, due to the prominence of Asian traffic).
I would've too, had I found a replacement job post-redundancy. Heat City, Arizona!
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
There are AMD nodes in all the DCs, so I have migrated your VMs over to them as they were, with the same IPs. Hopefully this ends up being more fruitful.
linveo.com | Shared Hosting | KVM VPS | Dedicated Servers