Hetzner Dedicated Server + Debian -- How Can VMs Use All of the IPv4 Subnet IPs?

Not_Oles Hosting Provider, Content Writer

Hello LES!

Hetzner seems to have an interesting policy that outgoing packets from virtual machines ("VMs") using additional subnet IPs on dedicated servers must come from the known-to-Hetzner hardware MAC address of the server.

Thus, Hetzner wants VMs using subnet IPv4 addresses to use the server node's main IPv4 as the VM's gateway. If I understand correctly, the usual way of using the node's main IPv4 as a VM gateway means that the VMs cannot use all of the IP addresses in the node's IPv4 subnet: bridging the subnet conventionally reserves the network, gateway, and broadcast addresses, so only 13 of a /28's 16 addresses would remain usable for VMs.

However, could using layer 2 networking for IPv4 on the server node allow use of all of the additional IPv4 subnet addresses by VMs while still meeting Hetzner's requirements?

This enticing possibility is suggested at https://www.sysorchestra.com/hetzner-root-server-with-kvm-ipv4-and-ipv6-networking/ .

The subnet here is a /28 with 16 IP addresses. Would it be completely crazy to imagine that an /etc/network/interfaces configuration somewhat like the following might work to utilize all 16 of the IPv4 subnet IPs?

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto enp7s0
iface enp7s0 inet static
  address 198.18.1.0  
  netmask 255.255.255.255
  gateway 198.18.0.1
  pointopoint 198.18.0.1
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 0 > /proc/sys/net/ipv4/conf/enp7s0/send_redirects

iface enp7s0 inet6 static
  address 2001:0002::2
  netmask 128
  gateway fe80::1
    post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding

auto vmbr0
iface vmbr0 inet static
  address 198.18.1.0 
  netmask 255.255.255.255
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0
    post-up ip route add 198.18.10.0/32 dev vmbr0
    pre-down ip route del 198.18.10.0/32 dev vmbr0
    post-up ip route add 198.18.10.1/32 dev vmbr0
    pre-down ip route del 198.18.10.1/32 dev vmbr0
      [ . . . ]
    post-up ip route add 198.18.10.15/32 dev vmbr0
    pre-down ip route del 198.18.10.15/32 dev vmbr0

iface vmbr0 inet6 static
  address 2001:0002::2
  netmask 64


When the server is booted using the above /etc/network/interfaces configuration, and with no VMs running, I seem to see something like the following. Is this as expected?


not-oles@server:~$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether DE:AD:BE:EF:DE:AD brd ff:ff:ff:ff:ff:ff
3: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether DE:AD:BE:EF:BE:AD brd ff:ff:ff:ff:ff:ff
not-oles@server:~$ ip route show
default via 198.18.0.1 dev enp7s0 onlink
198.18.10.0 dev vmbr0 scope link linkdown
198.18.10.1 dev vmbr0 scope link linkdown
[ . . . ]
198.18.10.15 dev vmbr0 scope link linkdown
not-oles@server:~$ ip -6 link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether DE:AD:BE:EF:DE:AD brd ff:ff:ff:ff:ff:ff
3: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether DE:AD:BE:EF:BE:AD brd ff:ff:ff:ff:ff:ff
not-oles@server:~$ ip -6 route show
::1 dev lo proto kernel metric 256 pref medium
2001:0002::2 dev enp7s0 proto kernel metric 256 pref medium
2001:0002::/64 dev vmbr0 proto kernel metric 256 linkdown pref medium
fe80::/64 dev enp7s0 proto kernel metric 256 pref medium
default via fe80::1 dev enp7s0 metric 1024 onlink pref medium
not-oles@server:~$

Thanks very much for reading and for any help! 💛 Best wishes from Mexico's Sonoran Desert! 🚡

I hope everyone gets the servers they want!

Thanked by (2): chocolateshirt, pr0lz

Comments

• Falzo Senpai
    edited August 2021

Yes. You need to add the (subnet) IPs in a way that makes them 'routed' instead of 'bridged'.
Hetzner has its own knowledge base article on that: https://community.hetzner.com/tutorials/install-and-configure-proxmox_ve
The important part is that the main IP is set up in a pointopoint/host-route config and that IP forwarding is enabled (this can be done in sysctl instead of network/interfaces):
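For example, a minimal sysctl drop-in could look like this (the file name is just an example):

    # /etc/sysctl.d/99-forwarding.conf -- enable forwarding for the routed setup
    net.ipv4.ip_forward=1
    net.ipv6.conf.all.forwarding=1

Apply it with sysctl --system.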

Apart from adding them all to a single interface like you did, you could instead have multiple (routed) bridges where you add only the IPs needed, which might help to avoid IP spoofing (see the sketch below).
This gets a bit more complex if you add IPv6 to the mix, as you need to define all the bridges independently for that as well...
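For example, a second routed bridge carrying a single guest IP might look like this (a sketch following the config above; vmbr1 and the guest IP are examples):

    auto vmbr1
    iface vmbr1 inet static
      address 198.18.1.0
      netmask 255.255.255.255
      bridge_ports none
      bridge_stp off
      bridge_fd 0
      bridge_maxwait 0
      post-up ip route add 198.18.10.5/32 dev vmbr1
      pre-down ip route del 198.18.10.5/32 dev vmbr1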

Thanked by (1): Not_Oles
  • Yeah, I've done a similar config using a dedi that just had a couple of additional IPs and not an additional subnet.

Route the IPs on the host, then use a pointopoint setup in the guest. Works fine; it's just not normally possible to set it up on the guest as part of the installation, so it has to be done afterwards (e.g., with a Debian netinst).

Thanked by (2): Falzo, Not_Oles
• @Mr_Tom said: Works fine; it's just not normally possible to set it up on the guest as part of the installation, so it has to be done afterwards (e.g., with a Debian netinst)

Indeed. If you want the installer to be able to set up sources and update packages, you can use a shell to issue the ip addr add and ip route add commands and have your connection working during the install. Also... IPv6 ;-)
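For example, from a shell in the Debian installer, something like this (a sketch using the addresses from this thread; the interface name will vary):

    ip addr add 198.18.10.14/32 dev ens3
    ip route add 198.18.1.0/32 dev ens3          # host route to the node's gateway IP
    ip route add default via 198.18.1.0 dev ens3 # default route via the node

And the IPv6 equivalents with ip -6 addr add and ip -6 route add.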

Thanked by (1): Not_Oles
• SagnikS Hosting Provider, OG

I use a similar setup and use DHCP to hand out the IPs and gateway. When installing from ISO, the Debian installer just refuses to work; it doesn't like the gateway being outside the subnet. Ubuntu's new installer, Anaconda, and Windows work just fine. Ubuntu's old installer refused the DHCP offer.

Thanked by (1): Not_Oles
• Not_Oles Hosting Provider, Content Writer

    Update: So far I haven't been able to get the above configuration to work with VMs.

Here is yet another blog post, which says:

    "This aims to be the definitive guide on how to accomplish the aforementioned task. When ready the setup includes the following features:

    [ . . . ]

    "Every IPv4 address of a separately delegated subnet usable for virtual machines."

    Friendly greetings! :)

    I hope everyone gets the servers they want!

• Not_Oles Hosting Provider, Content Writer
    edited August 2021

Thanks to the comments above, and also to wonderful help from @PenguinGenius, the networking on the Debian MetalVPS node at Hetzner can now be configured in a way that enables VMs to use all of the IPv4 addresses in the additional subnet.

    Starting from a Hetzner installimage Debian install, followed by an upgrade to sid, here is my attempted summary of what changes seemed to get connectivity working inside qemu VMs:

    On the Node

• One line, pre-up brctl addbr vmbr0, was added to the node's /etc/network/interfaces as originally posted above, shown here in context:

    bridge_maxwait 0
    pre-up brctl addbr vmbr0
    post-up ip route add 198.18.10.0/32 dev vmbr0
• One line changed in /etc/sysctl.d/99-hetzner.conf

    The line net.ipv4.ip_forward=1 was shipped commented out with a hash mark (#). Removing the hash mark enables IPv4 forwarding:

    net.ipv4.ip_forward=1

    • /etc/qemu-ifup couldn't find the bridge

    When VMs were started, /etc/qemu-ifup gave a warning:

    W: /etc/qemu-ifup: no bridge for guest interface found

Fixed by adding, as lines 27 and 28:


    # vmbr0 doesn't have default route set on it, so script doesn't find bridge to add in qemu vm :)
    switch=vmbr0
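For background, the stock Debian /etc/qemu-ifup picks the bridge from whichever interface holds the default route, roughly like this (paraphrased, not the verbatim script):

    # find the interface carrying the default route and assume it is the bridge
    switch=$(ip route ls | awk '/^default / { for (i = 1; i <= NF; i++) if ($i == "dev") { print $(i+1); exit } }')

Here the default route points at enp7s0, which is not a bridge, hence the warning; hard-coding switch=vmbr0 skips that detection.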

    Command to launch VMs

    # qemu-system-x86_64 \
    -cpu host -smp 8,sockets=1,cores=8,maxcpus=8 \
    -enable-kvm \
    -m 8192 \
    -drive file=debian-10.10.qcow2 \
    -cdrom debian-10.10.0-amd64-netinst.iso \
    -netdev tap,id=mynet0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
    -device e1000,netdev=mynet0,mac=DE:AD:00:BE:EE:FF \
    -vnc 127.0.0.1:1 \
    -boot d
    

    I had been trying this command without the ifname=tap0 parameter, which I did not see in the start command shown in the tutorial at https://www.linux-kvm.org/page/Networking#Public_Bridge .

    Secure VNC access to the VM is available via an ssh tunnel:

ssh <user>@<node> -L 5901:localhost:5901

    Then ask a local VNC viewer to connect to localhost:5901.

    To relaunch the VM after installation, remove the -cdrom and -boot d lines.
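That is, something like:

    # qemu-system-x86_64 \
    -cpu host -smp 8,sockets=1,cores=8,maxcpus=8 \
    -enable-kvm \
    -m 8192 \
    -drive file=debian-10.10.qcow2 \
    -netdev tap,id=mynet0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
    -device e1000,netdev=mynet0,mac=DE:AD:00:BE:EE:FF \
    -vnc 127.0.0.1:1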

    Configuration inside the VM

    /etc/network/interfaces:


source /etc/network/interfaces.d/*

    auto lo
    iface lo inet loopback

    auto ens3
    iface ens3 inset static
      address 198.18.10.14
      netmask 255.255.255.255
      gateway 198.18.1.0
      pointopoint 198.18.1.0

    iface ens3 inet6 static
      address 2001:0002::14
      netmask 64
      gateway 2001:0002::2

    /etc/resolv.conf:


    nameserver 1.1.1.1
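Once the VM is up, connectivity can be sanity-checked from inside it (a sketch; the expected output assumes the config above):

    ip route show          # expect a default route via 198.18.1.0 on ens3
    ping -c 3 198.18.1.0   # the node
    ping -c 3 1.1.1.1      # the wider internet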


Very friendly greetings from New York City and Sonora, Mexico! 🗽🇺🇸🇲🇽🏜️

    I hope everyone gets the servers they want!

• @Not_Oles said:
    Thanks to the comments above, and also to wonderful help from @PenguinGenius, the networking on the Debian MetalVPS node at Hetzner can now be configured in a way that enables VMs to use all of the IPv4 addresses in the additional subnet.

    [ . . . ]

    Thank you so much for your efforts. When I get around to try this, I'll give you an update.

Thanked by (1): Not_Oles
• And may I ask whether this configuration survives a reboot?

Thanked by (1): Not_Oles
• Not_Oles Hosting Provider, Content Writer

    @Not_Oles said: iface ens3 inset static

    Should be: iface ens3 inet static :)

    I hope everyone gets the servers they want!

• Not_Oles Hosting Provider, Content Writer

    @quangthang said:
    And may I ask if this configuration survives after reboot or not?

    The internal configuration of the VM survives reboot of the VM and reboot of the node.

    Reboot of the node stops the VM temporarily.

Following reboot of the node, the VM restarts automatically if the node is configured to restart it as part of the boot process (see the sketch below). Alternatively, the VM can be restarted manually.
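For example, one way to configure the node to restart the VM at boot would be a small systemd unit wrapping the launch command from my earlier post (the unit name and image path here are hypothetical):

    # /etc/systemd/system/metalvps-vm.service (hypothetical)
    [Unit]
    Description=Debian qemu VM
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/bin/qemu-system-x86_64 -cpu host -smp 8,sockets=1,cores=8,maxcpus=8 -enable-kvm -m 8192 -drive file=/root/debian-10.10.qcow2 -netdev tap,id=mynet0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown -device e1000,netdev=mynet0,mac=DE:AD:00:BE:EE:FF -vnc 127.0.0.1:1
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl enable metalvps-vm.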

Thanks for your question! ¡Saludos! :)

Thanked by (1): quangthang

    I hope everyone gets the servers they want!

• Not_Oles Hosting Provider, Content Writer

The discussion above doesn't cover the apt repository configuration inside the VM, which has to be done by hand following a VM install from the netinst ISO, at a point when the VM does not yet have network connectivity. Apt repository configuration is covered in this recent blog post.
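For reference, a minimal /etc/apt/sources.list for the Debian 10 guest might look something like this (a sketch, not taken from the blog post):

    deb http://deb.debian.org/debian buster main
    deb http://security.debian.org/debian-security buster/updates main
    deb http://deb.debian.org/debian buster-updates main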


    The Debian KVM wiki page says "It is possible to install only QEMU and KVM for a very minimal setup, but most users will also want libvirt for convenient configuration and management of the virtual machines. . . ." This morning, now that the "very minimal setup" has been accomplished, the question on my mind is what to do next.

The Debian KVM wiki page mentions that libvirt provides the libvirt group, which permits management of virtual machines by a regular user. Alternatively, regular users could have the VM launch command configured via sudo (a hypothetical sketch follows). But libvirt might be a good choice for what to do next.
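For instance, a hypothetical sudoers drop-in letting members of a vm group run the launcher (the group name and file are made up for illustration):

    # /etc/sudoers.d/vm-launch (hypothetical)
    %vm ALL=(root) NOPASSWD: /usr/bin/qemu-system-x86_64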

Another idea for what to do next might be to look at easier installs than the netinst ISO. Debian has ["nocloud" daily images](https://cloud.debian.org/images/cloud/sid/daily/latest/debian-sid-nocloud-amd64-daily.qcow2) which might be great.
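A quick start with such an image might look like this (a sketch; if I understand the nocloud flavor correctly, it allows root console login without a password, so no installer pass is needed):

    wget https://cloud.debian.org/images/cloud/sid/daily/latest/debian-sid-nocloud-amd64-daily.qcow2
    # qemu-system-x86_64 \
    -cpu host -smp 8,sockets=1,cores=8,maxcpus=8 \
    -enable-kvm \
    -m 8192 \
    -drive file=debian-sid-nocloud-amd64-daily.qcow2 \
    -netdev tap,id=mynet0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
    -device e1000,netdev=mynet0,mac=DE:AD:00:BE:EE:FF \
    -vnc 127.0.0.1:1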

Does anyone have an additional or an alternative suggestion for what to do next? 🔜 Please let me know! :)

Thanks and friendly greetings from the Sonoran desert, where there are now hints that the hottest summer sun might be beginning to recede! 🔥☀️🔥

    I hope everyone gets the servers they want!

• This is a good networking tutorial; I can try it on my Proxmox too. Thank you @Not_Oles

Thanked by (1): Not_Oles

⭕ A simple uptime dashboard using UptimeRobot API https://upy.duo.ovh
⭕ Currently using VPS from BuyVM, GreenCloudVPS, Gullo's, Hetzner, HostHatch, InceptionHosting, LetBox, MaxKVM, MrVM, VirMach.

• Not_Oles Hosting Provider, Content Writer

    @chocolateshirt said:
This is a good networking tutorial; I can try it on my Proxmox too. Thank you @Not_Oles

    Thanks for your kind words! :)

    When / if you do try it, please let us know how it works for you. If I can help, please feel free to ask. Thanks very much!

    Friendly greetings! :)

Thanked by (1): chocolateshirt

    I hope everyone gets the servers they want!
