May I Borrow your IPv4?

This article was originally published on the yoursunny.com blog: https://yoursunny.com/t/2023/borrow-ipv4/

With the rise of IPv4 costs, IPv6-only servers are becoming more popular in the low end hosting world.
On one occasion, I acquired an IPv6-only server, but wanted to have IPv4 on it.
Somewhere else, I have a dual-stack IPv4+IPv6 server, idling away.
Can I "borrow" the IPv4 of that dual-stack server, to use on the IPv6-only server?
I tried several methods, and found a way to make this work.

Situation / Assumption / Requirement

In this case, both servers have KVM virtualization and are running the Debian 12 operating system.
Server A is a dual-stack server, with IPv4 address 192.0.2.158 and IPv6 address 2001:db8:aefc::2.
Server B is an IPv6-only server, with IPv6 address 2001:db8:eec0::2.
At server A, both IPv4 and IPv6 services are delivered on the same network interface.

My goal is to somehow "move" the IPv4 address from server A to server B.
In particular, I want all IPv4 traffic destined to 192.0.2.158 to reach server B, and I want server B to be able to send any IPv4 traffic with the 192.0.2.158 source address.
This shall include all TCP and UDP ports, as well as other IPv4 traffic such as ICMP ping.
A "port forwarding" solution would be insufficient, as it cannot deliver non-TCP/UDP traffic.

Tunnel + Ethernet Bridge (does not work)

The first idea that came to my mind was creating an Ethernet bridge:

# server A
ip link add br4 type bridge
ip addr flush dev eth0
ip link set eth0 master br4
ip addr add 2001:db8:aefc::2/64 dev br4
ip route add default via 2001:db8:aefc::1
ip link add tnl4 type ip6gre remote 2001:db8:eec0::2 local 2001:db8:aefc::2 hoplimit 64
ip link set tnl4 up master br4

# server B
ip link add tnl4 type ip6gre remote 2001:db8:aefc::2 local 2001:db8:eec0::2 hoplimit 64
ip link set tnl4 up
ip addr add 192.0.2.158/24 dev tnl4
ip route add default via 192.0.2.1

These commands would achieve the following:

  1. Establish an ip6gre tunnel between server A and server B.
  2. Add the 192.0.2.158 IPv4 address on server B's tunnel interface.
  3. Create an Ethernet bridge on server A that includes both the eth0 "real" network interface and the tunnel interface.

Theoretically, incoming ARP requests for the 192.0.2.158 IPv4 address would travel over the ip6gre tunnel to server B, server B would respond to those ARP requests, and then IPv4 traffic could flow normally.
However, this method does not work, because:

  • Most hosting providers have MAC address filtering in place.
    The hypervisor does not allow server A to send packets with a source MAC address that differs from the assigned MAC address of its eth0 "real" network interface.

  • When server B responds to the ARP requests for the 192.0.2.158 IPv4 address, the source MAC address would be the MAC address of server B's ip6gre tunnel interface.
    These packets would be dropped by the hypervisor of server A's hosting provider.

I tried changing the MAC address of server B's ip6gre tunnel interface to be the same as server A's eth0, but this does not work either.
Effectively, there are then two network interfaces with the same MAC address in the br4 bridge, which confuses the Linux bridge driver.
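
One way to observe this, assuming the br4 setup above, is to inspect the bridge ports and forwarding database with the iproute2 bridge tool (a diagnostic sketch):

# server A: list bridge ports and the MAC addresses learned on them
bridge link show
bridge fdb show br br4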

Tunnel + IPv4 Routing (it works)

The second idea is configuring IPv4 routing, but without keeping any IPv4 address on server A.
According to the ip-route(8) manpage, the next hop of a route can specify a device name without an address:

NH := [ encap ENCAP ] [ via [ FAMILY ] ADDRESS ] [ dev STRING ] [ weight NUMBER ] NHFLAGS

Therefore, I could make a tunnel and route the IPv4 address over the tunnel interface:

# server A
echo 1 >/proc/sys/net/ipv4/ip_forward
ip addr del 192.0.2.158/24 dev eth0
ip link add tnl4 type ip6gre remote 2001:db8:eec0::2 local 2001:db8:aefc::2 hoplimit 64
ip link set tnl4 up
ip route add default via 192.0.2.1 dev eth0 onlink
ip route add 192.0.2.158/32 dev tnl4
echo 1 >/proc/sys/net/ipv4/conf/all/proxy_arp

# server B
ip link add tnl4 type ip6gre remote 2001:db8:aefc::2 local 2001:db8:eec0::2 hoplimit 64
ip link set tnl4 up
ip addr add 192.0.2.158/32 dev tnl4
ip route add default dev tnl4

These commands would achieve the following:

  1. Establish an ip6gre tunnel between server A and server B.
  2. Delete the 192.0.2.158 IPv4 address from server A, and add it on server B's tunnel interface.
  3. Make server A forward IPv4 traffic with destination address 192.0.2.158 over the tunnel interface, and route all other IPv4 traffic over the eth0 "real" network interface.
  4. Make server A respond to ARP requests for the 192.0.2.158 IPv4 address.
  5. Make server B send all IPv4 traffic over the tunnel interface.

This method worked.
Traceroute and tcpdump suggest that the traffic is indeed being passed to server B.
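
As a diagnostic sketch (interface names as configured above), one can watch both ends of the tunnel while generating IPv4 traffic:

# server A: GRE packets carried over IPv6 toward server B
tcpdump -ni eth0 ip6 proto 47
# server B: decapsulated IPv4 traffic on the tunnel interface
tcpdump -ni tnl4 ip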

Traceroute Reports

I tested this procedure between a Crunchbits server ("server A") in Spokane, WA and a Limitless Hosting server ("server B") in Chicago, IL, with a 50 ms RTT between them over IPv6.

Before tunnel setup, from outside to our IPv4 address:

sunny@clientC:~$ mtr -wnz -c4 192.0.2.158
Start: 2023-09-09T22:19:32+0000
HOST: vps4                   Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. AS???    10.0.0.73       0.0%     4    9.3  13.0   9.3  18.7   4.0
  2. AS201106 172.83.154.5    0.0%     4    1.3   2.2   1.0   4.5   1.6
  3. AS???    ???            100.0     4    0.0   0.0   0.0   0.0   0.0
  4. AS400304 23.147.152.15   0.0%     4   24.3  55.7  24.3 130.8  50.2
  5. AS400304 192.0.2.158     0.0%     4   13.7  13.7  13.6  13.7   0.1

After tunnel setup, from outside to our IPv4 address:

sunny@clientC:~$ mtr -wnz -c4 192.0.2.158
Start: 2023-09-09T22:22:38+0000
HOST: vps4                   Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. AS???    10.0.0.73       0.0%     4    4.8  12.8   2.5  36.4  15.9
  2. AS201106 172.83.154.5    0.0%     4    0.3   0.3   0.2   0.3   0.0
  3. AS???    ???            100.0     4    0.0   0.0   0.0   0.0   0.0
  4. AS400304 23.147.152.15   0.0%     4   19.5  31.7  17.4  67.7  24.1
  5. AS???    ???            100.0     4    0.0   0.0   0.0   0.0   0.0
  6. AS400304 192.0.2.158     0.0%     4   63.0  63.1  63.0  63.2   0.1

Notice that the RTT of the last hop increased by 50 ms, which corresponds to the IPv6 RTT between server A and server B.
There is also a mysterious "hop 5" whose IPv4 address we cannot see, because server A no longer possesses a globally reachable IPv4 address.

Before tunnel setup, from server A to an outside IPv4 destination:

sunny@serverA:~$ mtr -wnz -c4 1.1.1.1
Start: 2023-09-09T22:25:33+0000
HOST: vps6                   Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. AS400304 104.36.84.1     0.0%     4   18.5  49.1  13.2 151.5  68.3
  2. AS400304 23.147.152.14   0.0%     4    0.4   0.7   0.4   1.3   0.4
  3. AS???    206.81.81.10    0.0%     4   12.0  12.5  12.0  12.9   0.4
  4. AS13335  172.71.144.3    0.0%     4   12.2  13.2  11.8  16.0   1.9
  5. AS13335  1.1.1.1         0.0%     4   11.7  11.7  11.7  11.8   0.1

After tunnel setup, from server B to an outside IPv4 destination:

sunny@serverB:~$ mtr -wnz -c4 1.1.1.1
Start: 2023-09-09T18:23:43-0400
HOST: box4                   Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. AS???    192.0.0.8       0.0%     4   49.8  51.1  49.6  55.1   2.7
  2. AS400304 104.36.84.1     0.0%     4   72.7  67.9  59.6  73.7   6.6
  3. AS400304 23.147.152.14   0.0%     4   49.9  50.1  49.9  50.6   0.3
  4. AS???    206.81.81.10    0.0%     4   61.6  63.4  61.6  68.3   3.3
  5. AS13335  172.71.144.3    0.0%     4   61.5  61.9  61.5  62.4   0.4
  6. AS13335  1.1.1.1         0.0%     4   61.3  61.2  61.1  61.4   0.1

Likewise, we can see that the RTT of the last hop increased by 50 ms, which corresponds to the IPv6 RTT between server A and server B.
There is an additional "hop 1" with an IPv4 address that is technically globally routable but does not belong to us.

Speedtest Reports

I also tested TCP and UDP speeds over the same environment.

Before tunnel setup, from server A to a private iperf3 server over IPv4:

  • TCP upload: 290 Mbps
  • TCP download: 362 Mbps
  • UDP upload: 783 Mbps
  • UDP download: 686 Mbps

After tunnel setup, from server B to a private iperf3 server over IPv4:

  • TCP upload: 46.6 Mbps
  • TCP download: 30.9 Mbps
  • UDP upload: 407 Mbps
  • UDP download: 304 Mbps

From server B to server A, over IPv6:

  • TCP upload: 375 Mbps
  • TCP download: 263 Mbps
  • UDP upload: 317 Mbps
  • UDP download: 590 Mbps

We can see that there is a considerable speed reduction for TCP traffic.
There are three possible reasons:

  • Both the plain IPv4 flow and the tunneled IPv4 flow compete for bandwidth at server A.
  • Server A must fragment packets that exceed the tunnel MTU, because the tunnel interface has a smaller MTU than the eth0 "real" interface.
  • TCP offload features such as Large Receive Offload (LRO) and TCP Segmentation Offload (TSO), provided by the virtio network interface driver, are ineffective on the tunnel interface of both servers, so the kernel and the iperf3 application must handle more packets. The commands after this list show how to check the MTU and offload settings.
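
A quick way to check both, assuming the interface names used above, is with ip and ethtool:

# compare the MTU of the real interface and the tunnel interface
ip link show eth0
ip link show tnl4
# list offload features (LRO, TSO, GRO, ...) on the tunnel interface
ethtool -k tnl4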

Conclusion

This article explores a method of moving an IPv4 address from a dual-stack KVM server to an IPv6-only KVM server.
A successful way to achieve this goal is establishing an ip6gre tunnel and configuring IPv4 routing.
The article then compares traceroute and iperf3 results before and after the tunnel setup.


Comments

  • Interesting read.

    @yoursunny said: Can I "borrow" the IPv4 of that dual-stack server, to use on the IPv6-only server?

    What do you think about using dual-stack server as a proxy, such that in IPv6-only server we establish a tun proxy client that exposes the IPv4 of the other server?

  • wow

    this article should be a paid article.

    i would love to test it and will do because it's cool and good to know

    @Mason any chance @yoursunny can get $40 for this article ?

  • @yusra said:
    What do you think about using dual-stack server as a proxy, such that in IPv6-only server we establish a tun proxy client that exposes the IPv4 of the other server?

    TUN is a kernel feature that creates an L3 network interface that is connected to an application.
    The application can surely act as a VPN client and "expose the IPv4".
    This is how most userspace L3 VPN clients, including BoringTUN, work.

    I chose the GRE tunnel because I don't need encryption, so that overhead can be eliminated.
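
    For illustration only, a TUN interface can be created ahead of time with iproute2 and handed to an unprivileged application (a minimal sketch; the tun0 name and the user are hypothetical):

    # create a persistent TUN interface owned by an unprivileged user
    ip tuntap add dev tun0 mode tun user sunny
    ip link set tun0 up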


  • @ehab said:
    this article should be a paid article.
    any chance @yoursunny can get $40 for this article ?

    This ain't an LES exclusive article.
    It's a repost from yoursunny.com blog.

    I can't sell articles because it causes tax complications.
    If @ehab likes my articles, go buy my service transfers: A B.


  • c1vhosting Hosting Provider

    It's an easy method.

    By the way, we are considering whether to provide NAT64 VPS. What do you think?


  • linveo Hosting Provider OG

    This is a great write-up and was easy to understand. Networking is fascinating to me, even though it is not one of my strong skills. Would it be possible to route multiple IPs to the second server, such as a /29 block? Would it be possible to have tunnels to multiple servers with different IPs?


  • @linveo said:
    Would it be possible to route multiple IPs to the second server, such as a /29 block?

    It's certainly possible.
    Just change the routed prefix.
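
    For example, a sketch of routing a /29 over the same tunnel (192.0.2.160/29 is a made-up placeholder for a block routed to server A):

    # server A: forward the whole /29 toward server B over the tunnel
    ip route add 192.0.2.160/29 dev tnl4

    # server B: use addresses from the block on the tunnel interface
    ip addr add 192.0.2.161/29 dev tnl4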

    Would it be possible to have tunnels to multiple servers with different IPs?

    It's certainly possible.
    Just repeat the procedure to create multiple tunnel interfaces and routes.

    The unique feature of this article is how to move away the single IPv4 address on server A.
    I think most people who tunnel IPv4 would still leave one IPv4 address on server A.



  • Maybe some form of IPv6 VPN would be a solution too. I never tried it though.


  • @yoursunny great article

    In an older article (https://yoursunny.com/t/2020/EUserv-IPv4/) you used vxlan to do the tunneling.

    Would you suggest ip6gre over vxlan for such cases?

  • Good stuff 👍


  • @yoursunny said:

    I can buy you coffee; those VMs are of no use to me.

  • IIRC, Tailscale can do this by default:

    https://tailscale.com/kb/1103/exit-nodes



  • @itsdeadjim said:
    In an older article (https://yoursunny.com/t/2020/EUserv-IPv4/) you used vxlan to do the tunneling.

    Would you suggest ip6gre over vxlan for such cases?

    The benefit of IP6GRE is lower header overhead.

    • IP6GRE header structure: outer IPv6 - GRE - inner IPv4.
      If outer MTU is 1500 octets, inner MTU would be 1456 octets.

    • VXLAN header structure: outer IPv6 - outer UDP - VXLAN - inner Ethernet - inner IPv4.
      If outer MTU is 1500 octets, inner MTU would be 1430 octets.

    • WireGuard header structure: outer IPv6 - outer UDP - WireGuard - inner IPv4.
      If outer MTU is 1500 octets, inner MTU would be 1420 octets.

    Use a visual packet size calculator to calculate the inner MTU.
    The inner IPv4 header also counts against the inner MTU.
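
    For instance, if the outer MTU is 1500 octets, the ip6gre tunnel MTU from the article could be set explicitly to match (a sketch using the numbers above):

    # both servers: 1500 - 40 (outer IPv6) - 4 (GRE) = 1456 octets
    ip link set tnl4 mtu 1456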

    When I wrote the EUserv-IPv4 article, I hadn't heard about the IP6GRE tunnel protocol, so I went with VXLAN, which I was most familiar with.



  • Good article!

    One thing you forgot is TCP MSS clamping. This can be set up with an iptables rule. Otherwise, TCP sessions will freeze if the peer has non-functional PMTUD.
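
    A commonly used rule for this (a sketch; it clamps the MSS of forwarded TCP connections to the path MTU):

    # server A: clamp TCP MSS on forwarded SYN packets to the discovered path MTU
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu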

  • @aeg said:
    One thing you forgot is TCP MSS clamping. This can be set up with an iptables rule. Otherwise, TCP sessions will freeze if the peer has non-functional PMTUD.

    I don't believe TCP MSS clamping is necessary.
    In either direction, if the packet size exceeds the tunnel MTU, the tunnel driver will perform IPv4 fragmentation as needed.


  • @yoursunny said:

    @aeg said:
    One thing you forgot is TCP MSS clamping. This can be set up with an iptables rule. Otherwise, TCP sessions will freeze if the peer has non-functional PMTUD.

    I don't believe TCP MSS clamping is necessary.
    In either direction, if the packet size exceeds the tunnel MTU, the tunnel driver will perform IPv4 fragmentation as needed.

    As part of attempting PMTUD, the peer will send packets with DF set. The tunnel driver shouldn't fragment those. If PMTUD fails (i.e., overly aggressive ICMP filtering on the peer's side, outside your control), the peer will just keep sending oversized packets with DF set and they'll get dropped on the floor.
