Kimsufi: unable to get IPv6 connectivity in guests (either ndppd or static neighbour)
Hi all,
My Kimsufi is running Proxmox with encryption and all, thanks for the help!
I've spent the past few nights trying to get IPv6 to the containers. As OVH shows my server as having a /128, I searched for yoursunny's hall of shame, but was not able to find the list (I'm sure I've seen it a couple of times during BF/CM).
I did find @loay's IPv6 subnet-checker. It warned me of OVH's on-link practice and suggested ndppd.
Enabling forwarding and proxying (all/vmbr0/eno1) and configuring ndppd on either the bridge (vmbr0) or the actual interface (eno1) does not have the desired effect: neighbour advertisements / neighbour solicitations are not proxied.
I tried setting a static neighbour for two containers on eno1, and separately also on vmbr0, again with no result.
I threw the story in a chatbot, which regurgitated the same commands and told me that it should work that way.
Is there something obvious I must have missed in the context of Kimsufi OVH BHS?
Comments
Capture traffic with tcpdump and analyze with Wireshark.
You can see the NDP packets and find out what's missing.
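To watch specifically for neighbour solicitations (ICMPv6 type 135) and advertisements (type 136), a filter like this works; the interface name is an assumption, and note it needs root and assumes no IPv6 extension headers (the `ip6[40]` byte is the ICMPv6 type only for the fixed 40-byte header):

```shell
# NS/NA traffic on the uplink (replace eno1 with your interface)
tcpdump -n -i eno1 'icmp6 and (ip6[40] == 135 or ip6[40] == 136)'
```

If ndppd is doing its job, you should see the host answering solicitations for the guest addresses with its own MAC.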
affbrr
Thanks!
tcpdump showed that NDP does not seem to get proxied. Traffic over eno1 shows packets with the IP of the guests instead of that of the host. So I know what is missing: proxying of NDP. But with ndppd already running, I don't know what else to add.

I was notified that a thread by Maounique (OGF) might hold some pointers.
It seems so!
Maounique described a setup with two bridges:
1. vmbr0 enslaving eno1, on a chosen IP/128 from the /64 block
2. vmbr64 on the ::1/128 in the OVH panel, but with /128 replaced by /64
3. ndppd proxies on vmbr0, for the subnet on vmbr64
Guests use the IP of vmbr64 as gateway; traffic is forwarded with net.ipv6.conf.all.forwarding=1 in sysctl (but seemingly without the need to set proxy_ndp=1).
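For my own notes, a minimal sketch of that two-bridge layout in Debian-style /etc/network/interfaces as used on Proxmox. All addresses below are placeholders, and the out-of-subnet gateway (OVH historically uses an address ending in ff:ff:ff:ff just outside your /64 — this is their on-link practice) is an assumption; take the real values from the OVH panel:

```
# ---- /etc/network/interfaces (fragment; 2001:db8:1:100::/64 is a placeholder) ----
auto vmbr0
iface vmbr0 inet6 static
    address 2001:db8:1:100::2/128   # any chosen address from the /64
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    # OVH's gateway sits outside the /64, so force it on-link first:
    post-up ip -6 route add 2001:db8:1:1ff:ff:ff:ff:ff dev vmbr0
    post-up ip -6 route add default via 2001:db8:1:1ff:ff:ff:ff:ff

auto vmbr64
iface vmbr64 inet6 static
    address 2001:db8:1:100::1/64    # the ::1 from the panel, /128 widened to /64
    bridge-ports none
    bridge-stp off
    bridge-fd 0

# ---- /etc/ndppd.conf: answer solicitations on vmbr0 for the subnet behind vmbr64 ----
proxy vmbr0 {
    rule 2001:db8:1:100::/64 {
        iface vmbr64
    }
}
```

Plus `net.ipv6.conf.all.forwarding=1` in /etc/sysctl.conf. Again: a sketch under my assumptions, not a verified config.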
I'll be reconfiguring following that suggestion and testing whether it gives me connectivity.
Make sure the kernel has NDP proxying enabled. Without it, it doesn't matter if ndppd is installed and running.
That is, net.ipv6.conf.all.proxy_ndp=1, isn't it? Thanks for mentioning it, as I did not notice it in the other thread. (It was in my attempts so far, so I'll keep it.)
Strong boys use ndpresponder instead of ndppd.
https://lowendspirit.com/discussion/2815/ipv6-neighbor-discovery-responder-for-kvm-vps
affbrr
Is the /128 and default route assigned to the bridge?
Can you ping out from the bridge?
Assign a static address to the bridge inside the /128, use that as the default gateway, and see if it works inside the container.
Also make sure ipv6 forwarding is on.
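The checks above can be run roughly like this; interface names and the ping target are assumptions, and the commands need to run on the actual host:

```shell
# Quick sanity checks (vmbr0 = uplink bridge is an assumption)
sysctl net.ipv6.conf.all.forwarding   # expect "= 1"
ip -6 addr show dev vmbr0             # is the /128 actually on the bridge?
ip -6 route show default              # default route via the OVH gateway?
ping -6 -c 3 2606:4700:4700::1111     # can the host itself reach IPv6 at all?
```

If the host itself can't ping out over IPv6, the guest/NDP side is irrelevant until that is fixed.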
Real strong boys don't use any extra sh*t but add the additional IPv6 static to the bridge
ip -6 neigh add proxy

ndppd works fine if properly configured.
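For reference, the static-neighbour variant that command points at looks roughly like this; the guest address and interface name are placeholders, and it requires proxy_ndp on the uplink (one entry per guest address, which is why ndppd is more convenient for whole subnets):

```shell
# Answer solicitations for one specific guest address on the uplink (placeholders)
sysctl -w net.ipv6.conf.eno1.proxy_ndp=1
ip -6 neigh add proxy 2001:db8:1:100::10 dev eno1
# Verify the proxy entry exists:
ip -6 neigh show proxy
```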
Hi all, thanks a lot for your input, suggestions and patience :-)
I got it to work!
While detailing my situation for follow-up questions, I dropped all configs and the troubleshooting results into an editor with similar-selection highlighting.
In the end it turned out I mixed up the IP of vmbr0 (outer bridge to eno1) and vmbr64 (inner bridge connected to the containers).
As a result, my containers were configured with the wrong gateway and the traffic went nowhere.
For anyone with the same problem on Kimsufi: as of December 2025 it still works with double bridges and ndppd, following Maounique's howto as abbreviated above.
Lesson learned: explaining the problem to someone in order to ask them a question hands you the solution by itself 90% of the time.
Happy networking :-)
Better reply than "nvm figured it out"