Proxmox SDN

edited April 23 in Help

Hello!

I am looking to set up 5 Proxmox nodes in a datacenter. The datacenter is going to provide me with two VLANs: a private VLAN and a public (internet) VLAN.

What I am looking to do is segment my VMs over different SDNs, using the private VLAN as the backing network. Example networking config below.

Per Node:


    iface vmbr0.200 <snip>            # Internet

    iface vmbr0.201                   # Private
            address 192.168.xxx.2x/24
            gateway 192.168.xxx.1

I would then use pfSense/OPNsense as the firewall for each SDN.

I am thinking of the following SDNs:

Secure
Insecure
Kubernetes

So, to my question: which SDN zone type should I pick? Since it is a DC, would it be VLAN, VXLAN, or QinQ? Which is the lightest on CPU usage?

Note: I'd asked this on Proxmox's forum as well, but wasn't getting the traction I had hoped for.

Comments

  • Any reason why you are not using the native Proxmox firewall?
    Afraid I don't understand enough to answer your real question, but my standard answer would be KISS.

  • I think this might do what I am looking for. Honestly, I'm just looking for clarification on all of it and best practices, which are hard to find at the moment.

    https://mattglass-it.com/software-defined-network-proxmox/

  • If you need to divvy up the private VLAN further, you need QinQ. You then set up the VMs to use the corresponding vnetX network devices and specify which VLAN each one should sit in.

    No point doing VXLAN if you have direct L2 connectivity across all nodes.
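    For concreteness, QinQ in Proxmox SDN is declared as a zone plus one vnet per segment, cluster-wide. A minimal sketch of what that config could look like for the three segments above (the zone name, vnet names, service VLAN and inner tags are placeholder assumptions, not values confirmed in this thread):

    ```
    # /etc/pve/sdn/zones.cfg - QinQ zone backed by the private VLAN
    # (service-vlan 201 is a placeholder for the provider-assigned VLAN)
    qinq: qinqzone1
            bridge vmbr0
            service-vlan 201
            mtu 1496

    # /etc/pve/sdn/vnets.cfg - one vnet per segment; inner tags are placeholders
    vnet: secure1
            zone qinqzone1
            tag 100

    vnet: insec1
            zone qinqzone1
            tag 110

    vnet: k8s1
            zone qinqzone1
            tag 120
    ```

    After applying the SDN config, each VM attaches to secure1/insec1/k8s1 as its bridge; the provider's switch only ever sees the outer service VLAN.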

  • edited April 24

    @jmgcaguicla said:
    If you need to divvy up the private VLAN further, you need QinQ. You just then set up the VMs to use the corresponding vnetX network devices and specify which VLAN you want them to sit in.

    No point doing VXLAN if you have direct L2 connectivity across all nodes.

    Does the provider need to support this? What is the advantage over VXLAN?

    Reading this suggests I need to verify that the provider supports QinQ: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvesdn_tech_and_config_overview

  • @CMunroe said:
    Does the provider need to support this?

    Yes, but a good portion of switches sitting in a DC should support 802.1ad anyway; imo this will be more of a concern for low-end homelabs.

    What is the advantage over VXLAN?

    No node overhead, since the switch does the en/de-capsulation. You also don't lose much MTU. That said, I don't know if either would cause noticeable performance degradation for your use case; if you're really curious, why not try both out before committing?
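    To put the MTU point in numbers: QinQ adds a single 4-byte 802.1ad service tag, while VXLAN wraps the whole frame in outer Ethernet + IP + UDP + VXLAN headers. A quick back-of-envelope sketch (assumes an IPv4 underlay and an untagged outer frame):

    ```python
    # Per-packet encapsulation overhead relative to a plain 1500-byte-MTU link.
    QINQ_OVERHEAD = 4                  # one extra 802.1ad service tag
    VXLAN_OVERHEAD = 14 + 20 + 8 + 8   # outer Ethernet + IPv4 + UDP + VXLAN = 50 bytes

    LINK_MTU = 1500
    print("QinQ inner MTU: ", LINK_MTU - QINQ_OVERHEAD)    # 1496
    print("VXLAN inner MTU:", LINK_MTU - VXLAN_OVERHEAD)   # 1450
    ```

    This is why Proxmox-style configs typically drop the guest MTU to 1496 for QinQ but all the way to 1450 for VXLAN (unless the underlay supports jumbo frames).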

  • @jmgcaguicla said:
    Yes, but a good portion of switches sitting in a DC should support 802.1ad anyway [...]

    Awesome, thank you! I shot a message to my provider to ask.
