Boringguard - Ansible Role for Wireguard install & setup

https://github.com/N-Storm/boringguard

Hey everyone! 👋

I wanted to share a piece of my private collection of custom ansible roles that I've created for configuring and managing various VPSs, lowends included. I've recently decided to make one of these roles, Boringguard, public to see if it might be useful to the community.

Features:

  • Compatibility: It works with deb/apt-based distros like Debian 11+ (might work with 9+, haven't tested), Ubuntu 20.04+, Armbian, and RPM-based RHEL8+ distros (CentOS, Rocky, Alma, Oracle, etc).
  • Boringtun Installation: This role can install Boringtun, a userspace Wireguard daemon implementation by Cloudflare, which doesn't require a kernel module. It's great for container-based VPSs (OpenVZ, LXC, Virtuozzo, etc.), especially if you have TUN/TAP capability. It even works on NAT VPSs with UDP port-forwarding.
  • Binary Packages Included (.deb and .rpm): Since there's no official repo for Boringguard and no distro packages available, I've built binaries from sources for various architectures (x86_64, aarch64, ARMv7). This includes builds with MUSL lib as well as Glibc, to better suit resource-constrained devices. Should work on a variety of small/embedded devices, like SBCs, ARM routers, etc. Tested on Hetzner CAX ARM64 plans and ARMv7 Orange Pi One SBC.
    Don't trust my binaries? I'm absolutely with you here ;) You can build and add your own. The packages to install are configurable in a YAML vars file. Let me know if you need a guide for building those packages. Just a couple of requests and I'll write a recipe to automate building the packages I have there.

  • Configurations and QR Codes: Configure the server with as many peers as you want and generate client config files and QR-codes.

  • Idempotent with Persistent Config: The primary reason I created my own 'Wireguard installer' is probably its idempotency and persistent configuration. It's mostly inspired by Nyr's wireguard-install script, but I found it lacking the ability to restore VPN settings on VPS reinstall/migration/etc. (like almost every Wireguard install script), not to mention the absence of ARM support. This is where Ansible comes in as a more suitable tool for such tasks - you simply define your configuration with variables for each host (or even host group), tweak some settings or start with the defaults, and set up your VPN. Once generated, items like private keys and other settings are stored on the host used for configuration (the "ansible host", i.e. the control node).
    I don't want to configure clients from scratch every time I need to rebuild a VPS, or manually fix configs on a VPS migration, for example. That can be a serious hassle if you have many VPSs or clients. A managed configuration approach solves a few things at once here:
    • a) generates a populated VPN config file which you can edit. When you run the playbook again, the new settings will be applied;
    • b) ensures that if you reinstall/migrate/change your VPS, running the playbook again will install and restore the same settings as before (assuming the hostname remains the same). Peers can connect with the same keys/certs as before.
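To sketch what the per-host configuration looks like in plain Ansible terms: the wg_port/wg_iface/wg_npeers variable names are the ones the role's example playbook documents further down this thread, while the host_vars layout is standard Ansible convention, not something specific to this role:

```yaml
# host_vars/vpn1.example.com.yml - standard Ansible per-host variables.
# Variable names are the role's documented ones; the values are just examples.
wg_port: 51820      # WireGuard UDP port
wg_iface: eth0      # interface WireGuard listens on
wg_npeers: 3        # number of peer configs to generate
```

Because the generated keys and peer settings are persisted on the control host, re-running the same playbook against a freshly reinstalled VPS converges it back to this exact state.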

This might not be a huge deal, but it's incredibly useful for me. It's part of a much larger "VPS toolkit" I have, which I'm not planning to make fully public (it's tailored to my specific environment). However, if Boringguard is useful to others, I might consider migrating more features from my private collection.

Docs & Feedback:

I haven't finished the documentation yet (missing Quick Start, etc.). Feel free to ask here if you're interested, and I'll work on improving the docs if there's enough interest.

Cheers! 🚀

Thanked by (4): sh97, skorous, wankel, Encoders

Comments

  • Added build script & Dockerfile to build boringtun packages for x86 & ARM: https://github.com/N-Storm/boringguard/tree/master/build
    Tested to work on Debian 11/12 as a host.
    Can be used separately if you just need to build the packages.

  • I haven't had the time to read your repository thoroughly, but I'd say it's overly complicated (just like most ansible lol).

    imho, people interested in using someone's ansible setup would have enough knowledge to set it up on their own to a certain degree, so you should prioritize making docs on how to add and remove clients. this is just not obvious enough;

    • where are clients defined?
    • what should i do to add/remove a client?
    • which script is responsible for adding/removing a client?
    • after a client is added, how does it display my configuration?
    • how do i retrieve a client configuration that was already made in the past?

    in its current state, the ansible script hardly competes with https://github.com/Nyr/wireguard-install, which functions the same and is easier to use

    Thanked by (1): NStorm

    Fuck this 24/7 internet spew of trivia and celebrity bullshit.

  • Is this different from Boringtun by Cloudflare and the one @Nyr used?

  • edited May 27

    @Fritz said:
    Is this different from Boringtun by Cloudflare and the one @Nyr used?

    It was inspired by @Nyr's WG install scripts (credited at the end of README.md). It installs Boringtun as well, but I've built my own binary packages from sources. Compared to Nyr's scripts, I've also added various ARM builds (tested on Hetzner ARM VPS plans; also runs on single-board computers like the RPi and others).

    @Encoders said: you should prioritize making docs on how to add and remove clients. this is just not obvious enough;

    Thanks for your comments. Some of the questions you've mentioned are already answered in the Usage section: https://github.com/N-Storm/boringguard/?tab=readme-ov-file#usage
    But I'll consider how to make it easier to understand. Probably should add some sort of simple step by step "HowToUse Walkthrough".

    The most significant difference from the Nyr installer is the idempotency that comes with the IaC approach, which I've already noted in the 1st post here. With this Ansible role, once I configure WG on a host, I don't have to do anything except run a single simple playbook again to apply it. I can reinstall/rebuild the VPS, even on another distro, or even migrate to another provider (i.e. "buy a new VPS somewhere else"). Just run it again and it will restore all the settings, which means I don't need to update configs on the clients at all. Host and preshared keys are kept. No hassle walking through installer questions either. I have a lot of clients on various devices and platforms, some not reachable all the time, etc. This approach really saves a lot of my time.

    Scales better as well. Once you have a dozen VPSs and even more clients, it becomes a pain to run and walk through an installer script each time you need to change something.

    And ARM support, including on container-based VPSs (OpenVZ/LXC/etc.) as I've mentioned already, is another feature I've added over the original idea from those scripts.

  • NyrNyr OG

    @NStorm said: runs on Hetzner ARM plans VPS, tested, runs on single-board computers like RPi and others

    Just to clarify, my installer runs on ARM too, using the native kernel module instead of a userspace implementation, as it is more efficient. It only uses boringtun where it is required (that is, places without the kernel module available, mainly some containers).

    Yeah, I did not care about building the boringtun binaries for architectures other than x86_64, but that is because there is virtually zero demand for this edge case and it would be a waste of time and resources (my humble opinion, of course).

    Thanked by (2): NStorm, skorous
  • edited May 27

    @Nyr said: native kernel module instead of a userspace implementation as it is more efficient.

    But it's unavailable inside containers. Even if the host has the WireGuard kernel module loaded, additional permission settings on the host are still required to allow a container to use it.

    @Nyr said: Yeah, I did not care about building the boringtun binaries for architectures other than x86_64, but that is because there is virtually zero demand for this edge case and it would be a waste of time and resources (my humble opinion, of course).

    You might be interested to take a look at my build script and GitHub Actions: https://github.com/N-Storm/boringguard/tree/master/build
    All builds are done automatically inside temporary docker containers. With a simple GitHub Action this could be done on GitHub itself, built by their free "runner" hosts. With a few more steps added, it can automatically upload the resulting binaries somewhere. I've made an Actions workflow to build & upload binaries to the GitHub Releases section in another, unrelated project: https://github.com/N-Storm/DigiLivolo/actions . The Action script there is a bit complicated due to the nature of the project; it won't be that complex for Boringtun builds at all. But it builds everything for various platforms on every commit to report any build errors, and uploads built binaries to Releases when a version tag is added.
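For illustration, a minimal workflow along those lines might look like this. This is a sketch only: the build script path and output locations inside build/ are assumptions, not the repo's actual workflow:

```yaml
# .github/workflows/build.yml - hypothetical sketch, not the actual workflow
name: build-packages
on:
  push:
    tags: ['v*']   # upload to Releases only when a version tag is pushed

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # build/ is the directory from the repo linked above; the exact
      # script name and output path here are assumptions
      - name: Build packages in a temporary Docker container
        run: ./build/build.sh
      - name: Upload built packages to the Release
        uses: softprops/action-gh-release@v2
        with:
          files: |
            build/out/*.deb
            build/out/*.rpm
```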

    And sorry, I didn't make it clear that your script is missing ARM support only inside containers. A rare case indeed. It will work on full VMs (like KVM) or hardware servers where the kernel module can be loaded.

  • @NStorm said: Thanks for your comments. Some of the questions you've mentioned, are already answered in the Usage section: https://github.com/N-Storm/boringguard/?tab=readme-ov-file#usage
    But I'll consider how to make it easier to understand. Probably should add some sort of simple step by step "HowToUse Walkthrough".

    Thank you, looking forward to this

    @NStorm said: The most significant difference from the Nyr installer is the idempotency that comes with the IaC approach, which I've already noted in the 1st post here. With this Ansible role, once I configure WG on a host, I don't have to do anything except run a single simple playbook again to apply it. I can reinstall/rebuild the VPS, even on another distro, or even migrate to another provider (i.e. "buy a new VPS somewhere else"). Just run it again and it will restore all the settings, which means I don't need to update configs on the clients at all. Host and preshared keys are kept. No hassle walking through installer questions either. I have a lot of clients on various devices and platforms, some not reachable all the time, etc. This approach really saves a lot of my time.
    Scales better as well. Once you have a dozen VPSs and even more clients, it becomes a pain to run and walk through an installer script each time you need to change something.

    I like this IaC approach. personally I'm managing 8 nodes for wireguard and it's been a pain when I have to move to a new IP (not often, but I still felt the hassle). so far I've been using Salt Stack for this, but yeah I don't really like how the salt-minion hogs quite some resources on a lowend VPS. I was thinking of using terraform to do this (so things can be done over ssh), but I've been procrastinating lol. happy to see a working example in ansible


  • edited May 28

    @Encoders, first draft here. Willing to test it? :)

    1. Install ansible, git, qrencode and, optionally, sshpass if you log in with password-based authentication (not with keys):
      sudo apt install ansible git qrencode sshpass
    2. Prepare directories for ansible role and config storage:
      mkdir -p ansible/roles ansible/configs/wireguard
    3. Clone git repo into ansible/roles:
      cd ansible/roles && git clone https://github.com/N-Storm/boringguard/ && cd ..
    4. Copy example playbook for the role (should be in ansible directory right now):
      cp roles/boringguard/boringguard.yaml .
    5. Check & modify vars inside ansible/boringguard.yaml if required. It will work with the defaults as is, but you might want to change them. Uncomment the vars: section and any of the wg_port, wg_iface or wg_npeers lines you want to modify:
      • wg_port sets WireGuard port. Default: 51820
      • wg_iface sets the network interface WireGuard will listen on. Default: eth0
      • wg_npeers sets the number of WireGuard peers (clients) that will be created during setup. On the first run they will be named Peer; you can change the names later if required. Default: 2
    6. Once done, run the playbook against the desired host. The user on the target host can be specified on the command line with the -u option (much like with ssh). That account should have sudo access (to install packages, configure WireGuard, etc.). Option -k lets you type the ssh password, -K does the same when sudo asks for a password, and -v turns on a bit more verbose output. -i usually points to an inventory file with the host list, but you can specify a host directly on the command line; just note the ',' at the end:
      ansible-playbook -vkK -u user -i host, boringguard.yaml

    Now it should connect to the host via ssh and begin installing and configuring WireGuard. On a successful run it will store everything related to the WireGuard setup in the configs/config-<hostname>.yaml file. This includes the WG secret keys, so keep this file private. You can edit that file later to modify the settings you want (change port, add/rename/remove peers, etc.), then just run the playbook once again as noted in step #6 and it will re-configure everything based on this file.
    Configs for the WG clients will be located in the configs/wireguard directory. Named beginning with [host name]-[peer name], there will be 3 files for each:

    • [host name]-[peer name].conf - the file-based WireGuard config, which can be imported on the client
    • [host name]-[peer name].qrcode.png - an image with the WireGuard connection QR code, which can be scanned by supported clients.
    • [host name]-[peer name].qrcode.txt - the same QR code, but using ANSIUTF8 pseudo-graphics, which can be output to a text terminal right away.

    If you want to run on multiple hosts in one run, you can prepare Ansible Inventory file by following this guide: https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html
    It's pretty simple in INI format, and can be just "one line per host" as well. Once prepared, you can specify this file with the -i option in step #6 instead of a host directly. Per-host variables can be specified in the inventory file as well. The SSH user can be specified with the Ansible built-in variable ansible_user. A port can be specified as host:port. Example inventory file:

    host1.example.com wg_port=12345
    host2.example.com:1234 wg_npeers=1 wg_port=3333 wg_iface=venet0
    

    More config options can be set by adding them to the initial playbook or inventory file, but these aren't covered here right now.
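For reference, after step 5 the uncommented vars section in boringguard.yaml might look like this. A sketch assuming the usual Ansible playbook layout; the actual example playbook in the repo may differ:

```yaml
# boringguard.yaml - playbook applying the role with the three vars from step 5
- hosts: all
  become: true
  vars:
    wg_port: 51820    # WireGuard UDP port
    wg_iface: eth0    # interface to listen on
    wg_npeers: 2      # peers to create on the first run
  roles:
    - boringguard
```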

    Thanked by (1): Encoders
  • While Ansible might sound complicated, once you figure out how it works (you might check the yaml files in boringguard/tasks, but there are a lot of more suitable examples & tutorials you can find elsewhere on the net), it really helps to manage host configurations.

    As I've noted in the 1st post, Boringguard is just a part of the custom roles collection I use to manage VPSs. Every time I get a new VPS / migrate / change configuration / etc., I just apply the roles I need to configure the VPS (including "from scratch" for a freshly obtained VPS) to get all the functionality I need from it. One role does the initial config - sets up user accounts, ssh keys, permissions, installs & updates packages. Other roles, like this one, add the functionality I need. Others I've made install & configure OpenVPN and SoftEther (might add them to Boringguard later as well). There's a role to do backups, another adds a custom rclone-based cloud sync feature. And a few small others for the rest.

    SCMs with agents usually scale well for a high count of hosts. I'm using one at my work, but it's used to manage a few thousand hosts. More resource usage for the agents, right. So it's usually only worth it starting from at least 30-40+ hosts. Below that count, Ansible can be better by not wasting resources on an agent. Even more reasonable if the configuration doesn't change too often. Still, Ansible is used even in my work environment sometimes, for "special occasion" tasks.

  • @NStorm said:
    @Encoders, first draft here. Willing to test it? :)

    looks good, I'll try it this weekend. will report back once it's done.
    I have a bit of experience using ansible at my old employer (deploying cPanel, OpenStack, etc.) so I'm not really going in the dark


  • @NStorm said:
    @Encoders, first draft here. Willing to test it? :)

    I tested this on 2 of my VPSs last week and it works; the guide is intuitive and makes using the playbook easier. good job

    personally I just altered the jinja2/client.conf to use AllowedIPs excluding the 192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8 ranges using this calculator
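The same exclusion can also be computed locally with Python's stdlib ipaddress module instead of an online calculator. A quick sketch, not part of the role:

```python
import ipaddress

# Start from "route everything" and carve out the RFC 1918 private ranges,
# so VPN clients route the public internet via WireGuard but keep LAN traffic local.
excluded = [ipaddress.ip_network(n)
            for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

nets = [ipaddress.ip_network("0.0.0.0/0")]
for ex in excluded:
    result = []
    for net in nets:
        if ex.subnet_of(net):
            result.extend(net.address_exclude(ex))  # split net around ex
        elif not net.subnet_of(ex):
            result.append(net)                      # no overlap, keep as-is
    nets = result

allowed_ips = ", ".join(str(n) for n in sorted(ipaddress.collapse_addresses(nets)))
print(allowed_ips)  # paste the result into AllowedIPs = ... in the client config
```

The output is a list of CIDR blocks equivalent to 0.0.0.0/0 minus the three private ranges.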

    Thanked by (1): NStorm


  • @Encoders, thanks for testing and the feedback! Glad it served its purpose. I'll include it on the GitHub; probably need to "polish" some parts a bit.
    As for AllowedIPs, you've reminded me that I've wanted to make it configurable, as well as other things like the subnet for the virtual network on the VPN interface, etc.
