How to deploy a single Kubernetes cluster across multiple clouds using k3s and WireGuard

Kubernetes is hard enough, and now your boss tells you to migrate your application from AWS to Azure, split your back end and front end between public and private data centers, and deploy to six different environments simultaneously.

Before you decide to quit your brand new DevOps job, let's see if we can set this up an easier way.

You look around and find a whole slew of tools out there for managing multiple Kubernetes clusters across environments, and many may even help you deploy your app. However, this all raises an important question: Why run multiple Kubernetes clusters at all?

Kubernetes is a control plane plus managed worker nodes. Why can’t you just deploy these worker nodes to different environments and be done with it?

There are two answers you’ll typically hear:

  • You can't do that because of latency.

  • You can't do that because of security.

I’m here to tell you that you can do it, and that it’s easier than you might have thought.

The Solution

  • You can't do that because of latency. Use k3s: Kubernetes' sensitivity to latency mostly comes from etcd, which degrades badly in lower-performance environments. k3s can swap etcd for a SQL datastore, which doesn't have that problem. An alternative approach is to co-locate your masters while distributing the workers.

  • You can't do that because of security. Run the cluster over WireGuard to create encrypted tunnels between all your nodes, keeping traffic secure while adding minimal latency. For dynamic networks, use a WireGuard management tool; this guide uses Netmaker (https://github.com/gravitl/netmaker). Alternatives include Kilo, Wormhole, Ansible scripts, or other WireGuard config managers.

If you prefer a visual walkthrough, there is a YouTube tutorial: https://youtu.be/z2jvlFVU3dw

Setup (overview and notes)

  • This guide is a short demonstration only. It does not cover DNS, storage, or High Availability. Do not treat this as production-ready without adding those layers.

  • Get a few cloud VMs with public IPs. This walkthrough uses one Linode, two AWS EC2 instances, and a machine on a home network: three act as the cluster nodes, and the fourth runs Netmaker.

  • Install Ubuntu 20.04 on each machine (systemd-based Linux recommended).

  • On each cluster node, install wireguard-tools (e.g. apt install wireguard-tools).

  • On the Netmaker VM, install Docker and docker-compose:

    • Docker: https://docs.docker.com/engine/install/ubuntu/

    • docker-compose: https://docs.docker.com/compose/install/

  • Ensure ports 80, 8081, and 50051 are open on the Netmaker VM.
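A quick sketch of the prerequisites (assuming ufw for the firewall; your cloud provider's security groups need the same ports opened):

```bash
# On each cluster VM (Ubuntu 20.04):
sudo apt update && sudo apt install -y wireguard-tools

# On the Netmaker VM, after installing Docker and docker-compose
# per the links above, open the required ports:
sudo ufw allow 80/tcp
sudo ufw allow 8081/tcp
sudo ufw allow 50051/tcp
```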

Part 1: Netmaker Install / WireGuard Setup

We’ll create a flat, secure network for the cluster nodes: a virtual subnet at 10.11.11.0/24, with each node added to it.

On the Netmaker VM

1. Prepare Netmaker (on the Netmaker VM)

SSH to the Netmaker VM:
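For example, substituting your VM's public IP:

```bash
ssh root@<netmaker VM public IP>
```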

Download the docker-compose file:
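A sketch; check the Netmaker repo for the current location of the compose file, since the path may have changed since this was written:

```bash
wget -O docker-compose.yml https://raw.githubusercontent.com/gravitl/netmaker/master/compose/docker-compose.quickstart.yml
```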

Replace the backend address placeholder in docker-compose.yml:
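The placeholder name varies by compose file version, so check yours; assuming it's HOST_IP:

```bash
# Point the backend at this VM's public IP:
sed -i 's/HOST_IP/<netmaker VM public IP>/g' docker-compose.yml
```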

Start Netmaker:
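Bring the containers up in the background:

```bash
sudo docker-compose up -d
```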

Now head to the IP of that VM to access the Netmaker UI, create a user and log in.

2. Create network and access key

  • In the Netmaker UI, click "Create Network". Name it k3s with address range 10.11.11.0/24.

  • Click "Access Keys", select your network (k3s), create a key (e.g. k3s-key) and give it uses (e.g. 1000).

  • Copy the install command and its access key (look for the curl -sfL … | KEY=… sh - line). You'll run this on each cluster VM.

Deploy Netclient on cluster VMs

Note: Make sure wireguard-tools is installed on each cluster VM before running the netclient install.

1. Verify WireGuard tooling

On each cluster VM, switch to root and confirm the WireGuard tooling is installed:
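```bash
sudo su -

# Confirm the WireGuard tools are present; install them if not:
wg --version || apt install -y wireguard-tools
```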

2. Install netclient

Run the Netmaker install script on each cluster VM, replacing <YOUR ACCESS KEY FROM NETMAKER> with your access key:
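The exact command comes from the access key dialog in the Netmaker UI; it has roughly this shape (the script URL varies by version, so prefer the command you copied):

```bash
curl -sfL https://raw.githubusercontent.com/gravitl/netmaker/master/scripts/netclient-install.sh | KEY=<YOUR ACCESS KEY FROM NETMAKER> sh -
```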

3. Verify WireGuard interface

On each node, check WireGuard:
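For example:

```bash
# Expect a nm-k3s interface with a peer entry for each other node:
wg show
```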

If wg show shows the interface on each node, Netmaker/Netclient is running correctly. Check the Netmaker UI — you should see all nodes online (green).

Part 2: K3s Installation

Master (server) node

  • SSH to the node that will be your master and become root:

  • Find the WireGuard-private address (look for an address under the nm-k3s interface):
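For example (the nm-k3s interface name comes from the Netmaker network created above):

```bash
ssh root@<master node public IP>
sudo su -

# Find the address assigned to the nm-k3s interface:
ip a
```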

If the master was the first node added to the Netmaker network, this will likely be 10.11.11.1. Use that address in the k3s install command below; otherwise, substitute the address you observe.

Install k3s on the master:
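A sketch of the server install, assuming the master's WireGuard address is 10.11.11.1 and the interface is nm-k3s (substitute what you found above):

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --node-ip 10.11.11.1 --node-external-ip 10.11.11.1 --flannel-iface nm-k3s" sh -
```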

Wait ~5 minutes for k3s to start, then check status:
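```bash
systemctl status k3s
```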

Get the node token needed by workers:
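The token lives at a fixed path on the server:

```bash
cat /var/lib/rancher/k3s/server/node-token
```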

Worker nodes

On each worker node:

From ip a, get the node's WireGuard IP (e.g. 10.11.11.X). Then run the install on the worker, replacing < TOKEN VAL > with the server node-token, 10.11.11.X with the worker's WireGuard IP, and 10.11.11.MASTER with the master's WireGuard IP:
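A sketch, using the same flannel interface assumption as the master install:

```bash
curl -sfL https://get.k3s.io | K3S_URL=https://10.11.11.MASTER:6443 K3S_TOKEN=<TOKEN VAL> INSTALL_K3S_EXEC="--node-ip 10.11.11.X --flannel-iface nm-k3s" sh -
```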

Check the agent status:
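```bash
systemctl status k3s-agent
```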

Back on the master, verify nodes and pods:
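```bash
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide
```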

If the nodes show up as Ready and the pods are running, you've created a cluster spanning multiple locations.

Part 3: Testing

Deploy simple pingtest pods to verify pod-to-pod connectivity across clouds.

  • Create the pingtest resources from this YAML: https://pastebin.com/BSqLnP57

You should see pods scheduled across different nodes. Exec into a pingtest pod and ping other pod IPs to verify connectivity.
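A sketch, assuming you saved the pastebin YAML as pingtest.yaml (pod names come from kubectl get pods):

```bash
kubectl apply -f pingtest.yaml
kubectl get pods -o wide    # note each pod's IP and the node it landed on

kubectl exec -it <pingtest pod name> -- sh
# Inside the pod, ping a pingtest pod on a different node:
ping <IP of a pingtest pod on another node>
```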

Next, test the service network across clouds by deploying nginx with a ClusterIP/load-balancer service.

  • nginx deployment and service YAML: https://pastebin.com/ttadjjDA

From a pingtest pod that’s on a different host than the nginx pods:
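A sketch, assuming the service defined in the nginx YAML is named nginx (check the YAML; use curl instead of wget if your pod image has it):

```bash
kubectl exec -it <pingtest pod name> -- sh
# Inside the pod, fetch the nginx welcome page via the service name:
wget -qO- http://nginx
```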

If you successfully retrieve the page, the cluster's service network is working across clouds.

Conclusion

You created a single Kubernetes cluster that spans multiple clouds using k3s and WireGuard. To add more nodes later, run the netclient install script and then the k3s install script on the new node.

Next steps (not covered here): set up High Availability with multiple masters, add Ingress, configure distributed storage, or deploy Netmaker inside the cluster so it becomes part of it instead of running on an extra VM.
