Deploying K3s with VeilNet

Learn how to deploy K3s with VeilNet across multiple regions.

Prerequisites

  • Ubuntu/Debian-based Linux system
  • Root or sudo access
  • VeilNet Conflux binary (veilnet-conflux)
  • VeilNet registration token

Set Up a K3s Cluster

This guide walks you through setting up a K3s Kubernetes cluster using VeilNet for networking across multiple nodes.

Step 1: Install VeilNet

First, make the VeilNet Conflux binary executable:

chmod +x ./veilnet-conflux

Register the node with VeilNet:

sudo ./veilnet-conflux register \
    -t <YOUR_VEILNET_TOKEN> \
    --cidr <YOUR_CIDR> \
    --tag <YOUR_TAG> \
    -p

Replace the placeholders:

  • <YOUR_VEILNET_TOKEN>: Your VeilNet registration token
  • <YOUR_CIDR>: The CIDR block for this node (e.g., 10.128.0.1/16)
  • <YOUR_TAG>: A tag to identify this node (e.g., master-node-1)

Check the VeilNet service logs:

journalctl -u veilnet -f
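
Once registration succeeds, you can also confirm that the VeilNet interface is up and carries the CIDR you assigned. This assumes the interface is created with the name veilnet, as implied by the --flannel-iface veilnet flag used later in this guide:

```shell
# Confirm the VeilNet interface exists and shows the assigned address
# (e.g. 10.128.0.1/16 from the registration step above).
ip addr show veilnet
```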

Step 2: Install K3s Control Node

Update the system:

sudo apt update
sudo apt upgrade -y

Install K3s on the control node with VeilNet network configuration. Replace <YOUR_NODE_IP> with the VeilNet IP address assigned to this node:

curl -sfL https://get.k3s.io | sh -s - server --cluster-init \
    --node-ip <YOUR_NODE_IP> \
    --bind-address <YOUR_NODE_IP> \
    --advertise-address <YOUR_NODE_IP> \
    --tls-san <YOUR_NODE_IP> \
    --flannel-iface veilnet \
    --node-name <YOUR_NODE_NAME>

Replace the placeholders:

  • <YOUR_NODE_IP>: The VeilNet IP address of this node (e.g., 10.128.0.1)
  • <YOUR_NODE_NAME>: A name for this node (e.g., master-node-1)
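
With the example values above (10.128.0.1 and master-node-1), the command would look like this:

```shell
# Example: control node install using the sample VeilNet IP and node name.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init \
    --node-ip 10.128.0.1 \
    --bind-address 10.128.0.1 \
    --advertise-address 10.128.0.1 \
    --tls-san 10.128.0.1 \
    --flannel-iface veilnet \
    --node-name master-node-1
```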

Get the node token for joining additional nodes:

sudo cat /var/lib/rancher/k3s/server/node-token

Step 3: Join Additional Server Nodes

To join additional server nodes to form a highly available (HA) cluster, first register each node with VeilNet (as in Step 1), then run:

curl -sfL https://get.k3s.io | K3S_TOKEN=<NODE_TOKEN> sh -s - server \
    --server https://<CONTROL_NODE_IP>:6443 \
    --node-ip <NEW_NODE_IP> \
    --bind-address <NEW_NODE_IP> \
    --advertise-address <NEW_NODE_IP> \
    --tls-san <NEW_NODE_IP> \
    --flannel-iface veilnet \
    --node-name <NEW_NODE_NAME>

Replace the placeholders:

  • <NODE_TOKEN>: The token from the control node
  • <CONTROL_NODE_IP>: The VeilNet IP of the control node (e.g., 10.128.0.1)
  • <NEW_NODE_IP>: The VeilNet IP of the new server node (e.g., 10.128.0.2)
  • <NEW_NODE_NAME>: A name for the new server node (e.g., master-node-2)

Step 4: Join Worker Nodes

To join worker nodes to the cluster, first register each node with VeilNet (as in Step 1), then run:

curl -sfL https://get.k3s.io | K3S_TOKEN=<NODE_TOKEN> sh -s - agent \
    --server https://<CONTROL_NODE_IP>:6443 \
    --node-ip <WORKER_NODE_IP> \
    --flannel-iface veilnet \
    --node-name <WORKER_NODE_NAME>

Replace the placeholders:

  • <NODE_TOKEN>: The token from the control node
  • <CONTROL_NODE_IP>: The VeilNet IP of the control node (e.g., 10.128.0.1)
  • <WORKER_NODE_IP>: The VeilNet IP of the worker node (e.g., 10.128.0.3)
  • <WORKER_NODE_NAME>: A name for the worker node (e.g., worker-node-1)
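
Using the sample values above, a worker join would look like this. The K3S_TOKEN value stays a placeholder; substitute the token read from the control node:

```shell
# Example: worker join using the sample VeilNet IPs and node name above.
# <NODE_TOKEN> is a placeholder -- use the token from the control node.
curl -sfL https://get.k3s.io | K3S_TOKEN=<NODE_TOKEN> sh -s - agent \
    --server https://10.128.0.1:6443 \
    --node-ip 10.128.0.3 \
    --flannel-iface veilnet \
    --node-name worker-node-1
```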

Verification

Verify your cluster is running correctly:

kubectl get nodes
kubectl get pods --all-namespaces
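
To confirm that each node joined with its VeilNet address rather than a public or LAN address, use the wide output, which lists each node's internal IP:

```shell
# INTERNAL-IP should show each node's VeilNet address (e.g. 10.128.0.x).
kubectl get nodes -o wide
```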

Updating VeilNet

To update VeilNet on a node, follow these steps:

  1. Download the new VeilNet Conflux binary
  2. Make it executable:
chmod +x ./veilnet-conflux
  3. Remove the existing VeilNet installation:
sudo ./veilnet-conflux remove
  4. Install the new version:
sudo ./veilnet-conflux install
  5. Reboot the node:
sudo reboot

After rebooting, the node reconnects to both the VeilNet network and the K3s cluster using the updated binary.
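
The update steps above can be collected into a small helper script (a sketch; it assumes the new veilnet-conflux binary has already been downloaded into the working directory):

```shell
#!/bin/sh
# Sketch of the update sequence above. Assumes the new veilnet-conflux
# binary is already present in the current directory.
set -e                            # stop on the first failing step
chmod +x ./veilnet-conflux        # make the new binary executable
sudo ./veilnet-conflux remove     # remove the existing installation
sudo ./veilnet-conflux install    # install the new version
sudo reboot                       # reboot so the node rejoins cleanly
```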

Using VeilNet Conflux as Sidecar for Direct Service Mesh

To achieve direct service mesh connectivity (similar to Docker's network namespace sharing, where containers share the network namespace with veilnet-conflux), you can deploy veilnet-conflux as a sidecar container in your Kubernetes pods. Because all containers in a pod share the same network namespace, your application containers gain direct access to the VeilNet TUN device and can communicate with other services using VeilNet IP addresses.

Example: Deployment with VeilNet Conflux Sidecar

Here's an example manifest that deploys an application with veilnet-conflux as a sidecar:

apiVersion: v1
kind: Secret
metadata:
  name: veilnet-conflux-secret
  namespace: default
type: Opaque
stringData:
  VEILNET_REGISTRATION_TOKEN: <YOUR_REGISTRATION_TOKEN>
  VEILNET_GUARDIAN: <YOUR_GUARDIAN_URL>
  VEILNET_PORTAL: "true"
  VEILNET_CONFLUX_TAG: <YOUR_CONFLUX_TAG>
  VEILNET_CONFLUX_CIDR: <VEILNET_CIDR>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      # VeilNet Conflux sidecar - must be first container
      - name: veilnet-conflux
        image: veilnet/conflux:beta
        imagePullPolicy: Always
        securityContext:
          capabilities:
            add:
              - NET_ADMIN
        volumeMounts:
          - name: dev-net-tun
            mountPath: /dev/net/tun
        envFrom:
          - secretRef:
              name: veilnet-conflux-secret
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      # Your application container
      - name: app
        image: your-app:latest
        ports:
          - containerPort: 8080
            name: http
        # Application shares network namespace with veilnet-conflux
        # Access other services via VeilNet IP addresses
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      volumes:
        - name: dev-net-tun
          hostPath:
            path: /dev/net/tun
            type: CharDevice
      # All containers in the pod share the same network namespace
      # This is the default behavior in Kubernetes
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: default
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
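
Apply the manifest and confirm the pod starts. The filename my-app.yaml is a placeholder for wherever you saved the manifest above:

```shell
# Apply the Secret, Deployment, and Service defined above.
kubectl apply -f my-app.yaml
# Both containers (veilnet-conflux and app) should report Running.
kubectl get pods -l app=my-app
```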

Key Points:

  1. Shared Network Namespace: All containers in a Kubernetes pod share the same network namespace by default, which achieves the same effect as Docker's network_mode: "container:veilnet-conflux".
  2. Sidecar Container: The veilnet-conflux container runs as a sidecar alongside your application container in the same pod.
  3. TUN Device Access: The sidecar needs access to /dev/net/tun and NET_ADMIN capability to create the VeilNet interface.
  4. Environment Variables: Store VeilNet configuration in a Secret and reference it using envFrom.
  5. Service Access: Your application can access other services using their VeilNet IP addresses, just like in the Docker setup.

Accessing Services

Once deployed, your application can:

  • Access services on other pods using their VeilNet IP addresses
  • Use the VeilNet TUN device directly through the shared network namespace
  • Communicate with services across different Kubernetes nodes via VeilNet
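
As a quick check, you can exec into the application container and call a peer service by its VeilNet IP. The address and port below are examples only, not real services:

```shell
# Hypothetical connectivity check from inside the app container:
# 10.128.0.5:8080 stands in for the VeilNet address of a peer service.
kubectl exec deploy/my-app -c app -- curl -s http://10.128.0.5:8080/
```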

Example: Multi-Container Pod with Database

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-with-db
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app-with-db
  template:
    metadata:
      labels:
        app: web-app-with-db
    spec:
      containers:
      # VeilNet Conflux sidecar
      - name: veilnet-conflux
        image: veilnet/conflux:beta
        imagePullPolicy: Always
        securityContext:
          capabilities:
            add:
              - NET_ADMIN
        volumeMounts:
          - name: dev-net-tun
            mountPath: /dev/net/tun
        envFrom:
          - secretRef:
              name: veilnet-conflux-secret
      # Web application
      - name: web-app
        image: nginx:latest
        ports:
          - containerPort: 80
      # Database (shares network namespace)
      - name: database
        image: postgres:15-alpine
        env:
          - name: POSTGRES_DB
            value: mydb
          - name: POSTGRES_USER
            value: user
          - name: POSTGRES_PASSWORD
            value: password
        ports:
          - containerPort: 5432
      volumes:
        - name: dev-net-tun
          hostPath:
            path: /dev/net/tun
            type: CharDevice

In this example, all three containers (veilnet-conflux, web-app, and database) share the same network namespace, allowing them to communicate via localhost while also having access to the VeilNet network.
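
As a sanity check of the shared namespace, you can probe the database port over localhost from the web container (a sketch; it assumes bash is available in the nginx image, which is Debian-based):

```shell
# Open a TCP connection to PostgreSQL on localhost from the web-app
# container, using bash's /dev/tcp pseudo-device.
kubectl exec deploy/web-app-with-db -c web-app -- bash -c \
  'exec 3<>/dev/tcp/127.0.0.1/5432 && echo "postgres reachable via localhost"'
```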

FAQ

Do I need to configure a sub-router?

No, you do not need to configure a sub-router. VeilNet handles all the networking automatically, including routing between nodes across different regions.

Do I need to configure firewall rules or Flannel VXLAN settings?

No, you do not need to configure firewall rules or Flannel VXLAN settings. VeilNet manages the network layer, and by specifying --flannel-iface veilnet during K3s installation, Flannel will use the VeilNet interface automatically without requiring additional VXLAN configuration.

Can I use Longhorn for distributed storage?

We do not recommend using Longhorn for distributed storage unless all nodes are in the same local network. Longhorn has strict latency requirements that may not be met when nodes are distributed across different regions or have higher network latency. For multi-region deployments, consider using other storage solutions that are designed for higher latency environments.

Should I use VeilNet even if all my nodes are local?

Yes, you can still use VeilNet for your cluster even if all nodes are on the same local network. VeilNet provides additional security by encrypting all traffic between nodes and can help isolate your cluster traffic from other network traffic on the same physical network.