This guide walks you through setting up a K3s Kubernetes cluster that uses VeilNet for networking across multiple nodes.
First, make the VeilNet Conflux binary executable:

chmod +x ./veilnet-conflux

Then register the node with VeilNet:
sudo ./veilnet-conflux register \
-t <YOUR_VEILNET_TOKEN> \
--cidr <YOUR_CIDR> \
--tag <YOUR_TAG> \
-p
Replace the placeholders:
<YOUR_VEILNET_TOKEN>: Your VeilNet registration token
<YOUR_CIDR>: The CIDR block for this node (e.g., 10.128.0.1/16)
<YOUR_TAG>: A tag to identify this node (e.g., master-node-1)

Check the VeilNet service logs:
journalctl -u veilnet -f
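Once the service is up, you can also confirm that the VeilNet network interface exists. A minimal check, assuming the interface is named veilnet (the name used with --flannel-iface later in this guide):

```shell
# Check for the VeilNet network interface via sysfs (works without iproute2).
# Assumption: the interface created by veilnet-conflux is named "veilnet".
if [ -d /sys/class/net/veilnet ]; then
  status="present"
else
  status="missing"
fi
echo "veilnet interface: $status"
```

If the interface is missing, re-check the registration command and the service logs above.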
Update the system:
sudo apt update
sudo apt upgrade -y
Install K3s on the control node with VeilNet network configuration. Replace <YOUR_NODE_IP> with the VeilNet IP address assigned to this node:
curl -sfL https://get.k3s.io | sh -s - server --cluster-init \
--node-ip <YOUR_NODE_IP> \
--bind-address <YOUR_NODE_IP> \
--advertise-address <YOUR_NODE_IP> \
--tls-san <YOUR_NODE_IP> \
--flannel-iface veilnet \
--node-name <YOUR_NODE_NAME>
Replace the placeholders:
<YOUR_NODE_IP>: The VeilNet IP address of this node (e.g., 10.128.0.1)
<YOUR_NODE_NAME>: A name for this node (e.g., master-node-1)

Get the node token for joining additional nodes:
sudo cat /var/lib/rancher/k3s/server/node-token
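For scripted setups, you can wrap this in a small helper that fails loudly when the token file is unreadable. A sketch, assuming the default K3s server data directory shown above:

```shell
# Hedged helper: print the K3s join token, or fail with a hint.
# The path is the default K3s server data directory used above.
read_node_token() {
  token_file=/var/lib/rancher/k3s/server/node-token
  if [ -r "$token_file" ]; then
    cat "$token_file"
  else
    echo "token file not readable: $token_file (run with sudo on the control node)" >&2
    return 1
  fi
}
```

You can then capture the value with NODE_TOKEN=$(read_node_token) before running the join commands.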
To join additional server nodes to form an HA cluster, first register each node with VeilNet (as in Step 1), then run:
curl -sfL https://get.k3s.io | K3S_TOKEN=<NODE_TOKEN> sh -s - server \
--server https://<CONTROL_NODE_IP>:6443 \
--node-ip <NEW_NODE_IP> \
--bind-address <NEW_NODE_IP> \
--advertise-address <NEW_NODE_IP> \
--tls-san <NEW_NODE_IP> \
--flannel-iface veilnet \
--node-name <NEW_NODE_NAME>
Replace the placeholders:
<NODE_TOKEN>: The token from the control node
<CONTROL_NODE_IP>: The VeilNet IP of the control node (e.g., 10.128.0.1)
<NEW_NODE_IP>: The VeilNet IP of the new server node (e.g., 10.128.0.2)
<NEW_NODE_NAME>: A name for the new server node (e.g., master-node-2)

To join worker nodes to the cluster, first register each node with VeilNet (as in Step 1), then run:
curl -sfL https://get.k3s.io | K3S_TOKEN=<NODE_TOKEN> sh -s - agent \
--server https://<CONTROL_NODE_IP>:6443 \
--node-ip <WORKER_NODE_IP> \
--flannel-iface veilnet \
--node-name <WORKER_NODE_NAME>
Replace the placeholders:
<NODE_TOKEN>: The token from the control node
<CONTROL_NODE_IP>: The VeilNet IP of the control node (e.g., 10.128.0.1)
<WORKER_NODE_IP>: The VeilNet IP of the worker node (e.g., 10.128.0.3)
<WORKER_NODE_NAME>: A name for the worker node (e.g., worker-node-1)

Verify your cluster is running correctly:
kubectl get nodes
kubectl get pods --all-namespaces
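For a quick scripted health check, you can pipe the node list through a small awk filter. This helper is a sketch, not part of K3s; it assumes the standard `kubectl get nodes --no-headers` column layout (name, status, roles, age, version):

```shell
# Print any node whose STATUS column is not exactly "Ready";
# exit 0 only when every node is Ready.
all_nodes_ready() {
  awk '$2 != "Ready" { print "NOT READY: " $1; bad = 1 } END { exit bad }'
}

# Usage: kubectl get nodes --no-headers | all_nodes_ready && echo "all nodes Ready"
```

Note that this also flags transitional statuses such as NotReady or Ready,SchedulingDisabled, which is usually what you want in a health check.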
To update VeilNet on a node, download the new binary and follow these steps:
chmod +x ./veilnet-conflux
sudo ./veilnet-conflux remove
sudo ./veilnet-conflux install
sudo reboot
After rebooting, the node reconnects to both the VeilNet network (now running the updated binary) and the K3s cluster.
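The update steps above can be bundled into a small helper. This is a sketch under the assumption that the new binary has already been downloaded to the current directory; the remove/install subcommands are the ones shown above:

```shell
# Hedged update helper: re-installs veilnet-conflux from a freshly
# downloaded binary, then reboots so the node rejoins VeilNet and K3s.
update_conflux() {
  bin=${1:-./veilnet-conflux}
  if [ ! -f "$bin" ]; then
    echo "binary not found: $bin (download it first)" >&2
    return 1
  fi
  chmod +x "$bin"
  sudo "$bin" remove
  sudo "$bin" install
  sudo reboot
}
```

Run it as update_conflux ./veilnet-conflux on each node, one node at a time, so the cluster keeps quorum during the rolling update.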
To achieve direct service mesh connectivity (similar to Docker's network namespace sharing, where containers share the network namespace with veilnet-conflux), you can deploy veilnet-conflux as a sidecar container in your Kubernetes pods. Because all containers in a pod share one network namespace, your application containers get direct access to the VeilNet TUN device and can communicate with other services using VeilNet IP addresses.
Here's an example manifest that deploys an application with veilnet-conflux as a sidecar:
apiVersion: v1
kind: Secret
metadata:
  name: veilnet-conflux-secret
  namespace: default
type: Opaque
stringData:
  VEILNET_REGISTRATION_TOKEN: <YOUR_REGISTRATION_TOKEN>
  VEILNET_GUARDIAN: <YOUR_GUARDIAN_URL>
  VEILNET_PORTAL: "true"
  VEILNET_CONFLUX_TAG: <YOUR_CONFLUX_TAG>
  VEILNET_CONFLUX_CIDR: <VEILNET_CIDR>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        # VeilNet Conflux sidecar - must be first container
        - name: veilnet-conflux
          image: veilnet/conflux:beta
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          volumeMounts:
            - name: dev-net-tun
              mountPath: /dev/net/tun
          envFrom:
            - secretRef:
                name: veilnet-conflux-secret
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
        # Your application container
        - name: app
          image: your-app:latest
          ports:
            - containerPort: 8080
              name: http
          # Application shares network namespace with veilnet-conflux
          # Access other services via VeilNet IP addresses
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
      volumes:
        - name: dev-net-tun
          hostPath:
            path: /dev/net/tun
            type: CharDevice
      # All containers in the pod share the same network namespace
      # This is the default behavior in Kubernetes
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: default
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
Key points about this setup:

This mirrors Docker's network_mode: "container:veilnet-conflux" pattern: the veilnet-conflux container runs as a sidecar alongside your application container in the same pod.
The sidecar requires access to /dev/net/tun and the NET_ADMIN capability to create the VeilNet interface.
Its configuration is supplied from the Secret via envFrom.

Once deployed, your application can reach other services directly by their VeilNet IP addresses, since it shares the sidecar's network namespace. The same pattern extends to multiple application containers in one pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-with-db
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app-with-db
  template:
    metadata:
      labels:
        app: web-app-with-db
    spec:
      containers:
        # VeilNet Conflux sidecar
        - name: veilnet-conflux
          image: veilnet/conflux:beta
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          volumeMounts:
            - name: dev-net-tun
              mountPath: /dev/net/tun
          envFrom:
            - secretRef:
                name: veilnet-conflux-secret
        # Web application
        - name: web-app
          image: nginx:latest
          ports:
            - containerPort: 80
        # Database (shares network namespace)
        - name: database
          image: postgres:15-alpine
          env:
            - name: POSTGRES_DB
              value: mydb
            - name: POSTGRES_USER
              value: user
            - name: POSTGRES_PASSWORD
              value: password
          ports:
            - containerPort: 5432
      volumes:
        - name: dev-net-tun
          hostPath:
            path: /dev/net/tun
            type: CharDevice
In this example, all three containers (veilnet-conflux, web-app, and database) share the same network namespace, allowing them to communicate via localhost while also having access to the VeilNet network.
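Because of the shared namespace, the web app would reach the database on localhost rather than through a Service. A hypothetical environment entry for the web-app container, matching the credentials in the example manifest above:

```yaml
# Hypothetical env entry for the web-app container; not required by VeilNet.
env:
  - name: DATABASE_URL
    value: postgresql://user:password@localhost:5432/mydb
```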
No, you do not need to configure a sub-router. VeilNet handles all the networking automatically, including routing between nodes across different regions.
No, you do not need to configure firewall rules or Flannel VXLAN settings. VeilNet manages the network layer, and by specifying --flannel-iface veilnet during K3s installation, Flannel will use the VeilNet interface automatically without requiring additional VXLAN configuration.
We do not recommend using Longhorn for distributed storage unless all nodes are in the same local network. Longhorn has strict latency requirements that may not be met when nodes are distributed across different regions or have higher network latency. For multi-region deployments, consider using other storage solutions that are designed for higher latency environments.
Yes, you can still use VeilNet for your cluster even if all nodes are on the same local network. VeilNet provides additional security by encrypting all traffic between nodes and can help isolate your cluster traffic from other network traffic on the same physical network.