This guide shows you how to deploy a private AI agent stack using:

- VeilNet Conflux for secure private networking
- Ollama for running local AI models
- Open WebUI as a chat interface for Ollama
- n8n for workflow automation

With VeilNet, you can securely access your private AI agent from anywhere without exposing services to the public internet.
Create a docker-compose.yml file with the following configuration:
services:
  veilnet-conflux:
    container_name: veilnet-conflux
    restart: unless-stopped
    env_file:
      - .env
    image: veilnet/conflux:beta
    pull_policy: always
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    network_mode: host

  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama:/root/.ollama
    network_mode: "container:veilnet-conflux"
    depends_on:
      - veilnet-conflux

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    volumes:
      - open-webui:/app/backend/data
    network_mode: "container:veilnet-conflux"
    depends_on:
      - veilnet-conflux
      - ollama
    environment:
      - OLLAMA_BASE_URL=http://localhost:11434

  n8n:
    image: n8nio/n8n:latest
    volumes:
      - n8n:/home/node/.n8n
    network_mode: "container:veilnet-conflux"
    depends_on:
      - veilnet-conflux
    environment:
      - N8N_SECURE_COOKIE=false

volumes:
  ollama:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./ollama-data
  open-webui:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./open-webui-data
  n8n:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./n8n-data
Create a .env file in the same directory as your docker-compose.yml with the following variables:
VEILNET_REGISTRATION_TOKEN=<YOUR_REGISTRATION_TOKEN>
VEILNET_GUARDIAN=<YOUR_GUARDIAN_URL>
VEILNET_PORTAL=true
VEILNET_CONFLUX_TAG=<YOUR_CONFLUX_TAG>
VEILNET_CONFLUX_CIDR=<VEILNET_CIDR>
Replace the placeholders:

- <YOUR_REGISTRATION_TOKEN>: Your VeilNet registration token (obtained from the VeilNet portal)
- <YOUR_GUARDIAN_URL>: The URL of your VeilNet Guardian service (e.g., https://guardian.veilnet.app)
- <YOUR_CONFLUX_TAG>: A tag to identify this Conflux instance (e.g., ai-agent-server)
- <VEILNET_CIDR>: Any IP address (e.g., 10.128.0.5/16) in CIDR format that belongs to the realm subnet (e.g., 10.128.0.0/16)

Create the directories for persistent data storage:
mkdir -p ollama-data open-webui-data n8n-data
These directories will store:
- ollama-data: Downloaded AI models and Ollama configuration
- open-webui-data: Open WebUI user data and conversations
- n8n-data: n8n workflows and credentials

Start all services:
docker-compose up -d
This will download any missing images and start all four containers in the background (detached mode).
Check that all containers are running:
docker-compose ps
View the VeilNet Conflux logs to verify it's connecting:
docker logs veilnet-conflux -f
You should see logs indicating successful registration and connection to the VeilNet network.
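Beyond eyeballing the logs, a small script can confirm that each service port is actually reachable. This is a sketch, not part of the stack; the ports are the defaults used throughout this guide:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Once the stack is up, these should all be reachable locally:
for name, port in [("Open WebUI", 3000), ("n8n", 5678), ("Ollama", 11434)]:
    status = "up" if port_open("localhost", port) else "down"
    print(f"{name}: {status}")
```

The same check works over VeilNet by substituting the host's VeilNet IP for localhost.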
Once Ollama is running, download the AI models you want to use (docker-compose generates the Ollama container's name, so check it with docker ps first):

docker exec -it <ollama-container-name> ollama pull llama2
Or download other models like:
- llama2 - Meta's Llama 2 model
- mistral - Mistral AI model
- codellama - Code-focused Llama model
- phi - Microsoft's Phi model

You can also download models through the Open WebUI interface.
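To see which models are already downloaded, the Ollama API exposes a /api/tags endpoint that returns a JSON list of installed models. A minimal parsing sketch (the sample payload below is illustrative, not real output):

```python
import json
import urllib.request

def list_model_names(tags_json: str) -> list[str]:
    """Extract model names from an Ollama /api/tags response body."""
    payload = json.loads(tags_json)
    return [m["name"] for m in payload.get("models", [])]

# Against a live host (requires the stack to be running):
# with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
#     print(list_model_names(resp.read().decode()))

sample = '{"models": [{"name": "llama2:latest"}, {"name": "mistral:latest"}]}'
print(list_model_names(sample))  # ['llama2:latest', 'mistral:latest']
```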
Once the services are running, you can access them locally:

- Open WebUI: http://localhost:3000
- n8n: http://localhost:5678
- Ollama API: http://localhost:11434

With VeilNet configured, you can access these services remotely from anywhere in the world using the host's VeilNet IP address, as long as your device is also connected to the same VeilNet realm.
To find the host's VeilNet IP address, inspect the VeilNet interface:

ip addr show veilnet
Or check the VeilNet portal to see your assigned IP address.
- Open WebUI: http://<veilnet-ip>:3000
- n8n: http://<veilnet-ip>:5678
- Ollama API: http://<veilnet-ip>:11434

For example, if your host has VeilNet IP 10.128.0.5, you can access Open WebUI from anywhere using http://10.128.0.5:3000, as long as your device is connected to VeilNet.
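The rule from the .env step, that the address in VEILNET_CONFLUX_CIDR must fall inside the realm subnet, can be sanity-checked with Python's standard ipaddress module. A sketch using the example addresses from this guide:

```python
import ipaddress

def in_realm(conflux_cidr: str, realm_subnet: str) -> bool:
    """Check that the host part of a CIDR-notated address lies inside the realm subnet."""
    host = ipaddress.ip_interface(conflux_cidr).ip
    return host in ipaddress.ip_network(realm_subnet)

print(in_realm("10.128.0.5/16", "10.128.0.0/16"))  # True
print(in_realm("10.200.0.5/16", "10.128.0.0/16"))  # False
```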
Open WebUI

- Local Access: http://localhost:3000
- Remote Access via VeilNet: http://<veilnet-ip>:3000 (replace <veilnet-ip> with your host's VeilNet IP, e.g., http://10.128.0.5:3000)

n8n

- Local Access: http://localhost:5678
- Remote Access via VeilNet: http://<veilnet-ip>:5678 (replace <veilnet-ip> with your host's VeilNet IP, e.g., http://10.128.0.5:5678)

Ollama

Local Access:
You can interact with Ollama directly via its REST API:
curl http://localhost:11434/api/generate -d '{
"model": "llama2",
"prompt": "Why is the sky blue?",
"stream": false
}'
Remote Access via VeilNet:
Access the Ollama API remotely using the VeilNet IP:
curl http://<veilnet-ip>:11434/api/generate -d '{
"model": "llama2",
"prompt": "Why is the sky blue?",
"stream": false
}'
Replace <veilnet-ip> with your host's VeilNet IP (e.g., http://10.128.0.5:11434). This works from any device connected to VeilNet.
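The curl calls above can also be wrapped in a small Python client. This is a sketch using only the standard library; the base URL is the local default and should be swapped for the VeilNet IP when connecting remotely:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # or http://<veilnet-ip>:11434 over VeilNet

def build_generate_request(model: str, prompt: str, stream: bool = False) -> bytes:
    """Serialize the JSON body expected by Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

def generate(model: str, prompt: str) -> str:
    """POST a prompt to /api/generate and return the model's response text."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama with the model pulled:
# print(generate("llama2", "Why is the sky blue?"))
```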
To update to newer versions:
docker-compose pull
docker-compose up -d
This will pull the latest images and restart the containers with updated versions.
To stop all services:
docker-compose down
To remove containers and volumes (this will delete all data):
docker-compose down -v
Warning: Removing volumes will delete all downloaded models, conversations, and workflows. Make sure to back up important data before removing volumes.
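Since the volumes are bind mounts, backing up is just archiving the three data directories. A minimal sketch (run from the compose project directory; directory names match the mkdir step earlier):

```python
import os
import shutil
from datetime import date

def backup_dirs(dirs: list[str]) -> list[str]:
    """Archive each existing data directory; return the paths of the archives created."""
    archives = []
    for d in dirs:
        if os.path.isdir(d):  # skip directories that don't exist
            archives.append(
                shutil.make_archive(f"{d}-backup-{date.today()}", "gztar", root_dir=d)
            )
    return archives

# Run before `docker-compose down -v`:
print(backup_dirs(["ollama-data", "open-webui-data", "n8n-data"]))
```

Stopping the containers first (docker-compose down, without -v) avoids archiving files mid-write.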
How much disk space do I need?

AI models can be large (several GB each). Plan for at least 20-50 GB of free space, depending on how many models you want to download. The ollama-data directory will grow as you download more models.
Can I access the services from my phone?

Yes! Once your phone is connected to the same VeilNet realm, you can access Open WebUI and n8n using the host's VeilNet IP address from anywhere. For example, if your server has VeilNet IP 10.128.0.5, you can access Open WebUI on your phone using http://10.128.0.5:3000 from any location, as long as your phone is connected to VeilNet. Since all containers share the network namespace with veilnet-conflux, they can also use the VeilNet TUN device for optimal network performance.
How do I share access with my team?

Add team members to the same VeilNet realm through the VeilNet portal. Once they're connected, they can access the services using the host's VeilNet IP address from anywhere in the world. They don't need to be on the same local network: as long as both devices are connected to VeilNet, they can access the services remotely.
Can I run this stack on multiple servers?

Yes! You can deploy this stack on multiple servers, each with VeilNet Conflux configured. Each server will have its own AI models and services, and you can access all of them through VeilNet using their respective VeilNet IP addresses from anywhere, as long as your device is connected to VeilNet.
Why NET_ADMIN instead of privileged mode?

The NET_ADMIN capability provides only the permissions VeilNet needs to create and manage network interfaces, without granting full privileged access. This is more secure than running the container with --privileged while still allowing VeilNet to function properly.