Build Your Own Private AI Infrastructure in 9 Simple Steps
In this article, you will learn how to build your own private multi-AI agent infrastructure. This setup provides you with private LLM automation workflows, databases, and AI models that never send your data to external servers. By the end of this guide, you will have your own private AI chat interface similar to ChatGPT, a professional database system, automation workflows connecting multiple AI agents, and local AI models running on your own server—all accessible through professional domains with SSL certificates. No coding experience is required, as we will use powerful tools to handle the technical complexity.
Why Build a Private AI Cloud?
You might wonder why we are building this on a cloud VPS instead of a local computer. This approach offers several distinct advantages:
- 24/7 Accessibility: Your AI infrastructure runs continuously and can be accessed from anywhere—your phone, laptop, or any device with an internet connection.
- Professional Setup: You get real domains, SSL certificates, and enterprise-grade infrastructure that looks and feels like a commercial product.
- Multi-User Access: Multiple people can access and use your infrastructure simultaneously.
- Complete Privacy: Even though it's in the cloud, it is your private server. No data is ever sent to OpenAI, Anthropic, or other third-party AI companies.
This setup has numerous real-world business applications, such as automated customer support, AI writing assistants for content creators, secure document processing for businesses, and cost-free prototyping for developers.
The Technology Stack
We will be using the Local AI package from GitHub, an incredible open-source project that bundles together several powerful tools:
- N8N: For creating complex automation workflows between AI agents.
- Flowise: For building conversational AI and chatbots.
- Supabase: A complete database system for storing all your data.
- Ollama: For running AI models locally on your server.
- Open WebUI: A ChatGPT-like interface for interacting with your AI.
The project is pre-configured to ensure all components work together seamlessly, eliminating the need for deep development expertise.
Step 1: Set Up a Powerful VPS Server
For this project, a reliable Virtual Private Server (VPS) is necessary to host our AI infrastructure. A VPS provides the dedicated resources crucial for running AI models effectively.
We recommend a KVM-based VPS plan for its dedicated resources. A good starting point is a plan with at least four vCPU cores, 16 GB of RAM, and 200 GB of NVMe disk space. This is often labeled as a "KVM 4" plan or similar.
When setting up your VPS, follow these general steps:
1. Choose a server location closest to you for optimal performance.
2. Select Ubuntu as the operating system (the latest LTS release is ideal).
3. Create a secure root password for your server. You can also use an SSH key for enhanced security.
Once your VPS is provisioned, you will receive an IP address. This is your server's address on the internet. Most VPS providers offer a browser-based terminal for easy access, which we will use for the following steps.
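If you prefer a regular terminal over the browser-based one, you can also connect over SSH. A minimal example, assuming root login is enabled and substituting the IP address your provider gave you:
# Log in to the server as root (replace with your actual IP)
ssh root@YOUR_SERVER_IP
Either connection method works for the rest of this guide.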
Step 2: Update the System and Install Tools
First, connect to your server via the browser terminal. We will start by refreshing the system's package index and installing a few essential tools: git, curl, and the nano text editor. This ensures the server knows about the latest available package versions before we install anything.
Copy and paste the following command into your terminal:
apt-get update && apt-get install -y git curl nano
This command will take a moment to complete.
Step 3: Configure the Server Firewall
A firewall acts as a security guard for your server. Without one, incoming traffic is not filtered at all, so any service listening on the machine is immediately exposed to the internet. We need to block everything except the traffic our specific AI services require.
Run the following commands to set up the firewall rules:
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
The terminal will warn you that enabling the firewall may disrupt existing SSH connections. Type `y` and press Enter to proceed. This configuration blocks all incoming connections except standard web traffic (HTTP/HTTPS) and your management connection (SSH), significantly reducing the server's attack surface.
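Before moving on, you can confirm the rules took effect. This is just a sanity check, not a required step:
# Show the active firewall configuration
ufw status verbose
The output should list ports 22, 80, and 443 as allowed, with incoming traffic denied by default.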
Step 4: Install Docker
Docker is a containerization system that packages each AI service with all its dependencies, allowing them to run in isolated environments. This prevents conflicts, enhances security, and makes managing the services (starting, stopping, updating) much easier.
Execute the following script to install Docker and Docker Compose:
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
This script installs the Docker Engine along with the Docker Compose plugin, which we will use to manage our multi-service application.
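To confirm everything installed correctly, you can check that both commands respond. This is an optional sanity check:
# Print the installed Docker Engine and Compose plugin versions
docker --version
docker compose version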
Step 5: Download the Local AI Package
Next, we will download the pre-configured Local AI package. This repository contains all the necessary Docker setups for the AI services we need.
Clone the repository with this command:
git clone https://github.com/collinmadina/local-ai-packaged.git
After the download is complete, you will have a new directory named `local-ai-packaged` containing the deployment scripts and configuration templates.
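You can take a quick look inside the new directory to confirm the clone succeeded (the exact file listing depends on the repository's current state):
# Show the contents of the cloned repository
ls local-ai-packaged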
Step 6: Configure Environment Variables
This is one of the most critical steps. We need to set up accounts, passwords, and security settings for each AI service. These are stored in an environment file.
First, copy the example environment file to a new `.env` file:
cd local-ai-packaged
cp .env.example .env
Now, open the file for editing using the nano text editor:
nano .env
This file contains placeholders for all the necessary credentials. You must replace them with your own secure, unique values. You will need to define usernames, passwords, encryption keys, and other secrets.
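Rather than inventing secrets by hand, you can generate strong random values with the openssl tool that ships with Ubuntu:
# Generate a 64-character random hex string; run once per secret
openssl rand -hex 32
Run it once for each key or password and paste each result into the corresponding line of the file.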
Important: You will also need to configure your domain names in this file. Purchase a domain from any registrar. In the `.env` file, you will set the hostnames for each service, such as `n8n.yourdomain.com`, `chat.yourdomain.com`, and `db.yourdomain.com`.
Here is an example of what the `.env` file structure looks like. Fill in your details accordingly.
# N8N Settings
N8N_ENCRYPTION_KEY=yoursupersecretn8nencryptionkey
N8N_BASIC_AUTH_USER=yourn8nusername
N8N_BASIC_AUTH_PASSWORD=yourn8npassword
# Supabase Settings
POSTGRES_PASSWORD=yoursupersecretpostgrespassword
JWT_SECRET=yoursupersecretjwtsecret
ANON_KEY=yoursupersecretanonkey
SERVICE_ROLE_KEY=yoursupersecretservicekey
# Domain Settings
DOMAIN_NAME=yourdomain.com
N8N_HOST=n8n
CHAT_HOST=chat
DB_HOST=db
# SSL Settings
TRAEFIK_EMAIL=[email protected]
# ... and other settings
Use the arrow keys to navigate the file in nano. Once you have filled in all your credentials and domain information, press `Ctrl+O` to write the changes, `Enter` to confirm, and `Ctrl+X` to exit the editor.
Step 7: Point Your Domains to the Server
Now that the services are configured with subdomains, you need to point those subdomains to your server's IP address. In your domain registrar's DNS management panel, create `A` records for each service.
For each subdomain, create a new `A` record with the following details:
* Type: `A`
* Name/Host: The subdomain (e.g., `chat`, `n8n`, `db`)
* Value/Points to: Your server's IP address.
Example DNS `A` records:
* `chat.yourdomain.com` -> YOUR_SERVER_IP
* `n8n.yourdomain.com` -> YOUR_SERVER_IP
* `db.yourdomain.com` -> YOUR_SERVER_IP
* `flowise.yourdomain.com` -> YOUR_SERVER_IP
DNS changes can take a few minutes to propagate. You can test whether propagation is complete by using the `dig` command:
dig chat.yourdomain.com
If it returns your server's IP in the "ANSWER SECTION," you are ready to proceed.
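To check every subdomain at once, you can loop over them with dig's short output mode (adjust the list to match the hostnames you chose):
# Print the resolved IP for each subdomain
for sub in chat n8n db flowise; do
  echo "$sub: $(dig +short $sub.yourdomain.com)"
done
Each line should print your server's IP once propagation is complete.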
Step 8: Deploy the Infrastructure
With all configurations in place, it's time to deploy the entire infrastructure. The repository includes an automated script that handles this.
Run the deployment script:
python3 start-services.py
This script will pull all the necessary Docker images and start each service in its own container. This process will take some time as it downloads several large images. Once it's finished, all your AI services will be running.
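You can confirm that every container came up by listing their status from the local-ai-packaged directory:
# List all services and their current state
docker compose ps
Each service should report a running status. If one keeps restarting, inspect its output with docker compose logs followed by the service name.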
Step 9: Access and Verify Your Services
Your private AI infrastructure is now live! You can access the different components through the URLs you configured.
N8N Automation:
Open `https://n8n.yourdomain.com`. You will be prompted to set up an owner account. Once logged in, you will find pre-built workflow templates for various AI tasks.
Open WebUI (Chat Interface):
Open `https://chat.yourdomain.com`. Create an admin account to log in. Initially, there will be no models available.
To add models, return to your server terminal and run the following commands from the `local-ai-packaged` directory to download popular open-source models like Llama 3.2 and Qwen 2.5:
docker compose exec ollama ollama pull llama3.2
docker compose exec ollama ollama pull qwen2.5:7b
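You can verify that the models downloaded successfully by listing everything Ollama has stored:
# List all models available inside the Ollama container
docker compose exec ollama ollama list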
After the downloads are complete, you may need to restart the Web UI container so it recognizes the new models:
docker compose stop open-webui
docker compose rm -f open-webui
docker compose up -d open-webui
Refresh the chat UI page, and you should now see the new models available for use in the dropdown menu.
Supabase Database:
Access your database management interface at `https://db.yourdomain.com`. Log in using the `postgres` user and the `POSTGRES_PASSWORD` you set in your `.env` file. Here, you can manage your database tables and data.
Conclusion
You have successfully set up your own private AI agent infrastructure. You can now build custom automation workflows in N8N, interact with private LLMs through your chat interface, and manage all your data securely in Supabase. This powerful, private setup opens up a world of possibilities for building sophisticated AI applications without compromising on data privacy or incurring high API costs.