The Alquimia stack runs entirely via Docker Compose. No local builds, no dependency management — just pull and run.

Prerequisites

  • Docker Desktop (v24+) or Docker Engine + Docker Compose v2
  • Git
  • 8 GB RAM minimum (16 GB recommended if using local models with Ollama)
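A quick way to confirm the tooling prerequisites are in place before cloning (a sketch; it only checks that the commands exist, not their versions):

```shell
# Sanity-check the prerequisites above. Note that Compose v2 is the
# `docker compose` plugin (with a space), not the legacy `docker-compose`.
check_cmds() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
  done
}

check_cmds git docker
docker compose version >/dev/null 2>&1 || echo "Docker Compose v2 plugin not found"
```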

Quick start

git clone https://github.com/alquimia-ai/alquimia-local.git
cd alquimia-local
cp .env.example .env
docker compose up -d
Studio is now running at http://localhost:3001.
On first startup, give the stack 30–60 seconds to finish bootstrapping before logging in.
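If you want to script that wait instead of guessing, a small polling helper works (a sketch; the URL is the Studio default above, and 30 attempts at 2-second intervals covers the 30–60 second bootstrap window):

```shell
# Poll a URL until it responds, up to a bounded number of attempts.
wait_for() {
  url=$1
  attempts=${2:-30}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Usage: wait_for http://localhost:3001 && echo "Studio is ready"
```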
If your .env uses the Lite auth strategy for Studio or InsightHub, the default credentials are:
Field      Value
Username   admin
Password   admin
Change the default Lite credentials before exposing the stack to any network. Lite auth is intended for local development only.
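Overriding them is a one-line-per-field change in .env. The variable names below are hypothetical (check .env.example for the ones your version actually defines); only the idea is fixed: never ship the admin/admin defaults.

```
# Hypothetical .env fragment — the real variable names live in .env.example.
# Replace both values before exposing the stack beyond localhost.
STUDIO_ADMIN_USERNAME=changeme
STUDIO_ADMIN_PASSWORD=a-long-random-password
```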

Cloud and custom models

Studio is where you register cloud LLMs (Anthropic, OpenAI, Groq, etc.): open Settings → Models Registry, add each model, and paste its API key. Credentials are stored as secrets. Use the Base URL field when pointing at OpenAI-compatible gateways, proxies, or self-hosted endpoints; you are not limited to a fixed list baked into .env. The optional Ollama workflow below still uses Compose for local pull-and-register (see Local models with Ollama).

Local models with Ollama

The stack includes optional Ollama support for running inference locally (CPU-based):
# Add to .env before starting
OLLAMA_MODELS=qwen2.5:0.5b nomic-embed-text

# Start with the local models profile
docker compose --profile with-local-models up -d
Naming convention for OLLAMA_MODELS:
  • Models with embed in the name → registered as Embeddings Models in Studio
  • All others → registered as chat models
Ollama runs on CPU inside Docker. Models larger than 3B parameters require significant RAM. Recommended: use 0.5B–3B models for local development.
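The naming rule above amounts to a substring check on each entry in OLLAMA_MODELS. The helper below is purely illustrative (it is not part of the stack), assuming Studio matches embed anywhere in the model name:

```shell
# Illustrative only: mirrors the OLLAMA_MODELS naming convention.
# Names containing "embed" register as embeddings models; the rest as chat.
classify_model() {
  case "$1" in
    *embed*) echo "embeddings" ;;
    *)       echo "chat" ;;
  esac
}

for m in qwen2.5:0.5b nomic-embed-text; do
  printf '%s -> %s\n' "$m" "$(classify_model "$m")"
done
# qwen2.5:0.5b -> chat
# nomic-embed-text -> embeddings
```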

Adding Ollama models without restarting

# Pull a new model into the running Ollama container
docker compose exec ollama ollama pull llama3.2:3b

# Re-run studio-init to register it in Studio
docker compose --profile with-local-models run --rm studio-init

Useful commands

# View logs for all services
docker compose logs -f

# View logs for a specific service
docker compose logs -f studio

# Stop all services (data is preserved)
docker compose down

# Stop and delete all data (volumes)
docker compose down -v

# Restart a single service
docker compose restart studio

Next steps

First login

Log in to Studio and take the onboarding tour.