To run your FastAPI backend in one container and connect it to Ollama (running either locally or in another container), here's a simple docker-compose.yml setup.
✅ docker-compose.yml
version: "3.9"

services:
  backend:
    build: .
    container_name: company-recommender-backend
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_URL=http://ollama:11434   # backend talks to the ollama service
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama   # persist downloaded models

volumes:
  ollama_data:
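Because the backend service uses build: ., Compose expects a Dockerfile in the project root. A minimal sketch, assuming your dependencies (including fastapi and uvicorn) are listed in requirements.txt and the app object lives in main.py:

# Assumes requirements.txt and main.py sit in the project root
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Bind to 0.0.0.0 so the published port 8000 is reachable from the host
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]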
✅ Changes in your main.py
Right now the Ollama URL is hardcoded:
ollama_url = "https://immai.acintia.com"
Update this to read from an environment variable so it works inside Docker:
import os
ollama_url = os.getenv("OLLAMA_URL", "http://localhost:11434")
That way:
- If you run with docker-compose, it will connect to the ollama container.
- If you run locally without Docker, it defaults to http://localhost:11434.
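Putting it together, here is a minimal sketch of how main.py could call Ollama through that variable. The /recommend route and the model name llama3 are assumptions for illustration; swap in whatever your backend actually exposes and whichever model you pulled:

import os

import requests
from fastapi import FastAPI

app = FastAPI()
ollama_url = os.getenv("OLLAMA_URL", "http://localhost:11434")

@app.get("/recommend")
def recommend(prompt: str):
    # Ollama's /api/generate returns the whole completion in one JSON response when stream is False
    resp = requests.post(
        f"{ollama_url}/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},  # model name is an assumption
        timeout=120,
    )
    resp.raise_for_status()
    return {"answer": resp.json()["response"]}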
✅ Usage
From your project root:
docker-compose up --build
Then visit:
http://localhost:8000
Your backend will talk to the Ollama container automatically 🎯.
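One thing to keep in mind: a fresh Ollama container has no models downloaded, so pull one once (it is kept in the ollama_data volume across restarts). The model name here is just an example:

docker exec -it ollama ollama pull llama3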