Sunday, October 5, 2025

Ready-made Bash script for Docker workflow

Ready-made Bash script that automates your Docker workflow end-to-end:

  • Build backend & frontend images

  • Tag & push them to Docker Hub

  • Pull & run images on any machine

  • Set up Certbot HTTPS via a temporary NGINX container

Below is a complete script. You just need to edit your Docker Hub username, email, and domain at the top.


deploy_docker.sh

#!/bin/bash
set -e

# -----------------------------
# CONFIGURATION - EDIT THESE
# -----------------------------
DOCKERHUB_USER="yourhubusername"
DOMAIN="immai.acintia.com"
EMAIL="your-email@domain.com"

BACKEND_IMAGE="${DOCKERHUB_USER}/ollama-backend:latest"
FRONTEND_IMAGE="${DOCKERHUB_USER}/ollama-frontend:latest"

# -----------------------------
# STEP 1: Build Docker images
# -----------------------------
echo "Building backend image..."
docker build -t $BACKEND_IMAGE ./backend

echo "Building frontend image..."
docker build -t $FRONTEND_IMAGE ./frontend

# -----------------------------
# STEP 2: Push images to Docker Hub
# -----------------------------
echo "Logging in to Docker Hub..."
docker login

echo "Pushing backend image..."
docker push $BACKEND_IMAGE

echo "Pushing frontend image..."
docker push $FRONTEND_IMAGE

# -----------------------------
# STEP 3: Run containers on this machine
# -----------------------------
echo "Stopping any running containers..."
docker stop ollama-backend ollama-frontend nginx-proxy || true
docker rm ollama-backend ollama-frontend nginx-proxy || true

echo "Running backend container..."
docker run -d --name ollama-backend -p 8000:8000 $BACKEND_IMAGE

echo "Running frontend container..."
docker run -d --name ollama-frontend -p 8080:80 $FRONTEND_IMAGE

# -----------------------------
# STEP 4: Set up temporary NGINX for Certbot
# -----------------------------
echo "Creating temporary NGINX for Certbot challenge..."
# Mount the challenge volume at nginx's default docroot so the files
# Certbot writes into /var/www/certbot are actually served over HTTP.
docker run -d --name nginx-certbot \
  -p 80:80 \
  -v certbot-www:/usr/share/nginx/html \
  -v certbot-etc:/etc/letsencrypt \
  nginx:alpine

sleep 5

echo "Obtaining HTTPS certificate with Certbot..."
docker run --rm \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  certbot/certbot certonly \
  --webroot --webroot-path=/var/www/certbot \
  --email $EMAIL \
  -d $DOMAIN \
  --agree-tos --no-eff-email

echo "Stopping temporary NGINX..."
docker stop nginx-certbot
docker rm nginx-certbot

# -----------------------------
# STEP 5: Run NGINX with HTTPS
# -----------------------------
echo "Running NGINX proxy with HTTPS..."
cat > nginx.conf <<EOF
server {
    listen 80;
    server_name $DOMAIN;
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location / {
        return 301 https://\$host\$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name $DOMAIN;

    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    location /api/ {
        proxy_pass http://host.docker.internal:8000/;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://host.docker.internal:8080/;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }
}
EOF

docker run -d --name nginx-proxy \
  -p 80:80 -p 443:443 \
  --add-host=host.docker.internal:host-gateway \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  -v "$(pwd)/nginx.conf:/etc/nginx/conf.d/default.conf:ro" \
  nginx:alpine
# Note: --add-host maps host.docker.internal on Linux; Docker Desktop
# (macOS/Windows) provides it automatically.

echo "✅ Deployment complete!"
echo "Frontend: https://$DOMAIN/"
echo "Backend API: https://$DOMAIN/api/"

✅ Features

  1. Build: backend & frontend Docker images.

  2. Push: tags and uploads to Docker Hub.

  3. Run: pulls the latest images and runs containers on your host.

  4. Certbot HTTPS: temporary NGINX container to get TLS certificates.

  5. Production NGINX: runs NGINX reverse proxy with HTTPS for frontend & backend.


🔧 How to use

  1. Make script executable:

chmod +x deploy_docker.sh
  2. Run it:

./deploy_docker.sh
  3. Access services:

  • Frontend: https://immai.acintia.com/

  • Backend: https://immai.acintia.com/api/

  4. Certificates are stored in Docker volumes: certbot-www & certbot-etc.

  5. To renew certificates:

docker run --rm \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  certbot/certbot renew --webroot -w /var/www/certbot
docker restart nginx-proxy
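
Renewal can also be automated on the host with a cron entry (schedule is illustrative; volume and container names match the script above):

```cron
# /etc/cron.d/certbot-renew — attempt renewal twice daily, then restart the
# proxy so NGINX picks up any new certificates.
0 3,15 * * * root docker run --rm -v certbot-www:/var/www/certbot -v certbot-etc:/etc/letsencrypt certbot/certbot renew --webroot -w /var/www/certbot && docker restart nginx-proxy
```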


Ready-to-use Kubernetes folder 05.02.2025

Ready-to-use Kubernetes folder with all manifests for your backend, frontend, and optional Ollama service, fully configured for HTTPS via cert-manager. You will just need to replace the Docker Hub images and your email/domain.

Here’s the structure:

k8s/
├── namespace.yaml
├── clusterissuer.yaml
├── backend-deployment.yaml
├── frontend-deployment.yaml
├── ollama-deployment.yaml
├── ingress.yaml

1️⃣ namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ollama-chat

2️⃣ clusterissuer.yaml

Replace your-email@domain.com with your email.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: your-email@domain.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

3️⃣ backend-deployment.yaml

Replace yourhubusername/ollama-backend:latest with your Docker Hub backend image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-backend
  namespace: ollama-chat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ollama-backend
  template:
    metadata:
      labels:
        app: ollama-backend
    spec:
      containers:
        - name: backend
          image: yourhubusername/ollama-backend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
          env:
            - name: OLLAMA_HOST
              value: "http://ollama:11434"
---
apiVersion: v1
kind: Service
metadata:
  name: ollama-backend
  namespace: ollama-chat
spec:
  selector:
    app: ollama-backend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
  type: ClusterIP

4️⃣ frontend-deployment.yaml

Replace yourhubusername/ollama-frontend:latest with your Docker Hub frontend image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-frontend
  namespace: ollama-chat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ollama-frontend
  template:
    metadata:
      labels:
        app: ollama-frontend
    spec:
      containers:
        - name: frontend
          image: yourhubusername/ollama-frontend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ollama-frontend
  namespace: ollama-chat
spec:
  selector:
    app: ollama-frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

5️⃣ ollama-deployment.yaml (optional)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  namespace: ollama-chat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          ports:
            - containerPort: 11434
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
  namespace: ollama-chat
spec:
  selector:
    app: ollama
  ports:
    - protocol: TCP
      port: 11434
      targetPort: 11434
  type: ClusterIP

6️⃣ ingress.yaml

Replace immai.acintia.com with your domain.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ollama-ingress
  namespace: ollama-chat
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - immai.acintia.com
      secretName: ollama-tls
  rules:
    - host: immai.acintia.com
      http:
        paths:
          - path: /api/
            pathType: Prefix
            backend:
              service:
                name: ollama-backend
                port:
                  number: 8000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ollama-frontend
                port:
                  number: 80
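
Note: unlike the Docker NGINX config (where the trailing slash in proxy_pass strips the /api/ prefix), this Ingress forwards requests to the backend with the /api/ prefix intact. If the backend expects prefix-free paths, ingress-nginx's rewrite annotation can strip it — a sketch, assuming the ingress-nginx controller:

```yaml
# Alternative /api path entry that strips the prefix (ingress-nginx only).
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: immai.acintia.com
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: ollama-backend
                port:
                  number: 8000
```

Because rewrite-target applies to every path in an Ingress, it is cleaner to keep this in a separate Ingress object from the / route.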

7️⃣ Deployment Instructions

Step 1: Apply namespace

kubectl apply -f k8s/namespace.yaml

Step 2: Apply cert-manager ClusterIssuer

kubectl apply -f k8s/clusterissuer.yaml

Step 3: Deploy backend, frontend, and Ollama

kubectl apply -f k8s/backend-deployment.yaml
kubectl apply -f k8s/frontend-deployment.yaml
kubectl apply -f k8s/ollama-deployment.yaml   # optional

Step 4: Deploy Ingress

kubectl apply -f k8s/ingress.yaml

Step 5: Verify resources

kubectl get pods -n ollama-chat
kubectl get svc -n ollama-chat
kubectl get ingress -n ollama-chat

Step 6: Access services

  • Frontend → https://immai.acintia.com/

  • Backend API → https://immai.acintia.com/api/

TLS is automatically handled by cert-manager.


✅ With this setup:

  • You can run your backend, frontend, and Ollama in any Kubernetes cluster.

  • Certificates auto-renew with cert-manager.

  • Services are scalable by adjusting replicas.

  • Ingress routes traffic securely over HTTPS.


Dockerization Ready Files for IMMAI 05.10.2025

Ready-to-use Docker setup for your project so you can literally copy-paste and run everything. This will include:

  1. Backend Dockerfile

  2. Frontend Dockerfile

  3. docker-compose.yml (with backend, frontend, optional Ollama service)

  4. NGINX config for reverse proxy + HTTPS (via Certbot)

  5. Instructions to build, push to Docker Hub, and deploy on any machine


1️⃣ Backend Dockerfile (backend/Dockerfile)

# backend/Dockerfile
FROM python:3.11-slim

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app

RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential gcc libpq-dev && \
    rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "1"]
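
Since the Dockerfile copies the whole backend directory into the image, a .dockerignore (entries are illustrative) keeps caches and local artifacts out of the build context:

```text
# backend/.dockerignore
__pycache__/
*.pyc
.venv/
.git/
.env
```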

2️⃣ Frontend Dockerfile (frontend/Dockerfile)

# frontend/Dockerfile
# Stage 1: build Angular app
FROM node:20 AS build

WORKDIR /usr/src/app

COPY legal-chatbot-ui/package*.json ./
RUN npm ci --legacy-peer-deps

COPY legal-chatbot-ui/ .
RUN npm run build -- --configuration production

# Stage 2: Serve with NGINX
FROM nginx:alpine

RUN rm -rf /usr/share/nginx/html/*

COPY --from=build /usr/src/app/dist/legal-chatbot-ui /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

3️⃣ docker-compose.yml

version: "3.9"

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    command: serve

  backend:
    build: ./backend
    container_name: ollama-backend
    restart: unless-stopped
    environment:
      - OLLAMA_HOST=http://ollama:11434
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
    depends_on:
      - ollama

  frontend:
    build: ./frontend
    container_name: ollama-frontend
    restart: unless-stopped
    ports:
      - "8080:80"
    depends_on:
      - backend

  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certbot-www:/var/www/certbot
      - certbot-etc:/etc/letsencrypt
    depends_on:
      - frontend
      - backend

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-www:/var/www/certbot
      - certbot-etc:/etc/letsencrypt
    entrypoint: ""

volumes:
  ollama_data:
  certbot-www:
  certbot-etc:
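
Optionally, a healthcheck on the backend service lets Compose gate dependents on readiness. The probe below reuses the image's Python interpreter so no extra tools are needed (the /docs path is an assumption about the FastAPI app):

```yaml
  backend:
    # ... existing keys ...
    healthcheck:
      # probe an endpoint the FastAPI app serves (path is an assumption)
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/docs')"]
      interval: 30s
      timeout: 5s
      retries: 3
```

Dependent services can then wait on it with `depends_on: { backend: { condition: service_healthy } }`.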

4️⃣ NGINX Config (nginx/conf.d/default.conf)

server {
    listen 80;
    server_name immai.acintia.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name immai.acintia.com;

    ssl_certificate /etc/letsencrypt/live/immai.acintia.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/immai.acintia.com/privkey.pem;

    location /api/ {
        proxy_pass http://backend:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://frontend:80/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

5️⃣ Steps to Build, Push, and Deploy Anywhere

Step 1: Build Docker images locally

docker-compose build

Step 2: Run locally

docker-compose up -d

Step 3: Push images to Docker Hub

docker login
docker tag ollama-backend yourhubusername/ollama-backend:latest
docker tag ollama-frontend yourhubusername/ollama-frontend:latest
docker push yourhubusername/ollama-backend:latest
docker push yourhubusername/ollama-frontend:latest
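
Note: Compose names built images after the project directory (e.g. <dir>-backend), not after container_name, so `docker tag ollama-backend …` may not find an image with that name. Declaring image: alongside build: makes the build produce the Docker Hub tag directly (username is a placeholder):

```yaml
  backend:
    build: ./backend
    image: yourhubusername/ollama-backend:latest
  frontend:
    build: ./frontend
    image: yourhubusername/ollama-frontend:latest
```

With this in place, `docker-compose build` followed by `docker-compose push` replaces the manual tag/push steps.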

Step 4: Pull & run on any other machine

docker pull yourhubusername/ollama-backend:latest
docker pull yourhubusername/ollama-frontend:latest
docker-compose up -d

Step 5: Obtain HTTPS certificate (first-time)

docker-compose up -d nginx
docker-compose run --rm certbot certonly \
  --webroot --webroot-path=/var/www/certbot \
  --email you@example.com \
  -d immai.acintia.com \
  --agree-tos --no-eff-email
docker-compose restart nginx

✅ Now your backend + frontend are containerized, served through NGINX, and HTTPS-ready.

If you want, I can also provide a ready-made script that automates:

  • Building images

  • Tagging & pushing to Docker Hub

  • Running on any machine

  • Certbot HTTPS setup

Do you want me to create that script?

Tuesday, September 23, 2025

Dockerization 230925 backend for Maran

Let’s set this up properly with Docker Compose so you can run:

docker compose up -d

and have everything (FastAPI backend + Ollama + NGINX reverse proxy with HTTPS/WebSockets) working together.


📂 Project Structure

your-project/
│── main.py
│── service.py
│── requirements.txt
│── Dockerfile
│── nginx.conf
│── docker-compose.yml
│── certs/   (your SSL cert + key go here)

✅ Dockerfile (backend)

You already have this, but let’s finalize:

FROM python:3.11-slim

WORKDIR /app

RUN apt-get update && apt-get install -y \
    build-essential curl \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

✅ requirements.txt

fastapi
uvicorn[standard]
langchain
langchain-community
langchain-ollama
duckduckgo-search

✅ nginx.conf

events {}

http {
    upstream backend {
        server backend:8000;
    }

    upstream ollama {
        server ollama:11434;
    }

    server {
        listen 80;
        server_name immai.acintia.com;

        # Redirect HTTP → HTTPS
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name immai.acintia.com;

        ssl_certificate /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }

        location /ws {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }

        location /ollama/ {
            proxy_pass http://ollama/;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
        }
    }
}
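
NGINX closes proxied connections that stay idle for 60 seconds by default, which silently drops long-lived WebSockets. Raising the read/send timeouts in the /ws location avoids this (values are illustrative):

```nginx
location /ws {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    # keep idle WebSocket connections open well past the 60s default
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}
```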

✅ docker-compose.yml

version: "3.9"

services:
  backend:
    build: .
    container_name: fastapi_backend
    restart: always
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama_service
    restart: always
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

  nginx:
    image: nginx:latest
    container_name: nginx_proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - backend
      - ollama

volumes:
  ollama_data:

✅ Certificates

  • Place your SSL certs inside ./certs/:

    • fullchain.pem

    • privkey.pem

(You can generate free ones with Let’s Encrypt + Certbot, or use a self-signed certificate for testing.)


🚀 Run it

docker compose up -d --build
  • FastAPI backend → https://immai.acintia.com

  • WebSockets → wss://immai.acintia.com/ws/query

  • Ollama API proxied → https://immai.acintia.com/ollama/


⚠️ Question for you:
In your main.py, you’re pointing ChatOllama(base_url="https://immai.acintia.com", ...).
👉 Do you want me to adjust that so it talks directly to ollama service inside Docker (http://ollama:11434)? That way it doesn’t loop back through NGINX unnecessarily.


======================================================================

Got it ✅

We’ll make your FastAPI backend talk directly to the ollama service inside Docker, instead of looping through NGINX.

That means in main.py (or wherever you use ChatOllama), change this:

from langchain_community.chat_models import ChatOllama

llm = ChatOllama(
    base_url="https://immai.acintia.com",  # ❌ not needed
    model="llama2:7b",
)

👉 To this:

from langchain_community.chat_models import ChatOllama
import os

ollama_base_url = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")

llm = ChatOllama(
    base_url=ollama_base_url,
    model="llama2:7b",
)

✅ Why this works

  • Inside docker-compose.yml, we defined:

environment:
  - OLLAMA_BASE_URL=http://ollama:11434
  • So your backend container resolves ollama (Docker DNS) → Ollama container.

  • This avoids unnecessary SSL or reverse proxy overhead when the backend talks to Ollama.

  • External clients (browser, frontend, Postman) will still hit NGINX → FastAPI → Ollama using https://immai.acintia.com.


🔄 Next Steps for You

  1. Update main.py as above.

  2. Rebuild and restart with:

docker compose up -d --build
  3. Test WebSocket:

wscat -c wss://immai.acintia.com/ws/query

Do you want me to also add a /health endpoint in FastAPI so you can quickly confirm backend and Ollama connectivity (e.g. GET /health → "backend OK", "ollama OK") before debugging WebSockets?
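A minimal sketch of such a check, using only the standard library. The function name, the /api/tags probe, and the route wiring are assumptions, not part of the original code; in main.py it could be exposed with `@app.get("/health")` returning `check_health()`:

```python
# Hypothetical health-check helper for main.py.
import os
import urllib.request


def check_health(timeout: float = 2.0) -> dict:
    """Report backend liveness plus whether the Ollama service answers."""
    base = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")
    try:
        # /api/tags is a cheap Ollama endpoint that confirms the server is up
        with urllib.request.urlopen(f"{base}/api/tags", timeout=timeout):
            ollama = "ok"
    except OSError:
        ollama = "unreachable"
    return {"backend": "ok", "ollama": ollama}
```

Because the backend itself answered the request, "backend": "ok" is always true when the response arrives; the interesting field is the Ollama status.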

Wednesday, September 17, 2025

Document for Docker + nginx + HTTPS setup for Company Recommender

Docker + nginx + HTTPS setup for Company Recommender

This document contains a ready-to-use Docker deployment for your FastAPI backend and Angular frontend, plus an NGINX reverse proxy. It also includes instructions for obtaining Let’s Encrypt certificates with Certbot (manual step).

Important: I placed all files and config examples below. Follow the numbered steps in Deployment to build, obtain certificates, and run in production.


Files included

  • docker-compose.yml — orchestrates backend, frontend, nginx, and certbot (optional)

  • backend/Dockerfile — builds your FastAPI app (uvicorn)

  • frontend/Dockerfile — builds Angular production bundle and serves using nginx

  • nginx/nginx.conf — main nginx config with HTTP -> HTTPS redirect

  • nginx/conf.d/immai.acintia.com.conf — site config (reverse proxy to backend + static hosting for frontend)

  • README_DEPLOY.md — deployment steps and Certbot instructions


docker-compose.yml

version: '3.8'
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: company_recommender_backend
    environment:
      - PORT=8000
      - OLLAMA_URL=https://immai.acintia.com
    expose:
      - "8000"
    restart: unless-stopped

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    container_name: company_recommender_frontend
    restart: unless-stopped
    expose:
      - "80"

  nginx:
    image: nginx:stable
    container_name: company_recommender_nginx
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - frontend
      - backend
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./certs:/etc/letsencrypt/live:ro
      - ./nginx/html:/usr/share/nginx/html:ro
    restart: unless-stopped

  certbot:
    image: certbot/certbot
    container_name: company_recommender_certbot
    volumes:
      - ./certs:/etc/letsencrypt/live
      - ./nginx/html:/var/www/html
    entrypoint: ''
    command: "/bin/sh -c 'sleep infinity'"
    restart: 'no'

networks:
  default:
    driver: bridge

Notes:

  • certbot here is present to allow you to run one-off cert issuance commands using the certbot container (instructions below).

  • ./certs will hold the live certificate files after you create them (mounted read-only into nginx).


backend/Dockerfile

# backend/Dockerfile
FROM python:3.11-slim

WORKDIR /app

# system deps (if needed)
RUN apt-get update && apt-get install -y --no-install-recommends build-essential curl && rm -rf /var/lib/apt/lists/*

COPY backend/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY backend/ .

ENV PORT=8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "1"]

Make sure backend/requirements.txt contains: fastapi, uvicorn[standard], langgraph, langchain-core, langchain-community, pydantic, and any other packages your main.py imports.


frontend/Dockerfile (Angular)

# frontend/Dockerfile
# Build stage
FROM node:20 AS build
WORKDIR /app
COPY frontend/package*.json ./
RUN npm ci --legacy-peer-deps
COPY frontend/ .
RUN npm run build -- --configuration production

# Production stage
FROM nginx:stable
COPY --from=build /app/dist/ /usr/share/nginx/html/
# optional: copy a custom nginx conf for serving SPA (404 -> index.html handled by nginx conf)
COPY nginx/spa.conf /etc/nginx/conf.d/default.conf

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Assumes your Angular build output lands in /app/dist/ (adjust if your project name differs; ng build by default creates dist/<project-name>).
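
If the project was generated with Angular 17 or later, ng build nests the output one level deeper (dist/<project-name>/browser). The production-stage COPY would then look like this (project name taken from the frontend created below; adjust to yours):

```dockerfile
# Angular 17+ layout: application files live under dist/<project-name>/browser
COPY --from=build /app/dist/company-recommender-frontend/browser/ /usr/share/nginx/html/
```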


nginx/nginx.conf

user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events { worker_connections 1024; }

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
}

nginx/conf.d/immai.acintia.com.conf

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name immai.acintia.com;

    root /var/www/html;

    location /.well-known/acme-challenge/ {
        alias /var/www/html/.well-known/acme-challenge/;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS server
server {
    listen 443 ssl http2;
    server_name immai.acintia.com;

    ssl_certificate /etc/letsencrypt/live/immai.acintia.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/immai.acintia.com/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Serve frontend static files
    location / {
        proxy_pass http://frontend:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # API proxying
    location /recommend {
        proxy_pass http://backend:8000/recommend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /chat {
        proxy_pass http://backend:8000/chat;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # optionally allow access to certbot challenge files served from nginx html
    location ^~ /.well-known/acme-challenge/ {
        alias /var/www/html/.well-known/acme-challenge/;
        allow all;
    }
}

The proxy_pass uses the Docker service names frontend and backend defined in docker-compose.yml so nginx communicates over the Docker network.


nginx/spa.conf (used by frontend image)

server {
  listen 80;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;

  location / {
    try_files $uri $uri/ /index.html;
  }
}

README_DEPLOY.md — Deployment steps (summary)

  1. Clone repo and place the backend and frontend project folders side-by-side with docker-compose.yml and nginx/ folder.

project-root/
  ├─ backend/        # your FastAPI app (main.py, requirements.txt)
  ├─ frontend/       # your Angular project (package.json, angular.json)
  ├─ nginx/
  ├─ docker-compose.yml
  └─ certs/          # created after certbot
  2. Build images and start containers (without certs yet):

docker compose up -d --build
  3. Obtain Let’s Encrypt certificates using the certbot container + webroot method (run on the host):

  • Ensure DNS for immai.acintia.com points to the server IP.

  • Ensure port 80 is reachable externally and not blocked by firewall.

Run this command (example using docker exec into certbot container):

# create the directory for challenge files
mkdir -p nginx/html/.well-known/acme-challenge

# run certbot interactively to obtain certs
docker run --rm -it \
  -v "$(pwd)/certs:/etc/letsencrypt/live" \
  -v "$(pwd)/nginx/html:/var/www/html" \
  certbot/certbot certonly --webroot \
    --webroot-path /var/www/html \
    -d immai.acintia.com \
    --email your-email@example.com --agree-tos --non-interactive

If successful, certificate files will be in ./certs/immai.acintia.com/ and will be mounted into the nginx container.

  4. Reload nginx to pick up the certificates:

docker compose restart nginx
  5. (Optional) Set up automatic renewal (cron on host) using certbot renew and reload nginx after renewal.

Local dev alternative (self-signed)

If you don't want to use certbot yet, you can create a self-signed cert and mount it into ./certs with the same filenames fullchain.pem and privkey.pem for testing.
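
A minimal sketch for generating that self-signed pair with openssl (directory layout and filenames match the nginx config above; for local testing only, browsers will warn):

```shell
# Self-signed certificate valid for one year, no passphrase (-nodes).
mkdir -p certs/immai.acintia.com
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=immai.acintia.com" \
  -keyout certs/immai.acintia.com/privkey.pem \
  -out certs/immai.acintia.com/fullchain.pem
```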


Additional operational notes

  • If you use Cloudflare, you may prefer to enable Cloudflare proxy and use their TLS — in that case you'd point nginx to use Cloudflare origin certs or use ssl_certificate accordingly.

  • Make sure your backend FastAPI app binds to 0.0.0.0 (the provided Dockerfile uses that). If your FastAPI uvicorn.run call uses 127.0.0.1, update it.

  • In production, consider using environment variables or a .env file for secrets and configuration. Also increase uvicorn worker count appropriately.

  • Monitor logs: docker compose logs -f nginx and docker compose logs -f backend.


Troubleshooting

  • Certbot fails: ensure port 80 isn't blocked and DNS resolves. Run certbot with --staging to test.

  • Backend 502 from nginx: check the proxy_pass host/port match your compose service names and ports; use docker compose ps to verify.


If you want, I can also:

  • Provide a .env and systemd unit file for auto-start on server boot.

  • Add a healthcheck to the backend service and change nginx config to use proxy_pass http://backend:8000; more generally.


End of file.

Angular frontend for company-recommender-frontend

You have a FastAPI backend for recommending the type of company registration in India using an LLM. Now you want a web frontend that can talk to your backend (/recommend and /chat).

I’ll give you a simple Angular (latest) frontend with a form to collect inputs and a chat interface.


Steps to Build Frontend

1. Create Angular Project

npm install -g @angular/cli
ng new company-recommender-frontend
cd company-recommender-frontend
ng serve

2. Generate Components & Service

ng g c components/recommendation-form
ng g c components/chat
ng g s services/api

3. Angular Service (src/app/services/api.service.ts)

This will handle HTTP calls to your FastAPI backend.

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class ApiService {
  private baseUrl = 'http://127.0.0.1:8000'; // Change to https://immai.acintia.com in production

  constructor(private http: HttpClient) {}

  getRecommendation(payload: any): Observable<any> {
    return this.http.post(`${this.baseUrl}/recommend`, payload);
  }

  chat(payload: any): Observable<any> {
    return this.http.post(`${this.baseUrl}/chat`, payload);
  }
}

4. Recommendation Form Component (src/app/components/recommendation-form/recommendation-form.component.ts)

import { Component } from '@angular/core';
import { ApiService } from '../../services/api.service';
import { v4 as uuidv4 } from 'uuid';

@Component({
  selector: 'app-recommendation-form',
  templateUrl: './recommendation-form.component.html'
})
export class RecommendationFormComponent {
  business_type = '';
  investment_amount: number | null = null;
  number_of_partners = 1;
  is_foreign_citizen = false;
  session_id = uuidv4(); // generate unique session

  recommendation: string | null = null;

  constructor(private api: ApiService) {}

  submitForm() {
    const payload = {
      business_type: this.business_type,
      investment_amount: this.investment_amount,
      number_of_partners: this.number_of_partners,
      is_foreign_citizen: this.is_foreign_citizen,
      session_id: this.session_id
    };

    this.api.getRecommendation(payload).subscribe({
      next: (res) => {
        this.recommendation = res.response;
      },
      error: (err) => {
        console.error(err);
      }
    });
  }
}

HTML (recommendation-form.component.html)

<div class="p-4 max-w-lg mx-auto">
  <h2 class="text-xl font-bold mb-2">Company Recommender Form</h2>

  <form (ngSubmit)="submitForm()">
    <label>Business Type:</label>
    <input [(ngModel)]="business_type" name="business_type" class="border p-2 w-full mb-2" required />

    <label>Investment Amount (INR):</label>
    <input type="number" [(ngModel)]="investment_amount" name="investment_amount" class="border p-2 w-full mb-2" />

    <label>Number of Partners:</label>
    <input type="number" [(ngModel)]="number_of_partners" name="number_of_partners" class="border p-2 w-full mb-2" />

    <label>
      <input type="checkbox" [(ngModel)]="is_foreign_citizen" name="is_foreign_citizen" />
      Foreign Citizen Involved
    </label>

    <button type="submit" class="bg-blue-500 text-white px-4 py-2 rounded mt-3">Get Recommendation</button>
  </form>

  <div *ngIf="recommendation" class="mt-4 p-3 bg-gray-100 rounded">
    <h3 class="font-bold">Recommendation:</h3>
    <p>{{ recommendation }}</p>
  </div>
</div>

5. Chat Component (src/app/components/chat/chat.component.ts)

import { Component, Input } from '@angular/core';
import { ApiService } from '../../services/api.service';

@Component({
  selector: 'app-chat',
  templateUrl: './chat.component.html'
})
export class ChatComponent {
  @Input() session_id!: string;
  userInput = '';
  messages: { sender: string, text: string }[] = [];

  constructor(private api: ApiService) {}

  sendMessage() {
    if (!this.userInput.trim()) return;

    this.messages.push({ sender: 'You', text: this.userInput });
    const payload = {
      user_input: this.userInput,
      session_id: this.session_id
    };

    this.api.chat(payload).subscribe({
      next: (res) => {
        this.messages.push({ sender: 'AI', text: res.response });
      },
      error: (err) => console.error(err)
    });

    this.userInput = '';
  }
}

HTML (chat.component.html)

<div class="p-4 max-w-lg mx-auto mt-6 border rounded">
  <h2 class="text-xl font-bold mb-2">Chat with AI</h2>

  <div class="h-60 overflow-y-auto bg-gray-100 p-2 mb-2 rounded">
    <div *ngFor="let msg of messages">
      <p [ngClass]="{'font-bold text-blue-700': msg.sender==='AI'}">
        {{ msg.sender }}: {{ msg.text }}
      </p>
    </div>
  </div>

  <div class="flex">
    <input [(ngModel)]="userInput" name="chatInput" class="flex-1 border p-2" placeholder="Ask something..." />
    <button (click)="sendMessage()" class="bg-green-500 text-white px-4 ml-2 rounded">Send</button>
  </div>
</div>

6. App Component (src/app/app.component.html)

<app-recommendation-form></app-recommendation-form>
<app-chat [session_id]="session_id"></app-chat>

7. Install Dependencies

npm install uuid

✅ Now you have a working Angular frontend for your FastAPI backend:

  • Fill form → /recommend

  • Chat with AI → /chat
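Before wiring up the UI, you can smoke-test both backend endpoints directly with curl. This is a sketch that assumes the FastAPI backend is reachable on localhost:8000; the JSON field names mirror the payloads the two Angular components build above.

```shell
# /recommend — same fields the recommendation form submits
curl -s -X POST http://localhost:8000/recommend \
  -H "Content-Type: application/json" \
  -d '{"business_type": "IT consulting", "investment_amount": 500000,
       "number_of_partners": 2, "is_foreign_citizen": false,
       "session_id": "smoke-test-1"}'

# /chat — same fields the chat component submits
curl -s -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"user_input": "Which company type suits a 2-partner IT startup?",
       "session_id": "smoke-test-1"}'
```

If both calls return a JSON body with a `response` key, the Angular components will render it as shown.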


Do you want me to also dockerize this Angular frontend and add it to your existing docker-compose.yml (with backend + nginx reverse proxy + HTTPS)?

-------------------


Upgrading the reverse proxy at https://immai.acintia.com

We'll upgrade the reverse proxy to serve the app securely at https://immai.acintia.com using NGINX + Let’s Encrypt (Certbot), with docker-compose running a dedicated reverse-proxy container and automated SSL certificate renewal.


🔹 Step 1: Update NGINX Config for SSL

Replace your nginx/nginx.conf with this:

server {
    listen 80;
    server_name immai.acintia.com;

    # Serve Let's Encrypt HTTP-01 challenges from the shared webroot
    location /.well-known/acme-challenge/ {
        root /var/lib/letsencrypt;
    }

    # Redirect all other HTTP traffic to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name immai.acintia.com;

    # SSL Certificates (managed by certbot)
    ssl_certificate /etc/letsencrypt/live/immai.acintia.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/immai.acintia.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Serve frontend Angular app
    location / {
        proxy_pass http://frontend:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Proxy backend FastAPI
    location /api/ {
        proxy_pass http://backend:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Proxy Ollama (optional)
    location /ollama/ {
        proxy_pass http://ollama:11434/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
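A note on validating this config: the upstream hostnames (frontend, backend, ollama) only resolve on the compose network, and the 443 block references certificate files that must already exist, so test the syntax from inside the running proxy container rather than a standalone NGINX:

```shell
# Validate the mounted config inside the running proxy container
docker-compose exec proxy nginx -t
```

If the test reports "syntax is ok" and "test is successful", the proxy can safely be reloaded after any edit.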

🔹 Step 2: Extend docker-compose.yml

Here’s the updated file with nginx + certbot:

version: "3.9"

services:
  backend:
    build: .
    container_name: company-recommender-backend
    expose:
      - "8000"
    environment:
      - OLLAMA_URL=http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    expose:
      - "11434"
    volumes:
      - ollama_data:/root/.ollama

  frontend:
    build: ./frontend
    container_name: company-recommender-frontend
    expose:
      - "80"
    environment:
      - API_URL=/api
    depends_on:
      - backend

  proxy:
    image: nginx:alpine
    container_name: reverse-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    depends_on:
      - frontend
      - backend
      - ollama

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    entrypoint: >
      sh -c "trap exit TERM; while :; do certbot renew --webroot -w /var/lib/letsencrypt --quiet; sleep 12h & wait $${!}; done"

volumes:
  ollama_data:
  certbot-etc:
  certbot-var:
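Before bringing the stack up, it's worth validating the compose file. `docker-compose config` parses and resolves the whole file, so a typo in a volume or service definition surfaces here rather than at deploy time:

```shell
# Confirm all five services are defined:
docker-compose config --services
# Expected: backend, ollama, frontend, proxy, certbot

# Render the fully-resolved config (mounts, env, ports) for review:
docker-compose config
```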

🔹 Step 3: Issue the SSL Certificate

Before starting the stack, you need to request the certificate once. The proxy cannot start yet (its HTTPS server block references certificate files that don't exist), so use Certbot's standalone mode, which briefly runs its own web server on port 80 — make sure nothing else is bound to port 80 at this point:

docker run -it --rm \
  -p 80:80 \
  -v certbot-etc:/etc/letsencrypt \
  -v certbot-var:/var/lib/letsencrypt \
  certbot/certbot certonly --standalone \
  --agree-tos --no-eff-email \
  -m youremail@example.com \
  -d immai.acintia.com

Replace youremail@example.com with your real email (used for expiry and renewal notices).
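To confirm the certificate actually landed in the shared volume, list it from a throwaway Certbot container:

```shell
# List certificates stored in the certbot-etc volume:
docker run --rm \
  -v certbot-etc:/etc/letsencrypt \
  certbot/certbot certificates
```

You should see immai.acintia.com with its expiry date and the fullchain.pem / privkey.pem paths that nginx.conf references.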


🔹 Step 4: Start the Stack

docker-compose up --build -d

Now:

  • The certbot container wakes every 12 hours and renews the certificate whenever it is close to expiry (Let’s Encrypt certificates are valid for 90 days).

  • NGINX only reads certificates at startup, so reload the proxy after a renewal: docker-compose exec proxy nginx -s reload
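You can exercise the full renewal path up front without consuming Let’s Encrypt rate limits by doing a dry run through the compose service:

```shell
# Dry-run a renewal over the webroot served by the proxy:
docker-compose run --rm \
  --entrypoint "certbot renew --dry-run --webroot -w /var/lib/letsencrypt" \
  certbot
```

If the dry run succeeds, the 12-hourly loop in the certbot service will handle real renewals the same way.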


🔹 Step 5: Verify HTTPS

Run:

curl -I https://immai.acintia.com

You should see an HTTP/1.1 200 OK response served over a valid TLS certificate.
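For a closer look than curl gives you, openssl can show exactly which certificate the proxy is serving and its validity window:

```shell
# Inspect the served certificate's subject and validity dates:
echo | openssl s_client -connect immai.acintia.com:443 \
        -servername immai.acintia.com 2>/dev/null \
     | openssl x509 -noout -subject -dates
```

The subject should be immai.acintia.com and notAfter should be roughly 90 days out.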


👉 Do you want me to also configure auto-redirect of /api requests inside Angular (so your Angular code never needs to know /api explicitly), or keep it like this (frontend calls /api/...)?
