Sunday, October 5, 2025

Ready-made Bash script for Docker workflow

Ready-made Bash script that automates your Docker workflow end-to-end:

  • Build backend & frontend images

  • Tag & push them to Docker Hub

  • Pull & run images on any machine

  • Set up Certbot HTTPS via a temporary NGINX container

Below is a complete script. You just need to edit your Docker Hub username, email, and domain at the top.


deploy_docker.sh

#!/bin/bash
set -e

# -----------------------------
# CONFIGURATION - EDIT THESE
# -----------------------------
DOCKERHUB_USER="yourhubusername"
DOMAIN="immai.acintia.com"
EMAIL="your-email@domain.com"

BACKEND_IMAGE="${DOCKERHUB_USER}/ollama-backend:latest"
FRONTEND_IMAGE="${DOCKERHUB_USER}/ollama-frontend:latest"

# -----------------------------
# STEP 1: Build Docker images
# -----------------------------
echo "Building backend image..."
docker build -t $BACKEND_IMAGE ./backend

echo "Building frontend image..."
docker build -t $FRONTEND_IMAGE ./frontend

# -----------------------------
# STEP 2: Push images to Docker Hub
# -----------------------------
echo "Logging in to Docker Hub..."
docker login

echo "Pushing backend image..."
docker push $BACKEND_IMAGE

echo "Pushing frontend image..."
docker push $FRONTEND_IMAGE

# -----------------------------
# STEP 3: Run containers on this machine
# -----------------------------
echo "Stopping any running containers..."
docker stop ollama-backend ollama-frontend nginx-proxy || true
docker rm ollama-backend ollama-frontend nginx-proxy || true

echo "Running backend container..."
docker run -d --name ollama-backend -p 8000:8000 $BACKEND_IMAGE

echo "Running frontend container..."
docker run -d --name ollama-frontend -p 8080:80 $FRONTEND_IMAGE

# -----------------------------
# STEP 4: Set up temporary NGINX for Certbot
# -----------------------------
echo "Creating temporary NGINX for Certbot challenge..."
docker run -d --name nginx-certbot \
  -p 80:80 \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  nginx:alpine

sleep 5

echo "Obtaining HTTPS certificate with Certbot..."
docker run --rm \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  certbot/certbot certonly \
  --webroot --webroot-path=/var/www/certbot \
  --email $EMAIL \
  -d $DOMAIN \
  --agree-tos --no-eff-email

echo "Stopping temporary NGINX..."
docker stop nginx-certbot
docker rm nginx-certbot

# -----------------------------
# STEP 5: Run NGINX with HTTPS
# -----------------------------
echo "Running NGINX proxy with HTTPS..."
cat > nginx.conf <<EOF
server {
    listen 80;
    server_name $DOMAIN;
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location / {
        return 301 https://\$host\$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name $DOMAIN;

    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    location /api/ {
        proxy_pass http://host.docker.internal:8000/;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://host.docker.internal:8080/;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }
}
EOF

# host.docker.internal needs an explicit host-gateway mapping on Linux
docker run -d --name nginx-proxy \
  --add-host=host.docker.internal:host-gateway \
  -p 80:80 -p 443:443 \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  -v "$(pwd)/nginx.conf":/etc/nginx/conf.d/default.conf:ro \
  nginx:alpine

echo "✅ Deployment complete!"
echo "Frontend: https://$DOMAIN/"
echo "Backend API: https://$DOMAIN/api/"

✅ Features

  1. Build: backend & frontend Docker images.

  2. Push: tags and uploads to Docker Hub.

  3. Run: runs the freshly built images as containers on this host (on any other machine, docker run pulls them from Docker Hub automatically).

  4. Certbot HTTPS: temporary NGINX container to get TLS certificates.

  5. Production NGINX: runs NGINX reverse proxy with HTTPS for frontend & backend.


🔧 How to use

  1. Make the script executable:

chmod +x deploy_docker.sh

  2. Run it:

./deploy_docker.sh

  3. Access services:

  • Frontend: https://immai.acintia.com/

  • Backend: https://immai.acintia.com/api/

  4. Certificates are stored in Docker volumes: certbot-www & certbot-etc.

  5. To renew certificates:

docker run --rm \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  certbot/certbot renew --webroot -w /var/www/certbot
docker restart nginx-proxy
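
To confirm what was issued and when it expires, the same certbot image can list the certificates stored in the certbot-etc volume (a quick sanity check using the volume names above):

docker run --rm \
  -v certbot-etc:/etc/letsencrypt \
  certbot/certbot certificates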


Ready-to-use Kubernetes folder 05.10.2025

Ready-to-use Kubernetes folder with all manifests for your backend, frontend, and optional Ollama service, fully configured for HTTPS via cert-manager. You will just need to replace Docker Hub images and your email/domain.

Here’s the structure:

k8s/
├── namespace.yaml
├── clusterissuer.yaml
├── backend-deployment.yaml
├── frontend-deployment.yaml
├── ollama-deployment.yaml
├── ingress.yaml
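
If you prefer a single command once the files are in place, apply the namespace first and then the whole folder (kubectl processes a directory alphabetically, so applying the folder alone would hit backend-deployment.yaml before namespace.yaml and fail):

kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/   # re-applying namespace.yaml is harmless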

1️⃣ namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ollama-chat

2️⃣ clusterissuer.yaml

Replace your-email@domain.com with your email.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: your-email@domain.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

3️⃣ backend-deployment.yaml

Replace yourhubusername/ollama-backend:latest with your Docker Hub backend image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-backend
  namespace: ollama-chat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ollama-backend
  template:
    metadata:
      labels:
        app: ollama-backend
    spec:
      containers:
        - name: backend
          image: yourhubusername/ollama-backend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
          env:
            - name: OLLAMA_HOST
              value: "http://ollama:11434"
---
apiVersion: v1
kind: Service
metadata:
  name: ollama-backend
  namespace: ollama-chat
spec:
  selector:
    app: ollama-backend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
  type: ClusterIP

4️⃣ frontend-deployment.yaml

Replace yourhubusername/ollama-frontend:latest with your Docker Hub frontend image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-frontend
  namespace: ollama-chat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ollama-frontend
  template:
    metadata:
      labels:
        app: ollama-frontend
    spec:
      containers:
        - name: frontend
          image: yourhubusername/ollama-frontend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ollama-frontend
  namespace: ollama-chat
spec:
  selector:
    app: ollama-frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

5️⃣ ollama-deployment.yaml (optional)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  namespace: ollama-chat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          ports:
            - containerPort: 11434
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
  namespace: ollama-chat
spec:
  selector:
    app: ollama
  ports:
    - protocol: TCP
      port: 11434
      targetPort: 11434
  type: ClusterIP

6️⃣ ingress.yaml

Replace immai.acintia.com with your domain. Note that, unlike the NGINX configs elsewhere in this document, this Ingress does not strip the /api/ prefix before forwarding; either serve your backend routes under /api/ or add a rewrite annotation (e.g. nginx.ingress.kubernetes.io/rewrite-target).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ollama-ingress
  namespace: ollama-chat
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - immai.acintia.com
      secretName: ollama-tls
  rules:
    - host: immai.acintia.com
      http:
        paths:
          - path: /api/
            pathType: Prefix
            backend:
              service:
                name: ollama-backend
                port:
                  number: 8000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ollama-frontend
                port:
                  number: 80

7️⃣ Deployment Instructions

Step 1: Apply namespace

kubectl apply -f k8s/namespace.yaml

Step 2: Apply cert-manager ClusterIssuer

kubectl apply -f k8s/clusterissuer.yaml

Step 3: Deploy backend, frontend, and Ollama

kubectl apply -f k8s/backend-deployment.yaml
kubectl apply -f k8s/frontend-deployment.yaml
kubectl apply -f k8s/ollama-deployment.yaml   # optional

Step 4: Deploy Ingress

kubectl apply -f k8s/ingress.yaml

Step 5: Verify resources

kubectl get pods -n ollama-chat
kubectl get svc -n ollama-chat
kubectl get ingress -n ollama-chat
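
cert-manager issues the TLS secret asynchronously; a short sketch for watching issuance, using the resource names from ingress.yaml above:

kubectl get certificate -n ollama-chat                  # READY should become True
kubectl describe certificate ollama-tls -n ollama-chat  # events show issuance progress
kubectl get challenges -n ollama-chat                   # pending ACME challenges, if any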

Step 6: Access services

  • Frontend → https://immai.acintia.com/

  • Backend API → https://immai.acintia.com/api/

TLS is automatically handled by cert-manager.


✅ With this setup:

  • You can run your backend, frontend, and Ollama in any Kubernetes cluster.

  • Certificates auto-renew with cert-manager.

  • Services are scalable by adjusting replicas.

  • Ingress routes traffic securely over HTTPS.


Dockerization Ready Files for IMMAI 05.10.2025

Ready-to-use Docker setup for your project so you can literally copy-paste and run everything. This will include:

  1. Backend Dockerfile

  2. Frontend Dockerfile

  3. docker-compose.yml (with backend, frontend, optional Ollama service)

  4. NGINX config for reverse proxy + HTTPS (via Certbot)

  5. Instructions to build, push to Docker Hub, and deploy on any machine


1️⃣ Backend Dockerfile (backend/Dockerfile)

# backend/Dockerfile
FROM python:3.11-slim

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app

RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential gcc libpq-dev && \
    rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "1"]

2️⃣ Frontend Dockerfile (frontend/Dockerfile)

# frontend/Dockerfile
# Stage 1: build Angular app
FROM node:20 AS build

WORKDIR /usr/src/app

COPY legal-chatbot-ui/package*.json ./
RUN npm ci --legacy-peer-deps

COPY legal-chatbot-ui/ .
RUN npm run build -- --configuration production

# Stage 2: Serve with NGINX
FROM nginx:alpine

RUN rm -rf /usr/share/nginx/html/*

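# NOTE: newer Angular CLI versions (17+) may emit to dist/legal-chatbot-ui/browser;
# adjust this COPY path to match your actual build output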
COPY --from=build /usr/src/app/dist/legal-chatbot-ui /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

3️⃣ docker-compose.yml

version: "3.9"

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    command: serve

  backend:
    build: ./backend
    container_name: ollama-backend
    restart: unless-stopped
    environment:
      - OLLAMA_HOST=http://ollama:11434
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
    depends_on:
      - ollama

  frontend:
    build: ./frontend
    container_name: ollama-frontend
    restart: unless-stopped
    ports:
      - "8080:80"
    depends_on:
      - backend

  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certbot-www:/var/www/certbot
      - certbot-etc:/etc/letsencrypt
    depends_on:
      - frontend
      - backend

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-www:/var/www/certbot
      - certbot-etc:/etc/letsencrypt
    entrypoint: ""

volumes:
  ollama_data:
  certbot-www:
  certbot-etc:

4️⃣ NGINX Config (nginx/conf.d/default.conf)

server {
    listen 80;
    server_name immai.acintia.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name immai.acintia.com;

    ssl_certificate /etc/letsencrypt/live/immai.acintia.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/immai.acintia.com/privkey.pem;

    location /api/ {
        proxy_pass http://backend:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://frontend:80/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

5️⃣ Steps to Build, Push, and Deploy Anywhere

Step 1: Build Docker images locally

docker-compose build

Step 2: Run locally

docker-compose up -d

Step 3: Push images to Docker Hub

Note: Compose names locally built images after the project directory (e.g. <project>_backend or <project>-backend depending on the Compose version), not after container_name; check docker images for the real names to tag, or add an image: field to the backend and frontend services so builds are tagged directly.

docker login
docker tag ollama-backend yourhubusername/ollama-backend:latest
docker tag ollama-frontend yourhubusername/ollama-frontend:latest
docker push yourhubusername/ollama-backend:latest
docker push yourhubusername/ollama-frontend:latest

Step 4: Pull & run on any other machine

docker pull yourhubusername/ollama-backend:latest
docker pull yourhubusername/ollama-frontend:latest
docker-compose up -d

Step 5: Obtain HTTPS certificate (first time)

Note: NGINX will not start while the 443 server block references certificate files that don't exist yet; comment that block out for the very first run, obtain the certificate, then restore it. Also, because the compose file clears the certbot image's entrypoint, the certbot binary is named explicitly in the command below.

docker-compose up -d nginx
docker-compose run --rm certbot certbot certonly \
  --webroot --webroot-path=/var/www/certbot \
  --email you@example.com \
  -d immai.acintia.com \
  --agree-tos --no-eff-email
docker-compose restart nginx
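
For later renewals, the same certbot service can be reused; a minimal sketch, assuming the volume mounts in the compose file above (certbot appears twice because the service clears the image entrypoint):

docker-compose run --rm certbot certbot renew \
  --webroot --webroot-path=/var/www/certbot
docker-compose restart nginx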

✅ Now your backend + frontend are containerized, served through NGINX, and HTTPS-ready.

If you want, I can also provide a ready-made script that automates:

  • Building images

  • Tagging & pushing to Docker Hub

  • Running on any machine

  • Certbot HTTPS setup

Do you want me to create that script?

Tuesday, September 23, 2025

Dockerization 230925 backend for Maran

Let’s set this up properly with Docker Compose so you can run:

docker compose up -d

and have everything (FastAPI backend + Ollama + NGINX reverse proxy with HTTPS/WebSockets) working together.


📂 Project Structure

your-project/
│── main.py
│── service.py
│── requirements.txt
│── Dockerfile
│── nginx.conf
│── docker-compose.yml
│── certs/   (your SSL cert + key go here)

✅ Dockerfile (backend)

You already have this, but let’s finalize:

FROM python:3.11-slim

WORKDIR /app

RUN apt-get update && apt-get install -y \
    build-essential curl \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

✅ requirements.txt

fastapi
uvicorn[standard]
langchain
langchain-community
langchain-ollama
duckduckgo-search

✅ nginx.conf

events {}

http {
    upstream backend {
        server backend:8000;
    }

    upstream ollama {
        server ollama:11434;
    }

    server {
        listen 80;
        server_name immai.acintia.com;

        # Redirect HTTP → HTTPS
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        server_name immai.acintia.com;

        ssl_certificate /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }

        location /ws {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }

        location /ollama/ {
            proxy_pass http://ollama/;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
        }
    }
}

✅ docker-compose.yml

version: "3.9"

services:
  backend:
    build: .
    container_name: fastapi_backend
    restart: always
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama_service
    restart: always
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

  nginx:
    image: nginx:latest
    container_name: nginx_proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - backend
      - ollama

volumes:
  ollama_data:

✅ Certificates

  • Place your SSL certs inside ./certs/:

    • fullchain.pem

    • privkey.pem

(You can generate free ones with Let’s Encrypt + certbot, or use self-signed for testing).


🚀 Run it

docker compose up -d --build

  • FastAPI backend → https://immai.acintia.com

  • WebSockets → wss://immai.acintia.com/ws/query

  • Ollama API proxied → https://immai.acintia.com/ollama/


⚠️ Question for you:
In your main.py, you’re pointing ChatOllama(base_url="https://immai.acintia.com", ...).
👉 Do you want me to adjust that so it talks directly to ollama service inside Docker (http://ollama:11434)? That way it doesn’t loop back through NGINX unnecessarily.


======================================================================

Got it ✅

We’ll make your FastAPI backend talk directly to the ollama service inside Docker, instead of looping through NGINX.

That means in main.py (or wherever you use ChatOllama), change this:

from langchain_community.chat_models import ChatOllama

llm = ChatOllama(
    base_url="https://immai.acintia.com",  # ❌ not needed
    model="llama2:7b",
)

👉 To this:

from langchain_community.chat_models import ChatOllama
import os

ollama_base_url = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")

llm = ChatOllama(
    base_url=ollama_base_url,
    model="llama2:7b",
)

✅ Why this works

  • Inside docker-compose.yml, we defined:

environment:
  - OLLAMA_BASE_URL=http://ollama:11434
  • So your backend container resolves ollama (Docker DNS) → Ollama container.

  • This avoids unnecessary SSL or reverse proxy overhead when the backend talks to Ollama.

  • External clients (browser, frontend, Postman) will still hit NGINX → FastAPI → Ollama using https://immai.acintia.com.


🔄 Next Steps for You

  1. Update main.py as above.

  2. Rebuild and restart with:

docker compose up -d --build

  3. Test WebSocket:

wscat -c wss://immai.acintia.com/ws/query
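
Before digging into WebSocket issues, it's worth confirming the backend container can reach Ollama over the Docker network at all (the backend image installs curl, and Ollama answers its root path with a plain status line):

docker compose exec backend curl -s http://ollama:11434
# expected output: Ollama is running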

Do you want me to also add a /health endpoint in FastAPI so you can quickly confirm backend and Ollama connectivity (e.g. GET /health returning "backend OK", "ollama OK") before debugging WebSockets?

Wednesday, September 17, 2025

Document for Docker + nginx + HTTPS setup for Company Recommender

Docker + nginx + HTTPS setup for Company Recommender

This document contains a ready-to-use Docker deployment for your FastAPI backend and Angular frontend, plus an NGINX reverse proxy. It also includes instructions for obtaining Let’s Encrypt certificates with Certbot (manual step).

Important: I placed all files and config examples below. Follow the numbered steps in Deployment to build, obtain certificates, and run in production.


Files included

  • docker-compose.yml — orchestrates backend, frontend, nginx, and certbot (optional)

  • backend/Dockerfile — builds your FastAPI app (uvicorn)

  • frontend/Dockerfile — builds Angular production bundle and serves using nginx

  • nginx/nginx.conf — main nginx config with HTTP -> HTTPS redirect

  • nginx/conf.d/immai.acintia.com.conf — site config (reverse proxy to backend + static hosting for frontend)

  • README_DEPLOY.md — deployment steps and Certbot instructions


docker-compose.yml

version: '3.8'
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: company_recommender_backend
    environment:
      - PORT=8000
      - OLLAMA_URL=https://immai.acintia.com
    expose:
      - "8000"
    restart: unless-stopped

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    container_name: company_recommender_frontend
    restart: unless-stopped
    expose:
      - "80"

  nginx:
    image: nginx:stable
    container_name: company_recommender_nginx
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - frontend
      - backend
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./certs:/etc/letsencrypt/live:ro
      - ./nginx/html:/usr/share/nginx/html:ro
    restart: unless-stopped

  certbot:
    image: certbot/certbot
    container_name: company_recommender_certbot
    volumes:
      - ./certs:/etc/letsencrypt/live
      - ./nginx/html:/var/www/html
    entrypoint: ''
    command: "/bin/sh -c 'sleep infinity'"
    restart: 'no'

networks:
  default:
    driver: bridge

Notes:

  • certbot here is present to allow you to run one-off cert issuance commands using the certbot container (instructions below).

  • ./certs will hold the live certificate files after you create them (mounted read-only into nginx).


backend/Dockerfile

# backend/Dockerfile
FROM python:3.11-slim

WORKDIR /app

# system deps (if needed)
RUN apt-get update && apt-get install -y --no-install-recommends build-essential curl && rm -rf /var/lib/apt/lists/*

# paths are relative to the build context (./backend in docker-compose.yml)
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENV PORT=8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "1"]

Make sure backend/requirements.txt contains: fastapi, uvicorn[standard], langgraph, langchain-core, langchain-community, pydantic, and any other packages your main.py imports.


frontend/Dockerfile (Angular)

# frontend/Dockerfile
# Build stage
FROM node:20 AS build
WORKDIR /app
# paths are relative to the build context (./frontend in docker-compose.yml)
COPY package*.json ./
RUN npm ci --legacy-peer-deps
COPY . .
RUN npm run build -- --configuration production

# Production stage
FROM nginx:stable
COPY --from=build /app/dist/ /usr/share/nginx/html/
# optional: copy a custom nginx conf for serving SPA (404 -> index.html handled by nginx conf)
# NOTE: with the ./frontend build context this file must live inside frontend/;
# copy spa.conf there (or widen the build context) before building
COPY nginx/spa.conf /etc/nginx/conf.d/default.conf

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Assumes your Angular build output lands in /app/dist/ (adjust if your project name differs; ng build by default creates dist/<project-name>).


nginx/nginx.conf

user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events { worker_connections 1024; }

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
}

nginx/conf.d/immai.acintia.com.conf

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name immai.acintia.com;

    root /var/www/html;

    location /.well-known/acme-challenge/ {
        alias /var/www/html/.well-known/acme-challenge/;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS server
server {
    listen 443 ssl http2;
    server_name immai.acintia.com;

    ssl_certificate /etc/letsencrypt/live/immai.acintia.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/immai.acintia.com/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Serve frontend static files
    location / {
        proxy_pass http://frontend:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # API proxying
    location /recommend {
        proxy_pass http://backend:8000/recommend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /chat {
        proxy_pass http://backend:8000/chat;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # optionally allow access to certbot challenge files served from nginx html
    location ^~ /.well-known/acme-challenge/ {
        alias /var/www/html/.well-known/acme-challenge/;
        allow all;
    }
}

The proxy_pass uses the Docker service names frontend and backend defined in docker-compose.yml so nginx communicates over the Docker network.


nginx/spa.conf (used by frontend image)

server {
  listen 80;
  server_name _;
  root /usr/share/nginx/html;
  index index.html;

  location / {
    try_files $uri $uri/ /index.html;
  }
}

README_DEPLOY.md — Deployment steps (summary)

  1. Clone repo and place the backend and frontend project folders side-by-side with docker-compose.yml and nginx/ folder.

project-root/
  ├─ backend/        # your FastAPI app (main.py, requirements.txt)
  ├─ frontend/       # your Angular project (package.json, angular.json)
  ├─ nginx/
  ├─ docker-compose.yml
  └─ certs/          # created after certbot

  2. Build images and start containers (without certs yet):

docker compose up -d --build

  3. Obtain Let’s Encrypt certificates using the certbot container + webroot method (run on the host):

  • Ensure DNS for immai.acintia.com points to the server IP.

  • Ensure port 80 is reachable externally and not blocked by firewall.

Run this command (example using a one-off certbot container):

# create the directory for challenge files
mkdir -p nginx/html/.well-known/acme-challenge

# run certbot interactively to obtain certs
docker run --rm -it \
  -v "$(pwd)/certs:/etc/letsencrypt/live" \
  -v "$(pwd)/nginx/html:/var/www/html" \
  certbot/certbot certonly --webroot \
    --webroot-path /var/www/html \
    -d immai.acintia.com \
    --email your-email@example.com --agree-tos --non-interactive

If successful, certificate files will be in ./certs/immai.acintia.com/ and will be mounted into the nginx container.

  4. Reload nginx to pick up the certificates:

docker compose restart nginx

  5. (Optional) Set up automatic renewal (cron on the host) using certbot renew, reloading nginx after each renewal, as sketched below.
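
A minimal host crontab sketch for that (the 03:17 schedule and /opt/company-recommender path are placeholders; the mounts mirror the issuance command above):

# m h dom mon dow  command: renew daily, then reload nginx to pick up new certs
17 3 * * * cd /opt/company-recommender && docker run --rm -v "$(pwd)/certs:/etc/letsencrypt/live" -v "$(pwd)/nginx/html:/var/www/html" certbot/certbot renew --webroot --webroot-path /var/www/html && docker compose restart nginx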

Local dev alternative (self-signed)

If you don't want to use certbot yet, you can create a self-signed cert and mount it into ./certs with the same filenames fullchain.pem and privkey.pem for testing.
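
For example, a throwaway pair can be generated with openssl (testing only; browsers will warn). The directory name matches what the nginx config expects under /etc/letsencrypt/live:

mkdir -p certs/immai.acintia.com
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout certs/immai.acintia.com/privkey.pem \
  -out certs/immai.acintia.com/fullchain.pem \
  -subj "/CN=immai.acintia.com"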


Additional operational notes

  • If you use Cloudflare, you may prefer to enable Cloudflare proxy and use their TLS — in that case you'd point nginx to use Cloudflare origin certs or use ssl_certificate accordingly.

  • Make sure your backend FastAPI app binds to 0.0.0.0 (the provided Dockerfile uses that). If your FastAPI uvicorn.run call uses 127.0.0.1, update it.

  • In production, consider using environment variables or a .env file for secrets and configuration. Also increase uvicorn worker count appropriately.

  • Monitor logs: docker compose logs -f nginx and docker compose logs -f backend.


Troubleshooting

  • Certbot fails: ensure port 80 isn't blocked and DNS resolves. Run certbot with --staging to test.

  • Backend 502 from nginx: check the proxy_pass host/port match your compose service names and ports; use docker compose ps to verify.


If you want, I can also:

  • Provide a .env and systemd unit file for auto-start on server boot.

  • Add a healthcheck to the backend service and generalize the nginx config to a single proxy_pass http://backend:8000; location instead of per-route blocks.


End of file.

Angular frontend for company-recommender-frontend

You have a FastAPI backend for recommending the type of company registration in India using an LLM. Now you want a web frontend that can talk to it (/recommend and /chat).

I’ll give you a simple Angular (latest) frontend with a form to collect inputs and a chat interface.


Steps to Build Frontend

1. Create Angular Project

npm install -g @angular/cli
ng new company-recommender-frontend
cd company-recommender-frontend
ng serve

2. Generate Components & Service

ng g c components/recommendation-form
ng g c components/chat
ng g s services/api

3. Angular Service (src/app/services/api.service.ts)

This will handle HTTP calls to your FastAPI backend. (Make sure HttpClientModule is imported in your AppModule, and FormsModule for the [(ngModel)] bindings used below.)

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class ApiService {
  private baseUrl = 'http://127.0.0.1:8000'; // Change to https://immai.acintia.com in production

  constructor(private http: HttpClient) {}

  getRecommendation(payload: any): Observable<any> {
    return this.http.post(`${this.baseUrl}/recommend`, payload);
  }

  chat(payload: any): Observable<any> {
    return this.http.post(`${this.baseUrl}/chat`, payload);
  }
}

4. Recommendation Form Component (src/app/components/recommendation-form/recommendation-form.component.ts)

import { Component } from '@angular/core';
import { ApiService } from '../../services/api.service';
import { v4 as uuidv4 } from 'uuid';

@Component({
  selector: 'app-recommendation-form',
  templateUrl: './recommendation-form.component.html'
})
export class RecommendationFormComponent {
  business_type = '';
  investment_amount: number | null = null;
  number_of_partners = 1;
  is_foreign_citizen = false;
  session_id = uuidv4(); // generate unique session

  recommendation: string | null = null;

  constructor(private api: ApiService) {}

  submitForm() {
    const payload = {
      business_type: this.business_type,
      investment_amount: this.investment_amount,
      number_of_partners: this.number_of_partners,
      is_foreign_citizen: this.is_foreign_citizen,
      session_id: this.session_id
    };

    this.api.getRecommendation(payload).subscribe({
      next: (res) => {
        this.recommendation = res.response;
      },
      error: (err) => {
        console.error(err);
      }
    });
  }
}

HTML (recommendation-form.component.html)

<div class="p-4 max-w-lg mx-auto">
  <h2 class="text-xl font-bold mb-2">Company Recommender Form</h2>

  <form (ngSubmit)="submitForm()">
    <label>Business Type:</label>
    <input [(ngModel)]="business_type" name="business_type" class="border p-2 w-full mb-2" required />

    <label>Investment Amount (INR):</label>
    <input type="number" [(ngModel)]="investment_amount" name="investment_amount" class="border p-2 w-full mb-2" />

    <label>Number of Partners:</label>
    <input type="number" [(ngModel)]="number_of_partners" name="number_of_partners" class="border p-2 w-full mb-2" />

    <label>
      <input type="checkbox" [(ngModel)]="is_foreign_citizen" name="is_foreign_citizen" />
      Foreign Citizen Involved
    </label>

    <button type="submit" class="bg-blue-500 text-white px-4 py-2 rounded mt-3">Get Recommendation</button>
  </form>

  <div *ngIf="recommendation" class="mt-4 p-3 bg-gray-100 rounded">
    <h3 class="font-bold">Recommendation:</h3>
    <p>{{ recommendation }}</p>
  </div>
</div>

5. Chat Component (src/app/components/chat/chat.component.ts)

import { Component, Input } from '@angular/core';
import { ApiService } from '../../services/api.service';

@Component({
  selector: 'app-chat',
  templateUrl: './chat.component.html'
})
export class ChatComponent {
  @Input() session_id!: string;
  userInput = '';
  messages: { sender: string, text: string }[] = [];

  constructor(private api: ApiService) {}

  sendMessage() {
    if (!this.userInput.trim()) return;

    this.messages.push({ sender: 'You', text: this.userInput });
    const payload = {
      user_input: this.userInput,
      session_id: this.session_id
    };

    this.api.chat(payload).subscribe({
      next: (res) => {
        this.messages.push({ sender: 'AI', text: res.response });
      },
      error: (err) => console.error(err)
    });

    this.userInput = '';
  }
}

HTML (chat.component.html)

<div class="p-4 max-w-lg mx-auto mt-6 border rounded">
  <h2 class="text-xl font-bold mb-2">Chat with AI</h2>

  <div class="h-60 overflow-y-auto bg-gray-100 p-2 mb-2 rounded">
    <div *ngFor="let msg of messages">
      <p [ngClass]="{'font-bold text-blue-700': msg.sender==='AI'}">
        {{ msg.sender }}: {{ msg.text }}
      </p>
    </div>
  </div>

  <div class="flex">
    <input [(ngModel)]="userInput" name="chatInput" class="flex-1 border p-2" placeholder="Ask something..." />
    <button (click)="sendMessage()" class="bg-green-500 text-white px-4 ml-2 rounded">Send</button>
  </div>
</div>

6. App Component (src/app/app.component.html)

<app-recommendation-form></app-recommendation-form>
<app-chat [session_id]="session_id"></app-chat>

(This assumes AppComponent exposes a session_id property, e.g. session_id = uuidv4();, and passes it to both children, instead of the form generating its own as shown above.)

7. Install Dependencies

npm install uuid

✅ Now you have a working Angular frontend for your FastAPI backend:

  • Fill form → /recommend

  • Chat with AI → /chat


Do you want me to also dockerize this Angular frontend and add it to your existing docker-compose.yml (with backend + nginx reverse proxy + HTTPS)?

-------------------


Upgrading reverse proxy server at https://immai.acintia.com

Upgrading the reverse proxy to serve your app securely at https://immai.acintia.com using NGINX + Let’s Encrypt (Certbot).

We’ll use docker-compose with a dedicated reverse proxy container and automatic SSL certificate management.


🔹 Step 1: Update NGINX Config for SSL

Replace your nginx/nginx.conf with this:

server {
    listen 80;
    server_name immai.acintia.com;

    # Redirect all HTTP traffic to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name immai.acintia.com;

    # SSL Certificates (managed by certbot)
    ssl_certificate /etc/letsencrypt/live/immai.acintia.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/immai.acintia.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Serve frontend Angular app
    location / {
        proxy_pass http://frontend:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Proxy backend FastAPI
    location /api/ {
        proxy_pass http://backend:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Proxy Ollama (optional)
    location /ollama/ {
        proxy_pass http://ollama:11434/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

🔹 Step 2: Extend docker-compose.yml

Here’s the updated file with nginx + certbot:

version: "3.9"

services:
  backend:
    build: .
    container_name: company-recommender-backend
    expose:
      - "8000"
    environment:
      - OLLAMA_URL=http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    expose:
      - "11434"
    volumes:
      - ollama_data:/root/.ollama

  frontend:
    build: ./frontend
    container_name: company-recommender-frontend
    expose:
      - "80"
    environment:
      - API_URL=/api
    depends_on:
      - backend

  proxy:
    image: nginx:alpine
    container_name: reverse-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    depends_on:
      - frontend
      - backend
      - ollama

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    entrypoint: >
      sh -c "trap exit TERM; while :; do certbot renew --quiet; sleep 12h & wait $${!}; done"

volumes:
  ollama_data:
  certbot-etc:
  certbot-var:

🔹 Step 3: Issue the SSL Certificate

Before starting everything, you need to request the certificate once:

docker run -it --rm \
  -v certbot-etc:/etc/letsencrypt \
  -v certbot-var:/var/lib/letsencrypt \
  -v "$(pwd)/nginx/nginx.conf":/etc/nginx/conf.d/default.conf \
  certbot/certbot certonly --webroot \
  --webroot-path=/var/lib/letsencrypt \
  --agree-tos --no-eff-email \
  -m youremail@example.com \
  -d immai.acintia.com

Replace youremail@example.com with your real email (for renewal notices). Two caveats: the webroot challenge only succeeds if the challenge directory is actually served on port 80 (the config above redirects everything to HTTPS, so you may need a location /.well-known/acme-challenge/ block), and Compose prefixes named volumes with the project name (check docker volume ls and mount the same volumes here so NGINX sees the issued certificate).


🔹 Step 4: Start the Stack

docker-compose up --build -d

Now certbot will attempt renewal every 12 hours (per the entrypoint loop in the compose file above).


🔹 Step 5: Verify HTTPS

Open:

curl -I https://immai.acintia.com

You should see HTTP/1.1 200 OK with a valid SSL certificate.
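
To inspect the served certificate itself (issuer and validity window), openssl works from any machine:

echo | openssl s_client -connect immai.acintia.com:443 -servername immai.acintia.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates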


👉 Do you want me to also configure auto-redirect of /api requests inside Angular (so your Angular code never needs to know /api explicitly), or keep it like this (frontend calls /api/...)?

NGINX reverse proxy (one domain support)

 Adding an NGINX reverse proxy so your users just hit one domain/port (e.g. http://localhost) and NGINX will serve:

  • Frontend at /

  • Backend API at /api/

  • Ollama at /ollama/ (optional, in case you want to expose it too)


🔹 Step 1: Create nginx.conf (reverse proxy)

Make a new folder nginx/ in your project root and add nginx.conf:

server {
  listen 80;

  server_name _;

  # Serve frontend Angular app
  location / {
    proxy_pass http://frontend:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }

  # Proxy backend FastAPI
  location /api/ {
    proxy_pass http://backend:8000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }

  # Proxy Ollama (optional)
  location /ollama/ {
    proxy_pass http://ollama:11434/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

🔹 Step 2: Add Reverse Proxy Service to docker-compose.yml

Extend your compose file:

version: "3.9"

services:
  backend:
    build: .
    container_name: company-recommender-backend
    expose:
      - "8000"
    environment:
      - OLLAMA_URL=http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    expose:
      - "11434"
    volumes:
      - ollama_data:/root/.ollama

  frontend:
    build: ./frontend
    container_name: company-recommender-frontend
    expose:
      - "80"
    environment:
      - API_URL=/api   # frontend calls backend via proxy
    depends_on:
      - backend

  proxy:
    image: nginx:alpine
    container_name: reverse-proxy
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - frontend
      - backend
      - ollama

volumes:
  ollama_data:

🔹 Step 3: Adjust Angular Runtime Config

Since the reverse proxy maps backend → /api/, update frontend/src/assets/env.js:

(function(window) {
  window["env"] = window["env"] || {};
  window["env"]["API_URL"] = "${API_URL}";
})(this);

In docker-compose.yml, the frontend service already sets:

environment:
  - API_URL=/api

So the Angular app will now call /api/..., which gets routed to the backend by NGINX.


🔹 Step 4: Rebuild & Run

From your project root:

docker-compose up --build

✅ Final Setup

  • Visit → http://localhost → Angular frontend served.

  • Frontend API calls → /api/... → routed to backend.

  • Optional Ollama calls → /ollama/....

Everything is cleanly under one domain and one port (80) 🎯.
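
A quick smoke test of the three routes from the host (the /ollama/ root answers with a plain status line; the /api/ check assumes your backend serves something at its root path):

curl -I http://localhost/           # Angular frontend via the proxy
curl -I http://localhost/api/       # FastAPI backend via the proxy
curl http://localhost/ollama/       # should print: Ollama is running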


👉 Do you want me to also add HTTPS (SSL/TLS) with Let’s Encrypt support to this proxy so you can put it on a real server/domain securely?

Live API URL injection

Let’s set up live API URL injection so you can change the backend URL without rebuilding the Angular frontend. This is super useful for Docker because the container just reads env variables at runtime.


🔹 Step 1: Use env.js instead of Angular’s static environment.ts

We already created src/assets/env.js:

(function(window) {
  window["env"] = window["env"] || {};
  window["env"]["API_URL"] = "${API_URL}";
})(this);

This file will get replaced with real env vars when the container starts.


🔹 Step 2: Modify Angular ApiService

Update src/app/services/api.service.ts:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class ApiService {
  private apiUrl = (window as any)["env"]?.API_URL || "http://localhost:8000";

  constructor(private http: HttpClient) {}

  getRecommendation(payload: any): Observable<any> {
    return this.http.post(`${this.apiUrl}/recommend`, payload);
  }

  chat(payload: any): Observable<any> {
    return this.http.post(`${this.apiUrl}/chat`, payload);
  }
}

🔹 Step 3: Add an Entrypoint Script in Docker

Create frontend/docker-entrypoint.sh:

#!/bin/sh

# Replace placeholder vars in env.js with real env vars
echo "Injecting runtime environment variables..."
sed -i "s|\${API_URL}|${API_URL:-http://localhost:8000}|g" /usr/share/nginx/html/assets/env.js

exec "$@"

Make it executable:

chmod +x frontend/docker-entrypoint.sh

🔹 Step 4: Update Frontend Dockerfile

Modify frontend/Dockerfile:

# Stage 1: Build Angular app
FROM node:18-slim as build

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build -- --configuration production

# Stage 2: Serve with Nginx
FROM nginx:alpine

# Copy build output
COPY --from=build /app/dist/frontend /usr/share/nginx/html

# Copy custom entrypoint
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh

# Copy nginx config
COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80

ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]

🔹 Step 5: Update docker-compose.yml

version: "3.9"

services:
  backend:
    build: .
    container_name: company-recommender-backend
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_URL=http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

  frontend:
    build: ./frontend
    container_name: company-recommender-frontend
    ports:
      - "3000:80"
    environment:
      - API_URL=http://backend:8000
    depends_on:
      - backend

volumes:
  ollama_data:

✅ Final Behavior

  • At runtime, when you start the container, docker-entrypoint.sh injects API_URL into env.js.

  • Your Angular app reads from window.env.API_URL.

  • You can point the frontend to a different backend without rebuilding the Angular image.

Example:

docker-compose up --build

Frontend → http://localhost:3000
Backend → http://localhost:8000
Ollama → http://localhost:11434
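
To confirm the entrypoint substitution actually ran, fetch env.js from the running frontend and check that the placeholder is gone:

curl -s http://localhost:3000/assets/env.js
# should now contain the injected value, e.g.
#   window["env"]["API_URL"] = "http://backend:8000";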


👉 Do you want me to also add an NGINX reverse proxy in front (so that frontend and backend are served under one domain, e.g. /api), or keep them on separate ports (3000 + 8000)?

Angular Front End (UI)

Perfect 👍 Let’s do this with an Angular (latest) frontend: 5 clear steps + code so that it works with the backend and Ollama inside the same docker-compose setup.


✅ Step 1: Create Angular Frontend

Run this on your host machine (outside Docker):

npm install -g @angular/cli
ng new frontend --routing --style=css

This creates a new Angular app in frontend/.

Move into the folder:

cd frontend

✅ Step 2: Add an API Service

Inside Angular, create a service for backend calls:

ng generate service services/api

This generates src/app/services/api.service.ts. Replace with:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class ApiService {
  private apiUrl = (window as any)['env']?.API_URL || 'http://localhost:8000';

  constructor(private http: HttpClient) {}

  getRecommendation(payload: any): Observable<any> {
    return this.http.post(`${this.apiUrl}/recommend`, payload);
  }

  chat(payload: any): Observable<any> {
    return this.http.post(`${this.apiUrl}/chat`, payload);
  }
}

✅ Step 3: Allow Environment Config at Runtime

Angular normally hardcodes environment.ts, but for Docker we want runtime config.

Create a file: src/assets/env.js:

(function(window) {
  window["env"] = window["env"] || {};
  window["env"]["API_URL"] = "${REACT_APP_API_URL}";
})(this);

Add it in angular.json under assets so it’s copied into the build:

"assets": [
  "src/favicon.ico",
  "src/assets",
  "src/assets/env.js"
]

✅ Step 4: Update AppComponent to Call Backend

Modify src/app/app.component.ts:

import { Component } from '@angular/core';
import { ApiService } from './services/api.service';

@Component({
  selector: 'app-root',
  template: `
    <h1>AI Company Recommender</h1>
    <button (click)="getRecommendation()">Get Recommendation</button>
    <p *ngIf="response">{{ response }}</p>
  `
})
export class AppComponent {
  response: string | null = null;

  constructor(private api: ApiService) {}

  getRecommendation() {
    const payload = {
      business_type: "IT Startup",
      investment_amount: 100000,
      number_of_partners: 2,
      is_foreign_citizen: false,
      session_id: "demo-session"
    };
    this.api.getRecommendation(payload).subscribe(res => {
      this.response = res.response;
    });
  }
}

✅ Step 5: Dockerize Angular Frontend

Create frontend/Dockerfile:

# Stage 1: Build Angular app
FROM node:18-slim as build

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build -- --configuration production

# Stage 2: Serve with Nginx
FROM nginx:alpine

# Copy build output
COPY --from=build /app/dist/frontend /usr/share/nginx/html

# Replace env vars in env.js at runtime
COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Create frontend/nginx.conf:

server {
  listen 80;

  location / {
    root   /usr/share/nginx/html;
    index  index.html;
    try_files $uri /index.html;
  }

  # Allow API proxy if needed
  location /api/ {
    proxy_pass http://backend:8000/;
  }
}

✅ docker-compose.yml (final with Angular frontend)

version: "3.9"

services:
  backend:
    build: .
    container_name: company-recommender-backend
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_URL=http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

  frontend:
    build: ./frontend
    container_name: company-recommender-frontend
    ports:
      - "3000:80"
    environment:
      - REACT_APP_API_URL=http://backend:8000
    depends_on:
      - backend

volumes:
  ollama_data:

✅ Usage

From project root (where docker-compose.yml is):

docker-compose up --build

Now:

  • Frontend → http://localhost:3000

  • Backend → http://localhost:8000

  • Ollama → http://localhost:11434


👉 Do you want me to also set up live API URL injection (so Angular doesn’t need rebuilds when backend URL changes), or is the static env.js approach good enough for your case?

+ Frontend service → docker-compose.yml (all in one)

This extends the above docker-compose.yml so it runs backend + Ollama + frontend in one go. Assumption: the frontend is a simple Node.js/React app (like a Vite or CRA build).


✅ docker-compose.yml (with frontend)

version: "3.9"

services:
  backend:
    build: .
    container_name: company-recommender-backend
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_URL=http://ollama:11434
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

  frontend:
    build: ./frontend   # assumes you have a frontend/ directory
    container_name: company-recommender-frontend
    ports:
      - "3000:3000"
    environment:
      - REACT_APP_API_URL=http://backend:8000   # frontend talks to backend inside Docker
    depends_on:
      - backend

volumes:
  ollama_data:

✅ Frontend Dockerfile (in frontend/Dockerfile)

If your frontend is React (Vite or CRA):

# Use Node.js for building the frontend
FROM node:18-slim

WORKDIR /app

# Install dependencies first
COPY package.json package-lock.json ./
RUN npm install

# Copy all frontend files
COPY . .

# Start dev server (hot reload) or build for production
# For dev mode:
CMD ["npm", "start"]

# For production (static build with Nginx), replace above CMD with:
# RUN npm run build
# FROM nginx:alpine
# COPY --from=0 /app/dist /usr/share/nginx/html
# EXPOSE 80
# CMD ["nginx", "-g", "daemon off;"]

👉 If you want production-ready deployment, I’d recommend the Nginx option (the commented block above), because it’s lighter and faster than running a Node dev server.


✅ Usage

From project root (where docker-compose.yml is):

docker-compose up --build
  • Backend → http://localhost:8000

  • Frontend → http://localhost:3000 (and it calls backend via Docker network using http://backend:8000)

  • Ollama → http://localhost:11434



docker-compose.yml for backend ALONE

To run your FastAPI backend in one container and later connect it with Ollama (running either locally or in another container), here’s a simple docker-compose.yml setup.


✅ docker-compose.yml

version: "3.9"

services:
  backend:
    build: .
    container_name: company-recommender-backend
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_URL=http://ollama:11434  # backend talks to ollama service
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama  # persist models

volumes:
  ollama_data:

✅ Changes in your main.py

Right now you hardcoded:

ollama_url = "https://immai.acintia.com"

Update this to read from an environment variable so it works inside Docker:

import os

ollama_url = os.getenv("OLLAMA_URL", "http://localhost:11434")

That way:

  • If you run with docker-compose, it will connect to the ollama container.

  • If you run locally without Docker, it defaults to http://localhost:11434.


✅ Usage

From your project root:

docker-compose up --build

Then visit:

http://localhost:8000

Your backend will talk to the Ollama container automatically 🎯.
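
One thing the compose file does not do is download a model: the Ollama container starts empty. Pull one into the persistent volume before testing (the model name here is only an example):

docker exec -it ollama ollama pull gemma3:4b
docker exec -it ollama ollama list    # confirm the model is available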



Dockerize CAAI_AI (As per Request from Maran)

This supplements the CAAI_AI repository.

Here’s a Dockerfile you can use to dockerize your FastAPI backend (main.py). I’ll assume your project structure looks like this:

project-root/
│── main.py
│── requirements.txt
│── Dockerfile

✅ Dockerfile

# Use an official lightweight Python image
FROM python:3.11-slim

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1

# Set work directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential curl git && \
    rm -rf /var/lib/apt/lists/*

# Copy requirements first for efficient caching
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the entire project
COPY . .

# Expose FastAPI port
EXPOSE 8000

# Run the FastAPI app with uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

✅ requirements.txt

If you don’t already have one, here’s what your requirements.txt should contain based on your code:

fastapi
uvicorn[standard]
pydantic
langchain
langchain-community
langgraph

(If you know extra versions you want pinned, you can add == versions.)


✅ Build & Run Docker Container

From your project root:

# Build the image
docker build -t company-recommender-backend .

# Run the container
docker run -d -p 8000:8000 company-recommender-backend

⚡ Your FastAPI backend will now be available at:

http://localhost:8000
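
FastAPI serves interactive API docs by default, which doubles as a smoke test that the container is up:

curl -I http://localhost:8000/docs           # Swagger UI
curl -s http://localhost:8000/openapi.json   # generated OpenAPI schema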




Thursday, September 11, 2025

Ollama in K8s (namespace: immai) [note the response time at the end]

Jeyachandran configured and installed Ollama and provided the URL (as given below).

Step 1: To check that Ollama is running, type this in a browser:

                     http://167.86.108.113:30007/ 

                     Response :  Ollama is running

Step 2: In VS Code, run this Python script:

import requests
import time
import json

# Ollama server endpoint
url = "https://immai.acintia.com/api/generate"

# Prompt to test
prompt = "Explain quantum computing in simple words"

payload = {
    "model": "gemma3:4b",
    "prompt": prompt,
    "stream": True   # set to False if you want the whole output at once
}

# Start timer
start_time = time.time()

response = requests.post(url, json=payload, stream=True)

output_text = ""
for line in response.iter_lines():
    if line:
        data = json.loads(line.decode("utf-8"))
        if "response" in data:
            output_text += data["response"]

# End timer
end_time = time.time()
elapsed = end_time - start_time

print("=== Prompt ===")
print(prompt)
print("\n=== Response ===")
print(output_text.strip())
print(f"\n⏱ Response time: {elapsed:.2f} seconds")

Response:
PS F:\xampp\htdocs\wp-plugins\EduSite> & C:/Users/AURMC/AppData/Local/Microsoft/WindowsApps/python3.11.exe f:/xampp/htdocs/wp-plugins/EduSite/test.py
=== Prompt ===
Explain quantum computing in simple words

=== Response ===
Okay, let's break down quantum computing in a way that's hopefully easy to understand. It's a really complex field, but here’s the gist:

**1. Regular Computers (Classical Computers):**

* **Bits:** Think of a regular computer as using tiny switches that are either **on** (1) or **off** (0). Everything a regular computer does – from browsing the internet to playing games – is based on these 0s and 1s.
* **One thing at a time:**  A bit can only be in one state (0 or 1) at any given moment.  It’s like a light switch – it's either on *or* off, but not both at the same time.


**2. Quantum Computers:**

* **Qubits:** Instead of bits, quantum computers use **qubits**. These are based on the weird rules of quantum mechanics.
* **Superposition:** This is the key! A qubit can be **both 0 and 1 *at the same time***. Think of it like a spinning coin – it’s neither heads nor tails until you stop it and look.  This "both at once" ability is called superposition.
* **Entanglement:**  This is even weirder.  When two qubits are entangled, they become linked in a spooky way.  If you measure the state of one, you instantly know the state of the other, no matter how far apart they are.

**3. What does this all mean?**

* **Massive Parallelism:** Because a qubit can be in multiple states at once, a quantum computer can explore many possibilities simultaneously.  A regular computer has to try each option one after another.  This allows quantum computers to potentially solve certain problems *much* faster than classical computers.
* **Specific Problems:** Quantum computers aren't going to replace your laptop. They're designed for *specific* types of problems that are incredibly complex for classical computers.  These include:        
    * **Drug Discovery:** Simulating molecules to find new medicines.
    * **Materials Science:** Designing new materials with specific properties.

**4. Analogy Time:**

Imagine trying to find your way through a maze.

* **Classical Computer:**  You try one path, if it's a dead end, you backtrack and try another.
* **Quantum Computer:** You explore *all* the paths simultaneously!

**Important Note:** Quantum computing is still in its very early stages. Building and programming quantum computers is incredibly difficult.  They are fragile and prone to errors.

---

**Resources to learn more:**

* **IBM Quantum Experience:** [https://quantum.ibm.com/](https://quantum.ibm.com/) – A great place to experiment with real quantum computers.
* **Wikipedia - Quantum Computing:** [https://en.wikipedia.org/wiki/Quantum_computing](https://en.wikipedia.org/wiki/Quantum_computing)


Do you want me to delve into a specific aspect of quantum computing, like:

*   Superposition in more detail?
*   Entanglement?
*   What types of problems are most suited for quantum computers?

⏱ Response time: 158.22 seconds
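As the comment in the payload notes, you can also set stream to False and receive the whole output at once; a minimal non-streaming variant (reusing the same url, model, and prompt from above) would look like this:

# Non-streaming call: Ollama returns a single JSON object with the full text
resp = requests.post(url, json={"model": "gemma3:4b", "prompt": prompt, "stream": False})
resp.raise_for_status()
print(resp.json()["response"])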


After changing the URL (from the IP address to the domain) and adding Maran's prompt template: response + UI

import streamlit as st
import requests
import time
import json

# Default Ollama server endpoint
DEFAULT_URL = "https://immai.acintia.com/api/generate"

st.set_page_config(page_title="AI Legal Assistant (India)", layout="wide")
st.title("⚖️ AI Legal Assistant (Indian Laws & Corporate Regulations)")

# Sidebar for settings
st.sidebar.header("Settings")
url = st.sidebar.text_input("Ollama API URL", DEFAULT_URL)
model = st.sidebar.text_input("Model", "gemma3:4b")

# ✅ Initialize conversation history in session state
if "history" not in st.session_state:
    st.session_state["history"] = []

# Prompt input
query = st.text_area("Enter your legal query:", height=100)

# Template
PROMPT_TEMPLATE = (
    "You are an AI Legal Assistant specialized only in Indian laws and corporate regulations. "
    "You must not answer any queries unrelated to Indian legal and corporate law. "
    "Provide a brief summary (2-3 bullet points) by default. If the user asks for more details, "
    "a deeper explanation, or a follow-up question, provide a thorough response based on the conversation history. "
    "Use **bold**, *italics*, bullet points (e.g., - Bullet), and emojis where appropriate. "
    "Current conversation:\n{history}\n"
    "--- Examples ---\n"
    "User: What are the documents required for private limited company registration in India?\n"
    "Legal Assistant: \n"
    "- **Identity Proof**: PAN card, passport, Aadhar card, or driver's license of all directors and shareholders.\n"
    "- **Address Proof**: Voter ID, passport, or a recent utility bill (less than 2 months old).\n"
    "- **Registered Office Proof**: Rent agreement or sale deed + NOC from landlord.\n"
    "- **DIN** and **DSC** for all directors.\n\n"
    "User: What is the weather like today in Tokyo?\n"
    "Legal Assistant: I'm sorry, I can only provide information related to Indian laws and corporate regulations.\n"
    "--- End of Examples ---\n"
    "User: {query}\n\nLegal Assistant:"
)

if st.button("Ask Legal Assistant"):
    if not query.strip():
        st.warning("⚠️ Please enter a query")
    else:
        # Format conversation history
        history_str = "\n".join(
            [f"User: {h['user']}\nLegal Assistant: {h['assistant']}" for h in st.session_state["history"]]
        )

        # Final prompt with history + template
        final_prompt = PROMPT_TEMPLATE.format(history=history_str, query=query)

        payload = {
            "model": model,
            "prompt": final_prompt,
            "stream": True
        }

        st.info("⏳ Generating response...")

        start_time = time.time()
        try:
            response = requests.post(url, json=payload, stream=True, timeout=600)

            output_text = ""
            text_box = st.empty()

            for line in response.iter_lines():
                if line:
                    data = json.loads(line.decode("utf-8"))
                    if "response" in data:
                        output_text += data["response"]
                        text_box.markdown(f"**Response (streaming):**\n\n{output_text}")

            end_time = time.time()
            elapsed = end_time - start_time

            st.success(f"✅ Done in {elapsed:.2f} seconds")

            # Store in history
            st.session_state["history"].append({"user": query, "assistant": output_text.strip()})

            st.markdown("### Final Response")
            st.write(output_text.strip())

        except Exception as e:
            st.error(f"Error: {e}")

# ✅ Safe check for history before showing
if st.session_state.get("history"):
    st.markdown("## 📝 Conversation History")
    for h in st.session_state["history"]:
        st.markdown(f"**User:** {h['user']}")
        st.markdown(f"**Assistant:** {h['assistant']}")
        st.markdown("---")






Friday, April 25, 2025

MCP Agent for Name Checking MCA Portal

Perfect — if you're using Ollama to run local LLMs (like llama3, mistral, or custom models), you can absolutely build an agent-based system that checks company name availability using the Ministry of Corporate Affairs (MCA), India site.


🧠 Goal:

Create a local AI agent (powered by an Ollama model) that:

  1. Accepts a company name as input.

  2. Opens the MCA name check website.

  3. Inputs the name.

  4. Returns whether the name is available or not.


🛠️ Tools & Tech Stack

Component              Purpose
---------              -------
Ollama                 Runs the LLM (e.g., llama3, mistral)
Python                 Main script + automation
Selenium / Playwright  Web interaction (headless browser)
LangChain (Optional)   Framework to wrap the agent logic
FastAPI (Optional)     API to call the agent locally

🧩 Step-by-Step Implementation

1. 🧠 Run LLM Locally with Ollama

Install Ollama and run a model:

ollama run mistral

Or start a background service:

ollama serve

You can load any model, like:

ollama run llama3
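Once the server is up, you can confirm it is reachable; Ollama listens on port 11434 by default, and /api/tags lists the models pulled locally:

curl http://localhost:11434/api/tags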

2. 🐍 Build the Agent Logic in Python

You’ll want to build a tool-using agent setup in which the LLM can trigger actions. Here's a basic script:

mca_agent.py

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import NoSuchElementException
import time

def check_company_name_availability(company_name):
    options = Options()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)

    driver.get("https://www.mca.gov.in/mcafoportal/showCheckCompanyName.do")

    # MCA site sometimes loads slowly
    time.sleep(3)

    input_box = driver.find_element(By.ID, "companyName")
    input_box.send_keys(company_name)

    # Submit the form
    driver.find_element(By.ID, "companySubmit").click()

    # Wait for result to render
    time.sleep(3)

    # Extract the result message
    try:
        result_element = driver.find_element(By.ID, "companyResult")
        result = result_element.text
    except NoSuchElementException:
        result = "Unable to fetch result or page layout changed."

    driver.quit()
    return result
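
Before wiring in the LLM, you can sanity-check the scraper on its own by adding a small entry point (the sample name here is just an illustration):

# Quick manual test: python mca_agent.py
if __name__ == "__main__":
    print(check_company_name_availability("TechNova Solutions"))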

3. 🧠 Connect This to Ollama via a Local LLM Agent

You can create a lightweight LLM interface like this:

llm_agent.py

import subprocess
import json
from mca_agent import check_company_name_availability

def run_llm(prompt: str):
    """Use ollama locally to run a prompt and return a reply"""
    process = subprocess.run(
        ["ollama", "run", "mistral"],
        input=prompt.encode(),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )
    return process.stdout.decode()

def main():
    while True:
        user_input = input("🧑‍💼 Enter company name to check availability: ")
        if user_input.lower() in ["exit", "quit"]:
            break

        # Use the LLM to interpret or enrich the prompt (optional)
        system_prompt = f"""
        The user wants to check if the company name "{user_input}" is available for registration in India via the MCA portal.
        Just confirm and explain what will happen next. 
        """

        print(run_llm(system_prompt))

        # Actually run the tool
        result = check_company_name_availability(user_input)
        print(f"✅ MCA Portal Result:\n{result}\n")

if __name__ == "__main__":
    main()

4. ✅ Example Run

🧑‍💼 Enter company name to check availability: TechNova Solutions
🤖 LLM: Okay, I will check if "TechNova Solutions" is available for registration via the MCA portal.

✅ MCA Portal Result:
The name 'TechNova Solutions' is already in use or too similar to an existing company.

Optional: Use LangChain for Agent-Orchestration

You can also plug this into a LangChain Agent with tools:

  • Tool: CheckMCANameTool

  • Agent: LLM with tool access

  • Prompt: "Is 'ZebraSoft India' available on MCA?"

Let me know if you want the LangChain + Ollama version too!
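
For reference, a minimal sketch of that wiring is below. It assumes a classic (pre-1.0) LangChain install with the langchain-community package; the exact imports move around between LangChain versions, so treat this as a starting point rather than a definitive implementation:

from langchain.agents import initialize_agent, AgentType
from langchain.tools import Tool
from langchain_community.llms import Ollama
from mca_agent import check_company_name_availability

# Wrap the Selenium checker so the agent can call it as a tool
mca_tool = Tool(
    name="CheckMCANameTool",
    func=check_company_name_availability,
    description="Checks whether a company name is available on the MCA (India) portal.",
)

llm = Ollama(model="mistral")  # assumes `ollama serve` is already running

agent = initialize_agent(
    tools=[mca_tool],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

print(agent.run("Is 'ZebraSoft India' available on MCA?"))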


🔐 Note:

  • MCA's site doesn’t have an official API, so this is browser automation (Selenium).

  • Always follow the MCA's terms of use.


Response: for company name entered as TESLA

🧑‍💼 Enter company name to check availability: TESLA

 To check the availability of a company name such as "TESLA" in India, you would need to use the Ministry of Corporate Affairs (MCA) portal. However, as a responsive AI model, I don't have real-time capabilities or access to external databases like the MCA portal. Here's what you can do:


1. Visit the official MCA portal at https://mca.gov.in/

2. Navigate to the 'Incorporate a company' section, often found under the 'Services' tab.

3. Look for the option to check name availability and click on it. You might need to register yourself if you haven't already.

4. Enter the proposed name "TESLA" (without quotes) in the provided field and follow the instructions to find out its availability.


Keep in mind that company names in India are not permitted to be identical or too similar to an existing name, or offensive, or potentially misleading. Also, certain words require government approval. For instance, 'Tesla' is a foreign word and might need additional approvals if it's the main component of your proposed company name.


If the name is available, you can proceed with the incorporation process, which involves preparing and submitting various documents and paying the required fees. If the name is not available, you will have to choose another name.


To stay updated on India's business registration process, visit the MCA portal or consult a professional service provider for guidance.




DevTools listening on ws://127.0.0.1:49360/devtools/browser/ab054f98-e916-4f94-9637-74557b3b00c9

Traceback (most recent call last):
  File "d:\demo\caai-ai\mcp-agentforcompanyname\llm_agent.py", line 34, in <module>
    main()
    ~~~~^^
  File "d:\demo\caai-ai\mcp-agentforcompanyname\llm_agent.py", line 30, in main
    result = check_company_name_availability(user_input)
  File "d:\demo\caai-ai\mcp-agentforcompanyname\mca_agent.py", line 16, in check_company_name_availability
    input_box = driver.find_element(By.ID, "companyName")
  File "C:\Users\AURMC\AppData\Roaming\Python\Python313\site-packages\selenium\webdriver\remote\webdriver.py", line 898, in find_element
    return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\AURMC\AppData\Roaming\Python\Python313\site-packages\selenium\webdriver\remote\webdriver.py", line 429, in execute
    self.error_handler.check_response(response)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
  File "C:\Users\AURMC\AppData\Roaming\Python\Python313\site-packages\selenium\webdriver\remote\errorhandler.py", line 232, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="companyName"]"}
  (Session info: chrome=135.0.7049.115); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
Stacktrace:
        GetHandleVerifier [0x00007FF70D23EFA5+77893]
        GetHandleVerifier [0x00007FF70D23F000+77984]
        (No symbol) [0x00007FF70D0091BA]
        (No symbol) [0x00007FF70D05F16D]
        (No symbol) [0x00007FF70D05F41C]
        (No symbol) [0x00007FF70D0B2237]
        (No symbol) [0x00007FF70D08716F]
        (No symbol) [0x00007FF70D0AF07F]
        (No symbol) [0x00007FF70D086F03]
        (No symbol) [0x00007FF70D050328]
        (No symbol) [0x00007FF70D051093]
        GetHandleVerifier [0x00007FF70D4F7B6D+2931725]
        GetHandleVerifier [0x00007FF70D4F2132+2908626]
        GetHandleVerifier [0x00007FF70D5100F3+3031443]
        GetHandleVerifier [0x00007FF70D2591EA+184970]
        GetHandleVerifier [0x00007FF70D26086F+215311]
        GetHandleVerifier [0x00007FF70D246EC4+110436]
        GetHandleVerifier [0x00007FF70D247072+110866]
        GetHandleVerifier [0x00007FF70D22D479+5401]
        BaseThreadInitThunk [0x00007FFC2732259D+29]
        RtlUserThreadStart [0x00007FFC287CAF38+40]

PS D:\demo\caai-ai\mcp-agentforcompanyname>

-----------------------------------------------------------------------------------------------------------------------

Anyway, still trying to figure this one out! The NoSuchElementException above means Selenium could not find an input with id "companyName", so the MCA page layout (or its element IDs) has changed and the selectors in mca_agent.py need updating.


