Sunday, October 5, 2025

Ready-made Bash script for Docker workflow

A ready-made Bash script that automates your Docker workflow end-to-end:

  • Build backend & frontend images

  • Tag & push them to Docker Hub

  • Pull & run images on any machine

  • Set up Certbot HTTPS via a temporary NGINX container

Below is a complete script. You just need to edit your Docker Hub username, email, and domain at the top.


deploy_docker.sh

#!/bin/bash
set -e

# -----------------------------
# CONFIGURATION - EDIT THESE
# -----------------------------
DOCKERHUB_USER="yourhubusername"
DOMAIN="immai.acintia.com"
EMAIL="your-email@domain.com"

BACKEND_IMAGE="${DOCKERHUB_USER}/ollama-backend:latest"
FRONTEND_IMAGE="${DOCKERHUB_USER}/ollama-frontend:latest"

# -----------------------------
# STEP 1: Build Docker images
# -----------------------------
echo "Building backend image..."
docker build -t $BACKEND_IMAGE ./backend

echo "Building frontend image..."
docker build -t $FRONTEND_IMAGE ./frontend

# -----------------------------
# STEP 2: Push images to Docker Hub
# -----------------------------
echo "Logging in to Docker Hub..."
docker login

echo "Pushing backend image..."
docker push $BACKEND_IMAGE

echo "Pushing frontend image..."
docker push $FRONTEND_IMAGE

# -----------------------------
# STEP 3: Run containers on this machine
# -----------------------------
echo "Stopping any running containers..."
docker stop ollama-backend ollama-frontend nginx-proxy || true
docker rm ollama-backend ollama-frontend nginx-proxy || true

echo "Running backend container..."
docker run -d --name ollama-backend -p 8000:8000 $BACKEND_IMAGE

echo "Running frontend container..."
docker run -d --name ollama-frontend -p 8080:80 $FRONTEND_IMAGE

# -----------------------------
# STEP 4: Set up temporary NGINX for Certbot
# -----------------------------
echo "Creating temporary NGINX for Certbot challenge..."
# Mount the certbot webroot at NGINX's default docroot so the ACME
# challenge files written by Certbot are actually served at
# /.well-known/acme-challenge/.
docker run -d --name nginx-certbot \
  -p 80:80 \
  -v certbot-www:/usr/share/nginx/html \
  -v certbot-etc:/etc/letsencrypt \
  nginx:alpine

sleep 5

echo "Obtaining HTTPS certificate with Certbot..."
docker run --rm \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  certbot/certbot certonly \
  --webroot --webroot-path=/var/www/certbot \
  --email $EMAIL \
  -d $DOMAIN \
  --agree-tos --no-eff-email

echo "Stopping temporary NGINX..."
docker stop nginx-certbot
docker rm nginx-certbot

# -----------------------------
# STEP 5: Run NGINX with HTTPS
# -----------------------------
echo "Running NGINX proxy with HTTPS..."
cat > nginx.conf <<EOF
server {
    listen 80;
    server_name $DOMAIN;
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location / {
        return 301 https://\$host\$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name $DOMAIN;

    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    location /api/ {
        proxy_pass http://host.docker.internal:8000/;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://host.docker.internal:8080/;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }
}
EOF

# host.docker.internal is not resolvable on Linux by default; --add-host
# maps it to the host gateway so the proxy can reach the published ports.
docker run -d --name nginx-proxy \
  --add-host=host.docker.internal:host-gateway \
  -p 80:80 -p 443:443 \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  -v "$(pwd)/nginx.conf":/etc/nginx/conf.d/default.conf:ro \
  nginx:alpine

echo "✅ Deployment complete!"
echo "Frontend: https://$DOMAIN/"
echo "Backend API: https://$DOMAIN/api/"
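Once the script finishes, a quick smoke test can confirm that both endpoints respond. A minimal sketch (the helper names are hypothetical; adjust DOMAIN to match the configuration above):

```shell
#!/bin/sh
# smoke_test.sh - minimal post-deploy check (hypothetical helper names).
DOMAIN="immai.acintia.com"

# is_success: true for HTTP status codes in the 2xx/3xx range.
is_success() {
  [ "$1" -ge 200 ] && [ "$1" -lt 400 ]
}

# check: fetch a URL and report OK/FAIL based on the status code.
check() {
  code=$(curl -sk --max-time 10 -o /dev/null -w '%{http_code}' "$1" || true)
  if is_success "$code"; then
    echo "OK   $1 ($code)"
  else
    echo "FAIL $1 ($code)"
  fi
}

if command -v curl >/dev/null 2>&1; then
  check "https://$DOMAIN/"
  check "https://$DOMAIN/api/"
fi
```

The `-k` flag tolerates a certificate that has not propagated yet; drop it once TLS is stable.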

✅ Features

  1. Build: backend & frontend Docker images.

  2. Push: tags and uploads to Docker Hub.

  3. Run: pulls the latest images and runs containers on your host.

  4. Certbot HTTPS: temporary NGINX container to get TLS certificates.

  5. Production NGINX: runs NGINX reverse proxy with HTTPS for frontend & backend.


🔧 How to use

  1. Make the script executable:

chmod +x deploy_docker.sh

  2. Run it:

./deploy_docker.sh

  3. Access the services:

  • Frontend: https://immai.acintia.com/

  • Backend: https://immai.acintia.com/api/

  4. Certificates are stored in the Docker volumes certbot-www and certbot-etc.

  5. To renew certificates:

docker run --rm \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  certbot/certbot renew --webroot -w /var/www/certbot
docker restart nginx-proxy
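The renewal commands above can be wrapped for cron. A sketch (the script name and DRY_RUN convention are hypothetical; by default this version only prints what it would do):

```shell
#!/bin/sh
# renew_certs.sh - hypothetical cron wrapper around the renewal commands above.
# With DRY_RUN=1 (the default in this sketch) the commands are only printed.
DRY_RUN="${DRY_RUN:-1}"

# run: execute a command, or just print it when DRY_RUN=1.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run docker run --rm \
  -v certbot-www:/var/www/certbot \
  -v certbot-etc:/etc/letsencrypt \
  certbot/certbot renew --webroot -w /var/www/certbot
run docker restart nginx-proxy
```

A weekly crontab entry such as `0 3 * * 1 DRY_RUN=0 /path/to/renew_certs.sh` would attempt renewal; `certbot renew` only replaces certificates within 30 days of expiry, so running it frequently is safe.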


Ready-to-use Kubernetes folder 05.02.2025

A ready-to-use Kubernetes folder with all manifests for your backend, frontend, and optional Ollama service, fully configured for HTTPS via cert-manager. You only need to replace the Docker Hub images, email, and domain.

Here’s the structure:

k8s/
├── namespace.yaml
├── clusterissuer.yaml
├── backend-deployment.yaml
├── frontend-deployment.yaml
├── ollama-deployment.yaml
├── ingress.yaml

1️⃣ namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ollama-chat

2️⃣ clusterissuer.yaml

Replace your-email@domain.com with your email.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: your-email@domain.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
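Once applied, the issuer must report Ready before any certificates can be issued. A small sketch for checking this from a script (the issuer_ready helper is hypothetical; it assumes kubectl access for real use):

```shell
# issuer_ready: succeed when ClusterIssuer JSON reports a Ready condition
# with status "True" (a rough grep-based check; jq would be more precise).
issuer_ready() {
  echo "$1" | grep -q '"type" *: *"Ready"' &&
  echo "$1" | grep -q '"status" *: *"True"'
}

# Typical use (requires cluster access); `kubectl wait` is an alternative:
#   issuer_ready "$(kubectl get clusterissuer letsencrypt-prod -o json)" \
#     && echo "issuer ready"
#   kubectl wait --for=condition=Ready clusterissuer/letsencrypt-prod --timeout=120s
```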

3️⃣ backend-deployment.yaml

Replace yourhubusername/ollama-backend:latest with your Docker Hub backend image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-backend
  namespace: ollama-chat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ollama-backend
  template:
    metadata:
      labels:
        app: ollama-backend
    spec:
      containers:
        - name: backend
          image: yourhubusername/ollama-backend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
          env:
            - name: OLLAMA_HOST
              value: "http://ollama:11434"
---
apiVersion: v1
kind: Service
metadata:
  name: ollama-backend
  namespace: ollama-chat
spec:
  selector:
    app: ollama-backend
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
  type: ClusterIP

4️⃣ frontend-deployment.yaml

Replace yourhubusername/ollama-frontend:latest with your Docker Hub frontend image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-frontend
  namespace: ollama-chat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ollama-frontend
  template:
    metadata:
      labels:
        app: ollama-frontend
    spec:
      containers:
        - name: frontend
          image: yourhubusername/ollama-frontend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ollama-frontend
  namespace: ollama-chat
spec:
  selector:
    app: ollama-frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

5️⃣ ollama-deployment.yaml (optional)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  namespace: ollama-chat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          ports:
            - containerPort: 11434
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
  namespace: ollama-chat
spec:
  selector:
    app: ollama
  ports:
    - protocol: TCP
      port: 11434
      targetPort: 11434
  type: ClusterIP

6️⃣ ingress.yaml

Replace immai.acintia.com with your domain.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ollama-ingress
  namespace: ollama-chat
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  # spec.ingressClassName replaces the deprecated
  # kubernetes.io/ingress.class annotation.
  ingressClassName: nginx
  tls:
    - hosts:
        - immai.acintia.com
      secretName: ollama-tls
  rules:
    - host: immai.acintia.com
      http:
        paths:
          - path: /api/
            pathType: Prefix
            backend:
              service:
                name: ollama-backend
                port:
                  number: 8000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ollama-frontend
                port:
                  number: 80
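One behavioral difference from the Docker NGINX config in this post: a Prefix path does not strip the prefix, so the backend receives /api/... URIs. If the backend expects the prefix removed, ingress-nginx's rewrite-target annotation with a capture group can strip it. A sketch of the relevant excerpt (hypothetical variant; the annotation applies to every path in the Ingress, so the frontend rule would need a capture group too):

```yaml
# Hypothetical variant: strip /api before proxying to the backend.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: immai.acintia.com
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: ollama-backend
                port:
                  number: 8000
```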

7️⃣ Deployment Instructions

Step 1: Apply namespace

kubectl apply -f k8s/namespace.yaml

Step 2: Apply cert-manager ClusterIssuer

kubectl apply -f k8s/clusterissuer.yaml

Step 3: Deploy backend, frontend, and Ollama

kubectl apply -f k8s/backend-deployment.yaml
kubectl apply -f k8s/frontend-deployment.yaml
kubectl apply -f k8s/ollama-deployment.yaml   # optional

Step 4: Deploy Ingress

kubectl apply -f k8s/ingress.yaml

Step 5: Verify resources

kubectl get pods -n ollama-chat
kubectl get svc -n ollama-chat
kubectl get ingress -n ollama-chat

Step 6: Access services

  • Frontend → https://immai.acintia.com/

  • Backend API → https://immai.acintia.com/api/

TLS is automatically handled by cert-manager.


✅ With this setup:

  • You can run your backend, frontend, and Ollama in any Kubernetes cluster.

  • Certificates auto-renew with cert-manager.

  • Services are scalable by adjusting replicas.

  • Ingress routes traffic securely over HTTPS.


Dockerization-Ready Files for IMMAI 05.10.2025

A ready-to-use Docker setup for your project that you can copy-paste and run. It includes:

  1. Backend Dockerfile

  2. Frontend Dockerfile

  3. docker-compose.yml (with backend, frontend, optional Ollama service)

  4. NGINX config for reverse proxy + HTTPS (via Certbot)

  5. Instructions to build, push to Docker Hub, and deploy on any machine


1️⃣ Backend Dockerfile (backend/Dockerfile)

# backend/Dockerfile
FROM python:3.11-slim

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app

RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential gcc libpq-dev && \
    rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "1"]
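Since `COPY . .` copies the whole build context, a backend/.dockerignore keeps images small and avoids baking local artifacts into the image. A suggested starting point (adjust to your project):

```
# backend/.dockerignore (suggested)
__pycache__/
*.pyc
.venv/
.git/
.env
```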

2️⃣ Frontend Dockerfile (frontend/Dockerfile)

# frontend/Dockerfile
# Stage 1: build Angular app
FROM node:20 AS build

WORKDIR /usr/src/app

COPY legal-chatbot-ui/package*.json ./
RUN npm ci --legacy-peer-deps

COPY legal-chatbot-ui/ .
RUN npm run build -- --configuration production

# Stage 2: Serve with NGINX
FROM nginx:alpine

RUN rm -rf /usr/share/nginx/html/*

# NOTE: Angular CLI v17+ emits the app under dist/<project>/browser;
# adjust this path if your build output directory differs.
COPY --from=build /usr/src/app/dist/legal-chatbot-ui /usr/share/nginx/html

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

3️⃣ docker-compose.yml

version: "3.9"

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    command: serve

  backend:
    build: ./backend
    container_name: ollama-backend
    restart: unless-stopped
    environment:
      - OLLAMA_HOST=http://ollama:11434
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
    depends_on:
      - ollama

  frontend:
    build: ./frontend
    container_name: ollama-frontend
    restart: unless-stopped
    ports:
      - "8080:80"
    depends_on:
      - backend

  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certbot-www:/var/www/certbot
      - certbot-etc:/etc/letsencrypt
    depends_on:
      - frontend
      - backend

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-www:/var/www/certbot
      - certbot-etc:/etc/letsencrypt
    # Keep the image's default `certbot` entrypoint so that
    # `docker-compose run --rm certbot certonly ...` works in Step 5.

volumes:
  ollama_data:
  certbot-www:
  certbot-etc:

4️⃣ NGINX Config (nginx/conf.d/default.conf)

server {
    listen 80;
    server_name immai.acintia.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name immai.acintia.com;

    ssl_certificate /etc/letsencrypt/live/immai.acintia.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/immai.acintia.com/privkey.pem;

    location /api/ {
        proxy_pass http://backend:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        proxy_pass http://frontend:80/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
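A note on the trailing slashes in the proxy_pass directives above: when proxy_pass includes a URI part, NGINX replaces the matched location prefix before forwarding, which is what lets the backend see paths without /api/.

```
# With a URI part in proxy_pass, the matched location prefix is replaced:
#   /api/chat    ->  http://backend:8000/chat      (the /api/ prefix is stripped)
#   /index.html  ->  http://frontend:80/index.html
# Without the trailing slash (proxy_pass http://backend:8000;), the
# original URI is forwarded unchanged: /api/chat -> /api/chat
```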

5️⃣ Steps to Build, Push, and Deploy Anywhere

Step 1: Build Docker images locally

docker-compose build

Step 2: Run locally

docker-compose up -d

Step 3: Push images to Docker Hub

docker login
# NOTE: Compose names locally built images <project>-<service> (e.g.
# <project>-backend), not after container_name. Run `docker images` and
# substitute the actual local names below, or add explicit `image:` keys
# to docker-compose.yml so the names are predictable.
docker tag ollama-backend yourhubusername/ollama-backend:latest
docker tag ollama-frontend yourhubusername/ollama-frontend:latest
docker push yourhubusername/ollama-backend:latest
docker push yourhubusername/ollama-frontend:latest
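Alternatively, adding explicit `image:` keys to the compose file makes the built image names predictable, so the manual tagging step becomes unnecessary. A sketch of the relevant excerpt:

```yaml
# docker-compose.yml excerpt (hypothetical): name the built images directly.
services:
  backend:
    build: ./backend
    image: yourhubusername/ollama-backend:latest
  frontend:
    build: ./frontend
    image: yourhubusername/ollama-frontend:latest
```

With these keys in place, `docker-compose build` tags the images on build and `docker-compose push` uploads them.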

Step 4: Pull & run on any other machine

# Requires docker-compose.yml (and the nginx/ directory) on the target
# machine; with `image:` keys set in the compose file, `up -d` uses the
# pulled images instead of rebuilding from source.
docker pull yourhubusername/ollama-backend:latest
docker pull yourhubusername/ollama-frontend:latest
docker-compose up -d

Step 5: Obtain the HTTPS certificate (first time)

Note: NGINX cannot start while the certificate files referenced in the 443 server block are missing. On the very first run, temporarily comment out the 443 server block in nginx/conf.d/default.conf, bring NGINX up, obtain the certificate, then restore the block and restart NGINX.

docker-compose up -d nginx
docker-compose run --rm certbot certonly \
  --webroot --webroot-path=/var/www/certbot \
  --email you@example.com \
  -d immai.acintia.com \
  --agree-tos --no-eff-email
docker-compose restart nginx

✅ Now your backend + frontend are containerized, served through NGINX, and HTTPS-ready.

A ready-made script (deploy_docker.sh) that automates all of the above is provided at the top of this post: building images, tagging and pushing to Docker Hub, running on any machine, and the Certbot HTTPS setup.
