Monday, February 24, 2025

CAAI-AI #2 Docker Operations

1. Create a Dockerfile for the Ollama Service (Dockerfile.ollama)

FROM python:3.11-slim

ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /app

# Install required dependencies
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

# Install Ollama
RUN curl -fsSL https://ollama.com/install.sh | bash

# Ensure Ollama is on PATH (the installer normally places the binary in /usr/local/bin, so this entry is only a fallback)
ENV PATH="/root/.ollama/bin:$PATH"

# Copy entrypoint script and make it executable
COPY ollama-entrypoint.sh /usr/local/bin/ollama-entrypoint.sh
RUN chmod +x /usr/local/bin/ollama-entrypoint.sh

EXPOSE 11434

ENTRYPOINT ["/usr/local/bin/ollama-entrypoint.sh"]
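
If you want to test this image on its own before wiring up Compose, you can build and run it directly (ollama-service here is just an illustrative tag):

docker build -f Dockerfile.ollama -t ollama-service .
docker run -p 11434:11434 ollama-service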

2. Create the Entry Point Script for Ollama (ollama-entrypoint.sh)

#!/bin/bash

# Ensure Ollama is installed and in PATH
export PATH="/root/.ollama/bin:$PATH"

# Start the Ollama server in the background
ollama serve &

# Give the server a moment to initialize
# (see the readiness-check sketch after this script)
sleep 5

# Pull the Llama 3.2 model
ollama pull llama3.2

# Keep the container running
wait
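
The fixed sleep 5 assumes the server comes up within five seconds, which can fail on slower machines. A more robust variant of the script, a sketch that assumes the Ollama API answers on its default port 11434, polls until the server responds:

#!/bin/bash

# Start the Ollama server in the background
ollama serve &

# Poll the API (up to ~60 seconds) instead of a fixed sleep
for i in $(seq 1 30); do
  if curl -fsS http://localhost:11434/api/tags > /dev/null 2>&1; then
    break
  fi
  sleep 2
done

# Pull the model once the server is reachable
ollama pull llama3.2

# Keep the container running
wait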


3. Create a Dockerfile for the Streamlit Service (Dockerfile.streamlit)

FROM python:3.11-slim

WORKDIR /app

# Copy dependency file and install packages
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy your app code
COPY . .

EXPOSE 8501

CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
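
For reference, a minimal requirements.txt for this image can be very short. The requests entry below is an assumption, needed only if your app.py calls the Ollama REST API directly:

streamlit
requests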



4. Create a Docker Compose File (docker-compose.yml)

version: '3.8'

services:
  ollama:
    build:
      context: .
      dockerfile: Dockerfile.ollama
    ports:
      - "11434:11434"

  streamlit:
    build:
      context: .
      dockerfile: Dockerfile.streamlit
    ports:
      - "8501:8501"
    depends_on:
      - ollama
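
One networking detail worth noting: on the Compose default network, the Streamlit container must reach Ollama by service name at http://ollama:11434, not localhost. Because the port is also published to the host, you can sanity-check the Ollama API from your machine once the stack is up:

curl http://localhost:11434/api/tags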

Steps to Run

  1. Place the two Dockerfiles and the entrypoint script in your project folder.

  2. Make sure you have a valid requirements.txt (e.g., with streamlit).

  3. Place your app.py in the same folder.

  4. Run the following command in the project directory:

     docker-compose build --no-cache
  5. Compose will produce two images, named after the project folder and service (e.g., ollama-ollama and ollama-streamlit).

  6. Run both images as containers, either with docker-compose up or from Docker Desktop.

  7. In Docker Desktop, right-click the streamlit container to open its dropdown menu.

  8. Pick "Open in browser" to view the running Streamlit app (a CLI alternative is shown below).
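
If you prefer the command line over Docker Desktop, the equivalent of steps 6-8 is:

docker-compose up -d
# then open http://localhost:8501 in your browser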




Wow! We have written the Dockerfiles in parts, built images for the ollama and streamlit services, and run them as containers! Great !!!

PART II 

To push your Ollama + Streamlit Docker Compose setup to Docker Hub, follow these steps:


1. Log in to Docker Hub

First, ensure you're logged in:

docker login

Enter your Docker Hub username and password when prompted.
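
Note that if your account has two-factor authentication enabled, Docker Hub requires a personal access token in place of your password. A non-interactive variant (DOCKERHUB_TOKEN is an assumed environment variable holding such a token) looks like:

echo "$DOCKERHUB_TOKEN" | docker login -u yourusername --password-stdin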


2. Tag Your Images Properly

Since Docker Compose builds images with generic names derived from the project folder and service name (here, ollama-ollama and ollama-streamlit), you must tag them with your Docker Hub repository name before pushing.

Assume your Docker Hub username is yourusername. Replace it with your actual username.

docker tag ollama-ollama yourusername/ollama-service:latest
docker tag ollama-streamlit yourusername/streamlit-service:latest

In my case:

docker tag ollama-ollama chandra65/ollama-service:latest
docker tag ollama-streamlit chandra65/streamlit-service:latest
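
Before pushing, you can confirm the new tags exist locally; docker image ls accepts a repository name:

docker image ls yourusername/ollama-service
docker image ls yourusername/streamlit-service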

3. Push Images to Docker Hub

Now, push each image separately:

docker push yourusername/ollama-service:latest
docker push yourusername/streamlit-service:latest
In my case:
docker push chandra65/ollama-service:latest
docker push chandra65/streamlit-service:latest

4. Verify on Docker Hub

Go to Docker Hub and check if your images appear under your repositories.


5. Update docker-compose.yml to Use Pushed Images (Optional)

If you want to use these images without rebuilding them locally, update docker-compose.yml like this:

version: '3.8'

services:
  ollama:
    image: yourusername/ollama-service:latest
    ports:
      - "11434:11434"

  streamlit:
    image: yourusername/streamlit-service:latest
    ports:
      - "8501:8501"
    depends_on:
      - ollama

6. Pull & Run from Docker Hub

On any machine with Docker and a copy of this docker-compose.yml, you can now pull and run your services directly:

docker-compose up

This will fetch the images from Docker Hub and run them without rebuilding! 🚀
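
By default, docker-compose up pulls any image it cannot find locally. To force a refresh to the most recently pushed versions, pull explicitly first:

docker-compose pull
docker-compose up -d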

          
This configuration creates two containers. The Ollama container runs the Ollama server and pulls the Llama 3.2 model; the Streamlit container runs your Streamlit app. They are connected via Docker Compose's default network, and depends_on makes the Streamlit container start after the Ollama container. Note that depends_on only orders startup; it does not wait for Ollama to be fully ready, which is why the entrypoint script sleeps (or polls) before pulling the model.

