Friday, March 21, 2025

#1 K8S Intro -Lab


GCP Kubernetes Hands-on Lab

Objective

By the end of this lab, students will be able to:

  • Log in to Google Cloud Platform (GCP)
  • Create a Kubernetes cluster using Google Kubernetes Engine (GKE)
  • Deploy and manage nodes
  • Deploy and manage pods
  • Run and expose an application

Prerequisites

  • A Google Cloud Platform (GCP) account
  • Billing enabled for the GCP project
  • Google Cloud SDK installed (or use Google Cloud Shell)
  • Basic understanding of Kubernetes concepts

Step 1: Log in to GCP

  1. Open Google Cloud Console.
  2. Click on Select a project and create a new project (or use an existing one).
  3. Enable the Kubernetes Engine API:
    • Navigate to APIs & Services > Library.
    • Search for Kubernetes Engine API and enable it.
  4. Open Cloud Shell (recommended) or install and configure gcloud CLI:
    gcloud auth login
    gcloud config set project [PROJECT_ID]
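
    If you prefer to stay in the terminal, the Kubernetes Engine API can also be enabled with gcloud; a small sketch, assuming the project has already been set as above:

    gcloud services enable container.googleapis.com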
    




Step 2: Create a Kubernetes Cluster

  1. In the Google Cloud Console, navigate to the left panel and select Kubernetes Engine.
  2. Click Create, name the cluster my-cluster, and choose either Autopilot or Standard (the node-pool settings below apply to Standard mode).
  3. Select Zonal as the location type.
  4. Use the Release Channel (default) setting.
  5. (Optional) Use the Setup Guide from the left menu for guided setup.
  6. In the default-pool section, configure:
    • Number of nodes (e.g., 3)
    • Machine configuration:
      • Series: E2
      • Machine type: e2-micro
  7. Click Create Cluster.
  8. Configure kubectl to use the cluster:
    gcloud container clusters get-credentials my-cluster --zone us-central1-a
    
  9. Verify the cluster status:
    kubectl get nodes
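
    For a bit more detail on the nodes GKE created, two optional commands (NODE_NAME is a placeholder for one of the names printed above):

    kubectl get nodes -o wide        # Adds IPs, OS image, and kernel version
    kubectl describe node NODE_NAME  # Full details for a single node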
    

Step 3: Deploy a Pod

  1. Create a deployment running an Nginx container:
    kubectl create deployment my-nginx --image=nginx
    
  2. Check the status of the pod:
    kubectl get pods
    
    The STATUS column should show Running once the pod has started successfully.
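
    If you prefer a declarative manifest you can keep in version control, an equivalent sketch (don't apply it on top of the deployment you just created, or kubectl will report that it already exists):

    kubectl create deployment my-nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml
    kubectl apply -f nginx-deployment.yaml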

Step 4: Expose the Deployment

  1. Expose the deployment as a service:
    kubectl expose deployment my-nginx --type=LoadBalancer --port=80
    
  2. Get the external IP of the service:
    kubectl get services
    
  3. Open a web browser and enter the external IP to verify that Nginx is running.
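
    If the EXTERNAL-IP column shows pending, re-run the command after a minute. You can also test from Cloud Shell; EXTERNAL_IP below is a placeholder for the address you just looked up:

    kubectl get service my-nginx --watch   # Wait until EXTERNAL-IP is assigned
    curl http://EXTERNAL_IP                # Should return the default Nginx welcome page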

Step 5: Scale the Deployment

  1. Scale the deployment to 3 replicas:
    kubectl scale deployment my-nginx --replicas=3
    
  2. Verify the number of pods:
    kubectl get pods
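
    As an optional extension beyond the lab steps, GKE can also scale the deployment automatically with a Horizontal Pod Autoscaler; a sketch:

    kubectl autoscale deployment my-nginx --min=3 --max=10 --cpu-percent=80
    kubectl get hpa   # Shows the autoscaler's current and target utilization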
    

Step 6: Clean Up

  1. Delete the service:
    kubectl delete service my-nginx
    
  2. Delete the deployment:
    kubectl delete deployment my-nginx
    
  3. Delete the cluster:
    gcloud container clusters delete my-cluster --zone us-central1-a
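
    To confirm that the cleanup worked, you can list what is left (the cluster may take a few minutes to disappear):

    gcloud container clusters list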
    

    Note: If you do not delete the cluster and pods, you may continue to incur charges while they are running.



Conclusion

In this lab, students learned how to create a Kubernetes cluster on GCP, deploy an application, expose it to the internet, scale it, and clean up resources. This provides a basic understanding of Kubernetes operations on Google Cloud.

Thursday, March 20, 2025

For Demo - Discussion?

25.03.2025

Let me explain how to create a Docker container to run the Document Processing Agent built by the AI team:

Step 1:  

In the Visual Studio Code editor, create an Agent-Doc folder.

Please generate all the necessary files one by one:

  1. Agent.py - The Flask backend with Llama3.2 integration, EasyOCR document processing, and AI-based name suggestions.

  2. index.html - The frontend for user interaction with AI-generated data and document processing.

  3. Dockerfile - A Dockerfile to containerize the application.

  4. requirements.txt - Dependencies for the Flask server.

Let us generate these now.

Agent.py (backend)

from flask import Flask, jsonify, request
from flask_cors import CORS  # Import CORS
import requests
from functools import lru_cache
import asyncio
import easyocr
from PIL import Image
import io
import re
 

app = Flask(__name__)
CORS(app)  # Enable CORS globally for all routes

reader = easyocr.Reader(['en'])  # Initialize the EasyOCR reader (English)

# Mock database (replace with real database later)
mock_user_data = {
    "userId": 1,
    "panNumber": "ABCDE1234F",
    "registeredAddress": "123 Business Park, Mumbai, Maharashtra, India"
}

# Cache LLM responses for frequently asked questions
@lru_cache(maxsize=128)
def get_llm_response(prompt):
    full_prompt = f"{prompt} Please provide a short and relevant answer (max 10 words)."
    url = "http://localhost:11434/api/generate"
    payload = {
        "model": "llama3.2",
        "prompt": full_prompt,
        "stream": False
    }
    response = requests.post(url, json=payload)
    if response.status_code == 200:
        llm_response = response.json().get("response", "I am not sure about this.")
        if len(llm_response.split()) > 10:  # Limit to 10 words
            llm_response = " ".join(llm_response.split()[:10]) + "..."
        return llm_response
    return "Error communicating with LLM."

# Endpoint for basic and generic data
@app.route('/api/auto-fill', methods=['GET'])
async def auto_fill():
    user_data = mock_user_data

    business_activity_prompt = "Describe the primary business activity for a company in India."
    ownership_type_prompt = "What are common ownership types for Indian companies?"

    business_activity_task = asyncio.to_thread(get_llm_response, business_activity_prompt)
    ownership_type_task = asyncio.to_thread(get_llm_response, ownership_type_prompt)

    business_activity, ownership_type = await asyncio.gather(business_activity_task, ownership_type_task)

    return jsonify({
        "panNumber": user_data["panNumber"],
        "registeredAddress": user_data["registeredAddress"],
        "businessActivity": business_activity,
        "ownershipType": ownership_type
    })


# Other endpoints remain unchanged...

# Endpoint for document processing


@app.route('/api/process-documents', methods=['POST'])
def process_documents():
    try:
        print("Processing uploaded documents...")  # Log the start of the process

        files = request.files.getlist('documents')
        if not files:
            return jsonify({"error": "No files uploaded"}), 400

        extracted_data = {"gstNumber": "", "directorAadhaar": ""}

        for file in files:
            print(f"Processing file: {file.filename}")  # Log the file name

            # Check file type
            if file.filename.lower().endswith(('.png', '.jpg', '.jpeg')):
                print("Detected image file.")  # Log file type
                try:
                    image = Image.open(io.BytesIO(file.read()))
                    # Convert image to bytes for EasyOCR
                    image_bytes = io.BytesIO()
                    image.save(image_bytes, format='PNG')
                    image_bytes.seek(0)

                    # Extract text using EasyOCR
                    result = reader.readtext(image_bytes.getvalue())
                    text = " ".join([item[1] for item in result])  # Join all detected text
                    print(f"Extracted text from image: {text[:100]}...")  # Log extracted text
                except Exception as e:
                    print(f"Error processing image: {e}")
                    return jsonify({"error": f"Failed to process image file: {file.filename}"}), 500

            elif file.filename.lower().endswith('.pdf'):
                print("Detected PDF file.")  # Log file type
                try:
                    from pdf2image import convert_from_bytes
                    images = convert_from_bytes(file.read())
                    text = ""
                    for img in images:
                        # Convert each page to bytes for EasyOCR
                        img_byte_arr = io.BytesIO()
                        img.save(img_byte_arr, format='PNG')
                        img_byte_arr.seek(0)

                        # Extract text using EasyOCR
                        result = reader.readtext(img_byte_arr.getvalue())
                        text += " ".join([item[1] for item in result])  # Join all detected text
                    print(f"Extracted text from PDF: {text[:100]}...")  # Log extracted text
                except Exception as e:
                    print(f"Error processing PDF: {e}")
                    return jsonify({"error": f"Failed to process PDF file: {file.filename}"}), 500

            else:
                return jsonify({"error": f"Unsupported file type: {file.filename}"}), 400

            # Extract GST and Aadhaar numbers using regex
            gst_match = re.search(r'\b\d{2}[A-Z]{5}\d{4}[A-Z]{1}[A-Z\d]{1}[Z]{1}[A-Z\d]{1}\b', text)
            aadhaar_match = re.search(r'\b\d{4}[\s-]?\d{4}[\s-]?\d{4}\b', text)  # Aadhaar may be printed with spaces or hyphens

            if gst_match:
                extracted_data["gstNumber"] = gst_match.group(0)
                print(f"Extracted GST Number: {gst_match.group(0)}")  # Log GST number
            if aadhaar_match:
                extracted_data["directorAadhaar"] = aadhaar_match.group(0)
                print(f"Extracted Aadhaar Number: {aadhaar_match.group(0)}")  # Log Aadhaar number

        return jsonify(extracted_data)

    except Exception as e:
        print(f"Unexpected error: {e}")  # Log unexpected errors
        return jsonify({"error": "An unexpected error occurred while processing the documents."}), 500


# Mock function to validate company names against Indian naming conventions
def validate_company_name(name):
    prohibited_words = ["bank", "government", "reserve"]  # Example prohibited words
    if any(word in name.lower() for word in prohibited_words):
        return False
    return True


# Function to generate AI-based name suggestions
def get_ai_suggestions(firstName):
    # Craft a prompt for the LLM
    prompt = f"Generate 5 creative and unique company name suggestions based on '{firstName}' that adhere to Indian company naming conventions."

    # Call the LLM via Ollama
    url = "http://localhost:11434/api/generate"
    payload = {
        "model": "llama3.2",
        "prompt": prompt,
        "stream": False
    }
    response = requests.post(url, json=payload)
    if response.status_code == 200:
        llm_response = response.json().get("response", "")
        suggestions = [name.strip() for name in llm_response.split("\n") if name.strip()]
        return suggestions[:5]  # Return only 5 suggestions
    return []


# Endpoint for AI-based name suggestions
@app.route('/api/suggest-names', methods=['POST'])
def suggest_names():
    try:
        data = request.get_json()
        firstName = data.get("firstName", "").strip()

        if not firstName:
            return jsonify({"error": "First name preference is required."}), 400

        # Validate the first preference
        if not validate_company_name(firstName):
            return jsonify({"error": "The provided name does not meet Indian company naming conventions."}), 400

        # Generate AI-based suggestions
        suggestions = get_ai_suggestions(firstName)

        if not suggestions:
            return jsonify({"error": "Failed to generate name suggestions."}), 500

        return jsonify({"suggestions": suggestions})

    except Exception as e:
        print(f"Unexpected error: {e}")
        return jsonify({"error": "An unexpected error occurred while generating name suggestions."}), 500


if __name__ == '__main__':
    # Bind to all interfaces so the app is reachable from outside the container
    app.run(host='0.0.0.0', port=3000, debug=True)

Now, let's generate the remaining files:

index.html (Frontend)

This file provides a simple interface to interact with the backend.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>AI-Powered Form Generator</title>
</head>
<body>
    <h2>Generate Name Suggestions</h2>
    <input type="text" id="prompt" placeholder="Enter your prompt">
    <button onclick="generateText()">Generate</button>
    <p id="generatedResponse"></p>

    <h2>Process Document</h2>
    <input type="file" id="fileInput">
    <button onclick="processDoc()">Upload & Process</button>
    <p id="processedText"></p>

    <script>
        // Base URL of the Flask backend from Agent.py (runs on port 3000)
        const API_BASE = "http://localhost:3000";

        async function generateText() {
            let firstName = document.getElementById("prompt").value;
            let response = await fetch(`${API_BASE}/api/suggest-names`, {
                method: "POST",
                headers: {"Content-Type": "application/json"},
                body: JSON.stringify({ firstName })
            });
            let data = await response.json();
            document.getElementById("generatedResponse").innerText =
                data.suggestions ? data.suggestions.join(", ") : (data.error || "No response");
        }

        async function processDoc() {
            let fileInput = document.getElementById("fileInput").files[0];
            let formData = new FormData();
            formData.append("documents", fileInput);  // Field name expected by /api/process-documents

            let response = await fetch(`${API_BASE}/api/process-documents`, {
                method: "POST",
                body: formData
            });
            let data = await response.json();
            document.getElementById("processedText").innerText = data.error ||
                `GST: ${data.gstNumber || "not found"}, Aadhaar: ${data.directorAadhaar || "not found"}`;
        }
    </script>
</body>
</html>
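
Note that Agent.py does not serve index.html itself; for quick local testing you can open the file directly in a browser, or serve the folder with a simple static server (a sketch; port 8080 is arbitrary):

python -m http.server 8080   # Serve index.html from the Agent-Doc folder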

Dockerfile

This file creates a containerized environment for the Flask application.

# Use official Python image as a base
FROM python:3.10

# Set working directory
WORKDIR /app

# Copy project files
COPY . .

# Install the system package needed by pdf2image for PDF processing
RUN apt-get update && apt-get install -y poppler-utils && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
RUN pip install -r requirements.txt

# Expose Flask port
EXPOSE 3000

# Run the Flask app
CMD ["python", "Agent.py"]

requirements.txt

This file lists the Python dependencies imported by Agent.py (the async route needs Flask's async extra, and pillow/pdf2image handle the document images).

flask[async]
flask-cors
requests
easyocr
pillow
pdf2image

Please install Ollama from https://ollama.com/ and pull the llama3.2 model:

ollama pull llama3.2
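
A minimal sketch for building and running the container; the image name agent-doc is just an example. Note that Agent.py calls Ollama on localhost:11434, so the Ollama server must be reachable from inside the container (on Docker Desktop you may need to point that URL at host.docker.internal instead):

docker build -t agent-doc .
docker run -p 3000:3000 agent-doc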

Now you have everything to run your AI-powered form generator and document processor inside a Docker container! Let me know if you need any modifications. 🚀
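
To sanity-check the running backend (assuming it is reachable on localhost:3000), you can exercise the endpoints with curl; sample.png is a placeholder file name:

curl http://localhost:3000/api/auto-fill

curl -X POST -H "Content-Type: application/json" -d '{"firstName": "Acme"}' http://localhost:3000/api/suggest-names

curl -X POST -F "documents=@sample.png" http://localhost:3000/api/process-documents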

Friday, March 14, 2025

Run Ollama in Google Colab

 

How to Run Any LLM Models in Google Colab

Introduction

Google Colab provides an easy and efficient way to run large language models (LLMs) without needing powerful local hardware. In this guide, we’ll walk through the steps to set up and run LLM models in Colab using Ollama.

Setting Up Google Colab

Step 1: Create a New Notebook

  1. Go to Google Colab.
  2. Create a new notebook and name it First.ipynb.
  3. Connect to a GPU instance.

Step 2: Enable GPU

  1. Click on Runtime in the menu.
  2. Select Change Runtime Type.
  3. Choose T4 GPU under hardware accelerator.
  4. Save and hit Connect.

To confirm GPU availability, check system resources. You should see around 15GB of VRAM.

!nvidia-smi

Installing Dependencies

Step 3: Install colab-xterm

To enable terminal access in Colab:

!pip install colab-xterm
%load_ext colabxterm
%xterm

Step 4: Install and Set Up Ollama

Ollama is a tool that allows you to run LLMs easily. Install it with the following command:

curl https://ollama.ai/install.sh | sh

Step 5: Start Ollama and Download a Model

ollama serve &
ollama pull llama3.2

You can verify the installation and downloaded models using:

ollama list
ollama show llama3.2

Step 6: Install the Ollama Python Package

!pip install ollama

Running LLM Models in Colab

Now, let's use Python to interact with the LLM model.

import ollama

prompt = "What is a pandas DataFrame?"

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": prompt}]
)

print(response['message']['content'])
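
Ollama also exposes a local REST API on port 11434, so you can query the model without the Python package, for example from the colab-xterm terminal; a sketch:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "What is a pandas DataFrame?",
  "stream": false
}'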

Conclusion

By following these steps, you can easily set up and run LLM models in Google Colab using Ollama. This method allows you to leverage cloud computing resources for AI model inference, making it accessible even on low-end devices.

Happy coding!

Wednesday, February 26, 2025

CAAI-Front End #3 Git & Docker Push


🚀 Deploying Your Angular App with Docker – The Fun Way!

Prerequisites: Installation of

  1. Docker Desktop
  2. Visual Studio Code (with or without your Angular project)
  3. Node.js
  4. Docker extension for VS Code

Let’s break it down into two easy stages. No boring explanations, just simple, fun, and straight to the point! 🎉


🎨 Step 1: Create Your Angular Project

First things first, let’s create our Angular project and get things rolling. Open your terminal and type:

ng new front-end  # Creates a new Angular app (skip this if you already have a project)
cd front-end      # Move into the project folder
ng serve          # Start the app on localhost:4200

Now, check if your app is running in the browser. If yes, you’re awesome. If not, well… double-check the steps above. 😆


🐳 Step 2: Create the Dockerfile

Docker loves instructions, so we’ll create a Dockerfile to tell it what to do. Create a file named Dockerfile in your project root and add the following:

🔨 Stage 1: Build Angular Code

# Use the latest Node.js for building
FROM node:latest AS build

WORKDIR /usr/local/app

COPY ./ /usr/local/app/

RUN npm install  # Install dependencies
RUN npm run build  # Build the Angular app

What’s happening here? We:

  1. Pulled the latest Node.js image
  2. Set our working directory
  3. Copied all our project files into the container
  4. Installed dependencies & built the project (Boom! 🎆)

🌐 Stage 2: Serve with Nginx

# Use the latest Nginx image for serving the app
FROM nginx:latest

COPY --from=build /usr/local/app/dist/front-end /usr/share/nginx/html  # Change 'front-end' if needed

EXPOSE 80  # Open port 80 for the webserver

Now, we:

  1. Pulled the latest Nginx image
  2. Copied the compiled Angular app to the Nginx web root
  3. Exposed port 80 so the world can see our masterpiece 🌎

🚒 Step 3: Build & Run the Docker Image

Build the Angular project before creating the image:

ng build  # This will generate the 'dist' folder

Now, let’s build the Docker image and run it:

docker build -t my-angular-app .  # Replace 'my-angular-app' with your preferred name
docker run -p 80:80 my-angular-app  # Runs your app on port 80

🎉 Open your browser and visit http://localhost:80. Your Angular app is now running inside a Docker container! 🚀
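
Since the title also promises a Docker push, here is a sketch for publishing the image to Docker Hub; <your-dockerhub-username> is a placeholder and assumes you have a Docker Hub account:

docker login
docker tag my-angular-app <your-dockerhub-username>/my-angular-app:latest
docker push <your-dockerhub-username>/my-angular-app:latest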


📌 Step 4: Push Your Project to GitHub (with the GitHub account for immbizsoft)

Let’s store our project in a GitHub repository. First, initialize Git:

git init  # Initialize a new Git repository
git add .  # Stage all files
git commit -m "Initial commit"  # Commit the files

Now, connect your project to GitHub:

git remote add origin https://github.com/immbizsoft/front-end.git  # Replace with your actual repository URL
git branch -M main  # Set the default branch to main
git push -u origin main  # Push your code to GitHub

Boom! 🎆 Your Angular project is now safely stored on GitHub.


🏁 That’s It!

Now you’ve officially dockerized your Angular app AND pushed it to GitHub! You can share it, deploy it, or just feel cool knowing you’re a DevOps pro. 😎

Got any questions? Drop them in the comments below! Happy coding! 💻🎈

Monday, February 24, 2025

CAAI-AI #2 Docker Operations

1. Create a Dockerfile for the Ollama Service (Dockerfile.ollama)

FROM python:3.11-slim

ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /app

# Install required dependencies
RUN apt update && apt install -y curl && rm -rf /var/lib/apt/lists/*

# Install Ollama
RUN curl -fsSL https://ollama.com/install.sh | bash

# Ensure Ollama is in PATH
ENV PATH="/root/.ollama/bin:$PATH"

# Copy entrypoint script and make it executable
COPY ollama-entrypoint.sh /usr/local/bin/ollama-entrypoint.sh
RUN chmod +x /usr/local/bin/ollama-entrypoint.sh

EXPOSE 11434

ENTRYPOINT ["/usr/local/bin/ollama-entrypoint.sh"]

2. Create the Entry Point Script for Ollama (ollama-entrypoint.sh)

#!/bin/bash

# Ensure Ollama is installed and in PATH
export PATH="/root/.ollama/bin:$PATH"

# Start Ollama
# /root/.ollama/bin/ollama serve &
ollama serve &

# Wait for Ollama to initialize
sleep 5

# Pull the Llama3 model
# /root/.ollama/bin/ollama pull llama3:8b
ollama pull llama3.2

# Keep the container running
wait


3. Create a Dockerfile for the Streamlit Service (Dockerfile.streamlit)

FROM python:3.11-slim

WORKDIR /app

# Copy dependency file and install packages
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy your app code
COPY . .

EXPOSE 8501

CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]



4. Create a Docker Compose File (docker-compose.yml)

version: '3.8'

services:
  ollama:
    build:
      context: .
      dockerfile: Dockerfile.ollama
    ports:
      - "11434:11434"

  streamlit:
    build:
      context: .
      dockerfile: Dockerfile.streamlit
    ports:
      - "8501:8501"
    depends_on:
      - ollama

Steps to Run

  1. Place the two Dockerfiles and the entrypoint script in your project folder.

  2. Make sure you have a valid requirements.txt (e.g., with streamlit).

  3. Place your app.py in the same folder.

  4. Run the following command in the project directory:

     docker-compose build --no-cache
  5.  The build produces two images: ollama-service:latest and streamlit-service:latest.
  6.  Run both images to start them as containers (for example from the Docker Desktop Images view).
  7.  Right-click the running streamlit-service container to open its dropdown menu.
  8.  Choose Open in browser to view the Streamlit UI.
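
If you prefer the command line to the Docker Desktop UI, an equivalent sketch:

docker-compose up -d                   # Start both services in the background
docker-compose ps                      # Confirm both containers are running
curl http://localhost:11434/api/tags   # Ollama should list the pulled model

Then open http://localhost:8501 in your browser to see the Streamlit UI.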




Wow! We have built the Dockerfiles in parts, created images for the ollama and streamlit services, and run them both as containers. Great!

CAAI-AI#1 - First Steps


🚀 CAAI-AI Module 1: The GitHub Way & Running Locally! 🧑‍💻

Hey there, AI explorer! 🌍✨ Ready to get CAAI-AI running on your local machine like a boss? Let’s break it down into easy steps! 🔥


📌 Step 1: Install the Essentials (Pre-Requisites) 🛠️

Before diving in, make sure you have these installed:

✅ Python 🐍 – Get it from Python.org
✅ Visual Studio Code (VS Code) 👨‍💻 – Download it from code.visualstudio.com
✅ Docker Desktop 🐳 – Essential for containerized magic!
✅ GitHub Desktop 🦸‍♂️ – Makes cloning easy-peasy!

👉 Once installed, restart your system (it helps avoid weird issues!) 🔄


📌 Step 2: Download & Install Ollama

Ollama is your AI model manager. Let’s set it up!

1️⃣ Download Ollama from ollama.com
2️⃣ Install it (just like any regular app)
3️⃣ Pull the AI model (run this in Command Prompt):

ollama pull llama3.2  # Get your AI brain ready!

💡 Now, you’ve got your AI model ready to rock! 🤖🔥


📌 Step 3: Clone the CAAI-AI Repository

Time to grab the project files from GitHub!

🎯 Open Command Prompt and run:

git clone https://github.com/immbizsoft/caai-ai.git

🎯 Navigate into the project folder:

cd caai-ai

✅ Check if all necessary files are there:

  • .py files (Python scripts) 🐍
  • requirements.txt (Dependencies list) 📜
  • README.md (Project guide) 📖

🚀 Boom! You now have the project files on your machine!


📌 Step 4: Install Dependencies

🎯 Inside VS Code Terminal, run:

pip install -r requirements.txt

💡 This will install all the Python libraries needed for your project! 🏗️
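
💡 Optional, but worth it: a virtual environment keeps the project's packages isolated. A sketch (.venv is just a conventional folder name):

python -m venv .venv
.venv\Scripts\activate          # Activate it on Windows
# source .venv/bin/activate     # ...or on macOS/Linux
pip install -r requirements.txt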


📌 Step 5: Run the AI App! 🚀

🎯 Inside VS Code Terminal, start the application:

streamlit run app.py

🎉 And just like that… You should see your AI-powered app running in a browser! 🌟


💡 Troubleshooting Tips:

🔧 If something doesn’t work, try:

  • Running python --version to check if Python is installed.
  • Running pip list to verify dependencies.
  • Restarting VS Code if things seem stuck.

🎯 That’s It! You’re Now an AI Engineer! 🦾

You just set up CAAI-AI on your local machine! Now go ahead, experiment, and build some AI magic! ✨🚀

🔥 Next Steps? Deploy this to the cloud? Connect it with Ollama? Let me know if you need more guides!

🔗 Happy coding! The AI revolution starts with YOU! 🎉💡

Sunday, February 23, 2025

GCP#1

Module 1: Logging into Google Cloud Console

Step 1: Sign in to Google Cloud

  1. Open Google Cloud Console.
  2. Sign in with your Google account.
  3. If this is your first time, you might need to create a new project.

Step 2: Create a Google Cloud Project

  1. Click on the project dropdown (top navigation bar).
  2. Select New Project.
  3. Give it a name (e.g., MyAIProject).
  4. Choose a billing account (if applicable) and location.
  5. Click Create.

Step 3: Enable Required APIs

  1. Go to APIs & Services > Library.
  2. Search for and enable:
    • Compute Engine API (for virtual machines)
    • AI & Machine Learning APIs (like Vertex AI)
    • Cloud Storage API (for storing data)
  3. Go to IAM & Admin and ensure you have the necessary permissions.

Module 2: Running an AI Project Using Ollama

Step 1: Set Up a Google Cloud VM

  1. Navigate to Compute Engine > VM Instances.
  2. Click Create Instance.
  3. Choose a machine type:
    • n1-standard-2 (for small workloads)
    • n1-highmem-4 (for AI models)
  4. Select Ubuntu 22.04 as the OS.
  5. Allow HTTP and HTTPS traffic.
  6. Click Create.
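
    The same VM can also be created from Cloud Shell with gcloud; a sketch that matches the instance name and zone used in the SSH step below (adjust the machine type to your workload):

    gcloud compute instances create my-instance-name \
        --zone=us-central1-a \
        --machine-type=n1-standard-2 \
        --image-family=ubuntu-2204-lts \
        --image-project=ubuntu-os-cloud \
        --tags=http-server,https-server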

Step 2: Install Ollama on the VM

  1. Connect to your VM via SSH:
    gcloud compute ssh my-instance-name --zone=us-central1-a
    
  2. Install Ollama:
    curl -fsSL https://ollama.ai/install.sh | sh
    
  3. Start Ollama:
    ollama serve &
    
  4. Download an AI model:
    ollama pull deepseek-r1:1.5b
    

Step 3: Run AI Inference

  1. Install dependencies:
    pip install langchain langchain-ollama
    
  2. Create a Python script (app.py):
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_ollama.llms import OllamaLLM
    
    template = """Question: {question}
    Answer: Let's think step by step."""
    prompt = ChatPromptTemplate.from_template(template)
    model = OllamaLLM(model="deepseek-r1:1.5b")
    chain = prompt | model
    response = chain.invoke({"question": "What is Mixture of Experts?"})
    print(response)
    
  3. Run the script:
    python app.py
    

Module 3: Creating an Angular Project

Step 1: Install Node.js and Angular CLI

  1. Install Node.js (LTS version) from nodejs.org.
  2. Install Angular CLI:
    npm install -g @angular/cli
    

Step 2: Create a New Angular Project

  1. Create the project:
    ng new my-angular-app --style=scss
    
  2. Navigate to the project folder:
    cd my-angular-app
    

Step 3: Add Angular Material

  1. Install Angular Material:
    ng add @angular/material
    
  2. Choose a theme (e.g., Indigo Pink).
  3. Enable typography and animations.

Step 4: Create a Sample Component

  1. Generate a component:
    ng generate component dashboard
    
  2. Modify dashboard.component.html:
    <mat-toolbar color="primary">Welcome to My AI Dashboard</mat-toolbar>
    <button mat-raised-button color="accent">Click Me</button>
    

Step 5: Run the Angular App

ng serve --open

🚀
