GitHub Actions — Complete Notes: Basics to Production Deployment
From Syntax Fundamentals → Security Scanning → Docker → Minikube Deployment
Project: React + Python Todos App
Table of Contents
- What is GitHub Actions?
- GitHub Actions vs Jenkins
- Core Concepts & Terminology
- Complete YAML Syntax Reference
- Triggers — The on Block
- Jobs — Structure & Configuration
- Steps — The Building Blocks
- Runners — Where Your Code Runs
- Environment Variables & Secrets
- Expressions, Contexts & Conditionals
- Artifacts — Storing Build Outputs
- Reusable Workflows & Composite Actions
- Project Setup — Todos App
- Backend Test Cases (pytest)
- Complete Pipeline — Stage by Stage
  - Stage 1: Checkout & Test
  - Stage 2: GitLeaks Secret Scanning
  - Stage 3: SonarQube Code Quality Scan
  - Stage 4: Docker Build & Push
  - Stage 5: Upload Build Artifacts
  - Stage 6: Deploy to Minikube
- Full Final Workflow File
- Setting Up a Self-Hosted Runner (VM)
- Setting Up SonarQube Server
- Quick Reference Cheatsheet
1. What is GitHub Actions?
GitHub Actions is a CI/CD platform built directly into GitHub. It lets you automate workflows — testing, building, scanning, and deploying your code — triggered by events in your repository.
Key idea:
Every time something happens in your repo (a push, a pull request, a tag), GitHub can automatically run a set of instructions you define in a YAML file.
Where do workflow files live?
Always inside your repository at:

```
your-repo/
└── .github/
    └── workflows/
        └── your-workflow.yaml   ← you write this
```

You can have multiple workflow files — one per concern (ci.yaml, deploy.yaml, security.yaml, etc.).
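If the folder doesn't exist yet, two shell commands create it (a sketch; `ci.yaml` is just an example filename):

```shell
# GitHub only scans this exact path for workflow files
mkdir -p .github/workflows

# Create an empty workflow file to fill in later (the name is arbitrary)
touch .github/workflows/ci.yaml

# Verify the layout
ls -R .github
```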
2. GitHub Actions vs Jenkins
Section titled “2. GitHub Actions vs Jenkins”| Feature | GitHub Actions | Jenkins |
|---|---|---|
| Where it runs | GitHub’s cloud (or self-hosted) | Your own server, always |
| Setup effort | Zero — built into GitHub | Significant — install, configure, maintain server |
| Configuration | YAML files in your repo | Jenkinsfile (Groovy) in your repo |
| Free tier | 2,000 minutes/month (public repos: unlimited) | Free but you pay for your server |
| Plugins | GitHub Marketplace Actions | 1,800+ Jenkins plugins |
| Scaling | GitHub handles it automatically | You manage your own agents |
| Secret management | GitHub Secrets (built-in) | Jenkins Credentials plugin |
| Visibility | Actions tab in GitHub UI | Jenkins UI (separate server) |
| Best for | Projects already on GitHub | Enterprise, complex multi-server setups |
Bottom line:
- GitHub Actions = simpler, zero-infrastructure, tightly integrated with GitHub
- Jenkins = more control, better for complex enterprise setups, tool-agnostic
3. Core Concepts & Terminology
Before writing any YAML, understand these building blocks:

```
Workflow
└── triggered by an Event (push, PR, schedule, manual)
    └── contains one or more Jobs
        └── each Job runs on a Runner (machine)
            └── each Job contains Steps
                └── each Step runs a shell command OR calls an Action
```

Event
Something that happens in GitHub that triggers a workflow. Examples: push, pull_request, schedule, workflow_dispatch (manual trigger).
Workflow
The entire automation definition. One YAML file = one workflow.
Job
A group of steps that run together on the same machine. Jobs run in parallel by default. Use needs: to make them sequential.
Step
A single task inside a job. Either:
- run: — a shell command you write
- uses: — a pre-built Action from the Marketplace
Action
A reusable unit of code (someone else’s step you can plug in). Example: actions/checkout@v4 checks out your repo code.
Runner
The machine (virtual or physical) that runs your job. Can be:
- GitHub-hosted — GitHub provides it, free tier available
- Self-hosted — your own VM/server registered with GitHub
Artifact
A file or folder produced during a workflow that you want to save and share between jobs or download later.
Secret
An encrypted value stored in GitHub settings — used for passwords, tokens, API keys. Accessed as ${{ secrets.MY_SECRET }}.
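To see the vocabulary in one place, here is a minimal workflow with each term labeled (a sketch; the names are illustrative):

```yaml
name: Hello CI                    # the Workflow (this whole file)
on: push                          # the Event that triggers it
jobs:
  greet:                          # a Job
    runs-on: ubuntu-latest        # the Runner it executes on
    steps:
      - uses: actions/checkout@v4                # a Step that calls an Action
      - run: echo "Hello from GitHub Actions"    # a Step that runs a shell command
```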
4. Complete YAML Syntax Reference
A fully annotated example showing every major construct:
```yaml
# Workflow name — shows in the Actions tab
name: My Complete Workflow

# ─────────────────────────────────────────────
# TRIGGERS — what events start this workflow
# ─────────────────────────────────────────────
on:
  push:
    branches: [ main, develop ]   # only on these branches
    paths:
      - 'backend/**'              # only when these paths change
  pull_request:
    branches: [ main ]
  schedule:
    - cron: '0 9 * * 1'           # every Monday at 9am UTC
  workflow_dispatch:              # allows manual trigger from UI
    inputs:
      environment:
        description: 'Deploy target'
        required: true
        default: 'staging'
        type: choice
        options: [ staging, production ]

# ─────────────────────────────────────────────
# GLOBAL environment variables (all jobs see these)
# ─────────────────────────────────────────────
env:
  NODE_VERSION: '18'
  PYTHON_VERSION: '3.11'
  APP_NAME: 'todos-app'

# ─────────────────────────────────────────────
# JOBS
# ─────────────────────────────────────────────
jobs:

  # ── JOB 1 ──────────────────────────────────
  test:
    name: Run Tests            # display name in UI
    runs-on: ubuntu-latest     # which runner to use

    # Job-level environment variables
    env:
      FLASK_ENV: testing

    # Job-level permissions
    permissions:
      contents: read

    # Strategy — run this job multiple times with different configs
    strategy:
      matrix:
        python-version: ['3.10', '3.11']
      fail-fast: false   # don't cancel other matrix jobs if one fails

    steps:
      # Step 1: Always start by checking out your code
      - name: Checkout code
        uses: actions/checkout@v4

      # Step 2: A step with a shell command
      - name: Print Python version
        run: python --version

      # Step 3: Multi-line shell commands
      - name: Install and test
        run: |
          pip install -r requirements.txt
          pytest tests/ -v

      # Step 4: Step with environment variable override
      - name: Run with env
        env:
          DATABASE_URL: sqlite:///test.db
        run: python -m pytest

      # Step 5: Conditional step — only runs if condition is true
      - name: Only on main branch
        if: github.ref == 'refs/heads/main'
        run: echo "This is the main branch"

      # Step 6: Step that uses a secret
      - name: Use a secret
        run: echo "Token is ${{ secrets.MY_TOKEN }}"

      # Step 7: Upload an artifact
      - name: Save test results
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: test-results/
          retention-days: 7

  # ── JOB 2 — depends on Job 1 ───────────────
  build:
    name: Build Docker Image
    runs-on: ubuntu-latest
    needs: test              # waits for 'test' job to succeed first
    # needs: [test, lint]    # can depend on multiple jobs

    steps:
      - uses: actions/checkout@v4

      - name: Download artifact from previous job
        uses: actions/download-artifact@v4
        with:
          name: test-results

      - name: Build Docker image
        run: docker build -t myapp:latest .

  # ── JOB 3 — runs in parallel with Job 2 ────
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    # no 'needs' = runs in parallel with other jobs

    steps:
      - uses: actions/checkout@v4
      - run: pip install flake8 && flake8 .
```

5. Triggers — The on Block
The on block defines what event starts the workflow.
Common Triggers
```yaml
on:
  # ── Push to specific branches
  # (for one event you may use branches OR branches-ignore, not both;
  #  the same goes for paths / paths-ignore — shown together here only
  #  so every filter appears once)
  push:
    branches:
      - main
      - 'release/*'          # wildcard — matches release/1.0, release/v2, etc.
    branches-ignore:
      - 'dependabot/**'      # ignore dependabot branches
    tags:
      - 'v*'                 # trigger on version tags like v1.0.0
    paths:
      - 'src/**'             # only trigger if files in src/ changed
    paths-ignore:
      - '**.md'              # ignore markdown file changes

  # ── Pull Request events
  pull_request:
    types: [opened, synchronize, reopened]   # specific PR events
    branches: [ main ]

  # ── Scheduled (cron syntax)
  schedule:
    - cron: '30 5 * * 1,3'   # 5:30am every Monday and Wednesday

  # ── Manual trigger with optional inputs
  workflow_dispatch:
    inputs:
      tag:
        description: 'Docker image tag'
        required: true
        type: string
      dry_run:
        description: 'Dry run?'
        required: false
        type: boolean
        default: false

  # ── Triggered by another workflow
  workflow_call:
    inputs:
      environment:
        required: true
        type: string

  # ── Triggered by external HTTP call (webhooks)
  repository_dispatch:
    types: [deploy-event]
```

Accessing trigger inputs in steps:

```yaml
- name: Use input
  run: echo "Tag is ${{ inputs.tag }}"

- name: Check if dry run
  if: inputs.dry_run == false
  run: ./deploy.sh
```

6. Jobs — Structure & Configuration
Job Dependencies (Sequential vs Parallel)
```yaml
jobs:
  # These three run IN PARALLEL (no needs:)
  test-backend:
    runs-on: ubuntu-latest
    steps: [...]

  test-frontend:
    runs-on: ubuntu-latest
    steps: [...]

  security-scan:
    runs-on: ubuntu-latest
    steps: [...]

  # This runs AFTER all three above complete
  build:
    needs: [test-backend, test-frontend, security-scan]
    runs-on: ubuntu-latest
    steps: [...]

  # This runs AFTER build
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps: [...]
```

Job Outputs — Passing data between jobs

```yaml
jobs:
  job1:
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.set-tag.outputs.tag }}   # expose step output as job output
    steps:
      - name: Set image tag
        id: set-tag                                 # give the step an ID
        run: echo "tag=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

  job2:
    needs: job1
    runs-on: ubuntu-latest
    steps:
      - name: Use the tag
        run: echo "Image tag is ${{ needs.job1.outputs.image-tag }}"
```

Matrix Strategy — Run one job with multiple configurations

```yaml
jobs:
  test:
    runs-on: ${{ matrix.os }}   # pick the runner from the matrix
    strategy:
      matrix:
        python: ['3.9', '3.10', '3.11']
        os: [ubuntu-latest, windows-latest]
      fail-fast: false    # keep running other combos if one fails
      max-parallel: 3     # run at most 3 at once
    steps:
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python }}
```

Concurrency — Prevent overlapping runs

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true   # cancels the older run if a new one starts
```

7. Steps — The Building Blocks
Every step is either run (a shell command) or uses (an action).
run — Shell Commands
```yaml
steps:
  # Single line
  - name: Hello
    run: echo "Hello World"

  # Multi-line (each line runs as a separate command)
  - name: Multi-line
    run: |
      echo "Line 1"
      echo "Line 2"
      pip install -r requirements.txt

  # Change working directory
  - name: Run in subfolder
    working-directory: ./backend
    run: python app.py

  # Use a specific shell
  - name: Use bash explicitly
    shell: bash
    run: echo "Using bash"

  # Windows PowerShell
  - name: PowerShell step
    shell: pwsh
    run: Write-Host "Windows"

  # Python script inline
  - name: Inline Python
    shell: python
    run: |
      import os
      print(os.environ.get('HOME'))
```

uses — Actions

```yaml
steps:
  # Basic action — no parameters
  - uses: actions/checkout@v4

  # Action with parameters
  - uses: actions/setup-python@v5
    with:
      python-version: '3.11'
      cache: 'pip'   # cache pip dependencies automatically

  # Action pinned to a specific commit (most secure — no supply-chain risk)
  - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683

  # Local action defined in your own repo
  - uses: ./.github/actions/my-custom-action

  # Docker-based action
  - uses: docker://python:3.11
    with:
      args: python --version
```

Step outputs — Reading results from steps

```yaml
steps:
  - name: Get date
    id: date
    run: echo "today=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT

  - name: Use date
    run: echo "Today is ${{ steps.date.outputs.today }}"
```

Step continuation on failure

```yaml
steps:
  - name: This might fail
    run: ./risky-script.sh
    continue-on-error: true   # workflow continues even if this fails

  - name: Always runs
    if: always()              # runs regardless of previous step status
    run: echo "Cleanup time"

  - name: Only on failure
    if: failure()
    run: echo "Something went wrong, sending alert"

  - name: Only on success
    if: success()
    run: echo "Everything passed!"
```

8. Runners — Where Your Code Runs
GitHub-Hosted Runners (Free)

```yaml
jobs:
  my-job:
    runs-on: ubuntu-latest      # Ubuntu Linux (most common)
    # runs-on: ubuntu-22.04     # specific Ubuntu version
    # runs-on: windows-latest   # Windows Server
    # runs-on: macos-latest     # macOS
    # runs-on: macos-14         # specific macOS version
```

What GitHub-hosted runners include:
- Docker, Docker Compose
- Git, curl, wget
- Node.js, Python, Java, Go, Ruby (multiple versions)
- kubectl, Helm
- AWS CLI, Azure CLI, GCP CLI
- Hundreds more pre-installed tools
Free tier limits:
- Public repos: unlimited minutes
- Private repos: 2,000 minutes/month on free plan
Self-Hosted Runners
Use your own VM when you need:
- More RAM/CPU than GitHub provides
- Access to private network resources
- Persistent storage between runs
- No minute limits
- Specific software pre-installed
```yaml
jobs:
  deploy:
    runs-on: self-hosted                        # any registered self-hosted runner
    # runs-on: [self-hosted, linux]
    # runs-on: [self-hosted, linux, sonarqube]  # runner with a custom label
```

See Section 23 for how to register your own VM as a runner.
9. Environment Variables & Secrets
Types of Variables
- GitHub level secrets/variables → available to ALL repos in your account/org
- Repository level secrets/variables → available to ONE repo
- Environment level secrets/variables → available only when deploying to a specific environment

Setting Variables
In the workflow file (non-sensitive):
```yaml
env:
  APP_NAME: todos-app     # top level = all jobs
  NODE_ENV: production

jobs:
  build:
    env:
      BUILD_DIR: ./dist   # job level = this job only
    steps:
      - name: Step with env
        env:
          STEP_VAR: hello # step level = this step only
        run: echo $STEP_VAR
```

In GitHub UI (for secrets): go to Repository → Settings → Secrets and variables → Actions → New repository secret.
Accessing secrets in workflow:
```yaml
steps:
  - name: Login to DockerHub
    run: echo "${{ secrets.DOCKERHUB_PASSWORD }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin

  - name: Use token
    env:
      TOKEN: ${{ secrets.MY_TOKEN }}   # safer: set as env var, not inline
    run: curl -H "Authorization: Bearer $TOKEN" https://api.example.com
```

Built-in GitHub Context Variables
These are automatically available — no setup needed:
```yaml
run: |
  echo "Repo: ${{ github.repository }}"       # owner/repo-name
  echo "Branch: ${{ github.ref_name }}"       # main
  echo "Full ref: ${{ github.ref }}"          # refs/heads/main
  echo "Commit SHA: ${{ github.sha }}"        # abc1234...
  echo "Actor: ${{ github.actor }}"           # who triggered it
  echo "Event: ${{ github.event_name }}"      # push, pull_request, etc.
  echo "Run ID: ${{ github.run_id }}"
  echo "Run number: ${{ github.run_number }}"
  echo "Workspace: ${{ github.workspace }}"   # /home/runner/work/repo/repo
```

Writing to $GITHUB_ENV — Persist variables across steps
```yaml
steps:
  - name: Set variable for later steps
    run: echo "IMAGE_TAG=$(git rev-parse --short HEAD)" >> $GITHUB_ENV

  - name: Use the variable
    run: docker build -t myapp:$IMAGE_TAG .   # $IMAGE_TAG is now available
```

Writing to $GITHUB_OUTPUT — Pass values between steps

```yaml
steps:
  - name: Compute tag
    id: tag
    run: echo "value=v1.2.3" >> $GITHUB_OUTPUT

  - name: Use tag
    run: echo "${{ steps.tag.outputs.value }}"
```

10. Expressions, Contexts & Conditionals
Expressions Syntax
Expressions always go inside ${{ }}:

```yaml
run: echo ${{ github.actor }}
if: ${{ github.ref == 'refs/heads/main' }}
```

Contexts Available

```
${{ github.* }}    # repo, branch, SHA, event info
${{ env.* }}       # environment variables
${{ secrets.* }}   # encrypted secrets
${{ steps.* }}     # outputs from steps in current job
${{ needs.* }}     # outputs from jobs this job depends on
${{ inputs.* }}    # workflow_dispatch or workflow_call inputs
${{ matrix.* }}    # current matrix values
${{ runner.* }}    # runner.os, runner.arch, runner.temp
${{ job.status }}  # success, failure, cancelled
```

Conditional if: expressions
Section titled “Conditional if: expressions”# Only on main branchif: github.ref == 'refs/heads/main'
# Only on pull requestsif: github.event_name == 'pull_request'
# Only if previous step failedif: failure()
# Run always (even if workflow is failing)if: always()
# Only on specific OS in matrixif: matrix.os == 'ubuntu-latest'
# Combine conditionsif: github.ref == 'refs/heads/main' && github.event_name == 'push'
# Check if a secret existsif: secrets.DEPLOY_KEY != ''
# Only if NOT a forkif: github.event.pull_request.head.repo.full_name == github.repositoryFunctions in Expressions
Section titled “Functions in Expressions”# String containsif: contains(github.ref, 'release')
# String starts withif: startsWith(github.ref, 'refs/tags/v')
# Check if file changed (needs specific setup)if: contains(steps.changes.outputs.changed_files, 'backend/')
# Format a stringrun: echo "${{ format('Hello {0}!', github.actor) }}"
# Convert to JSONrun: echo "${{ toJSON(github.event) }}"
# Check if input is trueif: ${{ inputs.deploy == true }}11. Artifacts — Storing Build Outputs
Artifacts let you save files from a job so you can:
- Download them from the GitHub UI after the run
- Share them between jobs in the same workflow
- Keep build outputs (binaries, reports, coverage files) for reference
Upload an Artifact
```yaml
- name: Upload test report
  uses: actions/upload-artifact@v4
  with:
    name: test-report          # name shown in GitHub UI
    path: |
      test-results/
      coverage.xml
    retention-days: 30         # auto-delete after 30 days (default: 90)
    if-no-files-found: error   # error | warn | ignore
    overwrite: true
```

Download an Artifact (in a later job)
Section titled “Download an Artifact (in a later job)”jobs: build: steps: - name: Build frontend run: npm run build - name: Upload dist folder uses: actions/upload-artifact@v4 with: name: frontend-dist path: frontend/dist/
deploy: needs: build steps: - name: Download dist folder uses: actions/download-artifact@v4 with: name: frontend-dist path: ./dist # where to put it on this runner
- name: Deploy run: rsync -av ./dist/ user@server:/var/www/html/Download all artifacts at once
Section titled “Download all artifacts at once”- uses: actions/download-artifact@v4 with: path: all-artifacts/ # all artifacts downloaded into subdirectories here12. Reusable Workflows & Composite Actions
Reusable Workflow — Call one workflow from another
Define it (.github/workflows/deploy-template.yaml):
```yaml
on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
    secrets:
      DEPLOY_KEY:
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to ${{ inputs.environment }}"
```

Call it from another workflow:

```yaml
jobs:
  deploy-staging:
    uses: ./.github/workflows/deploy-template.yaml
    with:
      environment: staging
    secrets:
      DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}
```

Composite Action — Reuse steps across workflows
Create .github/actions/setup-python-env/action.yaml:
```yaml
name: 'Setup Python Environment'
description: 'Installs Python and dependencies'
inputs:
  python-version:
    required: false
    default: '3.11'
runs:
  using: composite
  steps:
    - uses: actions/setup-python@v5
      with:
        python-version: ${{ inputs.python-version }}
    - run: pip install -r requirements.txt
      shell: bash
      working-directory: ./backend
```

Use it:

```yaml
steps:
  - uses: ./.github/actions/setup-python-env
    with:
      python-version: '3.11'
```

13. Project Setup — Todos App
Project Structure
```
test-todos/
├── .github/
│   └── workflows/
│       └── ci-cd.yaml              ← our pipeline
├── backend/
│   ├── app.py                      ← Flask API
│   ├── requirements.txt
│   ├── Dockerfile
│   └── tests/
│       └── test_api.py             ← we will create this
├── frontend/
│   ├── src/
│   │   ├── App.jsx
│   │   └── main.jsx
│   ├── Dockerfile
│   ├── package.json
│   └── vite.config.js
├── k8s/
│   ├── backend.yaml
│   └── frontend.yaml
└── sonar-project.properties        ← we will create this
```

Backend API Endpoints (from app.py)

```
GET /api/v1/get-todos → returns list of todos
GET /api/v1/ping      → pings a target (has security vulnerability)
GET /api/v1/user      → fetches user by id (has SQL injection vulnerability)
```

Update requirements.txt to include test dependencies

```
Flask==3.0.0
flask-cors==4.0.0
pytest==7.4.0
pytest-cov==4.1.0
```

Create sonar-project.properties (project root)

```properties
sonar.projectKey=todos-app
sonar.projectName=Todos App
sonar.projectVersion=1.0
sonar.sources=backend,frontend/src
sonar.language=py
sonar.python.coverage.reportPaths=backend/coverage.xml
sonar.exclusions=**/node_modules/**,**/__pycache__/**,**/tests/**
```

14. Backend Test Cases (pytest)
Create backend/tests/__init__.py (empty file) and backend/tests/test_api.py:
```python
import pytest
import json
import sys
import os

# Make sure the backend module is importable
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))

from app import app


# ─── FIXTURE ────────────────────────────────────────────
# A pytest fixture creates a reusable test client.
# Flask provides a test client that simulates HTTP requests
# without actually starting a server.

@pytest.fixture
def client():
    """Create a test client for the Flask app."""
    app.config['TESTING'] = True
    with app.test_client() as client:
        yield client


# ─── GET TODOS TESTS ────────────────────────────────────

class TestGetTodos:
    """Tests for the GET /api/v1/get-todos endpoint."""

    def test_get_todos_returns_200(self, client):
        """Endpoint should return HTTP 200 OK."""
        response = client.get('/api/v1/get-todos')
        assert response.status_code == 200

    def test_get_todos_returns_json(self, client):
        """Response content type should be application/json."""
        response = client.get('/api/v1/get-todos')
        assert response.content_type == 'application/json'

    def test_get_todos_returns_list(self, client):
        """Response body should be a JSON array."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        assert isinstance(data, list)

    def test_get_todos_is_not_empty(self, client):
        """The todos list should contain at least one item."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        assert len(data) > 0

    def test_get_todos_items_have_required_fields(self, client):
        """Each todo item must have 'id' and 'task' fields."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        for todo in data:
            assert 'id' in todo, f"Missing 'id' field in todo: {todo}"
            assert 'task' in todo, f"Missing 'task' field in todo: {todo}"

    def test_get_todos_id_is_integer(self, client):
        """Each todo 'id' field should be an integer."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        for todo in data:
            assert isinstance(todo['id'], int)

    def test_get_todos_task_is_string(self, client):
        """Each todo 'task' field should be a non-empty string."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        for todo in data:
            assert isinstance(todo['task'], str)
            assert len(todo['task']) > 0

    def test_get_todos_ids_are_unique(self, client):
        """All todo IDs should be unique — no duplicates."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        ids = [todo['id'] for todo in data]
        assert len(ids) == len(set(ids)), "Duplicate IDs found in todos"

    def test_get_todos_expected_count(self, client):
        """Should return the expected number of pre-seeded todos (4)."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        assert len(data) == 4

    def test_get_todos_contains_known_item(self, client):
        """Should contain a specific known todo task."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        tasks = [todo['task'] for todo in data]
        assert 'Build Docker Image!!' in tasks


# ─── PING ENDPOINT TESTS ────────────────────────────────

class TestPingEndpoint:
    """Tests for the GET /api/v1/ping endpoint."""

    def test_ping_returns_200(self, client):
        """Ping with default target should return HTTP 200."""
        response = client.get('/api/v1/ping')
        assert response.status_code == 200

    def test_ping_returns_json(self, client):
        """Response should be JSON."""
        response = client.get('/api/v1/ping')
        assert response.content_type == 'application/json'

    def test_ping_response_has_status_field(self, client):
        """Response JSON should have a 'status' field."""
        response = client.get('/api/v1/ping')
        data = json.loads(response.data)
        assert 'status' in data

    def test_ping_response_status_value(self, client):
        """The 'status' field should equal 'ping executed'."""
        response = client.get('/api/v1/ping')
        data = json.loads(response.data)
        assert data['status'] == 'ping executed'

    def test_ping_with_custom_target(self, client):
        """Endpoint should accept a custom 'target' query parameter."""
        response = client.get('/api/v1/ping?target=127.0.0.1')
        assert response.status_code == 200

    def test_ping_without_target_param(self, client):
        """Endpoint should work without a target param (defaults to 127.0.0.1)."""
        response = client.get('/api/v1/ping')
        assert response.status_code == 200


# ─── USER ENDPOINT TESTS ────────────────────────────────

class TestUserEndpoint:
    """Tests for the GET /api/v1/user endpoint."""

    def test_user_returns_200(self, client):
        """User endpoint with valid id should return 200."""
        response = client.get('/api/v1/user?id=1')
        assert response.status_code == 200

    def test_user_returns_json(self, client):
        """Response should be JSON."""
        response = client.get('/api/v1/user?id=1')
        assert response.content_type == 'application/json'

    def test_user_response_has_status_field(self, client):
        """Response should contain a 'status' field."""
        response = client.get('/api/v1/user?id=1')
        data = json.loads(response.data)
        assert 'status' in data

    def test_user_works_without_id_param(self, client):
        """Should use default id=1 if no id param provided."""
        response = client.get('/api/v1/user')
        assert response.status_code == 200


# ─── GENERAL API TESTS ──────────────────────────────────

class TestGeneralAPI:
    """General API behaviour tests."""

    def test_unknown_route_returns_404(self, client):
        """A route that doesn't exist should return 404."""
        response = client.get('/api/v1/nonexistent')
        assert response.status_code == 404

    def test_get_todos_method_not_allowed_on_post(self, client):
        """POST to get-todos should return 405 Method Not Allowed."""
        response = client.post('/api/v1/get-todos')
        assert response.status_code == 405

    def test_cors_header_present(self, client):
        """CORS header should be present on responses (flask-cors is configured)."""
        response = client.get('/api/v1/get-todos',
                              headers={'Origin': 'http://localhost:3000'})
        assert 'Access-Control-Allow-Origin' in response.headers
```
```shell
cd backend
pip install -r requirements.txt
pytest tests/ -v --cov=. --cov-report=xml:coverage.xml
```

15. Complete Pipeline — Stage by Stage
Here is the full pipeline we are building and what each stage does:
```
[push to main]
      │
      ▼
1. test-backend        — run pytest, generate coverage report
      │
      ├───────────────────────┐
      ▼                       ▼
2. gitleaks             3. sonarqube       (run in parallel after tests)
   secret scan             code quality
      │                       │
      └───────────┬───────────┘
                  ▼
4. docker-build        — build & push backend + frontend images
                  │
                  ▼
5. upload-artifacts    — save build info, k8s files
                  │
                  ▼
6. deploy-minikube     — start Minikube, deploy both services
```

16. Stage 1: Checkout & Test
```yaml
jobs:
  test-backend:
    name: Backend Tests
    runs-on: ubuntu-latest

    steps:
      # Step 1: Always first — get your code onto the runner
      - name: Checkout code
        uses: actions/checkout@v4

      # Step 2: Install the right Python version
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'   # caches pip packages — speeds up future runs

      # Step 3: Install all dependencies including test tools
      - name: Install dependencies
        working-directory: ./backend
        run: pip install -r requirements.txt

      # Step 4: Run pytest with coverage report
      - name: Run tests
        working-directory: ./backend
        run: |
          pytest tests/ -v \
            --cov=. \
            --cov-report=xml:coverage.xml \
            --cov-report=term-missing \
            --junitxml=test-results.xml

      # Step 5: Upload test results as an artifact for later review
      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()   # upload even if tests fail (for debugging)
        with:
          name: backend-test-results
          path: |
            backend/coverage.xml
            backend/test-results.xml
          retention-days: 14
```

17. Stage 2: GitLeaks Secret Scanning
GitLeaks scans your entire git history for accidentally committed secrets — API keys, passwords, tokens, AWS credentials, etc.
This is critical because your app.py has AWS_SECRET_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE" and App.jsx has a hardcoded GitHub token — both are real-world mistakes GitLeaks catches.
```yaml
gitleaks-scan:
  name: GitLeaks Secret Scan
  runs-on: ubuntu-latest
  needs: test-backend

  steps:
    - name: Checkout code
      uses: actions/checkout@v4
      with:
        fetch-depth: 0   # IMPORTANT: fetch full git history, not just the latest
                         # commit — GitLeaks needs all history to scan past commits

    - name: Run GitLeaks
      uses: gitleaks/gitleaks-action@v2
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        GITLEAKS_LICENSE: ${{ secrets.GITLEAKS_LICENSE }}   # needed for orgs; optional for personal repos
      continue-on-error: true   # set to false to BLOCK the pipeline when secrets are found
```

What GitLeaks detects (partial list):
- AWS Access Keys and Secret Keys
- GitHub Personal Access Tokens (ghp_...)
- Slack tokens
- Private keys (RSA, EC, PGP)
- Generic high-entropy strings that look like secrets
- Database connection strings with passwords
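The "high-entropy" detector is worth a closer look: the idea is roughly Shannon entropy over a token's characters. A minimal sketch of that heuristic in Python (my illustration, not GitLeaks' actual implementation — its real rules and thresholds live in its TOML config):

```python
import math
from collections import Counter

def shannon_entropy(token: str) -> float:
    """Average bits per character; random-looking strings score high."""
    counts = Counter(token)
    n = len(token)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A secret-shaped token has near-maximal entropy for its length,
# while ordinary identifiers repeat characters and score lower.
print(shannon_entropy("ghp_x7Kq9mT2vLw8RbN4cJd0"))  # high (> 4 bits/char)
print(shannon_entropy("requirements"))              # lower
```

A scanner flags strings whose entropy exceeds a threshold for their length; that is why long random API keys stand out from normal source code.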
Creating a .gitleaks.toml config to whitelist false positives:
```toml
# .gitleaks.toml — place in project root
[allowlist]
  description = "Whitelist known test/example values"
  regexes = [
    '''AKIAIOSFODNN7EXAMPLE''',   # AWS example key used in docs
  ]
  paths = [
    '''tests/''',                 # ignore test files
  ]
```

18. Stage 3: SonarQube Code Quality Scan
SonarQube performs static code analysis — finding bugs, code smells, security hotspots, and vulnerabilities without running the code.
What SonarQube will find in our Todos app:
- os.system(f"ping -c 1 {target}") → Critical: Command Injection
- "SELECT * FROM users WHERE id = '" + user_id + "'" → Critical: SQL Injection
- app.run(debug=True) → Security Hotspot: Debug mode in production
- AWS_SECRET_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE" → Blocker: Hardcoded credential
- dangerouslySetInnerHTML with raw URL param → Critical: XSS
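For reference, the standard remediations for the two injection findings, sketched in Python. This assumes a sqlite3 database and the `target`/`user_id` request parameters from app.py; the real fix belongs in the corresponding route handlers:

```python
import sqlite3
import subprocess

def safe_ping(target: str) -> int:
    # Remediation for os.system(f"ping -c 1 {target}"):
    # pass argv as a list and skip the shell entirely, so shell
    # metacharacters in 'target' (e.g. "; rm -rf /") are inert.
    result = subprocess.run(["ping", "-c", "1", target],
                            capture_output=True, timeout=10)
    return result.returncode

def safe_get_user(conn: sqlite3.Connection, user_id: str):
    # Remediation for string-concatenated SQL: a parameterized query.
    # The driver binds user_id as data, so "1' OR '1'='1" matches nothing.
    cur = conn.execute("SELECT * FROM users WHERE id = ?", (user_id,))
    return cur.fetchone()
```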
### Prerequisites:

- A running SonarQube server (see Section 24)
- `SONAR_TOKEN` stored as a GitHub Secret
- `SONAR_HOST_URL` stored as a GitHub Secret (e.g., `http://your-server-ip:9000`)
```yaml
sonarqube-scan:
  name: SonarQube Analysis
  runs-on: ubuntu-latest
  needs: test-backend

  steps:
    - name: Checkout code
      uses: actions/checkout@v4
      with:
        fetch-depth: 0  # full history needed for accurate blame/diff analysis

    # Download coverage report generated in test stage
    - name: Download test artifacts
      uses: actions/download-artifact@v4
      with:
        name: backend-test-results
        path: backend/

    - name: SonarQube Scan
      uses: SonarSource/sonarqube-scan-action@master
      env:
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}

    # Optional: make the pipeline wait for the Quality Gate result
    # Quality Gate = pass/fail threshold you configure in SonarQube
    - name: SonarQube Quality Gate Check
      uses: SonarSource/sonarqube-quality-gate-action@master
      timeout-minutes: 5
      env:
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
```

### `sonar-project.properties` (at project root):

```properties
sonar.projectKey=todos-app
sonar.projectName=Todos App
sonar.projectVersion=1.0

# Scan both backend Python and frontend JS/JSX
sonar.sources=backend,frontend/src

# Tell SonarQube where the Python coverage report is
sonar.python.coverage.reportPaths=backend/coverage.xml

# Test results
sonar.python.xunit.reportPath=backend/test-results.xml

# Exclusions — don't scan these
sonar.exclusions=**/node_modules/**,**/__pycache__/**,**/tests/**,**/*.min.js

# Python version
sonar.python.version=3.11
```

## 19. Stage 4: Docker Build & Push

This stage builds Docker images for both the backend and frontend, then pushes them to DockerHub.
### Prerequisites — GitHub Secrets needed:

- `DOCKERHUB_USERNAME` — your DockerHub username
- `DOCKERHUB_TOKEN` — DockerHub access token (Settings > Security > New Access Token)
```yaml
docker-build:
  name: Build & Push Docker Images
  runs-on: ubuntu-latest
  needs: [gitleaks-scan, sonarqube-scan]  # only build if scans passed

  steps:
    - name: Checkout code
      uses: actions/checkout@v4

    # Set up Docker Buildx — enables advanced build features, multi-platform builds
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3

    # Login to DockerHub
    - name: Login to DockerHub
      uses: docker/login-action@v3
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}

    # Generate a unique tag from the short commit SHA
    # This ensures every build has a unique, traceable image tag
    - name: Generate image tag
      id: tag
      run: echo "value=$(echo ${{ github.sha }} | cut -c1-7)" >> $GITHUB_OUTPUT

    # Build and push backend image
    - name: Build and push backend
      uses: docker/build-push-action@v5
      with:
        context: ./backend  # folder containing the Dockerfile
        push: true
        tags: |
          ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest
          ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:${{ steps.tag.outputs.value }}
        cache-from: type=gha  # use GitHub Actions cache to speed up builds
        cache-to: type=gha,mode=max

    # Build and push frontend image
    - name: Build and push frontend
      uses: docker/build-push-action@v5
      with:
        context: ./frontend
        push: true
        tags: |
          ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest
          ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:${{ steps.tag.outputs.value }}
        cache-from: type=gha
        cache-to: type=gha,mode=max

    # Save the image tag as an env var for later steps in THIS job.
    # Note: GITHUB_ENV does not cross job boundaries — to pass the tag
    # to another job, expose it as a job output instead.
    - name: Export image tag
      run: echo "IMAGE_TAG=${{ steps.tag.outputs.value }}" >> $GITHUB_ENV
```

## 20. Stage 5: Upload Build Artifacts

Save build-related files so they can be:
- Downloaded from the GitHub UI for reference
- Used by the deploy job
```yaml
upload-artifacts:
  name: Upload Build Artifacts
  runs-on: ubuntu-latest
  needs: docker-build

  steps:
    - name: Checkout code
      uses: actions/checkout@v4

    # Create an artifacts folder with useful build info
    - name: Prepare artifacts
      run: |
        mkdir -p artifacts
        cp -r k8s/ artifacts/k8s/
        echo "Build: ${{ github.run_number }}" > artifacts/build-info.txt
        echo "Commit: ${{ github.sha }}" >> artifacts/build-info.txt
        echo "Branch: ${{ github.ref_name }}" >> artifacts/build-info.txt
        echo "Actor: ${{ github.actor }}" >> artifacts/build-info.txt
        echo "Timestamp: $(date -u)" >> artifacts/build-info.txt
        echo "Backend image: ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest" >> artifacts/build-info.txt
        echo "Frontend image: ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest" >> artifacts/build-info.txt

    - name: Upload artifacts
      uses: actions/upload-artifact@v4
      with:
        name: build-artifacts-${{ github.run_number }}
        path: artifacts/
        retention-days: 30
```

## 21. Stage 6: Deploy to Minikube

This final stage:
- Installs Minikube on the GitHub runner
- Starts a Kubernetes cluster
- Updates the k8s manifests to use your DockerHub images
- Deploys both backend and frontend
- Verifies the deployment
```yaml
deploy-minikube:
  name: Deploy to Minikube
  runs-on: ubuntu-latest
  needs: upload-artifacts

  steps:
    - name: Checkout code
      uses: actions/checkout@v4

    # Start Minikube on the GitHub runner
    - name: Start Minikube
      uses: medyagh/setup-minikube@master
      with:
        driver: docker
        kubernetes-version: v1.28.0

    # Verify cluster is running
    - name: Verify cluster
      run: |
        kubectl cluster-info
        kubectl get nodes
        kubectl get pods -A

    # Build both images INSIDE Minikube's Docker
    # This is the same trick from the Node.js lab —
    # eval $(minikube docker-env) redirects docker commands
    # to build inside Minikube so Kubernetes can find the images
    - name: Build images inside Minikube Docker
      run: |
        eval $(minikube -p minikube docker-env)
        docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest ./backend
        docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest ./frontend
        echo "--- Images available inside Minikube: ---"
        docker images

    # Update k8s manifests to use our images
    # sed replaces the placeholder image name with our actual username
    - name: Update k8s manifests with correct image names
      run: |
        sed -i "s|pavanepam/test-todos-backend:latest|${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest|g" k8s/backend.yaml
        sed -i "s|pavanepam/test-todos-frontend:latest|${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest|g" k8s/frontend.yaml

    # Set imagePullPolicy to Never so Kubernetes uses the locally built images
    # instead of trying to pull from DockerHub (it won't find them there in Minikube)
    - name: Patch imagePullPolicy to Never
      run: |
        sed -i "s|imagePullPolicy: Always|imagePullPolicy: Never|g" k8s/backend.yaml
        sed -i "s|imagePullPolicy: Always|imagePullPolicy: Never|g" k8s/frontend.yaml

    # Deploy both services
    - name: Deploy backend
      run: kubectl apply -f k8s/backend.yaml

    - name: Deploy frontend
      run: kubectl apply -f k8s/frontend.yaml

    # Wait for both deployments to be ready (pods running)
    - name: Wait for backend to be ready
      run: kubectl rollout status deployment/backend-deploy --timeout=120s

    - name: Wait for frontend to be ready
      run: kubectl rollout status deployment/frontend-deploy --timeout=120s

    # Verify everything is running
    - name: Check deployment status
      run: |
        echo "=== Pods ==="
        kubectl get pods
        echo "=== Services ==="
        kubectl get services
        echo "=== Deployments ==="
        kubectl get deployments

    # Show the service URLs
    - name: Show service URLs
      run: |
        echo "=== Service List ==="
        minikube service list
        echo "Backend URL: $(minikube service backend-service --url)"
        echo "Frontend URL: $(minikube service frontend-service --url)"
```

## 22. Full Final Workflow File

Save this as `.github/workflows/ci-cd.yaml`:
```yaml
name: Todos App — CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:

env:
  PYTHON_VERSION: '3.11'

jobs:

  # ─── JOB 1: Backend Tests ─────────────────────────────────────────────────
  test-backend:
    name: Backend Tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip'

      - name: Install dependencies
        working-directory: ./backend
        run: pip install -r requirements.txt

      - name: Run pytest with coverage
        working-directory: ./backend
        run: |
          pytest tests/ -v \
            --cov=. \
            --cov-report=xml:coverage.xml \
            --cov-report=term-missing \
            --junitxml=test-results.xml

      - name: Upload test artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: backend-test-results
          path: |
            backend/coverage.xml
            backend/test-results.xml
          retention-days: 14

  # ─── JOB 2: GitLeaks Scan ─────────────────────────────────────────────────
  gitleaks-scan:
    name: GitLeaks Secret Scan
    runs-on: ubuntu-latest
    needs: test-backend
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Run GitLeaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        continue-on-error: true

  # ─── JOB 3: SonarQube Scan ────────────────────────────────────────────────
  sonarqube-scan:
    name: SonarQube Analysis
    runs-on: ubuntu-latest
    needs: test-backend
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Download test artifacts
        uses: actions/download-artifact@v4
        with:
          name: backend-test-results
          path: backend/

      - name: SonarQube Scan
        uses: SonarSource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}

  # ─── JOB 4: Docker Build & Push ───────────────────────────────────────────
  docker-build:
    name: Build & Push Docker Images
    runs-on: ubuntu-latest
    needs: [gitleaks-scan, sonarqube-scan]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Generate image tag
        id: tag
        run: echo "value=$(echo ${{ github.sha }} | cut -c1-7)" >> $GITHUB_OUTPUT

      - name: Build and push backend
        uses: docker/build-push-action@v5
        with:
          context: ./backend
          push: true
          tags: |
            ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest
            ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:${{ steps.tag.outputs.value }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Build and push frontend
        uses: docker/build-push-action@v5
        with:
          context: ./frontend
          push: true
          tags: |
            ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest
            ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:${{ steps.tag.outputs.value }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  # ─── JOB 5: Upload Build Artifacts ────────────────────────────────────────
  upload-artifacts:
    name: Upload Build Artifacts
    runs-on: ubuntu-latest
    needs: docker-build
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Prepare artifacts folder
        run: |
          mkdir -p artifacts
          cp -r k8s/ artifacts/k8s/
          echo "Build: ${{ github.run_number }}" > artifacts/build-info.txt
          echo "Commit: ${{ github.sha }}" >> artifacts/build-info.txt
          echo "Branch: ${{ github.ref_name }}" >> artifacts/build-info.txt
          echo "Actor: ${{ github.actor }}" >> artifacts/build-info.txt
          echo "Timestamp: $(date -u)" >> artifacts/build-info.txt

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: build-artifacts-${{ github.run_number }}
          path: artifacts/
          retention-days: 30

  # ─── JOB 6: Deploy to Minikube ────────────────────────────────────────────
  deploy-minikube:
    name: Deploy to Minikube
    runs-on: ubuntu-latest
    needs: upload-artifacts
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Start Minikube
        uses: medyagh/setup-minikube@master
        with:
          driver: docker
          kubernetes-version: v1.28.0

      - name: Verify cluster
        run: |
          kubectl cluster-info
          kubectl get nodes

      - name: Build images inside Minikube Docker
        run: |
          eval $(minikube -p minikube docker-env)
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest ./backend
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest ./frontend
          docker images

      - name: Update image names in k8s manifests
        run: |
          sed -i "s|pavanepam/test-todos-backend:latest|${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest|g" k8s/backend.yaml
          sed -i "s|pavanepam/test-todos-frontend:latest|${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest|g" k8s/frontend.yaml
          sed -i "s|imagePullPolicy: Always|imagePullPolicy: Never|g" k8s/backend.yaml
          sed -i "s|imagePullPolicy: Always|imagePullPolicy: Never|g" k8s/frontend.yaml

      - name: Deploy to Minikube
        run: |
          kubectl apply -f k8s/backend.yaml
          kubectl apply -f k8s/frontend.yaml

      - name: Wait for deployments to be ready
        run: |
          kubectl rollout status deployment/backend-deploy --timeout=120s
          kubectl rollout status deployment/frontend-deploy --timeout=120s

      - name: Verify deployment
        run: |
          echo "=== Pods ==="
          kubectl get pods
          echo "=== Services ==="
          kubectl get services
          echo "=== Deployments ==="
          kubectl get deployments

      - name: Show service URLs
        run: |
          minikube service list
          echo "Backend: $(minikube service backend-service --url)"
          echo "Frontend: $(minikube service frontend-service --url)"
```

## 23. Setting Up a Self-Hosted Runner (VM)

Use this when you want the pipeline to run on your own machine/server instead of GitHub's cloud runners.
### Why use self-hosted runners?

- Your SonarQube server is on a private network
- You need more compute than GitHub provides
- You want to keep Docker images cached between runs
- You have GPU or specialized hardware requirements
### Step 1: Go to your repo settings

Repository → Settings → Actions → Runners → New self-hosted runner
Select your OS and follow the instructions shown. They look like this:
```bash
# On your VM/machine:

# 1. Create a directory
mkdir actions-runner && cd actions-runner

# 2. Download the runner package (use the URL GitHub shows you)
curl -o actions-runner-linux-x64-2.311.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.311.0/actions-runner-linux-x64-2.311.0.tar.gz

# 3. Extract
tar xzf ./actions-runner-linux-x64-2.311.0.tar.gz

# 4. Configure (GitHub gives you a unique token)
./config.sh --url https://github.com/YOUR_USERNAME/YOUR_REPO --token YOUR_TOKEN

# 5. Start the runner
./run.sh

# OR install as a system service so it auto-starts
sudo ./svc.sh install
sudo ./svc.sh start
```

### Step 2: Add labels to your runner

During `./config.sh` you'll be asked for labels. Add labels like `self-hosted,linux,sonarqube` or `self-hosted,linux,docker`.
### Step 3: Use it in your workflow

```yaml
jobs:
  my-job:
    runs-on: [self-hosted, linux]        # matches your runner's labels
    # runs-on: [self-hosted, sonarqube]  # target a runner with a specific label
```

### Installing prerequisites on your self-hosted runner
```bash
# Docker
sudo apt-get update
sudo apt-get install -y docker.io
sudo usermod -aG docker $USER

# kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Java (needed for SonarScanner)
sudo apt-get install -y openjdk-17-jdk
```

## 24. Setting Up SonarQube Server

The quickest way to run SonarQube is with Docker on your VM or local machine.
### Option 1: Docker (recommended for local/dev)

```bash
# Run SonarQube
docker run -d \
  --name sonarqube \
  -p 9000:9000 \
  -v sonarqube_data:/opt/sonarqube/data \
  -v sonarqube_extensions:/opt/sonarqube/extensions \
  -v sonarqube_logs:/opt/sonarqube/logs \
  sonarqube:lts-community

# Access at: http://localhost:9000
# Default login: admin / admin (you'll be asked to change this)
```

### Option 2: Docker Compose (better for persistent setup)
Section titled “Option 2: Docker Compose (better for persistent setup)”version: '3'services: sonarqube: image: sonarqube:lts-community ports: - "9000:9000" environment: SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar SONAR_JDBC_USERNAME: sonar SONAR_JDBC_PASSWORD: sonar volumes: - sonarqube_data:/opt/sonarqube/data - sonarqube_extensions:/opt/sonarqube/extensions - sonarqube_logs:/opt/sonarqube/logs
db: image: postgres:15 environment: POSTGRES_USER: sonar POSTGRES_PASSWORD: sonar POSTGRES_DB: sonar volumes: - postgresql_data:/var/lib/postgresql/data
volumes: sonarqube_data: sonarqube_extensions: sonarqube_logs: postgresql_data:docker-compose up -dAfter SonarQube starts:
Section titled “After SonarQube starts:”- Open
http://your-server-ip:9000 - Login with
admin/admin, set a new password - Go to My Account → Security → Generate Tokens
- Create a token named
github-actions, copy it - Add to GitHub Secrets:
SONAR_TOKEN= the token you just copiedSONAR_HOST_URL=http://your-server-ip:9000
### If SonarQube fails to start (Elasticsearch issue):

```bash
# On your host machine, increase virtual memory
sudo sysctl -w vm.max_map_count=524288
sudo sysctl -w fs.file-max=131072

# Make permanent
echo "vm.max_map_count=524288" | sudo tee -a /etc/sysctl.conf
echo "fs.file-max=131072" | sudo tee -a /etc/sysctl.conf
```

## 25. Quick Reference Cheatsheet

### Workflow File Skeleton

```yaml
name: Workflow Name

on: [push]

env:
  KEY: value

jobs:
  job-name:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "hello"
```

### Most Used Actions

| Action | Purpose | Usage |
|---|---|---|
| `actions/checkout@v4` | Get your code | `uses: actions/checkout@v4` |
| `actions/setup-python@v5` | Install Python | `with: python-version: '3.11'` |
| `actions/setup-node@v4` | Install Node.js | `with: node-version: '20'` |
| `actions/upload-artifact@v4` | Save files | `with: name: x, path: ./folder` |
| `actions/download-artifact@v4` | Get saved files | `with: name: x` |
| `docker/login-action@v3` | DockerHub login | `with: username, password` |
| `docker/build-push-action@v5` | Build+push image | `with: context, push, tags` |
| `medyagh/setup-minikube@master` | Start Minikube | bare `uses:` works |
### Key Contexts

```yaml
${{ github.sha }}                 # full commit SHA
${{ github.ref_name }}            # branch name
${{ github.actor }}               # who triggered
${{ github.run_number }}          # incrementing build number
${{ secrets.MY_SECRET }}          # encrypted secret
${{ env.MY_VAR }}                 # environment variable
${{ steps.STEP_ID.outputs.KEY }}  # step output
${{ needs.JOB_ID.outputs.KEY }}   # job output
```

### Common `if` conditions

```yaml
if: github.ref == 'refs/heads/main'
if: github.event_name == 'push'
if: failure()
if: always()
if: success()
if: contains(github.ref, 'release')
if: startsWith(github.ref, 'refs/tags/')
```

### Writing outputs

```bash
# From a step to later steps in same job
echo "key=value" >> $GITHUB_OUTPUT

# Persist env var across steps in same job
echo "MY_VAR=hello" >> $GITHUB_ENV

# Add to PATH
echo "/my/tool/bin" >> $GITHUB_PATH
```

### Secrets to Add in GitHub for this Project
| Secret Name | Value |
|---|---|
| `DOCKERHUB_USERNAME` | Your DockerHub username |
| `DOCKERHUB_TOKEN` | DockerHub access token |
| `SONAR_TOKEN` | Token from SonarQube UI |
| `SONAR_HOST_URL` | `http://your-server-ip:9000` |
### File Structure for This Project

```
test-todos/
├── .github/
│   └── workflows/
│       └── ci-cd.yaml
├── backend/
│   ├── app.py
│   ├── requirements.txt
│   ├── Dockerfile
│   └── tests/
│       ├── __init__.py
│       └── test_api.py
├── frontend/
│   ├── src/
│   ├── Dockerfile
│   └── package.json
├── k8s/
│   ├── backend.yaml
│   └── frontend.yaml
├── sonar-project.properties
└── .gitleaks.toml
```