
GitHub Actions — Complete Notes: Basics to Production Deployment


From Syntax Fundamentals → Security Scanning → Docker → Minikube Deployment


  1. What is GitHub Actions?
  2. GitHub Actions vs Jenkins
  3. Core Concepts & Terminology
  4. Complete YAML Syntax Reference
  5. Triggers — The on Block
  6. Jobs — Structure & Configuration
  7. Steps — The Building Blocks
  8. Runners — Where Your Code Runs
  9. Environment Variables & Secrets
  10. Expressions, Contexts & Conditionals
  11. Artifacts — Storing Build Outputs
  12. Reusable Workflows & Composite Actions
  13. Project Setup — Todos App
  14. Backend Test Cases (pytest)
  15. Complete Pipeline — Stage by Stage
      • Stage 1: Checkout & Test
      • Stage 2: GitLeaks Secret Scanning
      • Stage 3: SonarQube Code Quality Scan
      • Stage 4: Docker Build & Push
      • Stage 5: Upload Build Artifacts
      • Stage 6: Deploy to Minikube
  16. Full Final Workflow File
  17. Setting Up a Self-Hosted Runner (VM)
  18. Setting Up SonarQube Server
  19. Quick Reference Cheatsheet

GitHub Actions is a CI/CD platform built directly into GitHub. It lets you automate workflows — testing, building, scanning, and deploying your code — triggered by events in your repository.

Every time something happens in your repo (a push, a pull request, a tag), GitHub can automatically run a set of instructions you define in a YAML file.

Workflow files always live inside your repository at:

your-repo/
└── .github/
    └── workflows/
        └── your-workflow.yaml   ← you write this

You can have multiple workflow files — one per concern (ci.yaml, deploy.yaml, security.yaml, etc.).
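A minimal first workflow, as a sketch (the file name ci.yaml and job name hello are arbitrary choices, not requirements):

```yaml
# .github/workflows/ci.yaml
name: CI

on:
  push:
    branches: [ main ]

jobs:
  hello:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Hello from GitHub Actions"
```

Commit this file to the default branch and the Actions tab will show a run on the next push.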


| Feature           | GitHub Actions                                | Jenkins                                          |
|-------------------|-----------------------------------------------|--------------------------------------------------|
| Where it runs     | GitHub’s cloud (or self-hosted)               | Your own server, always                          |
| Setup effort      | Zero — built into GitHub                      | Significant — install, configure, maintain server |
| Configuration     | YAML files in your repo                       | Jenkinsfile (Groovy) in your repo                |
| Free tier         | 2,000 minutes/month (public repos: unlimited) | Free, but you pay for your server                |
| Plugins           | GitHub Marketplace Actions                    | 1,800+ Jenkins plugins                           |
| Scaling           | GitHub handles it automatically               | You manage your own agents                       |
| Secret management | GitHub Secrets (built-in)                     | Jenkins Credentials plugin                       |
| Visibility        | Actions tab in GitHub UI                      | Jenkins UI (separate server)                     |
| Best for          | Projects already on GitHub                    | Enterprise, complex multi-server setups          |
  • GitHub Actions = simpler, zero-infrastructure, tightly integrated with GitHub
  • Jenkins = more control, better for complex enterprise setups, tool-agnostic

Before writing any YAML, understand these building blocks:

Workflow
├── triggered by an Event (push, PR, schedule, manual)
└── contains one or more Jobs
    ├── each Job runs on a Runner (machine)
    └── each Job contains Steps
        └── each Step runs a shell command OR calls an Action

Event — something that happens in GitHub that triggers a workflow. Examples: push, pull_request, schedule, workflow_dispatch (manual trigger).

Workflow — the entire automation definition. One YAML file = one workflow.

Job — a group of steps that run together on the same machine. Jobs run in parallel by default. Use needs: to make them sequential.

Step — a single task inside a job. Either:

  • run: — a shell command you write
  • uses: — a pre-built Action from the Marketplace

Action — a reusable unit of code (someone else’s step you can plug in). Example: actions/checkout@v4 checks out your repo code.

Runner — the machine (virtual or physical) that runs your job. Can be:

  • GitHub-hosted — GitHub provides it, free tier available
  • Self-hosted — your own VM/server registered with GitHub

Artifact — a file or folder produced during a workflow that you want to save and share between jobs or download later.

Secret — an encrypted value stored in GitHub settings, used for passwords, tokens, and API keys. Accessed as ${{ secrets.MY_SECRET }}.


A fully annotated example showing every major construct:

# Workflow name — shows in the Actions tab
name: My Complete Workflow

# ─────────────────────────────────────────────
# TRIGGERS — what events start this workflow
# ─────────────────────────────────────────────
on:
  push:
    branches: [ main, develop ]   # only on these branches
    paths:
      - 'backend/**'              # only when these paths change
  pull_request:
    branches: [ main ]
  schedule:
    - cron: '0 9 * * 1'           # every Monday at 9am UTC
  workflow_dispatch:              # allows manual trigger from UI
    inputs:
      environment:
        description: 'Deploy target'
        required: true
        default: 'staging'
        type: choice
        options: [ staging, production ]

# ─────────────────────────────────────────────
# GLOBAL environment variables (all jobs see these)
# ─────────────────────────────────────────────
env:
  NODE_VERSION: '18'
  PYTHON_VERSION: '3.11'
  APP_NAME: 'todos-app'

# ─────────────────────────────────────────────
# JOBS
# ─────────────────────────────────────────────
jobs:
  # ── JOB 1 ──────────────────────────────────
  test:
    name: Run Tests              # display name in UI
    runs-on: ubuntu-latest       # which runner to use

    # Job-level environment variables
    env:
      FLASK_ENV: testing

    # Job-level permissions
    permissions:
      contents: read

    # Strategy — run this job multiple times with different configs
    strategy:
      matrix:
        python-version: ['3.10', '3.11']
      fail-fast: false           # don't cancel other matrix jobs if one fails

    steps:
      # Step 1: Always start by checking out your code
      - name: Checkout code
        uses: actions/checkout@v4

      # Step 2: A step with a shell command
      - name: Print Python version
        run: python --version

      # Step 3: Multi-line shell commands
      - name: Install and test
        run: |
          pip install -r requirements.txt
          pytest tests/ -v

      # Step 4: Step with environment variable override
      - name: Run with env
        env:
          DATABASE_URL: sqlite:///test.db
        run: python -m pytest

      # Step 5: Conditional step — only runs if condition is true
      - name: Only on main branch
        if: github.ref == 'refs/heads/main'
        run: echo "This is the main branch"

      # Step 6: Step that uses a secret
      - name: Use a secret
        run: echo "Token is ${{ secrets.MY_TOKEN }}"

      # Step 7: Upload an artifact
      - name: Save test results
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: test-results/
          retention-days: 7

  # ── JOB 2 — depends on Job 1 ───────────────
  build:
    name: Build Docker Image
    runs-on: ubuntu-latest
    needs: test                  # waits for 'test' job to succeed first
    # needs: [test, lint]        # can depend on multiple jobs
    steps:
      - uses: actions/checkout@v4
      - name: Download artifact from previous job
        uses: actions/download-artifact@v4
        with:
          name: test-results
      - name: Build Docker image
        run: docker build -t myapp:latest .

  # ── JOB 3 — runs in parallel with Job 2 ────
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    # no 'needs' = runs in parallel with other jobs
    steps:
      - uses: actions/checkout@v4
      - run: pip install flake8 && flake8 .

The on block defines what event starts the workflow.

on:
  # ── Push to specific branches
  # (note: branches and branches-ignore cannot both be used on the same
  # event — they are shown together here only for reference)
  push:
    branches:
      - main
      - 'release/*'          # wildcard — matches release/1.0, release/v2 etc.
    branches-ignore:
      - 'dependabot/**'      # ignore dependabot branches
    tags:
      - 'v*'                 # trigger on version tags like v1.0.0
    paths:
      - 'src/**'             # only trigger if files in src/ changed
    paths-ignore:
      - '**.md'              # ignore markdown file changes

  # ── Pull Request events
  pull_request:
    types: [opened, synchronize, reopened]   # specific PR events
    branches: [ main ]

  # ── Scheduled (cron syntax)
  schedule:
    - cron: '30 5 * * 1,3'   # 5:30am UTC every Monday and Wednesday

  # ── Manual trigger with optional inputs
  workflow_dispatch:
    inputs:
      tag:
        description: 'Docker image tag'
        required: true
        type: string
      dry_run:
        description: 'Dry run?'
        required: false
        type: boolean
        default: false

  # ── Triggered by another workflow
  workflow_call:
    inputs:
      environment:
        required: true
        type: string

  # ── Triggered by external HTTP call (webhooks)
  repository_dispatch:
    types: [deploy-event]
Reading workflow_dispatch inputs inside a job's steps:

steps:
  - name: Use input
    run: echo "Tag is ${{ inputs.tag }}"
  - name: Deploy unless dry run
    if: inputs.dry_run == false
    run: ./deploy.sh
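The cron strings used by schedule: follow the standard five-field layout (minute, hour, day-of-month, month, day-of-week — all evaluated in UTC). A tiny Python helper, purely illustrative and not part of Actions, to label the fields:

```python
# Illustrative only: split a five-field cron expression into named fields.
def cron_fields(expr: str) -> dict:
    minute, hour, day_of_month, month, day_of_week = expr.split()
    return {
        "minute": minute,
        "hour": hour,
        "day_of_month": day_of_month,
        "month": month,
        "day_of_week": day_of_week,
    }

# '30 5 * * 1,3' = 05:30 UTC on Mondays (1) and Wednesdays (3)
print(cron_fields("30 5 * * 1,3"))
```

Day-of-week runs 0-6 with 0 = Sunday, so 1,3 is Monday and Wednesday.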

jobs:
  # These three run IN PARALLEL (no needs:)
  test-backend:
    runs-on: ubuntu-latest
    steps: [...]
  test-frontend:
    runs-on: ubuntu-latest
    steps: [...]
  security-scan:
    runs-on: ubuntu-latest
    steps: [...]

  # This runs AFTER all three above complete
  build:
    needs: [test-backend, test-frontend, security-scan]
    runs-on: ubuntu-latest
    steps: [...]

  # This runs AFTER build
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps: [...]
jobs:
  job1:
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.set-tag.outputs.tag }}   # expose step output as job output
    steps:
      - uses: actions/checkout@v4   # needed — git commands require the repo
      - name: Set image tag
        id: set-tag                 # give the step an ID
        run: echo "tag=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

  job2:
    needs: job1
    runs-on: ubuntu-latest
    steps:
      - name: Use the tag
        run: echo "Image tag is ${{ needs.job1.outputs.image-tag }}"

Matrix Strategy — Run one job with multiple configurations

jobs:
  test:
    runs-on: ${{ matrix.os }}   # use the matrix OS (not a hardcoded runner)
    strategy:
      matrix:
        python: ['3.9', '3.10', '3.11']
        os: [ubuntu-latest, windows-latest]
      fail-fast: false      # keep running other combos if one fails
      max-parallel: 3       # run at most 3 at once
    steps:
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python }}

Concurrency — set at the top level of the workflow file to cancel superseded runs:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true   # cancels the older run if a new one starts

Every step is either run (shell command) or uses (action).

steps:
  # Single line
  - name: Hello
    run: echo "Hello World"

  # Multi-line (each line runs as a separate command)
  - name: Multi-line
    run: |
      echo "Line 1"
      echo "Line 2"
      pip install -r requirements.txt

  # Change working directory
  - name: Run in subfolder
    working-directory: ./backend
    run: python app.py

  # Use a specific shell
  - name: Use bash explicitly
    shell: bash
    run: echo "Using bash"

  # Windows PowerShell
  - name: PowerShell step
    shell: pwsh
    run: Write-Host "Windows"

  # Python script inline
  - name: Inline Python
    shell: python
    run: |
      import os
      print(os.environ.get('HOME'))
steps:
  # Basic action — no parameters
  - uses: actions/checkout@v4

  # Action with parameters
  - uses: actions/setup-python@v5
    with:
      python-version: '3.11'
      cache: 'pip'   # cache pip dependencies automatically

  # Action pinned to a specific commit (most secure — greatly reduces supply-chain risk)
  - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683

  # Local action defined in your own repo
  - uses: ./.github/actions/my-custom-action

  # Docker-based action
  - uses: docker://python:3.11
    with:
      args: python --version

Step outputs — Reading results from steps

steps:
  - name: Get date
    id: date
    run: echo "today=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT

  - name: Use date
    run: echo "Today is ${{ steps.date.outputs.today }}"
steps:
  - name: This might fail
    run: ./risky-script.sh
    continue-on-error: true   # workflow continues even if this fails

  - name: Always runs
    if: always()              # runs regardless of previous step status
    run: echo "Cleanup time"

  - name: Only on failure
    if: failure()
    run: echo "Something went wrong, sending alert"

  - name: Only on success
    if: success()
    run: echo "Everything passed!"

jobs:
  my-job:
    runs-on: ubuntu-latest      # Ubuntu Linux (most common)
    # runs-on: ubuntu-22.04     # specific Ubuntu version
    # runs-on: windows-latest   # Windows Server
    # runs-on: macos-latest     # macOS
    # runs-on: macos-14         # specific macOS version

What GitHub-hosted runners include:

  • Docker, Docker Compose
  • Git, curl, wget
  • Node.js, Python, Java, Go, Ruby (multiple versions)
  • kubectl, Helm
  • AWS CLI, Azure CLI, GCP CLI
  • And 100s more pre-installed tools

Free tier limits:

  • Public repos: unlimited minutes
  • Private repos: 2,000 minutes/month on free plan

Use your own VM when you need:

  • More RAM/CPU than GitHub provides
  • Access to private network resources
  • Persistent storage between runs
  • No minute limits
  • Specific software pre-installed
jobs:
  deploy:
    runs-on: self-hosted                        # any registered self-hosted runner
    # runs-on: [self-hosted, linux]
    # runs-on: [self-hosted, linux, sonarqube]  # runner with a specific label

See Section 17 (Setting Up a Self-Hosted Runner) for how to register your own VM as a runner.


GitHub Level Secrets/Variables → available to ALL repos in your account/org
Repository Level Secrets/Variables → available to ONE repo
Environment Level Secrets/Variables → available only when deploying to specific environment
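Environment-level secrets only become available when a job targets that environment via the environment: key. A sketch (the environment name production and the secret PROD_DEPLOY_KEY are example names, not defaults):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # unlocks secrets scoped to the 'production' environment
    steps:
      # PROD_DEPLOY_KEY is a hypothetical secret defined on that environment
      - run: echo "Deploying with ${{ secrets.PROD_DEPLOY_KEY }}"
```

Environments also let you require manual approval before the job runs, configured under Settings → Environments.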

In the workflow file (non-sensitive):

env:
  APP_NAME: todos-app      # top level = all jobs
  NODE_ENV: production

jobs:
  build:
    env:
      BUILD_DIR: ./dist    # job level = this job only
    steps:
      - name: Step with env
        env:
          STEP_VAR: hello  # step level = this step only
        run: echo $STEP_VAR

In GitHub UI (for secrets): Go to: Repository → Settings → Secrets and variables → Actions → New repository secret

Accessing secrets in workflow:

steps:
  - name: Login to DockerHub
    run: echo "${{ secrets.DOCKERHUB_PASSWORD }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin

  - name: Use token
    env:
      TOKEN: ${{ secrets.MY_TOKEN }}   # safer: set as env var, not inline
    run: curl -H "Authorization: Bearer $TOKEN" https://api.example.com

These are automatically available — no setup needed:

run: |
  echo "Repo: ${{ github.repository }}"      # owner/repo-name
  echo "Branch: ${{ github.ref_name }}"      # main
  echo "Full ref: ${{ github.ref }}"         # refs/heads/main
  echo "Commit SHA: ${{ github.sha }}"       # abc1234...
  echo "Short SHA: $(echo ${{ github.sha }} | cut -c1-7)"   # no built-in short SHA; truncate it yourself
  echo "Actor: ${{ github.actor }}"          # who triggered it
  echo "Event: ${{ github.event_name }}"     # push, pull_request, etc.
  echo "Run ID: ${{ github.run_id }}"
  echo "Run number: ${{ github.run_number }}"
  echo "Workspace: ${{ github.workspace }}"  # /home/runner/work/repo/repo

Writing to $GITHUB_ENV — Persist variables across steps

steps:
  - name: Set variable for later steps
    run: echo "IMAGE_TAG=$(git rev-parse --short HEAD)" >> $GITHUB_ENV

  - name: Use the variable
    run: docker build -t myapp:$IMAGE_TAG .   # $IMAGE_TAG is now available
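Under the hood, $GITHUB_ENV is just a file of KEY=value lines that the runner reads between steps. A local sketch of the mechanism (the temp file here is a stand-in for the real runner path):

```shell
# Simulate what the runner does with $GITHUB_ENV
GITHUB_ENV=$(mktemp)

# "Step 1" appends a KEY=value line, exactly like `>> $GITHUB_ENV` in a workflow
echo "IMAGE_TAG=abc1234" >> "$GITHUB_ENV"

# Between steps, the runner loads the file into the environment
export "$(cat "$GITHUB_ENV")"

# "Step 2" can now read the variable
echo "tag is $IMAGE_TAG"   # prints: tag is abc1234
```

This is why the variable is visible only to later steps in the same job: the file never leaves that runner.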

Writing to $GITHUB_OUTPUT — Pass values between steps

steps:
  - name: Compute tag
    id: tag
    run: echo "value=v1.2.3" >> $GITHUB_OUTPUT

  - name: Use tag
    run: echo "${{ steps.tag.outputs.value }}"

Expressions always go inside ${{ }}:

run: echo ${{ github.actor }}
if: ${{ github.ref == 'refs/heads/main' }}
${{ github.* }} # repo, branch, SHA, event info
${{ env.* }} # environment variables
${{ secrets.* }} # encrypted secrets
${{ steps.* }} # outputs from steps in current job
${{ needs.* }} # outputs from jobs this job depends on
${{ inputs.* }} # workflow_dispatch or workflow_call inputs
${{ matrix.* }} # current matrix values
${{ runner.* }} # runner.os, runner.arch, runner.temp
${{ job.status }} # success, failure, cancelled
# Only on main branch
if: github.ref == 'refs/heads/main'
# Only on pull requests
if: github.event_name == 'pull_request'
# Only if previous step failed
if: failure()
# Run always (even if workflow is failing)
if: always()
# Only on specific OS in matrix
if: matrix.os == 'ubuntu-latest'
# Combine conditions
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
# Check if a secret exists — note: the secrets context is NOT available
# inside if: expressions; expose the secret as an env var first, then:
if: env.DEPLOY_KEY != ''
# Only if NOT a fork
if: github.event.pull_request.head.repo.full_name == github.repository
# String contains
if: contains(github.ref, 'release')
# String starts with
if: startsWith(github.ref, 'refs/tags/v')
# Check if file changed (needs specific setup)
if: contains(steps.changes.outputs.changed_files, 'backend/')
# Format a string
run: echo "${{ format('Hello {0}!', github.actor) }}"
# Convert to JSON
run: echo "${{ toJSON(github.event) }}"
# Check if input is true
if: ${{ inputs.deploy == true }}
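The expression functions above behave like their common string-library counterparts. A rough Python analogy, illustrative only (Actions evaluates the real expressions on its own side):

```python
ref = "refs/tags/v1.2.3"

# contains(github.ref, 'release')  →  substring test
print("release" in ref)                 # False

# startsWith(github.ref, 'refs/tags/v')  →  prefix test
print(ref.startswith("refs/tags/v"))    # True

# format('Hello {0}!', github.actor)  →  positional formatting
print("Hello {0}!".format("octocat"))   # Hello octocat!
```

One difference worth remembering: Actions string comparisons with contains() and startsWith() are case-insensitive, unlike Python's.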

Artifacts let you save files from a job so you can:

  • Download them from the GitHub UI after the run
  • Share them between jobs in the same workflow
  • Keep build outputs (binaries, reports, coverage files) for reference
- name: Upload test report
  uses: actions/upload-artifact@v4
  with:
    name: test-report            # name shown in GitHub UI
    path: |
      test-results/
      coverage.xml
    retention-days: 30           # auto-delete after 30 days (default: 90)
    if-no-files-found: error     # error | warn | ignore
    overwrite: true
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Build frontend
        run: npm run build
      - name: Upload dist folder
        uses: actions/upload-artifact@v4
        with:
          name: frontend-dist
          path: frontend/dist/

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Download dist folder
        uses: actions/download-artifact@v4
        with:
          name: frontend-dist
          path: ./dist            # where to put it on this runner
      - name: Deploy
        run: rsync -av ./dist/ user@server:/var/www/html/

Download ALL artifacts from the run at once:

- uses: actions/download-artifact@v4
  with:
    path: all-artifacts/          # all artifacts downloaded into subdirectories here

12. Reusable Workflows & Composite Actions


Reusable Workflow — Call one workflow from another


Define it (.github/workflows/deploy-template.yaml):

on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
    secrets:
      DEPLOY_KEY:
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to ${{ inputs.environment }}"

Call it from another workflow:

jobs:
  deploy-staging:
    uses: ./.github/workflows/deploy-template.yaml
    with:
      environment: staging
    secrets:
      DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}

Composite Action — Reuse steps across workflows


Create .github/actions/setup-python-env/action.yaml:

name: 'Setup Python Environment'
description: 'Installs Python and dependencies'

inputs:
  python-version:
    required: false
    default: '3.11'

runs:
  using: composite
  steps:
    - uses: actions/setup-python@v5
      with:
        python-version: ${{ inputs.python-version }}
    - run: pip install -r requirements.txt
      shell: bash
      working-directory: ./backend

Use it:

steps:
  - uses: ./.github/actions/setup-python-env
    with:
      python-version: '3.11'

test-todos/
├── .github/
│   └── workflows/
│       └── ci-cd.yaml           ← our pipeline
├── backend/
│   ├── app.py                   ← Flask API
│   ├── requirements.txt
│   ├── Dockerfile
│   └── tests/
│       └── test_api.py          ← we will create this
├── frontend/
│   ├── src/
│   │   ├── App.jsx
│   │   └── main.jsx
│   ├── Dockerfile
│   ├── package.json
│   └── vite.config.js
├── k8s/
│   ├── backend.yaml
│   └── frontend.yaml
└── sonar-project.properties     ← we will create this

The backend exposes three endpoints:

GET /api/v1/get-todos   → returns list of todos
GET /api/v1/ping        → pings a target (has security vulnerability)
GET /api/v1/user        → fetches user by id (has SQL injection vulnerability)

Update requirements.txt to include test dependencies

Flask==3.0.0
flask-cors==4.0.0
pytest==7.4.0
pytest-cov==4.1.0

Create sonar-project.properties (project root)

sonar.projectKey=todos-app
sonar.projectName=Todos App
sonar.projectVersion=1.0
sonar.sources=backend,frontend/src
sonar.python.coverage.reportPaths=backend/coverage.xml
sonar.exclusions=**/node_modules/**,**/__pycache__/**,**/tests/**

Create backend/tests/__init__.py (empty file) and backend/tests/test_api.py:

backend/tests/test_api.py
import pytest
import json
import sys
import os

# Make sure the backend module is importable
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))

from app import app


# ─── FIXTURE ────────────────────────────────────────────
# A pytest fixture creates a reusable test client.
# Flask provides a test client that simulates HTTP requests
# without actually starting a server.
@pytest.fixture
def client():
    """Create a test client for the Flask app."""
    app.config['TESTING'] = True
    with app.test_client() as client:
        yield client


# ─── GET TODOS TESTS ────────────────────────────────────
class TestGetTodos:
    """Tests for the GET /api/v1/get-todos endpoint."""

    def test_get_todos_returns_200(self, client):
        """Endpoint should return HTTP 200 OK."""
        response = client.get('/api/v1/get-todos')
        assert response.status_code == 200

    def test_get_todos_returns_json(self, client):
        """Response content type should be application/json."""
        response = client.get('/api/v1/get-todos')
        assert response.content_type == 'application/json'

    def test_get_todos_returns_list(self, client):
        """Response body should be a JSON array."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        assert isinstance(data, list)

    def test_get_todos_is_not_empty(self, client):
        """The todos list should contain at least one item."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        assert len(data) > 0

    def test_get_todos_items_have_required_fields(self, client):
        """Each todo item must have 'id' and 'task' fields."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        for todo in data:
            assert 'id' in todo, f"Missing 'id' field in todo: {todo}"
            assert 'task' in todo, f"Missing 'task' field in todo: {todo}"

    def test_get_todos_id_is_integer(self, client):
        """Each todo 'id' field should be an integer."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        for todo in data:
            assert isinstance(todo['id'], int)

    def test_get_todos_task_is_string(self, client):
        """Each todo 'task' field should be a non-empty string."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        for todo in data:
            assert isinstance(todo['task'], str)
            assert len(todo['task']) > 0

    def test_get_todos_ids_are_unique(self, client):
        """All todo IDs should be unique — no duplicates."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        ids = [todo['id'] for todo in data]
        assert len(ids) == len(set(ids)), "Duplicate IDs found in todos"

    def test_get_todos_expected_count(self, client):
        """Should return the expected number of pre-seeded todos (4)."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        assert len(data) == 4

    def test_get_todos_contains_known_item(self, client):
        """Should contain a specific known todo task."""
        response = client.get('/api/v1/get-todos')
        data = json.loads(response.data)
        tasks = [todo['task'] for todo in data]
        assert 'Build Docker Image!!' in tasks


# ─── PING ENDPOINT TESTS ────────────────────────────────
class TestPingEndpoint:
    """Tests for the GET /api/v1/ping endpoint."""

    def test_ping_returns_200(self, client):
        """Ping with default target should return HTTP 200."""
        response = client.get('/api/v1/ping')
        assert response.status_code == 200

    def test_ping_returns_json(self, client):
        """Response should be JSON."""
        response = client.get('/api/v1/ping')
        assert response.content_type == 'application/json'

    def test_ping_response_has_status_field(self, client):
        """Response JSON should have a 'status' field."""
        response = client.get('/api/v1/ping')
        data = json.loads(response.data)
        assert 'status' in data

    def test_ping_response_status_value(self, client):
        """The 'status' field should equal 'ping executed'."""
        response = client.get('/api/v1/ping')
        data = json.loads(response.data)
        assert data['status'] == 'ping executed'

    def test_ping_with_custom_target(self, client):
        """Endpoint should accept a custom 'target' query parameter."""
        response = client.get('/api/v1/ping?target=127.0.0.1')
        assert response.status_code == 200

    def test_ping_without_target_param(self, client):
        """Endpoint should work without a target param (defaults to 127.0.0.1)."""
        response = client.get('/api/v1/ping')
        assert response.status_code == 200


# ─── USER ENDPOINT TESTS ────────────────────────────────
class TestUserEndpoint:
    """Tests for the GET /api/v1/user endpoint."""

    def test_user_returns_200(self, client):
        """User endpoint with valid id should return 200."""
        response = client.get('/api/v1/user?id=1')
        assert response.status_code == 200

    def test_user_returns_json(self, client):
        """Response should be JSON."""
        response = client.get('/api/v1/user?id=1')
        assert response.content_type == 'application/json'

    def test_user_response_has_status_field(self, client):
        """Response should contain a 'status' field."""
        response = client.get('/api/v1/user?id=1')
        data = json.loads(response.data)
        assert 'status' in data

    def test_user_works_without_id_param(self, client):
        """Should use default id=1 if no id param provided."""
        response = client.get('/api/v1/user')
        assert response.status_code == 200


# ─── GENERAL API TESTS ──────────────────────────────────
class TestGeneralAPI:
    """General API behaviour tests."""

    def test_unknown_route_returns_404(self, client):
        """A route that doesn't exist should return 404."""
        response = client.get('/api/v1/nonexistent')
        assert response.status_code == 404

    def test_get_todos_method_not_allowed_on_post(self, client):
        """POST to get-todos should return 405 Method Not Allowed."""
        response = client.post('/api/v1/get-todos')
        assert response.status_code == 405

    def test_cors_header_present(self, client):
        """CORS header should be present on responses (flask-cors is configured)."""
        response = client.get('/api/v1/get-todos',
                              headers={'Origin': 'http://localhost:3000'})
        assert 'Access-Control-Allow-Origin' in response.headers
Run the tests locally:
cd backend
pip install -r requirements.txt
pytest tests/ -v --cov=. --cov-report=xml:coverage.xml

Here is the full pipeline we are building and what each stage does:

[push to main]
        │
┌───────▼─────────┐
│ 1. test-backend │  Run pytest, generate coverage report
└────────┬────────┘
         │
    ┌────┴───────────────────────┐
    │                            │
┌───▼─────────────┐    ┌─────────▼────────┐
│ 2. gitleaks     │    │ 3. sonarqube     │  (run in parallel after tests)
│    secret scan  │    │    code quality  │
└────────┬────────┘    └─────────┬────────┘
         │                       │
         └──────────┬────────────┘
                    │
         ┌──────────▼──────────┐
         │ 4. docker-build     │  Build & push backend + frontend images
         └──────────┬──────────┘
                    │
         ┌──────────▼──────────┐
         │ 5. upload-artifacts │  Save build info, k8s files
         └──────────┬──────────┘
                    │
         ┌──────────▼──────────┐
         │ 6. deploy-minikube  │  Start Minikube, deploy both services
         └─────────────────────┘

jobs:
  test-backend:
    name: Backend Tests
    runs-on: ubuntu-latest
    steps:
      # Step 1: Always first — get your code onto the runner
      - name: Checkout code
        uses: actions/checkout@v4

      # Step 2: Install the right Python version
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'          # caches pip packages — speeds up future runs

      # Step 3: Install all dependencies including test tools
      - name: Install dependencies
        working-directory: ./backend
        run: pip install -r requirements.txt

      # Step 4: Run pytest with coverage report
      - name: Run tests
        working-directory: ./backend
        run: |
          pytest tests/ -v \
            --cov=. \
            --cov-report=xml:coverage.xml \
            --cov-report=term-missing \
            --junitxml=test-results.xml

      # Step 5: Upload test results as artifact for later review
      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()            # upload even if tests fail (for debugging)
        with:
          name: backend-test-results
          path: |
            backend/coverage.xml
            backend/test-results.xml
          retention-days: 14

GitLeaks scans your entire git history for accidentally committed secrets — API keys, passwords, tokens, AWS credentials, etc.

This is critical because your app.py has AWS_SECRET_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE" and App.jsx has a hardcoded GitHub token — both are real-world mistakes GitLeaks catches.

gitleaks-scan:
  name: GitLeaks Secret Scan
  runs-on: ubuntu-latest
  needs: test-backend
  steps:
    - name: Checkout code
      uses: actions/checkout@v4
      with:
        fetch-depth: 0   # IMPORTANT: fetch full git history, not just the latest
                         # commit — GitLeaks needs all history to scan past commits

    - name: Run GitLeaks
      uses: gitleaks/gitleaks-action@v2
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        GITLEAKS_LICENSE: ${{ secrets.GITLEAKS_LICENSE }}   # needed for orgs; optional for personal repos
      continue-on-error: true   # set to false to BLOCK the pipeline when secrets are found

What GitLeaks detects:
  • AWS Access Keys and Secret Keys
  • GitHub Personal Access Tokens (ghp_...)
  • Slack tokens
  • Private keys (RSA, EC, PGP)
  • Generic high-entropy strings that look like secrets
  • Database connection strings with passwords

Creating a .gitleaks.toml config to whitelist false positives:

# .gitleaks.toml — place in project root
[allowlist]
description = "Whitelist known test/example values"
regexes = [
  '''AKIAIOSFODNN7EXAMPLE''',   # AWS example key used in docs
]
paths = [
  '''tests/''',                 # ignore test files
]

SonarQube performs static code analysis — finding bugs, code smells, security hotspots, and vulnerabilities without running the code.

What SonarQube will find in our Todos app:

  • os.system(f"ping -c 1 {target}") → Critical: Command Injection
  • "SELECT * FROM users WHERE id = '" + user_id + "'" → Critical: SQL Injection
  • app.run(debug=True) → Security Hotspot: Debug mode in production
  • AWS_SECRET_ACCESS_KEY = "AKIAIOSFODNN7EXAMPLE" → Blocker: Hardcoded credential
  • dangerouslySetInnerHTML with raw URL param → Critical: XSS
Prerequisites:

  • A running SonarQube server (see Section 18)
  • SONAR_TOKEN stored as a GitHub Secret
  • SONAR_HOST_URL stored as a GitHub Secret (e.g., http://your-server-ip:9000)
sonarqube-scan:
  name: SonarQube Analysis
  runs-on: ubuntu-latest
  needs: test-backend
  steps:
    - name: Checkout code
      uses: actions/checkout@v4
      with:
        fetch-depth: 0   # full history needed for accurate blame/diff analysis

    # Download coverage report generated in the test stage
    - name: Download test artifacts
      uses: actions/download-artifact@v4
      with:
        name: backend-test-results
        path: backend/

    - name: SonarQube Scan
      uses: SonarSource/sonarqube-scan-action@master
      env:
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}

    # Optional: make the pipeline wait for the Quality Gate result
    # Quality Gate = pass/fail threshold you configure in SonarQube
    - name: SonarQube Quality Gate Check
      uses: SonarSource/sonarqube-quality-gate-action@master
      timeout-minutes: 5
      env:
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
        SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}

sonar-project.properties (at project root):

sonar.projectKey=todos-app
sonar.projectName=Todos App
sonar.projectVersion=1.0
# Scan both backend Python and frontend JS/JSX
sonar.sources=backend,frontend/src
# Tell SonarQube where the Python coverage report is
sonar.python.coverage.reportPaths=backend/coverage.xml
# Test results
sonar.python.xunit.reportPath=backend/test-results.xml
# Exclusions — don't scan these
sonar.exclusions=**/node_modules/**,**/__pycache__/**,**/tests/**,**/*.min.js
# Python version
sonar.python.version=3.11

This stage builds Docker images for both the backend and frontend, then pushes them to DockerHub.

Required GitHub Secrets:

  • DOCKERHUB_USERNAME — your DockerHub username
  • DOCKERHUB_TOKEN — DockerHub access token (Settings > Security > New Access Token)
docker-build:
  name: Build & Push Docker Images
  runs-on: ubuntu-latest
  needs: [gitleaks-scan, sonarqube-scan] # only build if scans passed
  # Expose the tag as a JOB OUTPUT so later jobs can read it via
  # needs.docker-build.outputs.image_tag (GITHUB_ENV only persists
  # within a single job, not across jobs)
  outputs:
    image_tag: ${{ steps.tag.outputs.value }}
  steps:
    - name: Checkout code
      uses: actions/checkout@v4
    # Set up Docker Buildx — enables advanced build features, multi-platform builds
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3
    # Login to DockerHub
    - name: Login to DockerHub
      uses: docker/login-action@v3
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    # Generate a unique tag using the short commit SHA
    # This ensures every build has a unique, traceable image tag
    - name: Generate image tag
      id: tag
      run: echo "value=$(echo ${{ github.sha }} | cut -c1-7)" >> $GITHUB_OUTPUT
    # Build and push backend image
    - name: Build and push backend
      uses: docker/build-push-action@v5
      with:
        context: ./backend # folder containing the Dockerfile
        push: true
        tags: |
          ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest
          ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:${{ steps.tag.outputs.value }}
        cache-from: type=gha # use GitHub Actions cache to speed up builds
        cache-to: type=gha,mode=max
    # Build and push frontend image
    - name: Build and push frontend
      uses: docker/build-push-action@v5
      with:
        context: ./frontend
        push: true
        tags: |
          ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest
          ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:${{ steps.tag.outputs.value }}
        cache-from: type=gha
        cache-to: type=gha,mode=max
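The tag step above simply takes the first seven characters of the commit SHA. A quick local sketch of the same derivation (the SHA value here is made up for illustration):

```shell
# Hypothetical full commit SHA, for illustration only
SHA="4f2a9c81d3e5b7a6c0f1e2d3a4b5c6d7e8f90123"

# Same derivation the workflow step performs
TAG="$(echo "$SHA" | cut -c1-7)"
echo "$TAG"   # 4f2a9c8

# Inside a real checkout, git can produce the same thing directly:
# git rev-parse --short=7 HEAD
```

Seven characters is usually enough to be unique within a repository, which is why short SHAs make convenient, traceable image tags.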

Save build-related files so they can be:

  • Downloaded from the GitHub UI for reference
  • Used by the deploy job
upload-artifacts:
  name: Upload Build Artifacts
  runs-on: ubuntu-latest
  needs: docker-build
  steps:
    - name: Checkout code
      uses: actions/checkout@v4
    # Create an artifacts folder with useful build info
    - name: Prepare artifacts
      run: |
        mkdir -p artifacts
        cp -r k8s/ artifacts/k8s/
        echo "Build: ${{ github.run_number }}" > artifacts/build-info.txt
        echo "Commit: ${{ github.sha }}" >> artifacts/build-info.txt
        echo "Branch: ${{ github.ref_name }}" >> artifacts/build-info.txt
        echo "Actor: ${{ github.actor }}" >> artifacts/build-info.txt
        echo "Timestamp: $(date -u)" >> artifacts/build-info.txt
        echo "Backend image: ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest" >> artifacts/build-info.txt
        echo "Frontend image: ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest" >> artifacts/build-info.txt
    - name: Upload artifacts
      uses: actions/upload-artifact@v4
      with:
        name: build-artifacts-${{ github.run_number }}
        path: artifacts/
        retention-days: 30

This final stage:

  1. Installs Minikube on the GitHub runner
  2. Starts a Kubernetes cluster
  3. Updates the k8s manifests to use your DockerHub images
  4. Deploys both backend and frontend
  5. Verifies the deployment
deploy-minikube:
  name: Deploy to Minikube
  runs-on: ubuntu-latest
  needs: upload-artifacts
  steps:
    - name: Checkout code
      uses: actions/checkout@v4
    # Start Minikube on the GitHub runner
    - name: Start Minikube
      uses: medyagh/setup-minikube@master
      with:
        driver: docker
        kubernetes-version: v1.28.0
    # Verify cluster is running
    - name: Verify cluster
      run: |
        kubectl cluster-info
        kubectl get nodes
        kubectl get pods -A
    # Build both images INSIDE Minikube's Docker.
    # This is the same trick from the Node.js lab —
    # eval $(minikube docker-env) redirects docker commands
    # to build inside Minikube so Kubernetes can find the images
    - name: Build images inside Minikube Docker
      run: |
        eval $(minikube -p minikube docker-env)
        docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest ./backend
        docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest ./frontend
        echo "--- Images available inside Minikube: ---"
        docker images
    # Update k8s manifests to use our images.
    # sed replaces the placeholder image name with our actual username
    - name: Update k8s manifests with correct image names
      run: |
        sed -i "s|pavanepam/test-todos-backend:latest|${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest|g" k8s/backend.yaml
        sed -i "s|pavanepam/test-todos-frontend:latest|${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest|g" k8s/frontend.yaml
    # Set imagePullPolicy to Never so Kubernetes uses the locally built images
    # instead of trying to pull from DockerHub (it won't find them there in Minikube)
    - name: Patch imagePullPolicy to Never
      run: |
        sed -i "s|imagePullPolicy: Always|imagePullPolicy: Never|g" k8s/backend.yaml
        sed -i "s|imagePullPolicy: Always|imagePullPolicy: Never|g" k8s/frontend.yaml
    # Deploy both services
    - name: Deploy backend
      run: kubectl apply -f k8s/backend.yaml
    - name: Deploy frontend
      run: kubectl apply -f k8s/frontend.yaml
    # Wait for both deployments to be ready (pods running)
    - name: Wait for backend to be ready
      run: kubectl rollout status deployment/backend-deploy --timeout=120s
    - name: Wait for frontend to be ready
      run: kubectl rollout status deployment/frontend-deploy --timeout=120s
    # Verify everything is running
    - name: Check deployment status
      run: |
        echo "=== Pods ==="
        kubectl get pods
        echo "=== Services ==="
        kubectl get services
        echo "=== Deployments ==="
        kubectl get deployments
    # Show the service URLs
    - name: Show service URLs
      run: |
        echo "=== Service List ==="
        minikube service list
        echo "Backend URL: $(minikube service backend-service --url)"
        echo "Frontend URL: $(minikube service frontend-service --url)"
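The sed substitutions in this stage can be tried locally before trusting them in CI. A minimal sketch — the manifest fragment and the `myuser` username are made up for illustration:

```shell
# Create a throwaway manifest fragment (hypothetical content)
cat > /tmp/demo-backend.yaml <<'EOF'
        image: pavanepam/test-todos-backend:latest
        imagePullPolicy: Always
EOF

# Same substitutions the workflow runs.
# Using "|" as the sed delimiter avoids escaping the "/" in image names.
sed -i "s|pavanepam/test-todos-backend:latest|myuser/todos-backend:latest|g" /tmp/demo-backend.yaml
sed -i "s|imagePullPolicy: Always|imagePullPolicy: Never|g" /tmp/demo-backend.yaml

cat /tmp/demo-backend.yaml
```

After running this, the file should reference myuser/todos-backend:latest with imagePullPolicy: Never.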

Save this as .github/workflows/ci-cd.yaml:

name: Todos App — CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:

env:
  PYTHON_VERSION: '3.11'

jobs:
  # ─── JOB 1: Backend Tests ─────────────────────────────────────────────────
  test-backend:
    name: Backend Tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip'
      - name: Install dependencies
        working-directory: ./backend
        run: pip install -r requirements.txt
      - name: Run pytest with coverage
        working-directory: ./backend
        run: |
          pytest tests/ -v \
            --cov=. \
            --cov-report=xml:coverage.xml \
            --cov-report=term-missing \
            --junitxml=test-results.xml
      - name: Upload test artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: backend-test-results
          path: |
            backend/coverage.xml
            backend/test-results.xml
          retention-days: 14

  # ─── JOB 2: GitLeaks Scan ─────────────────────────────────────────────────
  gitleaks-scan:
    name: GitLeaks Secret Scan
    runs-on: ubuntu-latest
    needs: test-backend
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Run GitLeaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        continue-on-error: true

  # ─── JOB 3: SonarQube Scan ────────────────────────────────────────────────
  sonarqube-scan:
    name: SonarQube Analysis
    runs-on: ubuntu-latest
    needs: test-backend
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Download test artifacts
        uses: actions/download-artifact@v4
        with:
          name: backend-test-results
          path: backend/
      - name: SonarQube Scan
        uses: SonarSource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}

  # ─── JOB 4: Docker Build & Push ───────────────────────────────────────────
  docker-build:
    name: Build & Push Docker Images
    runs-on: ubuntu-latest
    needs: [gitleaks-scan, sonarqube-scan]
    outputs:
      image_tag: ${{ steps.tag.outputs.value }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Generate image tag
        id: tag
        run: echo "value=$(echo ${{ github.sha }} | cut -c1-7)" >> $GITHUB_OUTPUT
      - name: Build and push backend
        uses: docker/build-push-action@v5
        with:
          context: ./backend
          push: true
          tags: |
            ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest
            ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:${{ steps.tag.outputs.value }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Build and push frontend
        uses: docker/build-push-action@v5
        with:
          context: ./frontend
          push: true
          tags: |
            ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest
            ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:${{ steps.tag.outputs.value }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  # ─── JOB 5: Upload Build Artifacts ────────────────────────────────────────
  upload-artifacts:
    name: Upload Build Artifacts
    runs-on: ubuntu-latest
    needs: docker-build
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Prepare artifacts folder
        run: |
          mkdir -p artifacts
          cp -r k8s/ artifacts/k8s/
          echo "Build: ${{ github.run_number }}" > artifacts/build-info.txt
          echo "Commit: ${{ github.sha }}" >> artifacts/build-info.txt
          echo "Branch: ${{ github.ref_name }}" >> artifacts/build-info.txt
          echo "Actor: ${{ github.actor }}" >> artifacts/build-info.txt
          echo "Timestamp: $(date -u)" >> artifacts/build-info.txt
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: build-artifacts-${{ github.run_number }}
          path: artifacts/
          retention-days: 30

  # ─── JOB 6: Deploy to Minikube ────────────────────────────────────────────
  deploy-minikube:
    name: Deploy to Minikube
    runs-on: ubuntu-latest
    needs: upload-artifacts
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Start Minikube
        uses: medyagh/setup-minikube@master
        with:
          driver: docker
          kubernetes-version: v1.28.0
      - name: Verify cluster
        run: |
          kubectl cluster-info
          kubectl get nodes
      - name: Build images inside Minikube Docker
        run: |
          eval $(minikube -p minikube docker-env)
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest ./backend
          docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest ./frontend
          docker images
      - name: Update image names in k8s manifests
        run: |
          sed -i "s|pavanepam/test-todos-backend:latest|${{ secrets.DOCKERHUB_USERNAME }}/todos-backend:latest|g" k8s/backend.yaml
          sed -i "s|pavanepam/test-todos-frontend:latest|${{ secrets.DOCKERHUB_USERNAME }}/todos-frontend:latest|g" k8s/frontend.yaml
          sed -i "s|imagePullPolicy: Always|imagePullPolicy: Never|g" k8s/backend.yaml
          sed -i "s|imagePullPolicy: Always|imagePullPolicy: Never|g" k8s/frontend.yaml
      - name: Deploy to Minikube
        run: |
          kubectl apply -f k8s/backend.yaml
          kubectl apply -f k8s/frontend.yaml
      - name: Wait for deployments to be ready
        run: |
          kubectl rollout status deployment/backend-deploy --timeout=120s
          kubectl rollout status deployment/frontend-deploy --timeout=120s
      - name: Verify deployment
        run: |
          echo "=== Pods ==="
          kubectl get pods
          echo "=== Services ==="
          kubectl get services
          echo "=== Deployments ==="
          kubectl get deployments
      - name: Show service URLs
        run: |
          minikube service list
          echo "Backend: $(minikube service backend-service --url)"
          echo "Frontend: $(minikube service frontend-service --url)"

Use this when you want the pipeline to run on your own machine or server instead of GitHub’s cloud runners. Common reasons:

  • Your SonarQube server is on a private network
  • You need more compute than GitHub provides
  • You want to keep Docker images cached between runs
  • You have GPU or specialized hardware requirements

Repository → Settings → Actions → Runners → New self-hosted runner

Select your OS and follow the instructions shown. They look like this:

# On your VM/machine:
# 1. Create a directory
mkdir actions-runner && cd actions-runner
# 2. Download the runner package (use the URL GitHub shows you)
curl -o actions-runner-linux-x64-2.311.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.311.0/actions-runner-linux-x64-2.311.0.tar.gz
# 3. Extract
tar xzf ./actions-runner-linux-x64-2.311.0.tar.gz
# 4. Configure (GitHub gives you a unique token)
./config.sh --url https://github.com/YOUR_USERNAME/YOUR_REPO --token YOUR_TOKEN
# 5. Start the runner
./run.sh
# OR install as a system service so it auto-starts
sudo ./svc.sh install
sudo ./svc.sh start

During ./config.sh you’ll be asked for labels. Add labels like: self-hosted,linux,sonarqube or self-hosted,linux,docker

jobs:
  my-job:
    runs-on: [self-hosted, linux]       # matches your runner's labels
    # runs-on: [self-hosted, sonarqube] # target a runner with a specific label

Installing prerequisites on your self-hosted runner

# Docker
sudo apt-get update
sudo apt-get install -y docker.io
sudo usermod -aG docker $USER
# kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Java (needed for SonarScanner)
sudo apt-get install -y openjdk-17-jdk

The quickest way to run SonarQube is with Docker on your VM or local machine.

Option 1: Docker (recommended for local/dev)
# Run SonarQube
docker run -d \
--name sonarqube \
-p 9000:9000 \
-v sonarqube_data:/opt/sonarqube/data \
-v sonarqube_extensions:/opt/sonarqube/extensions \
-v sonarqube_logs:/opt/sonarqube/logs \
sonarqube:lts-community
# Access at: http://localhost:9000
# Default login: admin / admin (you'll be asked to change this)

Option 2: Docker Compose (better for persistent setup)

docker-compose.yaml
version: '3'
services:
  sonarqube:
    image: sonarqube:lts-community
    ports:
      - "9000:9000"
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar
      SONAR_JDBC_USERNAME: sonar
      SONAR_JDBC_PASSWORD: sonar
    volumes:
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_logs:/opt/sonarqube/logs
    depends_on: # start the database before SonarQube
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
      POSTGRES_DB: sonar
    volumes:
      - postgresql_data:/var/lib/postgresql/data
volumes:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_logs:
  postgresql_data:
docker-compose up -d

Then, in the SonarQube UI:
  1. Open http://your-server-ip:9000
  2. Login with admin / admin, set a new password
  3. Go to My Account → Security → Generate Tokens
  4. Create a token named github-actions, copy it
  5. Add to GitHub Secrets:
    • SONAR_TOKEN = the token you just copied
    • SONAR_HOST_URL = http://your-server-ip:9000

If SonarQube fails to start (Elasticsearch issue):

# On your host machine, increase virtual memory
sudo sysctl -w vm.max_map_count=524288
sudo sysctl -w fs.file-max=131072
# Make permanent
echo "vm.max_map_count=524288" | sudo tee -a /etc/sysctl.conf
echo "fs.file-max=131072" | sudo tee -a /etc/sysctl.conf

name: Workflow Name
on: [push]
env:
  KEY: value
jobs:
  job-name:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "hello"
| Action | Purpose | Usage |
| --- | --- | --- |
| actions/checkout@v4 | Get your code | uses: actions/checkout@v4 |
| actions/setup-python@v5 | Install Python | with: python-version: '3.11' |
| actions/setup-node@v4 | Install Node.js | with: node-version: '20' |
| actions/upload-artifact@v4 | Save files | with: name: x, path: ./folder |
| actions/download-artifact@v4 | Get saved files | with: name: x |
| docker/login-action@v3 | DockerHub login | with: username, password |
| docker/build-push-action@v5 | Build+push image | with: context, push, tags |
| medyagh/setup-minikube@master | Start Minikube | bare uses: works |
${{ github.sha }} # full commit SHA
${{ github.ref_name }} # branch name
${{ github.actor }} # who triggered
${{ github.run_number }} # incrementing build number
${{ secrets.MY_SECRET }} # encrypted secret
${{ env.MY_VAR }} # environment variable
${{ steps.STEP_ID.outputs.KEY }} # step output
${{ needs.JOB_ID.outputs.KEY }} # job output
if: github.ref == 'refs/heads/main'
if: github.event_name == 'push'
if: failure()
if: always()
if: success()
if: contains(github.ref, 'release')
if: startsWith(github.ref, 'refs/tags/')
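For context, a sketch of how these conditionals sit inside a job — the step names and commands here are illustrative, not from the pipeline above:

```yaml
steps:
  - name: Run tests
    run: pytest
  # Runs only when an earlier step in this job failed
  - name: Notify on failure
    if: failure()
    run: echo "Build broke on ${{ github.ref_name }}"
  # Runs only for tag pushes
  - name: Publish release
    if: startsWith(github.ref, 'refs/tags/')
    run: echo "Releasing ${{ github.ref_name }}"
```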
# From a step to later steps in same job
echo "key=value" >> $GITHUB_OUTPUT
# Persist env var across steps in same job
echo "MY_VAR=hello" >> $GITHUB_ENV
# Add to PATH
echo "/my/tool/bin" >> $GITHUB_PATH
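Under the hood, GITHUB_OUTPUT and GITHUB_ENV are just text files whose paths the runner exposes to each step; after the step finishes, the runner parses their key=value lines. A local simulation of that mechanism (using a temp file, not the real runner file):

```shell
# Simulate the runner's output-file mechanism with a temp file
GITHUB_OUTPUT="$(mktemp)"

# What a step does:
echo "image_tag=4f2a9c8" >> "$GITHUB_OUTPUT"

# What the runner effectively does afterwards: parse key=value lines
while IFS='=' read -r key value; do
  echo "output '$key' -> '$value'"
done < "$GITHUB_OUTPUT"
# prints: output 'image_tag' -> '4f2a9c8'

rm -f "$GITHUB_OUTPUT"
```

This is why the syntax is a plain `echo ... >> $FILE` rather than a dedicated CLI command.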
| Secret Name | Value |
| --- | --- |
| DOCKERHUB_USERNAME | Your DockerHub username |
| DOCKERHUB_TOKEN | DockerHub access token |
| SONAR_TOKEN | Token from SonarQube UI |
| SONAR_HOST_URL | http://your-server-ip:9000 |
test-todos/
├── .github/
│ └── workflows/
│ └── ci-cd.yaml
├── backend/
│ ├── app.py
│ ├── requirements.txt
│ ├── Dockerfile
│ └── tests/
│ ├── __init__.py
│ └── test_api.py
├── frontend/
│ ├── src/
│ ├── Dockerfile
│ └── package.json
├── k8s/
│ ├── backend.yaml
│ └── frontend.yaml
├── sonar-project.properties
└── .gitleaks.toml