Jenkins Architecture
Jenkins & Jargons
To understand Jenkins, you must separate the management from the execution.
- The Controller (formerly Master): The brain. It serves the web UI, stores configurations, listens for webhooks, and manages the queue. It should never run the heavy lifting.
- Node/Worker: The physical or virtual machine (EC2 instance, Kubernetes pod, Raspberry Pi) that provides CPU and RAM.
- Agent (formerly Slave): The small piece of Jenkins software running on the Node. It listens to the Controller and executes the commands it receives.
- Job (or Project): A defined set of instructions (like your Pipeline) that tells Jenkins what to do.
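The jargon above maps onto a minimal Declarative Pipeline. This is a sketch: the 'linux' label is an assumption, standing in for whatever labels your Nodes actually carry.

```groovy
// The Job: a Pipeline script stored by the Controller.
// The Controller queues it; an Agent on a matching Node executes the steps.
pipeline {
    agent { label 'linux' }   // ask for any Node tagged 'linux' (assumed label)
    stages {
        stage('Hello') {
            steps {
                sh 'echo "Running on an agent, not on the controller"'
            }
        }
    }
}
```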
How Scheduling Works
- Trigger: An event occurs (GitHub webhook, timed schedule, manual click).
- Queue: The Controller places the Job into an execution queue.
- Matching: The Controller looks at the Job's agent requirements (e.g., does it need Linux? Docker? Python?).
- Dispatch: It finds an idle Agent that matches the criteria, sends the script to that Agent, and monitors the output in real time.
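The four steps above can be sketched in a single Jenkinsfile; the cron schedule and the 'docker' label here are illustrative assumptions:

```groovy
pipeline {
    // Matching: only Nodes tagged 'docker' qualify for dispatch
    agent { label 'docker' }

    // Trigger: a timed schedule ('H' hashes the minute to spread load)
    triggers {
        cron('H 2 * * *')
    }

    stages {
        stage('Build') {
            steps {
                // Dispatch: runs on whichever matching Agent was idle first
                sh 'make build'
            }
        }
    }
}
```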
Jenkins Agents
The agent directive is the foundational routing mechanism in a Jenkins Declarative Pipeline. It answers one simple question: "Where and how should this code execute?"
You can define an agent globally (at the top of the pipeline block) to dictate the environment for the entire run, or locally (inside a stage block) to use different environments for different tasks.
1. agent any
The Logic: "I don't care about the operating system or installed tools. Just find the first available executor in the Jenkins cluster and run this."
Real-World Use Case: Simple administrative tasks, infrastructure cleanup scripts, or sending notifications where standard shell commands are sufficient.
```groovy
pipeline {
    // Will run on the Master, Slave A, Slave B... whichever is free first
    agent any

    stages {
        stage('Nightly Cleanup') {
            steps {
                echo "Cleaning up temporary files on the host..."
                sh 'find /tmp -type f -mtime +7 -delete'
            }
        }
    }
}
```
2. agent none
- The Logic: "Do NOT allocate a global workspace or machine."
- This forces every individual stage to explicitly declare its own execution environment.
- Real-World Use Case: Complex, multi-tier applications. This is exactly what we used for your React/Flask app. It prevents a Node.js build from accidentally running on a Python-only server.
```groovy
pipeline {
    // No global machine assigned
    agent none

    stages {
        stage('Compile Java') {
            // This stage asks for a specific machine
            agent { label 'ubuntu-java-21' }
            steps {
                sh 'mvn clean install'
            }
        }
        stage('Build UI') {
            // This stage asks for a Docker container
            agent { docker { image 'node:18' } }
            steps {
                sh 'npm run build'
            }
        }
    }
}
```
3. agent { label '...' } (The Specialist)
- The Logic: "This job requires highly specific hardware or a specific operating system. Only run on a node tagged with this exact label."
- Real-World Use Case: Building an iOS application. Apple requires iOS apps to be compiled on macOS using Xcode. You cannot build an iOS app inside a standard Linux Docker container.
```groovy
pipeline {
    // Tells Jenkins to route this ONLY to a Mac Mini or Mac Pro in the cluster
    agent { label 'macos-xcode-15' }

    stages {
        stage('Build iOS App') {
            steps {
                // This command only works on macOS
                sh 'xcodebuild -workspace MyApp.xcworkspace -scheme MyApp clean build'
            }
        }
    }
}
```
4. agent { docker { ... } } (The Clean Room)
The Logic: "Download this exact Docker image, start it as a container, mount the codebase inside it, run the steps, and then destroy the container."
Real-World Use Case: Ensuring perfect consistency. If a developer uses Node 20 on their laptop, Jenkins uses the exact same node:20 environment. It prevents the classic “It works on my machine” problem.
```groovy
pipeline {
    // The entire pipeline runs inside this temporary container
    agent {
        docker {
            image 'golang:1.21-alpine'
            args '-u root -v /tmp/cache:/go/pkg'
        }
    }

    stages {
        stage('Compile Go Binary') {
            steps {
                sh 'go build -o myapp main.go'
            }
        }
    }
}
```
5. agent { dockerfile true } (The Custom Clean Room)
- The Logic: "A standard Docker image isn't enough. Look inside my Git repository for a Dockerfile, build it into a brand-new image right now, and then run the pipeline inside that custom image."
- Real-World Use Case: Legacy C++ projects or Data Science pipelines that require a very specific combination of OS packages, proprietary compilers, and environment variables that don't exist on public DockerHub images.
```groovy
pipeline {
    // Jenkins will run 'docker build .' before it starts the first stage
    agent { dockerfile true }

    stages {
        stage('Run Tests') {
            steps {
                // This executes inside the custom environment just built from your code
                sh './run_complex_simulation.sh'
            }
        }
    }
}
```
Bonus: agent { kubernetes { ... } } (The Cloud Native)
If you install the Kubernetes plugin, Jenkins can dynamically ask your K8s cluster to spin up a Pod just to run a job, and then delete it. This is how massive enterprises achieve infinite build scaling without paying for idle EC2 servers.
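A minimal sketch, assuming the Kubernetes plugin is installed and a cloud is configured in Jenkins; the Pod spec and image below are illustrative:

```groovy
pipeline {
    agent {
        kubernetes {
            // Jenkins asks the cluster for this Pod, runs the job in it,
            // and deletes the Pod when the build finishes
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.9-eclipse-temurin-21
    command: ['sleep']
    args: ['infinity']
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn -B package'
                }
            }
        }
    }
}
```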
Brainstorming…
When you run Jenkins itself inside a Docker container, how does a container spin up other containers as agents?
There are two ways to solve this. You mentioned DinD (Docker-in-Docker), which exists, but the industry standard is actually DooD (Docker-out-of-Docker, or Socket Binding). Here is the pure logic behind both.
1. The Bad Way: DinD (Docker-in-Docker)
The Logic: You install a complete, fully functioning Docker daemon inside the Jenkins container.
- How it works: When Jenkins runs a pipeline, it spins up Agent containers inside its own container. It is a nested inception.
- Why it is bad: To run a Docker daemon inside a container, you must run the container with the --privileged flag. This gives the container root access to your entire host machine's kernel. It is a massive security vulnerability and can cause data corruption with Linux file systems.
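For contrast only, here is roughly what the DinD approach looks like on the host (container name is illustrative); note the --privileged flag that makes it dangerous:

```sh
# NOT recommended: the official dind image will not start its daemon
# without --privileged, which exposes the host kernel to the container.
docker run -d --name dind --privileged docker:dind
```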
2. The Standard Way: DooD (Docker-out-of-Docker)
You do not run Docker inside Jenkins. Instead, you give the Jenkins container a "telephone wire" to talk to the host machine's Docker daemon.
- How it works: In Linux, Docker communicates via a UNIX socket file located at /var/run/docker.sock. When you start the Jenkins container, you mount this file from your host EC2 instance into the container.
- The Result: When Jenkins says "spin up a Python agent," the command travels through the socket to the host EC2 instance, which spins up the Python container.
- The Architecture: The Jenkins container and the Python Agent container run side-by-side as siblings on the host machine. The Agent is not inside Jenkins.
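You can verify the sibling architecture yourself, assuming the Jenkins container is named jenkins and has the Docker CLI installed (both are assumptions):

```sh
# The CLI inside the Jenkins container talks to the HOST daemon through the
# mounted socket, so this lists Jenkins itself and any agent containers as
# siblings -- nothing is nested inside Jenkins.
docker exec jenkins docker ps
```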
The Implementation (DooD)
To achieve this, you cannot just run docker run jenkins/jenkins:lts. You must mount the socket and ensure the Jenkins container has the Docker CLI installed to send the commands.
Here is the exact command to run Jenkins as a container that can spin up other containers, assuming you have already created a volume for Jenkins configs and data with docker volume create jenkins_vol:
```sh
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -u root \
  -v jenkins_vol:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
```
Why this command works:
- -v /var/run/docker.sock:/var/run/docker.sock: This is the magic. It binds the host's Docker socket to the container's Docker socket.
- -u root: The container needs root privileges internally to access the socket file (which is owned by root on the host), but it is NOT run with --privileged, keeping your host kernel safe.
- -v jenkins_vol:/var/jenkins_home: Ensures that if you kill the Jenkins container, you don't lose all your pipelines and plugins.
When Jenkins executes agent { docker { image 'node' } } inside this setup, the host Docker engine hears the command and spins up the Node container right next to the Jenkins container.