Containers in a DevOps Workflow
Connect Docker to your CI pipeline and trace the complete journey from a git push to a verified, containerised application.
The complete pipeline picture
Over the last eight modules you have built the individual components of a DevOps pipeline. This module connects them. The full picture: a git push triggers CI, which runs the tests, builds a Docker image, smoke-tests that image, and pushes the verified image to a registry.
The addition of Docker in the pipeline serves two purposes:
- Environment verification — if the image builds in CI, it will build anywhere. Dockerfile errors are caught immediately.
- Artefact production — the image built in CI is the exact artefact that would be deployed to production. CI is the point where the application is packaged.
Docker in CI
GitHub's hosted runners have Docker pre-installed. You do not need any special setup. Add the following steps after your tests:
```yaml
- name: Build Docker image
  run: docker build -t my-app:${{ github.sha }} .

- name: Run smoke test
  run: |
    docker run --rm \
      -e APP_ENV=test \
      my-app:${{ github.sha }} \
      python3 -c 'import app; print("OK")'
```
`${{ github.sha }}` is a built-in GitHub Actions expression that expands to the commit hash. Using it as the image tag means every build produces a uniquely identified image — you can always trace a running container back to the exact commit that built it.
Tagging images correctly
Image tags are version labels. Good tagging makes it possible to deploy any specific version and to understand which version is running where:
```bash
# By commit SHA — unique per build
docker build -t my-app:a1b2c3d .

# By semantic version — human readable
docker build -t my-app:1.4.2 .

# By branch name — useful for staging
docker build -t my-app:main .

# Multiple tags on the same image
docker build -t my-app:1.4.2 -t my-app:latest .
```
`latest` is a special convention: it refers to the most recently pushed version. Many tools pull `latest` by default. Be careful with it — it changes with every push, making it hard to reproduce a specific deployment. For serious deployments, always use a specific tag.
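A short sketch of how `latest` drifts (the image name and version numbers are illustrative):

```bash
docker build -t my-app:1.4.2 -t my-app:latest .   # latest now points at 1.4.2
# ...next release...
docker build -t my-app:1.4.3 -t my-app:latest .   # latest silently moves to 1.4.3

# Anyone running this gets whichever version was most recently tagged:
docker run my-app:latest
```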
Health checks
A container can be running but not actually serving requests correctly — the process started but is stuck, or the database connection failed. Health checks provide a way to verify the container is genuinely working:
```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY . .
RUN pip install -r requirements.txt

# The slim base image does not ship curl, so install it for the health check
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Run this command to check health
# --interval: how often to check
# --timeout: how long to wait for a response
# --retries: how many failures before marking unhealthy
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

CMD ["uvicorn", "app:app", "--host", "0.0.0.0"]
```
Docker surfaces the health status in `docker ps`:

```
$ docker ps
CONTAINER ID   IMAGE           STATUS
a1b2c3d4e5f6   my-app:latest   Up 2 mins (healthy)

# An unhealthy container
b2c3d4e5f6a7   my-app:latest   Up 5 mins (unhealthy)
```
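The endpoint behind that check can be tiny. A minimal sketch using only the standard library (the `/health` path matches the Dockerfile above; the handler and function names are assumptions, and a real app would serve this route from Flask or FastAPI instead):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_status() -> dict:
    # Extend with real checks here: database ping, cache reachability, etc.
    return {"status": "ok"}

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps(health_status()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for this sketch
        pass

# To serve for real: HTTPServer(("0.0.0.0", 8000), AppHandler).serve_forever()
```

The key design point: the endpoint should return quickly and fail loudly, because Docker marks the container unhealthy after the configured number of failed checks.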
Environment-specific configuration
The same Docker image should run in development, staging, and production. The difference is configuration — database URLs, log levels, API endpoints. Pass these as environment variables, never bake them into the image:
```bash
# Development: connect to local database, verbose logging
docker run \
  -e DATABASE_URL=postgresql://localhost/devdb \
  -e LOG_LEVEL=DEBUG \
  my-app:latest

# Production: connect to production database, minimal logging
docker run \
  -e DATABASE_URL=postgresql://prod-db.example.com/proddb \
  -e LOG_LEVEL=WARNING \
  -e SECRET_KEY=very_long_random_string \
  my-app:latest
```
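On the application side, a minimal sketch of reading this configuration at startup (the variable names match the commands above; the `Config` class, the fail-fast behaviour, and the defaults are assumptions):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    database_url: str
    log_level: str

def load_config() -> Config:
    # DATABASE_URL is required: refuse to start rather than silently
    # falling back to the wrong database.
    url = os.environ.get("DATABASE_URL")
    if not url:
        raise RuntimeError("DATABASE_URL is not set")
    # LOG_LEVEL is optional: default to quiet, production-style logging.
    return Config(
        database_url=url,
        log_level=os.environ.get("LOG_LEVEL", "WARNING"),
    )
```

Failing fast on a missing required variable is the point of this pattern: a container that starts with the wrong database is far worse than one that refuses to start.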
In CI, pass test-specific configuration:
```yaml
- name: Run smoke test
  env:
    DATABASE_URL: sqlite:///:memory:
    APP_ENV: test
  run: docker run --rm -e DATABASE_URL -e APP_ENV my-app:latest
```

Passing `-e DATABASE_URL` without a value forwards the variable from the runner's environment into the container.
Container registries
A container registry is a storage service for Docker images. When CI builds an image, it can push it to a registry. When a server deploys, it pulls from the registry.
| Registry | Use case | Notes |
|---|---|---|
| Docker Hub | Public images, open-source projects | Free for public images. Official images (python:3.11) live here. |
| GitHub Container Registry (GHCR) | Private images linked to a GitHub repo | Free for public repos. Integrates naturally with GitHub Actions. |
| AWS ECR / GCP Artifact Registry | Enterprise/cloud deployments | Paid, but tightly integrated with cloud infrastructure. |
Logging in and pushing from GitHub Actions looks like this:

```yaml
- name: Log in to GitHub Container Registry
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

- name: Build and push
  run: |
    IMAGE=ghcr.io/${{ github.repository }}:${{ github.sha }}
    docker build -t $IMAGE .
    docker push $IMAGE
```
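The `IMAGE` variable above follows the `ghcr.io/<owner>/<repo>:<tag>` pattern. As a sketch, a deploy script could assemble the same reference (the `image_ref` helper is an assumption for illustration, not part of any Docker API):

```python
def image_ref(repository: str, sha: str, registry: str = "ghcr.io") -> str:
    """Assemble a fully qualified image reference, e.g. ghcr.io/owner/repo:sha."""
    # Registries require lowercase repository names, while GitHub account
    # and repo names may contain capitals.
    return f"{registry}/{repository.lower()}:{sha}"
```

The `.lower()` matters in practice: `${{ github.repository }}` preserves the capitalisation of the GitHub account and repo, but a push to a mixed-case image name will be rejected.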
The deployment gap
This module brings the pipeline to a verified, packaged container image. Deploying that image to a real server — pulling it, stopping the old version, starting the new one — is the next step, and it is outside the scope of this module.
This is an intentional boundary. Real deployment involves infrastructure decisions (where are the servers? how many replicas?), orchestration (Kubernetes, AWS ECS), secrets management in production, rolling deployments with zero downtime, and monitoring. These are Year 2 topics.
What this module teaches is the foundation: a pipeline that produces a verified, tagged, registry-stored image. That image is the CI/CD side's output; the deployment infrastructure consumes it.
Common production challenges
These challenges will appear in your exam and in your career. Understanding them at a conceptual level now prepares you for solving them later:
- Image size — images bloat quickly. Use `--no-cache-dir` with pip, use slim base images, and remove package manager caches after installation.

Putting it all together
Here is the complete CI workflow integrating everything from Modules 6, 7, 8, and 9:
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install and test
        run: |
          chmod +x run.sh
          ./run.sh install
          ./run.sh lint
          ./run.sh test

      - name: Build Docker image
        run: docker build -t my-app:${{ github.sha }} .

      - name: Smoke test container
        run: |
          docker run --rm \
            -e APP_ENV=test \
            my-app:${{ github.sha }} \
            python3 -c 'import app; print("Image OK")'
```
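As smoke tests accumulate, the inline `python3 -c` can grow into a small script baked into the image. A hedged sketch (the module name `app` and the specific checks are assumptions; adapt them to your application):

```python
import importlib
import os

def run_smoke_checks(module_name: str = "app") -> list:
    """Return a list of failure messages; an empty list means the image looks healthy."""
    failures = []
    # The CI workflow sets APP_ENV=test; a missing value suggests
    # the environment wiring is broken.
    if os.environ.get("APP_ENV") != "test":
        failures.append("APP_ENV is not set to 'test'")
    # The application module must at least be importable inside the image.
    try:
        importlib.import_module(module_name)
    except ImportError as exc:
        failures.append(f"cannot import {module_name}: {exc}")
    return failures

# CI usage (inside the container), exiting non-zero on any failure:
#   python3 -c 'import smoke, sys; sys.exit(1 if smoke.run_smoke_checks() else 0)'
```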
Exercises
Part A: Docker in the pipeline
- Take your working pipeline from Module 6.
- Write a Dockerfile for the Python application.
- Add a `docker build` step after the tests.
- Add a smoke test step: `docker run --rm my-app:latest python3 -c "import app"`.
- Push and verify all four steps pass: install, test, build, smoke.
Part B: Tagging with commit SHA
- Update the build command to use `${{ github.sha }}` as the tag.
- Push and check what tag appears in the Docker build log.
- Run `docker images` locally — can you see the image with its SHA tag?
Part C: Environment variables
- Add an `APP_ENV` environment variable to the smoke test step.
- Modify your Python code to print the value of `APP_ENV` on startup.
- Check the pipeline log — can you see it printing `test`?
Part D: Health check
- Add a simple `/health` endpoint to your Flask app that returns `{"status": "ok"}`.
- Add a `HEALTHCHECK` to the Dockerfile.
- Run the container detached: `docker run -d -p 5000:5000 my-app:latest`.
- Check health: `docker inspect <container-id> | grep Health`.
- Check health via curl: `curl localhost:5000/health`.