DevOps in Practice and Module Review
Trace the complete pipeline end to end, explore what comes next in the field, and prepare thoroughly for your exam.
Full pipeline walkthrough
Let us trace one complete scenario in full detail — this is exactly the kind of question that appears in the exam.
A developer named Alice is working on a bug fix. The application's login function accepts any password without checking it. She fixes the bug, writes a test, and pushes to GitHub. Walk through every step that follows.
- Alice fixes the bug and writes a test:

```python
# test and fix
def test_wrong_password_rejected():
    result = login("alice", "wrongpassword")
    assert result == False
```

- Commit and push:

```bash
git add .
git commit -m "Fix login always returning True"
git push origin fix/login-always-true
```

- Pipeline triggers: GitHub detects the push. The workflow file in `.github/workflows/ci.yml` starts. A fresh Ubuntu VM is provisioned.
- Checkout and setup: `actions/checkout@v4` clones the repository; `actions/setup-python@v5` installs Python 3.11.
- Install: `./run.sh install` runs `pip install -r requirements.txt`. The pip cache is used — this step takes 3 seconds instead of 30.
- Lint: `./run.sh lint` runs flake8. Alice's code is style-clean. The step exits 0.
- Test: `./run.sh test` runs pytest. All 15 tests pass, including Alice's new test. The step exits 0.
- Docker build: `docker build -t app:${{ github.sha }} .` builds the image. The pip install layer is cached (requirements.txt unchanged). Only the COPY layer is rebuilt.
- Smoke test: `docker run --rm -e APP_ENV=test app:${{ github.sha }} python3 -c "import app"` imports the application, confirming the container starts correctly.
- Pipeline passes: all steps exited 0. GitHub marks Alice's commit with a green tick. The PR can now be reviewed.
- Pull request review: Alice opens a PR. Bob reviews the code and approves it. The pipeline result appears as a green check on the PR.
- Merge: Alice merges the PR. The fix is now in main. The pipeline runs again on main and passes.
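The pipeline Alice's push triggers could be expressed in a workflow file along these lines. This is an illustrative sketch, not the module's exact file: the job name, Python version, and `run.sh` task names are assumptions based on the steps described above.

```yaml
# .github/workflows/ci.yml — illustrative sketch
name: CI
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
          cache: pip                      # reuse the pip cache between runs
      - run: ./run.sh install             # pip install -r requirements.txt
      - run: ./run.sh lint                # flake8
      - run: ./run.sh test                # pytest
      - run: docker build -t app:${{ github.sha }} .
      - run: docker run --rm -e APP_ENV=test app:${{ github.sha }} python3 -c "import app"
```

Every `run:` step must exit 0 for the commit to get its green tick; the first non-zero exit stops the job.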
DevOps culture at work
The technology is half the story. Let us look at the cultural habits that make DevOps succeed — and the patterns that make it fail.
What good looks like
- The team owns the whole delivery — Alice did not write a bug report and hand it to QA. She fixed it, tested it, and verified it herself.
- Small changes, tested frequently — the fix was one commit. The pipeline ran in under three minutes. The feedback was immediate.
- Automation replaced manual steps — nobody manually deployed. Nobody manually ran tests. Nobody manually built the image.
- Bob’s review was about logic, not style — flake8 caught style issues automatically. Bob spent his review time on the actual logic of the fix.
What failure looks like
- Large, infrequent merges — a team that merges once a week will have more conflicts, harder debugging, and slower feedback.
- CI bypass — if developers can merge without passing CI (or if CI is broken and ignored), the quality gate is meaningless.
- Fear of deploying — if the team is afraid to deploy because 'last time it broke something', the underlying problem is lack of automation and testing, not deployment itself.
- Blame culture — when something breaks in production, the team should ask 'how did this pass CI?' not 'who broke it?'. The answer is usually a gap in test coverage.
Deployment models
This module stopped before deployment. Here is a conceptual overview of what happens after the CI pipeline produces a verified image:
| Model | Description | Used when |
|---|---|---|
| Manual deploy | A human pulls the image and restarts the container on the target server | Simple projects, single servers |
| Continuous Delivery | The pipeline prepares a release; a human clicks to deploy | Most production systems |
| Continuous Deployment | Every pipeline that passes is automatically deployed | High-frequency teams (Flickr, Amazon) |
| Blue-green | Two identical environments; switch traffic to the new one; keep old one as rollback | Zero-downtime deployments |
| Canary | Deploy to 5% of users first; watch metrics; roll out further if healthy | Risk-reduction at scale |
Observability basics
Once code is in production, you need to know whether it is working. Observability is the ability to understand the internal state of a system from its external outputs. It has three pillars: logs (timestamped records of discrete events), metrics (numeric measurements aggregated over time, such as request rate or error count), and traces (records of a single request's path through the components of a system).
For this module, you should understand what observability means conceptually. You will build observability into production systems in Year 2 modules.
The DevOps landscape
DevOps is a field, not a tool. Here is a map of the tooling you will encounter as you progress:
| Category | Tools | You learned |
|---|---|---|
| Version control | Git, GitHub, GitLab, Bitbucket | Git, GitHub |
| CI/CD | GitHub Actions, Jenkins, CircleCI, GitLab CI, Tekton | GitHub Actions |
| Containerisation | Docker, Podman, containerd | Docker |
| Orchestration | Kubernetes, AWS ECS, Docker Swarm | Concepts only |
| Infrastructure as Code | Terraform, Pulumi, Ansible, CloudFormation | Concepts only |
| Monitoring/Observability | Prometheus, Grafana, Datadog, Splunk | Concepts only |
| Secret management | HashiCorp Vault, AWS Secrets Manager, GitHub Secrets | GitHub Secrets |
What you have built
Over ten modules you have constructed a complete DevOps foundation. Here is the inventory:
- A Unix command-line toolkit: navigation, text processing, pipes, redirection
- Automation scripts with structured tasks, argument parsing, and error handling
- A Git repository with meaningful history, branches, and pull request workflow
- An automated test suite using pytest with fixtures and edge case coverage
- A CI pipeline in GitHub Actions with lint, test, and Docker build stages
- A Docker image with optimised layer caching and runtime configuration
- A smoke test verifying the container runs correctly in CI
This stack — Git + CI + Docker + Tests + Scripts — is the foundation of every production DevOps pipeline in the industry, from startups to large enterprises. The tools may differ; the pattern is universal.
Exam preparation guide
The exam is two hours, invigilated, and covers all seven MLOs. It is worth 70% of the module mark. Here is a systematic revision strategy:
MLO-by-MLO revision
- Shell: grep with flags, pipe chains, redirection to a file, a shell script with `set -euo pipefail`, a function, an if statement, a for loop.
- Git: `init`, `add`, `commit`, `log`, `diff`, `branch`, `switch`, `merge`, `push`, `pull`, `stash`, `restore`. Know the three areas (working directory, staging area, repository).
- CI: `run:` vs `uses:`, secrets, caching.

Worked exam answers
Describe the purpose of git stash and give a realistic scenario where you would use it.
git stash saves the current state of the working directory and staging area without creating a commit. The saved changes can be restored later with git stash pop.
Scenario: you are halfway through implementing a new feature on a branch when an urgent bug report arrives. You cannot commit the half-finished feature. Run git stash, switch to the main branch, create a hotfix branch, fix and commit the bug, merge and push. Then return to your feature branch and run git stash pop to restore your work. The stash bridges the gap between two contexts without polluting the history.
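Alice's stash scenario can be replayed end to end in a throwaway repository. This is a sketch: the file name, branch names, and identity settings are hypothetical, and the hotfix itself is elided.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo to demonstrate the stash workflow (names are hypothetical)
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email "alice@example.com"
git config user.name  "Alice"

echo "def login(): pass" > app.py
git add app.py && git commit -qm "initial commit"

git switch -qc feature/new-thing
echo "# half-finished feature" >> app.py   # uncommitted work in progress

git stash                  # shelve the half-finished change
git switch -q main         # working tree is clean: safe to hotfix here
# ... create hotfix branch, fix, commit, merge ...
git switch -q feature/new-thing
git stash pop              # restore the work exactly as it was

grep "half-finished" app.py   # the change is back in the working tree
```

Note that the stash is a stack: you can stash several times and pop them in reverse order, though keeping it to one entry at a time is far easier to reason about.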
A GitHub Actions pipeline fails with the error: ModuleNotFoundError: No module named 'flask'. Describe the most likely cause and the fix.
The most likely cause is that Flask has not been added to requirements.txt (or that requirements.txt is not installed in the pipeline). The application runs locally because Flask is installed in the developer's local environment, but the CI runner starts from a clean state and only installs what is declared.
Fix: (1) ensure Flask is in requirements.txt; (2) ensure the pipeline has a step that runs pip install -r requirements.txt before the failing step. Push the updated files and the pipeline will pass.
Explain the relationship between Docker images and layers. Why does the order of instructions in a Dockerfile matter?
A Docker image is a stack of read-only layers. Each instruction in the Dockerfile creates one layer. When rebuilding, Docker checks each layer in order: if the layer's inputs have not changed, the cached version is reused; if they have changed, that layer and all subsequent layers are rebuilt.
Order matters because of this caching: place instructions that change rarely (base image, system packages) before instructions that change often (application code). The canonical pattern is: copy requirements.txt, run pip install, then copy the rest of the code. This way, the expensive package installation is only re-run when requirements.txt changes, not on every code change.
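The canonical pattern described above looks like this in a Dockerfile. A sketch: the base image, working directory, and entry command are assumptions, not the module's exact file.

```dockerfile
FROM python:3.11-slim            # base layer: changes rarely

WORKDIR /app

# Copy only the dependency list first, so the expensive install layer
# stays cached until requirements.txt itself changes.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code changes often — keep it in the last layers,
# so an edit invalidates only the cheap final COPY.
COPY . .

CMD ["python3", "app.py"]
```

Reversing the two COPY instructions would force a full `pip install` on every code change, because the changed code layer invalidates everything after it.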
What is a merge conflict and how do you resolve one?
A merge conflict occurs when two branches have made changes to the same part of the same file, and Git cannot automatically determine which version to keep.
Resolution process: (1) run git merge — if a conflict exists, Git pauses and marks the conflicted file; (2) open the file — Git inserts markers: <<<<<<< HEAD marks the current branch's version, ======= divides the two versions, and >>>>>>> marks the incoming branch; (3) edit the file to the correct final state — this may mean choosing one side, combining both, or writing something new; (4) delete all three marker lines; (5) run git add on the resolved file; (6) run git commit to complete the merge. Git pre-fills a merge commit message.
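A conflicted file looks like this before you edit it. An illustrative example — the function and branch name are invented:

```text
def greet(name):
<<<<<<< HEAD
    return f"Hello, {name}!"
=======
    return "Hi, " + name
>>>>>>> feature/casual-greeting
```

After resolution, only the final version of the function body remains and the marker lines are gone.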
What does set -euo pipefail do in a shell script, and why is it considered good practice?
set -e: the script exits immediately if any command returns a non-zero exit code. Without it, the script continues after failures, potentially running subsequent steps in a broken state.
set -u: the script treats any reference to an undefined variable as an error. This catches typos in variable names that would otherwise silently expand to empty strings.
set -o pipefail: normally a pipeline (cmd1 | cmd2) returns the exit code of the last command, masking earlier failures. With pipefail, the pipeline returns a non-zero exit code if any command in it fails.
Together, these three options make scripts fail loudly and immediately rather than silently continuing in a broken state. This is essential when scripts run in CI, where silent failures produce misleading results.
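The effect of pipefail in particular is easy to demonstrate. A minimal sketch:

```shell
#!/usr/bin/env bash

# Without pipefail: a pipeline reports the LAST command's exit code,
# so the failure of 'false' is silently masked by 'true'.
false | true
echo "without pipefail: $?"    # prints 0

# With pipefail: any failing command fails the whole pipeline.
set -o pipefail
false | true || echo "with pipefail: $?"    # prints 1
```

The same masking happens in realistic pipelines such as `generate_report | tee report.txt` — without pipefail, a crash in the generator still exits 0 because `tee` succeeded.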
Key terms reference
A complete glossary covering all ten modules:
Exam day checklist
- Arrive early — the exam starts promptly
- Read every question fully before writing anything
- For command questions: write exact syntax, flags included
- For explanation questions: two or three clear sentences are sufficient
- For scenario questions: identify the problem first, then describe the fix
- If stuck on a question, move on and return — do not lose time
- Check for marks — a 4-mark question expects four distinct points
- Re-read your answers if time allows