Module 05 · CI/CD · ~1 hour

Continuous Integration and Automated Testing

Understand why CI exists, how automated tests are structured, and how to read a pipeline log efficiently.

Covers: MLO1, MLO2, MLO3, MLO5, MLO7

Integration hell and CI's solution

In 2000, a software team might spend three months developing features independently, then convene for a week-long integration sprint — a period of intense, stressful debugging as all the branches were merged and the inevitable conflicts resolved. This was called integration hell.

The observation that made CI possible was simple: if integration is painful because you do it rarely, do it constantly. If you merge every change immediately and check it automatically, integration never accumulates into a crisis.

Continuous Integration means every developer merges their changes into the shared codebase at least once per day, and every merge automatically triggers a build and test run. Problems are caught within minutes, while the change is still fresh in the developer's mind.

What CI is not

CI is frequently confused with CD (Continuous Delivery or Continuous Deployment). They are related but distinct:

Practice                Definition                                                    Gate
Continuous Integration  Frequently merging and automatically verifying changes        Tests must pass
Continuous Delivery     Every passing change is releasable at any time                Manual approval to release
Continuous Deployment   Every passing change is automatically released to production  Fully automated

In this module we focus on CI. Delivery and deployment are covered in Module 9.


The CI feedback loop

Every CI system, regardless of which tool you use (GitHub Actions, Jenkins, CircleCI, GitLab CI), follows the same loop:

Push / PR → Checkout code → Install deps → Lint → Test → Report
If any step returns a non-zero exit code, the pipeline fails and reports the failure. No code that fails CI can be merged into the protected main branch — this is the quality gate.
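The "non-zero exit code means failure" convention is easy to sketch in Python. The runner below is a hypothetical miniature of a CI loop, not any real tool; the two step commands are stand-ins for a lint and a test run.

```python
import subprocess
import sys

# Stand-in pipeline steps; a real pipeline would run flake8, pytest, etc.
STEPS = [
    [sys.executable, "-c", "print('lint ok')"],
    [sys.executable, "-c", "print('tests ok')"],
]

def run_pipeline(steps):
    """Run each step in order; stop at the first non-zero exit code."""
    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:  # the step failed
            return False            # fail fast: skip the remaining steps
    return True                     # every step exited with code 0

print("PASS" if run_pipeline(STEPS) else "FAIL")
```

Replacing the second step with a command that calls `sys.exit(1)` makes the run print FAIL, which is exactly how a pipeline turns a failing test into a blocked merge.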

The cost of slow feedback

The value of CI is in the speed of the feedback. Consider:

Feedback speed               Cost of fixing the bug
Within 5 minutes (CI)        Minutes — change is fresh, easy to identify
End of day (code review)     Hours — context lost, others may have built on top
Next sprint (QA testing)     Days — may require significant rework
In production (user report)  Very high — user impact, potential data issues, emergency fix required

Anatomy of a test

An automated test is a program that calls your code with specific inputs and verifies the outputs match expectations. The Arrange-Act-Assert pattern gives every test a consistent structure:

python — arrange, act, assert
def test_calculate_discount():
    # ARRANGE: set up inputs and expected values
    price = 100.0
    discount_rate = 0.20
    expected = 80.0

    # ACT: call the code under test
    result = calculate_discount(price, discount_rate)

    # ASSERT: verify the result
    assert result == expected

The assert statement raises an AssertionError if the condition is false. pytest catches this and marks the test as failed.
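The test assumes a calculate_discount function that the section does not show; the following body is a sketch that would make the assertion pass (a real version would likely also validate that the rate falls between 0 and 1).

```python
def calculate_discount(price, discount_rate):
    """Return the price after applying a fractional discount rate.

    Sketch implementation: 100.0 at a 0.20 rate yields 80.0, so the
    Arrange-Act-Assert example above passes against it.
    """
    return price * (1 - discount_rate)
```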

Testing edge cases

Testing only the happy path (expected input, expected output) is insufficient. Good tests also cover:

  • Boundary values — what happens with the smallest or largest valid input?
  • Empty input — what happens with an empty list, empty string, or zero?
  • Invalid input — does the function handle wrong types or values gracefully?
  • Error conditions — does the function raise the right exception when it should?
python — testing edge cases and exceptions
import pytest

def test_divide_by_zero():
    # Verify the function raises ValueError for division by zero
    with pytest.raises(ValueError):
        divide(10, 0)

def test_empty_list_returns_none():
    result = find_max([])
    assert result is None

def test_single_element_list():
    result = find_max([42])
    assert result == 42
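The edge-case tests call divide and find_max, which are not shown in the section; these sketches would satisfy all three tests.

```python
def divide(a, b):
    """Divide a by b, raising ValueError (not ZeroDivisionError) on b == 0.

    Sketch of the function the exception test assumes.
    """
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b

def find_max(items):
    """Return the largest element of items, or None for an empty list."""
    if not items:       # the empty-input edge case
        return None
    return max(items)
```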

The testing pyramid

Not all tests are equal. The testing pyramid describes three layers of automated tests, with different trade-offs of speed, scope, and cost.

plain — testing pyramid
           /\
          /  \
         / E2E\  ← few, slow, expensive
        /------\
       /Integr. \  ← some, medium
      /----------\
     /    Unit    \  ← many, fast, cheap
    /______________\

Layer        Tests                                          Speed         Example
Unit         Individual functions or classes in isolation   Milliseconds  test that calculate_discount(100, 0.2) returns 80.0
Integration  Multiple components working together           Seconds       test that a POST /login request creates a session
End-to-end   The entire system from the user's perspective  Minutes       test that a user can sign up, log in, and place an order

For this module, we focus on unit tests. They run in milliseconds, require no infrastructure, and give precise feedback on exactly which function is broken.


pytest in practice

pytest is Python's most popular testing library. It discovers tests automatically: any file matching test_*.py or *_test.py, and any function or method starting with test_.

bash — running tests
# Run all tests
pytest

# Run tests in a specific file
pytest tests/test_auth.py

# Run a specific test by name
pytest tests/test_auth.py::test_login_success

# Verbose output — show each test name
pytest -v

# Stop after first failure
pytest -x

# Show local variables on failure
pytest -l

# Run tests matching a keyword
pytest -k "login or logout"
plain — pytest output (passing)
========================= test session starts ==========================
collected 5 items

tests/test_auth.py::test_login_success PASSED              [ 20%]
tests/test_auth.py::test_login_wrong_password PASSED       [ 40%]
tests/test_auth.py::test_login_empty_username PASSED       [ 60%]
tests/test_auth.py::test_logout PASSED                     [ 80%]
tests/test_auth.py::test_session_expires PASSED            [100%]

========================== 5 passed in 0.23s ==========================
plain — pytest output (failing)
FAILED tests/test_auth.py::test_login_wrong_password

========================= FAILURES =================================
_____________________ test_login_wrong_password ____________________

    def test_login_wrong_password():
        result = login('alice', 'wrong')
>       assert result == False
E       AssertionError: assert True == False
E       (login returned True but should have returned False)

tests/test_auth.py:24: AssertionError
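When several tests differ only in their input values, pytest's @pytest.mark.parametrize decorator collapses them into one function that pytest collects as multiple test cases. A sketch, reusing the discount example; the calculate_discount stub is included only to keep the block self-contained.

```python
import pytest

def calculate_discount(price, discount_rate):
    # Stub so the example runs on its own
    return price * (1 - discount_rate)

# One function, three collected tests: pytest reports each tuple
# as a separate pass or fail.
@pytest.mark.parametrize("price, rate, expected", [
    (100.0, 0.20, 80.0),
    (100.0, 0.0, 100.0),
    (50.0, 0.5, 25.0),
])
def test_calculate_discount(price, rate, expected):
    assert calculate_discount(price, rate) == expected
```

Running `pytest -v` on this file would list three test IDs, one per parameter tuple.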

Fixtures and test setup

Many tests need the same setup — a database connection, a configured application, a sample user. Copying this setup into every test function is repetitive. pytest fixtures solve this:

python — pytest fixtures
import pytest

# Define a fixture
@pytest.fixture
def sample_user():
    return {
        'username': 'alice',
        'email': 'alice@example.com',
    }

# Use the fixture as a parameter
def test_create_greeting(sample_user):
    result = create_greeting(sample_user)
    assert 'alice' in result

def test_user_has_email(sample_user):
    assert '@' in sample_user['email']
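The first fixture test calls create_greeting, which the section does not define; a sketch that would satisfy its assertion:

```python
def create_greeting(user):
    """Build a greeting from a user dict with a 'username' key.

    Hypothetical implementation, assumed by the fixture example above.
    """
    return f"Hello, {user['username']}!"
```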

Reading CI output like a pro

A CI log can be hundreds of lines. Finding the failure quickly is a critical skill. Follow this process:

  1. Find the step that failed — pipeline tools highlight failed steps in red. Click it to expand.
  2. Scroll to the first error — not the last. The first error often causes the subsequent errors.
  3. Identify the type of failure — is it a syntax error? An import error? A test assertion failure? An environment error?
  4. Note the file and line number — always shown for Python errors.
  5. Reproduce locally — run the exact same command the pipeline ran.

Common failure patterns

ModuleNotFoundError
A package is missing. Fix: add it to requirements.txt and push.
pytest: command not found
pytest not installed in CI environment. Fix: add pip install pytest to the pipeline install step.
PermissionError on script
The shell script is not executable. Fix: chmod +x script.sh and commit.
YAML parse error
Indentation error in the workflow file. YAML is sensitive to spaces vs tabs.
AssertionError
A test failed. Read the output — it shows what was expected vs what was returned.
Connection refused
A test is trying to connect to a database or service that is not running in CI. Unit tests should not need network connections.

Quality gates

A quality gate is a rule the pipeline enforces: code that fails the check cannot be merged. Common quality gates include:

  • All tests must pass
  • Code style must conform (e.g. flake8, black --check)
  • No known security vulnerabilities in dependencies
  • Test coverage does not drop below a threshold (e.g. 80%)
  • No secrets or credentials in the code

Quality gates are enforced through GitHub branch protection rules: a protected branch can be configured to require all CI checks to pass before a pull request can be merged.
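A coverage-threshold gate reduces to a single comparison. The helper below is hypothetical (real pipelines use tools such as coverage.py or pytest-cov for this); it only illustrates the shape of the check and how its result becomes an exit code.

```python
def coverage_gate(covered_lines, total_lines, threshold=80.0):
    """Return True when coverage meets the threshold percentage."""
    percent = 100.0 * covered_lines / total_lines
    return percent >= threshold

# A pipeline step would turn the gate into an exit code, e.g.:
#     sys.exit(0 if coverage_gate(covered, total) else 1)
```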


Key terms

CI
Continuous Integration — frequently merging code and automatically verifying each change.
quality gate
A check that must pass before code can merge into a protected branch.
unit test
A test of an isolated function or class, with no external dependencies.
test fixture
Shared setup code that multiple tests can use, defined with @pytest.fixture.
assert
A statement that raises AssertionError if a condition is false.
pytest
Python's most popular testing library and test runner.
testing pyramid
Unit (many, fast) → Integration (some) → E2E (few, slow).
feedback loop
The time between making a change and knowing whether it broke something.

Test preparation

◆ In-class test — Week 5

The in-class test this week covers everything from Modules 1 through 4. It is one hour, closed-book, and counts for 10% of the module mark.

Revision checklist

  • Module 1: CALMS framework, DORA four key metrics, basic navigation (pwd, ls, cd), file permissions, chmod, environment variables
  • Module 2: grep flags (-i, -r, -n, -v), pipe operator, redirection (> vs >>), shell script structure, set -e
  • Module 3: Git three areas (working/staging/repo), git init, git add, git commit, git log, git diff, .gitignore
  • Module 4: Creating branches, merging, fast-forward vs three-way merge, git push, git pull, pull requests, conflict markers

Sample question types

Write the Git command that creates a branch called feature/search and immediately switches to it.
Answer: git switch -c feature/search

A developer runs git diff. There is no output. They then run git diff --staged and see changes. Explain the state the repository is in.
Answer: The changes have been staged (git add has been run) but not yet committed. git diff compares the working directory to the staging area; since the two match, it shows nothing. git diff --staged compares the staging area to the last commit, where the changes are visible.

What does set -e at the top of a shell script do?
Answer: It makes the script exit immediately if any command returns a non-zero exit code. Without it, the script would continue even if a step fails, potentially causing misleading or dangerous behaviour.

Describe the CALMS framework in your own words.
Answer: CALMS stands for Culture (shared responsibility and blameless post-mortems), Automation (replace repetitive manual steps with scripts and machines), Lean (eliminate waste, prefer small frequent changes), Measurement (track delivery metrics to identify improvements), and Sharing (knowledge, tools, and post-mortems cross team boundaries).