How to Speed Up Your CI Builds: A Practical Guide

Slow CI Builds Cost More Than You Think

A slow CI build does not just waste compute minutes. It breaks the feedback loop that makes continuous integration valuable in the first place. When builds take 20 minutes or longer, developers push multiple changes before seeing results, batch unrelated work into single commits, and lose context while waiting. The compounding effect is significant: slower builds lead to larger changesets, which lead to harder-to-diagnose failures, which lead to even slower builds.

Martin Fowler’s original continuous integration guidance emphasises that the build should be fast enough that developers get feedback before they context-switch to another task. In practice, that means under 10 minutes for the full pipeline and under 5 minutes for the critical path.

This article covers the specific techniques that make CI builds faster: dependency caching, build caching, parallelisation, incremental builds, and Docker layer optimisation. If you are looking for broader guidance on pipeline architecture, our article on how to build a CI/CD pipeline that actually works covers that ground.

Measure Before You Optimise

Before changing anything, establish your baseline. You cannot improve what you do not measure.

Record these numbers for your current pipeline:

Metric | What to Measure | Why It Matters
Total pipeline duration | End-to-end wall clock time | The number developers actually feel
Longest stage | The single slowest step | Your bottleneck and first optimisation target
Queue time | Time waiting for a runner | Indicates infrastructure constraints
Cache hit rate | Percentage of builds using cached artefacts | Shows whether your caching strategy is working
Flaky failure rate | Percentage of failures not caused by code changes | Erodes trust and wastes reruns

Most CI platforms provide built-in analytics. GitHub Actions shows workflow run durations in the Actions tab. GitLab CI has pipeline analytics under CI/CD > Analytics. CircleCI provides insights dashboards. Use these to identify your slowest stages.

Create a Build Time Budget

Once you have your baseline, set a target. A useful framework is a build time budget that allocates minutes to each stage:

[Chart: CI build time before vs after optimisation. Before (25 min): install 5 min, build 7 min, lint 2 min, serial tests 9 min, package 2 min. After (7 min): install 1 min, build 3 min, lint 30 s, parallel tests 2 min, package 30 s. Key savings: cached dependencies (4 min), incremental build (4 min), parallel tests (7 min), cached lint (1.5 min).]

The goal is not perfection. It is steady, measurable improvement. A 25-minute pipeline cut to 7 minutes transforms the developer experience.

Dependency Caching

Dependency installation is often the easiest stage to optimise and the one with the biggest payoff. Without caching, every CI run downloads the same packages from the internet, which is slow and wastes bandwidth.

How Dependency Caching Works

The principle is straightforward:

  1. After installing dependencies, save the package manager’s cache directory to a shared store
  2. On the next run, restore the cache before running the install command
  3. Use a hash of your lockfile as the cache key so the cache invalidates when dependencies change

Every major CI platform supports this natively.

Cache Key Strategy

A good cache key changes when your dependencies change and stays stable otherwise. The standard pattern is:

cache-key: deps-{os}-{lockfile-hash}

For a Node.js project on GitHub Actions, that looks like:

- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      npm-${{ runner.os }}-

The restore-keys fallback is important. If the exact key does not match (because a dependency was added or updated), the runner still restores a partial cache from a previous run. This means npm install only downloads the changed packages rather than everything from scratch.

What to Cache by Language

Language/Tool | Cache Directory | Cache Key Source
Node.js (npm) | ~/.npm | package-lock.json
Node.js (yarn) | $(yarn cache dir) | yarn.lock
Node.js (pnpm) | $(pnpm store path) | pnpm-lock.yaml
Python (pip) | ~/.cache/pip | requirements.txt or pyproject.toml
Java (Maven) | ~/.m2/repository | pom.xml
Java (Gradle) | ~/.gradle/caches | build.gradle + gradle.properties
Ruby (Bundler) | vendor/bundle | Gemfile.lock
Go | ~/go/pkg/mod | go.sum
Rust (Cargo) | ~/.cargo/registry + target/ | Cargo.lock
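As an illustration of applying the table, a Python project on GitHub Actions would cache pip's download directory and key it on the requirements file (a sketch, following the same pattern as the npm example above):

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: pip-${{ runner.os }}-${{ hashFiles('**/requirements.txt') }}
    restore-keys: |
      pip-${{ runner.os }}-
```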

Avoid Caching node_modules Directly

It is tempting to cache node_modules instead of the npm cache directory. Do not do this. Native modules compiled on one OS or architecture will not work on another. The npm cache stores downloaded tarballs, which get installed correctly for the current platform every time.

Build Caching and Incremental Builds

Dependency caching handles the install step. Build caching handles the compilation step. If nothing in a module has changed since the last build, there is no reason to rebuild it.

Local Build Caching

Most modern build tools support incremental builds out of the box:

  • TypeScript: The tsc --incremental flag stores type-checking results in a .tsbuildinfo file. Subsequent builds only recheck files that changed.
  • Webpack: Enabling cache: { type: 'filesystem' } persists the build cache to disk.
  • Vite: Caches pre-bundled dependencies in node_modules/.vite automatically.
  • Gradle: Incremental compilation is on by default. The build cache stores task outputs keyed by inputs.

To use these in CI, you need to persist and restore the relevant cache files between runs. Treat them the same way as dependency caches: save after a successful build, restore at the start of the next one.
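For example, TypeScript's incremental state can be persisted on GitHub Actions by caching the .tsbuildinfo files. Keying on the commit SHA with a fallback restore key means each build starts from the most recent previous state (a sketch; the .tsbuildinfo location depends on your tsconfig):

```yaml
- uses: actions/cache@v4
  with:
    path: '**/*.tsbuildinfo'
    key: tsbuildinfo-${{ runner.os }}-${{ github.sha }}
    restore-keys: |
      tsbuildinfo-${{ runner.os }}-
- run: npx tsc --incremental
```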

Remote Build Caching

Remote build caching takes this further by sharing cached artefacts across your entire team. If a colleague has already built a module with the same inputs, your CI runner (or your local machine) can download the cached output instead of rebuilding.

Tools that support remote build caching:

  • Turborepo: Stores task outputs in a remote cache. The Turborepo caching documentation explains the mechanism in detail.
  • Nx: Nx Cloud provides distributed caching across CI runners and developer machines.
  • Gradle: The build cache can be backed by a remote HTTP cache server.
  • Bazel: Remote caching and remote execution are core features of the build system.

Remote caching is particularly valuable in monorepos. For teams managing multiple packages in a single repository, our article on monorepos vs polyrepos covers the broader architectural decisions involved.

Affected-Only Builds

In a monorepo, you should not rebuild everything on every commit. Instead, detect which packages changed and build only those:

# Turborepo example
npx turbo run build --filter=...[origin/main]

# Nx example
npx nx affected --target=build --base=origin/main

This combines well with remote caching. Unchanged packages hit the cache. Changed packages rebuild. The result is a pipeline that scales with the size of the change, not the size of the repository.
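In a GitHub Actions workflow, the affected-only build and the remote cache combine into a single step. This is a sketch: the secret name and team slug are assumptions, and the `--filter` syntax needs the origin/main ref available, so check out with enough history (for example `fetch-depth: 0`):

```yaml
- uses: actions/checkout@v4
  with:
    fetch-depth: 0  # the filter compares against origin/main
- run: npx turbo run build --filter=...[origin/main]
  env:
    TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}  # remote cache credentials
    TURBO_TEAM: your-team-slug
```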

Parallelisation

Running steps in parallel is one of the most effective ways to reduce total pipeline duration. If your lint, type check, and test steps each take 3 minutes and run sequentially, that is 9 minutes. Run them in parallel and it is 3 minutes.

Parallel Jobs

Most CI platforms let you run independent jobs concurrently. Structure your pipeline so that steps without dependencies on each other run at the same time:

[Diagram: sequential vs parallel pipeline execution. Sequential (12 min): install, lint, type check, tests, and build run one after another. Parallel (5 min): lint, type check, and tests run concurrently after install, followed by build.]

On GitHub Actions, the matrix strategy makes this straightforward. Define independent jobs that share a dependency installation step, then let them run concurrently.
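On GitHub Actions, jobs without a `needs:` dependency on each other run in parallel by default. A minimal sketch with three independent jobs (the npm script names are illustrative):

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run lint
  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run typecheck
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
```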

Parallel Test Execution

Splitting your test suite across multiple runners is where parallelisation delivers the largest gains. The key decisions are:

  1. How to split: By file, by test, or by historical timing data. Timing-based splitting produces the most even distribution.
  2. How many runners: Start with 2 to 4 parallel runners and increase until the overhead of spinning up additional runners outweighs the time saved.
  3. How to aggregate results: Most CI platforms can collect test results from parallel jobs into a single report.

A common pattern with Jest:

strategy:
  matrix:
    shard: [1, 2, 3, 4]
steps:
  - run: npx jest --shard=${{ matrix.shard }}/4

For Python projects, pytest-xdist provides similar functionality with -n auto to automatically detect the number of available CPU cores.

Watch Out for Shared State

Parallel tests must be independent. If two test shards write to the same database, file, or environment variable, you will get intermittent failures that are painful to debug. Use isolated test databases, unique temporary directories, and avoid global mutable state.
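One way to keep shards isolated is to derive resource names from the shard index, so no two runners ever touch the same database. Extending the Jest sharding example above (the environment variable and database naming are illustrative assumptions):

```yaml
strategy:
  matrix:
    shard: [1, 2, 3, 4]
steps:
  - run: npx jest --shard=${{ matrix.shard }}/4
    env:
      # each shard gets its own database, so writes never collide
      DATABASE_URL: postgres://localhost:5432/test_shard_${{ matrix.shard }}
```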

For more on writing tests that work reliably in CI, see how to write tests that actually help.

Docker Build Optimisation

If your CI pipeline builds Docker images, the build step can easily dominate your total pipeline time. A naive Dockerfile that installs dependencies and copies source code in a single layer will rebuild everything on every commit, even if only one line of application code changed.

Layer Ordering

Docker caches layers from top to bottom. When a layer changes, every layer below it is rebuilt. The optimisation principle is simple: put things that change rarely at the top and things that change often at the bottom.

# Good: dependencies cached separately from source code
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Bad: source code change invalidates dependency install
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci
RUN npm run build

In the good example, npm ci only reruns when package.json or package-lock.json changes. In the bad example, it reruns on every commit because COPY . . comes before the install step.

The Docker build cache documentation covers the mechanics of layer caching in detail. For a broader look at Docker in development workflows, see our article on Docker for developers: beyond the basics.

Multi-Stage Builds

Multi-stage builds keep your final image small and your build fast:

# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]

The builder stage handles compilation. The production stage copies only what is needed to run the application. Build tools, source files, and development dependencies stay out of the final image.

BuildKit Cache Mounts

Docker BuildKit (the default builder in modern Docker) supports cache mounts that persist package manager caches across builds:

RUN --mount=type=cache,target=/root/.npm npm ci

This keeps the npm cache between builds, so only changed packages need to be downloaded. It is the Docker equivalent of CI dependency caching.
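The same pattern works for other package managers by pointing the mount at their cache directory. A sketch for a Python image (pip's default cache location for root on Linux is /root/.cache/pip):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
# the pip download cache survives across builds via the cache mount
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
```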

Advanced Techniques

Selective Step Execution

Not every change needs every check. A documentation-only change does not need to run the full test suite. A change to a frontend component does not need backend integration tests.

Use path filters to skip irrelevant steps:

# GitHub Actions path filter example
on:
  push:
    paths:
      - 'src/**'
      - 'tests/**'
      - 'package.json'

This reduces unnecessary builds and keeps the feedback loop tight for changes that do not affect application code.

Build Artefact Reuse

Build once, deploy many times. Instead of rebuilding your application for each deployment environment, build a single artefact and promote it through environments. This is faster and eliminates the risk of environment-specific build differences.

The pattern works naturally with deployment strategies like blue-green and canary releases, where the same artefact moves from staging to production.
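On GitHub Actions, the build-once pattern can be sketched with the upload/download artifact actions; the deploy script here is a hypothetical placeholder:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: app-dist
          path: dist/
  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: app-dist
          path: dist/
      - run: ./scripts/deploy.sh staging  # hypothetical deploy script
```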

Self-Hosted Runners

If your builds are CPU or memory intensive, self-hosted runners can be significantly faster than the shared runners provided by CI platforms. They also give you persistent storage for caches, which eliminates the cache download step entirely.

The tradeoff is maintenance. Self-hosted runners need patching, monitoring, and capacity management. For teams with straightforward build requirements, the managed runners are usually sufficient.

Warming the Cache

A cold cache after a dependency update can slow down an entire team’s builds. Some teams run a scheduled “cache warming” job that builds and caches dependencies on a regular schedule, so the first developer to push after a dependency change still gets a warm cache.
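A scheduled cache-warming workflow on GitHub Actions might look like this sketch (the cron time is illustrative; the cache step mirrors the dependency-caching example earlier):

```yaml
name: warm-cache
on:
  schedule:
    - cron: '0 5 * * *'  # daily at 05:00 UTC
jobs:
  warm:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
      - run: npm ci
```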

A Practical Optimisation Checklist

If you are looking to speed up your CI builds today, work through this list in order. Each step builds on the previous one.

Priority | Action | Typical Saving | Effort
1 | Add dependency caching | 2 to 5 minutes | Low
2 | Enable incremental/cached builds | 1 to 4 minutes | Low
3 | Reorder stages to fail fast | 1 to 3 minutes | Low
4 | Parallelise independent jobs | 30 to 60% of total time | Medium
5 | Split test suite across runners | 50 to 75% of test time | Medium
6 | Optimise Docker layer ordering | 2 to 5 minutes per build | Low
7 | Add path filters for selective execution | Variable | Low
8 | Implement remote build caching | 1 to 5 minutes | Medium
9 | Evaluate self-hosted runners | Variable | High

Start with items 1 to 3. They are low effort and often cut build times in half. For more on structuring your overall pipeline around these techniques, see our guide on how to build a CI/CD pipeline that actually works.

Monitoring Build Performance Over Time

Optimising once is not enough. Builds have a natural tendency to slow down as projects grow. New dependencies get added, test suites expand, and build steps accumulate without anyone questioning whether they are still necessary.

Set up monitoring for your build times:

  • Track median and p95 build duration weekly. A creeping p95 indicates a growing problem before the median moves.
  • Alert on builds that exceed your time budget. If your target is 10 minutes, alert when builds hit 12 minutes so you can address the regression early.
  • Review build logs quarterly. Look for steps that have grown significantly, duplicate work, or tests that have become disproportionately slow.
  • Audit your dependencies. For guidance on keeping your dependency tree lean, see dependency management without the chaos.

The best engineering teams treat CI build performance the same way they treat application performance: as a metric that matters, with a budget, monitoring, and a plan for when things degrade.

Start With the Biggest Bottleneck

You do not need to implement every technique in this article at once. Measure your current pipeline, identify the single slowest stage, and fix that one first. In my experience, dependency caching alone typically saves 3 to 5 minutes, and it takes less than 30 minutes to set up on any major CI platform.

Once that is done, move to the next bottleneck. Incremental improvements compound quickly. A team that shaves 2 minutes off their build every month will halve their total build time in a quarter, and the effect on developer productivity and satisfaction is immediate.

Frequently asked questions

How fast should a CI build be?

A good target is under 10 minutes for the full pipeline, with the core build and unit test feedback arriving in under 5 minutes. Martin Fowler's original continuous integration guidance suggests that if a build takes longer than 10 minutes, developers stop waiting for results and context-switch, which defeats the purpose of fast feedback. Start by measuring your current build time, then target a 50% reduction as your first milestone.

What is the most effective way to speed up CI builds?

Dependency caching almost always delivers the biggest improvement for the least effort. Caching your package manager's download directory (the npm cache, ~/.m2, pip's cache) between runs avoids re-downloading the same packages on every build. Most teams see a 30 to 60 percent reduction in build time from caching alone. After that, test parallelisation and incremental builds provide the next largest gains.

Should I cache node_modules or the npm/yarn cache directory?

Cache the package manager's cache directory rather than node_modules directly. The cache directory stores downloaded tarballs, which npm or yarn can install from locally without network requests. Caching node_modules can cause issues with native modules compiled for different platforms or OS versions. Use a hash of your lockfile as the cache key so the cache invalidates automatically when dependencies change.

How do I parallelise tests in CI?

Most CI platforms support running jobs in parallel using a matrix strategy or parallel job configuration. Split your test suite across multiple runners based on historical run times, not file count. Tools like Jest, pytest-xdist, and RSpec can split tests automatically. The key is ensuring tests are independent and do not share state, so they produce the same results regardless of execution order.

What is remote build caching and when should I use it?

Remote build caching stores build artefacts in a shared cache that any team member or CI runner can access. Tools like Turborepo, Nx, and Gradle support this natively. If one developer builds a module locally, the next CI run can skip rebuilding it entirely. Remote caching is most valuable in monorepos where multiple projects share dependencies, and in teams where the same code paths are built repeatedly across different branches.
