CI/CD Pipelines Are Slow Because Your I/O is Garbage: A 2022 Optimization Guide

You Are Wasting Your Life Waiting for Builds

There is nothing quite as soul-crushing as pushing a hotfix on a Friday afternoon and watching a progress bar stall at npm install for twelve minutes. I have been there. I have stared at the terminal cursor blinking mockingly while a simple CSS change propagates through a bloated Jenkins pipeline. It is not just annoying; it is expensive.

In the Nordic market, where developer hourly rates are among the highest in Europe, a 20-minute build time is a financial leak. If you have five developers pushing four times a day, that is nearly seven hours of lost productivity per day. In this guide, we are not talking about rewriting your codebase. We are talking about fixing the plumbing: the infrastructure, the caching, and the protocols that run underneath your pipeline.
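The arithmetic behind that "nearly seven hours" figure is worth making explicit. A back-of-the-envelope sketch, using the team size, push frequency, and build time from the paragraph above:

```shell
#!/bin/sh
# Back-of-the-envelope cost of a 20-minute build (figures from the text above).
DEVS=5          # developers on the team
PUSHES=4        # pushes per developer per day
BUILD_MIN=20    # minutes spent waiting per build
LOST_MIN=$((DEVS * PUSHES * BUILD_MIN))
echo "Minutes lost per day: $LOST_MIN"
echo "Hours lost per day:   $((LOST_MIN / 60))h $((LOST_MIN % 60))m"
```

That works out to 400 minutes, or 6 hours 40 minutes, of developers watching progress bars every single day.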

The Bottleneck You Are Ignoring: Disk I/O

Most DevOps engineers obsess over CPU cores. They throw more vCPUs at a runner and wonder why the speed doesn't double. Here is the hard truth: CI/CD is an I/O-bound process. Whether you are compiling Go, building Java JARs, or unzipping ten thousand node modules, you are hammering the disk.

In 2022, standard SSDs simply don't cut it for high-concurrency build environments. I recently audited a setup for a client in Oslo using a budget VPS provider. Their iowait was consistently hitting 30% during builds. Why? Noisy neighbors and throttled IOPS.

Pro Tip: Run iostat -xz 1 during your next build. If your %util is hovering near 100% while CPU is idle, your storage is the bottleneck.
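If you want to script that check instead of eyeballing the output, you can pull %util out with awk. A sketch below uses a hypothetical sample of one iostat device line; the column layout varies between sysstat versions, so it locates %util from the header instead of hard-coding a column number:

```shell
# Hypothetical sample of an `iostat -xz` device section; on a real runner,
# pipe live iostat output in instead. Column positions differ across sysstat
# versions, so we find the %util column from the header row.
sample='Device  r/s  w/s  rkB/s  wkB/s  await  %util
nvme0n1  120.0  3400.0  4800.0  910000.0  4.10  97.30'

echo "$sample" | awk '
NR == 1 { for (i = 1; i <= NF; i++) if ($i == "%util") col = i; next }
$col + 0 > 90 { print $1, "is saturated at", $col "% util" }'
```

Any device printed by this filter is pinned above 90% utilization and is a prime suspect for your stalled builds.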

This is where CoolVDS differs from the mass-market crowd. We don't oversell storage I/O. By using dedicated NVMe drives with high queue depths, we ensure that when apt-get install triggers, it writes as fast as the kernel allows.

Tactical Fix 1: Enable Docker BuildKit

If you are still using the legacy Docker builder in 2022, stop. BuildKit has shipped with Docker since 18.09 and handles dependency resolution significantly better, allowing parallel build stages that the old engine could not handle.

Enable it globally on your runner environment:

export DOCKER_BUILDKIT=1

Or in your /etc/docker/daemon.json:

{
  "features": {
    "buildkit": true
  }
}

This simple switch often cuts build times by 20-30% simply by caching intermediate layers more intelligently.

Tactical Fix 2: Aggressive Layer Caching with Multi-Stage Builds

A common mistake I see in Dockerfile definitions is copying the source code before installing dependencies. This invalidates the cache every time you touch a line of code. You need to structure your Dockerfile to cache the heavy lifting.

Here is a battle-tested pattern for a Node.js application that leverages caching effectively:

# STAGE 1: Builder
FROM node:16-alpine AS builder
WORKDIR /app

# Copy only files required for installation first
COPY package.json package-lock.json ./

# Install dependencies (cached unless package.json changes)
RUN npm ci --quiet

# Copy source code AFTER installing dependencies
COPY . .

# Build the application
RUN npm run build

# STAGE 2: Runner
FROM node:16-alpine
WORKDIR /app

# Copy only the build artifacts from the builder stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json ./

CMD ["node", "dist/main.js"]

By separating the COPY package.json and npm ci steps, Docker will reuse the cached layer for dependencies as long as package.json remains unchanged. This reduces build time from minutes to seconds for code-only changes.
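Layer caching only helps the Docker daemon itself. If your runner also restores a runner-level cache (GitLab CI's cache, for instance), the same principle applies: key the cache on a hash of the lockfile, so node_modules is reused until dependencies actually change. A minimal sketch, where the lockfile content is a hypothetical stand-in:

```shell
# Derive a cache key from the lockfile, mirroring the COPY package-lock.json
# layer trick at the runner level. The lockfile here is a demo placeholder;
# point $lockfile at your real package-lock.json in CI.
lockfile="$(mktemp)"
printf '{ "name": "demo", "lockfileVersion": 2 }\n' > "$lockfile"

cache_key="node-modules-$(sha256sum "$lockfile" | cut -c1-16)"
echo "$cache_key"
```

The key is stable across builds until the lockfile changes, at which point the hash (and therefore the cache entry) rolls over automatically.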

Tactical Fix 3: SSH Multiplexing for Deployment

Once the build is done, you have to ship it. If you are using Ansible or simple rsync scripts to deploy to your VPS Norway servers, the handshake overhead can kill you. Creating a new SSH connection for every task adds latency, especially if you are deploying from a runner in Frankfurt to a server in Oslo.

Enable SSH multiplexing (ControlMaster). This allows you to reuse a single TCP connection for multiple SSH sessions.

Add this to your ~/.ssh/config on the runner:

Host *
    ControlMaster auto
    ControlPath /tmp/ssh-%r@%h:%p
    ControlPersist 10m

Now, when you run a deployment script, the first command performs the handshake, and subsequent commands fly through the existing tunnel. It feels instant.
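One refinement worth considering: /tmp is a shared directory, so many setups keep control sockets in a private directory under ~/.ssh instead. A sketch of the setup (the mktemp path here is only for demonstration; on a real runner you would use $HOME/.ssh/sockets):

```shell
# Create a private directory for SSH control sockets.
# Demo uses a temp dir; on a real runner: sock_dir="$HOME/.ssh/sockets"
sock_dir="$(mktemp -d)/sockets"
mkdir -p "$sock_dir"
chmod 700 "$sock_dir"

# Then point ControlPath at it in ~/.ssh/config:
#   ControlPath ~/.ssh/sockets/%r@%h:%p
# An active master connection can be inspected with:  ssh -O check user@host
echo "$sock_dir"
```

The chmod 700 ensures only the deploy user can reach the sockets, which matters on runners shared by multiple jobs.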

Data Sovereignty and Latency

We are operating in a post-Schrems II world. European companies are nervous about where their data lives, even temporary build artifacts. Using a US-based cloud CI provider often means your code snippets and secrets are traversing jurisdictions.

Hosting your GitLab Runner or Jenkins node on a CoolVDS instance in Norway solves two problems:

  1. Compliance: Data stays within the EEA/Norway legal framework, satisfying Datatilsynet requirements.
  2. Latency: If your production servers are in Norway (connected to NIX), your deployment runner should be there too. The rsync transfer speed between two CoolVDS instances in the same datacenter is effectively instant.

Comparison: Shared Hosting Runner vs. CoolVDS Dedicated Runner

| Feature   | Standard Shared Runner      | CoolVDS NVMe Runner       |
|-----------|-----------------------------|---------------------------|
| Disk I/O  | Throttled (Shared SATA/SSD) | Unthrottled NVMe          |
| CPU Steal | High (variable performance) | Near Zero (KVM Isolation) |
| Network   | Public Internet Routing     | Local Datacenter Peering  |
| Privacy   | Opaque                      | Strict Norwegian/GDPR     |

The Infrastructure Check

Before you blame your developers for "spaghetti code," check your infrastructure. If your runner relies on swap memory because it lacks RAM, or if it waits 500ms for disk writes, your pipeline is broken.

Here are a few quick checks you can run right now:

Check for CPU steal time:

top -bn1 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* st.*/\1/"

Check disk write speed (safe test):

dd if=/dev/zero of=ddtest bs=64k count=16k conv=fdatasync && rm -f ddtest

Check latency to NIX (Norwegian Internet Exchange):

mtr --report --report-cycles 10 nix.no
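Scraping top output with sed works, but the kernel exposes steal time directly in /proc/stat, which is easier to parse reliably. A sketch below runs against a hypothetical /proc/stat snapshot so it is self-contained; on a live Linux runner, feed it the real file instead:

```shell
# Hypothetical first line of /proc/stat (cumulative jiffies since boot):
#   cpu  user nice system idle iowait irq softirq steal guest guest_nice
# On a real runner, replace the echo with:  awk '...' /proc/stat
sample='cpu  10132153 290696 3084719 46828483 16683 0 25195 175 0 0'

steal=$(echo "$sample" | awk '/^cpu / {
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    printf "%.4f", ($9 / total) * 100   # field 9 is steal time
}')
echo "CPU steal since boot: ${steal}%"
```

Note these counters are cumulative since boot, so a near-zero figure here can still hide bad spikes; sample the file twice and diff the counters if you need a live reading.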

Conclusion

Optimization is about removing friction. You cannot have an agile team if they are afraid to commit code because the build takes too long. By moving to modern Docker construction methods and hosting your CI/CD infrastructure on high-performance, low-latency hardware like CoolVDS, you turn a bottleneck into a competitive advantage.

Don't let slow I/O kill your workflow. Deploy a dedicated GitLab Runner on a CoolVDS NVMe instance today and see your pipeline turn green before you can even switch tabs.