CI/CD Velocity: Optimizing Build Pipelines on Nordic KVM Infrastructure

There is nothing more soul-crushing than a 20-minute build pipeline for a three-line code change. I have spent too many nights staring at Jenkins console output, watching npm install hang because the underlying storage ran out of IOPS. If your developers are switching context to Reddit or Hacker News while waiting for the CI runner, you are burning money. Not just in idle salary hours, but in the cognitive load required to get back into the "zone."

In Norway, where developer hourly rates are among the highest in Europe, efficiency isn't a luxury—it is survival. Yet, I still see teams running heavy CI/CD workloads on oversold, shared hosting environments located in Frankfurt or Virginia, dealing with latency and "noisy neighbor" CPU steal.

Let's fix that. Today, we are going to optimize a CI/CD pipeline from the infrastructure up, focusing on Docker layer caching, artifact management, and why raw I/O throughput is the bottleneck you probably aren't monitoring.

The Hidden Bottleneck: Disk I/O Wait

Most DevOps engineers obsess over CPU cores. They throw 16 vCPUs at a runner and wonder why the build time only improves by 10%. The reality? Building software is incredibly disk-intensive. Extracting node modules, compiling binaries, and linking libraries generates thousands of small read/write operations.

If your VPS provider throttles your IOPS (Input/Output Operations Per Second), your CPU sits idle waiting for data. This is iowait.

Run this command on your current CI runner during a build:

iostat -xz 1

If your %iowait is consistently above 5-10%, your storage is the problem. On budget cloud providers using shared SATA SSDs (or worse, spinning rust), I've seen this hit 40%. This is why at CoolVDS, we standardize on local NVMe storage passed through via KVM. We don't throttle IOPS because we know compilation is bursty by nature.
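
To get a baseline for what your runner's storage can actually sustain, a quick synthetic test with fio works well. This is a rough sketch: it assumes fio is installed and writes a temporary 1 GB file under /tmp, so point --directory at the volume your builds actually use.

# Random 4K read/write mix, roughly what dependency extraction looks like
fio --name=ci-io-baseline --directory=/tmp --size=1G \
    --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 \
    --direct=1 --runtime=60 --time_based --group_reporting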

Strategy 1: Docker Layer Caching with BuildKit

If you aren't using Docker BuildKit in 2023, you are living in the past. It processes the build graph in parallel and handles caching far better than the legacy builder.

Enable it explicitly in your shell or CI environment variables:

export DOCKER_BUILDKIT=1
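
If you control the runner host itself, you can also enable BuildKit daemon-wide instead of per shell. On Docker 20.10 and later, add this to /etc/docker/daemon.json and restart the daemon:

{
  "features": {
    "buildkit": true
  }
}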

However, simply enabling it isn't enough. You need to structure your Dockerfile to maximize layer cache hits. The most common mistake I see is copying the source code before installing dependencies.

The Wrong Way:

COPY . .
RUN npm install  # Reruns every time code changes

The Optimized Way:

When you copy the manifest files first, Docker only invalidates the dependency layer when the dependencies actually change, not when you fix a typo in index.js.

# syntax=docker/dockerfile:1.4
FROM node:18-alpine

WORKDIR /app

# Copy manifests first to leverage cache
COPY package.json package-lock.json ./

# Mount a cache directory for npm to speed up installs further
RUN --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev

# Now copy the source
COPY . .

CMD ["node", "server.js"]

Pro Tip: Use the --mount=type=cache feature available in modern Docker versions. This persists the npm download cache on the host runner even if the layer is rebuilt. This alone cut one of our clients' build times from 8 minutes to 90 seconds.
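
The same cache-mount pattern carries over to other toolchains. Here is a minimal sketch for a Go build stage, assuming the default module and build-cache locations inside the official golang image:

# syntax=docker/dockerfile:1.4
FROM golang:1.20-alpine AS build
WORKDIR /src

# Copy module manifests first so the download layer stays cached
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download

# Now copy the source and build with a persistent build cache
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg/mod \
    go build -o /out/app .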

Strategy 2: The GDPR & Latency Factor

Why host your CI runners in Norway? Two reasons: Latency and Compliance.

If your repository is hosted on a self-managed GitLab instance in Oslo (perhaps on a CoolVDS High-Performance plan), but your runners are in AWS us-east-1, you are paying a latency tax on every git fetch and artifact upload. We measured the difference: a 2GB Docker image push to a registry in Oslo from a local runner takes seconds. From a runner in the US, the same push has to cross trans-Atlantic fiber, and transfer time balloons accordingly.
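
You can get a rough feel for the gap with curl's timing variables. The registry hostnames below are placeholders; substitute your own endpoints.

# Compare connection and total time from the runner to each registry
curl -o /dev/null -s -w 'connect: %{time_connect}s  total: %{time_total}s\n' https://registry-oslo.example.com/v2/
curl -o /dev/null -s -w 'connect: %{time_connect}s  total: %{time_total}s\n' https://registry-us-east.example.com/v2/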

More critically, Datatilsynet (The Norwegian Data Protection Authority) is strict. If your CI pipeline spins up test databases containing "sanitized" production data, and that data leaves the EEA, you are navigating a legal minefield post-Schrems II. Keeping your CI infrastructure on CoolVDS servers in Norway simplifies your ROPA (Record of Processing Activities) immensely.

Strategy 3: Distributed Caching with MinIO

For pipelines running across multiple runners (like a Kubernetes fleet), local caching isn't enough. You need a centralized object store for shared caches (ccache, go-build cache, etc.).

Here is how you set up a high-speed cache backend using MinIO on a CoolVDS instance. This setup acts as an S3-compatible layer purely for your build artifacts.

version: '3.8'
services:
  minio:
    image: minio/minio:RELEASE.2023-05-18T00-05-36Z
    volumes:
      - ./data:/data
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: "ci_admin"
      MINIO_ROOT_PASSWORD: "ComplexPass_2023!"
    ports:
      - "9000:9000"
      - "9001:9001"
    restart: always
    ulimits:
      nofile:
        soft: 65536
        hard: 65536

Once running, configure your GitLab Runner or GitHub Actions to use this S3 endpoint for remote caching. Because CoolVDS instances share a high-bandwidth internal network, transfer times between your runners and this cache server are negligible.
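
For GitLab Runner, the cache section of config.toml ends up looking roughly like this. A sketch: the server address and bucket name are placeholders, and the credentials mirror the compose file above.

# Inside the [[runners]] block of /etc/gitlab-runner/config.toml
[runners.cache]
  Type = "s3"
  Shared = true
  [runners.cache.s3]
    ServerAddress = "10.0.10.5:9000"      # internal IP of the MinIO instance
    AccessKey = "ci_admin"
    SecretKey = "ComplexPass_2023!"
    BucketName = "runner-cache"
    Insecure = true                       # plain HTTP over the private network

Create the bucket once (for example with the mc client) before the first cache upload, or the runner will log a warning and skip caching.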

Strategy 4: GitLab CI Advanced Optimization

Let's look at a battle-hardened .gitlab-ci.yml configuration. This setup uses the overlay2 storage driver (standard on our kernels) and manages artifacts intelligently to prevent disk bloat.

stages:
  - build
  - test
  - deploy

variables:
  # Use the overlay2 driver for performance
  DOCKER_DRIVER: overlay2
  # Disable TLS between the job container and the dind service (internal traffic only)
  DOCKER_TLS_CERTDIR: ""

services:
  - name: docker:23.0.6-dind
    command: ["--mtu=1450"] # Optimization for certain overlay networks

build_image:
  stage: build
  image: docker:23.0.6
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # Pull the 'latest' image to use as a cache source
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    # BUILDKIT_INLINE_CACHE=1 embeds cache metadata so --cache-from works under BuildKit
    - >
      docker build
      --build-arg BUILDKIT_INLINE_CACHE=1
      --cache-from $CI_REGISTRY_IMAGE:latest
      --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      --tag $CI_REGISTRY_IMAGE:latest
      .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
  tags:
    - coolvds-nvme-runner-oslo

Notice the --mtu=1450 flag? That's a specific tweak for avoiding packet fragmentation inside Docker-in-Docker environments on certain virtualized networks. It's these small details that prevent random hang-ups.
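
Before hard-coding a value, check the MTU of the host's primary interface so the dind MTU sits at or below it. Interface names vary between images, so the quickest route is to list them all:

# Print each interface name and its MTU
ip -o link show | awk '{print $2, $5}'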

The Hardware Reality

Software optimization can only take you so far. If the hypervisor is oversubscribed, your build is fighting for CPU cycles. We designed the CoolVDS platform specifically to avoid the "noisy neighbor" effect common in budget hosting.

Feature         | Budget VPS                     | CoolVDS Architecture
Storage         | Shared SATA SSD (Slower)       | Dedicated NVMe (PCIe Gen3/4)
Virtualization  | Container/LXC (Shared Kernel)  | KVM (Hardware Virtualization)
Network         | Generic Routing                | Optimized Peering (NIX)

When you run a compile job on CoolVDS, you are getting the raw speed of the NVMe drive. We don't artificially cap your read speeds to 100MB/s like many competitors. If the drive can do 3000MB/s, you get 3000MB/s. For a heavy Java or Rust compilation, this cuts build time by upwards of 60%.
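
You can verify that on any instance in about a minute. A crude sequential write test with dd (direct I/O to bypass the page cache; it writes and then removes a temporary 4 GB file):

dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 oflag=direct status=progress && rm /tmp/ddtest

Sequential writes are not a perfect stand-in for read-heavy compile workloads, but an artificial throughput cap shows up immediately.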

Final Thoughts

Optimization is an iterative process. Start by measuring your I/O wait. Implement Docker caching. Move your runners closer to your data. But ultimately, ensure your foundation is solid.

You can spend weeks tweaking Makefiles, or you can migrate to infrastructure that doesn't choke under load. If you are ready to see what a CI pipeline running on unthrottled NVMe feels like, spin up a CoolVDS instance today. Your developers—and your CFO—will thank you.