Stop Burning Dev Time: Optimizing CI/CD Pipelines for Speed and Compliance in 2022

I recently audited a deployment pipeline for a fintech startup based in Oslo. Their complaint? "Deployments take 45 minutes." I watched their developers push code, then proceed to play ping-pong while the runner choked on npm install. In an industry where time-to-market is critical, this latency is unacceptable. It’s not just about speed; it’s about developer sanity and the cost of idle context switching.

Most teams default to shared SaaS runners hosted in US-EAST-1 or Frankfurt. While convenient, they often suffer from "noisy neighbor" syndrome, unpredictable I/O, and—critical for Norwegian companies post-Schrems II—data residency ambiguity. If you are serious about DevOps, you need to own your infrastructure. Let's fix your pipeline.

1. The Hidden Killer: Disk I/O Latency

CI/CD is arguably the most I/O-intensive workload in your infrastructure. Whether you are compiling Rust binaries, building Docker images, or hydrating a massive node_modules directory, you are hammering the disk. Standard cloud instances often cap IOPS or sit on network-attached block storage that adds latency to every read and write.

When we moved that Oslo client from a generic cloud instance to a CoolVDS instance with local NVMe storage, the build time dropped from 45 minutes to 12. Why? Because seek times on NVMe are virtually non-existent compared to standard SSDs over a network fabric.

Pro Tip: Check your current disk latency. Run iostat -x 1 10 during a build. If %iowait exceeds 5%, your storage is the bottleneck, not your CPU. You need a VPS with dedicated I/O throughput.
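
A minimal sketch of that check, assuming a Debian/Ubuntu build host where iostat ships with the sysstat package:

# Install sysstat if iostat is not already available (Debian/Ubuntu)
sudo apt-get install -y sysstat

# Sample extended device statistics every second, 10 times, while a build runs
iostat -x 1 10

# Watch %iowait in the CPU summary and the await/%util columns per device;
# sustained %iowait above ~5% means storage, not CPU, is the bottleneck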

2. Optimizing the Docker Daemon

By 2022 standards, if you aren't using BuildKit, you are living in the past. It improves layer caching and runs independent build stages in parallel. However, many stock installations still don't have it enabled or tuned correctly.

Here is the /etc/docker/daemon.json configuration I deploy on our build servers to ensure we maximize the underlying hardware capabilities:

{
  "features": {
    "buildkit": true
  },
  "storage-driver": "overlay2",
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 10,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

After applying this, restart Docker. The concurrency settings alone can saturate a 1Gbps uplink—standard on CoolVDS—drastically reducing the "pull" phase of your pipeline.
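
A quick sketch of the apply-and-verify step, assuming a systemd-based distro (the image tag is just a throwaway):

# Validate the JSON before restarting, then reload the daemon
python3 -m json.tool /etc/docker/daemon.json
sudo systemctl restart docker

# BuildKit output shows numbered "#N" build steps instead of "Step X/Y";
# a throwaway build from stdin confirms the feature flag took effect
docker build --progress=plain -t buildkit-check - <<'EOF'
FROM alpine:3.15
RUN echo "BuildKit is active"
EOF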

3. Self-Hosted GitLab Runners: A Configuration Guide

For European teams, self-hosted runners are the only way to guarantee code never leaves the EEA, satisfying strict Datatilsynet requirements. But simply installing the runner isn't enough. You must tune the concurrency based on your core count.

On a 4 vCPU CoolVDS instance, I typically configure the runner to handle 2 heavy jobs or 4 light jobs. Oversubscription leads to context switching death.

Here is a snippet from a production config.toml tailored for performance:

concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    Type = "s3"
    ServerAddress = "minio.internal:9000"
    AccessKey = "minio"
    SecretKey = "minio123"
    BucketName = "runner-cache"
    Insecure = true
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.12"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 2147483648
    pull_policy = "if-not-present"

Note the pull_policy = "if-not-present". This prevents the runner from reaching out to Docker Hub if the image already exists locally. Combined with the local NVMe storage on CoolVDS, this makes image start times near-instantaneous.
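
For completeness, here is a sketch of the registration step that produces a config like the one above; it assumes gitlab-runner is already installed, and the token is a placeholder just as in the snippet:

# Register a Docker-executor runner non-interactively (GitLab Runner 14.x era)
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_TOKEN" \
  --executor "docker" \
  --docker-image "docker:20.10.12" \
  --description "coolvds-nvme-runner-01"

# Then apply the tuning from config.toml above and restart the runner
sudo gitlab-runner restart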

4. Caching: The "node_modules" Problem

Downloading the internet every time you push a commit is inefficient. You must implement aggressive caching. However, caching only works effectively if the cache extraction speed is high. I've seen pipelines where extracting the cache zip file took longer than running npm install because of slow disk I/O.

Here is how to properly structure a .gitlab-ci.yml to leverage key-based caching in 2022:

stages:
  - build
  - test

cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules/
  policy: pull-push

build_job:
  stage: build
  image: node:16-alpine
  script:
    - npm ci --prefer-offline --no-audit
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour

Using npm ci instead of npm install is crucial for deterministic builds. It requires a package-lock.json and is significantly faster.
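
If you want to see where the time actually goes, a rough local comparison is enough. This is only a sketch, assuming a project with a package-lock.json; it times a cold install against a cache-primed one:

# Cold install: empty local npm cache and no node_modules
rm -rf node_modules
npm cache clean --force
time npm ci --no-audit

# Warm install: node_modules removed, but the local npm cache (~/.npm) is primed
rm -rf node_modules
time npm ci --prefer-offline --no-audit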

5. The Kernel Tuning Layer

Default Linux kernel settings are often tuned for general desktop usage or light web serving, not the high-throughput network churn of a CI/CD pipeline. We need to widen the TCP pipe.

Add these to /etc/sysctl.conf to optimize for high connection rates (useful during dependency resolution):

# Widen the range of ephemeral ports available for outbound connections
net.ipv4.ip_local_port_range = 1024 65535

# Allow sockets in TIME_WAIT to be reused for new outbound connections
net.ipv4.tcp_tw_reuse = 1

# Raise the queue for packets arriving faster than the kernel can process them
net.core.netdev_max_backlog = 16384

# Raise the maximum listen backlog for pending connections
net.core.somaxconn = 8192

Apply them with sysctl -p. This prevents your build server from exhausting its pool of ephemeral ports when fetching thousands of small dependency files.
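
A minimal sketch of applying and spot-checking the values:

# Load the new settings and print everything that was applied
sudo sysctl -p

# Spot-check a couple of values to confirm they stuck
sysctl net.ipv4.ip_local_port_range
sysctl net.core.somaxconn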

Comparison: Shared vs. Dedicated Runners

Feature         | Shared Cloud Runner                      | CoolVDS Dedicated Runner
I/O Speed       | Unpredictable (noisy neighbors)          | Consistent NVMe speeds
Data Residency  | Usually US/global                        | Strictly Norway/EEA
Cost            | Per-minute billing (expensive at scale)  | Flat monthly fee
Security        | Ephemeral but public environment         | Private networking & firewalled

The Norwegian Context: Latency and Law

If your team is in Oslo, Bergen, or Trondheim, latency matters. Routing traffic through Frankfurt adds milliseconds that accumulate over millions of database calls during integration tests. Hosting your CI infrastructure on CoolVDS, which peers directly at NIX (Norwegian Internet Exchange), ensures that your runners talk to your staging servers with sub-millisecond latency.
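
If you want hard numbers for your own path, a quick round-trip check from the runner to your staging host is enough; the hostname below is a placeholder:

# Round-trip latency from the runner to the staging environment
ping -c 20 staging.internal.example

# Or trace the full path to see where the milliseconds accumulate
mtr --report --report-cycles 50 staging.internal.example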

Furthermore, with GDPR enforcement becoming stricter in 2022, ensuring your source code and proprietary algorithms never leave Norwegian soil is a massive compliance win. A dedicated VPS allows you to lock down access via IP whitelisting, something impossible with dynamic shared runners.
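
As an illustration of that lock-down, here is a minimal sketch using ufw; the office IP is a placeholder, and since the runner only makes outbound connections to GitLab, no other inbound ports are needed:

# Default-deny inbound, allow all outbound (the runner polls GitLab outbound)
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Whitelist SSH access from the office IP only
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
sudo ufw enable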

Final Thoughts

Optimization is an iterative process, but infrastructure is the foundation. You can tweak Webpack configs all day, but if your disk write speed is capped at 100MB/s, you are fighting a losing battle. High-performance CI/CD requires high-performance hardware.

Don't let your infrastructure be the reason your release was delayed. Deploy a dedicated runner on a CoolVDS NVMe instance today and watch your build times plummet.