Your Build Pipeline is Bleeding Money (and Developer Sanity)
I recently audited a deployment pipeline for a fintech startup in Oslo. They were complaining about "slow features." The reality? Their developers were spending 45 minutes a day staring at a Jenkins progress bar. That is not a process problem. That is an infrastructure failure.
We like to talk about code efficiency, but we ignore the physics of the metal running that code. If your CI/CD runner sits in a different region from your registry, or worse, is fighting for I/O credits on a budget public cloud instance, your time-to-market is suffering.
This isn't about "digital transformation." It's about raw speed. Here is how we optimized a sluggish pipeline down to 3 minutes using a tuned Linux kernel, Docker BuildKit, and local infrastructure.
1. The I/O Bottleneck: Why Shared Cloud Kills Builds
CI/CD is disk torture. npm install, cargo build, and Docker image creation are fundamentally I/O-heavy operations. Most hyperscalers put you on network-attached storage (EBS and friends). When you hit the disk hard, you hit the IOPS limit. Your build stalls. You wait.
The fix is deploying your runners on infrastructure that guarantees local NVMe access. In our benchmarks, a CoolVDS instance with direct-attached NVMe outperformed a standard cloud instance by 400% on disk-heavy compilation tasks.
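Don't take the vendor's word for it, or mine. A one-minute fio run tells you what your runner's disk can actually sustain. A minimal sketch, assuming fio is installed and the working directory sits on the disk your builds use:
# 4K random writes with direct I/O, a rough proxy for dependency installs and image builds
fio --name=ci-disk-check --rw=randwrite --bs=4k --size=512m \
    --ioengine=libaio --iodepth=32 --numjobs=4 --direct=1 \
    --runtime=60 --time_based --group_reporting
On network-attached volumes the IOPS typically flatline at the provisioned cap; on local NVMe they scale with queue depth.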
Pro Tip: Always check your disk scheduler. On virtualized NVMe, you want none or noop to let the host handle scheduling.
Check your scheduler:
cat /sys/block/vda/queue/scheduler
If it says cfq inside a VM, you are double-queuing I/O requests. Change it.
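Switching it is a one-liner. A minimal sketch, assuming the disk shows up as vda as in the check above; adjust the device name, and use a udev rule or your distro's tooling to make the change persistent:
# Hand scheduling to the host; on older kernels use noop instead of none
echo none | sudo tee /sys/block/vda/queue/scheduler
cat /sys/block/vda/queue/scheduler   # should now show [none]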
2. Docker Layer Caching Strategy
Most Dockerfiles I see are structured poorly. They copy the source code before installing dependencies. This invalidates the cache every time you touch a single line of code.
Here is the wrong way:
COPY . .
RUN npm install
Here is the battle-hardened way: copy the manifests first, install, and then copy the source.
Optimized Dockerfile Example
# syntax=docker/dockerfile:1.4
FROM node:20-alpine AS builder
WORKDIR /app
# Copy only package manifests to leverage cache
COPY package.json package-lock.json ./
# Mount a cache type to speed up subsequent installs
RUN --mount=type=cache,target=/root/.npm \
npm ci --omit=dev
# NOW copy the source code
COPY . .
RUN npm run build
By using --mount=type=cache, we persist the npm cache on the host runner. Even if dependencies change, we only download the delta. This alone cut the build time from 6 minutes to 90 seconds.
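The cache mount syntax requires BuildKit. Recent Docker releases (including the docker:24 runner image used later in this post) enable it by default, but on an older daemon you may need to switch it on explicitly. A minimal sketch; the image tag is just a placeholder:
# Force BuildKit on older daemons; a no-op where it is already the default
DOCKER_BUILDKIT=1 docker build -t myapp:ci .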
3. The "Local" Advantage: Latency and NIX
If your team is in Norway, your data should be too. Not just for GDPR or Schrems II compliance—though Datatilsynet will thank you—but for latency. Routing traffic from Oslo to a data center in Virginia or even Ireland adds milliseconds that compound over thousands of API calls during integration tests.
By hosting your GitLab Runner or Jenkins node on a VPS in Norway, ideally one peered at NIX (the Norwegian Internet Exchange), you reduce network latency during the git pull and registry push phases.
Run a quick check on your current builder's latency to your registry:
ping -c 5 registry.gitlab.com
If you are seeing double digits and you are pushing gigabytes of Docker layers, you are losing time.
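Ping only measures ICMP round trips. For a slightly more honest picture of what a registry push will feel like, time the TLS handshake and first response against the registry API. A minimal sketch, assuming curl is available; the /v2/ endpoint answers 401 unauthenticated, which is fine for timing purposes:
curl -o /dev/null -s -w "DNS: %{time_namelookup}s  TLS done: %{time_appconnect}s  total: %{time_total}s\n" \
  https://registry.gitlab.com/v2/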
4. RAM Disks for Database Tests
Integration tests often spin up a throwaway Postgres or MySQL database. Writing these temporary databases to disk is a waste of SSD endurance and time.
We use tmpfs (RAM disk) for these containers. It’s volatile, fast, and perfect for CI.
Docker Compose Override for CI
version: '3.8'
services:
  db_test:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: secret
    # This is the magic sauce
    tmpfs:
      - /var/lib/postgresql/data:rw,noexec,nosuid,size=512m
    ports:
      - "5432:5432"
    command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off
This configuration mounts the database entirely in RAM. We also turn off fsync. If the server crashes during a test, who cares? It's a test. This reduces database setup time to near zero.
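To keep these unsafe settings out of production, put the block above in a separate override file and only stack it in CI. The file names here are assumptions; use whatever your project already ships:
# Base config plus the CI-only tmpfs override
docker compose -f docker-compose.yml -f docker-compose.ci.yml up -d db_test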
5. Tuning the Runner Kernel
Default Linux distributions are tuned for general-purpose workloads, not high-concurrency building. Run enough parallel jobs and you will hit file descriptor limits.
Increase your limits in /etc/sysctl.conf:
fs.file-max = 2097152
And apply it:
sysctl -p
Furthermore, ensure your runner handles heavy TCP churn if you are running microservice integration tests. You don't want to run out of ephemeral ports.
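The relevant knobs are sysctls as well. A minimal sketch with starting values, not gospel; add them to /etc/sysctl.conf next to fs.file-max if you want them to survive a reboot:
# Widen the ephemeral port range and recycle TIME_WAIT sockets faster
sudo sysctl -w net.ipv4.ip_local_port_range="10240 65535"
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
sudo sysctl -w net.ipv4.tcp_fin_timeout=15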
GitLab Runner Advanced Configuration
If you manage your own runner on CoolVDS, you have full control over the config.toml. Do not rely on shared runners. They are noisy neighbors.
[[runners]]
  name = "norway-fast-runner-01"
  url = "https://gitlab.com/"
  token = "REDACTED"
  executor = "docker"
  limit = 4
  [runners.docker]
    tls_verify = false
    image = "docker:24.0.5"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 2147483648
    pull_policy = "if-not-present"
Note the pull_policy = "if-not-present" setting: the runner reuses images that already exist on the host instead of pulling them from the registry on every job, which saves both bandwidth and container startup time.
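After editing config.toml, it is worth confirming the runner still talks to GitLab and picks up the new settings. A quick check with the standard gitlab-runner CLI:
# Confirm registration is still valid, then reload the service
sudo gitlab-runner verify
sudo gitlab-runner restart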