Stop Renting Slow Runners: Optimizing CI/CD Pipelines for Nordic Dev Teams
I have watched too many brilliant developers in Oslo and Bergen lose their minds staring at a "Pending" status. You push a critical hotfix, and then... you wait. You wait for the SaaS provider to provision a container. You wait for the network to drag your artifacts across the Atlantic. You wait for npm install to finish extracting fifty thousand tiny files on a shared HDD volume that is currently being hammered by a noisy neighbor.
If your build takes 20 minutes, you aren't doing Continuous Integration; you are doing "Coffee Break Integration." For teams operating under strict SLAs or dealing with the Norwegian Datatilsynet's requirements on data sovereignty, relying on default shared runners hosted in US-East-1 is a strategic failure. Let's talk about how to fix this using raw compute, superior I/O, and architectural common sense.
The Hidden Cost of Latency and I/O Wait
Most developers treat CI/CD as a black box: code goes in, green checkmark comes out. But the infrastructure running that box matters. A CI pipeline is essentially a stress test for Disk I/O and Network Latency. When you run `docker build` or install dependencies, you are performing thousands of random read/write operations. On a standard cloud provider's "burstable" instance, your IOPS are throttled. This is why a build that takes 2 minutes on your MacBook Pro takes 12 minutes on the cloud.
Pro Tip: Check your runner's iowait during a build. If it spikes above 10%, your CPU is sitting idle while the disk struggles to catch up. This is money burning. We default to NVMe storage on CoolVDS specifically to kill this bottleneck.
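You can sample that figure without installing any extra tooling by reading `/proc/stat` directly; this sketch assumes the standard Linux layout, where field 6 of the aggregate `cpu` line counts iowait ticks:

```shell
#!/bin/sh
# Sample the aggregate "cpu" line of /proc/stat twice, one second apart,
# and report the share of that interval spent in iowait.
s1=$(head -1 /proc/stat); sleep 1; s2=$(head -1 /proc/stat)
printf '%s\n%s\n' "$s1" "$s2" | awk '
NR==1 { for (i = 2; i <= NF; i++) t1 += $i; w1 = $6 }  # $6 = iowait ticks
NR==2 { for (i = 2; i <= NF; i++) t2 += $i; w2 = $6
        d = t2 - t1; if (d < 1) d = 1                  # guard against zero delta
        printf "iowait: %.1f%%\n", 100 * (w2 - w1) / d }'
```

Run it mid-build; a sustained reading above 10% is your disk telling you it cannot keep up.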
Optimization 1: Strategic Docker Layer Caching
The most common error I see in `Dockerfile` configurations is poor ordering. Docker builds layers. If you change a file in an early layer, every subsequent layer must be rebuilt. Stop copying your entire source code before installing dependencies.
Here is the wrong way (that 80% of you are doing):
# BAD PRACTICE
FROM node:20-alpine
WORKDIR /app
COPY . .
# If you change ONE line of code in index.js,
# this huge install layer runs again.
RUN npm ci
CMD ["node", "index.js"]
Here is the optimized approach. We copy only the package definition files first, install dependencies, and then copy the source code. This leverages the cache for the heaviest operation.
# OPTIMIZED PRACTICE
FROM node:20-alpine
WORKDIR /app
# Copy only dependency definitions first
COPY package.json package-lock.json ./
# This layer is now cached unless dependencies change
RUN npm ci --quiet
# Now copy source code
COPY . .
CMD ["node", "index.js"]
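One caveat with that final `COPY . .`: without a `.dockerignore`, the build context includes everything in the project directory, so a local `node_modules` folder or `.git` churn bloats the context upload and can invalidate the cache you just earned. A minimal example (entries are typical for a Node project; adjust for yours):

```
# .dockerignore
node_modules
.git
*.log
Dockerfile
```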
Optimization 2: The Self-Hosted Runner
SaaS runners are convenient, but they are generic. By deploying a self-hosted runner on a VPS in Norway, you gain three advantages:
- Data Sovereignty: Your code never leaves the EEA/Norway, satisfying strict interpretations of GDPR/Schrems II.
- Persistent Cache: You can keep your `node_modules` or `vendor` folders on the disk between builds.
- NIX Latency: If your production servers are in Oslo, deploying from a runner in Oslo (peered via NIX, the Norwegian Internet Exchange) is nearly instant.
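Attaching a runner is a one-time step. A sketch of the registration command, assuming a GitLab version that still uses registration tokens (newer releases instead use an authentication token created in the UI, passed via a plain `--token` flag):

```shell
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "REDACTED" \
  --executor "docker" \
  --docker-image "docker:24.0.5" \
  --description "CoolVDS-Oslo-NVMe-01"
```

This writes the `[[runners]]` section into `config.toml`, which you then tune as shown below.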
Configuring a GitLab Runner for Concurrency
Don't just install the runner and walk away. You need to tune the `config.toml` to utilize the full power of the underlying VPS. If you are running on a CoolVDS instance with 8 vCPUs, do not limit yourself to one job at a time.
concurrent = 4
check_interval = 0
[[runners]]
name = "CoolVDS-Oslo-NVMe-01"
url = "https://gitlab.com/"
token = "REDACTED"
executor = "docker"
[runners.custom_build_dir]
[runners.docker]
tls_verify = false
image = "docker:24.0.5"
privileged = true
disable_entrypoint_overwrite = false
disable_cache = false
# Mounting the host's Docker socket reuses the host daemon,
# so builds run on its overlay2 driver instead of slow VFS inside DinD
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
shm_size = 0
Optimization 3: Managing Disk Space Automatically
High-performance runners have a downside: they eat disk space. Docker images accumulate rapidly. If your runner fills up, your pipeline halts. Do not rely on manual cleanup. Set up a cron job that aggressively prunes unused objects.
Create a script at `/usr/local/bin/docker-cleanup.sh`:
#!/bin/bash
# Remove unused images not referenced by any container
docker image prune -a -f --filter "until=24h"
# Remove build cache objects older than 48 hours
docker builder prune -f --filter "until=48h"
# Check usage
USAGE=$(df / | awk 'NR==2 { gsub(/%/, ""); print $5 }')
if [ "$USAGE" -gt 85 ]; then
echo "Warning: Disk usage at ${USAGE}% on runner"
# Emergency deeper clean
docker system prune -a -f
fi
Add this to your crontab to run daily at 04:00 Oslo time.
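Assuming the script lives at the path above and has been made executable (`chmod +x /usr/local/bin/docker-cleanup.sh`), the entry could look like this; cron uses the system timezone, so make sure the runner host is set to Europe/Oslo:

```
# m h dom mon dow  command
0 4 * * * /usr/local/bin/docker-cleanup.sh >> /var/log/docker-cleanup.log 2>&1
```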
Network Topology: Why Location Matters
Let's look at the numbers. If your repository is hosted on a SaaS platform, but your production environment is a VPS in Norway, the artifact transfer speed becomes the bottleneck.
| Scenario | Runner Location | Target (Norway) | Artifact Upload Speed |
|---|---|---|---|
| Standard SaaS | US East (Virginia) | Oslo | ~12 MB/s |
| CoolVDS Self-Hosted | Oslo (Datacenter) | Oslo | ~850 MB/s (Internal Network) |
For a 500MB Docker image, that is the difference between a 40-second transfer and a 0.6-second transfer. Over 50 builds a day, you save 30 minutes of idle time.
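Those figures are easy to sanity-check yourself; this one-liner just divides the 500 MB artifact size by the throughputs from the table:

```shell
# 500 MB artifact divided by measured throughput (MB/s)
awk 'BEGIN {
  printf "US East -> Oslo: %.1f s\n", 500 / 12    # roughly 40 seconds
  printf "Oslo -> Oslo:    %.2f s\n", 500 / 850   # under a second
}'
```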
The Architecture of Trust
We don't recommend self-hosting for everything. If you have a tiny static site, use shared SaaS runners; the maintenance isn't worth it. But if you are compiling Go binaries, processing heavy Java builds, or dealing with large Docker contexts, the math changes.
The sweet spot for a dedicated runner is usually a system with high single-thread CPU performance (for compilation) and extremely fast random I/O (for caching). This is why we architect CoolVDS with local NVMe rather than network-attached block storage (like Ceph) for these workloads. Network storage adds latency that kills compilation times.
Monitoring Your Pipeline
Finally, you cannot optimize what you do not measure. If you are serious about this, spin up a Prometheus exporter on your runner.
# docker-compose.yml for monitoring
version: '3.8'
services:
node-exporter:
image: prom/node-exporter:v1.6.0
container_name: node-exporter
restart: unless-stopped
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.rootfs=/rootfs'
ports:
- "9100:9100"
When you see the CPU pegged at 100% and I/O wait at 0%, you know your hardware is actually working for you, not against you.
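If you scrape that exporter with Prometheus, two queries give you exactly that picture (metric names are the node-exporter defaults):

```
# Share of CPU time spent waiting on I/O, per instance
avg by (instance) (rate(node_cpu_seconds_total{mode="iowait"}[5m]))

# Overall CPU utilisation (1 minus the idle share)
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
```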
Stop accepting slow builds as a fact of life. Your infrastructure is a variable, not a constant. Don't let slow I/O kill your developer velocity. Deploy a high-performance runner on CoolVDS today and watch your pipelines turn green before you can even finish your coffee.