Stop the Wait: Optimizing CI/CD Pipelines for High-Velocity Dev Teams in Norway (2022 Edition)
There is nothing more soul-crushing in our line of work than pushing a critical hotfix and then staring at a spinning wheel for 45 minutes. I once managed a pipeline for a microservices architecture in Oslo where the npm install step alone took longer than it took to write the actual code changes. We were burning developer hours—and sanity—waiting for runners to wake up, fetch dependencies, and slog through I/O-heavy build processes.
In 2022, "slow" is the new "broken." If your pipeline takes more than 10 minutes, you aren't doing Continuous Integration; you're doing Occasional Integration. The bottleneck usually isn't your code. It's the infrastructure your runners sit on.
The Hidden Killer: I/O Wait and Shared Resources
Most dev teams run their CI runners on default cloud instances provided by the big US hyperscalers. It seems convenient until you look at the metrics. Build processes are notoriously I/O heavy. Think about what happens during a build: thousands of small files are written, read, compiled, and packaged. On a standard VPS with shared storage (noisy neighbors), your IOPS get throttled.
I recently debugged a Jenkins agent that was timing out on a simple Maven build. The CPU usage was low, but the iowait was spiking to 40%. The disk simply couldn't keep up with the random write patterns of the build artifacts.
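If you want to confirm this on your own runner, a quick look at iostat while a build is running makes the pattern obvious. A minimal check, assuming the sysstat package is installed:
# Watch CPU iowait and per-device utilisation while a build runs
iostat -xz 5
# List processes stuck in uninterruptible I/O sleep (state D)
ps -eo state,pid,cmd | awk '$1 == "D"'
If %iowait climbs while the disk's %util sits near 100%, the storage is the bottleneck, not your build tool.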
This is where raw infrastructure choices matter. We switched that workload to a CoolVDS instance backed by local NVMe storage. The result? The build time dropped from 18 minutes to 4 minutes. No code changes. Just better hardware.
Optimizing Docker for Speed
Hardware solves the raw speed, but configuration solves efficiency. In June 2022, if you aren't using multi-stage builds and aggressive layer caching, you are wasting bandwidth.
Here is a battle-tested Dockerfile pattern we use for Node.js services to maximize layer caching. This ensures that we don't reinstall node_modules unless package.json actually changes:
# STAGE 1: Builder
FROM node:16-alpine AS builder
WORKDIR /app
# Copy only dependency definitions first to leverage Docker cache
COPY package.json package-lock.json ./
# Install dependencies
RUN npm ci --quiet
# Copy source code
COPY . .
# Build the application
RUN npm run build
# Drop devDependencies so only runtime packages reach the final image
RUN npm prune --production
# STAGE 2: Production
FROM node:16-alpine
WORKDIR /app
# Copy only necessary files from builder
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json ./
CMD ["node", "dist/main.js"]
Latency and Sovereignty: The Norwegian Context
Let's talk about geography. If your dev team is in Oslo or Bergen, but your CI runners are in us-east-1, you are fighting physics. The latency adds up during the artifact upload/download phases, especially with large Docker images.
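You can put a number on that penalty by timing a transfer against your artifact store from the runner itself (the URL below is a placeholder for wherever your cache or registry lives):
# Time DNS lookup, TCP connect and total transfer for a representative artifact
curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' \
  https://artifacts.example.com/cache/node_modules.tar.gz
Run it once from a runner in us-east-1 and once from a box in Oslo; with multi-hundred-megabyte Docker layers, the difference compounds on every single pipeline.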
Furthermore, since the Schrems II ruling, data residency is a massive headache. If your CI/CD pipeline processes production database dumps for staging environments, and that data leaves the EEA, you are in a legal minefield. Using a Norwegian provider like CoolVDS ensures your data stays within Norwegian jurisdiction, adhering to Datatilsynet guidelines and GDPR requirements.
Configuring GitLab Runner for Performance
If you are self-hosting GitLab (which you should be for total control), tuning the runner concurrency is critical. Too high, and you context-switch to death. Too low, and queues pile up.
For a CoolVDS instance with 8 vCPUs and 16GB RAM, this is my go-to config.toml configuration to balance throughput and stability:
concurrent = 10
check_interval = 0

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.example.com/"
  token = "PROJECT_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    Type = "s3"
    [runners.cache.s3]
      ServerAddress = "minio.internal:9000"
      AccessKey = "minio"
      SecretKey = "minio123"
      BucketName = "runner-cache"
      Insecure = true
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.16"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
Pro Tip: Notice the /var/run/docker.sock binding? It lets jobs talk to the host's Docker daemon and spawn sibling containers, which is faster than running a full privileged dind service for most simple build tasks—if the socket is all you need, you can even set privileged = false. The trade-off is security: any job can control the host daemon. For isolated, single-team runners on CoolVDS, that is an acceptable risk for the speed gain.
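For reference, this is roughly the registration command that produces a runner like the one above (the URL and token are placeholders, and you can still hand-tune config.toml afterwards):
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "PROJECT_TOKEN" \
  --executor "docker" \
  --docker-image "docker:20.10.16" \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
  --docker-volumes "/cache" \
  --description "coolvds-nvme-runner-01"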
System Tuning for Heavy CI Workloads
Runners often hit system limits before hardware limits. When running parallel integration tests, you can easily exhaust file descriptors. On your CoolVDS host, you need to tune sysctl.conf to handle the load of hundreds of containers spinning up and dying every hour.
Apply these settings to avoid the dreaded "Too many open files" error:
# /etc/sysctl.d/99-ci-tuning.conf
# Increase max open files
fs.file-max = 2097152
# Increase inotify watches (file watchers in build and test tools chew through these)
fs.inotify.max_user_watches = 524288
# Optimize network stack for short-lived connections
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.core.somaxconn = 4096
After saving, run sysctl --system to load the new values. I've seen this one config change eliminate roughly half of the "random" pipeline failures in high-concurrency environments.
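Keep in mind that fs.file-max is only the kernel-wide ceiling; the runner process is still bound by its own nofile limit. A quick apply-and-verify pass, assuming the runner was installed from the official package as the gitlab-runner systemd service:
# Load the new sysctl values and confirm they took effect
sudo sysctl --system
sysctl fs.file-max fs.inotify.max_user_watches net.core.somaxconn
# Raise the per-process open-file limit for the runner service itself
sudo mkdir -p /etc/systemd/system/gitlab-runner.service.d
printf '[Service]\nLimitNOFILE=65536\n' | sudo tee /etc/systemd/system/gitlab-runner.service.d/limits.conf
sudo systemctl daemon-reload && sudo systemctl restart gitlab-runner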
The Verdict: Own Your Infrastructure
Serverless CI/CD is fine for "Hello World." But for serious engineering teams in Europe, the combination of network latency, storage I/O, and data compliance dictates that you own your runners.
By deploying on CoolVDS, you leverage local peering at NIX (Norwegian Internet Exchange) for low-latency transfers and NVMe storage that eats I/O for breakfast. Don't let your infrastructure be the reason you miss a Friday deploy.
Ready to cut your build times in half? Deploy a dedicated CI Runner on a CoolVDS High-Performance NVMe instance today.