
Slash Your CI/CD Build Times: A DevOps Guide to Self-Hosted Runners in Norway

There is nothing more soul-crushing for a developer than pushing a hotfix and staring at a pending status for 15 minutes because the shared runners on your SaaS git provider are clogged. If your team treats the CI/CD pipeline as a mandatory coffee break, you are bleeding money. In 2022, "it works on my machine" is not enough; it needs to work on the build server, and it needs to happen fast.

I have spent the last decade debugging pipelines that crawl. The culprit is rarely the code itself. It is almost always the infrastructure. Shared runners are convenient, but they suffer from noisy neighbors, unpredictable I/O latency, and network throttling. If you are serving customers in the Nordics, or your dev team is based in Oslo, relying on a build server in a US-East region is a latency nightmare waiting to happen.

This guide cuts through the noise. We are going to build a high-performance, self-hosted CI/CD environment using GitLab CI as the reference architecture. We will focus on raw I/O throughput, Docker layer caching, and why geographic proximity to the Norwegian Internet Exchange (NIX) matters more than you think.

The Hidden Cost of Shared I/O

Most CI/CD jobs are I/O bound, not CPU bound. `npm install`, `docker build`, and `mvn package` spend the vast majority of their time reading and writing small files. On a standard cloud instance with shared HDD or throttled SSD storage, your disk queues fill up instantly.

To fix this, you need two things: KVM virtualization (so your kernel isn't fighting a neighbor's workload for resources) and NVMe storage. SATA SSDs cap out around 550 MB/s; NVMe drives, like the ones standard in CoolVDS instances, push 3,000+ MB/s. In a pipeline that extracts thousands of node modules, that throughput gap can be the difference between a 4-minute build and a 40-second build.

Benchmarking Your Current Runner

Don't take my word for it. Run this `fio` test on your current build agent. If your random write IOPS are under 10k, your storage is the bottleneck.

fio --name=random-write --ioengine=libaio --rw=randwrite --bs=4k --size=1G --numjobs=1 --iodepth=16 --runtime=60 --time_based --end_fsync=1
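fio's human-readable summary buries the number you care about in a dense status line. A quick `awk` filter pulls it out; the sample line below is hand-made for demonstration (your figures will differ):

```shell
# Illustrative: extract the IOPS figure from a captured fio summary line.
# The sample line is fabricated for demonstration, not a real measurement.
sample='write: IOPS=48.2k, BW=188MiB/s (197MB/s)(11.0GiB/60001msec)'

# Split on "IOPS=", then take everything up to the next comma.
iops=$(echo "$sample" | awk -F'IOPS=' '{print $2}' | awk -F',' '{print $1}')
echo "$iops"
```

Pipe your real fio output through the same filter to get a single comparable number per host.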

Setting Up a High-Performance GitLab Runner

We will deploy a self-hosted GitLab Runner on a CoolVDS instance running Ubuntu 22.04 LTS. This setup gives you total control over the Docker socket and caching mechanisms.

1. Installation and Registration

curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner

# Register the runner (Replace TOKEN with your project token)
sudo gitlab-runner register \
  --url "https://gitlab.com/" \
  --registration-token "TOKEN" \
  --description "coolvds-norway-runner-01" \
  --executor "docker" \
  --docker-image "docker:20.10.16"
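Before touching the config, confirm the runner actually registered and can still authenticate against GitLab. These are standard `gitlab-runner` subcommands, run on the runner host:

```shell
# Show runners registered in /etc/gitlab-runner/config.toml
sudo gitlab-runner list

# Ask GitLab to confirm each registered token is still valid
sudo gitlab-runner verify

# Check the service itself is up
sudo systemctl status gitlab-runner --no-pager
```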

2. Optimizing the Runner Configuration

The default configuration is safe, not fast. We need to tweak `/etc/gitlab-runner/config.toml` to raise job concurrency and mount volumes efficiently. By mounting the host's Docker socket, build containers run as siblings of the runner on the host daemon (often loosely called "Docker-in-Docker", though no nested daemon is involved), which lets every job reuse the host's image layer cache.

# /etc/gitlab-runner/config.toml
concurrent = 4          # run up to four jobs in parallel on this host
check_interval = 0

[[runners]]
  name = "coolvds-norway-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.16"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
    pull_policy = "if-not-present"

Pro Tip: The `pull_policy = "if-not-present"` combined with the mounted socket is critical. It prevents the runner from re-downloading base images (like Node or Python) if they already exist on the CoolVDS host, saving gigabytes of bandwidth per day.
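With the socket mounted, a pipeline job can drive the host daemon directly. A minimal sketch of a `.gitlab-ci.yml` job that benefits from the shared layer cache (the job name and tag scheme are illustrative; the `$CI_REGISTRY_*` variables are GitLab's predefined CI variables):

```yaml
# Illustrative job: `docker build` talks to the host daemon through the
# mounted socket, so unchanged layers hit the host's cache instantly.
build-image:
  stage: build
  image: docker:20.10.16
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```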

The Norwegian Context: Latency and GDPR

Performance isn't just about disk speed; it's about physics. If your repository is hosted on a self-managed GitLab instance in Norway, but your runner is in Frankfurt or Amsterdam, you are adding 20-40ms of round-trip time (RTT) to every git fetch and artifact upload. Over thousands of operations, this adds up.
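A back-of-the-envelope sketch of what that RTT costs over a working day; both numbers are assumptions for illustration, not measurements:

```shell
# Illustrative: wall-clock time lost to WAN round-trips alone.
rtt_ms=30          # assumed Oslo <-> Frankfurt round-trip time
ops_per_day=2000   # assumed daily git fetches + artifact/cache transfers

lost_seconds=$(( rtt_ms * ops_per_day / 1000 ))
echo "${lost_seconds}s of pure network wait per day"
```

That is a full minute of dead time per day from latency alone, before a single byte of payload moves.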

By placing your runner in Oslo (CoolVDS datacenter), you achieve single-digit millisecond latency to local services. Furthermore, with the legal landscape following the Schrems II ruling, data residency is a massive compliance headache. Using a Norwegian VPS provider ensures that your build artifacts, which often contain intellectual property and potentially PII, never leave Norwegian jurisdiction, keeping you on the right side of Datatilsynet (the Norwegian Data Protection Authority).

| Feature             | Shared Cloud Runner            | CoolVDS Self-Hosted Runner |
|---------------------|--------------------------------|----------------------------|
| Storage backend     | Network-attached (high latency) | Local NVMe (low latency)  |
| Caching             | Uploaded/downloaded every job  | Persistent on host         |
| Cost predictability | Per-minute billing             | Fixed monthly rate         |
| Data sovereignty    | Unclear (often US-owned)       | Strictly Norway/Europe     |

Advanced: Registry Mirroring

To squeeze the final drops of performance, configure the Docker daemon on your runner to use a registry mirror. Google's public mirror (`mirror.gcr.io`) works out of the box; if you run a local Nexus or Artifactory instance, point at that instead for LAN-speed pulls.

Edit `/etc/docker/daemon.json`:

{
  "registry-mirrors": ["https://mirror.gcr.io"],
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

Restart Docker (`systemctl restart docker`) to apply. The `overlay2` storage driver is mandatory for efficiency; if you are still on `aufs` or `devicemapper` in 2022, you are living in the past.
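After the restart, confirm the daemon actually picked the settings up. `docker info` supports Go-template output; the template paths below are my reading of its structure, and a plain `docker info | grep -i mirror` works just as well:

```shell
# Verify the storage driver and registry mirror took effect (run on the host).
docker info --format '{{.Driver}}'                  # should print: overlay2
docker info --format '{{.RegistryConfig.Mirrors}}'  # should list mirror.gcr.io
```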

Conclusion

Optimization is an iterative process, but infrastructure is the foundation. You can refactor your Dockerfiles all day, but if the underlying disk I/O is choking, your pipeline will drag. By moving to a dedicated CoolVDS instance with NVMe storage and keeping your data within the Norwegian border, you solve performance and compliance issues simultaneously.

Ready to cut your build times in half? Spin up a High-Performance NVMe instance on CoolVDS today and regain control of your pipeline.