The Hidden Cost of "Pending" Status
It's 3:45 PM. You push a critical hotfix. You switch tabs to your CI dashboard. Pending. Five minutes later? Still pending. Your shared cloud runner is stuck in a queue behind a thousand other developers across Europe. When it finally picks up, the build crawls because the shared hypervisor is throttling disk I/O.
I've seen entire engineering teams lose hours of productivity every week simply waiting for green checkmarks. In 2024, with the hardware we have available, this is unacceptable.
The solution isn't "more cloud." The solution is moving your compute closer to your data and owning your infrastructure. If you are targeting users or developers in Norway, relying on a generic runner in `us-east-1` or even `eu-central-1` is inefficient. Let's talk about building a pipeline that actually respects your time.
The I/O Bottleneck No One Talks About
Most CI/CD tasks are disk-bound, not CPU-bound. Think about it: `npm install`, extracting Docker images, compiling Rust artifacts, linking binaries. These operations hammer the filesystem.
Public cloud providers often cap IOPS (Input/Output Operations Per Second) on their standard instances. If you hit that cap during a large build, your CPU sits idle while the disk catches up. This is where CoolVDS differs. By utilizing local NVMe storage with direct PCI passthrough technologies (standard on KVM), we eliminate the noisy neighbor effect common in container-based VPS solutions.
Pro Tip: Check your current runner's I/O wait time. Run `iostat -x 1 10` during a build. If `%iowait` consistently exceeds 5-10%, your storage is the bottleneck, not your code.
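If `iostat` isn't available on a minimal image, a rough `%iowait` figure can be pulled straight from `/proc/stat` with nothing but POSIX shell. A minimal sketch (the two-second sampling window is an arbitrary choice):

```shell
# Sample cumulative CPU counters twice and compute iowait over the interval.
# /proc/stat first line fields: cpu user nice system idle iowait irq softirq ...
read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
sleep 2
read -r _ u2 n2 s2 i2 w2 _ < /proc/stat

total=$(( (u2 + n2 + s2 + i2 + w2) - (u1 + n1 + s1 + i1 + w1) ))
wait=$(( w2 - w1 ))

# Integer percentage of the interval spent waiting on I/O.
echo "iowait: $(( 100 * wait / total ))%"
```

Run it while a build is in flight; a quiet host should print a number near zero.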
Architecture: The Self-Hosted Runner
We are going to deploy a GitLab Runner (though the logic applies to GitHub Actions self-hosted runners too) on a CoolVDS instance located in Oslo. This gives us two advantages: low latency to local Norwegian infrastructure (NIX) and compliance with strict data residency requirements enforced by Datatilsynet.
Step 1: System Preparation
First, prepare the host. We are using Ubuntu 24.04 LTS. We need to optimize the kernel for high container density, since CI jobs spawn and kill containers rapidly.
```shell
sudo sysctl -w net.ipv4.ip_forward=1
```
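The `ip_forward` flag is only the bare minimum for Docker bridge networking. For heavy container churn, a few more knobs are worth persisting. The values below are starting-point assumptions, not benchmarked optima; drop them in `/etc/sysctl.d/` and apply with `sudo sysctl --system`:

```ini
# /etc/sysctl.d/99-ci-runner.conf -- suggested starting values, tune per workload
net.ipv4.ip_forward = 1                # required for Docker bridge networking
fs.inotify.max_user_watches = 524288   # build tools and watchers open many files
fs.inotify.max_user_instances = 1024
kernel.pid_max = 4194304               # lots of short-lived build processes
vm.swappiness = 10                     # keep build caches in RAM, not swap
```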
Next, install the Docker engine. Don't use the snap package; it introduces unnecessary loopback device overhead.
```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
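Before registering the runner, it is worth pinning down the daemon defaults. This `/etc/docker/daemon.json` sketch caps log growth (CI containers are chatty) and keeps running containers alive across daemon restarts; the size limits are assumptions, not requirements:

```json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "live-restore": true
}
```

Restart the daemon afterwards with `sudo systemctl restart docker`.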
Step 2: The Runner Configuration
This is where most implementations fail. They stick to defaults. We need to configure the concurrency limit and volume mounting to leverage host caching.
Here is a battle-tested `/etc/gitlab-runner/config.toml` designed for a CoolVDS instance with 4 vCPUs and 8 GB RAM:
```toml
concurrent = 4
check_interval = 0

[[runners]]
  name = "coolvds-norway-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN_HERE"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    Type = "s3"
    [runners.cache.s3]
      ServerAddress = "127.0.0.1:9000"
      AccessKey = "minioadmin"
      SecretKey = "minioadmin"
      BucketName = "runner-cache"
      Insecure = true
  [runners.docker]
    tls_verify = false
    image = "docker:26.1.3"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0
    pull_policy = "if-not-present"
```
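The cache section above expects an S3-compatible endpoint listening on port 9000. One way to provide it is a MinIO container on the same host; in this sketch the data path and the default credentials are placeholder assumptions you must change before real use:

```yaml
# docker-compose.yml for the local runner cache backend
services:
  runner-cache:
    image: minio/minio
    command: server /data
    ports:
      - "127.0.0.1:9000:9000"   # loopback only; the runner lives on the same host
    environment:
      MINIO_ROOT_USER: minioadmin        # placeholder -- rotate these
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - /srv/minio-data:/data
    restart: unless-stopped
```

Binding to `127.0.0.1` is why `Insecure = true` is tolerable here: the cache traffic never leaves the host.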
Notice `pull_policy = "if-not-present"`. This stops the runner from re-pulling the `docker:26.1.3` image on every job: the first pull is cached on local NVMe, which cuts both bandwidth and job startup time.
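One side effect of `if-not-present`: stale images pile up on disk over time. A periodic prune keeps the NVMe volume from filling; the schedule and the 168-hour cutoff below are assumptions to adjust for your job mix:

```shell
# root crontab entry (crontab -e): Sundays at 03:00, drop unused images
# older than one week. Shorten "until" if the disk fills faster.
0 3 * * 0 docker system prune --all --force --filter "until=168h"
```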