Stop Watching Progress Bars: Optimizing CI/CD Pipelines on Norwegian Infrastructure

There is nothing more soul-crushing in this industry than pushing a hotfix and staring at a spinning circle for 25 minutes. Your coffee is cold. The client is calling. And npm install is still fetching dependencies from a registry halfway across the world.

I’ve managed infrastructure for teams ranging from lean startups in Bergen to enterprise banks in Oslo. The story is always the same: developers blame the code, ops blames the network, but nobody looks at the metal running the pipeline. If you are relying on shared, oversold cloud runners hosted in Virginia or Frankfurt while your team sits in Norway, you are voluntarily adding latency to every single packet.

Let’s cut the noise. We’re going to look at why your pipeline is slow, how to fix it with self-hosted runners, and why the underlying hardware (specifically NVMe and local peering) matters more than your caching strategy.

The Hidden Bottleneck: I/O Wait and Context Switching

Most CI/CD jobs are I/O bound, not CPU bound. Think about it. What does a typical pipeline do? It clones a repo, pulls Docker images, extracts layers, installs thousands of tiny files (looking at you, node_modules), and compiles binaries.

On a standard shared VPS or a free-tier runner, you are fighting for disk time. If a neighbor on the same physical host decides to reindex a massive database, your build times spike. You can see the symptoms in top: high %wa (I/O wait) when the disk is saturated, and high %st ("steal time") when the hypervisor hands your CPU cycles to someone else. You cannot optimize away bad neighbors with code.
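
Before blaming the code, measure. A quick check with vmstat (and iostat, if the sysstat package is installed) shows whether the machine is actually waiting on disk:

# Sample system pressure once per second for 10 seconds.
# High values in the "wa" (I/O wait) and "st" (steal) columns
# during a build mean the disk or a noisy neighbor is the problem.
vmstat 1 10

# Per-device view: %util near 100 and a climbing await
# confirm the disk itself is saturated.
iostat -x 1 5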

The Hardware Fix

We built CoolVDS on pure NVMe storage arrays for this exact reason. When your runner extracts a 2GB Docker image, it needs random read/write speeds that spinning rust (HDD) or cheap SATA SSDs simply cannot provide. We consistently see 30-40% faster pipeline execution just by moving from standard cloud runners to a dedicated KVM instance with NVMe.

Here is how you verify if your current runner is choking on I/O. Run this fio test during a build:

fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=512m --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

If your IOPS are under 5,000, your hardware is the bottleneck. On our CoolVDS instances, we tune for high throughput specifically to handle these bursty workloads.

Topology Matters: The Norwegian Advantage

Latency is physics. If your repository is hosted on GitLab.com (served mostly from the US) and your runner is in Norway, every request pays a round-trip time (RTT) of ~100ms. If you run a self-hosted GitLab instance or cache artifacts yourself, keeping that data inside Norway cuts the round trip to a few milliseconds, which is why runner placement is critical.
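
You can put a number on this from the runner itself. ping gives the raw RTT, and mtr (available in most package managers) shows where the time is spent:

# Raw round-trip time to the Git server
ping -c 10 gitlab.com

# Hop-by-hop latency report
mtr --report --report-cycles 10 gitlab.com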

For Norwegian dev teams, data sovereignty is also a legal headache. Under GDPR and the post-Schrems II landscape, knowing exactly where your build artifacts and test databases live is mandatory. Using a runner in an Oslo datacenter ensures your data never leaves the jurisdiction unexpectedly.

Furthermore, CoolVDS peers directly at NIX (Norwegian Internet Exchange). If your production servers are also in Norway, the deployment phase (scp/rsync) is practically instantaneous.
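
To illustrate, a typical deploy step over that peered link is a single rsync. The hostname and paths below are placeholders; substitute your own:

# Sync build artifacts to production over SSH.
# app.example.no stands in for your production host.
rsync -az --delete ./dist/ deploy@app.example.no:/var/www/app/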

Implementing the Solution: Self-Hosted GitLab Runner

Let's get technical. We are going to deploy a GitLab Runner on a CoolVDS instance. This gives you full control over the environment, Docker socket binding, and caching.

1. System Tuning

Before installing the runner, tune the kernel for high network throughput. Default Linux settings are often too conservative for heavy CI/CD traffic.

# /etc/sysctl.conf

# Increase the maximum number of open file descriptors
fs.file-max = 2097152

# Maximize the backlog of incoming connections
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000

# TCP optimization for low latency
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_no_metrics_save = 1

Apply with sysctl -p.
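
To verify the reload took effect, query a couple of the tuned keys:

sysctl net.core.somaxconn    # expect: net.core.somaxconn = 65535
sysctl fs.file-max           # expect: fs.file-max = 2097152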

2. The Runner Configuration

Install the runner using the official repositories (never use the package manager's outdated default).
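
For a Debian-based host, the install and registration look like this (registration is interactive; the token comes from your project's CI/CD settings):

# Add GitLab's official runner repository, then install
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install -y gitlab-runner

# Register the runner against your GitLab instance
sudo gitlab-runner register --url https://gitlab.com/ --executor docker

Once registered, the magic happens in /etc/gitlab-runner/config.toml.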

[[runners]]
  name = "coolvds-oslo-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN"
  executor = "docker"
  # Run multiple jobs if your VDS has enough cores
  limit = 4
  [runners.custom_build_dir]
  [runners.cache]
    # Use a MinIO instance in the same datacenter if available;
    # drop this whole block to fall back to the local cache
    Type = "s3"
    Shared = true
    [runners.cache.s3]
      # Placeholder values - point these at your own MinIO deployment
      ServerAddress = "minio.internal:9000"
      AccessKey = "CACHE_ACCESS_KEY"
      SecretKey = "CACHE_SECRET_KEY"
      BucketName = "runner-cache"
  [runners.docker]
    tls_verify = false
    image = "docker:27.3.1"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    # Use the host's Docker socket for layer caching benefits (use with caution)
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0

Pro Tip: Mounting /var/run/docker.sock is dangerous if you don't trust the code running in the pipeline. However, for internal teams, it allows the runner to reuse the host's Docker cache, making subsequent builds of the same Dockerfile near-instant. The isolation provided by CoolVDS KVM means even if a container breaks out, it's contained to your VM, not the whole fleet.
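
One caveat of reusing the host's Docker cache: old layers accumulate on the NVMe volume. A simple weekly cleanup keeps it in check; the seven-day retention window here is an assumption, tune it to your disk size:

#!/bin/sh
# /etc/cron.weekly/docker-prune
# Remove stopped containers, unused images, and build cache older than 7 days
docker system prune -af --filter "until=168h"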

3. Optimizing the Pipeline Definition

Hardware solves the resource constraint, but you still need smart config. Use Docker BuildKit and inline caching to speed up image creation.

# .gitlab-ci.yml

variables:
  DOCKER_DRIVER: overlay2
  # Enable BuildKit for performance
  DOCKER_BUILDKIT: 1

build:
  stage: build
  image: docker:27.3.1
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    - |
      docker build \
        --cache-from $CI_REGISTRY_IMAGE:latest \
        --build-arg BUILDKIT_INLINE_CACHE=1 \
        -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA \
        -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
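
The same principle applies to dependency installs. GitLab's built-in cache, keyed on the lockfile, keeps npm from re-downloading the world on every run. This job is illustrative; adjust the image and scripts to your stack:

test:
  stage: test
  image: node:20
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - .npm/
  script:
    - npm ci --cache .npm --prefer-offline
    - npm test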

The Economic Argument

SaaS CI providers charge by the minute. It adds up. A "medium" instance on a popular CI platform costs significantly more per hour than a monthly flat-rate CoolVDS instance.

By moving to a self-hosted model on our infrastructure, you gain:

  • Predictable Pricing: No surprise bills when a junior dev pushes a loop that runs for 4 hours.
  • Vertical Scaling: Need 32GB RAM for a Java compilation? Just upgrade the VPS. You don't need to buy an "Enterprise" plan.
  • Security: Private networking (VLANs) between your runner and your staging environment without exposing ports to the public internet.

Don't Tolerate Latency

In 2024, there is no excuse for a slow pipeline. The tools exist. The hardware exists. The bottleneck is usually inertia—sticking to defaults because "it works." But if "working" means waiting, it's broken.

Your developers cost significantly more than a VPS. If you save them 30 minutes a day per person, the ROI on a high-performance CoolVDS instance is measured in days, not months.

Ready to fix your build times? Deploy a high-frequency NVMe instance in our Oslo datacenter today and experience the difference raw I/O makes.