CI/CD Pipeline Optimization: Cutting Build Times in Half with Self-Hosted Runners in Norway

Stop Burning Engineer Time on Slow Builds

It is 2022, and yet I still see senior developers using the "compiling" excuse to play ping-pong. If your pipeline takes 45 minutes to lint, test, and build a Docker container, your infrastructure is broken. It is not the code; it is the environment. Most teams default to shared runners provided by SaaS platforms like GitHub or GitLab. It is easy, sure. But it is also a performance black hole.

Shared runners are usually throttle-capped, running on standard spinning rust (HDD) or low-tier SSDs, and often located in data centers thousands of kilometers away from your deployment target. If you are serving customers in Norway but building your artifacts in `us-east-1`, you are fighting physics. And physics always wins.

Let’s fix this. We are going to look at moving to self-hosted runners, optimizing Docker layer caching, and why raw NVMe I/O is the only metric that matters for CI/CD.

The Anatomy of a Slow Pipeline

I recently audited a Magento deployment for a retail client in Oslo. Their deployment script was a nightmare. It pulled source code from a repo in Europe, downloaded dependencies from a US mirror, built the image on a shared runner in Ireland, and then pushed the artifact back to a registry in Frankfurt. Finally, it deployed to a server in Norway.

The total time was 38 minutes. By moving the runner to a CoolVDS instance in Oslo—physically close to the target environment and utilizing the Norwegian Internet Exchange (NIX)—we dropped that time to 7 minutes. That is an 81% reduction. Here is how we did it.

1. Abandon Shared Runners

Shared runners suffer from the "noisy neighbor" effect. You are sharing CPU cycles with thousands of other builds. For consistent performance, you need a self-hosted runner. This gives you dedicated resources and, crucially, persistence.

On a persistent runner, you don't start from zero every time. The Docker cache remains. The `node_modules` or `vendor` folders can be cached locally on the filesystem, bypassing the network entirely.
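As a rough sketch, here is what that local caching looks like in `.gitlab-ci.yml`, assuming a Node project (the job name and paths are illustrative; keying the cache to the lockfile means it only invalidates when dependencies actually change):

# .gitlab-ci.yml (fragment) -- adjust paths for your stack
test:
  stage: test
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
  script:
    - npm ci
    - npm test

On a persistent runner this cache sits on local disk, so restoring it is a filesystem copy rather than a download from remote object storage.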

Here is how you register a robust GitLab runner on a CoolVDS Linux instance (Ubuntu 20.04/22.04 LTS):

# Install GitLab Runner
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install -y gitlab-runner

# Register the runner (Replace TOKEN and URL)
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "docker:20.10.16" \
  --description "coolvds-nvme-runner-oslo" \
  --tag-list "nvme,fast,norway" \
  --run-untagged="false" \
  --locked="false" \
  --access-level="not_protected"

Notice the tags. In your `.gitlab-ci.yml`, you target this runner explicitly with `tags: ['nvme']`. This ensures high-priority builds land on your high-performance hardware, not on a random slow container in the cloud.
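A minimal job definition might look like this (the job name, stage, and image tag are illustrative; `$CI_COMMIT_SHORT_SHA` is one of GitLab's predefined variables):

# .gitlab-ci.yml (fragment)
build:
  stage: build
  tags:
    - nvme
  script:
    - docker build -t myapp:$CI_COMMIT_SHORT_SHA .

Any job without the `nvme` tag will never be picked up by this runner, because we registered it with `--run-untagged="false"`.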

2. The I/O Bottleneck: Why NVMe Matters

CI/CD is disk-bound. Think about what happens during a build: thousands of small files are created (compilation), archived (zipping artifacts), and moved (Docker layer extraction). Standard SSDs often choke on the IOPS (Input/Output Operations Per Second) required for parallel builds.
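If you want hard numbers before blaming the disk, a short `fio` random-write run is a reasonable proxy for build workloads (the target directory, size, and runtime below are arbitrary; point it at the filesystem your builds actually touch):

# Benchmark 4K random writes, bypassing the page cache for honest numbers
sudo apt-get install -y fio
sudo fio --name=ci-disk-test --directory=/var/lib/docker \
    --rw=randwrite --bs=4k --size=1G --numjobs=4 \
    --direct=1 --runtime=30 --time_based --group_reporting

A healthy NVMe volume should report tens of thousands of IOPS here; throttled shared storage is often capped in the low thousands.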

Pro Tip: Monitor your CPU wait times (`iowait`). If your CPU usage is low but the build is slow, your disk is the bottleneck. CoolVDS infrastructure is built entirely on NVMe storage, offering up to 6x the read/write speeds of standard SATA SSDs found in budget VPS providers.
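During a live build, `iostat` from the `sysstat` package shows both numbers at once (the 2-second refresh interval is arbitrary):

# Watch %iowait in the CPU line and %util per device; Ctrl+C to stop
sudo apt-get install -y sysstat
iostat -x 2

Low `%user` combined with high `%iowait` and a device pinned near 100% `%util` is the classic signature of a disk-bound pipeline.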

3. Optimizing Docker Caching

If you are rebuilding your entire dependency tree on every commit, you are wasting money. Docker caches each layer based on the instruction itself and, for `COPY` and `ADD`, a checksum of the files being copied.

Bad Dockerfile Pattern:

FROM node:16
COPY . .
RUN npm install
CMD ["npm", "start"]

In the example above, if you change a single line of code in `index.js`, Docker invalidates the `COPY . .` layer. Consequently, the `RUN npm install` layer (which comes after) is also invalidated. You re-download the internet every time you fix a typo.

Optimized Dockerfile Pattern:

FROM node:16-alpine
WORKDIR /app

# Copy only dependency definitions first
COPY package.json package-lock.json ./

# Install dependencies (this layer is cached unless package.json or the lockfile changes)
RUN npm ci --quiet

# Copy the rest of the source code
COPY . .

CMD ["npm", "start"]

By copying the dependency manifests separately, we ensure that `npm ci` only runs when dependencies actually change. On a CoolVDS runner with local layer caching enabled, this step becomes near-instantaneous for the vast majority of builds.
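One caveat: if your jobs build inside docker-in-docker, every job gets a fresh daemon with an empty layer cache. A common workaround, sketched here with a placeholder registry path, is BuildKit's inline cache, which seeds the build from the previously pushed image:

# Seed the layer cache from the last pushed image (registry path is a placeholder)
export DOCKER_BUILDKIT=1
docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --cache-from registry.example.com/myapp:latest \
  -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest

On a persistent runner that exposes the host daemon to jobs, you get the same effect for free, which is one more argument for step 1.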

4. Configuring a Local Registry Mirror

When you have 20 pipelines running simultaneously, you might hit Docker Hub's rate limits (which were tightened significantly in late 2020). Running a local registry mirror on your VPS saves bandwidth and avoids 429 errors.

Configure your runner's Docker daemon (`/etc/docker/daemon.json`) to use a pull-through cache. The example below points at Google's public mirror; you can swap in a mirror running on the VPS itself, shown after the restart step:

{
  "registry-mirrors": ["https://mirror.gcr.io"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2"
}

Restart Docker with `sudo systemctl restart docker`. This simple change reduces external network calls, keeping traffic within the fast internal network where possible.
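If you would rather keep the mirror on the VPS itself, the stock `registry:2` image runs as a pull-through cache with a single environment variable (the port and storage path below are illustrative):

# Run a local pull-through cache of Docker Hub
docker run -d --restart=always --name registry-mirror \
  -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v /var/lib/registry-mirror:/var/lib/registry \
  registry:2

Then point `registry-mirrors` at `http://localhost:5000` in `daemon.json` and restart Docker once more; localhost registries are trusted by default, so no TLS setup is required.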

Data Sovereignty and GDPR Compliance

We cannot talk about infrastructure in 2022 without addressing the elephant in the room: Schrems II. The transfer of personal data to the US is legally precarious. Source code often contains hardcoded secrets (it shouldn't, but it does) or PII in test databases.

Using a shared runner hosted by a US provider technically constitutes a data transfer. By hosting your CI/CD runner on CoolVDS in Norway, you ensure that your code, artifacts, and test data remain within the EEA/Norwegian jurisdiction. This satisfies the strict requirements of Datatilsynet (The Norwegian Data Protection Authority) and keeps your Legal team happy.

The Hardware Reality Check

Software optimization only gets you so far. Eventually, you need raw horsepower. A burstable `t3.medium`-class instance on a public cloud gets throttled the moment its CPU credits run out, and noisy neighbors add jitter on top. That unpredictability ruins build time consistency.

For a reliable CI/CD pipeline, I recommend the following baseline specs for a dedicated runner:

| Resource | Requirement | Why? |
| --- | --- | --- |
| CPU | 4 vCores (dedicated) | Parallel compilation (e.g., `make -j4`) requires true concurrency. |
| RAM | 8 GB - 16 GB | Webpack and heavy Java builds are memory hogs. OOM kills are pipeline killers. |
| Storage | NVMe SSD | High IOPS for Docker layer extraction and artifact zipping. |
| Network | 1 Gbps port | Fast uploads to production servers. |

CoolVDS offers these specifications at a fraction of the cost of the hyperscalers, with the added benefit of predictable performance.

Conclusion

Slow pipelines are a choice. They bleed developer productivity and delay time-to-market. By bringing your CI/CD infrastructure in-house—or rather, to a managed VPS in Norway—you gain control over caching, hardware resources, and data sovereignty.

Don't let latency dictate your deployment schedule. Spin up a high-performance NVMe instance on CoolVDS today, install a runner, and watch your build times plummet.