Accelerating CI/CD Pipelines: Overcoming I/O Bottlenecks & Data Sovereignty in Norway

Stop Staring at the Progress Bar

There is nothing more soul-crushing for a development team than a 25-minute build time for a three-line code change. I have walked into engineering departments in Oslo where developers treat the "Push to Master" button as an excuse to go grab a coffee, play a round of foosball, and lose their entire flow state. In 2021, this is unacceptable.

The bottleneck usually isn't your code complexity. It isn't even CPU cycles. In 90% of the audits I perform across the Nordics, the culprit is Disk I/O latency and network throughput. When you are running npm install, composer update, or compiling Rust crates inside a Docker container, you are hammering the filesystem with thousands of small read/write operations. On a standard HDD-backed VPS or a crowded public cloud instance, your pipeline stalls waiting for the disk.

Here is how to fix your CI/CD architecture, keep it compliant with Datatilsynet's strict interpretation of the GDPR, and understand why raw infrastructure matters more than your choice of CI tool.

The "Noisy Neighbor" Effect on Build Times

Let's look at a war story. I recently debugged a Jenkins pipeline for a fintech client in Bergen. Their builds fluctuated wildly—sometimes 5 minutes, sometimes 45. They blamed Jenkins. They blamed the JVM. They blamed the network.

I ran a simple diagnostic using iotop during a build spike. The results were damning.

Total DISK READ :       0.00 B/s | Total DISK WRITE :     482.35 K/s
Actual DISK READ:       0.00 B/s | Actual DISK WRITE:     950.11 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 1204 be/4 root        0.00 B/s  412.30 K/s  0.00 %  94.12 %  java -jar jenkins.war

Look at that IO> 94.12%. The CPU was idle. The process was just waiting for the disk to acknowledge the write. This happens when you use budget VPS providers that oversell their storage arrays. You are fighting for IOPS with hundreds of other tenants.
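
If you suspect your current provider of overselling, quantify it before you migrate. Below is a minimal fio sketch that measures 4K random-write IOPS with direct I/O, roughly the access pattern a dependency install produces; the test directory is a placeholder, so point it at the disk your runner actually builds on.

# 4K random writes, direct I/O, 60 seconds - approximates npm/composer behaviour
# Point --directory at the filesystem your CI workspace lives on
fio --name=ci-randwrite --directory=/tmp/fio-test \
    --rw=randwrite --bs=4k --size=1G \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting

On a healthy NVMe-backed instance this typically reports tens of thousands of IOPS; a number in the low thousands or below points at the same contention that iotop exposed above.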

This is why we standardized on CoolVDS for our build agents. They use KVM virtualization, which provides stricter isolation than OpenVZ or LXC containers, and crucially, they run on pure NVMe arrays. The difference in IOPS between a SATA SSD and NVMe is not incremental; it is an order of magnitude. For a CI/CD job involving thousands of small files (like `node_modules`), NVMe reduces the "unpacking" stage of a build by roughly 60%.

Optimizing Docker for I/O Efficiency

Assuming you have hardware that isn't fighting you, you need to configure your software to stop wasting time. In mid-2021, if you aren't using Docker BuildKit, you are wrong.

BuildKit allows for parallel build stages and better caching. However, it needs to be explicitly enabled in many environments. In your GitLab CI or Jenkins shell execution, export the variable:

export DOCKER_BUILDKIT=1
docker build -t my-app:latest .
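
If you control the build host, you can also enable BuildKit permanently in the Docker daemon config instead of exporting the variable in every job. A sketch of /etc/docker/daemon.json (merge with whatever is already there, then restart the daemon):

{
  "features": {
    "buildkit": true
  }
}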

But the real speed gain comes from optimizing the Dockerfile to leverage the layer cache effectively. Do not copy your source code before installing dependencies.

The Wrong Way:

FROM node:14-alpine
WORKDIR /app
COPY . .
RUN npm ci
CMD ["node", "server.js"]

The Right Way (Layer Caching):

FROM node:14-alpine
WORKDIR /app
# Copy only package files first
COPY package*.json ./

# Install dependencies. This layer is cached unless package.json changes
RUN npm ci --only=production

# Now copy source code
COPY . .
CMD ["node", "server.js"]

By splitting the copy command, Docker detects that package.json hasn't changed and reuses the cached node_modules layer. On a CoolVDS instance with fast local NVMe storage, this cache restoration is nearly instantaneous.
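
With BuildKit enabled you can go one step further and persist the package manager's own cache across builds with a cache mount. A sketch assuming the docker/dockerfile:1.2 syntax image; the mount target is npm's default cache directory:

# syntax=docker/dockerfile:1.2
FROM node:14-alpine
WORKDIR /app
COPY package*.json ./

# npm's download cache survives between builds, even when package.json changes
RUN --mount=type=cache,target=/root/.npm \
    npm ci --only=production

COPY . .
CMD ["node", "server.js"]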

The Legal Bottleneck: Schrems II and Data Sovereignty

We cannot discuss hosting in 2021 without addressing the elephant in the server room: the Schrems II ruling. The CJEU declared the Privacy Shield invalid. If your CI/CD pipeline runs on US-controlled cloud infrastructure (even if the region is "EU-West"), and that pipeline processes production databases or personally identifiable information (PII) for testing, you are in a legal gray zone.

I advise CTOs to keep the entire DevOps lifecycle within the EEA (European Economic Area), and preferably on Norwegian soil to minimize latency to the NIX (Norwegian Internet Exchange). Hosting your GitLab Runners or Jenkins Agents on CoolVDS in Oslo ensures two things:

  1. Compliance: Data never traverses the Atlantic. It stays under Norwegian jurisdiction.
  2. Latency: If your developers are in Trondheim or Oslo, pushing code to a server in Frankfurt adds unnecessary milliseconds. Pushing to a server in Oslo is effectively LAN-speed.
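
Point 2 is easy to verify from the office network: a few round trips against the runner's address tell you what every git push and artifact upload will pay. The IP below is a placeholder.

# Round-trip time to the runner (placeholder address)
ping -c 20 10.0.0.50

# Per-hop latency and loss, useful if the path crosses an exchange point
mtr --report --report-cycles 20 10.0.0.50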

Configuring High-Performance GitLab Runners

If you are using GitLab, stop using the shared runners for heavy lifting. They are slow and insecure. Deploy a specific runner on a dedicated KVM instance.
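
Registering the runner on the new instance is a single command. A sketch with the docker executor; the registration token comes from your project's CI/CD settings, and the tag name is just an example:

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN_HERE" \
  --executor "docker" \
  --docker-image "docker:20.10.7" \
  --description "coolvds-nvme-runner-01" \
  --tag-list "nvme,docker" \
  --run-untagged="false"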

Here is a production-ready config.toml optimization for a 4-vCPU CoolVDS instance. We raise concurrency so several jobs can run in parallel, and we leave the distributed cache sections unconfigured so the cache stays on the runner's local NVMe disk instead of being uploaded over the uplink after every job.

concurrent = 4
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "coolvds-nvme-runner-01"
  url = "https://gitlab.com/"
  token = "YOUR_TOKEN_HERE"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.7"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0

Pro Tip: Mounting /var/run/docker.sock allows the container to spawn sibling containers rather than using Docker-in-Docker (dind). This is faster and avoids filesystem overlay complexity, but has security implications. Only do this on isolated, trusted runner instances like a private VPS.
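
For reference, a job that builds through the mounted socket looks like a plain docker build; no dind service block is required. A minimal .gitlab-ci.yml sketch (image name and tag are examples):

build-image:
  stage: build
  image: docker:20.10.7
  tags:
    - nvme
  script:
    # Talks to the host daemon via the mounted socket, so layers are cached on the NVMe disk
    - docker build -t my-app:$CI_COMMIT_SHORT_SHA .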

Database Tuning for Integration Tests

Integration tests are the second biggest time-sink. If your pipeline spins up a MySQL or PostgreSQL container, default configurations are tuned for low memory usage, not speed. You need to disable durability for tests. We don't care if data is lost if the test container crashes; we care about write speed.

When spinning up MySQL in CI, map a custom config file that sets innodb_flush_log_at_trx_commit = 2. This stops the database from fsyncing the redo log to disk after every transaction; the log is flushed roughly once per second instead.

[mysqld]
# CI/CD Optimization - DO NOT USE IN PRODUCTION
innodb_flush_log_at_trx_commit = 2
sync_binlog = 0
innodb_buffer_pool_size = 1G
skip-log-bin
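
How you map the file depends on your executor. With the docker socket approach above, the simplest route is to start a throwaway database yourself and bind-mount the config into the official image's include directory; the file name ci-mysql.cnf is an example:

# Disposable MySQL for the test stage, using the relaxed settings above
docker run -d --name ci-mysql \
  -e MYSQL_ROOT_PASSWORD=ci -e MYSQL_DATABASE=app_test \
  -v "$PWD/ci-mysql.cnf:/etc/mysql/conf.d/ci.cnf:ro" \
  -p 3306:3306 \
  mysql:8.0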

This simple change can reduce integration test suites from 10 minutes to 3 minutes, provided the underlying disk can handle the throughput. Again, NVMe storage handles this aggressive caching significantly better than standard SSDs.

The Verdict

You can tweak Dockerfiles and MySQL configs all day, but if your underlying infrastructure suffers from I/O wait or network throttling, your pipeline will remain sluggish. In the Norwegian market, where compliance and speed are paramount, relying on oversold shared hosting is a strategic error.

For pipelines that require heavy lifting—compiling binaries, rendering assets, or massive integration suites—you need dedicated resources. CoolVDS offers the NVMe throughput and the KVM isolation necessary to turn a 20-minute wait into a 3-minute build. Don't let slow hardware throttle your team's output.

Ready to optimize? Spin up a high-frequency NVMe instance on CoolVDS today and watch your queue times drop.