CI/CD Pipeline Optimization: Eliminating I/O Bottlenecks in Jenkins & GitLab

There is nothing more soul-crushing than pushing a critical hotfix and staring at a Jenkins progress bar for 45 minutes. I have seen development teams in Oslo burn thousands of kroner in man-hours simply waiting for Maven to finish compiling or a Docker image to push. If your pipeline is slow, your agility is zero. It doesn't matter how fast you code if your deployment process is stuck in the mud.

In 2017, the bottleneck is rarely CPU. It’s rarely RAM. It is almost always Disk I/O.

In this guide, we aren't talking about theory. We are talking about ripping out the inefficiencies in your CI/CD pipeline, specifically for teams hosting in the Nordic region where latency and reliability are non-negotiable. We will cover caching strategies, Docker layer optimization, and why the underlying hardware of your VPS provider is likely the saboteur.

The Silent Killer: I/O Wait

I recently audited a setup for a fintech client in Stavanger. Their GitLab CI runners were taking 20 minutes to build a simple Java microservice. They blamed Gradle. They blamed the network. They were wrong.

We logged into the runner and ran top. CPU usage was low, but the wa (I/O wait) percentage was hovering around 40%. The CPU was sitting idle, waiting for the disk to finish reads and writes. This is the hallmark of "noisy neighbors": other tenants on the same physical host monopolizing the disk bandwidth.

You can diagnose this immediately with iostat. If you don't have it, install sysstat.

apt-get update && apt-get install -y sysstat

Then run this to see extended statistics:

iostat -x 1 10

Look at the %util and await columns. await is the average time, in milliseconds, that each I/O request spends queued and serviced; if it climbs into the triple digits while your build is running, your storage solution is failing you. On standard HDD or even SATA SSD VPS hosting, this is common. This is why at CoolVDS, we standardized on KVM with NVMe storage. We don't oversell IOPS, because in a CI/CD environment, high I/O throughput is mandatory, not a luxury.
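
If you want hard numbers rather than a hunch, fio will tell you what the disk can actually sustain. A minimal random-read sketch, assuming fio is installed and /tmp has a gigabyte to spare (the test file name is arbitrary):

apt-get install -y fio
fio --name=ci-disk-test --filename=/tmp/fiotest --size=1G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=30 --time_based --group_reporting
rm /tmp/fiotest

Compare the reported IOPS against the figures later in this article; a budget VPS often struggles to reach four digits.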

Optimizing the Docker Build Context

Most pipelines today are moving towards Docker-based workflows. Whether you are using Jenkins 2.0 Pipelines or GitLab CI, you are likely building images. A common mistake I see is sending the entire project root to the Docker daemon.

If you have a .git folder, build artifacts, or huge dependency folders, you are wasting time copying context. Use a .dockerignore file. It works exactly like .gitignore.

.git
node_modules
target
*.log
.DS_Store
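
Before and after adding the ignore file, it is worth seeing how much dead weight you were shipping. A quick check, using the same folder names as the list above (adjust for your project):

# How big is what you were sending to the daemon?
du -sh .git node_modules target

# Docker itself prints the context size on every build:
# "Sending build context to Docker daemon  312.5 MB"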

Furthermore, order matters. Docker caches layers based on the instruction itself and, for COPY and ADD, a checksum of the files being copied. If you copy your source code before installing dependencies, you invalidate the cache every time you change a single line of code. Stop doing this.

The Wrong Way:

FROM node:6
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]

The Battle-Hardened Way:

FROM node:6
WORKDIR /app
# Copy package.json first to leverage cache
COPY package.json ./
RUN npm install
# Now copy source
COPY . .
CMD ["npm", "start"]

By splitting the copy command, npm install only runs if package.json changes. This alone can shave 5 minutes off a build.
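
You can watch the cache do its work. A quick demonstration, assuming a hypothetical image name myapp and a source tree under src/:

# First build populates the layer cache
docker build -t myapp .

# Touch source code only; package.json is untouched
touch src/index.js
docker build -t myapp .
# The second build prints "---> Using cache" for the npm install layer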

Pipeline Caching: Jenkins vs. GitLab

Downloading dependencies over the public internet is a gamble. Even with Norway's excellent fiber infrastructure, upstream repositories like Maven Central or npmjs.org can have hiccups. You must cache dependencies locally.

GitLab CI Configuration

In GitLab 9.x, the cache definition in .gitlab-ci.yml is powerful. Here is a configuration snippet for a Node.js project that caches the node_modules folder across builds, keyed by the branch name.

image: node:6.10

stages:
  - build
  - test

cache:
  key: "$CI_COMMIT_REF_NAME"
  paths:
    - node_modules/

build_job:
  stage: build
  script:
    - npm install
    - npm run build

test_job:
  stage: test
  script:
    - npm test
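
One refinement if you are running GitLab 9.4 or newer: a job that only consumes the cache can skip re-uploading it when it finishes. A sketch of the test job with a pull-only cache policy:

test_job:
  stage: test
  cache:
    key: "$CI_COMMIT_REF_NAME"
    paths:
      - node_modules/
    policy: pull
  script:
    - npm test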

Jenkins Pipeline (Jenkinsfile)

With Jenkins 2.x and the Pipeline plugin (2.5 or newer), we use the declarative pipeline syntax. It is cleaner and stores your build logic in Git. Here is how we handle a Maven build, ensuring we reuse the local repository.

pipeline {
    agent {
        docker {
            image 'maven:3.3.9-jdk-8'
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
    }
}

Pro Tip: Note the args '-v /root/.m2:/root/.m2'. We are mounting the host's Maven repository into the container. Without this, Maven downloads the internet every single time the container starts. This is a massive I/O saver. On a CoolVDS instance, this cache read is near-instant thanks to local NVMe speeds.
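
Verifying the cache is trivial: watch the repository directory on the host grow after the first run, then confirm later builds stop downloading. Paths as in the Jenkinsfile above:

# On the build host, after the first pipeline run
du -sh /root/.m2/repository

# With a warm cache, subsequent build logs should show no
# "Downloading:" lines for artifacts that are already present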

Infrastructure: Why Virtualization Matters

Not all VPS are created equal. In the hosting world, there are two main approaches to virtualization: container-based (OpenVZ/LXC) and full hardware virtualization (KVM/Xen).

  • OpenVZ: You share the kernel with the host and other tenants. If a neighbor decides to compile the Linux kernel, your CI pipeline slows down. Resources are often "burstable," which is marketing speak for "oversold."
  • KVM (CoolVDS Standard): You have a dedicated kernel. Memory and CPU allocations are strict. Most importantly, it allows for better isolation of I/O operations.

For CI/CD, where disk writes are heavy (unpacking archives, compiling objects, building Docker layers), you need high IOPS (Input/Output Operations Per Second). Traditional spinning hard drives (HDD) offer about 80-120 IOPS. SATA SSDs offer 5,000-10,000. NVMe drives can push 20,000+ easily.

Metric          | Typical Budget VPS            | CoolVDS KVM (NVMe)
Virtualization  | OpenVZ / Shared Kernel        | KVM / Dedicated Kernel
Storage Type    | SATA SSD (or HDD)             | NVMe PCIe SSD
Read Latency    | 2-5 ms                        | < 0.1 ms
Docker Support  | Limited / Requires Privileged | Native / Full Support
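
Latency figures like these are easy to verify yourself. The ioping utility (packaged in Debian and Ubuntu) probes real disk latency; pointing it at /var/lib/docker tests the filesystem your builds actually write to:

apt-get install -y ioping
# Ten latency probes against the Docker storage directory
ioping -c 10 /var/lib/docker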

Data Sovereignty and The Norwegian Context

We are approaching a new era of data privacy. With the GDPR enforcement date set for May 2018, less than a year away, European companies are scrambling. Datatilsynet (The Norwegian Data Protection Authority) is clear: you must know where your data lives.

Hosting your CI/CD pipeline outside of the EEA (European Economic Area) introduces legal complexity. If your code contains production database dumps for testing (bad practice, but it happens) or customer PII, that data cannot simply sit on a server in the US.

CoolVDS infrastructure is located within Europe, ensuring low latency to NIX (Norwegian Internet Exchange) and compliance with EU data directives. When your build server talks to your production server in Oslo, you want that traffic to stay local, fast, and legal.

Setting Up a Local Docker Registry

Pushing every build to Docker Hub is slow and public. Run a local registry on your VPS. It’s a simple Docker container.

# Bind to localhost so only the Nginx proxy below can reach it
docker run -d -p 127.0.0.1:5000:5000 --restart=always --name registry registry:2
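
A quick sanity check that the registry came up: its v2 API answers on the loopback interface, and a fresh instance reports an empty catalog.

curl http://localhost:5000/v2/_catalog
# {"repositories":[]}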

However, you need to secure it. Here is an Nginx configuration snippet to proxy the registry with basic auth and SSL (because Let's Encrypt is free and essential in 2017).

server {
    listen 443 ssl;
    server_name registry.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/registry.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/registry.yourdomain.com/privkey.pem;

    # Basic auth for the registry (htpasswd file created below)
    auth_basic "Docker Registry";
    auth_basic_user_file /etc/nginx/htpasswd.registry;

    # Docker clients expect this header, even on 401 responses
    add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Disable request size limit for large images
        client_max_body_size 0;
    }
}
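
The configuration above references an htpasswd file. One way to create it is with the htpasswd tool from apache2-utils; the file path and the ciuser username are placeholders, so adjust to taste:

apt-get install -y apache2-utils
htpasswd -Bc /etc/nginx/htpasswd.registry ciuser

# Then log in and push through the proxy
docker login registry.yourdomain.com
docker tag myapp:latest registry.yourdomain.com/myapp:latest
docker push registry.yourdomain.com/myapp:latest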

Conclusion

Optimization is the art of removing friction. In CI/CD, friction is disk latency and redundant network calls. By structuring your Dockerfiles intelligently, caching aggressively, and hosting on hardware that doesn't choke under load, you turn deployment from a painful chore into a non-event.

Don't let slow hardware dictate your release schedule. If you are serious about DevOps, stop fighting with legacy infrastructure. Deploy a CoolVDS NVMe instance today and see your build times drop by 50%.