Stop Watching Progress Bars: Engineering a Faster CI/CD Pipeline
There is nothing more expensive in software development than a developer staring at a spinning wheel. If your team is deploying to production multiple times a day—or trying to—you know the pain. You push a commit, and then you wait. Ten minutes? Twenty? In the Nordic market, where developer salaries are among the highest in the world, that idle time isn't just annoying; it is a massive financial leak.
In 2017, the standard "DevOps" answer to slow builds is usually "add more slaves" to Jenkins. But horizontal scaling doesn't fix a fundamentally broken pipeline architecture or hardware that chokes under load. I recently audited a setup for a client in Oslo where builds were taking 45 minutes. We got it down to 8. Here is the technical breakdown of how we did it, focusing on the three pillars: I/O throughput, Docker layer strategy, and network proximity.
1. The Hidden Killer: Disk I/O Wait
Most developers treat CI servers like generic compute units. They aren't. CI/CD workloads are incredibly I/O intensive. Think about what happens during a standard build: git clone (write), npm install or mvn clean install (massive small-file read/write), Docker image creation (layer writing), and artifact archiving (write).
On standard spinning rust (HDD) or even oversold SSD VPS hosting, your CPU spends half its cycles in iowait. You can diagnose this easily on your current build server. Run this while a build is in progress:
vmstat 1
Look at the wa (I/O wait) column. If you see numbers consistently above 10-15, your CPU is sitting idle, waiting for the disk to catch up. This is common in "budget" VPS environments built on OpenVZ, where you share a single disk queue with hundreds of noisy neighbors.
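If you want a single number instead of eyeballing the column, here is a quick sketch. It assumes the standard procps vmstat output, where wa is the 16th field; the 15% threshold is a rule of thumb, not a hard limit.

```shell
# Average the "wa" column over 5 one-second samples (field 16 in the
# default vmstat layout). Run this while a build is in progress.
avg_wa=$(vmstat 1 5 | tail -n +3 | awk '{ sum += $16; n++ } END { printf "%d", sum / n }')
echo "average iowait: ${avg_wa}%"
if [ "$avg_wa" -gt 15 ]; then
  echo "WARNING: CPU is stalling on disk -- this build server is I/O bound"
fi
```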
The Fix: We migrated the workload to CoolVDS instances backed by NVMe storage. NVMe (Non-Volatile Memory Express) drives talk to the CPU directly over the PCIe bus, bypassing the SATA interface and its protocol overhead entirely. The difference in simple file-extraction tasks is staggering.
Benchmark: Extracting a large tarball
| Storage Type | Time to Extract (5GB Source) |
|---|---|
| Standard HDD (7.2k RPM) | ~42 seconds |
| SATA SSD (Shared) | ~14 seconds |
| CoolVDS NVMe | ~3.5 seconds |
When your build involves unzipping thousands of node modules or compiling Java classes, that speed difference compounds.
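You can reproduce a scaled-down version of this benchmark on your own hardware. The sketch below builds a synthetic archive of many small files (the shape of a node_modules tree) and times its extraction; the file count and paths are placeholders, so scale them up for a meaningful number on a real disk.

```shell
set -e
cd "$(mktemp -d)"
mkdir src
# 2,000 x 4KB files approximates the many-small-files pattern of a dependency tree
for i in $(seq 1 2000); do head -c 4096 /dev/urandom > "src/file_$i"; done
tar czf payload.tar.gz src
rm -rf src
sync
time tar xzf payload.tar.gz   # compare this wall-clock time across storage tiers
```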
2. Docker Build Caching Strategy
Docker is revolutionizing how we ship, but many teams write Dockerfiles that completely negate the caching mechanism. I still see this pattern in production constantly:
# THE WRONG WAY
FROM node:6.9.5
WORKDIR /app
COPY . /app
RUN npm install
CMD ["npm", "start"]
Every time you change a single line of code in your source files, Docker invalidates the cache for the COPY . /app layer. This forces npm install to run again from scratch. In 2017, fetching dependencies is often the slowest part of the pipeline.
The Fix: Order matters. Copy only the dependency definitions first.
# THE OPTIMIZED WAY
FROM node:6.9.5
WORKDIR /app
# Only copy package.json first
COPY package.json /app/
# Install dependencies. This layer is now cached unless package.json changes.
RUN npm install
# Now copy the rest of the source code
COPY . /app
CMD ["npm", "start"]
This simple change reduced build times by 60% for the client because 95% of commits do not alter dependencies.
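One related tweak worth pairing with this: a .dockerignore file. Without it, COPY . /app drags your local node_modules, Git metadata, and logs into the build context, bloating it and busting the cache on files the image never needs. A minimal example (the entries are typical for a Node project, adjust to your repo):

```
# .dockerignore -- keep junk out of the build context
node_modules
.git
npm-debug.log
```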
3. Jenkins Pipeline Parallelization
With Jenkins 2.0 (released last year), we moved from freestyle jobs to Pipeline as Code (Jenkinsfile). This allows us to visualize the stage view, but more importantly, it allows parallel execution. Don't run your linting, unit tests, and integration tests in a sequence if they don't depend on each other.
Pro Tip: Keep your executors busy. If you have a 4-core CoolVDS instance, you should be running 4 heavy streams or multiple light streams simultaneously.
Here is a Groovy snippet for a Jenkinsfile that parallelizes the independent tasks:
node {
    stage('Preparation') {
        git 'https://github.com/yourcompany/repo.git'
    }
    stage('Parallel Test') {
        parallel 'Unit Tests': {
            // Run unit tests
            sh 'mvn test'
        }, 'Integration Tests': {
            // Run slower integration tests; skip the unit-test phase (Surefire)
            sh 'mvn verify -Dsurefire.skip=true'
        }, 'Static Analysis': {
            // Check code quality
            sh '/usr/local/sonar-scanner/bin/sonar-scanner'
        }
    }
    stage('Build & Push') {
        docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
            def app = docker.build("my-image:${env.BUILD_ID}")
            app.push()
        }
    }
}
4. Data Sovereignty and Network Latency
Norway is in a unique position. With the GDPR taking effect in May 2018, Datatilsynet (the Norwegian Data Protection Authority) is becoming stricter about where data lives and how it moves. If your CI/CD pipeline handles production database dumps for sanitization, that data must be processed on compliant infrastructure.
Furthermore, latency matters. If your Git repository is hosted on a US server but your build agents are in Oslo, you are adding latency to every git fetch. Using a local mirror or hosting your GitLab instance on a CoolVDS server in Oslo connects you directly to the NIX (Norwegian Internet Exchange). The latency drops from ~120ms (trans-Atlantic) to ~2ms.
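Setting up such a mirror is essentially two commands. The sketch below uses a throwaway local repository as the "remote" so it can run anywhere; in practice UPSTREAM would be your hosted Git URL, and the refresh command would run from cron on the build network.

```shell
set -e
UPSTREAM=/tmp/upstream.git   # demo stand-in; in production, your hosted Git URL
MIRROR=/tmp/mirror.git       # local mirror the build agents clone from
rm -rf "$UPSTREAM" "$MIRROR" /tmp/work

# --- create a stand-in "remote" repository with one commit (demo only) ---
git init -q --bare "$UPSTREAM"
git clone -q "$UPSTREAM" /tmp/work 2>/dev/null
cd /tmp/work
touch README && git add README
git -c user.email=ci@example.com -c user.name=ci commit -qm "init"
git push -q origin HEAD

# --- the actual mirroring commands for the build network ---
git clone -q --mirror "$UPSTREAM" "$MIRROR"
git --git-dir="$MIRROR" remote update --prune >/dev/null  # run this from cron
```

Build agents then clone from $MIRROR over the local network instead of reaching across the Atlantic on every fetch.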
Tuning the Network Stack
For high-throughput build servers pushing large Docker images, default Linux TCP settings are often too conservative. We tweak /etc/sysctl.conf to allow for larger window sizes:
# Improve TCP throughput for high latency links
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
Apply these with sysctl -p.
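To confirm the kernel actually picked the new values up (no reboot needed), read them back:

```shell
# Each line should echo the value set in /etc/sysctl.conf above
sysctl net.ipv4.tcp_window_scaling
sysctl net.core.rmem_max net.core.wmem_max
```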
The Bottom Line
You cannot script your way out of bad hardware. While optimizing your Dockerfile and parallelizing your Jenkinsfile are mandatory steps for any competent DevOps engineer, the foundation remains the infrastructure.
Virtualization overhead and slow I/O are the silent killers of agility. We built CoolVDS on KVM with local NVMe storage specifically to solve this problem for professionals who understand that saving 5 minutes per build, 20 times a day, equals weeks of gained productivity per year.
Is your pipeline stuck in the slow lane? Spin up a high-performance instance on CoolVDS today and watch those npm install times drop.