Orchestration Wars 2024: Kubernetes, Nomad, or Just a Shell Script?
It is July 2024. If I hear one more startup CTO say they need a multi-region Kubernetes federation for a PHP shop serving 5,000 users in Trondheim, I might snap. The industry has fetishized complexity. We treat infrastructure like a resume-building exercise rather than a foundation for business logic.
I’ve spent the last decade debugging race conditions and watching etcd clusters implode because someone cheaped out on disk IOPS. The reality of container orchestration in Norway—and Europe at large—isn't about who has the most YAML files. It’s about latency, compliance, and total cost of ownership (TCO).
Let's strip away the marketing fluff. We are going to look at the three contenders left standing in 2024: the behemoth (Kubernetes), the sniper (HashiCorp Nomad), and the zombie (Docker Swarm). And we’re going to discuss where they actually run best.
The Behemoth: Kubernetes (k8s)
Kubernetes won the war. It is the operating system of the cloud. With the release of v1.30 earlier this year, it’s more stable than ever. But it is heavy. Running a control plane consumes resources that you pay for, whether you use them or not.
The Pain Point: Latency and Storage.
The Kubernetes control plane is mostly stateless, but its brain, etcd, is not. If your underlying storage has high latency, your cluster effectively freezes. I recently audited a setup where API server latency spiked to 500ms. The culprit? Shared storage on a budget VPS provider throttling IOPS.
If you are running K8s, you need access to raw NVMe performance. You need to tune your kernel parameters.
Configuration Reality Check
Don't just apt-get install and pray. For a production-grade node in 2024, you need to adjust your sysctl settings to handle the networking load.
```shell
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1

# Increase the number of memory map areas for heavy Elasticsearch/DB pods
vm.max_map_count = 262144

# Avoid neighbor (ARP) table overflows in large clusters
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
```
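Whether those values actually took can be confirmed without rebooting: after loading them with `sysctl --system` (as root), every key maps to a file under `/proc/sys`, with the dots replaced by slashes. A quick read-only spot check:

```shell
# Apply the fragment with `sysctl --system` (as root), then verify.
# Reading /proc/sys needs no privileges; dots in the key become slashes.
val=$(cat /proc/sys/net/ipv4/ip_forward)
echo "net.ipv4.ip_forward = ${val}"   # expect 1 on a configured node
```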
Pro Tip: Check your `fsync` latency before deploying `etcd`. If the 99th percentile fsync is over 10ms, your cluster will be unstable. On CoolVDS NVMe instances, we typically see sub-1ms fsync times, which is why we are the reference architecture for self-hosted K8s in Oslo.
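You can measure this yourself with `fio`, reproducing the access pattern of etcd's write-ahead log: small sequential writes with an `fdatasync` after each one. A minimal sketch (the scratch directory is an arbitrary choice):

```shell
# Requires fio; /tmp/etcd-bench is an arbitrary scratch directory.
if command -v fio >/dev/null 2>&1; then
  mkdir -p /tmp/etcd-bench
  # 2300-byte sequential writes with fdatasync after every write --
  # the same pattern etcd's write-ahead log produces.
  fio --name=etcd-fsync --directory=/tmp/etcd-bench \
      --rw=write --ioengine=sync --fdatasync=1 \
      --size=22m --bs=2300
  rm -rf /tmp/etcd-bench
  ran=1
else
  echo "fio not installed -- apt-get install -y fio"
  ran=0
fi
# In fio's output, read the "fsync/fdatasync" latency percentiles:
# the 99th percentile should stay under 10000 usec (10ms).
```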
The Sniper: HashiCorp Nomad
While Kubernetes tries to do everything (networking, storage, secrets), Nomad focuses purely on scheduling. It is a single binary. It is fast. It is simple.
For a project last month involving a high-traffic media processor in Bergen, we ditched K8s for Nomad. Why? We didn't need the overlay network overhead. We needed raw compute speed and simple job definitions.
A Nomad job specification is readable by humans, unlike the verbose hellscape of Helm charts.
Nomad Job Example
Look at how clean this is compared to a K8s Deployment + Service + Ingress combo:
```hcl
job "cache-service" {
  datacenters = ["oslo-dc1"]
  type        = "service"

  group "cache" {
    count = 3

    network {
      port "db" {
        to = 6379
      }
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis:7.2-alpine"
        ports = ["db"]
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```
If you are a small DevOps team of 1-3 people, Nomad allows you to sleep at night. Kubernetes requires a team just to manage the cluster.
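Assuming a reachable Nomad agent and the job above saved as `cache-service.nomad.hcl` (the file name is my choice), shipping it is a three-command workflow:

```shell
# Validate syntax locally, preview placement, then submit.
nomad job validate cache-service.nomad.hcl
nomad job plan cache-service.nomad.hcl   # dry run: shows the scheduling diff
nomad job run cache-service.nomad.hcl

# Watch the three redis allocations come up
nomad job status cache-service
```

The `plan` step is the part Kubernetes makes you bolt on with third-party diff tooling; in Nomad it is built into the CLI.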
The Infrastructure Layer: Where the Battle is Lost
Regardless of your orchestrator, your software is only as good as the hardware it runs on. This is where most European companies fail. They deploy onto oversold cloud instances where "vCPU" is a marketing term, not a technical guarantee.
The "Steal Time" Killer
In virtualized environments, "Steal Time" (%st) occurs when the hypervisor is busy serving other tenants (noisy neighbors) and your VM waits for CPU cycles. In a containerized environment, high steal time causes liveness probes to fail, pods to restart randomly, and 502 errors.
You can check this right now on your current server:
```shell
$ top -bn1 | grep "Cpu(s)"
%Cpu(s):  5.9 us,  2.1 sy,  0.0 ni, 91.8 id,  0.1 wa,  0.0 hi,  0.1 si,  0.0 st
```
If that last number (st) is consistently above 0.5%, you are being ripped off. Your orchestrator will think the node is unhealthy.
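If you would rather alert on steal time than eyeball it, the `st` field can be pulled out of that same `top` line with awk. A small sketch (the function name and 0.5% threshold are my own, matching the rule of thumb above):

```shell
# Extract the steal-time percentage from top's CPU summary line.
# The hardcoded sample line below mirrors top's typical output format.
check_steal() {
  echo "$1" | awk -F',' '{
    for (i = 1; i <= NF; i++)
      if ($i ~ /st[ ]*$/) { gsub(/[^0-9.]/, "", $i); print $i }
  }'
}

line='%Cpu(s):  5.9 us,  2.1 sy,  0.0 ni, 91.8 id,  0.1 wa,  0.0 hi,  0.1 si,  1.2 st'
st=$(check_steal "$line")

# Flag anything above the 0.5% rule of thumb
awk -v v="$st" 'BEGIN { exit !(v > 0.5) }' \
  && echo "WARNING: steal time ${st}% - noisy neighbor"
```

Feed it the live line from `top -bn1 | grep "Cpu(s)"` in a cron job and you have a crude noisy-neighbor detector.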
This is the CoolVDS difference. We use KVM virtualization with strict resource isolation. When you buy 4 vCPUs, you get the cycles you paid for. For container orchestration, this consistency is non-negotiable.
Data Sovereignty: The Norwegian Context
Since the Schrems II ruling and the continuous tightening of Datatilsynet guidelines, hosting data outside the EEA is a legal minefield. Even hosting in the EU on a US-owned cloud provider (AWS, Azure, GCP) puts you in a gray area regarding the US CLOUD Act.
Running your orchestration layer on CoolVDS in our Oslo data center solves this immediately:
- Latency: <2ms to NIX (Norwegian Internet Exchange).
- Compliance: 100% Norwegian jurisdiction.
- Cost: Zero egress fees for local traffic.
Comparison: Choosing Your Weapon
| Feature | Kubernetes | Nomad | Docker Swarm |
|---|---|---|---|
| Complexity | Extremely High | Low/Medium | Low |
| State Management | etcd (latency-sensitive) | Embedded Raft (single binary) | Embedded Raft |
| Scalability | 5000+ nodes | 10,000+ nodes | <100 nodes |
| Ideal For | Enterprise Microservices | Mixed Workloads (Binaries + Docker) | Simple Web Apps |
Setting Up a Robust Cluster Node
Whether you choose K3s (lightweight K8s) or Nomad, the bootstrap process on a clean CoolVDS instance is rapid. Here is a quick bootstrap script I use to prep a node for high-throughput container networking. This disables swap (mandatory for K8s) and optimizes I/O schedulers for NVMe.
```shell
#!/bin/bash
set -euo pipefail

# 1. Disable swap (kubelet refuses to start while swap is enabled)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# 2. Set the NVMe I/O scheduler to "none" (let the device handle queuing)
echo none > /sys/block/nvme0n1/queue/scheduler

# 3. Increase the connection tracking table size for container networking
echo "net.netfilter.nf_conntrack_max=131072" > /etc/sysctl.d/99-conntrack.conf
sysctl --system

# 4. Install the container runtime (containerd)
apt-get update && apt-get install -y containerd
```
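Before installing K3s or Nomad on top, it is worth verifying each setting actually took; kubeadm, for one, will refuse to proceed with swap on. A read-only check script along these lines (the check names and layout are my own):

```shell
# Read-only sanity checks for a bootstrapped node.
# Prints PASS/FAIL per check and modifies nothing.
checks=0; failures=0
check() {
  if eval "$2" >/dev/null 2>&1; then
    echo "PASS: $1"
  else
    echo "FAIL: $1"; failures=$((failures + 1))
  fi
  checks=$((checks + 1))
}

# kubelet refuses to run with swap enabled
check "swap disabled"      '[ -z "$(swapon --show 2>/dev/null)" ]'
# pod-to-pod routing needs IPv4 forwarding
check "ip_forward enabled" '[ "$(cat /proc/sys/net/ipv4/ip_forward)" = "1" ]'
# the container runtime must be up before the orchestrator starts
check "containerd active"  'systemctl is-active --quiet containerd'

echo "${checks} checks, ${failures} failure(s)"
```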
Conclusion: Stop Over-Engineering
If you are Netflix, use Kubernetes. If you are a lean team in Oslo needing to ship code fast, look at Nomad or K3s. But whatever you choose, do not build a house on sand.
Orchestrators amplify infrastructure flaws. Latency issues that are annoying on a monolith become fatal distributed system errors in a cluster. By deploying on CoolVDS, you eliminate the hardware variable. You get the low latency of a local provider with the raw I/O performance required by modern cloud-native stacks.
Ready to build? Deploy a high-performance NVMe instance in Oslo in 55 seconds and see what 0.0% steal time feels like.