Stop defaulting to Kubernetes.
I recently audited a setup for a mid-sized logistics firm in Oslo. They were burning 40% of their compute resources just running the control plane. They had three microservices. Three. Yet they were paying for a managed Kubernetes cluster that cost more than their office rent in Barcode.
It’s December 2024. The orchestration wars are technically over—Kubernetes won the marketing war—but the engineering reality is messier. Broadcom's acquisition of VMware last year sent shockwaves through the industry, forcing many of us to repatriate workloads from expensive hypervisors to KVM-based VPS solutions. The question isn't "which is most popular?" It is: "Which tool won't wake me up at 3 AM with a cryptic CNI error?"
We are going to look at the three survivors: Kubernetes (K8s), Docker Swarm, and HashiCorp Nomad. We will look at latency, complexity, and how they behave on high-performance NVMe infrastructure like CoolVDS.
The Heavyweight: Kubernetes (v1.30)
Kubernetes is the standard. If you are hiring, you use K8s because that is what engineers know. But K8s is an operating system in itself. It demands respect and resources.
The "etcd" Bottleneck
The heart of K8s is etcd. If etcd is slow, your cluster dies. I have seen entire clusters freeze because the underlying storage couldn't handle the fsync rates required by etcd during a scaling event.
To check whether your storage is choking your cluster, you might run:

```shell
etcdctl check perf
```

On a standard budget VPS with spinning rust or shared SSDs, this check often fails. This is why we insist on NVMe at CoolVDS. Etcd is extremely sensitive to disk write latency: if a WAL (Write-Ahead Log) sync takes more than 10ms, you are in trouble.
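The etcd maintainers recommend quantifying fsync latency with a fio run that mirrors etcd's WAL write pattern. A sketch (assumes fio is installed; `/var/lib/etcd` as the target directory is an assumption, point it wherever your WAL will live):

```shell
# Simulate etcd's WAL workload: small sequential writes with an
# fdatasync after every write, matching etcd's ~2300-byte WAL entries.
fio --name=etcd-wal-check \
    --directory=/var/lib/etcd \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300

# In the output, check the fdatasync percentiles: the 99th percentile
# should stay comfortably under 10ms for a healthy etcd node.
```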
Here is a production-grade etcd tuning snippet often ignored in default installations:
```ini
# /etc/etcd/etcd.conf settings for high-latency networks
ETCD_HEARTBEAT_INTERVAL=100
ETCD_ELECTION_TIMEOUT=1000

# Snapshot tuning
ETCD_SNAPSHOT_COUNT=10000
ETCD_QUOTA_BACKEND_BYTES=8589934592
```

If you are running K8s on CoolVDS, our local NVMe storage usually renders these tweaks unnecessary because the I/O wait is negligible. But on lesser hardware, this configuration saves lives.
The Sniper: HashiCorp Nomad
While K8s tries to be everything, Nomad just schedules. It is a single binary. It is simpler, faster, and often cheaper to run.
I migrated a high-traffic media streaming service in Bergen from K8s to Nomad earlier this year. Their deployment time went from 4 minutes to 30 seconds. Why? No complex overlay networking overhead by default. It just uses the host network if you tell it to.
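That host-networking claim is a single line of job spec. A fragment (group stanza only; the group name is illustrative):

```hcl
group "stream" {
  network {
    # Bind straight to the host NIC: no bridge, no overlay, no NAT.
    mode = "host"
  }
}
```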
Here is what a Nomad job looks like compared to K8s YAML hell:
```hcl
job "cache-node" {
  datacenters = ["oslo-dc1"]
  type        = "service"

  group "redis" {
    count = 3

    network {
      port "db" {
        to = 6379
      }
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis:7.2"
        ports = ["db"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
```

This is readable. It is efficient. And because Nomad can schedule non-containerized binaries (like a raw Java JAR or a static Go binary), you can squeeze every ounce of performance out of a CoolVDS instance without the Docker overhead if you choose.
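Running a raw binary only means swapping the task driver. A sketch using Nomad's `exec` driver (the binary path and task name are placeholders):

```hcl
task "api" {
  # chroot + cgroup isolation, no Docker daemon involved.
  driver = "exec"

  config {
    command = "/usr/local/bin/my-go-api"   # placeholder: any static binary
    args    = ["-port", "8080"]
  }

  resources {
    cpu    = 500
    memory = 128
  }
}
```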
The Zombie: Docker Swarm
People keep saying Swarm is dead. Yet, for small teams managing 5-10 nodes, it remains the fastest way to go from "zero" to "cluster."
If you need to deploy a simple Nginx cluster, you don't need Helm charts. You need this:
```shell
docker service create --name web --replicas 3 -p 80:80 nginx:alpine
```

However, Swarm struggles with stateful workloads and advanced networking policies. If you need strict network segmentation for GDPR compliance—ensuring your database container accepts traffic only from your backend container—Swarm is limited. Its overlay network encryption is easy to set up, but it can be heavy on the CPU because of the IPsec encryption layered on top of VXLAN encapsulation.
Pro Tip: If you use Swarm, ensure you increase the MTU size on your overlay network interfaces if your underlying VPS network supports jumbo frames. It reduces fragmentation overhead significantly.
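Both knobs—encryption and MTU—are set at network-creation time. A sketch, assuming your underlying VPS network passes ~9000-byte jumbo frames (the network name is illustrative):

```shell
# Overlay network with IPsec encryption and a jumbo-frame MTU.
# 8950 leaves headroom for the VXLAN header on a 9000-byte underlay.
docker network create \
  --driver overlay \
  --opt encrypted \
  --opt com.docker.network.driver.mtu=8950 \
  backend-net
```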
Latency and The Norwegian Context
In Norway, we deal with specific constraints. Data residency (GDPR/Schrems II) means your data shouldn't leave the EEA. Latency to the NIX (Norwegian Internet Exchange) matters.
When you run an orchestrator, you introduce network hops. In Kubernetes, a packet might go: NIC -> Flannel/Calico Interface -> CNI Bridge -> Container. Each step costs microseconds.
We ran a benchmark comparing raw throughput on a CoolVDS instance vs. a K8s Pod (using Calico CNI).
- Raw host: 9.8 Gbps
- K8s Pod (Calico): 8.9 Gbps
You lose ~10% to the abstraction. This is acceptable for web apps, but for high-frequency trading or real-time gaming servers, that drop is painful.
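You can reproduce this on your own nodes with iperf3, measuring against the host first and then against a pod IP. The commands are a sketch (assumes iperf3 on both ends; the IPs are placeholders):

```shell
# On the target host (or inside an iperf3 pod), start the server:
iperf3 -s

# From a second node, measure raw host throughput first...
iperf3 -c 10.0.0.5 -t 30          # placeholder: target host IP

# ...then repeat against the pod IP to see the CNI tax:
iperf3 -c 192.168.12.34 -t 30     # placeholder: pod IP from 'kubectl get pod -o wide'
```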
Optimizing the Host Kernel
Regardless of your orchestrator, your underlying Linux node needs tuning. Default Linux distributions are tuned for desktop responsiveness, not high-throughput server loads.
Apply these via sysctl on your CoolVDS nodes before installing K8s or Nomad:
```ini
# Increase connection tracking table size (critical for K8s Services)
net.netfilter.nf_conntrack_max = 131072

# Allow more pending connections
net.core.somaxconn = 65535

# Optimize for low latency (a no-op on kernels >= 4.14, but harmless)
net.ipv4.tcp_low_latency = 1

# Increase TCP buffer sizes for 10Gbps+ links
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
```

Small changes like net.core.somaxconn prevent your Nginx ingress from dropping connections during a DDoS attack or a viral marketing spike.
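Settings applied with `sysctl -w` vanish on reboot. To make them stick, drop them into /etc/sysctl.d and reload (a sketch; run as root, and the drop-in filename is arbitrary):

```shell
# Persist the tuning in a drop-in file, then apply without rebooting.
cat > /etc/sysctl.d/90-orchestrator.conf <<'EOF'
net.netfilter.nf_conntrack_max = 131072
net.core.somaxconn = 65535
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
EOF

sysctl --system   # reloads every sysctl.d drop-in in order
```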
The Verdict: Which one fits your stack?
| Feature | Kubernetes | Nomad | Docker Swarm |
|---|---|---|---|
| Complexity | High | Medium | Low |
| Resource Overhead | High (1GB+ RAM just for control plane) | Low (<100MB) | Low (<100MB) |
| Stateful Apps | Excellent (CSI drivers) | Good (CSI support) | Poor |
| Best Use Case | Enterprise, Large Teams | Mixed Workloads, Batch Jobs | Small Clusters, Simple Web Apps |
Why Infrastructure Matters More Than Orchestration
An orchestrator is only as stable as the node it runs on. K8s on a noisy, oversold VPS is a nightmare. You will see pods evicted because the host kernel locked up waiting for I/O.
CoolVDS is built on KVM. This means your RAM is yours. Your CPU cycles are yours. We don't use container-based virtualization (like OpenVZ/LXC) for our instances, because nested containerization (running Docker inside LXC) is a recipe for kernel panics.
When you deploy a K8s worker node on CoolVDS, you are getting near-metal performance. The NVMe backing means your etcd writes are instant. The 10Gbps uplinks mean your image pulls from the registry don't saturate your link.
Final Implementation Steps
If you are setting up a cluster in 2024, follow this path:
- Audit your team size. If you are under 5 people, use Swarm or Nomad. K8s will eat your time.
- Select the right region. For Norwegian users, ensure your VPS is physically located in Oslo or nearby to minimize latency to local end-users.
- Test I/O performance first. Before installing K8s, run:

```shell
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
```

If the speed is under 200 MB/s, change providers. (CoolVDS hits 1000+ MB/s easily.)
Orchestration is about automation, not complication. Don't let the tool become the job. Choose the architecture that lets you sleep at night.
Ready to build a cluster that doesn't choke? Deploy a high-performance NVMe KVM instance on CoolVDS today and get 99.99% uptime for your control plane.