Stop Over-Engineering: Choosing the Right Orchestrator for High-Latency Environments
I have spent the last decade watching bright engineering teams burn themselves out. They take a perfectly functional monolithic PHP or Node.js application, containerize it, and then throw it into a Kubernetes cluster far more complex than anything they need. The result? They trade application development time for YAML debugging time. It is a bad trade.
But sometimes, you actually do need scale. You need self-healing. You need zero-downtime deployments. The question is: which tool fits the job in early 2023? And more importantly, how does it behave on the infrastructure available to us here in Norway?
Let’s cut through the marketing noise. I am going to compare the three big contenders—Kubernetes, Docker Swarm, and HashiCorp Nomad—based on real-world operational pain, strict Norwegian data compliance (GDPR), and raw infrastructure requirements.
The Scenario: The "Schrems II" Reality Check
Before we touch a single config file, look at where your data lives. Since the Schrems II ruling, Nordic CTOs are sweating. Hosting personal data on US-controlled clouds (even if the region is eu-north-1) is a legal minefield. Datatilsynet (The Norwegian Data Protection Authority) is not lenient.
This drives many of us back to bare metal or local VPS providers. But running orchestration on raw VPS instances requires you to understand the hardware underneath. If you put a Kubernetes control plane on a budget VPS with noisy neighbors and spinning rust (HDD), etcd will time out. Your cluster will implode.
1. Docker Swarm: The "Good Enough" Hero
Status in 2023: It is not dead, despite what the K8s purists say. It is built into Docker CE.
If you have a team of two developers and five microservices, use Swarm. It requires almost zero setup. You don't need a dedicated DevOps engineer to manage it.
The Configuration
Initializing a swarm takes seconds. No external database required.
docker swarm init --advertise-addr 192.168.10.5
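That command prints a docker swarm join line containing a one-time token. Running it on each additional node is all it takes to grow the cluster; the token below is just a placeholder.
# Run on every extra node; 2377 is Swarm's cluster management port.
docker swarm join --token <token-from-init-output> 192.168.10.5:2377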
The beauty is in the stack file. It is just docker-compose.yml with a deploy key (see the sketch below). However, Swarm has a major weakness: networking. The overlay network can be flaky under high load, and debugging IPVS issues in the Linux kernel is not how you want to spend your Friday night.
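To make that deploy key concrete, here is a minimal sketch of a stack file. The service name, image tag, and resource numbers are placeholders, not recommendations.
# Ordinary Compose syntax; the deploy section is what Swarm acts on.
cat > stack.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:1.23
    ports:
      - "80:80"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.50"
          memory: 128M
EOF

# Deploy it against the swarm initialized above.
docker stack deploy -c stack.yml demo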
2. Kubernetes (K8s): The Standard (and The Beast)
Status in 2023: Version 1.26 is the current stable. Note that dockershim is gone. You are likely using containerd now.
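Not sure what your nodes actually run? Assuming kubectl already points at your cluster, the CONTAINER-RUNTIME column gives it away:
# Expect containerd:// (or cri-o://) here, not docker://.
kubectl get nodes -o wide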
Kubernetes is not an orchestrator; it is a framework for building platforms. It is powerful, but it demands respect. The most critical component is etcd, the key-value store that keeps the cluster state.
The Hardware Bottleneck
I recently audited a cluster for a client in Oslo. Their API server was crashing randomly. The etcd logs were full of etcdserver: apply request took too long warnings. They were using cheap VPS hosting with shared storage.
etcd fsyncs its write-ahead log to disk on every write. If your disk's fsync latency drifts above 10ms, heartbeats start missing their deadlines, leader elections fire, and the cluster partitions.
Pro Tip: Never run a production K8s control plane on standard SSDs if you can avoid it. You need NVMe. On CoolVDS, we use strict KVM isolation and local NVMe storage. This ensures your fsync latency stays under 2ms, keeping the control plane stable even during traffic spikes.
Here is the fio test you should run on any VPS before installing K8s. If the box fails it, do not deploy there.
fio --rw=write --ioengine=sync --fdatasync=1 \
--directory=/var/lib/etcd --size=22m --bs=2300 \
--name=mytest
If the 99th percentile of the fdatasync durations is above 10ms, your hosting provider is throttling your I/O. Move your workload.
Resource Guardrails
In K8s, if you don't set resource requests and limits, a memory leak in one pod can take down the whole node. This is non-negotiable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-critical
spec:
  selector:
    matchLabels:
      app: nginx-critical
  template:
    metadata:
      labels:
        app: nginx-critical
    spec:
      containers:
        - name: nginx
          image: nginx:1.23
          resources:
            # Requests are what the scheduler books; limits are what the kubelet enforces.
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
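A quick way to verify those requests actually land on the node; the manifest filename and node name below are placeholders.
# Apply the manifest, then check how much of the node's capacity is now booked.
kubectl apply -f nginx-critical.yaml
kubectl describe node <node-name> | grep -A 8 "Allocated resources"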
3. Nomad: The Pragmatic Alternative
Status in 2023: Nomad v1.4 is solid. It handles non-containerized workloads (Java JARs, raw binaries) natively, which K8s struggles with.
Nomad is a single binary and architecturally far simpler than K8s. It integrates tightly with Consul and Vault, so if you are already in the HashiCorp ecosystem, it is a strong contender. It also carries less overhead than K8s, meaning you can squeeze more workloads onto a smaller VPS footprint.
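To show what "non-containerized" means in practice, here is a minimal sketch of a batch job that runs a plain binary. The paths are placeholders, and the raw_exec driver must be explicitly enabled in the client config before this will run.
# A Nomad job file is HCL; no container image anywhere.
cat > report.nomad <<'EOF'
job "nightly-report" {
  datacenters = ["dc1"]
  type        = "batch"

  group "report" {
    task "generate" {
      driver = "raw_exec"

      config {
        command = "/opt/reports/bin/generate"
        args    = ["--format", "pdf"]
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
EOF

nomad job run report.nomad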
Comparison: The Hard Truth
| Feature | Docker Swarm | Kubernetes | Nomad |
|---|---|---|---|
| Learning Curve | Low (Hours) | High (Months) | Medium (Days) |
| State Store | Internal (Raft) | etcd (Very Sensitive) | Internal (Raft) |
| Min. Requirements | 1 Core, 1GB RAM | 2 Cores, 4GB RAM | 1 Core, 512MB RAM |
| Network Overlay | Built-in (VxLAN) | CNI Plugins (Calico/Flannel) | Host / CNI |
The Infrastructure Reality
Software does not run on magic; it runs on metal. Whether you choose K8s or Swarm, the underlying OS virtualization matters.
Many budget providers use OpenVZ or LXC. This shares the host kernel with your instance. Do not use these for container orchestration. You will run into issues with Docker security profiles, overlay networks, and `iptables` limits. You need full hardware virtualization.
This is why we standardized on KVM (Kernel-based Virtual Machine) for CoolVDS. It gives you a private kernel. You can load custom modules. You can tune sysctl parameters for high-throughput networking without asking support for permission. Combine that with NVMe storage, and you have a foundation that won't buckle when your orchestrator demands IOPS.
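Two quick checks (run as root) tell you whether a box is actually KVM and whether the kernel is really yours:
# "kvm" or "qemu" means full virtualization; "openvz" or "lxc" means a shared kernel.
systemd-detect-virt

# IPVS backs Swarm's routing mesh and kube-proxy's ipvs mode.
# On KVM this loads instantly; on shared-kernel VPSes modprobe simply fails.
modprobe ip_vs && lsmod | grep ip_vs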
Final Verdict
If you are a Norwegian startup building the next Vipps:
- Start with Docker Swarm if you have fewer than 10 services. It is cheap and fast.
- Migrate to Kubernetes only when you need custom resources (CRDs) or genuinely large scale. But ensure you run it on KVM-based VPS instances with proven low latency to Oslo.
- Consider CoolVDS if you want the control of a dedicated server with the flexibility of a VPS. We don't oversell our CPU cycles.
Don't let latency kill your cluster. Spin up a CoolVDS instance in 55 seconds and run the fio test yourself. The results will speak for themselves.