Kubernetes vs. The Rest: A Pragmatic Orchestration Showdown for Nordic Ops
I recently watched a competent development team in Oslo burn through three weeks of engineering time trying to debug a sporadic latency issue in their microservices mesh. They blamed the code. They blamed the database. They even blamed the NIX (Norwegian Internet Exchange) peering.
The culprit? CPU steal (%st) on the control plane.
They were running a heavy Kubernetes cluster on budget, oversold cloud instances where noisy neighbors were choking the API server. In the world of container orchestration, latency is the silent killer. If etcd cannot write to disk fast enough, your cluster loses consensus, and pods start flapping. It doesn’t matter how clean your Go code is if the underlying hypervisor is starving your I/O.
As of March 2025, the orchestration landscape has settled, but the confusion hasn't. Should you be running full-blown Kubernetes? Is Docker Swarm actually dead? What about HashiCorp's Nomad? Let’s dissect these options with a focus on performance, Norwegian data sovereignty, and technical reality.
The 800lb Gorilla: Kubernetes (K8s)
Kubernetes remains the industry standard, but it is effectively an operating system in its own right. It demands respect and resources. If you are running K8s in 2025, you are likely using Gateway API and Cilium for eBPF networking. It is powerful, but it is heavy.
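If Gateway API is new to you, the routing objects look roughly like this. A minimal sketch only; the gateway name, hostname, and backend service are placeholders, and it assumes a Gateway controller (Cilium, for example) is already installed:

# httproute-sketch.yaml (names are placeholders)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: main-gateway        # assumed Gateway resource managed by your controller
  hostnames:
    - "api.example.no"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v1
      backendRefs:
        - name: api-svc          # assumed ClusterIP Service
          port: 8080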
The Hidden Cost: Etcd Latency
The heart of Kubernetes is etcd. It requires low-latency storage to persist cluster state. If fsync latency exceeds 10ms consistently, your cluster becomes unstable. This is where generic VPS providers fail. You need NVMe storage with direct pass-through or high-performance virtio drivers.
Before you deploy a master node, run this fio benchmark to simulate the etcd write load. If the 99th percentile fdatasync latency is above 10ms, do not deploy K8s there.
# Simulating etcd write load
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data \
--size=22m --bs=2300 --name=mytest
On a CoolVDS NVMe instance (hosted in Oslo), we consistently see latencies well under 2ms for this test. That is the difference between a production-grade cluster and a "hobby" setup that crashes at 3 AM.
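Once the cluster is running, you can verify the same thing from etcd's own metrics instead of guessing. A rough sketch, assuming a kubeadm-deployed etcd exposing plaintext metrics on 127.0.0.1:2381 (adjust to your --listen-metrics-urls):

# Check the WAL fsync latency histogram on a control-plane node
curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds_bucket
# If most observations land in buckets above le="0.01" (10ms), the disk is too slow for etcd.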
Configuring Kubeadm for Nordic HA
When initializing a cluster, specifically for a high-availability setup across Norwegian availability zones, ensure your kubeadm-config.yaml is explicit about the control plane endpoint. Don't rely on DNS round-robin alone; use a VIP (Virtual IP) managed by Keepalived or a load balancer.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
controlPlaneEndpoint: "10.10.50.100:6443"
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"
    # Reschedule pods off NotReady/unreachable nodes faster than the 300s default
    default-not-ready-toleration-seconds: "30"
    default-unreachable-toleration-seconds: "30"
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      # Snapshot tuning for high throughput
      snapshot-count: "10000"
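For the VIP itself, a minimal Keepalived sketch on the first control-plane node might look like the following. The interface name, router ID, and password are assumptions; set a lower priority and state BACKUP on the other nodes:

# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance K8S_VIP {
    state MASTER                  # BACKUP on the other control-plane nodes
    interface eth0                # assumed NIC name
    virtual_router_id 51
    priority 150                  # lower on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme        # placeholder secret
    }
    virtual_ipaddress {
        10.10.50.100/24           # the controlPlaneEndpoint VIP from above
    }
}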
The Efficient Alternative: HashiCorp Nomad
Not every team needs the complexity of K8s. Nomad is a single-binary scheduler. It’s conceptually simpler and integrates tightly with Consul for service discovery. In 2025, Nomad is the choice for teams who want to run containers, legacy Java JARs, and raw binaries on the same hardware without forcing everything through Docker.
Nomad shines in "Edge" scenarios or smaller Norwegian data centers where resource efficiency is paramount (TCO matters). It uses significantly less CPU for the control plane than K8s.
Pro Tip: Use Nomad if you have a hybrid workload. We see CoolVDS clients mixing legacy PHP applications (running natively via the exec or raw_exec drivers) with new Rust microservices (in containers) on the same node, which spares you from containerizing software that gains nothing from it.
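What that hybrid setup looks like in practice is roughly this job file. A sketch only: the datacenter name, image, and paths are placeholders, and raw_exec must be explicitly enabled in the client configuration:

# hybrid.nomad.hcl (sketch)
job "hybrid-stack" {
  datacenters = ["oslo-dc1"]          # placeholder datacenter name

  group "api" {
    network {
      port "http" { to = 8080 }
    }
    task "rust-api" {
      driver = "docker"               # containerized Rust microservice
      config {
        image = "registry.example.no/rust-api:1.4.2"   # placeholder image
        ports = ["http"]
      }
    }
  }

  group "legacy" {
    task "php-worker" {
      driver = "raw_exec"             # runs directly on the host, no container
      config {
        command = "/usr/bin/php"
        args    = ["/opt/legacy/worker.php"]           # placeholder path
      }
    }
  }
}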
The "Zombie": Docker Swarm
Despite years of people declaring it dead, Swarm is still here in 2025. Why? Because a docker-compose.yml file is still the easiest way to describe a stack. For small dev teams in Oslo managing 5-10 microservices, Swarm is perfectly adequate.
However, its overlay networking carries a measurable performance penalty compared to Cilium on K8s or host networking on Nomad. Use it for staging, or for simple production workloads where the extra latency of VXLAN encapsulation isn't catastrophic.
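For reference, the compose file you already use locally becomes a Swarm stack by adding a deploy block. A sketch with placeholder image and service names:

# docker-compose.yml (Swarm stack sketch)
version: "3.8"
services:
  web:
    image: registry.example.no/web:latest   # placeholder image
    ports:
      - "80:8080"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure

# Deploy with:
#   docker swarm init                             # once, on the manager
#   docker stack deploy -c docker-compose.yml mystack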
Infrastructure Matters: The CoolVDS Factor
Your orchestration software is only as stable as the kernel it runs on. This is where the "Managed vs. Self-Hosted" debate gets interesting.
Public clouds often oversubscribe CPUs. In a containerized environment, the Linux scheduler (CFS) inside your guest fights the hypervisor's scheduler for CPU time, and the result is "stolen time." If you are running a database or a message queue (like Kafka) inside a container, CPU steal kills throughput.
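You can see whether this is happening to you in about ten seconds, no orchestrator required:

# Watch the steal column; values consistently above a few percent mean the host is oversold
vmstat 1 5          # last column "st" is stolen time
mpstat -P ALL 1 5   # %steal per core (from the sysstat package)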
CoolVDS takes a different approach:
- KVM Isolation: We don't use containers to host your containers. You get a full kernel.
- NVMe I/O: The direct disk speeds that etcd and high-transaction databases require.
- Data Sovereignty: Your data sits in Norway. This is not just a "nice to have"; with the tightening of GDPR and Schrems II implications continuing into 2025, knowing your physical bits are in Oslo satisfies Datatilsynet's requirements.
Optimizing the Base Layer
Regardless of the orchestrator, you must tune the Linux kernel on your nodes. Here is a standard sysctl.conf optimization we apply for high-throughput container hosts:
# /etc/sysctl.d/99-k8s-network.conf
# Increase the range of ephemeral ports for high connection rates
net.ipv4.ip_local_port_range = 1024 65535
# Maximize the backlog for high connection bursts (essential for Ingress Nginx)
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 8192
# Enable forwarding (Required for CNI plugins)
net.ipv4.ip_forward = 1
# Increase inotify limits for file watchers (logs, configs)
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
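Drop the file in place and load it without a reboot:

# Apply all files under /etc/sysctl.d/ immediately
sudo sysctl --system
# Verify a single value took effect
sysctl net.core.somaxconn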
Conclusion: Choose Your Weapon
| Feature | Kubernetes | Nomad | Docker Swarm |
|---|---|---|---|
| Complexity | High | Medium | Low |
| Resource Overhead | High (etcd + API server) | Low (single binary) | Low |
| Best Use Case | Enterprise, Microservices | Hybrid Workloads, Batch | Small Teams, Simple Stacks |
If you need the ecosystem and are building the next Spotify, use Kubernetes. If you want efficiency and simplicity, look at Nomad. But whatever you choose, do not cripple it by running on slow I/O or over-provisioned CPUs.
Latency is the enemy. Build on infrastructure that respects the milliseconds.
Ready to build a cluster that doesn't flake? Deploy a high-performance NVMe KVM instance on CoolVDS in Oslo today. Ping time to NIX is negligible, and the IOPS are real.