K8s vs. The World: Right-Sizing Container Orchestration for Nordic Infrastructure
Let’s get one thing straight immediately: You probably don't need a full-blown, multi-master Kubernetes cluster for your startup's monolith. I see it every week. A team of three developers spending 40% of their time debugging Ingress configurations instead of shipping code. It’s infrastructure theatre.
As of April 2025, the container ecosystem has matured, but it hasn't gotten simpler. The "default to K8s" mentality is burning money, especially here in Norway, where IT salaries are among the highest in Europe. If you are deploying in Oslo to serve the Nordic market, latency and data sovereignty (hello, Datatilsynet) matter more than having the fanciest service mesh.
I've managed clusters ranging from 3 nodes to 500+ across the EU. Today, we’re stripping away the marketing fluff. We are comparing the heavyweights: Vanilla Kubernetes, the lightweight champion K3s, and the pragmatist's choice Nomad. We will look at this through the lens of performance, maintenance overhead, and infrastructure requirements.
The 2025 Landscape: Where We Stand
Kubernetes (v1.32 is the current stable standard we trust) won the orchestration war. That’s a fact. But winning the war doesn't mean it's the right tool for every skirmish.
Pro Tip: In 2025, if you aren't using Gateway API for ingress traffic, you're already behind. The old Ingress resource is in maintenance mode. Update your manifests.
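The migration is mostly mechanical. Here is a minimal HTTPRoute for reference; this is a sketch where web-gateway, my-api, and api.example.no are placeholders, and it assumes a Gateway API implementation (Traefik, Cilium, etc.) is already installed:
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: web-gateway        # an existing Gateway in this namespace
  hostnames:
    - "api.example.no"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: my-api         # your backing Service
          port: 8080
EOF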
1. Vanilla Kubernetes (K8s)
Best for: Enterprise teams, complex microservices, resume-driven development.
K8s is the operating system of the cloud. It is powerful. It is also a resource hog. A control plane requires significant CPU and memory just to exist. If you are running on a standard VPS, you lose about 15-20% of your resources to overhead before you launch a single pod.
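Don't take that number on faith; measure it on your own cluster. With metrics-server installed, an idle control plane shows its appetite immediately:
# What an idle control plane consumes (requires metrics-server)
kubectl top nodes
kubectl top pods -n kube-system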
The Hidden Cost: Etcd Latency
Kubernetes dies without fast storage. The state store, etcd, is incredibly sensitive to disk write latency. If fsync takes too long, the cluster becomes unstable. This is where cheap hosting kills you. You cannot run a production K8s cluster on spinning rust or shared SATA SSDs with noisy neighbors.
Here is a snippet from a fio test we run to validate disk suitability for etcd on our CoolVDS NVMe instances:
fio --rw=write --ioengine=sync --fdatasync=1 \
--directory=/var/lib/etcd --size=250m \
--bs=2300 --name=etcd-benchmark
If the 99th percentile fdatasync latency is above 10ms, your cluster will flap. On our infrastructure, we typically see sub-1ms. That is the difference between a pager going off at 3 AM and a good night's sleep.
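The fio test is for validation before install. On a live cluster, etcd publishes the same signal as a Prometheus histogram; assuming a kubeadm-style control plane with metrics listening on 127.0.0.1:2381:
# WAL fsync latency histogram: healthy clusters live in the lowest buckets
curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds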
2. K3s: The Smart Choice
Best for: Edge computing, dev environments, cost-efficient VPS clusters.
Rancher’s K3s strips out the legacy cloud provider plugins and alpha features. It is a single binary. It uses less than 512MB of RAM. For a startup targeting the Norwegian market, running K3s on a cluster of three robust VPS nodes is the sweet spot. You get the Kubernetes API (so all your Helm charts work), but you don't pay the "Google tax" on resources.
Deploying a cluster is shockingly simple compared to kubeadm:
# On the master node
curl -sfL https://get.k3s.io | sh -
# Get the node token
cat /var/lib/rancher/k3s/server/node-token
# On the worker node (CoolVDS-Worker-01)
curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER_IP>:6443 \
K3S_TOKEN=<TOKEN> sh -
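That gives you one server and one worker. For the high-availability setup recommended later in this article, K3s ships embedded etcd: bootstrap the first server with --cluster-init and join the others. A sketch; replace the placeholder token and IP, and keep the server count odd:
# First server: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<TOKEN> sh -s - server --cluster-init
# Servers two and three join the first
curl -sfL https://get.k3s.io | K3S_TOKEN=<TOKEN> sh -s - server \
    --server https://<FIRST_SERVER_IP>:6443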
3. HashiCorp Nomad
Best for: Mixed workloads (Docker + Java JARs + binaries), simplicity, massive scale.
Nomad is not Kubernetes. It doesn't want to be. It is a task scheduler. It is arguably more robust because it is simpler. If you need to run a Docker container alongside a legacy binary that can't be containerized (yes, they still exist in 2025), Nomad handles it natively.
The job specification is HCL (HashiCorp Configuration Language), which is readable by humans, unlike the wall of YAML that K8s demands.
job "web-api" {
datacenters = ["oslo-dc1"]
group "api" {
count = 3
task "server" {
driver = "docker"
config {
image = "my-registry/api:v2.4"
ports = ["http"]
}
resources {
cpu = 500
memory = 256
}
}
}
}
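Running it is just as humane. Assuming you saved the spec above as web-api.nomad.hcl (the file name is arbitrary):
# Dry-run the placement decision, then submit for real
nomad job plan web-api.nomad.hcl
nomad job run web-api.nomad.hcl
# Watch the three allocations come up
nomad job status web-api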
The Infrastructure Reality Check
Software doesn't run on magic. It runs on metal. Whether you choose K8s, K3s, or Nomad, the underlying constraints remain the same: Network Latency and I/O Throttling.
Network Latency & The NIX Connection
If your user base is in Norway, hosting in Frankfurt adds 15-20ms of round-trip time. That sounds negligible until you have microservices talking to each other. Service A calls Service B, which queries Database C. That 20ms compounds with every sequential hop: chain ten internal calls and you have quietly added 200ms to your page load.
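Measure it instead of guessing. curl's built-in timers show exactly where the milliseconds go (swap the placeholder URL for your own endpoint):
# Connect time vs. time-to-first-byte vs. total
curl -o /dev/null -s -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
    https://api.example.no/health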
Hosting locally isn't just about GDPR compliance (though keeping data within Norwegian borders simplifies legal audits immensely); it's about physics. CoolVDS peers directly at NIX (Norwegian Internet Exchange). The path from your server to a fiber user in Trondheim or Bergen is as short as physically possible.
The "Noisy Neighbor" Problem
Container orchestrators assume they own the CPU. The Kubernetes scheduler calculates placement based on requests and limits. If the underlying hypervisor steals CPU cycles because another tenant on the physical host is mining crypto, your workloads suffer. You see CrashLoopBackOff not because of code errors, but because liveness probes time out and the kubelet keeps restarting perfectly healthy containers.
We solve this at the virtualization layer. By using KVM with strict resource isolation, CoolVDS ensures that a "vCPU" is actually a computing unit you can rely on.
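You can audit this yourself on any VPS. The "st" (steal) column is time the hypervisor took your vCPU away; on properly isolated KVM it should sit at or near zero:
# Watch the st column for a few seconds
vmstat 1 5
# Per-core %steal (mpstat ships with the sysstat package)
mpstat -P ALL 1 3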
Configuration Strategy for 2025
If you are deploying today, here is the reference architecture I recommend for a mid-sized SaaS:
| Component | Recommendation | Why? |
|---|---|---|
| Orchestrator | K3s (High Availability) | Full API compatibility, 50% less RAM usage. |
| CNI (Networking) | Cilium (eBPF) | Performance and security observability without sidecars. |
| Ingress | Traefik or Gateway API | Dynamic configuration, native Let's Encrypt support. |
| Storage | Local Path Provisioner | Utilize CoolVDS NVMe speeds directly (example below the table). Network storage is slow. |
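On K3s the Local Path Provisioner is the default StorageClass, so claiming node-local NVMe takes one manifest. A sketch, with name and size as placeholders:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # K3s default, backed by the node's own disk
  resources:
    requests:
      storage: 10Gi
EOF
Remember that local volumes pin the pod to that node; that is the trade-off for NVMe-native latency.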
Tuning Sysctl for High Throughput
Before you install the orchestrator, you must tune the Linux kernel. The defaults target general-purpose workloads, not high-throughput container routing. Apply this to your /etc/sysctl.conf:
# Increase connection tracking table size
net.netfilter.nf_conntrack_max = 131072
# Enable IP forwarding (Essential for CNI)
net.ipv4.ip_forward = 1
# Optimize swap (Don't swap unless necessary)
vm.swappiness = 1
# Increase max memory map areas for ElasticSearch/Databases
vm.max_map_count = 262144
Run sysctl -p to apply. Neglecting vm.max_map_count is the #1 reason Elasticsearch pods fail to start.
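One gotcha: the net.netfilter keys only exist once the nf_conntrack kernel module is loaded, so on a fresh node load it before applying:
# Load conntrack first, apply, then spot-check the critical values
sudo modprobe nf_conntrack
sudo sysctl -p
sysctl net.ipv4.ip_forward vm.max_map_count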
Conclusion: Choose Based on Ops Capacity
Don't choose Kubernetes because Google uses it. Choose it if you have the team to manage the complexity. For 90% of deployments in the Norwegian market, K3s on high-performance NVMe VPS is the pragmatic winner. It delivers the container orchestration you need without the operational fatigue.
Your infrastructure should be invisible. It should just work. High latency and slow I/O are the kind of visibility you don't want.
Ready to build a cluster that actually performs? Deploy a CoolVDS instance in Oslo today and experience the difference raw NVMe power makes for your control plane.