Kubernetes vs. Docker Swarm vs. Nomad: The 2025 Orchestration Reality Check for Nordic Infra

I once watched a startup burn three months of runway trying to build a "geo-redundant" Kubernetes federation across three cloud providers before they had even acquired their first paying customer. It was tragic. They were optimizing for Google-scale traffic while running a monolithic PHP app that barely tickled the CPU.

In 2025, the container orchestration wars are supposedly over. Kubernetes won, right? Not exactly. While K8s is the de facto standard, the complexity tax is higher than ever. For teams in Oslo or Bergen managing lean infrastructure, deploying a full K8s cluster just to run Nginx and Redis is like commuting to work in a Leopard 2 tank. It works, but the fuel efficiency is terrible, and parking is a nightmare.

Let's cut through the marketing noise. We are going to look at the three survivors—Kubernetes, Docker Swarm, and Nomad—through the lens of performance, maintenance overhead, and data sovereignty requirements typical of the Norwegian market.

The Heavyweight: Kubernetes (v1.31+)

Kubernetes is the operating system of the cloud. By late 2025, features like the Gateway API have finally stabilized, and eBPF-based networking (Cilium) is standard practice. But K8s is resource-hungry. The control plane alone—API server, Scheduler, Controller Manager, and etcd—requires significant compute.

The Reality of Etcd Latency

The number one killer of Kubernetes clusters isn't bad code; it's disk latency on the etcd nodes. If your fsync latency spikes, your cluster leadership elections fail, and the whole system starts flapping. This is why we strictly enforce NVMe backing for control plane nodes at CoolVDS.
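A quick way to sanity-check that latency before trusting a node with etcd: the canonical benchmark is fio with --fdatasync=1 (per etcd's own hardware guidance), but a rough first approximation can be done with plain coreutils. The block size and count below are illustrative, chosen to roughly match typical small etcd writes:

```shell
# Rough probe of write+sync latency on the disk that would back etcd.
# fio with --fdatasync=1 is the proper benchmark; dd with oflag=dsync
# (force a sync per block) gives a quick coreutils-only approximation.
PROBE_DIR=$(mktemp -d)
dd if=/dev/zero of="$PROBE_DIR/etcd-probe" bs=2300 count=500 oflag=dsync \
  2> "$PROBE_DIR/result"
cat "$PROBE_DIR/result"   # total time and throughput; a slow disk shows up immediately
```

Clean up with `rm -rf "$PROBE_DIR"` afterwards. On NVMe this finishes in well under a second; on contended or spinning storage the total time balloons, and that is exactly what etcd's leader elections feel.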

Here is a standard deployment snippet using the Gateway API, which has been replacing the older Ingress resource in most serious setups this year:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: production-route
  namespace: default
spec:
  parentRefs:
  - name: external-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/v2
    backendRefs:
    - name: backend-service
      port: 8080

Pro Tip: If you are running K8s on a VPS, disable swap immediately. While K8s v1.28+ introduced limited swap support, it is still a performance gamble for the scheduler. Use sudo swapoff -a and ensure your kubelet flags explicitly deny swap usage to prevent OOM killer unpredictability.
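The kubelet side of that tip can be pinned down in its config file rather than in ad-hoc flags. A minimal sketch, using the KubeletConfiguration API; failSwapOn is the relevant switch:

```yaml
# Minimal KubeletConfiguration fragment: with failSwapOn set to true,
# the kubelet refuses to start if swap is enabled on the node, so a
# forgotten swapoff cannot silently degrade scheduling behavior.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: true
```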

The Zombie: Docker Swarm

"Swarm is dead." We've heard this since 2017. Yet, here we are in 2025, and Swarm is still the fastest way to go from "I have a Dockerfile" to "I have a cluster." It is embedded in the Docker engine. No extra binaries, no complex PKI management for the control plane.

For a Norwegian e-commerce shop handling Black Friday traffic, Swarm's simplicity is a feature. You don't need a dedicated DevOps engineer just to upgrade the cluster.

Deploying a stack is still refreshingly simple:

version: "3.9"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:

However, Swarm lacks the rich ecosystem of K8s. No Helm charts, no Operators, and limited CSI (Container Storage Interface) support. If you need complex stateful sets with automatic volume provisioning across different availability zones, Swarm will hurt you.

The Sniper: HashiCorp Nomad

Nomad follows the Unix philosophy: do one thing and do it well. It schedules workloads. That's it. It doesn't handle networking (Consul does that) or secrets (Vault does that). This makes the Nomad binary incredibly lightweight—around 100MB.

For hybrid workloads—where you have legacy Java binaries that can't be containerized alongside modern Docker containers—Nomad is the only sane choice. It can schedule plain executables (via the exec and raw_exec drivers) just as easily as containers.

job "legacy-payment-processor" {
  datacenters = ["oslo-dc1"]

  group "payment" {
    count = 3

    task "java-core" {
      driver = "java"
      config {
        jar_path    = "local/payment.jar"
        jvm_options = ["-Xmx2048m", "-Xms512m"]
      }
      resources {
        cpu    = 500
        memory = 2048
      }
    }
  }
}
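And for binaries with no JVM involved at all, a task can use the raw_exec driver instead. A hypothetical fragment—the path and arguments are illustrative, and raw_exec must be explicitly enabled in the Nomad client config since it runs with no isolation:

```hcl
# Hypothetical task stanza using raw_exec: runs a plain host binary
# directly. The driver is disabled by default and must be turned on
# in the client configuration before this will schedule.
task "legacy-batch" {
  driver = "raw_exec"
  config {
    command = "/opt/legacy/batch-runner"   # illustrative path
    args    = ["--interval", "60"]
  }
}
```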

The Infrastructure Layer: Why "Where" Matters

Orchestrators abstract away the hardware, but they cannot fix bad hardware. A container is just a process isolated by cgroups and namespaces. If the host kernel is starving for I/O, your containers stall and start failing health checks. This is the "Noisy Neighbor" problem common in cheap shared hosting.
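One way to see whether you are being squeezed is to read the cgroup CPU statistics from inside the workload. A sketch assuming the cgroup v2 layout (the file lives elsewhere on cgroup v1 hosts):

```shell
# Inspect CPU throttling counters for the current cgroup (v2 layout).
# A high nr_throttled relative to nr_periods is the classic symptom
# of a noisy neighbor or an overly tight CPU limit.
STAT=/sys/fs/cgroup/cpu.stat
if [ -r "$STAT" ]; then
  MSG=$(grep -E 'nr_periods|nr_throttled|throttled_usec' "$STAT" \
        || echo "no throttle counters in $STAT")
else
  MSG="cgroup v2 cpu.stat not readable; host may be on cgroup v1"
fi
echo "$MSG"
```

If throttled_usec climbs while your own CPU usage looks modest, the limit (or the host) is the problem, not your code.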

Resource & Complexity Comparison

Orchestrator   | Idle RAM Usage | Deployment Time | Complexity Score
Kubernetes     | ~1.5 GB        | High            | 10/10
Docker Swarm   | ~50 MB         | Low             | 3/10
Nomad          | ~80 MB         | Medium          | 5/10

At CoolVDS, we don't oversell our cores. We use KVM (Kernel-based Virtual Machine) virtualization. This means your Kubernetes nodes have a hard-reserved slice of the CPU and, crucially, dedicated NVMe throughput. When you run etcd on CoolVDS, you aren't fighting a WordPress blog next door for disk IOPS.

Local Nuance: The Norwegian Context

Latency is physics. If your customers are in Trondheim or Oslo, hosting your cluster in Frankfurt adds 20-30ms of round-trip time. For a database-heavy application, that latency compounds with every query.
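That compounding is easy to put a number on. A back-of-the-envelope sketch with illustrative figures—25 ms of extra round trip and 40 sequential queries per page render:

```shell
# Back-of-the-envelope cost of cross-border hosting: every sequential
# round trip pays the full extra RTT. Both numbers are illustrative.
EXTRA_RTT_MS=25
SEQUENTIAL_QUERIES=40
ADDED_MS=$((EXTRA_RTT_MS * SEQUENTIAL_QUERIES))
echo "Extra latency per page render: ${ADDED_MS} ms"
```

A full extra second per render, from geography alone—before the application has done any actual work.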

Furthermore, Datatilsynet (The Norwegian Data Protection Authority) has become increasingly strict regarding Schrems II and data transfers. Hosting on US-owned cloud giants introduces legal friction. CoolVDS offers purely Norwegian infrastructure. Your bits stay within the borders, subject to Norwegian law, not the US CLOUD Act.

The Verdict

  1. Choose Kubernetes if: You have a team of at least 3 DevOps engineers, you need the ecosystem (Helm, Prometheus, Istio), and you are building a microservices platform.
  2. Choose Docker Swarm if: You are a small team, you want to deploy fast, and you don't need complex stateful volume management.
  3. Choose Nomad if: You have a mix of containers and legacy binaries, or you are already deep into the HashiCorp stack (Terraform/Consul).

Whichever orchestrator you choose, the bottleneck will eventually be the metal underneath. Don't let I/O wait kill your SEO. Spin up a high-performance, KVM-backed instance on CoolVDS in under 55 seconds and give your containers the headroom they deserve.