Kubernetes vs. Docker Swarm vs. Nomad: The Honest 2023 Guide for Nordic DevOps

Stop Deploying Kubernetes for Your WordPress Blog

I am tired of interviewing junior DevOps engineers who can recite the entire CNCF landscape but can't debug a simple Nginx gateway timeout. As of July 2023, container orchestration is the default standard for deployment, but we have a problem: Resume-Driven Development.

You are likely choosing Kubernetes (K8s) because it looks good on LinkedIn, not because your traffic demands it. If you are managing infrastructure in Norway or the broader EU, you have specific constraints: strict GDPR compliance (thank you, Schrems II), latency requirements to local exchanges like NIX (Norwegian Internet Exchange), and a finite budget.

In this analysis, we are stripping away the marketing fluff. We are looking at the three contenders that actually matter in production right now: Kubernetes, Docker Swarm, and HashiCorp Nomad. We will judge them on performance, maintenance overhead, and why the underlying metal (specifically high-performance NVMe storage) matters more than which YAML dialect you speak.

1. Kubernetes: The 800-Pound Gorilla

Let's start with the inevitable. Kubernetes is the operating system of the cloud. It is powerful, extensible, and standard. But it is also a beast.

The hidden cost of K8s isn't the compute; it's the control plane. The heart of Kubernetes is etcd, a consistent key-value store. etcd is notoriously sensitive to disk latency. If your fsync latency spikes, your cluster leadership elections fail, and your API server starts timing out. I've seen entire production clusters in Oslo degrade because the underlying VPS provider was throttling IOPS on standard SSDs.
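
Do not take my word for it; measure. This fio benchmark mirrors the fsync test etcd's own documentation recommends (the directory here is an assumption; point it at the disk that will hold your etcd data):

fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-fsync-test

As a rule of thumb, the 99th percentile fdatasync latency should stay under 10ms, or etcd will misbehave.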

Here is a standard deployment spec. Notice the resource limits. If you don't set these, a memory leak in one pod can OOM-kill your node. In a shared hosting environment, this is fatal. On CoolVDS, where you have KVM isolation, the damage is contained, but you still need discipline.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: api-server
        image: registry.coolvds.com/api:v2.4.1
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10

Pro Tip: Always define readinessProbe and livenessProbe. Without them, K8s will route traffic to a container that is technically "running" but effectively brain-dead (e.g., stuck on a deadlocked database connection).
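
For completeness, here is a minimal livenessProbe to pair with the readinessProbe above; it assumes the same /healthz endpoint can detect a wedged process, and the thresholds are starting points, not gospel:

        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
          failureThreshold: 3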

2. Docker Swarm: The "Just Works" Option

Docker Swarm is not dead. In fact, for teams of fewer than 10 people, it is often the superior choice. It is built into the Docker engine: there is no separate binary to install, no complex CNI (Container Network Interface) plugins to configure by default, and the overlay network just works.
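
Bootstrapping a cluster is two commands (the address below is a placeholder for your manager node's private IP):

docker swarm init --advertise-addr 10.0.0.1
docker swarm join-token worker   # prints the join command to run on each worker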

Swarm's weakness is stateful workloads. Handling persistent volumes across nodes is clunky compared to K8s' CSI (Container Storage Interface). However, if you are running stateless microservices and need to scale fast, the TCO (Total Cost of Ownership) is practically zero.

Deploying a stack is as simple as a docker-compose.yml file:

version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker
    ports:
      - "80:80"
    networks:
      - webnet

networks:
  webnet:

Command to deploy: docker stack deploy -c docker-compose.yml production_stack
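
Scaling is equally terse. Swarm names services <stack>_<service>, so the web service above becomes production_stack_web:

docker service ls
docker service scale production_stack_web=10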

If latency to end users in Bergen or Trondheim is your metric, Swarm's overlay adds less networking overhead than Kubernetes' default kube-proxy iptables/IPVS chains.
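
A crude way to see that overhead on a Kubernetes worker is to count the kube-proxy chains; every Service adds rules, and large clusters accumulate thousands:

sudo iptables-save | grep -c KUBE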

3. Nomad: The UNIX Philosophy Heir

HashiCorp's Nomad is the middle ground. It is a single binary. It schedules containers, but also Java JARs, binaries, and VMs. It doesn't care. It is simpler than K8s but more flexible than Swarm.
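
You can verify the single-binary claim in under a minute with a throwaway dev-mode agent (in-memory state only, never for production):

nomad agent -dev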

Nomad shines in mixed environments. Do you have a legacy binary that needs to run alongside a Docker container? K8s forces you to containerize the binary. Nomad just runs it. Here is a typical service job, scheduling a Redis cache:

job "cache-service" {
  datacenters = ["oslo-dc1"]
  type = "service"

  group "cache" {
    count = 3
    
    network {
      port "db" {
        to = 6379
      }
    }

    task "redis" {
      driver = "docker"
      config {
        image = "redis:7.0"
        ports = ["db"]
      }
      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
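
To make the mixed-workload point concrete: a task in the same job could use the exec driver to run that legacy binary next to the Redis container. The binary path and flags here are hypothetical:

task "billing-daemon" {
  driver = "exec"

  config {
    command = "/opt/legacy/billingd"
    args    = ["--listen", ":9090"]
  }

  resources {
    cpu    = 200
    memory = 128
  }
}

Submit the job with nomad job run cache-service.nomad.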

The Infrastructure Reality: Latency and "Steal Time"

Here is the hard truth no tutorial tells you: Orchestration software cannot fix bad infrastructure.

When you run K8s or Swarm on a cheap, oversold VPS, you suffer from CPU steal time: the hypervisor makes your VM wait while it serves another tenant. For a database or an orchestration control plane, this wait time is catastrophic. It manifests as random 502 errors or leader election timeouts.

This is why we architect CoolVDS differently. We use KVM (Kernel-based Virtual Machine) with strict resource guarantees. When you buy 4 vCPUs, you get the cycles. Check your steal time right now:

top -b -n 1 | grep "Cpu(s)"

If the st value is above 0.0 for sustained periods, migrate. Your code isn't slow; your host is noisy.
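
A single top snapshot can miss intermittent steal. Watch the st column (the rightmost one) over a sustained interval instead:

vmstat 1 10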

Optimizing the Host Kernel for Containers

Regardless of which orchestrator you pick, you must tune the Linux kernel. The defaults are often set for general-purpose computing, not high-density packet switching. Add these to /etc/sysctl.conf:

# Allow IP forwarding (essential for container networking)
net.ipv4.ip_forward = 1

# Increase the number of connections
net.core.somaxconn = 4096

# Enable TCP BBR congestion control for better throughput over the internet
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# Increase file descriptors (containers eat these for breakfast)
fs.file-max = 2097152

Apply them with sysctl -p. On high-traffic nodes, these small tweaks can increase network throughput by 20-30%, with BBR delivering the biggest gains on lossy, long-haul paths.
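
Note that BBR requires kernel 4.9 or newer; on older kernels the tcp_congestion_control line will simply fail to apply. Verify it took effect:

sysctl net.ipv4.tcp_congestion_control   # should print: net.ipv4.tcp_congestion_control = bbr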

Comparison: The Decision Matrix

Feature         | Kubernetes                         | Docker Swarm                  | Nomad
Learning Curve  | Steep (months)                     | Low (days)                    | Medium (weeks)
Maintenance     | High (requires dedicated Ops)      | Low                           | Low/Medium
Scalability     | Massive (5,000+ nodes)             | Moderate (~1,000 nodes)       | Massive (10k+ nodes)
Storage         | Excellent (CSI)                    | Poor                          | Good (CSI plugins)
Best For        | Enterprise, complex microservices  | Simple web apps, small teams  | Mixed workloads, hybrid cloud

The Privacy & Local Angle

In 2023, data residency is not optional. Datatilsynet, the Norwegian Data Protection Authority, is watching. If you run your K8s cluster on a US-owned hyperscaler, you are navigating a legal minefield regarding data transfer. Hosting on European soil, with a provider like CoolVDS, immediately simplifies your GDPR compliance posture. Your data stays here. Your latency stays low.

Final Verdict

If you need to manage 50 microservices and have a team of 3 DevOps engineers, use Kubernetes. But run it on high-performance NVMe instances, or the etcd latency will kill you.

If you have a monolithic Rails/Django app and a Redis cache, use Docker Swarm. It is robust, boring, and stable.

If you are a solo developer or a small studio wanting speed and reliability without the headache, CoolVDS NVMe instances provide the raw horsepower needed for any of these choices. Don't let IO wait times destroy your application's performance.