
K8s vs. Swarm vs. Nomad: The 2022 Container Orchestration Showdown for Nordic Ops

Let’s be honest. Most of you don't need Kubernetes. You think you do because Google uses it, but you aren't Google. I’ve seen too many engineering teams in Oslo burn three months of runway building a "production-grade" K8s cluster that hosts exactly two monolithic PHP applications. It’s resume-driven development at its finest.

But sometimes, you actually do need the scale. Or the self-healing. Or the ecosystem.

As of June 2022, the landscape has shifted. Kubernetes 1.24 "Stargazer" just dropped, finally removing the dockershim (deprecated since 1.20). Docker Swarm is still refusing to die. And HashiCorp's Nomad is quietly powering massive workloads while everyone else argues about YAML indentation. I'm writing this from the perspective of a sysadmin who has had to wake up at 4 AM to fix a split-brain etcd cluster. We are going to look at these three contenders, not in a vacuum, but running on real iron: specifically, high-performance VPS infrastructure where IOPS actually matter.

The Elephant in the Server Room: Kubernetes 1.24

Kubernetes is the de facto standard. If you are hiring in Europe, you put "K8s" on the job description. But the operational overhead is massive. The recent release of 1.24 removed the dockershim. This caused panic in half the Slack channels I lurk in.

If you are running a cluster on bare metal or VPS, this means you need to be running containerd or CRI-O directly. The days of just slapping Docker on a node and letting Kubelet talk to it are over.
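A quick way to confirm what your nodes are actually running before the upgrade bites you (the socket path below is containerd's default):

# The CONTAINER-RUNTIME column should read containerd:// or cri-o://, not docker://
kubectl get nodes -o wide

# Or ask the runtime directly on the node
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version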

The Reality of Performance:
K8s is heavy. The control plane components (API server, Controller Manager, Scheduler) and especially etcd require low-latency storage. If your disk write latency spikes, etcd misses heartbeats and leader election fails. This is where cheap hosting kills you.

Pro Tip: Always benchmark your etcd storage. If fsync latency is > 10ms, your cluster will be unstable. On CoolVDS NVMe instances, we typically see fsync latencies under 2ms, which is critical for HA clusters.
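The etcd team's recommended way to measure this is fio, simulating etcd's WAL write pattern. A minimal sketch, assuming fio is installed and the target directory sits on the disk etcd will use:

mkdir -p /var/lib/etcd-bench

# --fdatasync=1 forces a sync after every write, like etcd's WAL does.
# Check the fdatasync percentiles in the output: the 99th should be well under 10ms.
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-bench --size=22m --bs=2300 --name=etcd-fsync-test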

Here is a snippet of a proper etcd tuning configuration for high-performance scenarios. Don't just stick with defaults.

# /etc/etcd/etcd.conf

# Snapshot more often than the default (100000) to keep the raft log and memory footprint small
ETCD_SNAPSHOT_COUNT=5000

# Defaults shown (milliseconds); raise both if you have network jitter (common in cross-DC setups),
# keeping the election timeout at roughly 10x the heartbeat interval
ETCD_HEARTBEAT_INTERVAL=100
ETCD_ELECTION_TIMEOUT=1000

# Ensure you are binding to the private network, not public!
ETCD_LISTEN_PEER_URLS="https://10.10.0.1:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.0.1:2379"
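After restarting etcd, verify the cluster actually came back healthy. Something like the following (add your --cacert/--cert/--key flags if the endpoints require client TLS):

# Per-member health, then raft status; watch for a stable leader and sane DB size
etcdctl --endpoints=https://10.10.0.1:2379 endpoint health
etcdctl --endpoints=https://10.10.0.1:2379 endpoint status --write-out=table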

When to use K8s:

  • You have a dedicated DevOps team of at least 3 people.
  • You need the Helm ecosystem.
  • You require complex networking (CNI) or service mesh (Istio/Linkerd).

The Undead Warrior: Docker Swarm

Everyone says Swarm is dead. Yet, Mirantis bought Docker Enterprise, and Swarm is still shipping. Why? Because it is simple. I can spin up a Swarm cluster on three CoolVDS instances in Norway in about 45 seconds.

Swarm is integrated into the Docker Engine. No extra binaries. No complex certificate setup (mutual TLS between nodes is handled and rotated for you). For a lot of Norwegian agencies managing 50 small client sites, Swarm is superior to K8s. It has lower overhead, meaning you can pack more containers onto a single VPS.
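Don't believe the 45-second claim? The entire bootstrap, assuming three nodes sharing a 10.10.0.0/24 private network, looks like this:

# On the first node (becomes a manager); advertise on the private interface
docker swarm init --advertise-addr 10.10.0.1

# init prints a join token; run the printed command on the other two nodes
docker swarm join --token <worker-token> 10.10.0.1:2377

# Back on the manager, confirm all three nodes are Ready
docker node ls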

Deploying a stack is as simple as this:

version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet

networks:
  webnet:

Then just run:

docker stack deploy -c docker-compose.yml mystack
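And verify the rollout:

# REPLICAS should read 2/2 once both tasks are scheduled
docker stack services mystack
docker service ps mystack_web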

However, Swarm struggles with stateful workloads. Persistent storage plugins are nowhere near as robust as Kubernetes CSI drivers. If you need complex databases orchestrated, look elsewhere.

The Assassin: HashiCorp Nomad

Nomad is my personal favorite for 2022. It is a single binary. It schedules containers, Java jars, and even static binaries. It is simpler than K8s but scales better than Swarm.

The beauty of Nomad is its integration with Consul (service discovery) and Vault (secrets). It follows the Unix philosophy: do one thing well. Nomad just schedules.

Here is what a Nomad job looks like compared to K8s YAML hell:

job "api-service" {
  datacenters = ["oslo-dc1"]
  type = "service"

  group "api" {
    count = 3

    network {
      port "http" {
        to = 8080
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "my-registry.com/api:1.4.2"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
      
      service {
        name = "api-http"
        port = "http"
        tags = ["urlprefix-/api"]
        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
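Submitting it is equally terse (assuming the file above is saved as api-service.nomad):

# Dry-run to see the scheduler's placement plan, then submit and watch
nomad job plan api-service.nomad
nomad job run api-service.nomad
nomad job status api-service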

Notice the datacenters = ["oslo-dc1"]? This is crucial for GDPR compliance. You need to ensure your workloads are landing exactly where the law says they should. With Schrems II invalidating Privacy Shield, guaranteeing your data stays on servers in Europe (like Norway) is not just a technical requirement, it's a legal one.

Infrastructure Matters: The Underlying Iron

Orchestrators are just control loops. They rely on the kernel and the hardware. A common issue I see in virtualized environments is "CPU Steal". This happens when the host node is oversold, and your "2 vCPU" VPS is fighting for cycles with 50 other neighbors.

In container orchestration, latency is death. If your Kubelet cannot report status to the API server because the CPU is stolen by a noisy neighbor, the node gets marked NotReady, and pods are evicted. This causes a thundering herd problem.
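Checking whether you are a victim takes ten seconds:

# The "st" column is the percentage of CPU time stolen by the hypervisor.
# On a healthy host it sits at or near zero; sustained values above ~2-3% mean an oversold node.
vmstat 1 5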

Comparison of Requirements:

Feature              Kubernetes            Docker Swarm   Nomad
Min. RAM             2GB (control plane)   512MB          256MB
Storage IOPS         Critical (etcd)       Moderate       Low/Moderate
Complexity           High                  Low            Medium
Latency Sensitivity  High                  Medium         Low

At CoolVDS, we specifically tune our KVM hypervisors to minimize CPU steal. We don't oversell resources. When you deploy a Kubernetes cluster on our managed hosting or VPS, you are getting dedicated NVMe throughput. This is why our VPS Norway instances are preferred for heavy etcd workloads. We connect directly to NIX (Norwegian Internet Exchange), ensuring that traffic between your nodes and your Norwegian users is effectively instant.

Technical Deep Dive: Monitoring the Cluster

Regardless of your choice, you need visibility. In 2022, Prometheus + Grafana is the standard, but how you scrape targets depends on the orchestrator.

If you are running Swarm, the traditional answer was an exporter sidecar. On K8s, use the Prometheus Operator.
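That said, Prometheus has shipped native Swarm service discovery since v2.20, which makes the sidecar optional for many setups. A minimal sketch of the scrape config (the socket path and meta label are the upstream defaults):

# prometheus.yml
scrape_configs:
  - job_name: 'swarm-tasks'
    dockerswarm_sd_configs:
      - host: unix:///var/run/docker.sock
        role: tasks
    relabel_configs:
      # Only keep tasks the scheduler actually wants running
      - source_labels: [__meta_dockerswarm_task_desired_state]
        regex: running
        action: keep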

Here is a quick check to see if your node is experiencing I/O pressure (vital for K8s stability):

iostat -x 1 10

Look at the %iowait column. If this is consistently above 5% on an idle cluster, your hosting provider is failing you. Move to NVMe storage immediately.

Conclusion

If you are building a bank, use Kubernetes. If you are building a startup MVP, use Swarm or just docker-compose. If you want high efficiency and a modern binary workflow, use Nomad.

But remember: software cannot fix broken hardware. A slow disk will kill a Kubernetes cluster faster than a bad config file. Ensure your foundation is solid. Whether you need DDoS protection for your ingress or low latency for your database, the underlying VPS is the most critical architectural decision you will make.

Don't let slow I/O kill your orchestration. Deploy a high-performance test instance on CoolVDS in 55 seconds and see the difference real NVMe makes.