The Myth of Isolation
Stop me if you've heard this one: "It's secure because it's in a container." I hear this weekly, usually right before I show a CTO how easy it is to escape a privileged container and mount the host filesystem. In Norway, where the Datatilsynet (Data Protection Authority) watches GDPR compliance like a hawk, relying on default Docker configurations is not just negligent; it is a liability.
As of March 2024, the landscape has shifted. The recent CVE-2024-21626 (Leaky Vessels) vulnerability in runc proved that the barrier between container and host is paper-thin. If you are running high-load workloads in Oslo or dealing with sensitive EU data, you need to harden your stack from the kernel up.
The CoolVDS Reality Check: Containers share the host kernel. If you run containers on a cheap, container-based VPS (like OpenVZ), a kernel panic in one tenant can bring down the whole node. This is why CoolVDS exclusively uses KVM (Kernel-based Virtual Machine) virtualization. You get a dedicated kernel. Your neighbors cannot crash your stack.
1. Stop Running as Root (Seriously)
By default, a process inside a Docker container runs as root. If an attacker compromises that process, they are root. Docker does not enable user-namespace remapping out of the box, so UID 0 inside the container is UID 0 on the host, and a breakout hands the attacker root on your node. It is the most common misconfiguration I see in audits.
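If you also control the Docker host, turn on user-namespace remapping at the daemon level so that container "root" maps to an unprivileged host UID. A minimal sketch, assuming a systemd-based host with no existing /etc/docker/daemon.json:
# Map container root to the unprivileged "dockremap" UID range on the host.
# Caution: this overwrites any existing daemon.json; merge by hand if you have one.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker

That is host-level defense in depth; the image itself should still drop root.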
You must force a non-privileged user ID. Here is how you do it in a standard Dockerfile:
FROM node:20-alpine
# Create a group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Tell Docker to switch to this user
USER appuser
WORKDIR /home/appuser
COPY . .
CMD ["npm", "start"]This simple change mitigates a vast class of potential breakouts. If code injection occurs, the attacker finds themselves trapped as appuser with limited permissions.
2. Immutable Infrastructure: Read-Only Filesystems
If an attacker gets in, their first move is usually to download a payload or modify a binary. Make that impossible. Configure your containers to run with a read-only root filesystem. This forces your application to be stateless and write only to mounted volumes or temporary directories (tmpfs).
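On plain Docker, this is two flags on docker run; a sketch, again using the placeholder my-app image:
# Root filesystem is mounted read-only; only the tmpfs at /tmp is writable
docker run --read-only --tmpfs /tmp:rw,noexec,nosuid my-app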
In Kubernetes (v1.29), this is defined in the securityContext:
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  containers:
  - name: my-app
    image: my-app:1.2.0
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1000
    volumeMounts:
    - mountPath: /tmp
      name: tmp-volume
  volumes:
  - name: tmp-volume
    emptyDir: {}

If your application tries to apt-get install or wget a rootkit into /bin, it fails immediately. The logs will scream, and you will know you are under attack.
3. The Supply Chain: Trust Nothing
You are likely pulling images from Docker Hub. Do you know what is in them? In 2024, supply chain attacks are the primary vector. Malicious actors push "typosquatting" images (e.g., nignx instead of nginx) loaded with cryptominers.
Integrate scanning into your CI/CD pipeline before the image ever hits your CoolVDS instance. Tools like Trivy are essential here.
# Install Trivy (Ubuntu/Debian)
wget https://github.com/aquasecurity/trivy/releases/download/v0.49.1/trivy_0.49.1_Linux-64bit.deb
sudo dpkg -i trivy_0.49.1_Linux-64bit.deb
# Scan your image before deployment
trivy image --severity HIGH,CRITICAL my-app:latest

If you see Critical CVEs, the build fails. Simple.
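To make the pipeline actually break instead of just printing a report, use Trivy's --exit-code flag; a sketch for a generic CI step:
# Non-zero exit status fails the CI job when HIGH or CRITICAL findings exist
trivy image --exit-code 1 --severity HIGH,CRITICAL my-app:latest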
4. Capabilities: Drop 'Em All
Linux "Capabilities" break down the power of root into distinct privileges. By default, Docker grants roughly 14 capabilities, including NET_RAW (ping) and CHOWN. Most web apps need exactly zero of these.
Adopt a "deny-all, permit-some" approach. We drop all capabilities and only add back what is strictly necessary. This significantly reduces the attack surface.
securityContext:
  capabilities:
    drop:
    - ALL
    add:
    - NET_BIND_SERVICE # Only if binding to ports < 1024

5. Network Policies: The Forgotten Firewall
In a default Kubernetes cluster, every pod can talk to every other pod. If your frontend is compromised, the attacker can port scan your database directly. This "flat network" is a security nightmare.
Use NetworkPolicies to lock this down. Here is a policy that denies all ingress traffic by default, forcing you to whitelist connections.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

This ensures that if one pod is compromised, lateral movement is blocked by default. Your database should only accept connections from the API pod, not the public internet or a compromised logging agent.
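To illustrate the whitelisting this forces, here is a sketch of an allow rule that lets only the API pods reach the database on 5432; the app: api and app: db labels (and the Postgres port) are assumptions about your own setup:
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 5432
EOF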
Infrastructure as the Final Defense
Software configuration only goes so far. Eventually, you rely on the kernel. This is where the choice of hosting provider becomes a security decision.
Shared hosting or weak virtualization (LXC/OpenVZ) exposes you to "noisy neighbor" attacks and shared kernel exploits. If a neighbor on the same physical host triggers a kernel race condition, your data could be exposed.
At CoolVDS, we engineer for isolation. Our NVMe VPS instances run on KVM. This means you have your own kernel, your own memory space, and hardware-level virtualization extensions enabled. We combine this with low-latency connectivity to NIX (Norwegian Internet Exchange), ensuring that your TLS handshakes don't suffer from network jitter.
Quick Checklist for 2024 Deployments:
- Audit: Run kubectl-who-can to audit RBAC permissions (example below).
- Monitor: Deploy Falco to detect runtime anomalies (shell spawning in containers).
- Update: Patch your nodes. The "Leaky Vessels" runc patch must be applied to the host OS.
- Isolation: Ensure your underlying VPS uses KVM, not container virtualization.
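For the audit step, Aqua Security's kubectl-who-can plugin (installable via krew) answers the question directly. Two representative queries, as a sketch:
# Who can read Secrets in the current namespace?
kubectl who-can get secrets
# Who can create pods, and therefore potentially schedule privileged workloads?
kubectl who-can create pods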
Security is not a product; it is a discipline. It requires constant vigilance, patching, and the refusal to accept default settings. Don't let a misconfigured YAML file be the reason you have to report a breach to Datatilsynet.
Ready to build on a foundation that respects your security? Deploy a hardened, KVM-based instance on CoolVDS today and get root (the safe way) in under 55 seconds.