Your Containers Are Leaking. Here is How to Plug the Holes.
Let's be honest. Most of you are still running USER root in your Dockerfiles. I see it in audits every week. You pull a Node.js image, slap your code in, expose port 80, and push to production. It works. The site loads fast. The client is happy.
Then a zero-day hits a library you forgot to update, an attacker gains shell access, and because the container process is root, they break out to the host. Suddenly, your "isolated" environment is just a playground for a crypto-miner or, worse, a data exfiltration script targeting your customer database.
In May 2023, security isn't just about firewalls; it's about reducing the blast radius. If you are hosting in Norway, you also have Datatilsynet breathing down your neck regarding GDPR and Schrems II. You cannot afford loose permissions.
Here is the reality of container hardening, stripped of the marketing buzzwords. We are going to look at immutable infrastructure, network policing, and why the underlying virtualization technology (your VPS provider) is the final line of defense.
1. The Root Problem (Literally)
By default, Docker containers run as root. This is convenient for development but catastrophic for production. If an attacker compromises the process, they have root privileges inside the namespace. If they find a kernel vulnerability (like Dirty Pipe, CVE-2022-0847, from last year), they own the node.
The Fix: Create a dedicated, unprivileged user in your Dockerfile. Never let PID 1, your application process, run as root.
# The Wrong Way
FROM node:18-alpine
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
Change it to this:
# The Battle-Hardened Way
FROM node:18-alpine
# Create a group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY . .
# Switch context
USER appuser
CMD ["node", "index.js"]
Pro Tip: When using CoolVDS KVM instances, we recommend ensuring your base OS (Ubuntu 22.04 or Debian 11) has the latest kernel patches applied immediately. We handle the hypervisor layer, but the guest OS kernel is your domain.
2. Locking Down the Runtime: Kubernetes SecurityContext
If you are orchestrating with Kubernetes (anything from v1.24 to the current v1.27), you must enforce security contexts. PodSecurityPolicies (PSP) are dead; they were removed in v1.25. You should now be using Pod Security Standards (PSS) or an admission controller like Kyverno or OPA Gatekeeper.
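With the built-in Pod Security Admission controller (stable since v1.25), enforcement is just a namespace label. A minimal sketch, assuming a namespace named production (adjust the name and pinned version to your cluster):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Reject any pod that violates the "restricted" Pod Security Standard
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.27
    # Also surface warnings at kubectl time so teams see violations early
    pod-security.kubernetes.io/warn: restricted
```

Pin enforce-version to your cluster's minor version so a Kubernetes upgrade doesn't silently change what "restricted" means.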
But at the very minimum, configure the securityContext in your Deployment manifest. This effectively tells the kernel: "Do not let this pod do anything clever."
The "Paranoid" Configuration
This configuration drops all Linux capabilities and forces a read-only root filesystem. If an attacker gets in, they cannot write a script to disk and execute it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-backend
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 2000
      containers:
        - name: api
          image: my-registry/api:v2.1
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
          volumeMounts:
            - name: tmp-volume
              mountPath: /tmp
      volumes:
        - name: tmp-volume
          emptyDir: {}
Notice the readOnlyRootFilesystem: true flag? Most applications crash when you enable this because they try to write logs or temp files. Map an emptyDir volume to /tmp (as shown above) to give them a scratchpad without compromising the root filesystem.
3. Supply Chain: Trust Nothing
You aren't just deploying your code. You are deploying the entire OS userspace from the base image. A generic FROM ubuntu:latest ships a full distribution's worth of packages, shells, and libraries: attack surface you will never audit.
In 2023, we have excellent tools to scan for CVEs before deployment. Trivy by Aqua Security is currently the gold standard for speed and accuracy.
Run this in your CI/CD pipeline:
trivy image --exit-code 1 --severity HIGH,CRITICAL coolvds-app:latest
If you see a wall of red text, do not deploy. Switch to Distroless images (Google's minimal images that contain only your application and its runtime dependencies, no shell, no package manager) or Alpine Linux.
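A multi-stage build is the practical way to get there: the build stage keeps npm and the toolchain, and only the application ships. A sketch using Google's Node.js 18 distroless image (verify the exact tag against the distroless repository before adopting it):

```dockerfile
# Build stage: full toolchain, never shipped to production
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: no shell, no package manager, runs as non-root by default
FROM gcr.io/distroless/nodejs18-debian11
WORKDIR /app
COPY --from=build /app /app
# Distroless nodejs images use node as the entrypoint, so CMD is just the script
CMD ["index.js"]
```

Note there is no RUN in the final stage: with no shell in the image, there is nothing for a RUN instruction (or an attacker) to execute.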
4. The Infrastructure Layer: Why KVM beats LXC/OpenVZ
This is where many DevOps engineers fail. They harden the container but run it on cheap, shared-kernel hosting.
If you use a VPS provider that relies on container-based virtualization (like OpenVZ or LXC), your "server" is effectively just a container sharing the host's kernel with hundreds of other customers. If a neighbor triggers a kernel panic or exploits a kernel bug, your data is at risk.
This is why CoolVDS exclusively uses KVM (Kernel-based Virtual Machine).
With KVM, you get hardware-level virtualization. Your kernel is yours. The memory is allocated to you. It provides a hard isolation boundary that containerization technologies cannot match alone. When handling data for Norwegian clients, especially under the scrutiny of GDPR, logical separation is often not enough. You need the physical-equivalent isolation that a hypervisor provides.
5. Network Policies: The Firewall Inside the Cluster
By default, all pods in a Kubernetes cluster can talk to all other pods. A compromised frontend can scan your database directly.
Implement a default deny policy. Whitelist traffic explicitly.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
Then, allow access strictly where needed:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
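One gotcha: the default-deny policy above also blocks egress DNS, so pods can no longer resolve service names and everything appears broken. A sketch that re-allows DNS to kube-dns (the namespace and pod labels shown are the common defaults, but they can differ per distribution, so verify yours):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        # kube-system carries this well-known label on Kubernetes v1.21+
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```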
6. Local Compliance & Latency
Security is also about availability and legal compliance. Hosting your containers in a datacenter in Frankfurt or Amsterdam is fine, but if your user base is in Oslo or Bergen, you are fighting physics. Latency matters.
Furthermore, the Norwegian Data Protection Authority (Datatilsynet) has become increasingly strict following the Schrems II ruling. Transferring personal data to US-owned cloud providers requires complex transfer impact assessments (TIAs). Using a Nordic-centric provider like CoolVDS simplifies this. Your data stays under local jurisdiction, protected by strong Norwegian privacy laws, running on NVMe storage that saturates the I/O bus.
Final Checklist for May 2023 Deployment
- User: Running as non-root (UID > 1000).
- Filesystem: Read-only root where possible.
- Resources: Limits defined (CPU/RAM) to prevent DoS.
- Image: Scanned with Trivy or Grype.
- Host: KVM-based Virtualization (CoolVDS) for kernel isolation.
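The resources line in that checklist maps to a few lines in the container spec. A sketch with placeholder numbers (tune requests and limits to your actual workload profile):

```yaml
containers:
  - name: api
    image: my-registry/api:v2.1
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```

Without memory limits, a single leaking pod can starve every other workload on the node, turning one bug into a cluster-wide outage.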
Container security is a process, not a toggle switch. It requires vigilance. But starting with a secure foundation of proper configuration and robust infrastructure solves 90% of the problems before they happen.
Don't let a misconfigured YAML file be the reason you're waking up at 3 AM. Secure your stack. Deploy your next hardened cluster on CoolVDS and experience the stability of true KVM isolation combined with local low-latency performance.