Container Security: Stop Leaking Data in Production
Let’s be honest: default container configurations are a security nightmare. If you are deploying Docker containers with standard settings, you aren't deploying infrastructure; you're deploying a playground for privilege escalation attacks. I have spent the last decade cleaning up after developers who thought chmod 777 was a valid fix for permission errors.
In 2025, the threat landscape has shifted. We aren't just worried about script kiddies anymore; we are dealing with automated supply chain injections and sophisticated runtime exploits. If you are operating out of Norway or serving European clients, the stakes are higher. The Datatilsynet (Norwegian Data Protection Authority) does not care that you forgot to drop capabilities; they care that user data was exposed. Here is how to lock it down effectively.
1. The Root Problem: UID 0 is a Liability
The most persistent sin in container orchestration is running processes as root. By default (that is, without user namespaces), UID 0 inside a container is the same UID 0 the host kernel sees; namespaces and cgroups isolate resources, but they are not a privilege boundary. If a vulnerability allows an attacker to break out of the container (container escape), they own your node.
You must enforce a non-root user in your Dockerfile. Do not rely on the runtime to do this for you.
# WRONG
FROM node:22-alpine
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
# RIGHT
FROM node:22-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY . .
USER appuser
CMD ["node", "index.js"]Pro Tip: On CoolVDS KVM instances, we recommend mapping container UIDs to unprivileged host UIDs using user namespaces (userns) for an extra layer of isolation, effectively neutralizing most kernel-level escape vectors.
2. Runtime Hardening: Drop Capabilities
The Linux kernel divides privileges into distinct units called capabilities. By default, Docker grants a container 14 capabilities, including NET_RAW and SETUID. Most web applications need exactly zero of these.
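Shedding them in plain Docker takes only a couple of run-time flags; a minimal sketch (the image name is a placeholder):
# Drop every capability, re-add only the one needed to bind ports below 1024
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE --read-only my-registry/app:v1.4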
Adopt a whitelist approach: drop everything, then add back only what is strictly necessary. Here is how a secure Kubernetes securityContext looks in 2025:
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  containers:
  - name: my-app
    image: my-registry/app:v1.4
    securityContext:
      runAsNonRoot: true
      runAsUser: 1001
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
Using readOnlyRootFilesystem: true is painful during development but critical for production. It prevents an attacker from writing malicious binaries or modifying configurations even if they gain shell access.
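A read-only root will break anything that writes to disk, so give the app explicit scratch space instead of giving up on the setting. A sketch of the extra fields for the Pod above, assuming the app only writes to /tmp (the path is an assumption):
    # container level, alongside securityContext
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  # pod level, under spec
  volumes:
  - name: tmp
    emptyDir: {}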
3. Supply Chain: Trust Nothing
In 2025, pulling latest from Docker Hub is professional negligence. You need to pin your images by digest and scan every layer. Tools like Trivy or Grype should block your CI/CD pipeline whenever high-severity CVEs are found.
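A sketch of what that gate looks like with Trivy (the image name is illustrative; tune the severity threshold to your risk appetite):
# Fail the pipeline when HIGH or CRITICAL CVEs are found
trivy image --exit-code 1 --severity HIGH,CRITICAL my-registry/app:v1.4
And pin base images by digest rather than tag (digest left as a placeholder here):
FROM node:22-alpine@sha256:<digest>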
Implementing Image Signing
Ensure your images are signed. We use Cosign for this.
# Generate a key pair
cosign generate-key-pair
# Sign the image
cosign sign --key cosign.key my-registry/my-app:v1.4
# Verify before deployment
cosign verify --key cosign.pub my-registry/my-app:v1.4
4. The Infrastructure Layer: Why "Where" Matters
You can have the most hardened Kubernetes manifest in the world, but if your underlying infrastructure is built on oversold, shared-kernel virtualization (like OpenVZ/LXC), you are building a fortress on a swamp. Shared kernels mean a kernel panic or exploit in a neighbor's container can take down your entire stack.
This is where hardware isolation becomes non-negotiable. At CoolVDS, we utilize KVM (Kernel-based Virtual Machine) exclusively. This ensures that your OS kernel is completely distinct from the hypervisor and other tenants. When you combine this with our local NVMe storage arrays in Oslo, you get two things: strict security boundaries and I/O latency low enough to make your database weep with joy.
Network Policies & Latency
If you are serving Nordic customers, data residency is key. Keeping traffic local within Norway (via NIX - Norwegian Internet Exchange) reduces the hops your data takes across the public internet, shrinking the attack surface.
Below is a default "Deny All" NetworkPolicy. Apply this, then whitelist traffic explicitly.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
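From there, whitelist specific flows. A sketch, assuming a backend pod labeled app: my-app that should only accept TCP 8080 from pods labeled role: frontend (the labels and port are illustrative):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-app
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080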
5. Compliance: GDPR & Schrems II
Since the fallout of Schrems II and subsequent rulings, storing EU residents' personal data with US-owned cloud providers has become a legal minefield. Using a Norwegian provider like CoolVDS simplifies your compliance posture immediately. Your data stays in Oslo. It is governed by Norwegian law.
When auditing your stack, remember to check where your container registry lives. If your runtime is in Oslo but your registry is in Virginia, you are still exporting data.
Conclusion
Security is not a product; it is a process of reducing risk to an acceptable level. By stripping privileges, scanning images, and hosting on isolated KVM infrastructure, you turn your containers from soft targets into hardened units.
Do not let a shared kernel vulnerability compromise your business. Deploy your secure container stack on CoolVDS today and experience true isolation with local Norwegian latency.