Container Security in 2025: Hardening K8s and Docker for Norwegian Compliance
Let’s be brutally honest: "It works on my machine" is the most dangerous phrase in DevOps.
I recently audited a Kubernetes cluster for a mid-sized logistics firm in Oslo. They were proud of their CI/CD velocity, deploying thirty times a day. But when I peeled back the layers, I found a nightmare. Their backend microservices were running as root. Their base images were two years old, riddled with CVEs. And worst of all? They were mounting the host Docker socket into a monitoring container.
That is not a deployment; that is a welcome mat for ransomware. In 2025, with automated botnets scanning for exposed runtimes faster than you can type kubectl apply, security isn't a feature. It is survival.
This guide cuts through the noise. We aren't discussing theoretical attack vectors. We are implementing hard controls to lock down your containers, satisfy Datatilsynet (the Norwegian Data Protection Authority), and ensure your infrastructure on CoolVDS stands firm while others crumble.
1. The Foundation: Minimal Base Images
Your attack surface is directly proportional to the size of your base image. If you are still using FROM ubuntu:24.04 for a Go binary, you are shipping a full OS just to run a 10MB executable. That is madness.
Switch to Distroless or Alpine images. In 2025, Google's Distroless images are the gold standard for production. They contain no shell, no package manager, and no bloat. If an attacker manages to inject code, they cannot just run /bin/bash because it doesn't exist.
Example: Moving to a Multi-Stage Distroless Build
Here is a production-ready Dockerfile pattern we use for high-security endpoints hosted on CoolVDS NVMe instances:
```dockerfile
# Stage 1: Builder
FROM golang:1.24-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Compile statically to avoid libc dependencies
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# Stage 2: Runner
FROM gcr.io/distroless/static-debian12
WORKDIR /
COPY --from=builder /app/main .
# Never run as root. Distroless provides a 'nonroot' user (uid 65532).
USER 65532:65532
ENTRYPOINT ["/main"]
```
This reduces the image size from roughly 700MB to around 25MB and eliminates the bulk of OS-level vulnerabilities: there is no shell, no package manager, and almost no OS surface left to exploit.
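You can verify the shell really is gone. The image name below is illustrative; substitute your own:

```shell
# Attempting to use a shell as the entrypoint fails,
# because /bin/sh does not exist in the distroless image.
docker run --rm --entrypoint /bin/sh my-registry/secure-api:v1.2.0
# Fails with a "no such file or directory" error for /bin/sh.
```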
2. Runtime Defense: Dropping Capabilities
By default, Docker grants a container a fixed set of Linux capabilities. Most workloads need almost none of them: a web server does not need NET_ADMIN (network manipulation) or SYS_TIME (changing the system clock).
The Linux kernel capabilities system allows us to fine-tune privileges. The strategy is simple: Drop ALL, then add back only what is strictly necessary.
Pro Tip: When hosting on CoolVDS, you have full KVM isolation. However, if a container escapes to the host, it hits the kernel. Dropping capabilities makes that escape useless even if a vulnerability exists.
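For plain Docker, outside Kubernetes, the same drop-ALL strategy can be sketched on the command line. The image name is illustrative:

```shell
# Drop every capability, then add back only the one this service needs.
# Omit --cap-add entirely if the process binds to a port above 1024.
docker run --rm \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt no-new-privileges:true \
  my-registry/secure-api:v1.2.0
```

The no-new-privileges option is the Docker equivalent of allowPrivilegeEscalation: false in Kubernetes.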
Here is how you configure this in a Kubernetes deployment.yaml. Pay attention to the securityContext:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-api
  namespace: production
spec:
  selector:
    matchLabels:
      app: secure-api
  template:
    metadata:
      labels:
        app: secure-api
    spec:
      containers:
        - name: api
          image: my-registry/secure-api:v1.2.0
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 10001
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE # Only if binding to a port < 1024, otherwise drop this too
          volumeMounts:
            - name: temp-volume
              mountPath: /tmp
      volumes:
        - name: temp-volume
          emptyDir: {} # Required because the root fs is read-only
```
Note the readOnlyRootFilesystem: true directive. This prevents attackers from modifying binaries or writing malicious scripts to disk. If your app needs to write logs or temp files, give it a specific emptyDir volume mounted at /tmp.
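The same idea translates to plain Docker: a read-only root filesystem with a dedicated writable tmpfs for scratch files. Again, the image name is illustrative:

```shell
# Read-only root filesystem; only /tmp is writable,
# and even there nothing can be executed (noexec).
docker run --rm \
  --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=64m \
  my-registry/secure-api:v1.2.0
```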
3. Supply Chain Security: SBOMs and Signing
In the wake of the major supply chain attacks of the early 2020s, knowing what is in your software is mandatory. European compliance standards (like the Cyber Resilience Act) are pushing hard for this.
You must generate a Software Bill of Materials (SBOM) for every release. We use Syft and Grype in our CI pipelines.
Generating an SBOM:
```shell
syft packages docker:my-app:latest -o spdx-json > sbom.json
```
Scanning for Vulnerabilities:
```shell
grype sbom:./sbom.json --fail-on medium
```
If you aren't gating your deployments based on these scans, you are flying blind.
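Wired into a CI pipeline, the gate is a few lines of shell. The image name and severity threshold below are illustrative:

```shell
#!/bin/sh
set -e  # any failing step aborts the pipeline

IMAGE="my-registry/secure-api:v1.2.0"

# Generate the SBOM and keep it as a build artifact...
syft packages "docker:${IMAGE}" -o spdx-json > sbom.json

# ...then fail the build if anything medium-severity or worse is found.
grype sbom:./sbom.json --fail-on medium
```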
4. The Infrastructure Layer: Why "Shared" Kernels Fail
This is where the hardware meets the compliance requirements. In cheap container hosting (often based on OpenVZ or LXC), all customers share the same host kernel. If a neighbor triggers a kernel panic or exploits a kernel vulnerability, your containers are compromised.
For data falling under GDPR jurisdiction—especially here in Norway where Schrems II dictates strict data sovereignty—shared kernels are a liability. You need hardware virtualization.
This is why we built CoolVDS on KVM (Kernel-based Virtual Machine). Each VPS gets its own dedicated kernel.
| Feature | Standard VPS (LXC/OpenVZ) | CoolVDS (KVM) |
|---|---|---|
| Kernel Isolation | Shared (High Risk) | Dedicated (High Security) |
| Docker Compatibility | Limited / Emulated | Native / Full Support |
| Performance Stability | Noisy Neighbors affect you | Guaranteed Resources |
| eBPF Support | Often Disabled | Full Support |
When you run Kubernetes or Docker on CoolVDS, you aren't fighting for kernel resources with a Minecraft server next door. You have a private lane.
5. Network Policies: The Firewall Inside the Cluster
By default, all pods in a Kubernetes cluster can talk to all other pods. If an attacker breaches your frontend, they can pivot directly to your database pod.
NetworkPolicies are your internal firewall. Block everything, then allow specific paths.
Example: Deny-All Default Policy
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {} # Selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```
Once applied, nothing moves. You then explicitly allow your frontend to talk to your backend on port 8080:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```
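One gotcha with a default-deny egress policy: it also blocks DNS, so pods can no longer resolve service names. You need an explicit egress allowance for cluster DNS. A sketch follows; the kube-dns label is the common default, but selectors vary by distribution:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {} # All pods in the namespace may resolve DNS
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```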
6. Real-Time Threat Detection with Falco
Static analysis is great, but what happens when an exploit occurs at runtime? This is where eBPF shines. Tools like Falco use it to listen to kernel syscalls in real time. It's like a security camera for your CPU.
Since CoolVDS instances support custom kernels and eBPF natively, you can deploy Falco to detect shell spawning in containers.
Example Falco Rule:
```yaml
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint for a container.
  condition: >
    spawned_process and container
    and shell_procs and proc.tty != 0
    and container_entrypoint
  output: >
    Shell executed in container
    (user=%user.name container_id=%container.id image=%container.image.repository)
  priority: WARNING
```
If this rule triggers, you get an alert instantly. You can even hook this into a serverless function to kill the pod immediately.
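Falco can emit each alert as a single JSON object, which makes the consuming side of that hook straightforward. A minimal sketch, assuming Falco's standard JSON output format; the sample alert and the function name are illustrative:

```python
import json

def parse_falco_alert(line: str) -> dict:
    """Extract the fields a response hook cares about from one Falco JSON alert."""
    alert = json.loads(line)
    return {
        "rule": alert["rule"],
        "priority": alert["priority"],
        "container_id": alert.get("output_fields", {}).get("container.id"),
    }

# A sample alert, shaped like Falco's JSON output for the rule above.
sample = json.dumps({
    "rule": "Terminal shell in container",
    "priority": "Warning",
    "output": "Shell executed in container (user=root ...)",
    "output_fields": {"container.id": "abc123def456", "user.name": "root"},
})

parsed = parse_falco_alert(sample)
print(parsed["rule"])          # Terminal shell in container
print(parsed["container_id"])  # abc123def456
```

From here, the hook would look up the pod owning that container ID and delete it; that part depends on your cluster API client and is left out of the sketch.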
Conclusion
Security isn't a product you buy; it's a discipline you practice. In 2025, the available tools, from distroless images to eBPF monitoring, make it possible to harden containers to a degree that was impractical just a few years ago.
But software hardening is meaningless if the foundation is shaky. You need low latency, data sovereignty in Norway, and the isolation of KVM virtualization.
Don't let shared kernels compromise your compliance. Deploy a hardened K3s cluster on a CoolVDS NVMe instance today and own your infrastructure down to the last byte.