Container Security Is a Minefield: Hardening Your Stack for the Nordic Threat Landscape
I once watched a financial tech startup in Oslo burn a week of engineering time responding to a breach that didn't touch their application code. The attacker didn't exploit a SQL injection or a buffer overflow in their custom Go binary. They simply pivoted through a privileged sidecar container that had read-write access to the host filesystem. The startup had focused entirely on code quality but ignored the runtime environment.
In 2025, assuming containers are secure by default is negligence. With the NIS2 directive now fully enforceable across Europe (and by extension affecting Norwegian compliance standards via the EEA), the "it works on my machine" excuse is a legal liability. If you are running containers on bare metal or generic cloud instances without hardening, you are essentially handing root access to anyone who finds a vulnerability in your dependencies.
Here is how we lock down infrastructure, from the Dockerfile to the kernel level, keeping your data compliant with Datatilsynet and your latency to NIX (Norwegian Internet Exchange) negligible.
1. The Base Image Trap: Stop Using :latest
The biggest lie in DevOps is that official images are secure. They are general-purpose, meaning they contain shells, package managers, and binaries you don't need. An attacker loves a container with curl and bash installed. It makes data exfiltration trivial.
Switch to Distroless or minimal Alpine images. We use multi-stage builds to compile in a heavy environment and run in a skeletal one.
The Fix: Multi-Stage Builds
# Build Stage
FROM golang:1.24-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o secure-service
# Production Stage
# Using Google's distroless static image - no shell, no package manager
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/secure-service /secure-service
USER nonroot:nonroot
ENTRYPOINT ["/secure-service"]
Pro Tip: Never deploy without a SHA digest. Tags like v1.2 are mutable. A malicious actor can push a compromised image to that tag. Pinning the SHA256 digest ensures immutability.
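To see what that means in practice, here is one way to resolve a tag to its digest with the standard Docker CLI and then pin it in the FROM line (the digest shown is a placeholder, not a real value):
# Pull the tag once, then read the immutable digest Docker recorded for it
docker pull gcr.io/distroless/static-debian12:latest
docker inspect --format='{{index .RepoDigests 0}}' gcr.io/distroless/static-debian12:latest
# Output looks like: gcr.io/distroless/static-debian12@sha256:<64 hex characters>
# Reference that digest in the Dockerfile instead of the tag:
# FROM gcr.io/distroless/static-debian12@sha256:<digest-from-above>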
2. Runtime Hardening: Drop Those Capabilities
By default, Docker containers retain too many Linux capabilities. Does your Node.js API really need CAP_NET_RAW (ability to craft raw packets)? Absolutely not.
In a recent migration for a healthcare client requiring strict GDPR adherence, we enforced a "deny-all" policy on capabilities. We drop everything and only add back what is strictly necessary. This significantly reduces the blast radius if a container is compromised.
Docker Run Example
docker run --rm -it \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  --tmpfs /tmp \
  --security-opt=no-new-privileges \
  my-secure-app:stable
This command does three critical things:
- Drops all capabilities, adding back only NET_BIND_SERVICE so the process can bind to privileged ports.
- Makes the root filesystem read-only (with a tmpfs mounted at /tmp for scratch space), so attackers cannot download malware or modify configs.
- Prevents privilege escalation via no-new-privileges.
3. Kubernetes SecurityContext: The 2025 Standard
If you are orchestrating with Kubernetes (v1.32 is the current stable target), you must define the securityContext at the Pod level. Leaving it empty means your pod runs as whatever user the image specifies, which for most official images is root. In a shared kernel environment, running as root inside a container is dangerously close to running as root on the host.
Here is a snippet from a production manifest we use for high-security workloads hosted on CoolVDS NVMe instances:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-processor
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: processor
          image: registry.coolvds.com/payment-app@sha256:45b23...
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
4. The Infrastructure Factor: Why "Shared Hosting" Kills Security
Software-level restrictions are good, but hardware isolation is better. Containers share the host kernel. If a vulnerability exists in the Linux kernel (like the Dirty Pipe exploit, CVE-2022-0847), a container escape is possible.
This is where your choice of VPS provider becomes a security decision. Cheap "container VPS" providers often use LXC or OpenVZ, where kernel sharing is absolute. At CoolVDS, we strictly use KVM (Kernel-based Virtual Machine) virtualization. Each CoolVDS instance has its own isolated kernel.
| Feature | Standard Container VPS | CoolVDS KVM Instance |
|---|---|---|
| Kernel Isolation | Shared (High Risk) | Dedicated (High Security) |
| I/O Performance | Noisy Neighbor Issues | Dedicated NVMe |
| Custom Modules | Restricted | Full Control (Load AppArmor/SELinux) |
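If you want to verify what kind of isolation a provider actually gives you, you can check from inside the guest itself. A minimal sketch, assuming a systemd-based distribution:
# Prints the detected virtualization technology from inside the guest
systemd-detect-virt
# "kvm" means a hardware-virtualized guest with its own kernel;
# "lxc" or "openvz" means a container sharing the host kernel
# Fallback without systemd: the hypervisor CPU flag is present on virtualized guests
grep -m1 -o hypervisor /proc/cpuinfo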
Furthermore, scanning containers for vulnerabilities requires heavy disk I/O. When we run trivy or grype scans in CI/CD pipelines, disk speed matters. CoolVDS NVMe storage ensures these scans finish in seconds, not minutes, keeping your deployment pipeline fast.
5. Supply Chain & Continuous Scanning
In 2025, SBOM (Software Bill of Materials) isn't just a buzzword; it's a requirement for many EU contracts. You need to know exactly what libraries are inside your container.
Integrate scanning into your pipeline. Fail the build if a 'High' or 'Critical' CVE is found.
# Scanning a local image with Trivy (v0.58+)
trivy image --exit-code 1 --severity HIGH,CRITICAL --no-progress my-app:latest
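The same tool covers the SBOM requirement from above; as a sketch, Trivy can emit a CycloneDX document that you archive alongside the build artifacts:
# Generate a CycloneDX SBOM for the image and write it to a file
trivy image --format cyclonedx --output sbom.cdx.json my-app:latest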
Don't just scan at build time. Use an admission controller in Kubernetes to reject unsigned or vulnerable images from ever starting.
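What that enforcement looks like depends on the admission controller you run. As a rough sketch, assuming Kyverno is installed in the cluster, a policy that rejects unsigned images from your registry could look like this (the registry pattern and public key are placeholders):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-image-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      # Reject any Pod whose image is not signed with the expected cosign key
      verifyImages:
        - imageReferences:
            - "registry.coolvds.com/*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <your cosign public key>
                      -----END PUBLIC KEY-----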
6. Local Context: Data Sovereignty & Latency
For Norwegian businesses, the physical location of your container host is a compliance issue. Under GDPR and Schrems II requirements, ensuring customer data resides on servers within the EEA—and ideally within Norway for critical infrastructure—is paramount.
CoolVDS data centers are optimized for connectivity within the Nordic region. When your Kubernetes nodes need to sync state or replicate databases, the sub-millisecond latency to Oslo exchanges ensures that strict security checks (like mTLS handshakes between microservices) don't degrade user experience.
Final Thoughts
Security is not a product; it's a relentless process of subtraction. Remove the shell, remove the root user, remove the capabilities. But you cannot subtract the need for a robust foundation.
Running hardened containers on weak infrastructure is like putting a bank vault door on a tent. You need the isolation of KVM and the performance of NVMe to support modern security tooling without performance penalties.
Ready to harden your infrastructure? Deploy a KVM-isolated, NVMe-powered instance on CoolVDS today and secure your Nordic workloads properly.