
Container Security in 2024: Hardening Strategies for Norwegian Infrastructure

The Myth of the Sandbox: Why Your Containers Are Leaking

Let’s rip the band-aid off immediately: Containers are not virtual machines. If you are treating a Docker container as a hard security boundary, you are gambling with your data. I watched a team in Oslo deploy a financial microservice earlier this year, assuming that because it was “containerized,” it was insulated from the host’s configuration. Two weeks later, a misconfigured capability allowed a lateral move that nearly triggered a report to Datatilsynet.

The reality of 2024—especially in the wake of the XZ Utils backdoor scare (CVE-2024-3094)—is that the supply chain is poisoned and the runtime is fragile. Whether you are running Kubernetes clusters or simple Docker Swarms, security is not a checkbox; it is an architecture. For Norwegian businesses navigating the strict requirements of GDPR and Schrems II, relying on default configurations is negligence.

1. The Base Image: Trust Nothing

The attack surface starts with your `FROM` instruction. I still see too many `FROM node:latest` or `FROM ubuntu:22.04` in production Dockerfiles. These images contain package managers, shells, and libraries you do not need, all of which are potential gadgets for an attacker to construct an exploit chain.

The Fix: Distroless and Multistage Builds.

We strip the image down to the bare necessities. Google's distroless images contain only your application and its runtime dependencies. No shells. No package managers. If an attacker gets in, they can’t just run `apt-get install nmap`.

Example: Golang Multistage Hardening

# Build Stage
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY . .
# CGO_ENABLED=0 keeps the binary fully static, so it runs on distroless/static (no libc)
RUN CGO_ENABLED=0 go build -ldflags="-w -s" -o server main.go

# Production Stage
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /

# Run as non-root (ID 65532 is the nonroot user in distroless)
USER 65532:65532

ENTRYPOINT ["/server"]

Pro Tip: Always pin your images by SHA256 digest, not tags. Tags are mutable. Digests are immutable. Use image: postgres@sha256:e8f... to guarantee you get the exact bits you tested.
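One quick way to look up the digest behind a tag you have already pulled (postgres:16 is purely an illustrative tag here):

# Print the repo digest for a locally pulled image
docker inspect --format '{{index .RepoDigests 0}}' postgres:16

# Then pin it in your manifest, e.g.:
# image: postgres@sha256:<digest-from-above>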

2. Runtime Security: Restricting the Blast Radius

By default, Docker grants a container a broader set of Linux capabilities than most workloads need, and anything started with `--privileged` or `--cap-add` picks up genuinely dangerous ones like `NET_ADMIN` and `SYS_ADMIN` on top. We need to drop them. Even the seemingly benign defaults, such as `NET_RAW`, can be weaponized (ARP spoofing inside the cluster, for instance).

In your Kubernetes manifests or Docker Compose files, you must explicitly drop all capabilities and only add back what is strictly necessary. Furthermore, the filesystem should be read-only wherever possible to prevent attackers from downloading payloads or modifying binaries.

Configuration: The "Iron-Clad" SecurityContext

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  containers:
  - name: my-app
    image: my-registry/app:1.4.2
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 10001
      capabilities:
        drop:
          - ALL
        add:
          - NET_BIND_SERVICE
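
If you deploy with Docker Compose rather than Kubernetes, roughly the same hardening can be expressed there too. A minimal sketch, with the service name, image, and UID as placeholders:

services:
  my-app:
    image: my-registry/app:1.4.2
    user: "10001:10001"          # non-root UID:GID
    read_only: true              # read-only root filesystem
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    security_opt:
      - no-new-privileges:true   # mirrors allowPrivilegeEscalation: false
    tmpfs:
      - /tmp                     # writable scratch space despite read_only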

You can verify your current capabilities quickly on a running container:

capsh --print

If you see a wall of text, you have work to do.
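
One caveat: capsh will not exist inside a distroless or otherwise minimal image. In that case you can read the capability bitmask from the host instead; a sketch, assuming a container named my-container and capsh installed on the host (the hex value shown is only an example of a typical Docker default set):

# Find the container's PID on the host and read its effective capabilities
PID=$(docker inspect --format '{{.State.Pid}}' my-container)
grep CapEff /proc/$PID/status
# CapEff: 00000000a80425fb
capsh --decode=00000000a80425fb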

3. Network Segmentation: Zero Trust Inside the Cluster

In a flat network, if one pod is compromised, the attacker can scan the entire internal range. I've seen compromised frontend pods used to brute-force internal Redis instances that had no password protection because "it's on the internal network."

Use Kubernetes NetworkPolicies to whitelist traffic. Deny everything by default.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
--- 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
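
One caveat with the default-deny policy above: because it also denies Egress, pods in the namespace lose DNS resolution until you allow it explicitly. A minimal sketch, assuming cluster DNS runs in kube-system and your cluster applies the standard kubernetes.io/metadata.name namespace label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53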

4. The Infrastructure Layer: Why Virtualization Matters

Here is the uncomfortable truth about container security: Shared kernels are a single point of failure. If you are running your containers on a budget VPS provider that uses OpenVZ or LXC, you are effectively sharing the kernel with your "noisy neighbors." A kernel panic triggered by them affects you. A kernel exploit (like Dirty Pipe) could theoretically allow them to break out and access your memory.

This is where infrastructure choice becomes a security decision. At CoolVDS, we exclusively use KVM (Kernel-based Virtual Machine) virtualization. Each VPS instance has its own isolated kernel. Even if you are running a Docker host, that host is wrapped in a hardware-assisted virtualization layer. This provides the strong isolation required for GDPR compliance and sensitive workloads.
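
If you are unsure what your current provider actually runs underneath you, systemd-detect-virt (available on most modern Linux guests) reports the virtualization technology it detects:

systemd-detect-virt
# "kvm" or "qemu" means hardware-assisted virtualization with your own kernel;
# "lxc" or "openvz" means you are sharing a kernel with your neighbors.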

When hosting data in Norway, latency to the Norwegian Internet Exchange (NIX) is critical for performance, but data sovereignty is critical for legality. CoolVDS ensures your data resides on physical hardware within the region, protected by robust KVM boundaries, unlike the soft isolation of container-native platforms.

5. Continuous Scanning

A scan is a snapshot in time. A clean image today is a vulnerable image tomorrow. You must integrate scanning into your CI/CD pipeline.

Tools like Trivy are essential. Don't just scan the OS; scan the language dependencies (go.mod, package-lock.json).

trivy image --severity HIGH,CRITICAL coolvds-app:latest
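
As one way to wire this into a pipeline, here is a minimal sketch of a GitLab CI job; the stage name, the aquasec/trivy runner image, and the image tag are assumptions to adapt to your own setup. The --exit-code 1 flag makes the job fail when matching findings exist:

container_scan:
  stage: test
  image: aquasec/trivy:latest
  script:
    # Fail the build on HIGH/CRITICAL findings in the image's OS packages and libraries
    - trivy image --severity HIGH,CRITICAL --exit-code 1 coolvds-app:latest
    # Also scan dependency manifests (go.mod, package-lock.json) in the repository
    - trivy fs --severity HIGH,CRITICAL --exit-code 1 .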

Conclusion

Container security in 2024 demands a defense-in-depth approach. You harden the image (Distroless), you lock down the runtime (SecurityContext), you segment the network (NetworkPolicies), and finally, you ensure the underlying infrastructure offers true isolation.

Don't let a shared-kernel environment be your weak link. Secure your foundation first.

Ready to deploy on infrastructure that takes isolation seriously? Spin up a KVM-backed instance on CoolVDS today and experience the difference between "cheap hosting" and "professional infrastructure."