Container Security in 2024: Moving Beyond Default Configurations for Norwegian Infrastructure

Stop Trusting Default Container Configurations

If you type docker run -d nginx and walk away, you haven't deployed an application; you've opened a potential backdoor. As a DevOps engineer who has spent the last decade cleaning up after "move fast and break things" deployments, I can tell you that the illusion of isolation is the most dangerous aspect of containerization. Containers are just processes. They share the host kernel. If that kernel is exposed, your entire infrastructure is compromised.

In March 2024, the threat landscape has shifted. We aren't just worried about script kiddies running DDoS attacks; we are dealing with sophisticated supply chain attacks and runtime escapes. For those of us operating in Norway, the stakes are even higher: we answer to both the GDPR and Datatilsynet's guidance. Security isn't just about preventing hacks; it's about proving to auditors exactly where your data lives and that it never leaves your jurisdiction unencrypted.

1. The "Root" of All Evil

The most common vulnerability I see in production environments is processes running as root inside the container. By default, Docker containers run as root. If an attacker exploits a vulnerability in your application code (like a buffer overflow in a C-based library), they gain root access inside the container. From there, escaping to the host is significantly easier.

The fix: create and switch to a non-root user in your Dockerfile. Don't leave the UID to whatever the base image defaults to.

# WRONG
FROM node:20-alpine
CMD ["node", "server.js"]

# RIGHT
FROM node:20-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
CMD ["node", "server.js"]

In Kubernetes, you must enforce this at the Pod level using the securityContext. If you don't block root, someone will eventually use it.

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: my-company/app:v1.2
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
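
Per-manifest settings are easy to forget. If your cluster is on Kubernetes 1.25 or newer, the built-in Pod Security admission controller can reject root pods for an entire namespace. A minimal sketch, assuming a namespace named production:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest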

2. Kernel Surface Area: Seccomp and AppArmor

Even as a non-root user, a process can make system calls to the kernel. Does your Node.js API really need to make a keyctl syscall? Probably not. Yet, by default, it can. This is where Seccomp (Secure Computing Mode) profiles come in. They act as a firewall for system calls.

Docker's default seccomp profile blocks about 44 syscalls out of 300+. That is good, but for high-security environments—especially those handling financial data or PII falling under Norwegian privacy laws—you should whitelist only what is necessary.

Here is how to verify whether your container is actually running with the default profile:

docker run --rm -it alpine grep Seccomp /proc/self/status
# Output should be: Seccomp: 2 (Filtering)

If you see Seccomp: 0, you are running unprotected. For critical workloads, I prefer generating custom profiles using tools like bane or strictly defining capabilities. Drop everything, then add back only what is needed.

docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx
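
The same deny-by-default logic works for syscalls. Below is a deliberately tiny custom profile, just to show the shape of the file; a real service needs a much longer allow-list (trace your workload, or start from Docker's default profile and trim):

{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "fstat", "mmap", "brk",
                "execve", "futex", "rt_sigreturn", "epoll_wait", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}

Apply it at runtime (assuming you saved it as my-seccomp.json):

docker run --security-opt seccomp=./my-seccomp.json my-company/app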

3. The Infrastructure Layer: Why KVM Matters

Here is the hard truth about container security: Containers are not Virtual Machines. In a shared hosting environment or a cheap VPS provider using OpenVZ or LXC, your "server" is often just a container itself. This means you are sharing a kernel with noisy neighbors. If their container escapes, your data is at risk.

Pro Tip: Always ask your provider about their virtualization technology. If they can't promise hardware-level isolation via KVM (Kernel-based Virtual Machine) or similar hypervisors, do not store sensitive data there.

This is why we architect CoolVDS strictly on KVM. When you spin up an instance with us, you get your own kernel. This isolation layer is critical. Even if you mess up your Docker configuration and a process escapes the container, it is trapped inside your VM. It cannot touch the hypervisor or other clients on the node. This architecture is the only responsible choice for VPS Norway hosting where data sovereignty is a legal requirement.
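
Not sure what you are actually running on? Check from inside the guest. On any systemd-based distro:

systemd-detect-virt
# "kvm"              -> hardware virtualization, your own kernel
# "openvz" or "lxc"  -> a container pretending to be a VPS, shared kernel with your neighbours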

4. Managing Data and Network Latency

Security also encompasses availability and compliance. Under Schrems II and GDPR, knowing exactly where your data sits is non-negotiable. Using a US-based cloud giant often involves complex legal gymnastics regarding data transfers. Hosting locally in Norway simplifies this.

Feature            | US Cloud Provider                         | CoolVDS (Norway)
Data Jurisdiction  | US CLOUD Act applies                      | Norwegian / EU law
Latency to Oslo    | 20-40 ms (via Frankfurt/London)           | < 5 ms
Storage Backend    | Networked block storage (variable IOPS)   | Local NVMe storage (high IOPS)

When your database is running on NVMe storage directly attached to the hypervisor, the attack surface for network interception decreases, and performance for disk-heavy security scanning (like Clair or Trivy running against your registry) increases drastically.
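
If you are not scanning images yet, Trivy is the lowest-friction way to start. A sketch, reusing the registry path from the Cosign example below:

# Fail the pipeline if HIGH or CRITICAL CVEs are found
trivy image --severity HIGH,CRITICAL --exit-code 1 my-registry.com/user/app:v1.0.0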

5. Supply Chain Security

In 2024, you cannot trust images blindly. You must sign and verify them. We use Cosign (part of the Sigstore project) to sign every image pushed to our private registry. This ensures that the code running in production is exactly the code our CI/CD pipeline built.

Command to verify an image signature:

cosign verify --key cosign.pub my-registry.com/user/app:v1.0.0
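
The signing side is just as short. A sketch of what our CI job runs (key file names are illustrative; keep cosign.key in your secret store and publish cosign.pub):

# One-time key generation
cosign generate-key-pair

# Sign the image; Cosign stores the signature alongside the digest in the registry
cosign sign --key cosign.key my-registry.com/user/app:v1.0.0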

Conclusion: Layered Defense

Security is not a switch you flip; it is a series of layers. You harden the code, you restrict the container, and crucially, you isolate the infrastructure.

If you are building for the Nordic market, you need infrastructure that respects local compliance laws and offers the raw performance required for modern encryption and scanning overhead. CoolVDS provides that foundational layer with KVM isolation, DDoS protection, and local NVMe performance. Don't build a fortress on a swamp.

Ready to lock down your infrastructure? Deploy a hardened KVM instance on CoolVDS today and get root access in under 60 seconds.