Automating the Auditor: Infrastructure-as-Code Compliance in the Post-Schrems Era

If you are still taking screenshots of firewall rules to paste into an Excel sheet for your annual ISO 27001 audit, you are not just wasting time—you are creating a security gap. The moment that screenshot is taken, it is obsolete.

For CTOs operating in Norway and the broader EEA, the regulatory environment in late 2025 is unforgiving. Between Datatilsynet's strict enforcement of GDPR and the lingering complexities of Schrems II (and subsequent trans-Atlantic data frameworks), knowing where your data lives is just as critical as how it is encrypted. We cannot rely on "good intentions" or manual checklists anymore.

Compliance must be code. It must be automated, immutable, and continuous. Here is how we architect self-auditing infrastructure, utilizing strictly Norwegian data residency to eliminate legal headaches.

The Sovereignty Problem: Why Latency Isn't the Only Reason to Host Locally

Before we touch a single line of code, we need to address the infrastructure layer. Many DevOps teams default to US-based hyperscalers for convenience. However, under the scrutiny of European privacy laws, the US CLOUD Act remains a thorn in the side of compliance officers.

CTO Note: Segregating PII (Personally Identifiable Information) on strictly Norwegian soil acts as a legal firewall. Hosting on CoolVDS, which operates data centers in Oslo under Norwegian jurisdiction, removes the "Transfer Impact Assessment" burden for that specific data subset. Plus, the latency to NIX (Norwegian Internet Exchange) is under 2ms.

Layer 1: Infrastructure as Code (IaC) with Enforcement

Your infrastructure state should be defined in Terraform or OpenTofu. This allows you to review infrastructure changes (PRs) the same way you review application code. But writing HCL (HashiCorp Configuration Language) isn't enough; you need to prevent non-compliant code from ever being applied.

We use tools like tfsec (now folded into Trivy) or Open Policy Agent (OPA) to scan Terraform plans before deployment. If a developer tries to provision a storage bucket without encryption or a compute instance with SSH open to the world (0.0.0.0/0), the pipeline fails immediately.

Example: Mandating Encryption at Rest

Here is a simplified OpenTofu/Terraform snippet that defines a secure volume attachment on a CoolVDS instance. Note that we don't just ask for storage; we explicitly define the encryption parameters.

resource "openstack_blockstorage_volume_v3" "secure_data" {
  name        = "gdpr-compliant-storage"
  description = "Encrypted volume for PII"
  size        = 100
  volume_type = "nvme-encrypted" 

  # This tag is critical for our automated audit tools to track scope
  metadata = {
    classification = "confidential"
    residency      = "NO-OSL"
  }
}

resource "openstack_compute_volume_attach_v2" "attach_secure" {
  instance_id = openstack_compute_instance_v2.app_server.id
  volume_id   = openstack_blockstorage_volume_v3.secure_data.id
}

If you run a scanner against this, you can write a policy that asserts volume_type must contain "encrypted". If it doesn't, the build breaks.
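In practice that assertion is written in Rego and run with conftest against the JSON plan, but the shape of the check is easy to see in plain Python. A minimal sketch, assuming the plan was exported with `tofu show -json tfplan > tfplan.json`; the `unencrypted_volumes` helper and the inline plan fragment are illustrative, modeled on the Terraform JSON plan format:

```python
def unencrypted_volumes(plan: dict) -> list[str]:
    """Return addresses of block-storage volumes whose type is not encrypted."""
    violations = []
    for change in plan.get("resource_changes", []):
        if change.get("type") != "openstack_blockstorage_volume_v3":
            continue
        after = change.get("change", {}).get("after") or {}
        if "encrypted" not in (after.get("volume_type") or ""):
            violations.append(change["address"])
    return violations

# Fragment shaped like `tofu show -json` output; load the real file with json.load()
plan = {
    "resource_changes": [
        {
            "address": "openstack_blockstorage_volume_v3.secure_data",
            "type": "openstack_blockstorage_volume_v3",
            "change": {"after": {"volume_type": "nvme-encrypted"}},
        },
        {
            "address": "openstack_blockstorage_volume_v3.legacy",
            "type": "openstack_blockstorage_volume_v3",
            "change": {"after": {"volume_type": "standard"}},
        },
    ]
}

bad = unencrypted_volumes(plan)
print(bad)  # in CI: exit non-zero if this list is non-empty
```

The same logic, expressed as a Rego deny rule, is what you would hand to conftest so the gate lives in the pipeline rather than in a script someone has to remember to run.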

Layer 2: Policy as Code (OPA & Rego)

Kubernetes is the standard for 2025 application delivery. However, a default Kubernetes cluster is permissive. It allows root containers, privilege escalation, and unrestricted ingress.

We implement Open Policy Agent (OPA) as a validating admission controller (Gatekeeper is the most common packaging). It intercepts requests to the Kubernetes API server and validates them against policies written in Rego; the examples below target the raw AdmissionReview input. This effectively stops developers from accidentally deploying non-compliant workloads.

The "No Root" Policy

Running containers as root is a major security risk. Here is a Rego policy that denies any pod attempting to run with UID 0.

package kubernetes.admission

deny[msg] {
  input.request.kind.kind == "Pod"
  security_context := input.request.object.spec.securityContext
  run_as_user := security_context.runAsUser
  
  # Check if runAsUser is explicitly set to 0 (root)
  run_as_user == 0
  
  msg := "Security Policy Violation: Containers must not run as root (UID 0)."
}

deny[msg] {
  input.request.kind.kind == "Pod"
  container := input.request.object.spec.containers[_]
  not container.securityContext.runAsNonRoot

  msg := sprintf("Security Policy Violation: Container %v must set runAsNonRoot to true.", [container.name])
}

When you deploy this to your cluster, the auditor doesn't need to check 500 running pods. They only need to audit this one policy file. If the policy is active, the violation is impossible.
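The canonical way to unit-test that Rego is `opa test`, but the same invariant can also be linted earlier, before manifests ever reach the API server. A hedged sketch in Python over a plain Pod dict; the `root_violations` helper is ours, mirroring the two deny rules above:

```python
def root_violations(pod: dict) -> list[str]:
    """Mirror the admission policy: flag UID 0 and missing runAsNonRoot."""
    msgs = []
    spec = pod.get("spec", {})
    # Rule 1: pod-level securityContext must not set runAsUser: 0
    if spec.get("securityContext", {}).get("runAsUser") == 0:
        msgs.append("pod-level securityContext sets runAsUser: 0")
    # Rule 2: every container must opt in to runAsNonRoot
    for c in spec.get("containers", []):
        if not c.get("securityContext", {}).get("runAsNonRoot"):
            msgs.append(f"container {c['name']} must set runAsNonRoot: true")
    return msgs

pod = {
    "kind": "Pod",
    "spec": {
        "securityContext": {"runAsUser": 0},
        "containers": [{"name": "app", "securityContext": {}}],
    },
}
print(root_violations(pod))
```

Running this in a pre-commit hook catches the violation minutes earlier than the admission controller would, which keeps the feedback loop with developers short.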

Layer 3: OS Hardening with Ansible

Even with containers, the underlying host OS (the Node) matters. Whether you are running KVM instances on CoolVDS or bare metal, the base image must be hardened according to CIS Benchmarks.

Don't configure servers manually. Use Ansible. This ensures that if a server is compromised and you have to rebuild it, the new one comes up with the exact same security posture.

Automating SSH Security

This playbook snippet ensures that SSH is restricted. It disables root login and forces key-based authentication—standard requirements for any SOC 2 or ISO 27001 certification.

- name: Hardening SSH Configuration
  hosts: all
  become: yes
  tasks:
    - name: Disable SSH Root Login
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: 'PermitRootLogin no'
        state: present
      notify: restart_sshd

    - name: Disable Password Authentication
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PasswordAuthentication'
        line: 'PasswordAuthentication no'
        state: present
      notify: restart_sshd

    # OpenSSH 7.6+ only speaks protocol 2; the old "Protocol" directive is
    # deprecated, so limit brute-force attempts instead.
    - name: Limit failed authentication attempts
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^MaxAuthTries'
        line: 'MaxAuthTries 3'
        state: present
      notify: restart_sshd

  handlers:
    - name: restart_sshd
      service:
        name: sshd
        state: restarted

The Continuous Audit Workflow

The goal is to shift compliance to the left. By the time code reaches production, it is already compliant. Here is what the pipeline looks like:

  1. Commit: Developer pushes Terraform code.
  2. Static Analysis: CI pipeline runs trivy config . to check for misconfigurations.
  3. Policy Check: OPA validates that resources are tagged correctly and encrypted.
  4. Deploy: Terraform applies changes to CoolVDS environment.
  5. Runtime Protection: Gatekeeper prevents drift in Kubernetes.
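The steps above can be driven by a small gate script where each stage must succeed before the next runs. A sketch with the command runner injected so the flow is testable without the real tools; the stage list mirrors the pipeline, and the commands assume trivy, conftest, and tofu are on the PATH with a plan already saved to tfplan:

```python
from typing import Callable

STAGES = [
    ("static-analysis", "trivy config ."),
    ("policy-check", "conftest test tfplan.json -p policy/"),
    ("deploy", "tofu apply tfplan"),
]

def run_pipeline(run: Callable[[str], int]) -> str:
    """Run stages in order; stop at the first non-zero exit code."""
    for name, cmd in STAGES:
        if run(cmd) != 0:
            return f"failed: {name}"
    return "compliant: deployed"

# In CI, pass a real runner such as:
#   lambda cmd: subprocess.run(cmd, shell=True).returncode
print(run_pipeline(lambda cmd: 0))
```

Keeping the stage order in one place means "deploy" can never be reached without the policy check having passed, which is exactly the property an auditor wants to see.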

This architecture transforms your compliance from a yearly panic attack into a daily non-event. It also drastically reduces Total Cost of Ownership (TCO) by preventing security incidents that result in massive GDPR fines.

You cannot automate trust, but you can automate the verification of it. Start by securing your foundation. If you need a sandbox to test these OPA policies with low latency and guaranteed data sovereignty, spin up a KVM instance on CoolVDS. It allows custom kernels and full control, unlike shared container platforms.

Next Step: Audit your current provider's data processing agreement. If it mentions "standard contractual clauses" for transfer to the US, it might be time to migrate to a sovereign Norwegian cloud.