Surviving Schrems II: A Pragmatic Multi-Cloud Strategy for Norwegian Enterprises
Let’s be honest: for most CTOs in 2022, "Multi-Cloud" is less a technical desire and more a legal necessity. Between the fallout of the Schrems II ruling and the increasing scrutiny from the Norwegian Data Protection Authority (Datatilsynet), relying 100% on US-owned hyperscalers is a risk profile many boards are no longer willing to accept.
But the technical overhead of running multi-cloud is brutal. Latency kills distributed databases. Egress fees destroy budgets. Inconsistent APIs turn your Terraform codebase into a nightmare.
I recently architected a hybrid solution for a FinTech startup in Oslo. The requirement: Keep customer PII (Personally Identifiable Information) on Norwegian soil to satisfy strict compliance, but leverage AWS for heavy compute and global CDN reach. We didn't use expensive enterprise gateways. We used standard Linux tools, raw compute, and physics-aware architecture. Here is how we built it.
1. The Connectivity Layer: WireGuard Mesh
Legacy IPsec VPNs are bloated, slow to handshake, and a pain to debug. In 2022, if you aren't looking at WireGuard for your site-to-site inter-cloud links, you are wasting CPU cycles. WireGuard runs in kernel space, presents a tiny attack surface (roughly 4,000 lines of code), and, because it is connectionless, roams gracefully when endpoint IPs shift.
We established a mesh network between the AWS instances in Frankfurt (eu-central-1) and our CoolVDS KVM instances in Oslo. The goal: A private, encrypted backplane.
Configuration on the Oslo Node (Hub):
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [HIDDEN_OSLO_PRIVATE_KEY]
# Peer: AWS Frankfurt
[Peer]
PublicKey = [AWS_PUBLIC_KEY]
AllowedIPs = 10.100.0.2/32
Endpoint = aws-frankfurt-ip:51820
PersistentKeepalive = 25
The `PersistentKeepalive = 25` is critical here. Without it, the stateful connection tracking behind AWS Security Groups will expire the idle UDP flow, silently blackholing the first packets when traffic resumes until a fresh handshake completes.
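Before trusting the tunnel, verify it with the stock wireguard-tools commands (the key file names here are our convention, nothing more):
# Generate a key pair on each node (fills in the PrivateKey/PublicKey above)
wg genkey | tee /etc/wireguard/oslo.key | wg pubkey > /etc/wireguard/oslo.pub
# Bring the interface up and confirm the peer has completed a handshake
wg-quick up wg0
wg show wg0 latest-handshakes
# End-to-end test across the encrypted backplane
ping -c 4 10.100.0.2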
2. Infrastructure as Code: The Hybrid Terraform State
Managing two providers requires a unified control plane. We utilize Terraform v1.1. The trick is modularizing your providers so you can swap instance sizes without rewriting the network logic.
Hyperscalers charge a premium for predictable performance. A `t3.medium` bursts, but then it throttles. For the database and core logic, we need sustained CPU. This is where we leverage CoolVDS. By using a standard KVM-based provider for the "steady state" workload, we cut the monthly bill by 40% compared to equivalent reserved instances on AWS.
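You can watch that throttling happen in real time. A quick sanity check is to pull the instance's CPU credit balance from CloudWatch with the stock AWS CLI (the instance ID and dates below are placeholders):
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time 2022-03-01T00:00:00Z \
  --end-time 2022-03-02T00:00:00Z \
  --period 3600 \
  --statistics Average
When that balance flatlines at zero, your `t3.medium` is pinned to its baseline (20% of a core per vCPU), no matter what `top` wishes for.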
Here is a snippet of our `providers.tf` allowing this hybrid orchestration:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    # CoolVDS exposes an OpenStack-compatible API, so we drive it
    # with the community OpenStack provider under a local alias
    coolvds = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.40"
    }
  }
}

resource "aws_instance" "compute_node" {
  ami           = "ami-05d34d340fb1d89e5" # Amazon Linux 2
  instance_type = "t3.medium"

  tags = {
    Name = "Frankfurt-Burst-Node"
  }
}

# The Anchor Node in Norway
resource "openstack_compute_instance_v2" "data_sovereignty_node" {
  # Explicit provider reference, because our local name ("coolvds")
  # differs from the "openstack_" resource type prefix
  provider        = coolvds
  name            = "Oslo-Core-DB"
  image_name      = "Ubuntu 20.04"
  flavor_name     = "vds-nvme-16gb" # 4 vCPU, 16GB RAM, NVMe
  key_pair        = "deploy-key-2022"
  security_groups = ["default"]
}
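Driving both providers from one working directory needs nothing exotic. The OpenStack provider reads the standard OS_* environment variables, so point them at CoolVDS (the auth URL below is a placeholder, not a real endpoint) and run the usual workflow:
export OS_AUTH_URL=https://api.coolvds.example:5000/v3  # placeholder endpoint
export OS_USERNAME=deploy
export OS_PASSWORD='...'        # plus your usual AWS_PROFILE for the AWS side
terraform init                  # downloads both the aws and openstack plugins
terraform plan -out=hybrid.tfplan
terraform apply hybrid.tfplan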
3. The Data Gravity Problem
You cannot cheat physics. The round-trip time (RTT) between Oslo and Frankfurt is roughly 15-20ms. For a synchronous database cluster, that is acceptable but risky for high-write loads.
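Don't take the 15-20ms on faith; measure it over the tunnel itself, since WireGuard adds a little overhead of its own:
# From the Frankfurt node, across the encrypted backplane to Oslo
ping -c 20 10.100.0.1 | tail -1   # last line prints rtt min/avg/max/mdev
# mdev is your jitter; for replication, high jitter hurts more than a
# high-but-stable average
mtr --report --report-cycles 50 10.100.0.1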
We adopted a deliberate write/read split (not to be confused with the split-brain failure mode):
- Primary Write Master (Oslo): Hosted on CoolVDS NVMe storage. All PII writes happen here. This satisfies the GDPR/Schrems II requirements because the authoritative data never leaves Norwegian (EEA) jurisdiction.
- Read Replicas (Frankfurt): Hosted on AWS. Anonymized data is replicated asynchronously for the compute nodes to crunch (see the filter sketch after this list).
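Enforcing "anonymized only" is a pipeline topic of its own, but the cheapest guard rail is keeping PII out of the binlog at the source. A sketch, assuming PII is isolated in its own schema named `pii` (our convention, not a requirement):
# On the Oslo master: rows in the pii schema never enter the binlog,
# so they never cross the border (trade-off: no binlog-based PITR for pii)
[mysqld]
binlog_ignore_db = pii
Filtering on the replica side (`replicate_wild_ignore_table`) also works, but the raw binlog still crosses the wire, which defeats the sovereignty argument.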
We use MariaDB 10.5 with specific replication tuning to handle the WAN latency. The default `slave_net_timeout` of 60 seconds is far too slow at noticing a dead WAN link.
# my.cnf optimization for WAN replication
[mysqld]
# ROW format replicates deterministically (statement-based can drift)
binlog_format = ROW
# Aggressive timeouts to detect network splits early
slave_net_timeout = 30
# Parallel workers to prevent lag on the replica during batch jobs
slave_parallel_threads = 4
slave_parallel_mode = conservative
# Critical for data integrity on NVMe
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
Pro Tip: Monitor your `Seconds_Behind_Master`. If it consistently exceeds 1 second, your network jitter is too high. We use CoolVDS specifically because their peering at NIX (Norwegian Internet Exchange) offers a cleaner route out of Norway than many consumer-grade ISPs.
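A minimal lag probe you can drop into cron (the user and password are placeholders; on MariaDB 10.5, `SHOW SLAVE STATUS` is still the canonical spelling):
#!/usr/bin/env bash
# Run on the Frankfurt replica; alerts when it falls behind the Oslo master
LAG=$(mysql -u monitor -p"$MON_PW" -e 'SHOW SLAVE STATUS\G' \
      | awk '/Seconds_Behind_Master:/ {print $2}')
[ "$LAG" = "NULL" ] && LAG=999   # NULL means the SQL thread died: treat as critical
if [ "$LAG" -gt 1 ]; then
  echo "Oslo->Frankfurt replication lag: ${LAG}s"  # wire into your alerting
fi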
4. Load Balancing and Failover
We placed HAProxy instances at the edge and use DNS-based failover (via Route53 with health checks) to direct traffic.
However, pure DNS failover has a TTL delay. To mitigate this, our application logic includes a "circuit breaker". If the Oslo database (CoolVDS) becomes unreachable from the AWS application servers, the app switches to "Read-Only Mode" serving from the local Frankfurt replicas, queuing writes to a local Redis instance until connectivity is restored.
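The breaker itself does not need to be clever. Here is a sketch of the probe half, assuming the application checks a flag file before accepting writes (the paths, probe user, and interval are our choices, not gospel):
#!/usr/bin/env bash
# Runs every few seconds on the Frankfurt app servers (cron or systemd timer)
FLAG=/var/run/app/read_only.flag
if mysqladmin --connect-timeout=2 -h 10.100.0.1 -u probe -p"$PROBE_PW" ping >/dev/null 2>&1; then
  rm -f "$FLAG"    # Oslo master reachable: full read/write mode
else
  touch "$FLAG"    # unreachable: serve reads locally, queue writes in Redis
fi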
Why Not Just Use One Cloud?
Three reasons:
- Egress Fees: Pulling terabytes of data out of AWS is extortionate. We keep the heavy data static on CoolVDS (where bandwidth is often unmetered or significantly cheaper) and only push the computed results out.
- Compliance: As mentioned, Datatilsynet is watching. Having your core database physically located in a Norwegian datacenter simplifies your legal defense strategy significantly.
- Performance/Price Ratio: For raw IOPS, virtualized NVMe on a dedicated slice (like CoolVDS) often outperforms network-attached block storage (like EBS), unless you pay for Provisioned IOPS volumes, which cost a fortune.
Summary
Multi-cloud in 2022 isn't about using every service from every provider. It's about placing the right workload on the right infrastructure. Use the hyperscalers for what they are good at: elastic scale and global CDN. Use a specialized local provider like CoolVDS for what they are good at: data sovereignty, predictable performance, and low-latency connectivity to the Norwegian market.
Don't let latency or legal fears paralyze your architecture. Spin up a test environment, configure WireGuard, and measure the RTT yourself.