The Pragmatic Hybrid Cloud: Escaping Vendor Lock-in and Solving Schrems II in Norway
Let’s be honest: for most European companies, "multi-cloud" is a slide in a pitch deck that rarely translates to reality. In practice, you usually end up with 90% of your infrastructure glued to a single hyperscaler’s proprietary APIs, paying egress fees that would make a CFO weep. By late 2021, though, the landscape has shifted. Between the fallout from the Schrems II ruling and the aggressive stance of Datatilsynet (the Norwegian Data Protection Authority), relying solely on US-owned hyperscalers is no longer just a technical risk; it is a legal liability.
I am speaking to the CTOs and Lead Architects who have to answer for TCO (Total Cost of Ownership) and compliance. The goal isn't to abandon AWS or Google Cloud entirely; their object storage and managed ML tools are useful. The goal is to build a hybrid core where your critical data and heavy I/O workloads reside on predictable, sovereign infrastructure, while you use the public cloud only for what it's good at: burstable ephemeral compute.
The Architecture: The "Sovereign Core" Strategy
The most resilient pattern I’ve deployed this year involves a "Hub and Spoke" topology. The Hub is a heavy-duty, vertical stack hosted locally (e.g., in Oslo), handling the primary database and persistent state. The Spokes are stateless application containers that can run anywhere—on CoolVDS for low latency, or on AWS spots for overflow.
Why keep the core local? Physics and Law.
- Latency: Round-trip time (RTT) from Oslo to Frankfurt is roughly 20-25 ms. From Oslo to a local data center peered at NIX (Norwegian Internet Exchange), it is under 2 ms. For high-transaction databases, that difference is the bottleneck.
- Compliance: Data at rest on a US-controlled cloud is subject to the US CLOUD Act. Hosting your primary PostgreSQL cluster on a Norwegian provider like CoolVDS creates a legal air gap.
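You can sanity-check the latency claim from your own shell. The Frankfurt endpoint below is an illustrative AWS service hostname and the second host is a placeholder for your own NIX-peered node; since ICMP is often filtered, curl's TCP connect timer is a more reliable RTT proxy than ping:

```shell
# Measure TCP connect time (a good proxy for RTT) to a Frankfurt endpoint.
curl -o /dev/null -s -w 'TCP connect: %{time_connect}s\n' https://ec2.eu-central-1.amazonaws.com/

# Compare against your NIX-peered node (placeholder hostname).
curl -o /dev/null -s -w 'TCP connect: %{time_connect}s\n' https://your-node.example.no/
```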
Infrastructure as Code: Abstraction is Key
To make this work, you cannot click around in a GUI. You need Terraform. The trick is to avoid provider-specific resources (like `aws_db_instance`) for your core logic. Instead, deploy base compute instances and provision them with Ansible. This makes your infrastructure portable.
Here is a simplified Terraform `main.tf` pattern for deploying a sovereign node alongside a cloud failover node:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    # Generic provider for the local VPS, provisioned via SSH
    null = {
      source  = "hashicorp/null"
      version = "~> 3.1"
    }
  }
}
# The Sovereign Core (CoolVDS NVMe Instance)
resource "null_resource" "oslo_primary_db" {
  connection {
    type        = "ssh"
    user        = "root"
    host        = "185.x.x.x" # Your CoolVDS static IP
    private_key = file("~/.ssh/id_rsa")
  }

  provisioner "remote-exec" {
    inline = [
      "apt-get update",
      "apt-get install -y postgresql-13 wireguard",
      # Initialize data volume on NVMe
      "mkfs.ext4 /dev/vdb",
      "mount /dev/vdb /var/lib/postgresql",
    ]
  }
}
# The Stateless Failover (Public Cloud)
resource "aws_instance" "frankfurt_read_replica" {
  ami           = "ami-0d527b8c289b4af7f" # Ubuntu 20.04 LTS
  instance_type = "t3.medium"

  tags = {
    Name = "dr-replica-frankfurt"
  }
}
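A sketch of the workflow from there, assuming the file above is saved as `main.tf` in the current directory:

```shell
terraform init    # downloads the aws and null providers declared above
terraform plan    # review: one null_resource (Oslo), one aws_instance (Frankfurt)
terraform apply   # provisions the Oslo core, then the stateless replica
```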
Secure Networking: The WireGuard Mesh
IPsec is bloated and OpenVPN is slow. In 2021, the industry standard for linking disparate clouds is WireGuard. It is built into the Linux kernel (5.6+), which means it has incredible throughput with minimal CPU overhead—crucial when you are pushing gigabits of traffic between your VPS Norway node and an external CDN.
Pro Tip: Don't expose your database port (5432) to the public internet, even with SSL. Use a WireGuard tunnel to create a private network overlay spanning your providers.
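Generating the keypairs referenced in the config below is a one-liner per node (wireguard-tools must be installed, as in the Terraform provisioner above):

```shell
# Run on each node; the private key never leaves the node that owns it.
umask 077
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
cat /etc/wireguard/publickey   # share this value with the peer's config
```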
Here is a production-ready `wg0.conf` for the Oslo hub. Note the MTU setting; when tunneling over the internet, fragmentation kills performance. Setting MTU to 1360 usually accounts for the overhead.
[Interface]
# The Hub (Oslo Core)
Address = 10.100.0.1/24
ListenPort = 51820
PrivateKey = <oslo-hub-private-key>
MTU = 1360
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# The Spoke (Frankfurt Replica)
PublicKey = <frankfurt-spoke-public-key>
AllowedIPs = 10.100.0.2/32
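With keys exchanged, bringing the tunnel up and verifying it takes four commands (assuming the config lives at /etc/wireguard/wg0.conf):

```shell
wg-quick up wg0                 # bring the overlay interface up
systemctl enable wg-quick@wg0   # persist across reboots
wg show wg0                     # check for a recent handshake with the peer
ping -c 3 10.100.0.2            # reach the Frankfurt spoke over the tunnel
```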
The Storage IOPS Reality Check
This is where the "Pragmatic CTO" must look at the numbers. Cloud providers often throttle IOPS on their standard block storage tiers. If you are running a high-frequency trading bot or a Magento database with thousands of SKUs, you will hit the "burst balance" limit on standard cloud volumes quickly.
We benchmarked this. On a standard generic cloud volume, random write 4k performance often plateaus around 3,000 IOPS before costs skyrocket. On CoolVDS, because we map NVMe drives directly via KVM virtio drivers, we consistently see sustained IOPS north of 20,000 on standard plans.
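If you want to reproduce the 4k random-write numbers yourself, `fio` is the standard tool. A minimal run (parameters are illustrative; tune size and runtime for your disk):

```shell
# 4k random writes with direct I/O, bypassing the page cache.
# WARNING: creates a 1 GiB test file -- run in a scratch directory, not on live data.
fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k \
    --size=1G --iodepth=32 --direct=1 --runtime=60 --time_based \
    --group_reporting
```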
To verify your current disk latency, run `ioping` on your database partition:
# Install ioping
apt-get install -y ioping
# Run a latency check against the current directory
ioping -c 10 .
# Expected output on CoolVDS NVMe:
# 4 KiB <<< . (ext4 /dev/vda1): request=1 time=185 us (warmup)
# ...
# min/avg/max/mdev = 150 us / 190 us / 230 us / 40 us
If your average is over 1ms (1000 us) for local disk operations, your database is waiting on I/O, not CPU. No amount of code optimization fixes slow spinning rust or throttled network storage.
Load Balancing with HAProxy
To route traffic intelligently between your Sovereign Core and your burst nodes, HAProxy is still the king of reliability. Nginx is great for serving static assets, but HAProxy’s health checking logic is superior for TCP load balancing.
Configure HAProxy to prefer the local CoolVDS node (weight 100) and keep the public cloud replica (weight 10) as a backup that only receives traffic when the primary's health checks fail.
global
    log /dev/log local0
    maxconn 2000

defaults
    mode tcp
    log global
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

frontend database_front
    bind *:5432
    default_backend database_back

backend database_back
    # Plain TCP health checks ('option httpchk' would fail against PostgreSQL)
    # CoolVDS Primary - High Priority
    server db_oslo 10.100.0.1:5432 check weight 100 inter 2000 rise 2 fall 3
    # Cloud Replica - Low Priority / Failover
    server db_cloud 10.100.0.2:5432 check weight 10 inter 2000 rise 2 fall 3 backup
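Before reloading, always syntax-check the file; HAProxy will refuse an invalid config at reload time, but catching mistakes earlier is cheaper:

```shell
haproxy -c -f /etc/haproxy/haproxy.cfg   # validate the config before touching the service
systemctl reload haproxy                  # graceful reload; existing connections drain
```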
The Financial Argument
Let's talk money. Public cloud bandwidth costs are a hidden tax. If you host a media-heavy site and serve 10TB of data via a major hyperscaler, the egress bill can exceed the instance cost. CoolVDS offers generous bandwidth allocations because we peer directly at major Nordic exchanges. By keeping your bandwidth-heavy origin server in Norway, you reduce the egress fees significantly, using the public cloud only for lightweight compute tasks that require global distribution.
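The arithmetic is worth doing explicitly. Assuming roughly $0.09/GB egress, a typical hyperscaler first-tier rate in 2021 (check your provider's current pricing), 10 TB per month works out to:

```shell
TB=10
GB=$((TB * 1024))
CENTS_PER_GB=9                                 # ~$0.09/GB, an assumed rate
echo "$((GB * CENTS_PER_GB / 100)) USD/month"  # prints "921 USD/month"
```

That single line item often dwarfs the compute bill for a media-heavy origin.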
Conclusion
A multi-cloud strategy in 2021 isn't about using every service AWS and Azure offers. It is about architectural independence. It is about knowing that if a transatlantic cable is cut or a new privacy shield ruling comes down, your data is safe on Norwegian soil.
Don't let latency or legal gray areas compromise your infrastructure. Start building your Sovereign Core today.
Ready to test the difference? Deploy a KVM-based NVMe instance on CoolVDS in under 60 seconds and ping 1.1.1.1 to see what real low latency looks like.