Multi-Cloud is No Longer Optional: Architecting for Sovereignty and Speed in 2020

Let’s address the elephant in the server room: the EU-US Privacy Shield was invalidated by the CJEU in July, in the ruling now known as Schrems II. If you are a CTO operating in Norway or the broader EEA, and your entire database sits in `us-east-1` or even `eu-central-1` (owned by a US entity), you are now walking a legal tightrope without a net. The US CLOUD Act means your data is never truly beyond the reach of US subpoenas, regardless of where the physical disk spins.

But let's be pragmatic. You aren't going to migrate your entire Kubernetes cluster back to a basement rack in Oslo overnight. The capital expenditure (CapEx) alone would kill your Q4 budget. The solution isn't total isolation; it's a strategic Multi-Cloud Architecture.

By decoupling your stateless application logic from your stateful data layer, you can leverage the global reach of hyperscalers for content delivery while keeping your sensitive data sovereign on independent infrastructure like CoolVDS. Here is how we build that bridge securely using technologies available today, in late 2020.

The Architecture: The Split-Stack Approach

The goal is simple: Compute is a commodity; Data is an asset.

In this model, your frontend and stateless microservices run where they are cheapest or closest to the user. Your database (MySQL/PostgreSQL) and object storage containing PII (Personally Identifiable Information) reside with a jurisdictionally safe provider in Norway. This keeps you on the right side of Datatilsynet (the Norwegian Data Protection Authority) while maintaining high availability.

The Connectivity Challenge

Historically, linking two cloud providers meant dealing with clunky IPsec VPNs (StrongSwan, anyone?) that were a nightmare to debug. But with Linux kernel 5.6 (released earlier this year), WireGuard is finally in the mainline. It is faster, leaner, and significantly easier to configure than IPsec or OpenVPN.

Here is a battle-tested configuration for linking a CoolVDS instance (Data Node) with an external frontend node.

1. The Data Node (CoolVDS - Norway)

This node holds the database. We deny all public traffic except on the WireGuard port.

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.1/24
SaveConfig = true
ListenPort = 51820
PrivateKey = <Server_Private_Key>

# Peer: The Frontend Node
[Peer]
PublicKey = <Frontend_Public_Key>
AllowedIPs = 10.100.0.2/32
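
If you haven't generated the key pairs yet, here is a minimal sketch. The file paths are just a convention; each node keeps its private key to itself and only the public keys are exchanged.

# Generate a key pair (run this on each node); tighten permissions first
umask 077
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

# Paste the contents of privatekey into the PrivateKey field above,
# then swap publickey values between the two peers
cat /etc/wireguard/publickey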

2. The Frontend Node (External Cloud)

This node connects to the database over the private tunnel `10.100.0.1:3306`.

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.2/24
PrivateKey = <Frontend_Private_Key>

# Peer: CoolVDS Data Node
[Peer]
PublicKey = <Server_Public_Key>
Endpoint = 185.x.x.x:51820
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25

Pro Tip: Don't forget to adjust UFW (Uncomplicated Firewall) on the CoolVDS instance. If you don't allow UDP traffic on port 51820, the handshake will fail silently:

ufw allow 51820/udp
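
With the keys exchanged and the firewall open, bring the tunnel up on both nodes and confirm the handshake. The last rule below reflects an assumption worth making explicit: MySQL should only be reachable from the frontend peer, and only over the tunnel interface.

# Bring the tunnel up now and on every boot
wg-quick up wg0
systemctl enable wg-quick@wg0

# Confirm the handshake and transfer counters
wg show wg0

# Lock MySQL down to the frontend peer, via the tunnel interface only
ufw allow in on wg0 from 10.100.0.2 to any port 3306 proto tcp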

Orchestration with Terraform 0.13

Managing two providers manually is a recipe for drift. Terraform 0.13 (released August 2020) made managing third-party providers significantly cleaner with the new `required_providers` syntax. You no longer need to hack around with messy plugin paths.

Here is how you structure a `main.tf` to spin up resources on a generic cloud (for frontend) and a KVM-based provider like CoolVDS (for data) simultaneously.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    # Community libvirt provider for the KVM-based VPS,
    # registered under a custom local name
    coolvds = {
      source  = "dmacvicar/libvirt"
      version = "0.6.2"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

provider "coolvds" {
  uri = "qemu+ssh://root@vps-host/system"
}

resource "aws_instance" "frontend" {
  # ... Stateless frontend config
}

resource "libvirt_domain" "db_node" {
  name   = "secure-db-01"
  memory = "8192"
  vcpu   = 4
  
  # CoolVDS NVMe backing
  disk {
    volume_id = libvirt_volume.os_image.id
  }
}
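
From there the workflow is the usual one: `terraform init` resolves both providers declared in `required_providers`, and a single plan covers both clouds.

# Fetch both providers, then review and apply the combined plan
terraform init
terraform plan -out=multicloud.tfplan
terraform apply multicloud.tfplan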

The Latency Factor: Oslo vs. The World

A common objection to multi-cloud is latency. "If my app is in Frankfurt and my DB is in Oslo, won't it crawl?"

Let's look at the numbers. The round-trip time (RTT) between major European hubs and the NIX (Norwegian Internet Exchange) is surprisingly low, thanks to robust fiber routes.

| Origin | Destination (CoolVDS Oslo) | Avg Latency (ms) | Impact on User Experience |
|---|---|---|---|
| Frankfurt (DE-CIX) | Oslo | ~18 | Negligible |
| London (LINX) | Oslo | ~22 | Negligible |
| Amsterdam (AMS-IX) | Oslo | ~16 | Negligible |
| Stockholm | Oslo | ~9 | Instant |
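
These figures are easy to verify from your own frontend node. A quick check over the tunnel (addresses as configured above):

# RTT from the frontend node to the data node, through the WireGuard tunnel
ping -c 10 10.100.0.1

# And the public path to the CoolVDS endpoint, hop by hop
mtr --report --report-cycles 10 185.x.x.x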

For a standard web application, an extra 18ms on a database query is imperceptible. However, the performance bottleneck usually isn't the network; it's the Disk I/O.

Why NVMe Matters More Than Latency

In 2020, if you are running a database on standard SSDs (SATA) or, heaven forbid, spinning rust, you are bottlenecking your own CPU. High-performance databases like PostgreSQL 13 or MongoDB 4.4 crave IOPS.

We see this constantly: A client complains about "network lag" between clouds. We run `iostat -x 1` and see %iowait hitting 40%. It wasn't the network; it was the disk queue. CoolVDS standardizes on NVMe storage, which provides roughly 5-6x the read/write speeds of standard SATA SSDs found in budget VPS tiers.

# Benchmark check: Random Read/Write IOPS
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75

Run that on a standard cloud block storage volume, then run it on a local NVMe instance. The difference explains why we keep the heavy data lifting on bare-metal-adjacent virtualization.
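
If you want numbers you can diff between environments rather than eyeballing terminal output, the same job can emit JSON; the `jq` filter below is just one way to pull out the headline IOPS figures.

# Same fio job as above, with machine-readable output piped through jq
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test \
    --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75 --output-format=json \
  | jq '{read_iops: .jobs[0].read.iops, write_iops: .jobs[0].write.iops}'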

Compliance is a Feature, Not a Bug

The post-Schrems II landscape is unforgiving. Transferring data to US-owned clouds requires "supplementary measures" that are often technically impossible to guarantee (like encryption where the keys are never accessible to the cloud provider).

By keeping your persistence layer on CoolVDS in Norway, you gain:

  • Sovereignty: Your data sits on Norwegian soil, protected by Norwegian privacy laws, outside the direct scope of the US CLOUD Act.
  • Predictable Costs: Unlike hyperscalers that charge for every GB of egress traffic (the "Data Tax"), predictable bandwidth pricing models allow you to replicate data without checking the bank account every hour.
  • Simplicity: No complex IAM roles to manage just to spin up a MySQL server.

The Verdict

The era of putting all your eggs in one massive, US-owned basket is ending. It is risky legally and costly financially. A multi-cloud strategy using WireGuard for secure transport and Terraform for unified management gives you the agility of the cloud with the security of a vault.

You don't need to rebuild your entire stack today. Start by migrating your disaster recovery or a secondary database replica to a sovereign provider. Test the latency. Check the IOPS.
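
For the replica route, here is a hedged sketch of that first step with MySQL over the WireGuard tunnel. The host, user, and GTID assumption are illustrative, not prescriptive.

# On the CoolVDS node: point a fresh replica at the existing primary over the tunnel.
# Assumes GTID-based replication and a primary reachable on a tunnel address.
mysql -e "CHANGE MASTER TO
  MASTER_HOST='10.100.0.2',
  MASTER_USER='repl',
  MASTER_PASSWORD='********',
  MASTER_AUTO_POSITION=1;
  START SLAVE;"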

Ready to secure your data infrastructure? Deploy a compliant, NVMe-powered instance on CoolVDS today and regain control of your digital borders.