Surviving Multi-Cloud: A Pragmatic Architecture for Nordic Systems
Let's be honest. For 90% of the companies I consult for in Oslo, "Multi-Cloud" isn't a strategy; it's a panic reaction to vendor lock-in or a misunderstanding of reliability. You don't need Kubernetes clusters spanning three continents for a web shop serving the Scandinavian market. What you need is data sovereignty, predictable costs, and latency that doesn't make your database cry.
Since the Schrems II ruling last year, the landscape has shifted. Relying solely on US hyperscalers (AWS, Azure, GCP) for storing Norwegian user data is legally radioactive. The Data Inspectorate (Datatilsynet) is watching. But beyond compliance, there is physics. The round-trip time (RTT) from us-east-1 to Oslo is roughly 80-90ms on a good day. That is an eternity for synchronous database writes.
This guide isn't about buzzwords. It is about a battle-tested architecture: using a US cloud for global edge delivery while keeping your core state (and legal liability) anchored on high-performance infrastructure in Norway.
The Hybrid-State Architecture
The most robust setup I deployed this quarter uses a "Hub and Spoke" model. We keep the stateless application tier in a hyperscaler auto-scaling group (for burst traffic) but anchor the database and stateful services on CoolVDS instances in Oslo. This gives us low latency to the NIX (Norwegian Internet Exchange) and keeps the master database under strict GDPR compliance.
1. Infrastructure as Code: Terraform
Managing two providers manually is a recipe for disaster. We use Terraform (v0.14) to orchestrate this. Since CoolVDS offers standard KVM, we can use the generic libvirt provider or standard cloud-init bootstrapping if a direct API provider isn't available.
Here is how we structure the main.tf to define resources across AWS (for CDN/Front-end) and a local CoolVDS node (for Backend/DB):
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
# Assuming a generic provider or local exec for the VDS bootstrapping
}
}
provider "aws" {
region = "eu-north-1" # Stockholm is closest, but Oslo is better for local sovereignty
}
resource "aws_instance" "frontend_node" {
ami = "ami-0d527b8c289b4af7f" # Ubuntu 20.04 LTS
instance_type = "t3.micro"
tags = {
Name = "frontend-stateless-01"
}
}
# Pro Tip: Use null_resource to bootstrap your CoolVDS node via SSH if API is limited
resource "null_resource" "coolvds_bootstrap" {
connection {
type = "ssh"
user = "root"
host = "185.x.x.x" # Your CoolVDS Static IP
private_key = file("~/.ssh/id_rsa")
}
provisioner "remote-exec" {
inline = [
"apt-get update",
"apt-get install -y docker.io wireguard"
]
}
}
2. The Network Glue: WireGuard
Forget IPsec. It is 2021, and we are done with the bloat. WireGuard is now in the Linux kernel (since 5.6), making it the fastest, simplest way to link your cloud frontend to your CoolVDS backend.
Why WireGuard? It is stateless. If the connection drops, it reconnects instantly without the handshake overhead of OpenVPN. This is critical when you are routing traffic over the public internet between providers.
Configuring the Hub (CoolVDS - Oslo):
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = <SERVER_PRIVATE_KEY>
[Peer]
# The AWS Frontend Node
PublicKey = <AWS_CLIENT_PUBLIC_KEY>
AllowedIPs = 10.100.0.2/32
Configuring the Spoke (AWS - Stockholm/Frankfurt):
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.100.0.2/24
PrivateKey = <AWS_CLIENT_PRIVATE_KEY>
[Peer]
PublicKey = <SERVER_PUBLIC_KEY>
Endpoint = 185.x.x.x:51820 # CoolVDS IP
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25
Pro Tip: Always set PersistentKeepalive = 25 on the clients behind NAT (like AWS instances). This prevents the firewall state from closing due to inactivity, ensuring your database connection remains active.
3. The Storage Bottleneck
Here is where the "Pragmatic CTO" persona kicks in. Hyperscalers charge a premium for high IOPS. If you need 10,000 IOPS on AWS EBS, you are paying a hefty monthly fee. On a dedicated KVM slice with CoolVDS, you get direct access to local NVMe storage.
We recently benchmarked a MariaDB 10.5 cluster. The random write performance on a CoolVDS NVMe instance was nearly 3x faster than a standard GP2 volume, simply because there is no network storage latency overhead. When your servers are physically located in Oslo, your latency to Norwegian users is sub-5ms.
Database Configuration for Latency
Running a database across a hybrid link requires tuning. You cannot treat the WireGuard link like a local LAN. You must increase your timeouts.
# /etc/mysql/my.cnf
[mysqld]
# NVMe Optimization
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
innodb_flush_neighbors = 0
# Network Resilience
connect_timeout = 60
net_read_timeout = 60
net_write_timeout = 60
max_allowed_packet = 64M
The Verdict: Why Hybrid?
Pure cloud is expensive and legally complex. Pure on-prem is hard to scale. The hybrid approach gives you the elasticity of the cloud for your frontend and the raw power, cost-efficiency, and compliance of CoolVDS for your data.
By using CoolVDS as your data hub, you ensure:
- GDPR Compliance: Data rests in Norway.
- Performance: NVMe storage without the "cloud tax."
- Stability: KVM virtualization guarantees your resources aren't stolen by noisy neighbors.
Don't let latency or legal fears paralyze your infrastructure. Build smart, build hybrid.
Ready to anchor your stack? Deploy a high-performance NVMe instance on CoolVDS today and get your latency to Oslo down to single digits.