The Pragmatic Multi-Cloud Strategy: Surviving Schrems II and Egress Fees in 2023

Let’s cut through the marketing noise. For most CTOs in 2023, "Multi-Cloud" isn't a strategic choice; it's a defensive posture forced upon us by two things: the terrifying volatility of hyperscaler billing and the legal minefield of GDPR post-Schrems II.

I recently audited a SaaS platform based here in Oslo. They were 100% AWS, using everything from EC2 to RDS. Their infrastructure bill was manageable, but Data Transfer Out (egress) fees were eating 30% of their monthly revenue. Worse, their legal team was panicking about customer data residing on US-owned infrastructure, even though everything ran in eu-north-1 (Stockholm).

The solution wasn't to abandon AWS—that's impractical. The solution was arbitrage. We moved the heavy compute and persistent storage to a high-performance local provider (CoolVDS) while keeping specific managed services on the hyperscaler. Here is how we architected it.

The Architecture: The "Sovereign Core" Approach

The premise is simple: Treat hyperscalers as utility providers for proprietary features (like advanced AI APIs or global CDNs) and use standard Linux VPS instances for the "heavy lifting"—databases, application logic, and batch processing.

Why? Because a vCPU on a CoolVDS NVMe instance behaves predictably. It doesn't come with a credit system that throttles you when your traffic spikes. More importantly, data sitting on a server physically located in Norway, owned by a European entity, simplifies your Record of Processing Activities (ROPA) significantly.

Step 1: Unifying Infrastructure with Terraform

Managing two providers manually is a recipe for disaster. We use Terraform to treat CoolVDS and AWS as a single logical entity. AWS has a first-party provider; for standard VPS instances, we typically drive provisioning through a generic provider (such as libvirt) or a vendor-specific one to bootstrap the initial state.

Here is a stripped-down main.tf structure demonstrating how we provision a "Sovereign Core" database node alongside an AWS frontend:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    # Assuming a generic libvirt-compatible provider for the VPS side
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.7.1"
    }
  }
}

# The Hyperscaler: Ephemeral Frontends
resource "aws_instance" "frontend" {
  ami           = "ami-0c55b159cbfafe1f0" # example AMI; region-specific
  instance_type = "t3.micro"
  tags = {
    Name = "stateless-frontend-01"
  }
}

# The Sovereign Core: Persistent Database on CoolVDS
resource "libvirt_domain" "db_node" {
  name   = "coolvds-db-01"
  memory = 8192 # MiB
  vcpu   = 4

  network_interface {
    network_name = "default"
  }

  disk {
    volume_id = libvirt_volume.os_image.id
  }

  # Cloud-init to bootstrap security immediately
  cloudinit = libvirt_cloudinit_disk.commoninit.id
}

This configuration is basic, but the intent is clear: we define resources based on their role, not their vendor.
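The libvirt_volume.os_image and libvirt_cloudinit_disk.commoninit references are defined elsewhere in the module. For completeness, here is a minimal sketch of what they might look like, assuming an Ubuntu 22.04 cloud image and a local cloud-init.yml (both placeholders, not specific to any provider):

# Assumed definitions for the volume and cloud-init seed referenced above
resource "libvirt_volume" "os_image" {
  name   = "coolvds-db-01-os"
  # Placeholder image URL; substitute your preferred base image
  source = "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img"
}

resource "libvirt_cloudinit_disk" "commoninit" {
  name      = "commoninit.iso"
  # cloud-init.yml (hypothetical) would create users, lock down SSH, enable the firewall
  user_data = file("${path.module}/cloud-init.yml")
}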

Step 2: The Network Mesh (WireGuard)

The biggest challenge in multi-cloud is latency and security between the clouds. IPsec is heavyweight: slow to negotiate and expensive to push packets through. In 2023, if you aren't using WireGuard, you are wasting CPU cycles. WireGuard's lean, in-kernel implementation delivers lower latency and less overhead, which is crucial when your app server is in an Oslo datacenter and your S3 bucket is in Stockholm.

We set up a mesh where the CoolVDS instance acts as the stable hub. Because CoolVDS provides generous bandwidth without the draconian egress fees of AWS, it makes sense to route traffic through it.

Here is a production-ready wg0.conf for the Hub (CoolVDS node):

[Interface]
Address = 10.100.0.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = [SERVER_PRIVATE_KEY]

# Peer: AWS Frontend 01
[Peer]
PublicKey = [AWS_CLIENT_PUBLIC_KEY]
AllowedIPs = 10.100.0.2/32
PersistentKeepalive = 25

Pro Tip: Always set PersistentKeepalive = 25 when dealing with hyperscalers. Their NAT gateways are aggressive about closing idle UDP connections, and this setting keeps the tunnel open even during low-traffic periods.
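The matching config on an AWS peer is symmetrical. A minimal sketch (keys and the hub's public IP are placeholders):

[Interface]
Address = 10.100.0.2/24
PrivateKey = [AWS_CLIENT_PRIVATE_KEY]

[Peer]
PublicKey = [SERVER_PUBLIC_KEY]
Endpoint = [COOLVDS_PUBLIC_IP]:51820
AllowedIPs = 10.100.0.0/24
PersistentKeepalive = 25

Bring both sides up with wg-quick up wg0 and confirm the handshake with wg show.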

Step 3: Data Gravity and IOPS

The "Pragmatic CTO" knows that IOPS (Input/Output Operations Per Second) is where cloud bills go to die. Provisioned IOPS (io1/io2) on AWS is exorbitantly expensive. If your database requires 10,000 IOPS, you are looking at hundreds of dollars a month just for the privilege of reading your own data.

On a platform like CoolVDS, utilizing local NVMe storage, you access the raw speed of the drive. We are talking about latency figures often measuring in the microseconds, not milliseconds.
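Don't take the datasheet's word for it. A quick fio run against the database volume (the flags below are illustrative; adjust paths and sizes to your environment) shows the real latency distribution:

fio --name=randread --filename=/var/lib/mysql/fio-test.tmp \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --size=1G --runtime=30 --time_based --group_reporting

Check the clat percentiles in the output: on local NVMe, p99 read latency typically lands in the tens to hundreds of microseconds, while network-attached cloud volumes tend to sit around a millisecond or more.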

For a recent MySQL cluster deployment, we optimized the `innodb_io_capacity` to match the local NVMe capabilities, something we could never afford on a standard cloud tier:

-- Optimize for NVMe storage
SET GLOBAL innodb_io_capacity = 20000;
SET GLOBAL innodb_io_capacity_max = 40000;
SET GLOBAL innodb_flush_neighbors = 0; -- NVMe handles random writes well
-- innodb_log_file_size is not dynamic; set it to 2G in my.cnf and restart
-- to reduce checkpointing frequency. On MySQL 8.0.30+, resize the redo
-- log online instead:
SET GLOBAL innodb_redo_log_capacity = 2147483648; -- 2 GiB

These settings on a shared cloud instance would likely trigger IO throttling. On dedicated NVMe resources, the database breathes freely.
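After applying them, it's worth reading the values back (and persisting them in my.cnf so they survive a restart):

SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity%';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_neighbors';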

Compliance: The "Norwegian Fortress"

Since the Schrems II ruling invalidated the Privacy Shield, transferring personal data to US providers has required complex Standard Contractual Clauses (SCCs) and supplementary measures. The Norwegian Data Protection Authority (Datatilsynet) has been clear: you must ensure the data is protected from foreign surveillance.

By hosting your primary database on CoolVDS servers located physically in Norway, you create a stronger compliance argument. You are not just checking a box; you are reducing the surface area of data exposed to US jurisdictions. Your encrypted backups can go to S3 (with proper client-side encryption), but the live, unencrypted data stays on European soil under European law.
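In practice, that backup pipeline can be a one-liner. A minimal sketch, assuming gpg and the AWS CLI are installed; the database name, passphrase file, and bucket are placeholders:

mysqldump --single-transaction appdb | gzip \
  | gpg --batch --pinentry-mode loopback --symmetric --cipher-algo AES256 \
        --passphrase-file /root/.backup-passphrase -o - \
  | aws s3 cp - "s3://example-backup-bucket/appdb-$(date +%F).sql.gz.gpg"

The passphrase never leaves your Norwegian server; what lands in S3 is ciphertext that AWS cannot read.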

Cost Analysis: The 60% Drop

Let's look at the TCO (Total Cost of Ownership) for a mid-sized setup:

Component                     Hyperscaler (Est.)   Hybrid (CoolVDS Core)
Compute (4 vCPU, 16 GB RAM)   $180/mo              $45/mo
Storage (500 GB NVMe/SSD)     $65/mo (gp3)         Included / low cost
Egress (5 TB traffic)         $450/mo              $0 (included)
Total (monthly)               ~$695                ~$45 + frontend costs

The difference is staggering. By offloading the bandwidth-heavy and storage-heavy components to CoolVDS, the hyperscaler bill drops to just the cost of the lightweight frontend containers and specific APIs.

The Verdict

Building a multi-cloud strategy in 2023 isn't about using every shiny new tool. It's about recognizing that infrastructure is a commodity. You shouldn't pay premium prices for commodity compute.

We built CoolVDS to be the reliable, compliant anchor in this hybrid setup. We provide the raw NVMe performance and the local presence that the hyperscalers charge a premium for. Whether you are running a Kubernetes worker node or a legacy PostgreSQL monolith, the physics of latency and the economics of bandwidth don't lie.

Is your cloud bill scaling faster than your user base? Stop paying the "laziness tax." Spin up a high-performance NVMe instance on CoolVDS today and see what your database can actually do when the handcuffs are taken off.