The Uncomfortable Truth About Single-Vendor Cloud in 2020
Let's be honest. The brochure version of "The Cloud" promised us infinite scalability and zero maintenance. The reality we face in early 2020? Inscrutable billing dashboards, egress fees that bleed budgets dry, and a lingering anxiety about data sovereignty. If you are a CTO or Lead Architect operating out of Oslo or Stavanger, you know that putting all your eggs in the AWS us-east-1 (or even eu-central-1) basket is not a strategy. It is a gamble.
I have spent the last month auditing infrastructure for a mid-sized Norwegian fintech. Their AWS bill was fluctuating by 40% month-over-month, driven primarily by IOPS provisioning and cross-AZ data transfer. They were technically "cloud-native," but they were also financially trapped. The solution was not a retreat to a basement server room. The solution was a pragmatic Multi-Cloud Strategy.
This is not about buzzwords. It is about physics and economics. It is about keeping your heavy I/O workloads on cost-predictable, high-performance local infrastructure (like CoolVDS) while leveraging hyperscalers for what they are actually good at: elastic burst compute and managed ML services.
The Latency & Legal Argument: Why Location Matters
In Norway, we deal with two hard constraints: the speed of light and Datatilsynet (the Norwegian Data Protection Authority). While the EU-US Privacy Shield is still technically valid, anyone following the Schrems II case before the Court of Justice of the EU knows it stands on shaky ground. Data residency is becoming a strict requirement for medical, financial, and government workloads.
Furthermore, latency kills user experience. A round trip from Oslo to Frankfurt (AWS) is roughly 15-20ms. From Oslo to a local CoolVDS node? We are talking <3ms. For a high-frequency trading bot or a real-time bidding system, that difference is the entire business model.
Architecture Pattern: The "Core & Burst" Model
The most effective pattern I have deployed involves placing the Stateful Core (Databases, NFS, Object Storage) on local, fixed-cost infrastructure with massive NVMe throughput, and the Stateless Burst (Web servers, Worker nodes) on a public cloud that scales up and down.
Why? Because hyperscalers charge extortionate rates for high-performance storage. A gp2 volume on AWS is fine for light duty, but if you need a sustained 20k IOPS you are into provisioned-IOPS io1 territory, billed per IOPS per month. On a specialized provider like CoolVDS, high-speed NVMe is the default standard, not an upsell.
Implementing the Split with Terraform (v0.12)
Infrastructure as Code (IaC) is the only way to manage this complexity without losing your mind. We use Terraform 0.12 to abstract the differences between providers. Below is a simplified structure showing how we define resources across two distinct providers.
# provider.tf - Defining our dual-stack world
provider "aws" {
  region  = "eu-central-1"
  version = "~> 2.50"
}

# The generic provider for our KVM-based heavy lifter
provider "libvirt" {
  uri = "qemu+ssh://root@core-node.coolvds.com/system"
}

resource "aws_instance" "burst_worker" {
  count         = 5
  ami           = "ami-0c55b159cbfafe1f0" # Ubuntu 18.04 LTS
  instance_type = "t3.medium"

  tags = {
    Name = "Burst-Worker-${count.index}"
  }
}
resource "libvirt_domain" "db_primary" {
name = "db-core-nvme"
memory = "16384"
vcpu = 4
disk {
volume_id = libvirt_volume.nvme_store.id
}
network_interface {
network_name = "default"
}
}Networking the Divide: WireGuard vs. IPsec
Connecting your CoolVDS core to your AWS burst tier requires a secure tunnel. Historically, we used IPsec (StrongSwan), which is robust but a nightmare to configure. However, with Linux kernel 5.6 just around the corner, WireGuard is the technology to watch. For now, in production environments running Ubuntu 18.04 or CentOS 7, we are using the user-space WireGuard implementation or OpenVPN for site-to-site connectivity.
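If you want to trial WireGuard on the core node today, the configuration is refreshingly small. Here is a minimal `wg0.conf` sketch for the CoolVDS side; the keys, tunnel subnet, and peer CIDR below are placeholders, not values from a real deployment:
[Interface]
# Tunnel address of the CoolVDS core node
Address = 10.9.0.1/24
ListenPort = 51820
PrivateKey = <core-node-private-key>

[Peer]
# The AWS burst-tier gateway
PublicKey = <aws-gateway-public-key>
AllowedIPs = 10.9.0.2/32, 172.31.0.0/16
# Keep NAT mappings alive across the public internet
PersistentKeepalive = 25
Bring it up with `wg-quick up wg0`; the mirror-image file (minus ListenPort) goes on the AWS side.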
Here is a battle-tested OpenVPN server configuration `server.conf` tuned for stability over the public internet, essential when bridging providers:
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
topology subnet
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
keepalive 10 120
# tls-auth key direction: 0 on the server, 1 on clients
tls-auth ta.key 0
cipher AES-256-CBC
auth SHA256
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
verb 3
# Push route to the local private network
push "route 192.168.10.0 255.255.255.0"Pro Tip: Do not route database traffic over the public internet if you can avoid it. If you must, ensure you are using SSL/TLS replication. Even better, use a direct fiber connection or ensure your VPS provider has excellent peering at NIX (Norwegian Internet Exchange) to minimize hops to the hyperscaler's edge location.
The Database Problem: Handling Latency
Spanning a database cluster across clouds is where most architects fail. The latency between providers (even 15ms) is too high for synchronous replication (like Galera Cluster): every commit has to wait on a cross-WAN round trip for certification, which caps a single connection at roughly 60-70 write transactions per second no matter how fast your disks are.
My recommended setup for 2020:
- Master (Write): Hosted on CoolVDS High-Frequency NVMe. This handles all `INSERT` and `UPDATE` operations locally in Norway.
- Read Replicas: One local replica for failover, and asynchronous replicas in AWS/Azure for the burst workers to read from.
Use ProxySQL to route the traffic intelligently. This ensures that your heavy writes enjoy the raw I/O speed of the dedicated resources, while your frontend apps in the cloud get local read speeds.
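As a sketch of what that routing looks like, here are the statements you would feed ProxySQL's admin interface (port 6032). The hostgroup numbers and hostnames are illustrative placeholders, not a canonical layout:
-- Hostgroup 10: CoolVDS master (writes); hostgroup 20: read replicas
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (10, '10.8.0.1', 3306),
       (20, 'replica-aws.internal', 3306);

-- Route plain SELECTs to replicas; SELECT ... FOR UPDATE stays on the master.
-- Anything unmatched follows the user's default_hostgroup (set it to 10).
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (2, 1, '^SELECT', 20, 1);

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;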
MySQL 8.0 Configuration for Async Replication
In your `my.cnf`, make sure GTID is enabled so that failover is less painful; we learned that the hard way after a split-brain incident last year.
[mysqld]
server-id = 1
gtid_mode = ON
enforce_gtid_consistency = ON
log_bin = mysql-bin
binlog_format = ROW
# Durability settings for the Master
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
# Optimization for NVMe storage
innodb_io_capacity = 20000
innodb_io_capacity_max = 40000
innodb_flush_method = O_DIRECT

Note the `innodb_io_capacity` settings. On standard cloud block storage, you pay extra for these numbers. On CoolVDS, this is just how the hardware behaves. Leveraging this allows us to run heavier queries without locking tables.
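With GTID auto-positioning enabled, pointing the AWS-side replica at the master is a single statement. A minimal sketch, assuming the master is reachable at its tunnel address and a `repl` user exists (both are placeholders):
-- Run on the replica (MySQL 8.0 syntax as of early 2020)
CHANGE MASTER TO
  MASTER_HOST = '10.8.0.1',                  -- master via the VPN tunnel
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '<replication-password>',
  MASTER_SSL = 1,                            -- encrypt replication traffic
  MASTER_AUTO_POSITION = 1;                  -- GTID-based positioning
START SLAVE;
Because positioning is GTID-based, a promoted replica can be re-pointed at a new master without hunting down binlog file names and offsets.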
The Verdict: TCO and Peace of Mind
A multi-cloud strategy isn't about being fancy. It is about Total Cost of Ownership (TCO). By moving our database and storage layer to CoolVDS, we reduced the client's monthly spend by 35%. We eliminated the egress fees for internal data processing and gained a jurisdictionally safe harbor for their customer data in Norway.
We still use AWS. We love their S3 durability and their Lambda functions. But we no longer rent their overpriced computers for 24/7 database crunching. We rent their innovation, not their raw iron.
If you are serious about performance and sovereignty, stop treating the cloud as a single destination. Treat it as a toolkit. And make sure your toolkit includes a hammer that you actually own and control.
Ready to reclaim your infrastructure? Stop paying the "cloud tax" on IOPS. Deploy a high-performance NVMe instance in Norway with CoolVDS today and benchmark the difference yourself.