Escaping the AWS Trap: Building a Resilient Multi-Cloud Strategy in 2017
We need to talk about the elephant in the server room. Everyone is rushing to the public cloud with a velocity that suggests they think it's a magic bullet. It isn't. I have sat in too many boardrooms in Oslo this year explaining why the monthly AWS bill has tripled while latency to their Stavanger clients hasn't improved. The promise of the cloud was flexibility; the reality for many has become a golden handcuff known as vendor lock-in.
But the financial aspect isn't even the biggest threat. It's sovereignty. With the General Data Protection Regulation (GDPR) set to become enforceable in May 2018, less than a year from now, relying exclusively on US-based giants puts your data architecture on shaky legal ground. If you are handling Norwegian citizen data, you need a strategy that encompasses data residency, failover, and raw performance.
This is not a manifesto against Amazon or Google. It is a guide to survival. We will look at how to build a hybrid, multi-cloud architecture that leverages the massive scalability of big cloud providers while anchoring your core data and predictable workloads on high-performance, local infrastructure like CoolVDS.
The Hybrid Architecture: "Core and Burst"
The most pragmatic approach for 2017 is the "Core and Burst" model. You keep your baseline load (your database master, your primary application servers, your backend processing) on cost-effective, high-performance NVMe VPS instances. You then use the public cloud purely for auto-scaling capabilities during traffic spikes.
Why? Because compute cycles on EC2 can cost 3x to 5x more than a comparable KVM slice on a dedicated provider when running 24/7. By moving the baseline to CoolVDS, you slash TCO immediately. But to make this work, you need an orchestration layer that doesn't care whose hardware it runs on.
Infrastructure as Code: The Equalizer
If you are manually SSH-ing into servers to run apt-get install, you have already lost. In a multi-cloud environment, consistency is paramount. We use Terraform (currently v0.9) to define the state of our infrastructure across providers. This allows us to spin up a CoolVDS instance in Oslo and an AWS instance in Frankfurt using the same declarative syntax.
Here is a simplified example of how we abstract the provider differences using a Terraform configuration. Note how we separate the resources but unify the logic:
# main.tf - Terraform v0.9 syntax
provider "aws" {
  region = "eu-central-1"
}

# Define the volatile scaling group on AWS
resource "aws_instance" "burst_node" {
  ami           = "ami-c86c3f23" # Ubuntu 16.04 LTS
  instance_type = "t2.medium"

  tags {
    Name = "burst-worker"
  }
}

# Define the stable core on a KVM provider (e.g., CoolVDS via generic/custom provider)
# In 2017, we often use 'null_resource' with remote-exec for providers without official plugins yet
resource "null_resource" "core_node" {
  connection {
    type        = "ssh"
    user        = "root"
    host        = "185.x.x.x" # Your CoolVDS Static IP
    private_key = "${file("~/.ssh/id_rsa")}"
  }

  provisioner "remote-exec" {
    inline = [
      "apt-get update",
      "apt-get install -y nginx",
      "service nginx start"
    ]
  }
}
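With that file in place, the day-to-day workflow is identical no matter which provider a resource lands on. A minimal sketch of the commands, assuming the configuration above is saved as main.tf and Terraform 0.9 is on your PATH (the plan filename is arbitrary):

# Preview what will change on both providers before touching anything
terraform plan -out=multi-cloud.tfplan

# Apply the saved plan: the AWS node and the CoolVDS provisioning run from one command
terraform apply multi-cloud.tfplan

# Tear the burst capacity back down once the traffic spike has passed
terraform destroy -target=aws_instance.burst_node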
Traffic Management: The HAProxy Layer
Having servers in different locations is useless if you can't route traffic intelligently. DNS Round Robin is the poor man's load balancer; it doesn't respect server health or latency. For a robust multi-cloud setup, we deploy HAProxy 1.7 on the edge.
We position the load balancer on the infrastructure with the best peering. In Norway, connectivity to the Norwegian Internet Exchange (NIX) is critical for low latency, and this is where CoolVDS shines. By placing the HAProxy entry point on a CoolVDS instance in Oslo, you ensure that local users get a sub-10ms response time. HAProxy then routes traffic to the backend nodes, keeping requests on the local core and pulling in the remote cloud nodes only when it is down or out of capacity.
Pro Tip: When routing across public networks, never send unencrypted HTTP. It is 2017; Let's Encrypt is mature. Use SSL termination at the HAProxy layer, but ensure the backend communication is also tunneled, preferably via OpenVPN or IPsec. The overhead is negligible on modern CPUs.
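For the backend tunnel, a point-to-point OpenVPN link with a static key is the lowest-friction option. A rough sketch, assuming OpenVPN 2.x is installed on both nodes; the 10.8.0.x tunnel addresses, file paths, and the cloud node's IP are placeholders:

# Generate a shared static key on the CoolVDS node and copy it to the cloud node
openvpn --genkey --secret /etc/openvpn/static.key
scp /etc/openvpn/static.key root@52.x.x.x:/etc/openvpn/static.key

# Minimal point-to-point config on the CoolVDS side (tunnel IP 10.8.0.1)
cat > /etc/openvpn/p2p.conf <<'EOF'
dev tun
proto udp
port 1194
ifconfig 10.8.0.1 10.8.0.2
secret /etc/openvpn/static.key
cipher AES-256-CBC
keepalive 10 60
EOF

# On the cloud node, mirror this config, add "remote <CoolVDS public IP>", swap the
# ifconfig addresses, then start OpenVPN on both sides and point HAProxy at the
# 10.8.0.x addresses instead of the public ones.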
Here is a battle-tested haproxy.cfg snippet that keeps traffic on the local core and holds a cloud node in reserve:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    default_backend app_nodes

backend app_nodes
    balance roundrobin
    option httpchk HEAD /health HTTP/1.1\r\nHost:\ localhost
    # CoolVDS Local Node: Weight 100 (Primary)
    server core01 10.10.1.5:80 check weight 100
    # Cloud Burst Node: the 'backup' flag means it only serves traffic when core01
    # fails its health check; drop 'backup' (and cap core01 with maxconn) if you
    # want a weighted split or load-based overflow instead
    server cloud01 52.x.x.x:80 check weight 10 backup
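Following on from the TLS note above, here is a rough sketch of terminating SSL at this HAProxy layer with a Let's Encrypt certificate. It assumes certbot is installed on the load balancer, port 80 is briefly free for the standalone challenge, and example.com stands in for your real domain:

# Obtain a certificate via certbot's standalone challenge (stop HAProxy briefly,
# or use the webroot method if you cannot afford the interruption)
certbot certonly --standalone -d example.com

# HAProxy expects the full chain and the private key in a single PEM file
mkdir -p /etc/haproxy/certs
cat /etc/letsencrypt/live/example.com/fullchain.pem \
    /etc/letsencrypt/live/example.com/privkey.pem \
    > /etc/haproxy/certs/example.com.pem

# Then add an HTTPS frontend to haproxy.cfg, e.g.:
#   frontend https_front
#       bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
#       default_backend app_nodes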
The Data Consistency Nightmare
Stateless web servers are easy. The database is where multi-cloud dreams go to die. The speed of light is a hard constraint; you cannot have a synchronous multi-master database spanning Oslo and Frankfurt without incurring massive write latency. It is physics.
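To put rough numbers on it: Oslo to Frankfurt is about 1,100 km as the crow flies, and light in fiber covers roughly 200,000 km/s, so the theoretical one-way minimum is around 5.5 ms and a round trip around 11 ms; real routed paths typically land in the 20 to 30 ms range. A synchronous commit pays at least one full round trip per write, which is ruinous for a chatty OLTP workload.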
For high availability without the latency penalty, we recommend an asynchronous Master-Slave replication topology, or for read-heavy workloads, a Galera Cluster restricted to a single region with asynchronous replication to the second cloud for Disaster Recovery (DR).
On our CoolVDS NVMe instances, we see MySQL performance often doubling compared to standard SSD cloud storage due to the raw I/O throughput. This allows the "Master" to handle significantly more write queries before sharding is required.
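For the asynchronous Master-Slave topology mentioned above, the setup is refreshingly simple. A minimal sketch, assuming the CoolVDS node is the master at 10.10.1.5, the cloud DR node is the replica seeded from a consistent dump, and a replication user 'repl' already exists (all names, coordinates, and passwords here are placeholders):

# On the master (CoolVDS), enable the binary log in my.cnf and restart MySQL:
#   [mysqld]
#   server-id = 1
#   log_bin   = /var/log/mysql/mysql-bin.log
#
# On the replica (cloud DR node):
#   [mysqld]
#   server-id = 2
#   read_only = 1

# Grab the current binlog coordinates on the master...
mysql -h 10.10.1.5 -u root -p -e "SHOW MASTER STATUS;"

# ...and point the replica at them (substitute the file/position you just read)
mysql -u root -p -e "CHANGE MASTER TO \
  MASTER_HOST='10.10.1.5', MASTER_USER='repl', MASTER_PASSWORD='CHANGE_ME', \
  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=154; \
  START SLAVE;"

# Verify replication is flowing
mysql -u root -p -e "SHOW SLAVE STATUS\G"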
Galera Configuration for WAN Replication
If you must replicate across the WAN for DR purposes, you need to tune the Galera provider options in my.cnf to prevent the cluster from stalling on minor network jitters. This configuration is for MariaDB 10.1:
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="multi_cloud_cluster"
wsrep_cluster_address="gcomm://10.10.1.5,52.x.x.x"
# Critical for WAN: Increase timeouts to avoid false partitions
wsrep_provider_options="evs.keepalive_period=PT3S;evs.suspect_timeout=PT30S;evs.inactive_timeout=PT1M;gcache.size=1G"
# Optimization for NVMe storage
innodb_flush_method=O_DIRECT
innodb_io_capacity=2000
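Once that configuration is on every node, bring the cluster up deliberately. A rough sketch for a systemd-based MariaDB 10.1 install; bootstrap exactly one node, and treat the node order as an assumption:

# On the first node only: bootstrap a brand-new cluster
galera_new_cluster

# On every other node: a normal start makes it join via wsrep_cluster_address
systemctl start mariadb

# Sanity check: cluster size should equal the number of nodes you started
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"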
Why Local Presence Matters for GDPR
The impending GDPR framework places strict requirements on data controllers. While the Privacy Shield agreement currently allows data transfer to the US, political winds change. The safest bet for European enterprises is to ensure that the primary storage of personally identifiable information (PII) remains within the EEA (European Economic Area).
By using CoolVDS as your primary data store, you satisfy the data residency requirement by default. Your backups and encrypted snapshots can go to S3 or Glacier for durability, but the live, unencrypted data stays on Norwegian soil, protected by strong national privacy laws and Datatilsynet oversight.
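To make the "encrypted snapshots" part concrete, here is a rough sketch of a nightly backup job. The bucket name, GPG recipient, and paths are placeholders, and it assumes mysqldump, gpg, and the AWS CLI are installed on the CoolVDS node:

#!/bin/bash
# Dump, compress and encrypt locally - only ciphertext ever leaves Norwegian soil
mysqldump --single-transaction --all-databases \
  | gzip \
  | gpg --encrypt --recipient backups@example.com \
  > /var/backups/db-$(date +%F).sql.gz.gpg

# Ship the encrypted archive to S3 for durability (transition it to Glacier
# with a lifecycle rule on the bucket)
aws s3 cp /var/backups/db-$(date +%F).sql.gz.gpg s3://example-dr-backups/mysql/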
The Verdict
A multi-cloud strategy in 2017 isn't about using every service AWS offers. It is about commoditizing the infrastructure. Treat the big cloud providers as utility compute for bursting, but build your house on land you can trust.
We built CoolVDS to be that foundation. We provide the KVM virtualization, the DDoS protection, and the NVMe storage that gives you the performance of bare metal with the flexibility of the cloud. Don't wait until May 2018 to fix your compliance and latency issues.
Ready to decouple your architecture? Deploy a KVM instance in Oslo today and benchmark the latency difference yourself.