Multi-Cloud is a Necessity, Not a Luxury: A 2016 Survival Guide for Nordic CTOs

There is a dangerous trend sweeping through Oslo’s tech hubs right now. It starts with a credit card and ends with a monthly AWS bill that looks more like a mortgage payment. The narrative has been sold effectively: "Put everything in the cloud, it scales infinitely."

But as we close out 2016, the hangover is setting in. For those of us managing high-traffic workloads in Norway, going "all-in" on public cloud often proves slower and more expensive than promised. I recently audited a media streaming platform in Oslo that was routing 100% of its traffic through AWS Frankfurt (eu-central-1). They were paying a premium for bandwidth, yet their users were suffering from 35ms+ latency on dynamic content. In the world of high-frequency trading or real-time bidding, that latency is an eternity.

The solution isn't to abandon the cloud—it's to stop treating it as a religion. The future is Multi-Cloud. Specifically, a hybrid approach that leverages the raw, cost-effective power of local infrastructure (like CoolVDS) for your core, while using hyperscalers only for what they are good at: bursting and object storage.

The "Split-Brain" Architecture: Speed vs. Scale

The most pragmatic architecture I’ve deployed this year involves a "Split-Brain" model. We keep the data-heavy, I/O-intensive workloads on local, high-performance NVMe virtual servers, and offload static assets and temporary compute to the public cloud.

The Latency Tax

Let’s look at the physics. The round-trip time (RTT) from a fiber connection in Oslo to AWS Frankfurt is typically around 30-40ms. To a local datacenter in Oslo, it’s under 2ms.

# Ping from Oslo to AWS Frankfurt
64 bytes from 54.93.x.x: icmp_seq=1 ttl=241 time=34.2 ms

# Ping from Oslo to CoolVDS Local Instance
64 bytes from 185.x.x.x: icmp_seq=1 ttl=58 time=1.8 ms

For a Magento database making 50 queries per page load, that latency compounds. Your TTFB (Time To First Byte) skyrockets. By moving the MySQL primary node to a CoolVDS NVMe instance in Norway, we cut page load times by 400ms for this client.
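The worst case is easy to sketch. Assuming all 50 queries run sequentially (no pipelining), each one pays a full round trip, and the RTT figures from the pings above compound like this:

```shell
# Back-of-the-envelope latency tax, rounded from the pings above.
# Assumes 50 strictly sequential queries; real pages with some
# parallelism will land between these two extremes.
queries=50
rtt_frankfurt=34   # ms, Oslo -> AWS Frankfurt
rtt_local=2        # ms, Oslo -> local datacenter

echo "Frankfurt: $((queries * rtt_frankfurt)) ms of network wait per page load"
echo "Local:     $((queries * rtt_local)) ms of network wait per page load"
```

Even if only a quarter of those queries are truly sequential, the Frankfurt path still costs hundreds of milliseconds per page in pure network wait.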

Orchestration: The Glue (Terraform 0.7)

Managing two providers used to be a nightmare of Bash scripts. But with Terraform 0.7 (released just this August), we finally have a stable way to define this hybrid state. While Terraform state management is still maturing, the ability to treat a local KVM instance and an AWS S3 bucket as part of the same `main.tf` is powerful.

Here is a snippet of how we provision the "Core" on a generic provider (simulating our CoolVDS interface) and the "Burst" assets on AWS:

# main.tf

# Define our stable, high-performance Core on CoolVDS
resource "coolvds_instance" "db_primary" {
  name      = "production-db-01"
  image     = "ubuntu-16.04-x64"
  region    = "no-oslo-01"
  # NVMe is crucial here for MySQL InnoDB performance
  disk_type = "nvme"
  size      = "16gb"
}

# Define offsite backup storage on AWS
resource "aws_s3_bucket" "backups" {
  bucket = "company-backups-2016-nov"
  acl    = "private"
  
  tags {
    Environment = "Production"
  }
}

# Output the IP to configure our VPN later
output "db_ip" {
  value = "${coolvds_instance.db_primary.ipv4_address}"
}

The I/O Bottleneck: Why NVMe Matters

In 2016, "cloud" storage often means network-attached storage (like EBS gp2). It’s convenient, but it suffers from "noisy neighbor" syndrome. If another tenant on the host is thrashing the disk, your IOPS drop.

Pro Tip: Check your iowait percentage. If your CPU is idle but your application is slow, you are likely waiting on disk I/O. Run `iostat -x 1`. If `%util` is near 100% and `await` is high, your storage is the bottleneck.
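If `iostat` isn't installed (it ships with the `sysstat` package), you can get a rough iowait figure straight from the kernel. This is a sketch for Linux only: field 6 of the `cpu` line in `/proc/stat` is cumulative iowait time in jiffies.

```shell
# Cumulative iowait as a share of all CPU time since boot (Linux only).
# For a live view, run it twice a few seconds apart and diff the counters.
awk '/^cpu /{t=0; for(i=2;i<=NF;i++) t+=$i;
     printf "iowait: %.1f%% of CPU time since boot\n", 100*$6/t}' /proc/stat
```

Because the counters are cumulative, a single reading understates a recent I/O storm; `iostat -x 1` remains the better tool for watching a problem as it happens.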

We use CoolVDS because they offer local NVMe storage. This isn't just faster; it's a different protocol entirely compared to SATA SSDs. For a heavy write workload (like a logging stack or a transactional DB), the queue depth handling of NVMe prevents the system from choking under load.

The Compliance Tsunami: GDPR is Coming

We need to talk about the elephant in the room. The EU Parliament adopted the General Data Protection Regulation (GDPR) earlier this year. We have until May 2018 to comply, but smart CTOs are moving now.

With the invalidation of "Safe Harbor" and the shaky ground of the new "Privacy Shield," relying solely on US-owned hyperscalers for storing PII (Personally Identifiable Information) is becoming a legal risk. The Norwegian Data Protection Authority (Datatilsynet) is increasingly strict about where Norwegian citizen data lives.

The Strategy:
1. Data Residency: Keep the database on CoolVDS servers physically located in Norway/Europe.
2. Encryption: Use LUKS encryption on the NVMe volumes.
3. Anonymization: Only send anonymized logs to public cloud analytics tools.
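For point 2, a minimal LUKS setup looks like the sketch below. The device name `/dev/vdb` and the MySQL mount point are assumptions for illustration; adjust both for your instance, and note that `luksFormat` destroys any existing data on the target device.

```shell
# DANGER: luksFormat wipes /dev/vdb. Back up first.
# Create the encrypted container (prompts for a passphrase).
cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 /dev/vdb

# Unlock it as /dev/mapper/cryptdata, then put a filesystem on it.
cryptsetup luksOpen /dev/vdb cryptdata
mkfs.ext4 /dev/mapper/cryptdata

# Mount where the sensitive data lives (here: the MySQL datadir).
mount /dev/mapper/cryptdata /var/lib/mysql
```

Remember that encryption at rest only helps if the passphrase or keyfile is not stored on the same host; plan your unlock procedure for reboots accordingly.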

Implementing Failover with Nginx

You don't need a complex load balancer appliance to handle this. A standard Nginx 1.10 installation can act as a smart router. In this configuration, we prefer the local CoolVDS upstream for speed, but failover to a static cloud mirror if the dynamic backend goes dark.

upstream backend_core {
    # The high-performance local NVMe instance
    server 10.0.0.5:80 weight=10 max_fails=3 fail_timeout=30s;
    
    # Cloud fallback (slower, but keeps the site up)
    server 54.93.x.x:80 backup;
}

server {
    listen 80;
    server_name example.no;

    location / {
        proxy_pass http://backend_core;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        
        # Crucial for stable connections over VPN tunnels
        proxy_read_timeout 90;
    }
}

Conclusion

The "all-in" cloud strategy is great for startups with zero users and zero revenue. But for scaling businesses in the Nordics, the math doesn't hold up. You end up paying for IOPS you don't get and latency you can't afford.

By treating CoolVDS as your high-performance "Core" and the public cloud as your elastic "Shell," you get the best of both worlds: data sovereignty, NVMe speed, and controlled costs. Don't wait for the GDPR panic of 2018 to rethink your infrastructure.

Ready to test the difference real hardware makes? Deploy a CoolVDS NVMe instance in Oslo today and benchmark it against your current cloud provider. The results might scare you.