Multi-Cloud Architecture: Why Norway Needs a Hybrid Approach
Let’s cut through the marketing noise. In 2016, the mandate from nearly every board of directors is "Move to the Cloud." They read about AWS, Azure, and the elasticity of the public cloud in business magazines, and they assume it’s a silver bullet for stability and cost.
As a CTO or systems architect, you know the reality is messier. Physics (specifically, the speed of light) hasn't changed. If your primary customer base is in Oslo, Bergen, or Trondheim, and your application is chatting with a database hosted in Frankfurt or Ireland, you are paying a latency penalty on every single request. And with the ink barely dry on the EU-US Privacy Shield (adopted July 2016), relying solely on US-owned infrastructure remains a compliance tightrope, especially with the General Data Protection Regulation (GDPR), adopted in Brussels this April and set to apply from 2018.
The solution isn't to reject the public cloud, but to treat it as a utility, not a home. This is the case for the Multi-Cloud / Hybrid Strategy: anchoring your persistent data and heavy I/O workloads on high-performance local infrastructure (like CoolVDS) while bursting stateless compute to the hyperscalers.
The Latency Tax is Real
I recently audited a Magento e-commerce platform for a Norwegian retailer. They moved everything to AWS `eu-central-1` (Frankfurt). Their Time to First Byte (TTFB) jumped from 200ms to 650ms. Why? Because the application made 40+ database calls per page load, and the round-trip time (RTT) between the user and the server, plus the server's internal processing, added up on every request.
We ran a trace from a fiber connection in Oslo. Here is the reality of distance:
$ mtr --report --report-cycles=10 ec2.eu-central-1.amazonaws.com
HOST: oslo-office-gw Loss% Snt Last Avg Best Wrst StDev
1.|-- 192.168.1.1 0.0% 10 0.4 0.5 0.4 0.8 0.1
2.|-- ip-local-isp.no 0.0% 10 2.1 2.3 1.9 3.5 0.5
...
12.|-- ec2-54-93-0-1.eu-central 0.0% 10 34.2 35.1 33.8 41.2 2.1
35ms isn't bad for a single packet. But stack that sequentially? It kills the user experience. Compare that to a local instance on CoolVDS, peered directly at the NIX (Norwegian Internet Exchange):
$ mtr --report --report-cycles=10 oslo.coolvds.com
HOST: oslo-office-gw Loss% Snt Last Avg Best Wrst StDev
1.|-- 192.168.1.1 0.0% 10 0.4 0.5 0.4 0.8 0.1
2.|-- ip-local-isp.no 0.0% 10 2.1 2.2 1.9 2.9 0.3
3.|-- gw.coolvds.net 0.0% 10 2.8 2.9 2.7 3.1 0.1
2.9ms. That is an order of magnitude difference. For database-heavy applications, local hosting isn't nostalgia; it's performance optimization.
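To see why this matters, do the math on a chatty page load. A back-of-the-envelope sketch using the figures above (40 sequential database calls per page):

```shell
# Pure network wait for 40 sequential calls at each measured RTT.
# %.0f rounds to whole milliseconds.
awk 'BEGIN {
    calls = 40
    printf "Frankfurt (35 ms RTT):  %.0f ms\n", calls * 35
    printf "Oslo      (2.9 ms RTT): %.0f ms\n", calls * 2.9
}'
```

Roughly 1,400 ms of network wait versus about 116 ms, before the database has done any actual work.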
Architecture Pattern: The "Core & Burst" Model
The most pragmatic architecture for 2016 involves placing your Stateful Core (Databases, NFS/GlusterFS storage, Git repositories) on local, high-I/O VPS instances, and your Stateless Front-ends on a mix of local servers and public cloud instances for auto-scaling.
1. The Routing Layer
You can use Nginx as a smart load balancer to route traffic. If you are expecting a massive traffic spike (e.g., Black Friday), you spin up instances in AWS/DigitalOcean and add them to the upstream, but you keep the master data in Norway to preserve data sovereignty and write speed.
Here is a snippet from an nginx.conf designed to prioritize local nodes but spill over to remote cloud nodes when needed. The `backup` parameter does the heavy lifting:
http {
    upstream backend_cluster {
        # Primary local nodes (CoolVDS - low latency)
        server 10.10.0.5:80 weight=5;
        server 10.10.0.6:80 weight=5;

        # Burst nodes (public cloud - higher latency, but elastic)
        # Marked 'backup' so they only receive traffic when the
        # primary nodes are unavailable
        server 54.93.xx.xx:80 backup;
        server 54.93.xx.xy:80 backup;

        # Keep a pool of idle upstream connections open for reuse
        keepalive 16;
    }

    server {
        listen 80;
        server_name shop.example.no;

        location / {
            proxy_pass http://backend_cluster;
            proxy_set_header X-Real-IP $remote_addr;

            # Required for upstream keepalive: HTTP/1.1 and a cleared
            # Connection header, so nginx reuses connections instead of
            # paying the TCP handshake on every request
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
2. The Data Layer & NVMe
Most public cloud providers are still transitioning their standard tiers to SSD. Even then, "Provisioned IOPS" costs a fortune. In contrast, providers like CoolVDS have standardized on NVMe storage. In 2016, NVMe is still a differentiator.
If you are running MySQL or PostgreSQL, the disk I/O queue is where performance dies. Check your disk stats. If iowait is consistently above 5%, your CPU is just waiting on the disk.
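If you don't have sysstat/`iostat` installed, a rough iowait figure can be pulled straight from `/proc/stat`. A minimal sketch (Linux only; field 6 of the `cpu` line is iowait jiffies, and irq/steal time is ignored for brevity):

```shell
# Sample the aggregate CPU counters twice, one second apart, and report
# the share of that second spent waiting on I/O.
read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 _ < /proc/stat
awk -v w=$((w2 - w1)) -v t=$(( (u2 + n2 + s2 + i2 + w2) - (u1 + n1 + s1 + i1 + w1) )) \
    'BEGIN { printf "iowait: %.1f%%\n", (t > 0 ? 100 * w / t : 0) }'
```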
Pro Tip: When running databases on KVM-based VPS (like CoolVDS), change your I/O scheduler to `noop` or `deadline` inside the guest. The host handles the physical scheduling. Using `cfq` inside a virtual machine often leads to double-queueing overhead.
To change this on CentOS 7:
# echo noop > /sys/block/vda/queue/scheduler
# cat /sys/block/vda/queue/scheduler
[noop] deadline cfq
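Note that echoing into sysfs does not survive a reboot. To make the change permanent on CentOS 7, a sketch assuming a standard GRUB2 install:

```shell
# Append the scheduler choice to the kernel command line for all
# installed kernels; verify after the next boot with:
#   cat /sys/block/vda/queue/scheduler
grubby --update-kernel=ALL --args="elevator=noop"
```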
Data Sovereignty and Datatilsynet
We must address the elephant in the room. While the Privacy Shield framework has replaced Safe Harbor, many Norwegian entities (healthcare, finance, public sector) prefer strict adherence to the Personal Data Act (Personopplysningsloven). Datatilsynet (The Norwegian Data Protection Authority) has always recommended keeping sensitive citizen data within the EEA, and preferably within Norway to avoid legal ambiguity.
By hosting your core database on CoolVDS in Oslo, you satisfy the requirement of storage location. You can still use AWS for processing non-sensitive data or serving static assets (images/CSS) via CloudFront, but the PII (Personally Identifiable Information) never leaves the jurisdiction.
Secure Interconnects
Don't expose your database port (3306/5432) to the public internet. If you are connecting a cloud frontend to a local backend, use a VPN. In 2016, OpenVPN is the de facto standard, though IPsec (via strongSwan) generally delivers better throughput for site-to-site links.
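Whether you choose OpenVPN or IPsec, also firewall the database port on the CoolVDS side so that only tunnel traffic gets through. A minimal iptables sketch; the 10.8.0.0/24 tunnel subnet is an assumption, substitute your own:

```shell
# Accept MySQL connections only from the VPN subnet (assumed to be
# 10.8.0.0/24), drop every other source. Swap 3306 for 5432 if you
# run PostgreSQL.
iptables -A INPUT -p tcp --dport 3306 -s 10.8.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP
```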
A simple, robust way to secure the link between your cloud web servers and your local CoolVDS database is an SSH tunnel with autossh, if a full VPN is overkill for your setup:
# On the web server (Cloud)
# Forward local port 3307 to the remote database's localhost:3306
autossh -M 0 -f -N -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" \
    -i /path/to/key.pem -L 3307:127.0.0.1:3306 user@oslo-db.coolvds.com
Now your application connects to localhost:3307, and the traffic is encrypted through the tunnel to Norway.
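Before flipping the application over, it is worth confirming that something is actually listening on the forwarded port. A quick bash-only check using `/dev/tcp`, with no extra tools required:

```shell
# Try to open a TCP connection to the local tunnel endpoint on fd 3.
# This only succeeds if autossh has the forward up.
port=3307
if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "tunnel up on port ${port}"
else
    echo "nothing listening on port ${port}"
fi
```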
Conclusion: Balance is Key
The "All-In Public Cloud" strategy is often a billing trap and a latency nightmare for local businesses. The "All-On-Prem" strategy lacks agility. The sweet spot for 2016 is Hybrid.
Use the giants for what they are good at: CDN, object storage (S3), and burst compute. Use CoolVDS for what we are good at: predictable pricing, NVMe I/O performance that crushes provisioned IOPS, and low-latency connectivity to the Norwegian market.
Don't let latency kill your conversion rates. Spin up a KVM instance in Oslo today and test the ping for yourself.