The Cloud is Not a Strategy, It's a Tactic
Let’s address the elephant in the server room: the panic caused by the invalidation of the Safe Harbor agreement last year. If you are a CTO operating in Oslo or Bergen today, in late 2016, you are likely reading the freshly adopted General Data Protection Regulation (GDPR) — set to apply in May 2018 — and sweating. The EU-US Privacy Shield is in place, but for how long? Legal uncertainty is the enemy of stability.
For the past three years, the industry mantra has been "All-in on Cloud." Move everything to AWS. Move everything to Azure. But physics and lawyers disagree with this approach. If your users are in Norway, routing traffic through Frankfurt (AWS eu-central-1) or Ireland adds 30-40ms of latency round-trip. That is acceptable for a blog, but unacceptable for high-frequency trading or real-time VoIP applications.
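That 30-40ms figure compounds quickly, because a cold HTTPS request pays the round-trip several times over (TCP handshake, two TLS 1.2 round trips, then the HTTP exchange itself). A back-of-envelope sketch — the 35ms extra RTT is a mid-range assumption, not a measurement:

```shell
# Rough impact of an extra 35 ms RTT on one uncached HTTPS request.
# TCP SYN/ACK, 2x TLS 1.2 handshake, and the HTTP request/response
# each pay the round-trip once before the first byte arrives.
EXTRA_RTT_MS=35
ROUND_TRIPS=4
echo "$(( EXTRA_RTT_MS * ROUND_TRIPS )) ms added per cold request"
```

That is roughly 140ms of pure network tax before your application has done any work — which is why keeping the frontend close to the user matters.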
More importantly, data sovereignty is becoming a boardroom discussion. The pragmatic solution is not to abandon the cloud, but to adopt a Hybrid Core Architecture. We keep the database and sensitive customer data on sovereign Norwegian soil—leveraging high-performance local VPS—and use the public cloud merely for elastic compute during burst loads.
The Architecture: Local Core, Elastic Edge
In this architecture, we treat CoolVDS instances as the "Stateful Core." This is where your MySQL/MariaDB writes happen and where your customer PII (Personally Identifiable Information) resides, strictly under Norwegian Datatilsynet jurisdiction. The public cloud acts as a "Stateless Edge," spinning up disposable frontend workers only when traffic spikes.
To make this work seamlessly, we need three components:
- Infrastructure as Code (IaC) to manage disparate providers.
- Secure Tunneling to bridge the private networks.
- Smart Load Balancing to route traffic intelligently.
1. Unified Provisioning with Terraform (v0.7)
Managing two different providers manually is a recipe for disaster. HashiCorp's Terraform has matured significantly this year. It allows us to describe the state of our CoolVDS local nodes and our AWS burst nodes in a single file.
Below is a practical example of how to structure a main.tf to deploy a local "Core" node (simulated here as a generic provider for the sake of the example) and an AWS worker. Note the syntax specific to version 0.7:
provider "aws" {
  region = "eu-central-1"
}

# The Core: Secure Data Node in Norway (CoolVDS)
resource "coolvds_instance" "core_db" {
  image    = "centos-7-x64"
  label    = "norway-db-master"
  region   = "oslo-1"
  size     = "nvme-16gb" # High I/O is critical for the master DB
  ssh_keys = ["${var.ssh_fingerprint}"]
}

# The Edge: Stateless Worker in AWS
resource "aws_instance" "burst_worker" {
  ami           = "ami-bc5b48d0" # Amazon Linux 2016.09
  instance_type = "t2.micro"
  count         = "${var.worker_count}" # 0.7 has no ternary yet; set to 0 outside bursts
  tags {
    Name = "hybrid-worker"
  }
}

Pro Tip: Notice the nvme-16gb size for the local instance. In 2016, most cloud providers still rely on standard SSDs or even spinning rust (HDD) for their base tiers. CoolVDS standardizes on NVMe, which delivers IOPS figures that are often 4x-5x higher than the SATA SSDs found in public cloud general-purpose instances. For a database master, this I/O throughput is non-negotiable.

2. Bridging the Gap: OpenVPN Site-to-Site
Once you have servers in Oslo and Frankfurt, they need to talk privately. Do not expose your MySQL port (3306) to the public internet, even with SSL. It creates an unnecessary attack surface.
Instead, we deploy an OpenVPN server on the CoolVDS instance (the static IP endpoint) and have the dynamic cloud instances connect as clients. This creates a flat 10.8.0.0/24 network across providers.
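Once the tunnel interface (tun0) is up, enforce the no-public-3306 rule at the firewall as well. A sketch for iptables on CentOS 7 — note that stock CentOS 7 ships firewalld, so persisting these rules with `service iptables save` assumes you have installed the iptables-services package:

```
# Allow MySQL only from the VPN tunnel; drop it everywhere else
iptables -A INPUT -i tun0 -p tcp --dport 3306 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP
# Persist the rules across reboots (requires iptables-services)
service iptables save
```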
Server Config (Oslo Node) - /etc/openvpn/server.conf:
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "route 10.10.0.0 255.255.255.0" # Route to local LAN if needed
keepalive 10 120
tls-auth ta.key 0
cipher AES-256-CBC
auth SHA256
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
verb 3

Why AES-256-CBC? Because hardware acceleration (AES-NI) is standard on the modern Xeon CPUs we use at CoolVDS, so the encryption overhead on throughput is negligible.
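The server config covers only the Oslo side. A minimal matching client config for a cloud worker might look like the sketch below — the endpoint IP (198.51.100.10) is a placeholder for your CoolVDS static address, and worker.crt/worker.key stand in for per-node certificates you issue with easy-rsa:

```
client
dev tun
proto udp
remote 198.51.100.10 1194   # placeholder: your CoolVDS static IP
resolv-retry infinite
nobind
ca ca.crt
cert worker.crt
key worker.key
remote-cert-tls server
tls-auth ta.key 1           # key-direction 1 on clients (server uses 0)
cipher AES-256-CBC
auth SHA256
persist-key
persist-tun
verb 3
```

Because the cloud workers are the dynamic side, they initiate the connection; the CoolVDS node's static IP is the one fixed point in the topology.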
3. The Routing Logic: Nginx as the Gatekeeper
With the tunnel established, your application logic in the cloud needs to read from the master database in Norway. For read-heavy applications (like Magento or WordPress), you should replicate data to a read slave near the workers rather than query across the tunnel on every request. For this architectural example, however, let's focus on a scenario where the frontend stays in Norway for low latency to Norwegian users, and we offload image processing and batch jobs to the cloud.
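If you do take the replication route, a minimal sketch looks like the following — the server-ids, file names, and the 10.8.0.1 tunnel address are illustrative assumptions, and binding the master to the tunnel ensures replication traffic never touches the public internet:

```
# Oslo master (CoolVDS) - /etc/my.cnf
[mysqld]
server-id     = 1
log_bin       = mysql-bin
binlog_format = ROW
bind-address  = 10.8.0.1   # listen on the VPN tunnel only

# Cloud read slave - /etc/my.cnf
[mysqld]
server-id     = 2
read_only     = 1
relay_log     = relay-bin
```

On the slave, point CHANGE MASTER TO at the master's tunnel address (10.8.0.1 in this sketch), never at a public IP.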
We configure Nginx on the CoolVDS node to handle incoming user traffic via the NIX (Norwegian Internet Exchange) for minimal latency.
Nginx Upstream Configuration:
upstream backend_cluster {
    # Local fast processing (Primary)
    server 127.0.0.1:8080 weight=10;
    # Cloud offload via the VPN tunnel IP. Note: nginx only sends
    # traffic to a "backup" server when the primary is unavailable.
    server 10.8.0.5:8080 weight=2 backup;
}

server {
    listen 80;
    server_name api.norway-service.no;

    location / {
        proxy_pass http://backend_cluster;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;

        # Generous timeouts for long-running jobs offloaded over the tunnel
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
    }
}

The Cost & Compliance Reality
There is a financial argument here that is often overlooked. Public cloud providers charge exorbitant fees for Egress Traffic (data leaving their network). If you host a media-heavy site entirely on AWS and serve terabytes of data to users in Trondheim, your bill will explode.
By hosting the static assets and heavy downloads on CoolVDS, you benefit from our predictable bandwidth models. You only pay the "cloud tax" for the small JSON payloads exchanged between the worker nodes and the master database.
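To put a number on it, here is a back-of-envelope sketch. The $0.09/GB figure is an assumption based on 2016 AWS list pricing for the first 10 TB of egress from eu-central-1 — verify it against your own bill:

```shell
# Egress cost of serving media straight from the public cloud.
# Assumption: ~9 cents per GB at 2016 list price (first 10 TB tier).
TB_SERVED=10
PRICE_CENTS_PER_GB=9
echo "$(( TB_SERVED * 1000 * PRICE_CENTS_PER_GB / 100 )) USD/month"
```

Roughly $900 per month just to move 10 TB of bytes out of the datacenter, before you pay for a single CPU cycle.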
Furthermore, by keeping the storage volume physically in Oslo, you simplify your compliance posture. When the GDPR hammer drops in 2018, you will already be able to prove exactly where your data is stored at rest: on an encrypted NVMe array in a Norwegian datacenter, not replicated across three availability zones in jurisdictions whose legal status is still unsettled.
Implementing the Switch
Moving from a monolithic architecture to a hybrid one requires testing. Start small. Move your staging environment to a local VPS while keeping production in the cloud, or vice versa.
The latency advantage of being physically close to your user base cannot be patched with software. If your target market is here, your servers should be too.
Ready to secure your data sovereignty? Deploy a high-performance NVMe instance on CoolVDS today and build the core of your hybrid infrastructure.