Escaping the Vendor Lock-in Trap: A Pragmatic Hybrid Cloud Strategy for 2017

Let’s be honest: The cloud honeymoon is over. Back in 2013, we all rushed to move bare metal to EC2, convinced that "elasticity" solved everything. Now, nearing the end of 2016, many CTOs are staring at monthly bills that look more like ransom notes. We traded the headache of hardware for the headache of proprietary APIs and opaque pricing models.

I recently audited a SaaS platform based here in Oslo. They were 100% committed to a major US provider. When the Safe Harbor agreement was invalidated last year, and the new Privacy Shield framework came into play this August, their legal team panicked. Their customer data was sitting on disks in Virginia, while their customers were in Bergen and Trondheim. Beyond the compliance nightmare, they were paying premium rates for IOPS that couldn't match a standard local SSD.

The solution isn't to abandon the cloud. It's to stop treating it as a religion and start treating it as a commodity. This is the era of the Hybrid Cloud—keeping your data sovereign and cost-effective on high-performance infrastructure like CoolVDS, while utilizing public clouds strictly for what they are good at: ephemeral burst computing.

The Architecture: The "Core & Burst" Model

The most resilient architecture I’ve deployed this year follows a strict separation of concerns:

  1. The Core (CoolVDS): Database masters, stateful services, and primary application logic. This lives on dedicated KVM instances in Norway. Why? Predictable billing, lower latency to NIX (the Norwegian Internet Exchange), and readiness for GDPR compliance.
  2. The Burst (Hyperscalers): Stateless frontend workers and CDNs. These spin up only when traffic spikes and die when it drops.

This setup prevents you from paying 10x for "provisioned IOPS" on a public cloud when standard NVMe storage on a CoolVDS instance already saturates the bus.
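
Don't take the IOPS claim on faith: benchmark it yourself. A quick fio run (assuming fio is installed; the 4k random-write job below is purely illustrative) lets you compare raw storage performance on any instance before you commit to it:

fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --size=1G --numjobs=4 --runtime=60 --group_reporting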

Step 1: The Unified Control Plane

Managing two providers sounds like overhead. In 2014, it was. Today, with tools like Ansible (currently v2.2), it's trivial. We don't rely on proprietary vendor consoles. We define infrastructure as code.

Here is how we normalize the environment. We want our local CoolVDS instances and our cloud burst instances to look identical to the application code.

# inventory.ini

[core_norway]
coolvds-db-01 ansible_host=185.x.x.x
coolvds-app-01 ansible_host=185.x.x.y

[burst_cloud]
aws-worker-01 ansible_host=54.x.x.x
aws-worker-02 ansible_host=54.x.x.y

[all:vars]
ansible_python_interpreter=/usr/bin/python3
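
Before pushing any configuration, verify that Ansible can reach both groups with the same SSH credentials (the inventory filename matches the example above):

ansible all -i inventory.ini -m ping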

And here is a stripped-down Ansible playbook to ensure our Nginx configuration is consistent across both environments, regardless of the underlying hardware:

--- 
- hosts: all
  become: yes
  tasks:
    - name: Ensure Nginx is installed
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Deploy application config
      template:
        src: templates/app.conf.j2
        dest: /etc/nginx/sites-available/default
      notify: restart nginx

    - name: Tune Worker Processes
      lineinfile:
        dest: /etc/nginx/nginx.conf
        regexp: "^worker_processes"
        line: "worker_processes auto;"

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
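
Rolling the same configuration out to only the burst tier (for example, just before a marketing push) is then a one-liner; the playbook filename here is an assumption:

ansible-playbook -i inventory.ini site.yml --limit burst_cloud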

The Network Layer: Latency is the Killer

If you split your app, latency matters. A query from a worker in Frankfurt to a database in Oslo typically takes 15-20ms. That is acceptable for asynchronous jobs but fatal for a high-frequency trading app or a Magento checkout loop.

We mitigate this by using HAProxy as a smart gateway. It health-checks every backend and keeps traffic on the local CoolVDS instances (0.5ms latency) as long as they are healthy. If they drop out of the health checks, traffic spills over to the cloud tier.

Check the latency yourself:

ping -c 4 193.213.112.x # Test connectivity to NIX/Oslo

If you see numbers above 30ms from your primary user base, you are losing conversions. This is why hosting the "Core" in Norway is non-negotiable for Norwegian businesses.

Configuration: HAProxy weighted routing

This configuration prefers the high-performance local hardware but fails over to the cloud if necessary.

global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend http_front
    bind *:80
    default_backend app_nodes

backend app_nodes
    balance roundrobin
    option httpchk HEAD /health HTTP/1.1\r\nHost:\ localhost
    # CoolVDS Core - Weight 100 (Primary)
    server core01 10.10.1.5:80 check weight 100
    server core02 10.10.1.6:80 check weight 100
    
    # Cloud Burst - Weight 10 (Backup/Overflow)
    server cloud01 192.168.55.2:80 check weight 10 backup
    server cloud02 192.168.55.3:80 check weight 10 backup
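
The httpchk line above assumes each Nginx node exposes a /health endpoint. A minimal location block you could add to the app.conf.j2 template (the path and response body are just an illustration) looks like this:

location /health {
    access_log off;
    return 200 "OK\n";
}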

Data Sovereignty and the Database Layer

With the Datatilsynet becoming increasingly vigilant regarding the Personal Data Act, you cannot afford ambiguity about where your data writes occur. Read replicas can live anywhere; the Master must be secure.

Pro Tip: Never expose your MySQL port (3306) to the public internet, even with strong passwords. Use a VPN tunnel (OpenVPN) or SSH tunneling between your cloud workers and your CoolVDS core.
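
If a full VPN feels like overkill for a single worker, an SSH tunnel gives you the same encryption in one line. A sketch, assuming a deploy user on the core node (the hostname is hypothetical) and the worker's application pointed at 127.0.0.1:3306:

# Run on the cloud worker: forward local port 3306 to MySQL on the CoolVDS core
ssh -f -N -L 3306:127.0.0.1:3306 deploy@core-db.example.no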

We run Percona Server 5.7 (a drop-in replacement for MySQL) on the CoolVDS instances. The underlying NVMe storage delivers performance that EBS-optimized instances only approach once you pay extortionate fees for provisioned IOPS.

Here is a critical setting for my.cnf to ensure data safety when replicating over the WAN (Wide Area Network) between providers:

[mysqld]
# Mandatory for replication stability
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW

# Never expose 3306 publicly: bind to the VPN/private interface in production
# (0.0.0.0 shown for brevity) and force TLS on every connection
bind-address = 0.0.0.0
require_secure_transport = ON

# Performance for NVMe (SSD)
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000

To set up the replication user securely with SSL (essential when traversing different networks):

GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'use_a_strong_password_here' REQUIRE SSL;
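
On the replica side (a read copy running in the cloud, if you keep one there), point it at the core over the tunnel and force SSL. The host, password, and log coordinates below are placeholders you would take from SHOW MASTER STATUS:

CHANGE MASTER TO
  MASTER_HOST='10.10.1.5',
  MASTER_USER='repl',
  MASTER_PASSWORD='use_a_strong_password_here',
  MASTER_SSL=1,
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;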

The Economic Argument

Let's look at the Total Cost of Ownership (TCO). A generic cloud instance with 4 vCPUs and 16GB RAM often costs upwards of $150/month once you add bandwidth egress fees and storage IOPS costs. A comparable CoolVDS instance—which often benchmarks faster due to lack of "noisy neighbor" throttling—sits at a fraction of that cost.

Feature        | Public Cloud Giant             | CoolVDS (Local Core)
Storage I/O    | Throttled (pay for speed)      | Unthrottled NVMe
Data Location  | Usually Frankfurt/Ireland      | Norway (Oslo)
Bandwidth      | High egress fees               | Generous/unmetered
Privacy        | US jurisdiction (Patriot Act)  | Norwegian jurisdiction

Implementation Plan

Moving to a hybrid model doesn't happen overnight. Start by decoupling your state. Move your database and primary file storage (NFS/GlusterFS) to a secure, fixed-cost environment like CoolVDS. Once your data is anchored safely under Norwegian jurisdiction, point your stateless application servers to it.
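
Anchoring the file storage is usually the least dramatic step. A sketch of mounting an NFS export from the core node over the private VPN network (the export path and IP reuse the example addresses above and are assumptions):

# /etc/fstab entry on a stateless app server
10.10.1.5:/srv/app-data  /mnt/app-data  nfs  defaults,noatime,vers=4  0  0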

Don't wait for the next price hike or compliance audit to force your hand. Building resilience means owning your core infrastructure.

Ready to anchor your infrastructure? Deploy a high-performance NVMe instance on CoolVDS today and see the latency difference for yourself.