Disaster Recovery in a Post-Schrems II World: A Norwegian CTO’s Survival Guide
Let’s be honest. Most Disaster Recovery (DR) plans are documents that sit in a drawer (or a Google Drive folder) to satisfy an auditor. They aren't real. If `rm -rf /` happened on your production database today, would you bet your job on the restore process? More importantly, if that restore process involves moving Norwegian citizen data to a backup bucket in us-east-1, you aren't just fighting downtime. You are fighting the Datatilsynet (Norwegian Data Protection Authority).
It is March 2023. The Schrems II ruling has made the "just put it in the cloud" strategy legally radioactive for European companies handling personal data. You cannot simply pipe `mysqldump` output to an S3 bucket owned by a US entity without complex legal gymnastics (Standard Contractual Clauses) that might not even hold up in court.
As a CTO, your remit is simple: keep the lights on and keep the lawyers away. This requires a DR strategy that ensures Data Sovereignty without sacrificing Recovery Time Objectives (RTO). We are going to build a DR architecture that keeps data on Norwegian soil, utilizes standard open-source tooling, and leverages KVM isolation for security.
The Architecture: Hot vs. Warm Sites
For most SMEs in the Nordics, an Active-Active configuration is overkill. It requires complex bi-directional replication and load balancing logic that introduces more points of failure than it solves. We will focus on a Pilot Light (Warm Standby) approach.
- Primary Site: Your current infrastructure (Bare Metal or VPS).
- DR Site: A scaled-down CoolVDS instance in Oslo.
- Mechanism: Asynchronous replication for databases, file syncing for assets.
Why Oslo? Physics. If your primary users are in Bergen or Trondheim, latency matters. Replicating storage to a datacenter in Frankfurt adds milliseconds. Replicating to Oslo keeps it negligible. Plus, your data stays under Norwegian jurisdiction.
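You can sanity-check the latency claim yourself. This sketch pulls the average round-trip time out of `ping`'s summary line; the `192.0.2.10` address is a documentation placeholder, so substitute your actual candidate DR endpoints:

```shell
#!/bin/bash
# parse_avg_rtt: extract the average RTT (ms) from ping's summary line,
# which looks like: rtt min/avg/max/mdev = 0.512/1.204/2.017/0.311 ms
parse_avg_rtt() {
  awk -F'/' '/^(rtt|round-trip)/ {print $5}'
}

# Compare candidate DR sites (placeholder address):
# ping -c 20 -q 192.0.2.10 | parse_avg_rtt
```

Run it against both an Oslo endpoint and a Frankfurt one; the difference is the latency tax every replication packet pays.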
Phase 1: The Database Layer (MySQL 8.0)
Forget hand-calculated binlog coordinates. In 2023, if you aren't using Global Transaction Identifiers (GTIDs), you are doing it wrong. GTIDs make failover and failback sane because each replica tracks exactly which transactions it has applied; you never have to work out log file names and positions by hand.
Here is the configuration for your primary server (the source). We keep `innodb_flush_log_at_trx_commit` at 1 for full ACID durability; on the replica, we can relax it to speed up catch-up.
Primary Server my.cnf
```ini
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON

# Safety nets
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1

# Networking optimization for replication
slave_net_timeout = 60
```
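Before wiring up the replica, confirm the primary actually restarted with these settings. A quick sanity check from the `mysql` client:

```sql
-- Should return ON / ON and your configured server-id
SELECT @@gtid_mode, @@enforce_gtid_consistency, @@server_id;

-- Confirms binary logging is active and shows the executed GTID set
SHOW MASTER STATUS;
```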
Now, let's configure the DR site (CoolVDS). This instance acts as a replica. A smaller instance type keeps total cost of ownership (TCO) down, and since CoolVDS provides NVMe storage even on smaller plans, I/O throughput won't choke under the replication stream.
DR Server my.cnf
```ini
[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
gtid_mode = ON
enforce_gtid_consistency = ON
read_only = 1
super_read_only = 1  # read_only alone does not block SUPER-privileged writes

# Performance tuning for replication catch-up
innodb_flush_log_at_trx_commit = 2
innodb_buffer_pool_size = 2G  # Adjust based on your VDS RAM
skip_name_resolve = 1
```
To initialize replication securely, create a dedicated replication user. Do not use root.
```sql
CREATE USER 'repl_user'@'10.%.%.%' IDENTIFIED WITH caching_sha2_password BY 'Str0ngP@ssw0rd!';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'10.%.%.%';
FLUSH PRIVILEGES;
```
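With the user in place, attach the replica using GTID auto-positioning. A sketch, run on the DR instance: the `10.0.0.1` host is a placeholder for your primary's private/VPN address, and `GET_SOURCE_PUBLIC_KEY` is there because `caching_sha2_password` requires either TLS or RSA key exchange on the replication connection.

```sql
-- Run on the CoolVDS replica (MySQL 8.0.23+ syntax)
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = '10.0.0.1',         -- placeholder: primary's private address
  SOURCE_USER = 'repl_user',
  SOURCE_PASSWORD = 'Str0ngP@ssw0rd!',
  SOURCE_AUTO_POSITION = 1,         -- GTID-based: no log file/position needed
  GET_SOURCE_PUBLIC_KEY = 1;        -- RSA key exchange for caching_sha2_password

START REPLICA;
SHOW REPLICA STATUS\G               -- check Replica_IO_Running / Replica_SQL_Running
```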
Pro Tip: Use a VPN (WireGuard) or an SSH tunnel for the replication traffic. Never expose port 3306 to the public internet. On CoolVDS, you can set up a private network interface to keep this traffic completely off the public grid.
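A minimal WireGuard setup for that private link looks roughly like this; the keys, endpoint, and 10.0.0.0/24 addresses are placeholders you generate and assign yourself:

```ini
# /etc/wireguard/wg0.conf on the primary
[Interface]
Address = 10.0.0.1/24
PrivateKey = <primary-private-key>
ListenPort = 51820

[Peer]
# CoolVDS DR instance
PublicKey = <dr-public-key>
AllowedIPs = 10.0.0.2/32
Endpoint = <dr-public-ip>:51820
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0`, then point replication at the 10.0.0.x addresses so port 3306 never leaves the tunnel.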
Phase 2: Asset Synchronization (The Lazy Way)
You don't need a fancy distributed file system like GlusterFS or Ceph for a web application with moderate write loads. They add complexity and latency. For 90% of use cases, rsync is still the king of reliability.
We use a simple script to sync `public_html` or `storage` folders. We use the `-a` flag for archive mode (preserves permissions) and `-z` for compression.
```bash
#!/bin/bash
# /opt/scripts/sync_dr.sh
set -euo pipefail

SOURCE_DIR="/var/www/html/uploads/"
DEST_IP="192.168.10.5"            # Your CoolVDS private IP
DEST_DIR="/var/www/html/uploads/"
SYNC_USER="dr_transfer"           # Dedicated transfer user, not root

# Sync only changes; --delete removes files on DR that were deleted on Primary
rsync -avz --delete \
  -e "ssh -i /home/admin/.ssh/id_ed25519" \
  "$SOURCE_DIR" "${SYNC_USER}@${DEST_IP}:${DEST_DIR}"
```
Add this to your crontab. Run it every 5 minutes. If you need real-time syncing, look into `lsyncd`, which triggers `rsync` on file system events.
```cron
*/5 * * * * /opt/scripts/sync_dr.sh >> /var/log/dr_sync.log 2>&1
```
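If the 5-minute cron window is too coarse, the `lsyncd` equivalent of the script above looks roughly like this sketch; paths and host mirror the rsync example, and `delay` batches filesystem events before each sync:

```lua
-- /etc/lsyncd/lsyncd.conf.lua
settings {
  logfile    = "/var/log/lsyncd.log",
  statusFile = "/var/log/lsyncd-status.log",
}

sync {
  default.rsyncssh,
  source    = "/var/www/html/uploads/",
  host      = "dr_transfer@192.168.10.5",
  targetdir = "/var/www/html/uploads/",
  delay     = 5,   -- batch events for 5 seconds before syncing
  rsync     = { archive = true, compress = true },
}
```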
Phase 3: Infrastructure as Code (Terraform)
The