Database Migration Survival Guide: Zero-Data-Loss Strategies for Nordic Infrastructure
Data gravity is real. The larger your dataset grows, the harder it is to move without tearing your infrastructure apart. I have seen seasoned engineers freeze when they realize their 500GB mysqldump is going to take 14 hours to restore because they underestimated the I/O limits of their target server. Migration is not a feature; it is a controlled disaster.
For those of us operating in the Nordics, the challenge is twofold. First, the physics: latency between Oslo and Frankfurt is decent, but latency between Oslo and a poorly routed server in the US is a catastrophe for synchronous replication. Second, the law: GDPR and the Schrems II ruling have made where your data lives just as critical as how it gets there.
If you are moving critical workloads to a Norwegian footprint, perhaps to get closer to the NIX (Norwegian Internet Exchange) or to satisfy Datatilsynet audit requirements, you need a plan that goes beyond scp.
The Bottleneck is Always Disk I/O
Before we touch a single config file, we need to address the hardware. Most failed migrations happen because the target environment cannot write data as fast as the source sends it. This creates a backlog, blows up replication lag, and eventually causes the migration to fail or the application to time out.
In a recent project migrating a high-traffic Magento cluster from a legacy dedicated server to a virtualized environment, we hit a wall. The target VPS claimed "SSD Storage," but sustained write speeds plummeted once the write cache filled up and other tenants on the node started competing for the same disks: the classic "noisy neighbor" effect.
Architect's Note: When evaluating a target host like CoolVDS, verify the virtualization technology. We use KVM (Kernel-based Virtual Machine) specifically because it allows for stricter isolation of resources. If your provider is using OpenVZ or heavy containerization, your database import speed is at the mercy of every other user on that node. For database workloads, NVMe backed by KVM is the only serious option in 2023.
Strategy 1: The Replication Shift (Zero-Downtime)
The "Maintenance Mode" window is a luxury of the past. If you can't afford four hours of downtime, you cannot use dump-and-restore. You must use replication. This involves setting up the new server as a replica (slave) of the current production server (master), letting them sync, and then promoting the slave.
1. MySQL/MariaDB Configuration
First, ensure your source server has binary logging enabled. Without this, point-in-time recovery and replication are impossible.
Edit your /etc/my.cnf (or /etc/mysql/mysql.conf.d/mysqld.cnf):
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 10
max_binlog_size = 100M
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
Crucial: The server-id must be unique. Your target CoolVDS instance should have server-id = 2.
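After restarting the source with these settings, it is worth a quick sanity check to confirm binary logging is actually active and to note the current coordinates:

SHOW VARIABLES LIKE 'log_bin';   -- should return ON
SHOW MASTER STATUS;              -- current binlog file and position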
2. The Initial Sync
Do not use mysqldump for the initial sync if your DB is over 10GB. It is too slow and locks tables. Use Percona's xtrabackup (for MySQL) or mariabackup. It copies physical data files, which is significantly faster.
# On the source server
mariabackup --backup --target-dir=/root/backup/ --user=root --password=YOUR_PASS
# Prepare the backup (apply logs)
mariabackup --prepare --target-dir=/root/backup/
Once prepared, use rsync to move this directory to your CoolVDS instance. Since we care about both security and speed, we pin SSH to a cipher that benefits from hardware AES acceleration on modern CPUs.
rsync -avzW --progress -e "ssh -c aes128-gcm@openssh.com" /root/backup/ root@target-ip:/var/lib/mysql/
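The target's MariaDB service must not be running while the files land, and ownership has to be fixed before it starts again. A minimal sketch, assuming a systemd unit named mariadb and the default /var/lib/mysql datadir:

# On the target: stop the service before the copy, fix ownership after it
systemctl stop mariadb
# ... run the rsync from the source ...
chown -R mysql:mysql /var/lib/mysql   # files arrive owned by root
systemctl start mariadb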
3. Establishing Replication
Once the data files are in place on the target, start the service and configure the replication link. Note that if you are crossing public networks (e.g., migrating from AWS to a CoolVDS server in Oslo), never expose port 3306 to the internet. Use an SSH tunnel.
ssh -N -L 3307:127.0.0.1:3306 user@source-server-ip -i ~/.ssh/id_rsa_migration
Now, on the target database, you connect to 127.0.0.1:3307, which tunnels securely to the source; the replication setup then looks like the sketch below.
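The binlog file and position come from the xtrabackup_binlog_info file that mariabackup wrote alongside the backup; the values below are placeholders. This sketch also assumes a replication user (here called repl) already exists on the source with the REPLICATION SLAVE privilege:

-- On the target MariaDB instance
CHANGE MASTER TO
  MASTER_HOST='127.0.0.1',
  MASTER_PORT=3307,
  MASTER_USER='repl',
  MASTER_PASSWORD='YOUR_REPL_PASS',
  MASTER_LOG_FILE='mysql-bin.000123',   -- from xtrabackup_binlog_info
  MASTER_LOG_POS=4;                     -- from xtrabackup_binlog_info
START SLAVE;
SHOW SLAVE STATUS\G                     -- watch Seconds_Behind_Master fall towards 0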
Strategy 2: PostgreSQL Logical Replication
If you are running PostgreSQL 10 or higher (and by May 2023, you should be on PG 14 or 15), Logical Replication is superior to streaming replication for migrations because it allows you to migrate between major versions and different operating systems with minimal friction.
Source Configuration (postgresql.conf):
wal_level = logical
max_replication_slots = 4
max_wal_senders = 4
You must restart Postgres after changing wal_level. This is the only downtime required on the source until the final cutover.
Create Publication (Source):
CREATE PUBLICATION migration_pub FOR ALL TABLES;
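The subscription below connects as a replicator user, which does not exist by default. A minimal sketch of creating it on the source and letting the target connect; the address 198.51.100.10 is a placeholder for your CoolVDS IP:

-- On the source
CREATE ROLE replicator WITH LOGIN REPLICATION PASSWORD 'secret';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO replicator;  -- needed for the initial table copy

# pg_hba.conf on the source, then reload with: SELECT pg_reload_conf();
host  production  replicator  198.51.100.10/32  scram-sha-256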
Create Subscription (Target CoolVDS Instance):
CREATE SUBSCRIPTION migration_sub
CONNECTION 'host=source-ip port=5432 user=replicator password=secret dbname=production'
PUBLICATION migration_pub;
Postgres will automatically snapshot the initial data and then stream changes. Monitor the status with:
SELECT * FROM pg_stat_subscription;
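On the source you can also measure how far the subscriber is behind by comparing the current WAL position with what has been replayed; a quick check:

-- On the source: lag per replication connection, in bytes
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;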
Network Latency: The Silent Killer
When replicating data to Norway from Central Europe, you are fighting the speed of light. Average latency from Amsterdam to Oslo is roughly 15-20ms. This is fine for asynchronous replication. However, if your application tries to write to the Master (EU) and read immediately from the Slave (NO) before the transaction logs arrive, you will get stale data.
Ensure your application is configured for consistency over availability during the migration phase. If you are using a strictly synchronous setup (like Galera Cluster), that 20ms latency will kill your write performance.
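Before committing to a synchronous topology, measure the real round-trip between the two datacentres rather than trusting a looking-glass page; replace target-ip with your CoolVDS address:

# Round-trip time and routing path from the source to the target
ping -c 20 target-ip
mtr --report --report-cycles 50 target-ip

If the path consistently sits above a few milliseconds, keep the replication asynchronous until after the cutover.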
Optimizing the Target Environment
Your target environment on CoolVDS needs to be tuned before the data arrives. A common mistake is leaving default config values on a high-spec server.
If you have deployed a 32GB RAM instance, your innodb_buffer_pool_size should not be the default 128MB. It should be 70-80% of available RAM.
# /etc/my.cnf optimization for a 32GB RAM instance
[mysqld]
innodb_buffer_pool_size = 24G
innodb_log_file_size = 2G
innodb_write_io_threads = 8
innodb_read_io_threads = 8
innodb_io_capacity = 2000 # Only set this high on NVMe storage like CoolVDS
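After restarting MariaDB with the new file in place, confirm the values actually applied; if the file sits in the wrong include directory, the 128MB default silently wins:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_io_capacity';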
The Legal Layer: Data Sovereignty
Technical execution is meaningless if legal shuts you down. Moving data into Norway involves specific compliance checks. Norway is EEA (not EU), but GDPR applies fully. If you are migrating personal data (PII) from a US-owned cloud provider to a Norwegian provider like CoolVDS, you are effectively solving a major Schrems II compliance headache. By localizing data in Oslo, you reduce the risk of extraterritorial data access.
Ensure that during the migration (rsync/ssh), no intermediate relays are used that might route traffic outside the EEA. Using direct peer-to-peer SSH tunnels ensures the data stream remains encrypted and geographically direct.
Final Cutover Checklist
The scary part. Switching the DNS.
- Lower TTL: 24 hours before migration, lower your DNS TTL to 300 seconds.
- Stop Writes: Enable "Read Only" mode on your application.
- Verify Catch-up: Ensure Seconds_Behind_Master (MySQL) is 0, or replication status is 'streaming' (Postgres).
- Promote Target: Stop replication on the CoolVDS instance and reset the slave status (see the sketch after this checklist).
- Point Traffic: Update DNS or switch your Load Balancer IP.
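For the MySQL/MariaDB path, the promotion itself is only a few statements; a minimal sketch, run on the CoolVDS instance once lag is zero and writes are stopped (the read_only line only applies if you had set the replica read-only):

-- On the target: detach it from the old master and open it for writes
STOP SLAVE;
RESET SLAVE ALL;               -- forget the old replication coordinates
SET GLOBAL read_only = OFF;

On PostgreSQL, DROP SUBSCRIPTION migration_sub; detaches the target in the same way.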
Migration is not about hope; it's about math and preparation. High-performance NVMe storage provides the headroom you need to catch up on replication lag quickly, but your configuration determines if the database survives the landing.
Don't let storage latency be the reason your migration fails. Spin up a test environment on CoolVDS today, run a benchmark against your current provider, and look at the numbers. They usually speak for themselves.