Zero-Downtime Database Migration Strategies: The Norwegian Architect’s Playbook
Let’s be honest. Typing service mysql stop on a production server is the most terrifying moment in a sysadmin's week. If you are running a high-traffic e-commerce site targeting customers in Oslo or Bergen, downtime isn't just an inconvenience; it is a breach of trust. And with the GDPR fully enforceable as of May this year (2018), losing data or exposing it during transit is a legal nightmare waiting to happen.
I have seen too many developers attempt the "Dump and Restore" method on a Friday night, only to watch the restore process choke on disk I/O for 14 hours. The site goes dark. The CEO starts calling. It is a mess.
There is a better way. We don't move dead data; we replicate live streams. This guide covers how to migrate critical databases with near-zero downtime, focusing on the specific infrastructure constraints we face here in the Nordics.
The Latency Trap: Why Location Matters
Before touching a config file, look at your network topology. If you are migrating from a legacy provider in Frankfurt to a local provider in Norway, you are fighting physics. A 30ms latency spike between the master and the slave can cause replication lag to balloon, potentially corrupting data if you switch over prematurely.
Pro Tip: Always test latency between your source and destination servers. If you are hosting on CoolVDS, you are likely peering directly at NIX (Norwegian Internet Exchange), which gives you single-digit millisecond latency to major Norwegian ISPs. Use mtr to verify the path before starting a transfer.
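A quick sketch of what that looks like (destination_server_ip is a placeholder for your target host):

# 100-cycle path report; watch the Avg and StDev columns for spikes on intermediate hops
mtr --report --report-cycles 100 destination_server_ip

# Quick round-trip sanity check as well
ping -c 20 destination_server_ip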
Strategy 1: The MySQL Replication Swing (Master-Slave)
For MySQL (5.7 or the new 8.0) and MariaDB, standard replication is your lifeline. We aren't just copying files; we are making the new server a slave of the old one until they are identical.
Step 1: The Initial Dump
Do not lock your tables. Use --single-transaction to ensure data consistency without halting reads/writes on InnoDB tables. We also compress the stream to save bandwidth.
mysqldump -u root -p --all-databases --master-data=2 --single-transaction --quick --routines --triggers | gzip > /mnt/backup/full_dump.sql.gz
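Because --master-data=2 writes the binary log coordinates into the dump header as a comment, you can fish them out later without unpacking the whole file. Something along these lines works:

# Prints the commented-out CHANGE MASTER TO line with the binlog file name and position
zgrep -m 1 "CHANGE MASTER TO" /mnt/backup/full_dump.sql.gz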
Step 2: Prepare the Target (CoolVDS)
Disk I/O is usually the bottleneck during the import. On a standard VPS, "noisy neighbors" (other users stealing CPU/Disk) can kill import speeds. This is why we insist on KVM virtualization at CoolVDS—you get dedicated resource isolation. To speed up the import, we temporarily relax the ACID compliance in my.cnf.
Warning: Revert these settings immediately after the migration is complete.
[mysqld]
# TEMPORARY SETTINGS FOR IMPORT SPEED
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 32M
innodb_write_io_threads = 8
innodb_doublewrite = 0
# Standard optimization for NVMe
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
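With those settings in place, stream the dump straight into the new server. A minimal sketch, assuming the compressed dump has already been copied to the target (scp or rsync over SSH keeps it encrypted in transit):

# Import without writing an uncompressed copy to disk
gunzip < /mnt/backup/full_dump.sql.gz | mysql -u root -p

# Optional: pipe through pv instead if you want a progress bar
# pv /mnt/backup/full_dump.sql.gz | gunzip | mysql -u root -p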
Step 3: Secure Transport via SSH Tunnel
Since the Datatilsynet (Norwegian Data Protection Authority) is watching, we cannot send raw SQL traffic over the public internet. Use an SSH tunnel. It adds overhead, but it keeps you GDPR compliant.
ssh -N -L 3307:127.0.0.1:3306 user@old_master_ip
Run that tunnel from the new server (the slave), so the old master never has to expose port 3306 to the public internet. Then point your slave replication at 127.0.0.1:3307.
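The statement itself looks roughly like this on the new server; the log file, position, and repl_user credentials are placeholders (the user must already exist on the old master with the REPLICATION SLAVE privilege):

-- Coordinates come from the dump header extracted earlier
CHANGE MASTER TO
  MASTER_HOST='127.0.0.1',
  MASTER_PORT=3307,
  MASTER_USER='repl_user',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000123',
  MASTER_LOG_POS=4567;
START SLAVE;

-- Watch the lag; Seconds_Behind_Master should trend towards 0
SHOW SLAVE STATUS\G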
Strategy 2: PostgreSQL WAL Shipping (The Robust Way)
If you are running PostgreSQL 9.6 or 10, pg_dump is fine for small DBs, but for terabyte-scale datasets, you need WAL (Write-Ahead Logging) shipping. This method is practically bulletproof.
- Base Backup: Use pg_basebackup to stream the binary data (see the sketch after this list).
- Continuous Archiving: Configure the source to push WAL files to the destination.
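Roughly, the two pieces look like this. The replicator role matches the recovery.conf below, while the paths, IPs, and the rsync-based archive_command are assumptions to adapt; the source's pg_hba.conf also needs a replication entry for that role.

# On the SOURCE (postgresql.conf): allow streaming and archive WAL segments
wal_level = replica
max_wal_senders = 5
wal_keep_segments = 64
archive_mode = on
archive_command = 'rsync -a %p postgres@standby_ip:/var/lib/postgresql/wal_archive/%f'

# On the DESTINATION: pull the base backup over a replication connection
pg_basebackup -h source_server_ip -U replicator -D /var/lib/postgresql/10/main -X stream -P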
Here is a snippet for your recovery.conf on the destination server (the standby):
standby_mode = 'on'
primary_conninfo = 'host=192.168.1.50 port=5432 user=replicator password=SecretPassword sslmode=verify-full'
trigger_file = '/tmp/postgresql.trigger.5432'
When you are ready to cut over, you simply touch /tmp/postgresql.trigger.5432. The standby server promotes itself to master instantly. No DNS propagation delays, no dropped connections.
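A quick sanity check after promotion costs nothing; pg_is_in_recovery() should flip from t to f once the trigger file is picked up:

# Promote the standby
touch /tmp/postgresql.trigger.5432

# Returns 'f' once the server has left recovery, i.e. it is now the master
psql -U postgres -c "SELECT pg_is_in_recovery();"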
The Hardware Reality Check: HDD vs. NVMe
I recently audited a client complaining about "slow database queries" after migrating to a budget host. The CPUs were idling. The RAM was free. The problem was iowait. They were on rotational HDD storage shared with 50 other tenants.
In 2018, running a database on spinning rust is negligence. The random read/write patterns of a busy database (especially Magento or WordPress) will crush a hard drive.
| Metric | Standard SATA SSD | CoolVDS NVMe |
|---|---|---|
| Read IOPS | ~5,000 | ~20,000+ |
| Latency | 0.5ms - 2ms | 0.05ms - 0.2ms |
| Throughput | 500 MB/s | 3,000 MB/s |
When you perform a migration, you are essentially reading the entire database and writing it again. NVMe storage cuts migration windows from hours to minutes.
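If you want numbers for your own disks before committing to a migration window, a short fio run approximates a database's random 4k workload better than any spec sheet (the job below writes a 1 GB scratch file, so point it at a throwaway path):

# 70/30 random read/write mix at 4k, similar to a busy OLTP workload
fio --name=dbtest --filename=/var/tmp/fio-test --size=1G --bs=4k \
    --rw=randrw --rwmixread=70 --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting

# Remove the scratch file afterwards
rm /var/tmp/fio-test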
The Cut-Over Checklist
Once replication is in sync (Seconds_Behind_Master = 0), you are ready. Do not wing it. Follow this sequence:
- Lower TTL: Drop your DNS TTL to 60 seconds at least 24 hours before the cut-over.
- Maintenance Mode: Place your application in read-only mode or show a maintenance page.
- Stop Writes: FLUSH TABLES WITH READ LOCK; on the old master (see the statement sketch after this checklist).
- Verify Sync: Check Master Log File and Position on both servers.
- Promote Slave: Stop replication on the new server and make it writable.
- Switch App Config: Point your application connection strings to the new IP.
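In terms of actual statements, the critical middle steps look roughly like this; treat it as a sketch rather than a script, since the read lock only holds while its session stays open:

-- 3. On the OLD master (keep this session open; the lock is released when it closes)
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

-- 4. On the NEW server: the replicated position should match the output above
SHOW SLAVE STATUS\G

-- 5. On the NEW server, once the positions match: promote it
STOP SLAVE;
RESET SLAVE ALL;
SET GLOBAL read_only = OFF;  -- only needed if you enabled read_only on the slave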
Conclusion
Database migration is 90% preparation and 10% execution. If you rely on luck, you will eventually fail. By using replication, securing your transport tunnels, and leveraging high-performance storage like the NVMe arrays we use at CoolVDS, you mitigate the risks of data corruption and extended downtime.
Your data is the lifeblood of your business. Don't host it on a platform that can't keep up with your I/O demands.
Ready to upgrade your infrastructure? Spin up a high-performance KVM instance in Oslo today.