Zero-Downtime Database Migration: A Battle-Plan for High-Traffic Norwegian Workloads
I still remember the silence in the room during a migration for a major Oslo-based retailer back in 2015. We were moving 400GB of transactional data from a legacy dedicated server to a cloud instance. The rsync stalled. The site was in maintenance mode. The CEO was pacing behind my chair. Every minute of downtime cost them roughly 15,000 NOK. We sweat through it, but I swore: never again.
In 2017, the concept of a "maintenance window" is becoming obsolete. Your users expect 24/7 availability, and with GDPR enforcement looming next year, data integrity is not just a technical requirement; it's a legal one monitored by Datatilsynet.
If you are still shutting down services to move data, you are doing it wrong. This guide covers the Master-Slave Replication strategy to migrate a live MySQL 5.7 or MariaDB 10.1 database from legacy SATA storage to a high-IOPS NVMe environment (like the ones we architect at CoolVDS) with near-zero downtime.
The Hardware Bottleneck: Why We Move
Most migrations I oversee today aren't about CPU limits; they are about I/O Wait. Traditional spinning disks or shared standard SSDs simply choke under heavy concurrent writes. When your iowait hits 30%, your database locks up, and PHP-FPM processes pile up until the server crashes.
Pro Tip: Before you even start a migration, check your target environment's disk performance. At CoolVDS, we enforce strict isolation on our KVM instances. If you aren't getting consistent IOPS, your replication lag will never catch up to the master, and you will never be able to cut over.
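A quick way to put numbers on both sides before you start: watch I/O wait on the source under real load, and benchmark random-write IOPS on the target. The commands below are a minimal sketch; the test file path assumes your datadir lives under /var/lib/mysql, and fio and iostat (sysstat) come from your distro's repositories, so adjust to your layout.
# On the source: the %iowait column shows how starved the disks are
iostat -x 5
# On the target: 16k random writes, matching InnoDB's default page size
fio --name=randwrite --filename=/var/lib/mysql/fio-test --rw=randwrite \
    --bs=16k --size=1G --direct=1 --iodepth=32 --runtime=60 --time_based
rm /var/lib/mysql/fio-test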
Step 1: Preparing the Master (The Source)
You cannot set up replication if your master isn't writing binary logs. Check your my.cnf (usually in /etc/mysql/). You need a unique server ID and binary logging enabled.
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
expire_logs_days = 10
max_binlog_size = 100M
If you change this, you must restart MySQL. Yes, this requires a momentary blip, but it is the last restart you will need.
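One more task while you are on the master: the slave will need a dedicated account with replication privileges. The snippet below is a sketch; the account name matches the one used later in this guide, but the slave's internal IP (10.0.0.6) and the password are placeholders you should swap for your own.
# Run on the Master; restrict the account to the slave's internal IP
mysql -u root -p <<'SQL'
CREATE USER 'replication_user'@'10.0.0.6' IDENTIFIED BY 'ComplexPassword2017!';
GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'10.0.0.6';
FLUSH PRIVILEGES;
SQL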
Step 2: The Non-Blocking Dump
Do not use mysqldump on a large dataset unless you enjoy table locks. For serious datasets, we use Percona XtraBackup. It grabs a physical copy of the data files without locking the database for writes.
Install XtraBackup and run the following on your Master:
xtrabackup --backup --target-dir=/data/backups/full_backup/ \
--user=root --password=YOUR_SECURE_PASSWORD
Once finished, you need to prepare the backup (apply the transaction logs) so it's ready to restore:
xtrabackup --prepare --target-dir=/data/backups/full_backup/
Inside that backup directory, look for a file called xtrabackup_binlog_info. It contains the coordinates you need for the slave. It looks like this:
mysql-bin.000003 481
Memorize these numbers. They are your lifeline.
Step 3: Staging the Slave (The Target)
Spin up your new instance. Ideally, this is a CoolVDS instance located in Oslo or a nearby Nordic datacenter to minimize latency to the NIX (Norwegian Internet Exchange). If your application server is in Norway and your database is in Frankfurt, the 30ms round-trip latency per query will kill your application performance regardless of how fast the disk is.
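If you want hard numbers before you commit to a location, measure the round trip from your application server to the candidate database host; the hostname below is a placeholder.
# Run from the application server; multiply the average RTT by your queries per request
ping -c 20 db01.example.no
# Or get a hop-by-hop view of where the latency lives
mtr --report --report-cycles 20 db01.example.no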
Transfer the backup data to the new server using rsync or scp. Then, stop MySQL on the slave, clear the datadir, and move the backup in.
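For the transfer itself, something like the rsync below does the job; the backup path and the new server's internal IP (10.0.0.6) are assumptions, so adjust them to your environment.
rsync -avz --progress /data/backups/full_backup/ root@10.0.0.6:/data/backups/full_backup/
Once the files are on the target, the restore sequence on the slave looks like this: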
service mysql stop
rm -rf /var/lib/mysql/*
xtrabackup --move-back --target-dir=/data/backups/full_backup/
chown -R mysql:mysql /var/lib/mysql
service mysql start
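Before wiring up replication, confirm the restored instance came up cleanly. The log path below is the Debian/Ubuntu default; adjust it if your distribution puts the error log elsewhere.
# Any InnoDB recovery problems will show up here
tail -n 30 /var/log/mysql/error.log
# Quick smoke test: the schemas from the master should be listed
mysql -u root -p -e "SHOW DATABASES;"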
Step 4: Configuring Replication
Now, tell the new server (Slave) to listen to the old server (Master). Log into the MySQL shell on the Slave:
CHANGE MASTER TO
MASTER_HOST='10.0.0.5', -- Internal IP of your old server
MASTER_USER='replication_user',
MASTER_PASSWORD='ComplexPassword2017!',
MASTER_LOG_FILE='mysql-bin.000003',
MASTER_LOG_POS=481;
START SLAVE;
Now, the moment of truth. Check the status:
SHOW SLAVE STATUS \G
You are looking for three specific lines:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0 (eventually)
If Seconds_Behind_Master is high, your new server's disk I/O is too slow to catch up with the incoming writes. This is where cheap VPS hosting fails. On a CoolVDS NVMe plan, we typically see this drop to 0 within minutes, even for write-heavy workloads.
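To keep an eye on the catch-up without retyping the query, a simple watch loop does the trick (a sketch; supply credentials on the command line or via ~/.my.cnf as appropriate):
watch -n 5 'mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"'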
Step 5: Optimization & The Cutover
Before you switch traffic, optimize the new server. Since it has more RAM (presumably), tune your InnoDB buffer pool. A good rule of thumb for a dedicated database server is 70-80% of total RAM.
# /etc/mysql/my.cnf
innodb_buffer_pool_size = 8G
innodb_flush_log_at_trx_commit = 1 # ACID compliance is mandatory for data safety
innodb_flush_method = O_DIRECT
The Cutover Procedure
- Lower TTL: Lower your DNS TTL to 300 seconds a day before the migration.
- Read-Only Mode: Set the old Master to read-only. This ensures no new data is written to the old drive that doesn't make it to the new one.
SET GLOBAL read_only = ON;
- Verify Sync: Ensure Seconds_Behind_Master is 0 on the slave (see the sketch after this list).
- Promote Slave: Stop the slave capabilities on the new server.
STOP SLAVE; RESET SLAVE ALL;
- Switch App Config: Point your application's database connection string to the new server IP.
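If you want to script the critical seconds of the cutover, the sketch below strings the read-only, verify and promote steps together. The IPs match the examples above, the MASTER_PW and SLAVE_PW variables are placeholders, and it assumes an admin account that can reach both servers over the internal network; treat it as a starting point and rehearse it on a staging pair first.
#!/bin/bash
# 1. Freeze writes on the old master
mysql -h 10.0.0.5 -u root -p"${MASTER_PW}" -e "SET GLOBAL read_only = ON;"
# 2. Wait until the slave has applied every last transaction
until [ "$(mysql -h 10.0.0.6 -u root -p"${SLAVE_PW}" -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ {print $2}')" = "0" ]; do
  sleep 1
done
# 3. Promote the slave to a standalone master
mysql -h 10.0.0.6 -u root -p"${SLAVE_PW}" -e "STOP SLAVE; RESET SLAVE ALL;"
echo "Cutover complete - point the application at 10.0.0.6"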
Total downtime for writes? Approximately 30 to 60 seconds. Total downtime for reads? Zero.
Data Sovereignty and The Road Ahead
We are operating in a changing legal landscape. With the EU's data protection reforms solidifying, keeping your customer data within recognized borders is becoming critical. Hosting on CoolVDS ensures your data resides on infrastructure governed by clear jurisdiction, unlike some budget providers hopping between offshore zones.
Don't let slow I/O or sloppy migration practices compromise your uptime. Test this replication strategy on a development instance first. If you need a sandbox that mimics production-grade NVMe throughput, deploy a CoolVDS instance. It takes about 55 seconds, which is less time than it takes to explain to your boss why the database is down.