Migrating Databases Without Losing Sleep (or Data)
It is 3:00 AM. You have a terminal open, a pot of coffee that went cold two hours ago, and a progress bar stuck at 98%. We have all been there. Database migration is the single most anxiety-inducing task in systems administration. Do it right, and nobody notices. Do it wrong, and you are explaining to the CEO why the online store is showing a 500 error during the morning rush.
In 2018, acceptable downtime is effectively zero. Whether you are running a high-traffic media site in Oslo or a SaaS platform serving all of Scandinavia, "maintenance windows" are a luxury of the past. If you are still using mysqldump > file.sql and SCPing it to a new server while the site is offline, stop. You are hurting your business.
This guide covers battle-tested strategies for migrating MySQL/MariaDB instances with minimal interruption, focusing on data integrity and performance. We will look at why underlying hardware—specifically the NVMe storage we standardized on at CoolVDS—makes or breaks these operations.
The Anatomy of a Migration Disaster
Most failed migrations stem from three issues: network latency, inconsistent data states, and disk I/O bottlenecks. I once watched a junior admin try to migrate a 500GB database over a public network without compression. The transfer took so long that the data was stale by the time it arrived. The delta was too large to sync.
To avoid this, we need a strategy that prioritizes consistency and speed.
Pre-Flight Check: The Environment
Before touching a single config file, verify your target environment. If you are moving to a CoolVDS instance, you are already ahead because we use KVM virtualization. Unlike OpenVZ (common in budget VPS providers), KVM gives you a dedicated kernel and strict resource isolation. This is critical for databases.
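Not sure what your current or target host actually runs? On a systemd distro like CentOS 7, systemd-detect-virt answers that in one command (a quick check, output depends on the hypervisor):
# Prints the detected hypervisor or container technology
systemd-detect-virt
# A KVM guest reports: kvm
# An OpenVZ container reports: openvz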
Pro Tip: Check your target disk scheduler. On CentOS 7, you generally want deadline or noop for virtualized SSD/NVMe storage to minimize latency.
cat /sys/block/vda/queue/scheduler
# If it says [cfq], change it.
echo noop > /sys/block/vda/queue/scheduler
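Note that the echo above only lasts until the next reboot. One way to make the setting stick on CentOS 7 (assuming the stock GRUB2 setup) is the elevator= kernel parameter:
# Append elevator=noop to GRUB_CMDLINE_LINUX in /etc/default/grub, then rebuild the config:
grub2-mkconfig -o /boot/grub2/grub.cfg
# The scheduler change applies from the next boot onward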
Strategy 1: The Master-Slave Switchover (Zero Downtime)
This is the gold standard. You set up the new server as a replication slave of the current production server (master). Once they are in sync, you promote the slave to master.
Step 1: Configure the Master
Edit your my.cnf (usually in /etc/mysql/ or /etc/). You need binary logging enabled. If this wasn't on already, a restart is required.
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
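One thing the config alone doesn't give you: the CHANGE MASTER TO statement further down assumes a dedicated replication account already exists on the master. If you don't have one, something along these lines works (the user name and password are placeholders; ideally restrict the host to the new server's IP rather than %):
-- Run on the current master
CREATE USER 'replication_user'@'%' IDENTIFIED BY 'secure_password';
GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%';
FLUSH PRIVILEGES;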
Step 2: Dump and Position
We use mysqldump with the --master-data=2 flag. This writes the binary log coordinates into the dump file as a comment, so you know exactly where the slave should start replicating. We also use --single-transaction to get a consistent InnoDB snapshot without locking tables.
mysqldump -u root -p \
--all-databases \
--master-data=2 \
--single-transaction \
--quick \
--routines \
--triggers | gzip > full_backup.sql.gz
Step 3: Secure Transfer via SSH
Don't just FTP this file. Use scp or rsync. Since we are dealing with Norwegian data privacy standards (and the GDPR enforcement that hit us in May), data in transit must be encrypted.
rsync -avz --progress -e ssh full_backup.sql.gz user@target-coolvds-ip:/home/user/
Step 4: Configure the Slave (Target)
On your new CoolVDS instance, import the data and configure replication.
# In my.cnf on the new server
[mysqld]
server-id = 2
# optimize for write speed during import (revert to 1 once the import is done)
innodb_flush_log_at_trx_commit = 2
# Import
zcat full_backup.sql.gz | mysql -u root -p
Once imported, grep the dump file to find the coordinates:
zcat full_backup.sql.gz | head -n 50 | grep "CHANGE MASTER TO"
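The line you are looking for is a commented-out CHANGE MASTER statement near the top of the dump; it will look roughly like this (your file name and position will differ):
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;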
Log into MySQL on the new server and start replication:
CHANGE MASTER TO
MASTER_HOST='192.0.2.10', -- Old Server IP
MASTER_USER='replication_user',
MASTER_PASSWORD='secure_password',
MASTER_LOG_FILE='mysql-bin.000001', -- from the grep above
MASTER_LOG_POS= 107;
START SLAVE;
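Give it a moment, then confirm that replication is actually flowing. The fields to watch:
SHOW SLAVE STATUS\G
-- Slave_IO_Running:  Yes
-- Slave_SQL_Running: Yes
-- Seconds_Behind_Master: should trend toward 0 as the new server catches up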
Strategy 2: Physical Backups with Percona XtraBackup
If your database is larger than 50GB, mysqldump is too slow. The restore process involves rebuilding indexes, which takes forever. Percona XtraBackup works at the filesystem level. It copies the actual InnoDB data files while the server runs.
Why use this? Because it supports hot backups. Your store stays online. On a CoolVDS NVMe instance, the restore speed is limited only by the network throughput, not disk IOPS.
# Create the backup
xtrabackup --backup --target-dir=/data/backups/
# Prepare the backup (apply transaction logs)
xtrabackup --prepare --target-dir=/data/backups/
Then, simply rsync the /data/backups/ directory to the new server's mysql data directory (usually /var/lib/mysql).
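As a sketch, assuming the default /var/lib/mysql data directory and a stopped, empty MySQL on the target (xtrabackup --copy-back is the documented restore path; a plain rsync over SSH works too as long as you fix ownership yourself):
# On the target: make sure mysqld is stopped and the data directory is empty
systemctl stop mysqld

# From the source server: push the prepared backup over SSH
rsync -avz --progress -e ssh /data/backups/ root@target-coolvds-ip:/var/lib/mysql/

# Back on the target: restore ownership and start MySQL
chown -R mysql:mysql /var/lib/mysql
systemctl start mysqld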
The Hardware Reality: NVMe vs. SSD
In 2018, many providers are still selling "SSD VPS" which are actually SATA SSDs in a crowded RAID array. The latency there can spike to 5-10ms under load. For a database, that is fatal.
CoolVDS uses NVMe (Non-Volatile Memory Express) over PCIe. The difference isn't just bandwidth; it's queue depth. NVMe can handle thousands of parallel command queues. When you are replaying binary logs to catch up a slave server, that I/O capability is the difference between a 5-minute sync and a 5-hour sync.
| Storage Type | Avg Latency | IOPS (Random Read) | Suitability for DB |
|---|---|---|---|
| HDD (7.2k) | 10-15ms | ~100 | Backup only |
| SATA SSD | 0.5-1ms | ~5,000 | Web Server |
| CoolVDS NVMe | <0.1ms | ~20,000+ | High-Load DB |
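Don't take the table on faith; fio gives you a quick random-read number for whatever you are running on (the figures above are ballpark, and your results will vary with instance size and concurrent load):
# 4K random reads with direct I/O — a rough proxy for database read patterns.
# Point --filename at the same filesystem as your data directory; delete the test file afterwards.
fio --name=randread --filename=/var/lib/mysql/fio-test --rw=randread \
    --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --size=1G \
    --runtime=60 --group_reporting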
Norwegian Context: Latency and GDPR
Hosting outside the country adds latency. If your application servers are in Oslo (peering at NIX) and your database is in Frankfurt, you are adding 20-30ms of round-trip time to every query. A Magento page load might run 50 queries. That's up to 1.5 seconds of pure network lag added to your TTFB (Time To First Byte).
Furthermore, Datatilsynet (The Norwegian Data Protection Authority) is strict. While GDPR unifies rules across Europe, keeping data within Norwegian borders simplifies compliance arguments regarding data sovereignty.
Final Cutover
Once your slave is showing Seconds_Behind_Master: 0, you are ready.
- Put your application in maintenance mode (or read-only).
- Verify the slave has processed all events (see the snippet after this list).
- Stop the slave: STOP SLAVE;
- Point your application config to the new CoolVDS IP.
- Start the application.
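A minimal sequence for the first three steps might look like this (the first statement runs on the old master, the rest on the new server; it assumes your application either respects read_only or is already in maintenance mode):
-- On the old master: block further writes (replication and SUPER accounts are exempt)
SET GLOBAL read_only = ON;

-- On the new server: confirm the last events have been applied
SHOW SLAVE STATUS\G          -- Seconds_Behind_Master must read 0

-- Then stop and clear the replication configuration
STOP SLAVE;
RESET SLAVE ALL;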
Total downtime? Usually less than 30 seconds. Your cache stays warm, your users stay happy, and you get to finish that coffee.
Database performance is 80% architecture and 20% hardware, but bad hardware ruins good architecture every time. Don't bottleneck your clean SQL with slow I/O. Spin up a CoolVDS NVMe instance today and see what your database is actually capable of.