Zero-Downtime Database Migration: A Survival Guide for Norwegian Systems

Migrations are where sysadmins are either forged into seniors or crushed under the weight of a corrupted ibdata1 file. I recall a project late last year involving a legacy commerce platform in Bergen. The client wanted to move 800GB of transactional data to a modern stack. Their initial plan? A raw SQL dump over a standard SCP connection. Estimated downtime: 14 hours. Unacceptable.

If you are shutting down your application to move a database in 2024, you are doing it wrong. Downtime is revenue lost. Worse, it’s trust lost.

This guide isn't about the theory of moving data. It is about the gritty reality of shifting bits across the wire: the latency quirks of Nordic infrastructure and the strict regulatory environment we operate in.

The Pre-Flight Check: Latency and IOPS

Before you even touch mysqldump or pg_dump, look at your infrastructure. In Norway, we are blessed with excellent connectivity via NIX (Norwegian Internet Exchange), but packet loss between ISPs can still kill a long-running rsync process.

First, test your route. If you are migrating to a CoolVDS instance, run a trace.

mtr --report --report-cycles=10 --no-dns <target_ip>

Look for jitter. If your standard deviation is high, a single-threaded transfer will stall. You need parallelism.

Second, verify the disk performance on the target. Database imports are write-heavy. They chew through IOPS. This is where the underlying hardware matters. We strictly use NVMe on CoolVDS because standard SSDs choke when you try to restore a 50GB table with 12 indexes. When the disk queue length spikes, your migration stalls.
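
One way to sanity-check the target before you commit to a window is a short random-write fio run. This is a minimal sketch; the job name, the test directory, and the sizes are placeholders to adapt to your setup.

# Random 4k writes with direct I/O roughly approximate an index-heavy restore.
# /var/lib/mysql-test is a placeholder directory on the target's data disk.
fio --name=migration-check --directory=/var/lib/mysql-test \
    --rw=randwrite --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=32 \
    --size=2G --runtime=60 --time_based --group_reporting

If the reported IOPS are low or the completion latency percentiles sit in double-digit milliseconds, fix the storage before you schedule the migration.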

Strategy 1: The Modified Dump (For datasets < 50GB)

For smaller datasets, you might get away with a dump, but do not use the defaults. The default behavior of most dump tools is single-threaded. That is a waste of modern CPU cycles.

PostgreSQL: Directory Format & Parallel Jobs

If you are running PostgreSQL 14, 15, or 16, stop using plain text dumps. Use the directory format (-Fd) combined with parallel jobs (-j).

# On the Source Server
pg_dump -h localhost -U postgres \
    -j 4 \
    -Fd \
    -f /tmp/migration_dump \
    production_db

# -j 4: Uses 4 cores to dump tables in parallel
# -Fd: Directory format, required for parallel dumps

Transfer this directory with rsync, which is restartable: if the connection drops halfway through a 50GB monolithic file, you start over; with a directory dump, you resume where you left off.
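
A rough sketch of the transfer-and-restore step, assuming the dump sits in /tmp/migration_dump as above; the user, hostname, and target path are placeholders.

# Resumable transfer: --partial keeps partially copied files, -z compresses on the wire
rsync -avz --partial --progress /tmp/migration_dump/ \
    deploy@new-db.example.com:/tmp/migration_dump/

# On the target: parallel restore of the directory-format dump
# (the database must already exist, e.g. created with createdb)
pg_restore -h localhost -U postgres -d production_db \
    -j 4 /tmp/migration_dump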

Strategy 2: Replication (The Zero-Downtime Standard)

For anything critical, we don't move the database; we clone it and sync it. This minimizes downtime to the seconds it takes to switch the DNS or the application config.

MySQL / MariaDB: GTID Replication

Forget the old binary log position method. It's fragile. Use Global Transaction Identifiers (GTID). It makes failover and synchronization far less headache-inducing.

  1. Snapshot: Use Percona XtraBackup to take a hot backup of the source without locking tables.
  2. Restore: Move the backup to your CoolVDS instance and restore.
  3. Sync: Configure the new server as a replica; the config and a connection sketch follow below.

# On the New CoolVDS Replica (my.cnf)
[mysqld]
server-id = 2
gtid_mode = ON
enforce_gtid_consistency = ON
log_bin = mysql-bin
read_only = 1  # Crucial safety net until switchover
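
Pointing the restored replica at the source is then a couple of statements. This is a sketch assuming MySQL-style GTID auto-positioning, with placeholder host, user, and password; MariaDB uses MASTER_USE_GTID=slave_pos instead, and MySQL 8.4+ spells these statements CHANGE REPLICATION SOURCE TO / START REPLICA.

-- On the new replica, after restoring the XtraBackup snapshot
CHANGE MASTER TO
    MASTER_HOST = 'old-db.example.com',  -- placeholder
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '********',
    MASTER_AUTO_POSITION = 1;            -- GTID-based positioning

START SLAVE;
SHOW SLAVE STATUS\G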

Once the Seconds_Behind_Master hits 0, you are ready. You stop the app, process the remaining queue, and point the app to the new IP. Total downtime: < 30 seconds.
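
The switchover itself is just as small. A sketch, assuming the application has already been stopped and the replica has fully caught up:

-- On the old master: block stray writes
SET GLOBAL read_only = 1;

-- On the new server: stop replicating and lift the safety net from the my.cnf above
STOP SLAVE;
SET GLOBAL read_only = 0;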

Pro Tip: Disable binary logging on the *target* server during the initial import if you are doing a logical restore. It saves disk I/O. Just remember to re-enable it if you plan to use that server as a master later.
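
If you take that route, the toggle is session-scoped, so run it in the same connection as the restore (it needs SUPER or an equivalent admin privilege):

-- In the import session only; other connections keep logging normally
SET sql_log_bin = 0;
-- ... run the logical restore ...
SET sql_log_bin = 1;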

Tuning for the Import

A production configuration is often terrible for a migration. Production configs prioritize data safety (ACID compliance). Migration configs prioritize raw write speed.

When you are restoring the data to your new CoolVDS instance, temporarily tweak your database configuration. This is dangerous for production, but essential for imports. Just revert it immediately after.

innodb_flush_log_at_trx_commit = 0 or 2
    Stops flushing to disk on every transaction. Massive speedup. Risk: data loss if power fails (acceptable during import).
sync_binlog = 0
    Disables synchronous binary log flushing.
maintenance_work_mem = 2GB+ (PostgreSQL)
    Speeds up index creation significantly.

Here is a snippet to apply in MySQL while the import runs:

-- Global only: this variable has no session scope
SET GLOBAL innodb_flush_log_at_trx_commit = 0;
-- Session scope: run these in the connection doing the import
SET SESSION foreign_key_checks = 0;
SET SESSION unique_checks = 0;

Warning: Turn these back on (set to 1) the moment your import is done. Running a production DB with innodb_flush_log_at_trx_commit = 0 is asking for corruption during a crash.
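
The PostgreSQL side of the same idea, applied per restore session; the 2GB figure matches the table above and assumes the instance has memory to spare:

-- Larger maintenance memory speeds up CREATE INDEX during the restore
SET maintenance_work_mem = '2GB';
-- Skip the synchronous WAL flush on commit; a crash mid-import just means re-running the import
SET synchronous_commit = off;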

The Regulatory Angle: GDPR and Datatilsynet

You cannot ignore the legal reality. In Norway, and the broader EEA, where your data resides physically matters. If you are migrating away from a US-hyperscaler, you are likely doing so to reduce exposure to the CLOUD Act or simply to ensure GDPR compliance.

When migrating, ensure intermediate storage (like S3 buckets used for dumps) is also in the correct region. I have seen teams dump a GDPR-sensitive database into a default us-east-1 bucket "just for an hour." That is a violation.

CoolVDS infrastructure is strictly European. We own our hardware. We don't resell capacity from massive providers who might silently replicate your data across the Atlantic. This simplifies your Article 30 records of processing activities.

Why Infrastructure Matters

You can script the perfect migration, but if the target disk latency is 15ms, you will fail. Database performance is effectively I/O performance.

We see migrations fail not because of SQL syntax, but because of "Steal Time" on oversubscribed host nodes. Cheap VPS providers oversell CPU cycles. During a heavy import, your database needs 100% of the CPU to rebuild indexes. If the host limits you, the import that should take 2 hours takes 12.
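
You can check this yourself: the last column of vmstat ("st") shows the percentage of time the hypervisor withheld CPU from your guest. On a node with honest resource guarantees it should sit at or near zero.

# Sample CPU statistics once per second, five times; watch the "st" column
vmstat 1 5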

On CoolVDS, we utilize KVM virtualization with strict resource guarantees. When you ask for 4 vCPUs for your Postgres worker processes, you get them.

Final Thoughts

Database migration is 90% planning and 10% execution. Test your backups. Verify your checksums. And for the love of uptime, do not perform a migration on Friday afternoon.
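
For a file-level dump, checksum verification needs nothing fancy; per-table verification is a job for tools like pt-table-checksum. A minimal sketch with placeholder paths:

# On the source: record checksums of every file in the dump directory
find /tmp/migration_dump -type f -exec sha256sum {} + > /tmp/dump.sha256

# On the target, after copying both the dump and the .sha256 file: verify every file matches
sha256sum -c /tmp/dump.sha256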

If you are planning a move and need a target environment that respects high-performance I/O requirements and Norwegian data sovereignty, it is time to look at your infrastructure.

Ready to test your import speeds? Deploy a high-performance NVMe instance on CoolVDS in under 60 seconds and see the difference raw metal performance makes.