Zero-Downtime Database Migration: A Survival Guide for High-Traffic Norwegian Systems

Moving a database is controlled disaster management. I have seen seasoned engineers weep when a 500GB ibdata1 file corrupts midway through a transfer because of a network hiccup between Frankfurt and Oslo. If you are reading this, you are likely planning to move off a legacy provider where "noisy neighbors" are stealing your CPU cycles, or you are repatriating data to Norway to satisfy Datatilsynet (The Norwegian Data Protection Authority).

Hope is not a strategy. Rsync is not enough.

In this guide, we are cutting through the vendor marketing fluff. We will look at how to execute a zero-downtime migration for high-load MySQL and PostgreSQL environments, specifically targeting the distinct infrastructure landscape we see here in the Nordics in late 2025.

The Infrastructure Reality Check

Before we touch a single config file, let's talk about the metal. Most failed migrations aren't due to bad SQL; they are due to I/O bottlenecks on the target machine during the restore phase. I recently audited a migration for a Fintech startup in Stavanger. They tried to restore a 2TB dataset onto a standard SATA SSD VPS. The restore took 48 hours. The business failed.

When you are writing terabytes of data, IOPS are your oxygen. This is why we use KVM virtualization on CoolVDS: NVMe storage is passed straight through to the guest kernel rather than hidden behind emulated devices. You need sustained random write speeds, not burstable credits that Amazon or Azure throttle after 30 minutes.

Pro Tip: Before migrating, benchmark your target disk IO. If you aren't seeing at least 50,000 IOPS on random writes, abort. You can use fio to verify this on your CoolVDS instance before starting the transfer.
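A minimal fio sanity check, assuming /data is the NVMe mount that will hold the database files (adjust the path and size to your own layout):

# 4k random writes, direct I/O, 60-second run
fio --name=randwrite --filename=/data/fio-test.tmp --rw=randwrite \
    --bs=4k --size=4G --iodepth=64 --numjobs=4 --direct=1 \
    --runtime=60 --time_based --group_reporting

Read the IOPS figure on the write line of the output; if the aggregate is below the 50,000 mark, fix the storage before you move a single row. Delete the test file afterwards.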

Phase 1: The Tunnel (Security & Latency)

Do not expose your database port to the public internet. Ever. Even with SSL. Beyond the security risk, the latency penalty of a public route through multiple hops will balloon your replication lag.

By August 2025, WireGuard is the de facto standard for this. It is leaner than OpenVPN and handles roaming IPs better. We establish a private tunnel between your legacy host and your new VPS Norway instance.

# On the Target Server (CoolVDS)
apt-get install wireguard
umask 077
wg genkey | tee privatekey | wg pubkey > publickey

Configure the interface to minimize MTU fragmentation, which ruins database packet streams:

[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <contents of the privatekey file generated above>
MTU = 1360

[Peer]
PublicKey = <public key of the legacy host>
AllowedIPs = 10.10.0.2/32
Endpoint = legacy-db.example.com:51820
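Once the mirror-image config exists on the legacy side, bring the tunnel up and check the path before any data moves (a sketch, assuming the file lives at /etc/wireguard/wg0.conf):

wg-quick up wg0
systemctl enable wg-quick@wg0   # so the tunnel survives a reboot mid-migration
wg show                         # confirm the handshake completed
ping -c 5 10.10.0.2             # round-trip time to the legacy host through the tunnel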

Phase 2: PostgreSQL Migration (Logical Replication)

Forget a plain pg_dump restore for the main cutover: to get a consistent snapshot you would have to freeze application writes for the entire transfer window. For PostgreSQL 15, 16, and 17, we use Logical Replication instead. It lets us copy the data while the application is still writing to the old database.

1. Configure the Source

Edit your postgresql.conf. You need wal_level set to logical. If you haven't done this, you will need a restart. This is the only downtime you should incur before the final switch.

wal_level = logical
max_replication_slots = 10
max_wal_senders = 10
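After the restart, confirm the setting actually took; the subscription cannot create its replication slot if the source is still running wal_level = replica. A quick check on the legacy source (the service name varies by distribution):

sudo systemctl restart postgresql
psql -U postgres -c "SHOW wal_level;"        # must return 'logical'
psql -U postgres -c "SHOW max_wal_senders;"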

2. The Schema Transfer

We migrate the schema without data first. This creates the skeleton.

pg_dump -h 10.10.0.2 -U postgres --schema-only --no-publications --no-subscriptions my_database | psql -h localhost -U postgres my_database

3. Tuning the Target for Ingestion

This is where amateurs fail. Default Postgres configs are too polite. On a CoolVDS instance with high-speed NVMe storage, you must tell Postgres it's allowed to eat disk I/O. Temporarily relax durability for the initial sync to speed it up by 300%.

# TARGET postgresql.conf (Temporary for Import)
fsync = off
synchronous_commit = off
full_page_writes = off
max_wal_size = 4GB
checkpoint_timeout = 30min
maintenance_work_mem = 2GB

Warning: Revert fsync and synchronous_commit to on immediately after the initial sync is complete, or you risk data corruption during a power loss.
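All three of these are reloadable, so reverting does not need another restart. One way to do it, as a sketch using ALTER SYSTEM (which writes the overrides to postgresql.auto.conf):

ALTER SYSTEM SET fsync = on;
ALTER SYSTEM SET synchronous_commit = on;
ALTER SYSTEM SET full_page_writes = on;
SELECT pg_reload_conf();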

4. Start Subscription

On the legacy source:

CREATE PUBLICATION migration_pub FOR ALL TABLES;
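The subscription below connects as a replicator user that does not exist by default. A sketch of creating it on the source; the exact grants depend on your schemas, and the target's tunnel IP has to be allowed in pg_hba.conf:

CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'SECRET';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO replicator;
-- pg_hba.conf on the source, then reload:
-- host  my_database  replicator  10.10.0.1/32  scram-sha-256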

On the new CoolVDS target:

CREATE SUBSCRIPTION migration_sub CONNECTION 'host=10.10.0.2 port=5432 user=replicator password=SECRET dbname=my_database' PUBLICATION migration_pub;

Monitor the lag. Since CoolVDS is peered directly at NIX (Norwegian Internet Exchange), if your source is also in Norway, latency should be sub-millisecond.
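What "monitor the lag" means in practice: compare the source's current WAL position with what the subscriber has confirmed, roughly like this:

-- On the legacy source
SELECT pg_current_wal_lsn();

-- On the CoolVDS target
SELECT subname, received_lsn, latest_end_lsn, latest_end_time
FROM pg_stat_subscription;

When latest_end_lsn stops trailing the source's pg_current_wal_lsn(), the initial copy is finished and you are in steady-state streaming.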

Phase 3: MySQL / MariaDB Strategies

For MySQL 8.0 or 8.4 LTS, we rely on GTID-based replication. The old method of tracking binlog filenames (mysql-bin.000456, position 892) is prone to human error.
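GTID mode needs to be enabled on both servers before the backup is taken. A minimal my.cnf fragment; the server_id value is illustrative and simply has to differ between the two machines:

[mysqld]
server_id                = 101        # unique per server
gtid_mode                = ON
enforce_gtid_consistency = ON
log_bin                  = mysql-bin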

Use Percona XtraBackup. It creates a hot backup without locking the database.

1. Stream Backup Directly to Target

Don't save to disk on the source if you are low on space. Stream it over SSH (or WireGuard) directly to the CoolVDS NVMe drive. This saves I/O.

xtrabackup --backup --stream=xbstream --extra-lsndir=/tmp --target-dir=./ | ssh user@10.10.0.1 "xbstream -x -C /var/lib/mysql/data/"

2. Prepare and Optimize

Once the data is on the CoolVDS side, we prepare the backup. Here is where the NVMe storage shines. The "prepare" phase is random-read heavy.

xtrabackup --prepare --target-dir=/var/lib/mysql/data/

Before starting MySQL on the new server, optimize your my.cnf for the hardware. A standard VPS config will choke on high-traffic sites.

Variable                          Value                    Why?
innodb_buffer_pool_size           70-80% of RAM            Keeps the working set in memory. Vital.
innodb_io_capacity                15000                    Tells MySQL you have NVMe. The default (200) assumes spinning disks.
innodb_flush_log_at_trx_commit    1 (strict) or 2 (fast)   Use 2 during the sync, 1 for production safety.
innodb_log_file_size              2GB                      Reduces checkpoint frequency, improving write throughput.
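As a concrete my.cnf sketch, sized for an illustrative 32GB instance; scale the buffer pool to your own RAM, and note that on MySQL 8.0.30+ innodb_redo_log_capacity supersedes innodb_log_file_size:

[mysqld]
innodb_buffer_pool_size        = 24G     # roughly 75% of 32GB
innodb_io_capacity             = 15000   # NVMe, not the HDD-era default of 200
innodb_flush_log_at_trx_commit = 2       # flip back to 1 at cutover
innodb_log_file_size           = 2G

With GTID auto-positioning, catching the new server up to the live source is then a single statement. The replicator account is an assumption here (it needs the REPLICATION SLAVE privilege on the source), and you may need to set gtid_purged from xtrabackup_binlog_info first if the restore did not:

CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = '10.10.0.2',
  SOURCE_USER = 'replicator',
  SOURCE_PASSWORD = 'SECRET',
  SOURCE_AUTO_POSITION = 1;
START REPLICA;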

Phase 4: The Cutover (The Kill Switch)

You have two databases in sync. The application is still writing to the old one. Now we switch.

  1. Lower DNS TTL: Set your A-record TTL to 60 seconds at least 24 hours before the cutover, so the old value has expired from resolver caches.
  2. Stop Writes: Enable maintenance mode in your app, or set the old DB to read-only: SET GLOBAL read_only = ON;
  3. Verify Sync: Confirm the replica has fully caught up: the LSN on PostgreSQL, the GTID sets on MySQL (see the checks after this list).
  4. Switch Traffic: Update your app config to point to the new CoolVDS IP.
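
The sync checks referenced in step 3, as a sketch for each engine:

-- PostgreSQL: position on the source vs. what the subscriber has applied
SELECT pg_current_wal_lsn();                               -- on the source
SELECT subname, latest_end_lsn FROM pg_stat_subscription;  -- on the target

-- MySQL: lag must be 0 and the GTID sets identical
SHOW REPLICA STATUS\G   -- check Seconds_Behind_Source, Retrieved_Gtid_Set, Executed_Gtid_Set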

If you are running a high-compliance workload under GDPR/Schrems II, verify that your backups are also strictly located within the EEA. CoolVDS infrastructure ensures data sovereignty within Norway, removing the legal headache of explaining why your database backup is sitting in a bucket in Virginia.

Why Infrastructure Dictates Success

You can script the perfect migration, but if the underlying host steals your CPU cycles (Steal Time > 0.1%), your database will stutter. Latency jitter drives up lock contention and transaction timeouts.

We built CoolVDS to solve exactly this problem for systems administrators who are tired of "best effort" performance. By strictly limiting allocation and using enterprise-grade NVMe arrays, we ensure that when you run a SELECT with a JOIN across 5 million rows, the hardware isn't the bottleneck. You are.

Don't let slow I/O kill your SEO or your uptime stats. If you are planning a migration this quarter, spin up a test instance. Run your benchmarks. See the difference raw, unthrottled performance makes.

Deploy your NVMe Gen4 Instance on CoolVDS today and migrate with confidence.