Zero-Downtime Database Migration: Moving Heavy Workloads to Oslo Without Losing Sleep
I have seen grown men cry over failed database migrations. I have seen startups incinerate three months of revenue because someone thought a mysqldump on a 500GB live database was a solid strategy during peak hours. It isn't. If you are reading this, you are likely planning to move a critical workload—perhaps escaping a hyperscaler's egress fees or adhering to stricter data residency requirements by moving to Norway. The goal is simple: move the data, keep the lights on, and don't get fired.
In the Nordic hosting market, latency is the silent killer. Moving a database from Frankfurt or Amsterdam to Oslo requires precise choreography. You aren't just copying files; you are managing consistency across a WAN gap that, while small (typically 15-25ms), is enough to wreck a synchronous commit strategy if you aren't careful. This guide covers the Replication-Switchover Method. It is the only way I trust to move production data without a maintenance window that angers stakeholders.
1. The Infrastructure Reality Check
Before touching a config file, look at your destination hardware. Database I/O is brutal during the "catch-up" phase of replication. If your destination VPS throttles IOPS, your replication lag will never reach zero. You will be perpetually 500 seconds behind master, and you will never cut over.
Pro Tip: Do not guess your I/O capacity. Run fio on your destination CoolVDS instance before starting. If you aren't seeing sustained random write speeds suitable for your workload (on our NVMe tiers, this is standard), stop. You cannot migrate a high-write DB to slow storage.
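Something along these lines gives you a baseline for 4K random writes (a sketch only; the target path, size, and runtime are assumptions you should adjust to your own environment):
# Hypothetical benchmark file on the future data volume; point --filename at your actual mount
fio --name=pg-migration-check --filename=/var/lib/postgresql/fio-test \
  --rw=randwrite --bs=4k --iodepth=32 --direct=1 --ioengine=libaio \
  --size=4G --runtime=60 --time_based --group_reporting
rm /var/lib/postgresql/fio-test
Watch the IOPS figure and the completion latency percentiles, not just the raw MB/s.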
2. The Strategy: Asynchronous Replication
We will treat the migration as a disaster recovery exercise. We set up the new server in Oslo as a replica of your current master. Once they are in sync, we promote the replica.
Scenario: Migrating PostgreSQL 16/17
PostgreSQL's streaming replication is robust. We will use pg_basebackup to bootstrap the replica.
Step A: Configure the Source (Master)
Edit your postgresql.conf on the source. You need wal_level set to replica and enough max_wal_senders. (Physical streaming replication requires the same major version on both ends; if you are also jumping major versions, use logical replication with wal_level = logical instead.)
# /etc/postgresql/17/main/postgresql.conf
wal_level = replica
max_wal_senders = 10
max_replication_slots = 10
# Vital for WAN replication security
password_encryption = scram-sha-256
Update pg_hba.conf to allow the CoolVDS IP address. Be specific. Do not use 0.0.0.0/0.
# /etc/postgresql/17/main/pg_hba.conf
host replication replicator 192.0.2.15/32 scram-sha-256
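If the replicator role does not exist yet, create it on the source and reload the configuration. A minimal sketch (role name and password are placeholders):
# Run on the SOURCE as the postgres OS user
psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'YourStrongPassword';"
# pg_hba.conf changes only need a reload; a changed wal_level still requires a restart
psql -c "SELECT pg_reload_conf();"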
Step B: Bootstrap the Replica (Destination)
On your CoolVDS instance (let's assume it's running AlmaLinux 9 or Debian 12; the paths below follow Debian's layout, so adjust them for the PGDG packages on AlmaLinux), stop the PostgreSQL service and clear the data directory. Then pull the base backup.
systemctl stop postgresql
rm -rf /var/lib/postgresql/17/main/*
# Run as postgres user
pg_basebackup -h source-db.example.com -D /var/lib/postgresql/17/main/ \
-U replicator -P -v -R -X stream -C -S migration_slot
The -R flag is the magic piece: it writes the standby.signal file and drops the connection settings into postgresql.auto.conf automatically. The -C -S migration_slot pair creates a replication slot on the master, ensuring the master doesn't delete WAL segments the replica hasn't consumed yet. This is critical when moving data over the internet, where network jitter happens. Once the base backup completes, start the service again (systemctl start postgresql) and the instance comes up as a streaming standby.
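A quick sanity check on the source confirms the slot exists and turns active once the replica connects (sketch):
psql -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"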
3. Tuning for the WAN Gap
The path to Norway is stable, but physics is physics. To optimize the replication stream over a WAN, standard TCP settings might be insufficient. On your CoolVDS destination, tweak the sysctl settings to handle the BDP (Bandwidth-Delay Product).
# /etc/sysctl.d/99-db-migration.conf
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = bbr
# fq pacing pairs well with BBR (and was required before kernel 4.13)
net.core.default_qdisc = fq
Apply with sysctl --system (plain sysctl -p only reads /etc/sysctl.conf, not files in /etc/sysctl.d/). Using BBR congestion control (available in Linux kernels 4.9+) significantly helps with throughput stability over geographic distances.
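To verify the destination kernel actually offers and uses BBR, a quick check:
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control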
4. The Cutover (The Scary Part)
Once pg_stat_replication on the master shows the state as streaming and replay_lag is near zero, you are ready. This is where Datatilsynet (The Norwegian Data Protection Authority) becomes relevant. By finalizing this move, you are ensuring data sovereignty within the EEA/Norway jurisdiction.
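To check that condition, a query along these lines on the source shows how the master sees the Oslo replica (sketch; column names per PostgreSQL 10 and later):
psql -x -c "SELECT client_addr, state, sent_lsn, replay_lsn, replay_lag FROM pg_stat_replication;"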
The Execution Plan:
1. Stop Writes: Put your application in maintenance mode. Or, if you are brave, set the source DB to read-only.
2. Flush WAL: Ensure the replica has caught up to the very last transaction (see the LSN check below the list).
3. Promote: On the CoolVDS instance:
/usr/lib/postgresql/17/bin/pg_ctl promote -D /var/lib/postgresql/17/main/
4. Switch DNS/Connection Strings: Point your app to the new IP.
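For step 2, one way to confirm the replica has fully caught up (run only after writes have stopped on the source):
# On the SOURCE: current WAL write position
psql -c "SELECT pg_current_wal_lsn();"
# On the REPLICA: last replayed position; it should match the value above
psql -c "SELECT pg_last_wal_replay_lsn();"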
5. MySQL / MariaDB Specifics
If you are running MySQL 8.0 or 8.4 LTS, the logic is similar but the commands differ. Do not rely on binlog file positions; they are fragile. Use GTIDs (Global Transaction Identifiers) with auto-positioning, and seed the replica from a consistent snapshot (mysqldump --single-transaction or the clone plugin) before pointing it at the source.
-- On Source: enforce_gtid_consistency must be ON before gtid_mode can reach ON,
-- and gtid_mode only moves one step at a time on a running server
SET GLOBAL enforce_gtid_consistency = ON;
SET GLOBAL gtid_mode = OFF_PERMISSIVE;
SET GLOBAL gtid_mode = ON_PERMISSIVE;
SET GLOBAL gtid_mode = ON;
-- On CoolVDS Replica
CHANGE REPLICATION SOURCE TO
SOURCE_HOST='source-db.example.com',
SOURCE_USER='replicator',
SOURCE_PASSWORD='YourStrongPassword',
SOURCE_AUTO_POSITION=1,
SOURCE_SSL=1;
START REPLICA;
Always enforce SOURCE_SSL=1. Moving unencrypted database traffic across public networks is professional negligence.
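Before you cut over, watch the replica catch up. A quick check on the CoolVDS side (field names as used by MySQL 8.0.22 and later):
mysql -e "SHOW REPLICA STATUS\G" | grep -E 'Replica_IO_Running|Replica_SQL_Running|Seconds_Behind_Source'
Both threads should report Yes, and Seconds_Behind_Source should trend to zero before you flip the application over.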
Why Local Hosting Matters for DBs
Beyond the technical steps, there is a compliance architecture argument. Norway is not in the EU, but it is in the EEA, so GDPR applies fully. By hosting on CoolVDS in Oslo, you also benefit from Norway's strict privacy culture and green energy profile (largely hydroelectric). But strictly technical? It's about latency.
| User Location | Latency to Frankfurt (AWS/Google) | Latency to Oslo (CoolVDS) |
|---|---|---|
| Oslo | ~25ms | < 2ms |
| Bergen | ~30ms | ~6ms |
| Trondheim | ~35ms | ~9ms |
For a standard web app, 25ms is negligible. For a database-heavy application doing 50 sequential queries per page load, that 25ms round-trip accumulates to over 1 second of wait time. That is unacceptable user experience.
Final Thoughts
Migrations fail when people underestimate I/O and overestimate their network stability. By using replication slots and validating your storage throughput beforehand, you remove luck from the equation. We built CoolVDS on KVM with high-performance NVMe specifically so I wouldn't have to worry about iowait during these exact scenarios.
Don't let slow I/O kill your SEO or your uptime. If you are planning a migration, verify your destination performance first. Deploy a test instance on CoolVDS in 55 seconds and run your benchmarks.