
Stop Sharing Your Kernel: Why KVM is the Only Serious Choice for Production Hosting in 2009


It’s 3:00 AM. Your monitoring system just paged you. The load average on your VPS has spiked to 25.00, yet top shows the CPU sitting nearly idle. You can't write to the disk. SSH keystrokes lag by three seconds.

Congratulations, you are the victim of a "noisy neighbor" on an oversold OpenVZ node. Someone else on the same physical box is compiling a kernel or getting front-paged on Slashdot, and because you share the host kernel, their I/O wait is now your I/O wait.

In the Nordic hosting market, efficiency is usually the priority. But efficiency often masks overselling. At CoolVDS, we took a hard look at the virtualization landscape in early 2009—pitting the established Xen and the ubiquitous OpenVZ against the newcomer, KVM (Kernel-based Virtual Machine). The conclusion was obvious.

The Architecture of Isolation

Most budget VPS providers in Europe love OpenVZ. It’s container-based virtualization. It’s lightweight, sure, but it relies on a single shared kernel. If the host node runs a 2.6.18 kernel, so do you. You cannot load your own kernel modules. You cannot tune your TCP stack effectively for high-traffic workloads.

KVM is different. With KVM, the Linux kernel acts as the hypervisor. Each guest has its own private virtualized hardware: network card, disk controller, graphics adapter, and most importantly, its own kernel.
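Skeptical? You can check for the hardware support KVM relies on yourself. On any Linux box, the CPU flag list in /proc/cpuinfo reveals whether Intel VT-x (vmx) or AMD-V (svm) is present:

```shell
# Count CPU flags advertising Intel VT-x (vmx) or AMD-V (svm).
# KVM needs one of these on the host; 0 means no hardware acceleration.
grep -cE 'vmx|svm' /proc/cpuinfo
```

Note that inside a guest these flags are normally hidden, so run this on hardware you control, not on a rented VPS.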

Why does this matter for your MySQL database?

Let's look at a real-world scenario. We recently migrated a high-traffic Magento setup from a generic US-based OpenVZ host to a CoolVDS KVM instance in Oslo. The database was locking up constantly.

On the old host, we couldn't touch the disk scheduler. On KVM, we immediately changed the guest's I/O scheduler from cfq to deadline, which is far superior for database workloads.

# echo deadline > /sys/block/sda/queue/scheduler
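That echo only lasts until the next reboot. One way to make deadline the default is a kernel boot parameter; a sketch for GRUB legacy (the bootloader most current distros ship), with an illustrative kernel version:

```shell
# /boot/grub/menu.lst (GRUB legacy) -- kernel version and root device are examples
kernel /vmlinuz-2.6.26-2-amd64 root=/dev/sda1 ro elevator=deadline
```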

We then tweaked the /etc/my.cnf to utilize the dedicated RAM fully, without fear of the host node reclaiming it (a common issue with OpenVZ "burstable" RAM limits):

[mysqld]
innodb_buffer_pool_size = 1G
innodb_flush_log_at_trx_commit = 2
query_cache_size = 64M

(Note: innodb_flush_log_at_trx_commit = 2 trades strict durability for speed; a crash can lose up to a second of transactions, which is an acceptable bargain for most web catalogs.)

The result? Page load times dropped from 4.2 seconds to 0.8 seconds. Stability isn't just about uptime; it's about predictable performance.

The Storage Bottleneck: SAS vs. SSD

While Intel has started making waves with its X25-E Extreme SSDs, they remain prohibitively expensive for mass storage. Meanwhile, standard SATA drives in a RAID array simply cannot handle the random I/O of 20+ virtual machines.

If you are running a production application, ask your provider what spins underneath. If they say "SATA," run away. You need 15k RPM SAS drives in Hardware RAID-10. This is the CoolVDS standard. The seek times on SAS are drastically lower than SATA, ensuring that your iowait doesn't kill your application responsiveness.

Pro Tip: Check your disk latency with ioping if you have it, or a simple dd test (carefully!) to measure write throughput. If you aren't seeing at least 80MB/s on a sequential write, your provider is overloading the spindle.
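For the dd variant, a minimal sketch (the file path and sizes are illustrative; always target a scratch file on a filesystem, never a raw device):

```shell
# Write 256 MB of zeroes and force it to disk before dd reports its rate.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest.bin   # clean up the scratch file
```

The conv=fdatasync flag (GNU dd) makes dd include the final flush in its timing, so the reported MB/s reflects the platters rather than the page cache.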

Data Privacy: The "Patriot Act" Risk

As a Systems Architect, I worry about more than just packets. I worry about jurisdiction. If you host in the US, your data is subject to the USA PATRIOT Act, under which federal agencies can compel access to data, in some cases without a conventional warrant. For Norwegian businesses, or anyone handling sensitive European customer data, this is a massive liability.

Under the Norwegian Personopplysningsloven (Personal Data Act) and the EU Data Protection Directive, your customers have rights that don't exist on US soil. Even with the "Safe Harbor" framework, relying on US hosting is a legal gray area that many CIOs are starting to flag.

By keeping your data in our Oslo facility, you benefit from:

  • Legal Sovereignty: Protection under strict Norwegian privacy laws.
  • Low Latency: Sub-5ms pings to anywhere in Norway via NIX (Norwegian Internet Exchange).
  • Green Power: Our datacenter runs almost entirely on hydroelectric power.

Comparison: Virtualization Technologies

Feature            OpenVZ (Containers)      Xen (Paravirtualization)   KVM (CoolVDS)
Kernel             Shared with host         Modified guest kernel      Fully independent
Performance        Fast, but inconsistent   Stable                     Near native
Overselling risk   Extreme                  Moderate                   Low (hard RAM limits)
OS support         Linux only               Linux/Windows              Linux/Windows/BSD

Final Thoughts

Virtualization is maturing fast. While OpenVZ was great for cheap hobby boxes in 2006, 2009 demands true isolation. KVM (merged into the Linux kernel just two years ago) has rapidly become the gold standard for those of us who need to sleep at night.

Don't let a shared kernel be your single point of failure. If you need a server that acts like a server—where your RAM is yours and your disk I/O screams—it’s time to upgrade.

Ready to compile your own kernel modules? Deploy a high-performance KVM instance in Oslo with CoolVDS today.