Scaling Past Apache: The Definitive Nginx Reverse Proxy Guide for High-Traffic Norwegian Sites

Let’s be honest: if you are still serving static assets directly through Apache with mod_php in 2012, you are setting money on fire. I recently watched a mid-sized Norwegian e-commerce store crash during a modest marketing campaign. Their 16GB RAM server capitulated under just 400 concurrent users. Why? Because every single image request was spawning a heavy Apache child process consuming 40MB of RAM.

The solution isn't to buy more RAM. The solution is architecture: specifically, placing Nginx in front of your application stack.

In this guide, I will show you the exact reverse proxy configuration we use at CoolVDS to handle thousands of requests per second on standard KVM instances. We aren't just talking about theory; these are the configs keeping sites alive when the traffic spikes hit.

The Architecture: Nginx as the Shield

The concept is simple but powerful. Nginx acts as the "front door" (Reverse Proxy). It handles the messy work: SSL termination, gzip compression, and serving static files (JPG, CSS, JS). It only passes the dynamic requests (PHP, Python, Ruby) to the backend server (Apache/PHP-FPM) sitting on localhost.

This offloads the heavy lifting from your application server, allowing it to focus strictly on code execution.

Prerequisites

  • A CoolVDS KVM VPS (Ubuntu 12.04 LTS recommended).
  • Root access via SSH.
  • Nginx 1.2.x (The stable branch as of mid-2012).

Step 1: The Base Configuration

First, install the latest stable Nginx. Do not rely on the default repositories if they are outdated; add the PPA if necessary, but for 12.04, the default is acceptable for most.

apt-get update && apt-get install nginx

Now, let's strip down /etc/nginx/nginx.conf to the essentials. We need to tweak the worker processes to match your CPU cores. If you are on a CoolVDS dual-core instance, set this to 2.

user www-data;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    # multi_accept on; 
    use epoll;
}
Pro Tip: Always enable use epoll; on Linux. It is significantly more efficient than select or poll when juggling thousands of simultaneous connections.

Step 2: The Reverse Proxy Block

This is where the magic happens. We will configure a virtual host file in /etc/nginx/sites-available/default. We want Nginx to listen on port 80 and forward requests to Apache running on port 8080.

server {
    listen 80;
    server_name example.no www.example.no;

    # Serve static files directly - bypassing the backend completely
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log        off;
        log_not_found     off;
        expires           30d;
        root              /var/www/example.no/public_html;
    }

    # Pass everything else to the backend
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # Timeouts for slow backends
        proxy_connect_timeout 60;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
    }
}
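One step this config assumes but does not show: Apache has to vacate port 80 so Nginx can bind to it. A sketch of the changes, assuming Ubuntu 12.04's default Apache layout (paths may differ on your install):

```apache
# /etc/apache2/ports.conf -- bind Apache to localhost:8080 only,
# so it is unreachable from the outside except through Nginx
NameVirtualHost 127.0.0.1:8080
Listen 127.0.0.1:8080
```

Update each vhost's opening tag to `<VirtualHost 127.0.0.1:8080>` to match, restart Apache, and only then reload Nginx. Binding to 127.0.0.1 rather than all interfaces also means nobody can bypass your proxy by hitting port 8080 directly.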

Understanding the Headers

The proxy_set_header directives are critical. Without them, your backend application will think every request is coming from 127.0.0.1. If you are running a CMS like WordPress or Magento, this breaks IP-based security and logging.
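On the Apache side, the usual fix is mod_rpaf (packaged as libapache2-mod-rpaf on Ubuntu), which rewrites REMOTE_ADDR from the forwarded header so your application sees the real client IP. A sketch, assuming the package's default conf file location:

```apache
# /etc/apache2/mods-available/rpaf.conf
# Trust forwarded-IP headers only from our local Nginx proxy
RPAFenable On
RPAFsethostname On
RPAFproxy_ips 127.0.0.1
```

Enable it with a2enmod rpaf and restart Apache. Only ever list your own proxy in RPAFproxy_ips; trusting forwarded headers from arbitrary sources lets anyone spoof their IP.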

Step 3: Buffer Optimization (The Silent Killer)

Default Nginx buffer settings are often too small for heavy POST requests (like uploading images to a CMS). If a request body or backend response exceeds the in-memory buffers, Nginx spools it to a temporary file. Disk I/O, even on our high-speed SSD arrays, is slower than RAM.

Add this inside your http block or specific server block:

client_max_body_size 10M;
client_body_buffer_size 128k;

proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
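While you are in the http block: the intro promised gzip compression at the edge, so Nginx should handle that too. The values below are common starting points rather than gospel; tune gzip_comp_level against your available CPU headroom.

```nginx
gzip             on;
gzip_min_length  256;
gzip_comp_level  5;
gzip_types       text/plain text/css application/json application/x-javascript
                 text/xml application/xml text/javascript;
gzip_disable     "msie6";
```

Note that text/html is always compressed when gzip is on, so it does not need to appear in gzip_types.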

Why Hardware Still Matters

You can tune Nginx until you are blue in the face, but if your underlying disk I/O is trash, your database (MySQL) will bottleneck the entire chain. This is a common issue with budget VPS providers who oversell their storage on slow SATA drives.

At CoolVDS, we refuse to play that game. We use Enterprise-grade SSD RAID-10 storage. In 2012, this is the single biggest upgrade you can make for database performance. When Nginx hands a request to PHP, and PHP queries MySQL, that query needs to return instantly.

Feature          Standard HDD VPS    CoolVDS SSD KVM
Random IOPS      ~100-150            ~20,000+
Latency          5-15ms              <0.1ms
Boot Time        45 seconds          12 seconds
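If you want to verify the I/O claims yourself, a crude sequential-write test is enough to separate SSD from oversold SATA. The /tmp/iotest path is an arbitrary scratch file; delete it when you are done.

```shell
# Write 64MB with fdatasync so the reported speed reflects real disk
# flushes, not the Linux page cache.
dd if=/dev/zero of=/tmp/iotest bs=64k count=1024 conv=fdatasync
```

On a healthy SSD array you should see well into the triple digits of MB/s; low double-digit numbers usually mean contended spinning disks. For random-I/O latency specifically, a tool like ioping gives a more honest picture than dd.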

A Note on Norwegian Compliance (Personopplysningsloven)

If you are hosting data for Norwegian citizens, you are bound by the Personopplysningsloven. Latency isn't the only reason to host in Oslo. Data sovereignty is becoming a massive talking point for Datatilsynet. Keeping your server physical location within Norway (or the EEA) simplifies your legal landscape immensely compared to hosting in the US.

Furthermore, ping times from Oslo to a server in Frankfurt might be 20-30ms. Ping times to a CoolVDS server in our Oslo data center? 2ms. For high-frequency trading or real-time applications, that difference is an eternity.

Final Thoughts

This configuration is battle-tested. It separates the concerns: Nginx handles the connections and static files, while your backend focuses on logic. It creates a robust, scalable infrastructure that can handle traffic spikes without crashing your server.

Don't let slow I/O or bad configurations kill your project. SSH into a machine that can actually keep up.

Deploy a high-performance SSD VPS on CoolVDS today and experience the difference raw speed makes.