Breaking the C10k Barrier: High-Performance Nginx Reverse Proxying on CentOS 6

If I have to look at another server-status page showing Apache spawning 200 child processes just to serve a 4KB static image, I might actually pull the plug on the rack myself. It is 2012. We are long past the days when throwing more RAM at a problem was a viable strategy, especially when budgets are tight and traffic is spiking.

The reality for those of us running infrastructure in Oslo, or wrangling remote boxes across the EU, is simple: Apache is a fantastic application server but a terrible frontend.

This isn't about "web scale" buzzwords. This is about physics. When you use a thread-heavy server like Apache to handle thousands of slow clients (think mobile 3G connections dropping in and out), you run out of memory before you run out of CPU. The solution isn't a larger server; it's an event-driven reverse proxy. Enter Nginx.

The Architecture: Nginx as the Bouncer

The most robust setup I've deployed for high-traffic sites involves placing Nginx at the edge, listening on port 80/443, and proxying dynamic requests back to Apache (or PHP-FPM) listening on localhost:8080. Nginx handles the heavy lifting—SSL termination, gzip compression, and static file serving—while the backend focuses solely on generating the page.

This is crucial for Norwegian businesses targeting local customers. Latency to the NIX (Norwegian Internet Exchange) matters, but if your web server is blocking on I/O, that low-latency fiber connection is wasted.

Step 1: Installation on CentOS 6

Don't rely on the default repositories; they are often outdated. We want the stable 1.2.x branch. Create a repo file at /etc/yum.repos.d/nginx.repo:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=1
gpgkey=http://nginx.org/keys/nginx_signing.key
enabled=1

Then install:

yum install nginx
chkconfig nginx on
service nginx start

The Core Configuration

The default nginx.conf is too conservative. We need to tune the worker processes to utilize the underlying hardware efficiently. If you are running on a CoolVDS Enterprise SSD plan with 4 vCPUs, you need to tell Nginx to use them.
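Before editing the config, confirm how many logical CPUs the guest actually sees. A quick sanity check (/proc/cpuinfo is standard on any Linux VPS):

```shell
# Count the logical CPUs visible to this machine -- this is the
# number that worker_processes should match.
grep -c ^processor /proc/cpuinfo
```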

Here is a production-ready /etc/nginx/nginx.conf skeleton:

user  nginx;
worker_processes  4; # Match this to your core count

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
    use epoll; # Critical for Linux performance
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    # Optimization for file serving
    sendfile        on;
    tcp_nopush      on;
    tcp_nodelay     on;

    keepalive_timeout  65;
    
    # Gzip settings - save bandwidth!
    gzip  on;
    gzip_disable "msie6";
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
}
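A quick back-of-the-envelope check on these numbers: 4 workers at 1024 connections each gives a theoretical ceiling of 4096 simultaneous connections, but as a reverse proxy each client can hold two of them (one to the client, one to the upstream). A rough sketch of the math:

```shell
# Estimate the concurrency ceiling for the config above.
WORKER_PROCESSES=4
WORKER_CONNECTIONS=1024
# Each proxied client holds a frontend and an upstream connection,
# so divide by two for a realistic estimate.
MAX_CLIENTS=$(( WORKER_PROCESSES * WORKER_CONNECTIONS / 2 ))
echo "Estimated max concurrent proxied clients: $MAX_CLIENTS"
```

Remember that each worker is also bounded by its file-descriptor limit (`ulimit -n`), which defaults to 1024 on CentOS 6; raise it in /etc/security/limits.conf if you raise worker_connections.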

Configuring the Reverse Proxy

Now, let’s configure the actual proxy block. We need to ensure that the backend application (running on port 8080) receives the correct client IP addresses. Without this, your application logs will show 127.0.0.1 for every visitor, making debugging or geo-blocking impossible.

Create /etc/nginx/conf.d/app_proxy.conf:

server {
    listen       80;
    server_name  example.no www.example.no;

    # Serve static files directly, bypassing the backend
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        root           /var/www/html;
        access_log     off;
        expires        30d;
    }

    # Proxy everything else to Apache/Backend
    location / {
        proxy_pass         http://127.0.0.1:8080;
        proxy_redirect     off;

        # Standard Proxy Headers
        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
        
        # Buffer settings for handling heavy payloads
        client_max_body_size       10m;
        client_body_buffer_size    128k;

        proxy_connect_timeout      90;
        proxy_send_timeout         90;
        proxy_read_timeout         90;

        proxy_buffer_size          4k;
        proxy_buffers              4 32k;
        proxy_busy_buffers_size    64k;
        proxy_temp_file_write_size 64k;
    }
}

Pro Tip: If your backend application is PHP-based (like Drupal or WordPress), make sure mod_rpaf is installed on the Apache side so it interprets the X-Real-IP header correctly. Otherwise, your security plugins will fail to detect brute-force attacks.
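For reference, a minimal mod_rpaf setup on the Apache side might look like this. This is a sketch assuming mod_rpaf 0.6 for Apache 2.x; directive names and the module filename vary slightly between versions, so check your build:

```apache
# /etc/httpd/conf.d/mod_rpaf.conf -- sketch, adjust paths to your build
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
# Trust only the local Nginx instance as a proxy
RPAFproxy_ips 127.0.0.1
# Restore the Host header passed through by proxy_set_header Host
RPAFsethostname On
# Read the real client IP from the header Nginx sets above
RPAFheader X-Real-IP
```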

The Hardware Bottleneck: Why SSDs Matter

Here is where the theory meets the metal. Nginx buffers backend responses in memory (the proxy_buffer_size and proxy_buffers directives above); once a response outgrows those buffers, Nginx spills it to a temporary file on disk, written in chunks capped by proxy_temp_file_write_size. If you have a slow backend generating a large report, Nginx writes that data to disk before sending it to the client.

On a traditional mechanical hard drive (HDD), these random writes can cause I/O wait times to skyrocket, causing the dreaded "load average" spike. This is why we migrated our primary clusters to CoolVDS. Their use of pure SSD storage means that even when Nginx swaps buffers to disk, the latency is negligible.
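You can get a crude feel for this on your own box with dd, forcing a sync per block in the same 64KB chunks Nginx uses for its temp files. This is a rough probe, not a proper benchmark (use iostat or fio for real numbers), and it assumes /tmp sits on the disk you care about:

```shell
# Write 64KB blocks with a sync after each one, mimicking Nginx
# spilling proxy buffers to its temp path. The MB/s figure that dd
# reports will be dramatically lower on rotating disks.
dd if=/dev/zero of=/tmp/nginx_io_probe bs=64k count=100 oflag=dsync
rm -f /tmp/nginx_io_probe
```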

Comparison: HDD vs SSD for Proxy Buffering

Metric                 | Standard SATA HDD (7.2k RPM) | CoolVDS Enterprise SSD
Random Write IOPS      | ~80-120                      | ~50,000+
Buffer Flush Latency   | 15-20ms                      | < 0.5ms
Concurrent Requests    | Struggles at 500+            | Stable at 5,000+

Compliance and Data Location

Working in the Nordic market requires adherence to the Personal Data Act (Personopplysningsloven). While the US Patriot Act has made hosting sensitive data on American soil a legal minefield for European companies, keeping that data in Norway puts you on much firmer legal ground.

However, compliance isn't just about geography; it's about control. By managing your own Nginx instance on a VPS rather than using a shared hosting black box, you maintain full control over your access logs. You decide how long IP addresses are retained, ensuring you stay on the right side of the Datatilsynet guidelines.
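A simple way to enforce a retention policy is logrotate, which ships with CentOS 6. Here is a sketch of /etc/logrotate.d/nginx keeping seven days of logs (the nginx RPM installs a similar file by default; adjust rotate to match your own retention policy):

```
/var/log/nginx/*.log {
    daily
    rotate 7
    missingok
    compress
    delaycompress
    sharedscripts
    postrotate
        # USR1 tells Nginx to reopen its log files after rotation
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
```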

Final Verification

Before you restart Nginx and point your DNS records, always test your configuration syntax:

nginx -t

If you see "syntax is ok" and "test is successful", you are ready to reload:

service nginx reload

By shifting connection handling to Nginx, you effectively immunize your server against the Slowloris-style attacks that cripple standard Apache setups. You gain stability, lower memory usage, and the ability to sleep through the night without pager alerts.

Don't let legacy rotating rust slow down your application. Deploy a test instance on CoolVDS today, configure this reverse proxy, and watch your load averages drop.