Zero-Trust Architecture in 2017: Why Your VPN is a False Sense of Security

If you are still operating under the assumption that your internal network is a "safe space," you are already compromised. The traditional "castle-and-moat" security model—where we harden the perimeter with heavy firewalls and VPNs but leave the internal LAN soft and trusting—is failing. We saw it with the massive retail breaches over the last few years; once an attacker is inside, lateral movement is trivial.

As we approach the enforcement date for the EU's General Data Protection Regulation (GDPR) next year, the stakes for Norwegian businesses are higher than ever. Datatilsynet isn't going to accept "but our firewall was expensive" as an excuse for a data leak. It is time to adopt the Zero Trust model, popularized by Google's BeyondCorp initiative. The premise is simple: Trust no packet, even if it comes from 192.168.1.10.

I've spent the last month migrating a financial services client in Oslo from a legacy VPN architecture to a granular Zero Trust setup. We found that their "secure" internal dev environment was crawling with unpatched services accessible to anyone with a VPN credential. Here is how we fixed it, and how you can implement these principles on your CoolVDS infrastructure today.

1. The Death of the Perimeter

In a Zero Trust environment, we shift access controls from the network perimeter to the individual device and user. Every request must be authenticated, encrypted, and authorized, regardless of origin.

When you deploy a VPS on CoolVDS, you get distinct public and private network interfaces. A lazy admin creates a rule allowing all traffic on eth1 (private). A smart admin treats eth1 exactly like eth0: hostile.
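
In iptables terms, the difference looks like this (eth1 as the private interface name and the peer IP are assumptions; adjust to your own topology):

```shell
# Lazy admin: blanket trust for the private interface -- avoid this
#   iptables -A INPUT -i eth1 -j ACCEPT

# Zero Trust: no blanket allow; each service gets an explicit rule
iptables -A INPUT -i eth1 -p tcp -s 10.10.10.15 --dport 3306 -j ACCEPT
```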

2. Authenticating Machines: Mutual TLS (mTLS)

Passwords can be phished. SSH keys can be stolen. But Mutual TLS (mTLS) provides a robust way to ensure that only authorized machines can talk to your backend services. In 2017, this is the most reliable way to secure service-to-service communication without the overhead of complex SDN solutions.

Here is how to configure Nginx to require a valid client certificate. This ensures that even if an attacker gets onto your network segment, they cannot query your API without the cryptographic certificate.

Step 1: Generate the Certificate Authority (CA)

# Create the CA Key and Certificate
openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt

Step 2: Generate the Client Key and CSR

# Create the Client Key
openssl genrsa -out client.key 2048

# Create the Certificate Signing Request
openssl req -new -key client.key -out client.csr

# Sign the CSR with our CA
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
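
Before wiring these into Nginx, sanity-check that the client certificate actually chains back to your CA:

```shell
# Should print "client.crt: OK"; anything else means the signing step failed
openssl verify -CAfile ca.crt client.crt
```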

Step 3: Hardening Nginx

Now, we configure the backend service on our CoolVDS instance to reject any connection that doesn't present a certificate signed by our CA. The following server block does the job.

server {
    listen 443 ssl;
    server_name api.internal.coolvds-customer.no;

    ssl_certificate /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Enforce Client Certificate Verification
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://localhost:8080;
        # Pass SSL details to the backend app if needed
        proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
    }
}

With ssl_verify_client on;, a request without the correct certificate is dropped at the handshake level. It doesn't matter if they have the password; they can't even load the login page.
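
You can verify the lockdown from a client machine with curl (the hostname follows the example above; substitute your own):

```shell
# Without a client certificate: rejected at the TLS handshake
curl https://api.internal.coolvds-customer.no/

# With the signed client certificate: the request goes through
curl --cacert ca.crt \
     --cert client.crt --key client.key \
     https://api.internal.coolvds-customer.no/
```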

3. Authenticating Humans: Identity-Aware Proxy

For human-facing internal tools (like Jenkins, Kibana, or Adminer), simple HTTP Basic Auth is insufficient in 2017. You need centralized identity management.

We use oauth2_proxy (a Go-based reverse proxy) to sit in front of these internal tools. It forces users to authenticate with a provider (such as Google or GitHub) before the request ever touches the application.
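
A minimal invocation looks like the following; the client ID and secret come from your OAuth provider's console, and the Kibana upstream port, domain, and secret values are placeholders:

```shell
# oauth2_proxy listens on 4180; terminate TLS in Nginx and proxy_pass to it
oauth2_proxy \
  --provider=google \
  --email-domain=yourcompany.no \
  --upstream=http://127.0.0.1:5601 \
  --http-address=127.0.0.1:4180 \
  --client-id=YOUR_CLIENT_ID \
  --client-secret=YOUR_CLIENT_SECRET \
  --cookie-secret=RANDOM_SECRET_STRING
```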

Pro Tip: Do not expose your database management ports (3306, 5432) to the public internet, ever. On CoolVDS, use SSH Tunnels or a VPN strictly for management, but wrap the application access in the mTLS logic described above.
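
For management access, an SSH tunnel keeps the database port off the wire entirely (the hostname and local port 3307 are examples):

```shell
# Forward local port 3307 to MySQL on the DB node, over SSH only
ssh -N -L 3307:127.0.0.1:3306 admin@db.example.no

# In another terminal, connect through the tunnel
mysql -h 127.0.0.1 -P 3307 -u appuser -p
```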

4. Network Segmentation with Iptables

Even with mTLS, you must minimize the blast radius. If one web server is compromised, it should not be able to SSH into your database server.

On Ubuntu 16.04 (Xenial Xerus), ufw is great, but for granular Zero Trust, I prefer raw iptables or ipset to handle high-throughput blocking without the abstraction overhead. Here is a baseline configuration for a database node that only accepts traffic from a specific web node IP on the private interface.

# Flush existing rules (apply this as one script, ideally from console
# access -- running it line-by-line over SSH will lock you out)
iptables -F

# Default Policy: DROP EVERYTHING
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Allow loopback
iptables -A INPUT -i lo -j ACCEPT

# Allow established connections (crucial!)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow SSH only from specific admin IP (VPN endpoint)
iptables -A INPUT -p tcp -s 10.8.0.5 --dport 22 -j ACCEPT

# Allow MySQL only from the Web Server Private IP
iptables -A INPUT -p tcp -s 10.10.10.15 --dport 3306 -j ACCEPT

# Log dropped packets, rate-limited to protect disk I/O on busy systems
iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "IPTables-Dropped: "
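
These rules vanish on reboot. On Ubuntu 16.04 the iptables-persistent package loads /etc/iptables/rules.v4 at boot; the ruleset above in iptables-restore format (with the LOG rule rate-limited to spare disk I/O) looks like this:

```text
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -s 10.8.0.5 --dport 22 -j ACCEPT
-A INPUT -p tcp -s 10.10.10.15 --dport 3306 -j ACCEPT
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "IPTables-Dropped: "
COMMIT
```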

5. The Norwegian Advantage: Data Sovereignty

Technological implementation is only half the battle. The other half is legal and physical jurisdiction. With the invalidation of Safe Harbor and the precarious nature of Privacy Shield, relying on US-based hosting providers is becoming a liability for Norwegian companies handling sensitive user data.

CoolVDS infrastructure is located physically in data centers subject to Norwegian law. This reduces latency to your Oslo and Bergen customer base to sub-5ms levels, but more importantly, it simplifies your GDPR compliance journey. You know exactly where the data lives. It's not floating in a nebulous "cloud region" that might span jurisdictions; it's right here on NVMe storage you can verify.

Conclusion

Zero Trust is not a product you buy; it's a mindset you adopt. It acknowledges that threats are omnipresent and that the firewall is no longer a sufficient guardian. By implementing Mutual TLS, rigorous Identity-Aware proxies, and strict iptables segmentation, you build an infrastructure that is resilient by design.

Security reduces performance only when implemented poorly. In our benchmarks, the extra Nginx mTLS handshake added negligible latency, and with keep-alive enabled, established connections pay nothing at all. Stop relying on luck and legacy architectures.

Ready to harden your stack? Spin up a CoolVDS KVM instance today and start building a network that trusts no one.