Serverless Without the Handcuffs: Implementing Private FaaS Patterns on High-Performance VDS in 2025
Let’s clear the air: "Serverless" is a lie. There are always servers. The only difference is whether you control them or rent them by the millisecond at a 400% markup, praying your cold starts don't kill your conversion rates. I’ve spent the last decade watching engineering teams migrate to AWS Lambda or Azure Functions, only to come crawling back to VPS infrastructure two years later when their cloud bill hits five figures and latency to Oslo spikes unpredictably.
In 2025, the smart money isn't on public cloud FaaS (Function-as-a-Service). It's on Private FaaS. It gives you the developer velocity of serverless—git push deployment, auto-scaling, event-driven triggers—without the vendor lock-in or data sovereignty nightmares that keep Norwegian CTOs awake at night.
Here is how you build battle-tested serverless architecture patterns using CoolVDS as your engine room.
The Architecture: The "Iron-FaaS" Stack
We aren't creating a distributed mess here. We are building a lean, mean, function-executing machine. The stack of choice for 2025 is K3s (lightweight Kubernetes) paired with OpenFaaS. This combination allows you to run event-driven functions on standard KVM instances.
Why KVM? Because containers share the host's kernel, and on container-based hosting (LXC/OpenVZ) you share that kernel, and its I/O, with noisy neighbors. On a CoolVDS KVM instance, you get your own kernel and dedicated CPU time. When 50 functions fire at once, you need raw compute, not a "burstable" promise.
Pattern 1: The Asynchronous Webhook Receiver
This is the most common pain point. You have a Stripe webhook or a heavy data ingestion endpoint. If you process it synchronously, your client times out. In a public cloud, you pay separately for the API Gateway, the Queue, and the Function. On your own VDS, the queue comes built in: OpenFaaS ships with NATS for asynchronous invocation, and your functions are the background workers.
The Flow: Nginx → OpenFaaS Gateway → NATS (Queue) → Go Function.
First, we tune the node. You cannot run high-concurrency serverless workloads on default Linux settings. You will hit file descriptor limits immediately.
# /etc/sysctl.conf tuning for high-concurrency FaaS
# Increase max open files for heavy container usage
fs.file-max = 2097152
# Increase the backlog for high connection bursts (webhook storms)
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000
# Allow more local port range for outbound connections (db calls)
net.ipv4.ip_local_port_range = 1024 65535
# Minimize swap usage. We want RAM speed, not disk thrashing.
vm.swappiness = 1
Apply this with sysctl -p. If you are running on a provider with slow spinning rust (HDD), these settings won't save you. You need the NVMe storage standard on CoolVDS to handle the I/O pressure of writing logs and state from hundreds of short-lived containers.
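Once the node is tuned, the asynchronous leg of the flow is just an HTTP call. OpenFaaS exposes every function under an /async-function/ route, queues the payload on NATS, and returns 202 Accepted immediately. Here is a minimal client sketch; the function name stripe-webhook and the callback URL are placeholders for illustration:

import requests

# Fire-and-forget: the gateway queues the payload on NATS and
# returns 202 before the function ever runs.
# "stripe-webhook" is a placeholder function name for this sketch.
resp = requests.post(
    "http://127.0.0.1:8080/async-function/stripe-webhook",
    json={"event": "invoice.paid", "amount_nok": 49900},
    headers={
        # Optional: OpenFaaS POSTs the function's result here when it finishes.
        "X-Callback-Url": "https://api.norway-service.no/hooks/stripe-result",
    },
    timeout=5,
)
print(resp.status_code)               # expect 202 Accepted
print(resp.headers.get("X-Call-Id"))  # correlation ID for tracing

The client returns in single-digit milliseconds; the heavy lifting happens whenever a worker drains the job off the queue.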
Pattern 2: The "Fan-Out" Image Processor
Let's say you are building an e-commerce platform for a retailer in Bergen. Users upload high-res images. You need to resize them, watermark them, and strip metadata. Doing this inline in a single monolithic PHP process will tie up your workers.
Instead, we use the Fan-Out Pattern. One event triggers multiple functions in parallel.
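The dispatch side can be a few lines, because the same /async-function/ route from Pattern 1 already gives us non-blocking semantics. A sketch, assuming three illustrative function names (only img-resizer is defined in the stack file below):

import requests

GATEWAY = "http://127.0.0.1:8080"
# Illustrative pipeline; only img-resizer appears in stack.yml below.
PIPELINE = ["img-resizer", "img-watermark", "img-strip-exif"]

def fan_out(image_url: str) -> dict:
    """Queue one upload event onto every processing function.

    Each POST returns 202 immediately; the functions then run in
    parallel as NATS drains the queue."""
    call_ids = {}
    for fn in PIPELINE:
        resp = requests.post(
            f"{GATEWAY}/async-function/{fn}",
            json={"image_url": image_url},
            timeout=5,
        )
        resp.raise_for_status()
        call_ids[fn] = resp.headers.get("X-Call-Id")
    return call_ids

if __name__ == "__main__":
    print(fan_out("https://cdn.example.no/uploads/raw/42.jpg"))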
Here is the OpenFaaS stack definition (stack.yml) for a Python-based resizer. Note the resource limits; they are critical for density.
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  img-resizer:
    lang: python3-http
    handler: ./img-resizer
    image: registry.coolvds-client.no/img-resizer:latest
    labels:
      com.openfaas.scale.min: "2"
      com.openfaas.scale.max: "20"
    environment:
      write_debug: true
      read_timeout: 10s
      write_timeout: 10s
    limits:
      memory: 128Mi
      cpu: 100m
    requests:
      memory: 64Mi
      cpu: 50m
The Secret Sauce: The com.openfaas.scale.min: 2 label. This prevents "cold starts." We keep two hot replicas ready. On AWS Lambda, keeping instances warm costs a fortune. On CoolVDS, it costs you nothing extra because you've already paid for the RAM.
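For reference, the handler behind that definition stays small. A sketch of img-resizer/handler.py against the python3-http template's handle(event, context) signature; the Pillow dependency and the 800px target width are assumptions for this example, not requirements:

import io
from PIL import Image  # add "pillow" to img-resizer/requirements.txt

TARGET_WIDTH = 800  # assumed output width for this sketch

def handle(event, context):
    """python3-http entrypoint: event.body carries the raw upload bytes."""
    img = Image.open(io.BytesIO(event.body)).convert("RGB")
    ratio = TARGET_WIDTH / img.width
    img = img.resize((TARGET_WIDTH, max(1, round(img.height * ratio))))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)  # re-encoding also drops EXIF
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "image/jpeg"},
        "body": buf.getvalue(),
    }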
Pattern 3: The GDPR-Shield Proxy
This is specific to our region. Datatilsynet (The Norwegian Data Protection Authority) has made it very clear: transferring PII (Personally Identifiable Information) to US-controlled clouds is a legal minefield post-Schrems II.
The architecture pattern here is the Reverse Proxy Gatekeeper. You run Nginx at the edge on a CoolVDS instance in Oslo. It terminates SSL/TLS, strips sensitive headers, and then routes the request to your internal functions. The data never leaves the Norwegian legal jurisdiction.
# nginx.conf inside your Gateway VDS
http {
    upstream faas_gateway {
        server 10.0.0.5:8080;  # Internal IP of your Function node
        keepalive 64;
    }

    server {
        listen 443 ssl http2;
        server_name api.norway-service.no;

        # SSL optimizations for low latency handshakes
        ssl_certificate     /etc/letsencrypt/live/domain/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/domain/privkey.pem;
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 10m;

        location /function/ {
            proxy_pass http://faas_gateway;
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # Strict header stripping for compliance
            proxy_hide_header X-Powered-By;
            proxy_set_header X-Real-IP $remote_addr;

            # Buffer tuning for JSON payloads
            proxy_buffers 8 16k;
            proxy_buffer_size 32k;
        }
    }
}
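Trust, but verify. A quick smoke test you can drop into CI; the /function/echo route is a placeholder for any function you have deployed behind the gateway:

import requests

# Placeholder route: point this at any function behind your gateway.
resp = requests.get("https://api.norway-service.no/function/echo", timeout=5)

# X-Powered-By must never reach the client (proxy_hide_header above).
# Note: nginx still advertises itself via the Server header; server_tokens off
# trims the version, and removing it entirely needs the headers-more module.
assert "X-Powered-By" not in resp.headers, "Gateway is leaking backend headers"
print("Header audit passed:", resp.status_code)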
Pro Tip: Network latency within Norway matters. Routing traffic from a user in Trondheim to a server in Frankfurt adds 30-40ms round trip. Routing it to a CoolVDS instance in Oslo keeps it under 10ms. For high-frequency trading or real-time gaming backends, this is non-negotiable.
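Don't take those numbers on faith; measure them. A rough probe that times the TCP handshake, which approximates one network round trip; the hostnames are placeholders:

import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median TCP connect time in ms: approximately one round trip."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]

# Placeholder hostnames: compare your Oslo VDS against a Frankfurt box.
for host in ["oslo.example.no", "frankfurt.example.de"]:
    print(f"{host}: {tcp_rtt_ms(host):.1f} ms")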
The Infrastructure Reality Check
I have deployed OpenFaaS on cheap, oversold VPS providers before. It’s a disaster. Why?
- Etcd Latency: Kubernetes clusters live and die by their datastore. K3s defaults to SQLite and runs embedded etcd in HA mode, and both are incredibly sensitive to disk write latency (fsync). If your provider has "noisy neighbors" stealing I/O, your entire cluster will flap and leader elections will fail. CoolVDS NVMe arrays provide the consistent, low-latency IOPS required to keep etcd stable.
- Packet Loss: Serverless relies on many internal HTTP calls (Gateway to Provider to Function to Database). A 1% packet loss rate compounds across these internal hops: with four hops, nearly 4% of requests hit at least one retransmit, and each retransmit stalls behind a 200ms-plus TCP timeout. That is where the massive 99th percentile latency spikes come from (see the back-of-envelope sketch below).
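The arithmetic behind that claim is short enough to run in your head, but here it is anyway:

# Back-of-envelope: chance that one request crossing N lossy hops
# needs at least one TCP retransmit. Values are illustrative.
loss_per_hop = 0.01
hops = 4  # Gateway -> Provider -> Function -> Database

p_retransmit = 1 - (1 - loss_per_hop) ** hops
print(f"Requests hitting a retransmit: {p_retransmit:.1%}")  # ~3.9%
# Linux's minimum TCP retransmission timeout is ~200ms, so roughly 4%
# of requests eat a 200ms+ stall. That is your p99 spike.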
Here is a quick Python script to stress-test your current provider's I/O stability before you try to run FaaS on it. If the deviation is high, move to CoolVDS.
import os
import statistics
import time

def sync_test(cycles=1000):
    """Time each write+fsync individually so we see jitter, not just the total."""
    latencies = []
    with open("test_file.bin", "wb") as f:
        for _ in range(cycles):
            start = time.perf_counter()
            f.write(os.urandom(4096))
            os.fsync(f.fileno())  # Force the write through to disk
            latencies.append(time.perf_counter() - start)
    os.remove("test_file.bin")
    print(f"Time for {cycles} fsyncs: {sum(latencies):.4f}s")
    print(f"Mean: {statistics.mean(latencies) * 1000:.3f} ms | "
          f"Max: {max(latencies) * 1000:.3f} ms | "
          f"Stdev: {statistics.stdev(latencies) * 1000:.3f} ms")

if __name__ == "__main__":
    print("Starting I/O Latency Test...")
    sync_test()
If that script takes more than 2 seconds for 1000 cycles (a 2 ms average per fsync), or the standard deviation dwarfs the mean, your current host is stealing your I/O.
Take Control of Your Stack
Serverless architecture is brilliant. Handing over the keys to your infrastructure to a hyperscaler is not. By running K3s and OpenFaaS on CoolVDS, you get the best of both worlds: the operational efficiency of functions and the raw power, cost-predictability, and legal compliance of bare-metal virtualization.
Stop accepting cold starts and variable billing as a fact of life. Build a platform that actually works for your engineering team.
Ready to build your private FaaS cluster? Deploy a high-performance NVMe KVM instance on CoolVDS today and get root access in under 60 seconds.