Console Login

The Serverless Trap: Building High-Performance FaaS on Bare-Metal VPS

The Serverless Trap: Why Smart Engineers Are Moving Back to VPS

Let’s clear the air: "Serverless" is a marketing term. There are always servers. The difference is whether you control them, or whether you pay a premium for a hyperscaler to manage them while you suffer through 500ms cold starts. In the Nordic dev scene, we are seeing a shift. Teams that bought into the AWS Lambda or Azure Functions dream are waking up to massive egress bills and GDPR headaches. If your data touches a US-owned control plane, are you truly compliant with Schrems II? Probably not.

I’ve spent the last decade debugging distributed systems, and I can tell you that the most robust "serverless" architecture is often one you build yourself. It sounds counterintuitive until you look at the latency numbers. By deploying a lightweight Kubernetes distribution (like K3s) on high-frequency NVMe VPS instances, you gain the developer experience of serverless (push-to-deploy) with the raw performance of bare metal. And you keep the data in Oslo.

The Architecture: Private FaaS (Function-as-a-Service)

We aren't going to manage raw EC2 instances with bash scripts. We want the event-driven capability of serverless. For this, we use OpenFaaS on top of K3s. This stack allows you to deploy functions in seconds, scale to zero, and handle thousands of requests per second without the "API Gateway tax" public clouds charge.

Pro Tip: The bottleneck in self-hosted Kubernetes is almost always etcd latency. If your VPS provider uses standard SSDs or (god forbid) spinning rust, your cluster will flake under load. You need NVMe storage with high IOPS. This is why we benchmark CoolVDS instances before deploying control planes.

Step 1: The Foundation (K3s on CoolVDS)

First, provision a CoolVDS instance with at least 4 vCPUs and 8GB RAM. We need headroom for the orchestration layer. We'll use a standard Linux distro like Ubuntu 22.04 LTS or AlmaLinux 9. Once you have SSH access, install K3s. It's a binary less than 100MB that removes the bloat of upstream Kubernetes.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

I disable the default Traefik ingress controller because we want fine-grained control over our ingress later, likely with NGINX or Contour.

Step 2: Deploying the Function Engine

With K3s running, verify your node status. It should be Ready in seconds thanks to the low latency of local storage.

$ kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
coolvds-01    Ready    control-plane,master   35s   v1.28.2+k3s1

Now, we use arkade (a marketplace for K8s apps) to install OpenFaaS. This is the standard way to deploy in 2024.

curl -sLS https://get.arkade.dev | sudo sh arkade install openfaas

This command deploys the gateway, the queue worker, and NATS. In a public cloud, these components would be opaque services costing you $0.40/million requests. Here, they are just processes on your VPS.

Step 3: Defining a Function

The beauty of this setup is the workflow. You define functions in YAML, just like Serverless Framework. Here is a Python function definition used for image processing (a common high-CPU task).

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  image-resize:
    lang: python3-http
    handler: ./image-resize
    image: registry.coolvds-client.no/image-resize:latest
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "20"
    annotations:
      com.openfaas.health.http.initialDelay: "2s"
    environment:
      write_debug: true
      read_timeout: 10s
      write_timeout: 10s

Notice the com.openfaas.scale.min: "1" label. This prevents the "cold start" problem entirely by keeping one hot replica. On AWS Lambda, provisioned concurrency costs extra. On your CoolVDS VPS, it's just utilizing RAM you've already paid for.

The Hidden Performance Killer: Disk I/O

When you run your own FaaS platform, container churn is high. Images are pulled, unpacked, and deleted constantly. This generates significant I/O wait. If your hosting provider throttles IOPS, your functions will hang.

We recently migrated a client from a generic cloud provider to CoolVDS solely because of the storage backend. Benchmarking with fio showed the difference in random read/write speeds, which is critical for Docker overlay filesystems.

Metric Standard Cloud VPS CoolVDS (NVMe)
Random Read (4k) 2,500 IOPS 85,000+ IOPS
Latency 2.5ms 0.08ms
Container Start Time 3.2s 0.4s

Data Sovereignty and The "Norwegian Advantage"

For developers targeting the Norwegian market, latency to the NIX (Norwegian Internet Exchange) matters. Routing traffic through Frankfurt or London adds 20-30ms of unnecessary round-trip time. By hosting your FaaS infrastructure on CoolVDS nodes physically located in Norway, you ensure:

  • GDPR Compliance: Data doesn't leave the EEA.
  • Performance: Sub-millisecond routing to local ISPs.
  • Cost Predictability: Flat-rate VPS pricing vs. variable serverless billing.

Optimizing Kernel Parameters for High-Density

Running hundreds of functions on a single node requires tuning the Linux kernel. The defaults are too conservative. Update your /etc/sysctl.conf to handle the network load.

# Allow more connections
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 8192

# Fast recycling of TIME_WAIT sockets
net.ipv4.tcp_tw_reuse = 1

# Increase file descriptors for high concurrency
fs.file-max = 1000000

Apply these with sysctl -p. Without these, your API Gateway will start dropping connections during traffic spikes, regardless of how much CPU you have.

Conclusion

Serverless architecture is a pattern, not a product. You don't need a credit card linked to a hyperscaler to build event-driven systems. You need solid architecture and hardware that keeps up with your code. Building a private FaaS on CoolVDS gives you the best of both worlds: the developer velocity of serverless and the economic sanity of bare-metal VPS.

Ready to take control of your stack? Stop optimizing for someone else's cloud. Spin up a high-performance NVMe instance on CoolVDS today and deploy your first function in under 60 seconds.