Serverless is a Lie (But We Can Fix It)
Let’s cut through the marketing fluff. "Serverless" doesn't mean servers disappeared. It just means you are renting someone else's servers, often at a 300% markup, and you have absolutely no control over when they decide to put your application to sleep. I’ve spent too many nights debugging 504 Gateway Timeouts only to realize a cloud provider decided to reclaim resources because my function wasn't "warm" enough.
For developers targeting the Nordic market, this is a double-edged sword. You want the event-driven architecture—triggering code on database changes or webhooks—but you cannot afford the latency penalty. If your user is sitting in Oslo on a fiber connection, and your "Serverless" function takes 2.5 seconds to wake up in a Frankfurt data center, you have failed. That latency kills conversion rates faster than a bad UI.
We are going to look at a Hybrid FaaS (Function-as-a-Service) Architecture. We will build a platform that gives you the developer experience of serverless (git push to deploy) but with the raw I/O performance and predictable pricing of a dedicated environment. We are doing this on CoolVDS because, frankly, trying to run container orchestration on standard VPS providers with "noisy neighbors" and spinning rust storage is a suicide mission.
The Architecture: K3s + OpenFaaS on NVMe
The pragmatic approach in 2024 is not to go full proprietary (AWS Lambda/Azure Functions) but to use open standards. This prevents vendor lock-in and solves the data residency headaches with Datatilsynet here in Norway. By hosting the FaaS layer yourself, you control the idle timeout. You keep the functions warm. You own the metal.
Step 1: The Foundation (Don't Skimp on I/O)
Container cold starts are essentially I/O operations. The runtime has to pull the image layer, extract it, and start the process. If you are on a standard SATA SSD, this is slow. On CoolVDS NVMe instances, we see image extraction speeds that make `docker run` feel instantaneous.
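If you want to see where your current host sits, here is a quick-and-dirty sanity check (not a real benchmark — use `fio` for that). The `/tmp/io_test` path is just a scratch location:

```shell
# Rough sequential-write check. oflag=direct bypasses the page cache so you
# measure the disk, not RAM. Expect well over 1 GB/s on NVMe; a SATA SSD
# typically tops out around 500 MB/s.
dd if=/dev/zero of=/tmp/io_test bs=1M count=1024 oflag=direct status=progress

# Clean up the scratch file
rm -f /tmp/io_test
```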
We will use K3s. It’s a lightweight Kubernetes distribution that strips out the bloat of standard K8s, making it perfect for a single-node high-performance VDS.
Provision your CoolVDS instance (Ubuntu 22.04 LTS recommended). Then, install K3s without Traefik (we will install our own ingress later for finer control):
```shell
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -
```
Verify your node is ready to work:
```shell
sudo k3s kubectl get node -o wide
```
Pro Tip: Always tune your kernel for high-throughput networking if you expect a flood of events. Add these to your `/etc/sysctl.conf` to prevent connection tracking tables from filling up during a DDoS or a viral traffic spike:

```
net.netfilter.nf_conntrack_max = 131072
net.ipv4.tcp_tw_reuse = 1
```
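To know when that limit actually needs raising, you can watch the live counter the kernel exposes. A minimal sketch, suitable for cron — the 80% threshold is an arbitrary choice of mine, and it assumes the `nf_conntrack` module is loaded:

```shell
#!/bin/sh
# Warn when the conntrack table passes 80% of its configured limit.
# The 80% threshold is arbitrary; assumes nf_conntrack is loaded so the
# /proc/sys/net/netfilter files exist.
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null || echo 131072)
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null || echo 0)
threshold=$((max * 80 / 100))

if [ "$count" -gt "$threshold" ]; then
    echo "WARNING: conntrack at ${count}/${max} -- raise nf_conntrack_max"
else
    echo "OK: conntrack at ${count}/${max}"
fi
```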
Step 2: Deploying the Serverless Engine
We are using OpenFaaS. It’s battle-tested, simpler than Knative for this scale, and runs beautifully on a VDS. It allows you to package any code (Python, Go, Node.js) as a Docker container and invoke it via HTTP.
First, we need `arkade` (the marketplace installer for K8s apps) to get this running fast:
```shell
# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS on the K3s cluster
arkade install openfaas
```
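Before you can deploy anything, authenticate the CLI against the gateway. These are the standard post-install steps; the `basic-auth` secret and `openfaas` namespace are the arkade defaults:

```shell
# Grab the admin password arkade generated for the gateway
PASSWORD=$(sudo k3s kubectl -n openfaas get secret basic-auth \
    -o jsonpath="{.data.basic-auth-password}" | base64 --decode)

# Expose the gateway locally, then log the CLI in against it
sudo k3s kubectl -n openfaas port-forward svc/gateway 8080:8080 &
printf '%s' "$PASSWORD" | faas-cli login --username admin --password-stdin
```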
Once installed, you have a private "AWS Lambda" running inside your CoolVDS instance. The difference? Zero cold starts if you configure it that way.
Pattern: The "Warm-Pool" Webhook Processor
Here is a real-world scenario. I recently architected a payment processing system for a Norwegian retail chain. They needed to handle incoming webhooks from Vipps. The traffic is bursty—silent at 3 AM, insane at 5 PM on Friday.
Public cloud serverless functions were timing out during the initial handshake because the Java runtime took too long to boot. We moved it to a CoolVDS instance running OpenFaaS with a "warm pool" strategy.
The Configuration
In your OpenFaaS function definition (stack.yml), you can strictly define autoscaling rules. Unlike the public cloud, where you pay for keeping things warm, here you just use the RAM you are already paying for.
```yaml
functions:
  vipps-handler:
    lang: java11
    handler: ./vipps-handler
    image: registry.coolvds.internal/vipps-handler:latest
    labels:
      com.openfaas.scale.min: "2"    # Always keep 2 replicas running. Zero cold starts.
      com.openfaas.scale.max: "20"   # Burst up to 20 during traffic spikes.
      com.openfaas.scale.factor: "20" # Scale up quickly when load hits.
```
This configuration ensures that two instances are always resident in memory. When the webhook hits, response time is <10ms. You cannot achieve this reliability on a shared hosting plan or a standard restricted VPS.
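Shipping and verifying this is two commands. This sketch assumes you are logged in via `faas-cli` and the gateway is reachable on `localhost:8080`; the JSON payload is a placeholder, not a real Vipps event:

```shell
# Build, push, and deploy the function in one step
faas-cli up -f stack.yml

# Measure end-to-end latency against a warm replica
curl -s -o /dev/null -w "total: %{time_total}s\n" \
    -X POST http://127.0.0.1:8080/function/vipps-handler \
    -d '{"event": "test"}'
```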
Data Sovereignty & The NIX Connection
Let's talk about the elephant in the room: GDPR and Schrems II. If you are processing personal data (PII) of Norwegian citizens, sending that data to a serverless function hosted by a US provider triggers a complex compliance audit. Where are the logs stored? Who has the encryption keys?
By hosting your serverless architecture on CoolVDS, the data stays on drives physically located in our datacenters. You have root access. You control the encryption at rest (LUKS) and in transit.
| Feature | Public Cloud FaaS | OpenFaaS on CoolVDS |
|---|---|---|
| Cold Start Latency | 200ms - 3s (Unpredictable) | <10ms (Controlled) |
| Cost per 1M requests | Variable ($$$) | Fixed ($) |
| Data Residency | Complex (US CLOUD Act) | Norway (Local Control) |
| Execution Time Limit | Usually 15 mins max | Unlimited |
Handling the Database Layer
Serverless functions are stateless, but your app isn't. A common mistake is allowing 500 lambda functions to hammer a standard MySQL database, exhausting the connection pool immediately. This is the "Connection Storm."
Since you are running K3s on CoolVDS, you should deploy a connection pooler like PgBouncer (for Postgres) or ProxySQL (for MySQL) alongside your functions. This acts as a shock absorber.
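One way to wire that in is to run the pooler as an ordinary Deployment in the cluster, so every function connects to it by service name instead of hitting MySQL directly. A minimal sketch — the namespace and image tag are my assumptions, not requirements:

```yaml
# Minimal sketch: ProxySQL running next to the functions.
# Namespace (openfaas-fn) and image tag are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxysql
  namespace: openfaas-fn
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxysql
  template:
    metadata:
      labels:
        app: proxysql
    spec:
      containers:
      - name: proxysql
        image: proxysql/proxysql:2.5.5
        ports:
        - containerPort: 6033   # MySQL protocol -- point the functions here
        - containerPort: 6032   # admin interface for configuration SQL
```

With a matching Service in front, the functions use a DSN like `proxysql.openfaas-fn.svc:6033` and never open a direct connection to the database.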
Here is a snippet for a ProxySQL configuration to ensure your database doesn't melt under load:
```sql
-- Route all function traffic through hostgroup 1
INSERT INTO mysql_users(username,password,default_hostgroup) VALUES ('app_user','secure_pass',1);
LOAD MYSQL USERS TO RUNTIME;   -- without this, the new user never takes effect
SAVE MYSQL USERS TO DISK;

-- Multiplexing reduces thousands of function connections to a few dozen DB connections
UPDATE global_variables SET variable_value='true' WHERE variable_name='mysql-multiplexing';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```
Why This Stack Wins
I am a pragmatist. I don't care about "cool" tech; I care about what works at 2 AM when the pager goes off. The combination of K3s and OpenFaaS gives you the developer velocity of serverless. You write code, not config files.
But the underlying infrastructure matters. We built CoolVDS with high-frequency CPUs and NVMe storage specifically because virtualization adds overhead. If you add a heavy abstraction layer like Kubernetes on top of slow hardware, you are building a slow system. On CoolVDS, the overhead is negligible.
Stop accepting 2-second delays as "normal." Take control of your infrastructure. Deploy a CoolVDS instance today, install K3s, and see what sub-millisecond local latency actually feels like.