Serverless Patterns Without the Cloud Tax: Building Private FaaS in Norway

Let's get one thing straight: "serverless" is a misnomer. There are always servers. The only difference is whether you control them or rent them by the millisecond at a 400% markup.

I’ve seen too many engineering teams in Oslo migrate to AWS Lambda or Azure Functions expecting nirvana, only to wake up to three distinct nightmares: cold starts killing their user experience, unpredictable billing that scales faster than their revenue, and the looming legal headache of Schrems II.

If you are handling Norwegian user data, pushing it through a US-controlled hyper-scaler is a compliance risk you shouldn't take lightly in 2021. The alternative? Build your own Serverless platform. It gives you the developer velocity of FaaS (Function as a Service) with the fixed costs and data sovereignty of bare-metal virtualization.

The Architecture: Private FaaS on Kubernetes

We don't need the bloat of full upstream Kubernetes for this. We need K3s—a lightweight, certified Kubernetes distribution—running OpenFaaS. This stack allows you to deploy functions (Node, Python, Go) just like Lambda, but on your own terms.

However, this architecture has two critical dependencies: disk I/O and network latency.

Pro Tip: Kubernetes relies heavily on etcd for state management. etcd is incredibly sensitive to disk write latency. If your underlying storage waits, your entire cluster hangs. This is why shared hosting fails for K8s. You need the NVMe speeds standard on CoolVDS to keep etcd sync times under 10ms.
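Don't take that on faith; measure it. The etcd team's own guidance is to benchmark fdatasync latency with fio. A minimal sketch, using an illustrative test directory and the write sizes from the etcd documentation:

# Benchmark fdatasync latency the way etcd's WAL experiences it.
# On NVMe, the 99th percentile fsync should land well under 10ms.
mkdir -p /tmp/etcd-disk-test
fio --name=etcd-disk-test \
    --directory=/tmp/etcd-disk-test \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300

fio prints percentile latencies for the sync calls; that 99th percentile figure is the one that predicts whether your control plane stays responsive under load.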

Step 1: The Foundation

Start with a clean Ubuntu 20.04 LTS instance. Do not use a container-based VPS (like OpenVZ); you need a proper kernel for Docker/containerd. We use KVM-based instances at CoolVDS for this reason.
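If you are not sure what your provider actually gives you, two commands settle it (systemd-detect-virt ships with systemd on Ubuntu 20.04):

systemd-detect-virt   # expect "kvm"; "openvz" or "lxc" means container-based
uname -r              # Ubuntu 20.04 ships a 5.4+ kernel, which suits containerd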

First, optimize your kernel for high-throughput networking. Default Linux settings are conservative. Add this to /etc/sysctl.conf:

net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_max_syn_backlog = 5000
net.ipv4.tcp_slow_start_after_idle = 0
fs.file-max = 2097152
vm.swappiness = 10

Apply it with sysctl -p. The tcp_slow_start_after_idle = 0 flag is crucial: by default, Linux collapses the TCP congestion window on connections that sit idle, forcing warm keep-alive connections to ramp up from scratch on the next burst. Disabling it keeps your functions snappy.
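You can verify the flag stuck:

sudo sysctl -p
sysctl net.ipv4.tcp_slow_start_after_idle   # should report 0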

Step 2: Lightweight Orchestration with K3s

Install K3s. It strips out legacy cloud provider binaries, making it perfect for a single-node VDS cluster.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

We disable the default Traefik because we want tighter control over our Ingress later. Verify your node is ready:

sudo k3s kubectl get node
# NAME          STATUS   ROLES                  AGE   VERSION
# coolvds-node  Ready    control-plane,master   35s   v1.21.5+k3s1
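Before layering anything on top, point your tooling at the K3s kubeconfig. K3s writes it to a non-standard path, and the install script already symlinks kubectl for you:

# Copy the K3s kubeconfig to the usual location for kubectl, helm, and arkade
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config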

Step 3: Deploying OpenFaaS

OpenFaaS is the engine. It handles the API gateway, scaling, and queueing. We'll use arkade, a Go-based CLI that wraps the official Helm charts, to install it quickly.

curl -sLS https://get.arkade.dev | sudo sh

arkade install openfaas \
  --load-balancer \
  --set gateway.replicas=2 \
  --set queueWorker.replicas=2

Notice we set replicas to 2. Even on a single VDS, this ensures that if one process locks up processing a heavy payload, the other keeps serving traffic. This type of parallel processing requires the dedicated CPU cores you get with CoolVDS, not the "burstable" credits that disappear when you need them most.
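With the pods up, grab faas-cli, fetch the generated admin password, and log in. This is the standard OpenFaaS post-install flow; the port-forward stands in for a proper Ingress on a single node:

arkade get faas-cli

kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

PASSWORD=$(kubectl get secret -n openfaas basic-auth \
  -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin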

Step 4: The Function Pattern

Let's create a function that processes data locally. This stays in Oslo. No round-trip to Frankfurt or Virginia.

faas-cli new --lang node14 process-order

Your handler.js should focus on pure logic. The infrastructure handles the rest.

'use strict'

// node14 template: the watchdog hands you the parsed request as `event`
// and a `context` object used to shape the HTTP response.
module.exports = async (event, context) => {
  const result = {
    status: 'received',
    timestamp: new Date(),
    region: 'NO-OSL-1'
  }

  // Simulate DB write logic here
  return context
    .status(200)
    .succeed(result)
}
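Build, deploy, and invoke it. A sketch, assuming the image field in process-order.yml points at a registry the cluster can pull from (K3s runs containerd, so images built by a local Docker daemon are not automatically visible to the node):

faas-cli up -f process-order.yml

curl -s -d '{"orderId": 1042}' \
  http://127.0.0.1:8080/function/process-order
# {"status":"received","timestamp":"...","region":"NO-OSL-1"}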

The Economic & Legal Reality

Why go through this trouble?

1. Latency is the new downtime.
If your customers are in Norway, serving them from a datacenter in Oslo (like CoolVDS) versus a cloud region in Sweden or Ireland makes a tangible difference. TCP handshakes matter. On a proper fiber setup connected to NIX (Norwegian Internet Exchange), you are looking at sub-5ms ping times to most Norwegian ISPs.

2. Predictable Pricing.
Cloud FaaS is cheap until it isn't. An infinite loop in a Lambda function can drain a credit card overnight. With a VPS in Norway, your cost is capped. You pay for the instance. If you hit 100% CPU, you throttle; you don't go bankrupt.

3. Compliance (Schrems II).
The 2020 Schrems II ruling made using US-based cloud providers for European personal data legally complex. By hosting your own FaaS stack on CoolVDS, you ensure data residency. The drives are here. The memory is here. The legal jurisdiction is here.

Performance Tuning for Production

Default configurations are for safety, not speed. To handle bursts of traffic, you need to tune the OpenFaaS gateway.

Create a `values.yaml` override for the gateway configuration:

gateway:
  upstreamTimeout: "10s"
  writeTimeout: "10s"
  readTimeout: "10s"
  scaleFromZero: true

faasnetes:
  writeTimeout: "10s"
  readTimeout: "10s"
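arkade installs the official openfaas Helm chart under the release name openfaas, so the override can be applied with Helm directly. A sketch, assuming Helm is already on the box:

helm repo add openfaas https://openfaas.github.io/faas-netes/
helm repo update

helm upgrade openfaas openfaas/openfaas \
  --namespace openfaas \
  --values values.yaml \
  --reuse-values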

The scaleFromZero feature is controversial. It saves resources but introduces lag. On a high-performance CoolVDS instance with NVMe, the cold start of a Docker container is significantly faster than a VM boot, often under 400ms. But for critical paths, keep at least one replica running.
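Keeping that floor is a per-function setting, done with scaling labels in the stack file. A sketch against the process-order.yml generated earlier (the image name is illustrative):

functions:
  process-order:
    lang: node14
    handler: ./process-order
    image: registry.example.com/process-order:latest
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "5"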

Conclusion

Serverless architecture is brilliant. But you don't need Amazon to do it. You need robust, low-latency compute and the willingness to own your stack.

By layering K3s and OpenFaaS on top of dedicated resources, you gain the agility of serverless without the vendor lock-in or the data privacy concerns. You become the platform engineer.

Ready to build your own compliant cloud? Deploy a high-frequency NVMe instance on CoolVDS today and experience the difference raw I/O power makes for your Kubernetes clusters.