
Serverless Without the Lock-in: Building a Private FaaS Platform in Norway (2022 Edition)


Let’s get one thing straight immediately: "Serverless" is a marketing term. There are always servers. The only variable is whether you control them or whether you are renting a timeshare on a black box owned by a US tech giant. For many of us operating out of Oslo or dealing with strict EU data mandates, the "public cloud" serverless promise has started to sour. It tastes like cold starts, unpredictable billing spikes, and late-night panic about Schrems II compliance.

I have seen production deployments melt down because a recursive function trigger racked up a $5,000 bill in two hours. I have also seen latency spike to two seconds because a function went "cold" just as a customer tried to check out.

If you are serious about performance and data sovereignty in 2022, the pattern isn't just "use Lambda." It is Private FaaS (Functions as a Service). By running a lightweight Kubernetes distribution like K3s with OpenFaaS on high-performance infrastructure, you get the developer velocity of serverless with the cost predictability and raw I/O speed of a Virtual Dedicated Server (VDS).

The Architecture: Why Self-Hosted FaaS?

In Norway, we have specific challenges. The Data Inspectorate (Datatilsynet) is increasingly vigilant about data transfers outside the EEA. While US providers offer "EU regions," the legal framework remains murky post-Schrems II. Hosting your compute on a Norwegian VPS provider like CoolVDS eliminates that ambiguity. Your data stays here.

Furthermore, standard VPS instances in 2022 have become incredibly powerful. With the commoditization of NVMe storage, the I/O bottleneck—which used to make self-hosted container orchestration painful—is effectively gone.

The Stack

  • Infrastructure: CoolVDS Compute Instance (Ubuntu 22.04 LTS). We need fast NVMe storage because container image pulls are I/O intensive.
  • Orchestrator: K3s. It is a lightweight Kubernetes certified distribution. It strips away the bloat of standard K8s, making it perfect for a single-node or small cluster VDS environment.
  • FaaS Framework: OpenFaaS. It is simpler than Knative and creates standard Docker containers as functions.

Step 1: Preparing the Node

First, we need to ensure our kernel is tuned for container traffic. Stock kernel settings are often too conservative for high-throughput microservices.

Access your CoolVDS instance and apply the following sysctl tweaks to handle high connection rates and fast failovers:

# /etc/sysctl.d/99-k8s-networking.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384

Apply them:

sudo sysctl --system

Pro Tip: If you are serving traffic to Norwegian users, check your latency to NIX (Norwegian Internet Exchange). CoolVDS peers directly there, meaning your handshake times are often sub-5ms within the country. Public clouds routing through Stockholm or Frankfurt can't compete with that physics.

Step 2: Deploying K3s

We don't need a heavy kubeadm setup here. K3s installs a production-ready cluster in seconds. Note that we disable the default Traefik ingress controller because we will install an ingress controller manually later, which gives us finer control over SSL termination and DDoS protection configurations.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -

Check the status. It should be ready in under 30 seconds on NVMe storage.

sudo k3s kubectl get nodes

Step 3: Installing OpenFaaS

Now we layer the serverless framework on top. We will use arkade, a CLI tool that simplifies Kubernetes app installation and has seen wide adoption in 2022, for good reason.

# Install arkade
curl -sLS https://get.arkade.dev | sudo sh

# Install OpenFaaS
arkade install openfaas

This command installs the core components: the Gateway, the Provider, and NATS for asynchronous queueing. Once installed, retrieve your password:

kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo
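With the password in hand, you can script against the gateway's synchronous invoke endpoint (`/function/<name>`) instead of going through faas-cli. A minimal standard-library sketch; the gateway URL, function name, and credentials are placeholders you would substitute with your own:

```python
import base64
import urllib.request


def build_invoke_request(gateway, name, user, password, payload=b""):
    """Build an authenticated POST request for the gateway's sync invoke route.

    /function/<name> is the OpenFaaS synchronous invocation path; the
    basic-auth credentials match the secret retrieved above.
    """
    req = urllib.request.Request(f"{gateway}/function/{name}", data=payload)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req


# Sending it is one call (requires a reachable gateway):
# with urllib.request.urlopen(build_invoke_request(
#         "http://127.0.0.1:8080", "resize-image", "admin", "s3cret")) as resp:
#     print(resp.read())
```
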

Step 4: A Practical Code Example

Let's create a function that actually does something useful, like image resizing—a common task that is expensive on Lambda due to memory requirements. Since we are on a VPS, we have dedicated RAM, so we don't pay per GB-second.
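To make the cost argument concrete, here is a back-of-envelope comparison. The per-GB-second rate and the flat VPS price below are illustrative assumptions, not quoted prices:

```python
# Back-of-envelope: metered per-invocation billing vs. a flat-rate VPS.
# All prices are illustrative assumptions, not current list prices.
GB_SECOND_PRICE = 0.0000166667   # typical public-cloud FaaS rate, USD

invocations_per_month = 5_000_000
mem_gb, avg_seconds = 1.0, 0.8   # memory reservation and average duration

faas_bill = invocations_per_month * mem_gb * avg_seconds * GB_SECOND_PRICE
vps_bill = 40.0                  # hypothetical flat monthly VPS price

print(f"Metered FaaS: ${faas_bill:,.2f}/month vs flat VPS: ${vps_bill:,.2f}/month")
```

The metered bill scales linearly with traffic; the VPS bill does not. At memory-heavy workloads like image resizing, the crossover point arrives quickly.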

Install the CLI:

curl -sL https://cli.openfaas.com | sudo sh

Create a Python function:

faas-cli new --lang python3 resize-image

Modify resize-image/handler.py. Notice we can import standard libraries. In a real-world scenario, you would add Pillow to your requirements.txt.

def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """
    # Simulating processing logic
    return "Processed image with high I/O throughput on local NVMe."
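If you want the handler to do real byte-level work before pulling in Pillow, here is a sketch that reads the dimensions out of a PNG upload using only the standard library. It assumes the watchdog hands the function raw bytes (the classic template passes a str; binary handling depends on watchdog configuration), and the actual resize is left as a comment since it would need Pillow:

```python
import struct


def png_dimensions(data: bytes):
    """Extract (width, height) from a PNG byte stream.

    The 8-byte PNG signature is followed by the IHDR chunk: a 4-byte
    length, the 4-byte type "IHDR", then width and height as
    big-endian uint32 values.
    """
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG stream")
    width, height = struct.unpack(">II", data[16:24])
    return width, height


def handle(req: bytes) -> str:
    """Handle a request carrying raw PNG bytes."""
    w, h = png_dimensions(req)
    # A real implementation would resize here, e.g. with Pillow (PIL.Image).
    return f"Received {w}x{h} PNG; resizing on local NVMe."
```
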

Deploying this to your local cluster minimizes the feedback loop. No waiting for S3 uploads.

faas-cli up -f resize-image.yml

Performance: The NVMe Factor

Why run this on CoolVDS instead of a Raspberry Pi cluster in your basement? Disk I/O.

Serverless relies heavily on starting containers fast (Cold Starts). When a request hits a scaled-to-zero function, the system must pull the Docker image, unpack the layers, and start the process.
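You can observe the cold-versus-warm gap empirically with a small timing helper; the gateway URL in the usage comment is a placeholder, and the first call after scale-to-zero includes the pull-and-start cost:

```python
import time
import urllib.request


def time_call(fn, *args):
    """Return (elapsed_seconds, result) for one call to fn.

    Calling a scaled-to-zero function twice shows the gap: the first
    call pays the cold start, the second hits a warm container.
    """
    start = time.perf_counter()
    result = fn(*args)
    return time.perf_counter() - start, result


# Usage against a deployed function (requires a reachable gateway):
# url = "http://127.0.0.1:8080/function/resize-image"
# cold, _ = time_call(lambda: urllib.request.urlopen(url).read())
# warm, _ = time_call(lambda: urllib.request.urlopen(url).read())
# print(f"cold: {cold:.3f}s, warm: {warm:.3f}s")
```
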

Metric                      | Standard HDD VPS | CoolVDS NVMe
----------------------------|------------------|-------------
Docker image pull (500 MB)  | ~8.5 seconds     | ~1.2 seconds
Container unpack            | ~4.0 seconds     | ~0.6 seconds
Total cold start            | High latency     | Near instant

On spinning metal, your database and your container runtime fight for IOPS. On NVMe, you have enough bandwidth to saturate the CPU before the disk chokes. This is critical for functions that require state or large dependencies.

Security & Compliance (GDPR)

By keeping the architecture entirely within a Norwegian datacenter, you simplify your GDPR compliance posture significantly. You know exactly where the physical drive is spinning (or flashing, in the case of NVMe). There is no hidden replication to a region in Virginia.

Additionally, you can configure network policies in K3s to lock down communication between functions.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-external-egress
  namespace: openfaas-fn
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: openfaas

This policy restricts function egress to pods in the openfaas namespace, i.e. the gateway and its components. Functions cannot reach the open internet, which blocks straightforward data exfiltration. Note that DNS lookups to kube-system are also blocked by this rule; add an egress exception for it if your functions need name resolution.

Conclusion

Serverless is a powerful architectural pattern, but it shouldn't cost you your autonomy. By deploying OpenFaaS on top of K3s, you regain control over costs, latency, and data privacy. The "magic" of serverless is just orchestration, and in 2022, the tools to manage that orchestration are stable and mature enough for any competent DevOps team to handle.

However, the software is only as good as the hardware it runs on. If your underlying hypervisor is stealing CPU cycles or your storage is slow, your functions will lag. That is where we come in.

Ready to build your own private cloud? Deploy a high-performance NVMe instance on CoolVDS today and get your K3s cluster humming in under 60 seconds.