The Network is the Computer (and it's probably misconfigured)
If I had a krone for every time I've seen a Kubernetes cluster bring a production database to its knees not because of bad queries, but because of conntrack exhaustion or erratic latency between nodes, I'd have retired to a cabin in Lofoten by now. It is December 2025. We are past the era of "it works on my machine." If you are running high-performance workloads in Norway or elsewhere in Europe, default Kubernetes networking settings are not just inefficient; they are a liability.
Most developers treat the Kubernetes network as a black box. You create a Service, magic happens, and traffic flows. Until it doesn't. Until your latency spikes to 200ms between pods sitting on the same physical host because your hairpin NAT configuration is garbage. In this deep dive, we are cutting through the abstraction layers. We will look at eBPF (Extended Berkeley Packet Filter), the transition to the Gateway API, and why the physical location of your nodes—specifically relative to NIX (Norwegian Internet Exchange)—matters more than you think.
1. The Death of kube-proxy and the Rise of eBPF
For years, kube-proxy in iptables mode was the standard. It was reliable, understood by everyone, and incredibly slow at scale. Every Service update meant reloading massive iptables rulesets: O(n) complexity in an O(1) world.
By late 2025, running standard iptables-based networking for high-traffic clusters is negligence. We use eBPF to bypass the heaviest part of that path, the sprawling netfilter chains kube-proxy depends on. This isn't just about speed; it's about observability and security without the sidecar tax.
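To see the scale of the problem for yourself, count what kube-proxy has programmed on a legacy node. On clusters with thousands of Services, the ruleset easily runs to tens of thousands of lines, and every change rewrites it:
# On a node still running kube-proxy in iptables mode (for comparison)
sudo iptables-save | grep -c KUBE-SVC   # lines referencing per-Service chains
sudo iptables-save | wc -l              # total size of the ruleset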
Pro Tip: If you are deploying on CoolVDS, our kernels are tuned for eBPF throughput out of the box. Don't fight the legacy stack; replace it. We recommend Cilium as the CNI (Container Network Interface) to fully replace kube-proxy.
Configuring Cilium for Direct Routing
Tunneling (VXLAN/Geneve) is fine for convenience, but it adds encapsulation/decapsulation overhead to every packet. For raw performance, especially when your nodes already share a routed network segment, as they do on our NVMe storage instances, you want Direct Routing, which makes Pod IPs routable on the node network.
Here is the helm configuration we use for maximum throughput in our Oslo zones:
# cilium-values.yaml
kubeProxyReplacement: true
k8sServiceHost: "<API_SERVER_IP>"  # real, directly reachable API server address; the kubernetes ClusterIP (10.96.0.1) will not work once kube-proxy is gone
k8sServicePort: "443"
routingMode: "native"              # Direct routing, no tunnels
ipv4:
  enabled: true
autoDirectNodeRoutes: true         # Automatically install routes to other nodes
bpf:
  masquerade: true                 # Faster than iptables masquerading
loadBalancer:
  mode: "dsr"                      # Direct Server Return - crucial for latency!
Note the loadBalancer.mode: "dsr". With Direct Server Return, the return traffic from the backend pod goes directly to the client, bypassing the load balancer node. This cuts the hop count in half for the response path, which is often the heavy part of the flow (data retrieval).
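Assuming you are installing from the official Cilium Helm chart (pin a chart version in production), the values file is applied like this:
helm repo add cilium https://helm.cilium.io/
helm repo update
helm upgrade --install cilium cilium/cilium --namespace kube-system -f cilium-values.yaml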
Verifying the BPF Map
Once deployed, don't just trust the UI. Check the maps; the BPF load-balancer map below is what your service traffic actually hits, and every backend you expect should be listed in it:
kubectl -n kube-system exec ds/cilium -- cilium bpf lb list
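While you are in the agent, confirm that the kube-proxy replacement is actually active; if this reports False or Disabled, you are still on the legacy path:
kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement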
2. The Physical Layer: Why "VPS Norway" Actually Matters
You can tune your CNI until you are blue in the face, but you cannot code your way out of physics. If your customers are in Oslo, Bergen, or Trondheim, and your cluster is hosted in Frankfurt, you are starting with a 15-25ms handicap. In high-frequency trading or real-time gaming, that is an eternity.
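Don't take round-trip claims on faith; measure from inside the cluster. The hostname below is a placeholder for your own user-facing endpoint, and nicolaka/netshoot (also used in section 4) ships with mtr:
kubectl run latency-check --rm -it --restart=Never --image=nicolaka/netshoot -- mtr -rwc 50 your-endpoint.example.no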
Data residency is the other half of the coin. With the strict enforcement of GDPR and interpretations of Schrems II, keeping data within Norwegian borders (or at least EEA with strict guarantees) is often a legal requirement, not just a technical one. We see many devs using hyperscalers unaware that their "EU" zone might still be piping metadata across the Atlantic.
At CoolVDS, our infrastructure is peered directly at NIX. When a packet leaves your pod, it hits the Norwegian backbone almost instantly. We use enterprise-grade NVMe storage, meaning that when your database pod flushes to disk, I/O wait is negligible.
3. Gateway API > Ingress
The Ingress resource was a confused spec. It tried to be everything and succeeded at nothing specific. As of late 2025, the Gateway API is the de facto standard for defining how traffic enters your cluster. It separates the role of the Infrastructure Provider (us/NetOps) from that of the Application Developer.
Here is a real-world example of splitting traffic for a canary deployment—something that was painful with standard Ingress but trivial with Gateway API:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: production-traffic-split
  namespace: backend
spec:
  parentRefs:
    - name: external-gateway
      namespace: gateway-system
  hostnames:
    - "api.norway-service.no"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v2/orders
      backendRefs:
        - name: order-service-stable
          port: 8080
          weight: 90
        - name: order-service-canary
          port: 8080
          weight: 10
      filters:
        - type: RequestHeaderModifier
          requestHeaderModifier:
            add:
              - name: X-Region
                value: "NO-Oslo-1"
This configuration handles 90/10 traffic splitting natively. No Nginx rewrites, no Lua scripts. It just works.
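For completeness, the parentRefs above assume a Gateway roughly like the one below. The gatewayClassName depends on your controller (Cilium's Gateway API support registers one named cilium), and the TLS Secret name is a placeholder:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external-gateway
  namespace: gateway-system
spec:
  gatewayClassName: cilium            # controller-specific
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.norway-service.no"
      tls:
        certificateRefs:
          - name: norway-service-tls  # placeholder Secret holding the certificate
      allowedRoutes:
        namespaces:
          from: All                   # let HTTPRoutes in other namespaces attach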
4. Debugging When Things Go Wrong
When packets drop, "it's the network's fault" is the default excuse. Prove it. On a CoolVDS instance, you have full root access to the underlying node, which is critical for deep debugging. You aren't hidden behind a managed control plane that blocks dmesg.
First, check if the drop is at the policy level:
cilium monitor --type drop
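If Hubble is enabled in your Cilium deployment (it is not in the minimal values above), the same drops are easier to read, complete with source and destination identities:
kubectl -n kube-system exec ds/cilium -- hubble observe --verdict DROPPED --last 20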
If that's clean, look at the interface statistics for errors:
ethtool -S eth0 | grep errors
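Since the intro blamed conntrack exhaustion, also check how much headroom the connection-tracking table has left while you have root on the node; when count approaches max, the kernel starts dropping new flows and logs a "table full, dropping packet" warning in dmesg:
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max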
And finally, if you need to inspect traffic inside a specific pod without installing tools in the production image (security risk!), use an ephemeral debug container:
kubectl debug -it pod/payment-service-x89s --image=nicolaka/netshoot --target=payment-service -- tcpdump -i eth0 -n port 8080
This attaches a container with network tools to the running pod's network namespace. You can see exactly what the application sees.
5. Security: Network Policies are Mandatory
By default, Kubernetes allows all traffic between all pods. In 2025, with ransomware gangs scanning for open internal ports, this is unacceptable. You must implement a Zero Trust model. Start by denying everything, then whitelist.
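The baseline is a default-deny policy per namespace; a minimal ingress-deny looks like this (add Egress to policyTypes once you have mapped outbound flows such as DNS):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: database
spec:
  podSelector: {}
  policyTypes:
    - Ingress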
With that baseline in place, the next policy whitelists the backend API as the only client allowed to reach Postgres:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-access-control
  namespace: database
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: backend
          podSelector:
            matchLabels:
              app: api-server
      ports:
        - protocol: TCP
          port: 5432
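Trust, but verify: from any pod outside the whitelisted namespace, the port should now be unreachable. The DNS name below assumes a Service called postgres in the database namespace:
kubectl run policy-test --rm -it --restart=Never --image=nicolaka/netshoot -- nc -zv -w 2 postgres.database.svc.cluster.local 5432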
Conclusion
Kubernetes networking in 2025 is powerful, but unforgiving. By leveraging eBPF/Cilium, adopting the Gateway API, and enforcing strict Network Policies, you build a cluster that is resilient and observable.
However, software optimization hits a ceiling if the hardware is mediocre. Low latency requires physical proximity and high-performance I/O. Don't let your perfectly tuned K8s cluster sit on oversold hardware with noisy neighbors.
Ready to see the difference real hardware makes? Spin up a CoolVDS instance in our Oslo datacenter. Test the latency yourself. Your packets (and your users) will thank you.