All articles tagged with Kong
Slash latency and handle massive concurrency by optimizing the Linux kernel, NGINX buffers, and SSL termination. A deep dive for engineers targeting the Norwegian market.
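For a flavour of what that SSL-termination tuning looks like in practice, a minimal sketch, assuming NGINX terminates TLS at the gateway; the hostname, certificate paths, and cache sizes are illustrative placeholders, not values from the article:

```sh
# Hypothetical TLS-termination drop-in; hostname, paths, and sizes are placeholders.
cat > /etc/nginx/conf.d/tls-termination.conf <<'EOF'
server {
    listen 443 ssl http2;
    server_name api.example.no;

    ssl_certificate     /etc/nginx/certs/api.example.no.crt;
    ssl_certificate_key /etc/nginx/certs/api.example.no.key;

    # Reuse TLS sessions so returning clients skip the full handshake.
    ssl_session_cache   shared:SSL:20m;
    ssl_session_timeout 1h;

    ssl_protocols       TLSv1.2 TLSv1.3;

    # Smaller buffers favour time-to-first-byte; larger ones favour throughput.
    ssl_buffer_size     8k;
}
EOF
nginx -t && nginx -s reload   # validate before reloading
```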
Default API gateway configurations are bottlenecks waiting to happen. We dive deep into kernel tuning, upstream keepalives, and hardware selection to drop latency below 10ms.
A battle-hardened guide to optimizing API Gateways for Nordic traffic. We dive deep into kernel TCP stacks, Nginx upstream keepalives, and why underlying hardware latency dictates your 99th percentile.
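As a concrete illustration of the upstream-keepalive point, a minimal NGINX sketch; the backend address and pool size are assumptions, not taken from the article:

```sh
# Hypothetical upstream pool; the backend address and pool size are placeholders.
cat > /etc/nginx/conf.d/upstream-keepalive.conf <<'EOF'
upstream api_backend {
    server 10.0.0.10:8080;
    # Keep up to 64 idle connections per worker instead of opening
    # a fresh TCP (and possibly TLS) connection for every request.
    keepalive 64;
}

server {
    listen 80;
    location / {
        proxy_pass http://api_backend;
        # Both directives are required for upstream keepalive to take effect.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
EOF
nginx -t && nginx -s reload
```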
Stop blaming your backend code for 504 errors. We dissect the kernel-level bottlenecks and Nginx configurations causing latency spikes, specifically tailored for Norwegian infrastructure constraints.
Default configurations are killing your API throughput. We dive deep into Linux kernel tuning, Nginx 1.23 optimizations, and why NVMe storage is non-negotiable for low-latency workloads in 2022.
Default configurations are the enemy of low latency. Learn how to tune the Linux kernel, Nginx upstreams, and TLS termination to handle 10k+ RPS without choking, specifically tailored for the Nordic infrastructure landscape.
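To give a sense of the kernel territory involved, a rough sysctl sketch; the values are workload-dependent starting points, not prescriptions from the article:

```sh
# Illustrative /etc/sysctl.d drop-in; values are starting points, not prescriptions.
cat > /etc/sysctl.d/99-api-gateway.conf <<'EOF'
# Deeper accept queues so connection bursts are not silently dropped.
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535

# More ephemeral ports and faster reuse for gateway-to-upstream connections.
net.ipv4.ip_local_port_range = 10240 65535
net.ipv4.tcp_tw_reuse = 1

# File-descriptor headroom for tens of thousands of concurrent sockets.
fs.file-max = 2097152
EOF
sysctl --system   # apply without a reboot
```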
Latency isn't just a nuisance; it's an abandonment trigger. We dissect kernel-level optimizations for NGINX and Kong, the impact of NVMe I/O on throughput, and why data residency in Norway (Schrems II) is your hidden performance weapon.
Default configurations are killing your API performance. We dive deep into Linux kernel tuning, NGINX worker optimization, and why NVMe storage is non-negotiable for low-latency workloads in the post-Schrems II era.
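For orientation, a sketch of the worker-level knobs that kind of NGINX tuning revolves around; the numbers are assumptions to be sized against core count and ulimit -n, and they belong in the main context of nginx.conf:

```sh
# Sketch only: merge these into the main context of /etc/nginx/nginx.conf
# (replace the stock values; do not add a second events{} block).
cat <<'EOF'
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 65535;     # fd headroom for thousands of sockets per worker

events {
    worker_connections 16384;   # per worker; total capacity = workers x this
    multi_accept on;            # drain the accept queue in one wake-up
}
EOF
# after editing: nginx -t && nginx -s reload
```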
A deep dive into kernel-level optimizations, NGINX directives, and hardware requirements for low-latency API gateways. Essential reading for Norwegian DevOps navigating high-load environments and Schrems II compliance.
Your microservices aren't slow; your gateway configuration is. A deep dive into kernel tuning, upstream keepalives, and selecting the right infrastructure for low-latency APIs in the Nordics.
Stop blaming your backend code for latency. In 2018, the bottleneck is your kernel configuration and your hypervisor. A battle-hardened guide to tuning NGINX and Kong for high-throughput environments in Norway.
Bottlenecks in your API gateway can cripple your microservices. We dive into kernel-level tuning, Nginx worker optimization, and the infrastructure requirements needed to handle 10k+ requests per second in a pre-GDPR world.
Stop managing Nginx config files by hand. Learn how to deploy Kong as an API Gateway to centralize authentication, rate limiting, and logging for your microservices architecture, specifically optimized for high-performance KVM environments.
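To make that centralization concrete, a hedged sketch against the Kong Admin API on its default port 8001; the service name, upstream address, limits, and log endpoint are invented for illustration:

```sh
# Hypothetical example: names, addresses, and limits are placeholders.
# Register a backend service and a route in front of it.
curl -sX POST http://localhost:8001/services \
  --data name=orders-api \
  --data url=http://10.0.0.10:8080

curl -sX POST http://localhost:8001/services/orders-api/routes \
  --data 'paths[]=/orders'

# Centralize authentication and rate limiting at the gateway, not per service.
curl -sX POST http://localhost:8001/services/orders-api/plugins \
  --data name=key-auth

curl -sX POST http://localhost:8001/services/orders-api/plugins \
  --data name=rate-limiting \
  --data config.minute=600 \
  --data config.policy=local

# Ship access logs to a central collector instead of tailing per-service files.
curl -sX POST http://localhost:8001/services/orders-api/plugins \
  --data name=http-log \
  --data config.http_endpoint=http://logs.internal:9200/kong
```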