Is Serverless just a buzzword for 'someone else's computer'? We dismantle the hype, explore real-world event-driven patterns using Docker and RabbitMQ, and show why high-performance VPS infrastructure often beats public cloud FaaS on latency and cost.
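As a taste of the event-driven pattern the article covers, here is a minimal RabbitMQ worker sketch in Python. The pika client library, the queue name, and the docker run invocation are assumptions for illustration; the article itself does not prescribe a specific client.

```python
# Minimal event-driven worker sketch using the pika client (an assumption; the
# article does not name a library). Requires `pip install pika` and a broker,
# e.g. `docker run -p 5672:5672 rabbitmq:3`.
import pika

def handle(channel, method, properties, body):
    print("processing event:", body.decode())
    channel.basic_ack(delivery_tag=method.delivery_tag)  # ack only after success

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)  # survive broker restarts
channel.basic_qos(prefetch_count=1)                  # one unacked message per worker
channel.basic_consume(queue="events", on_message_callback=handle)
channel.start_consuming()
```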
The cloud promise of 'pay for what you use' often turns into 'pay for what you forgot to turn off'. Learn actionable strategies to slash infrastructure costs without sacrificing performance, from kernel tuning to selecting the right virtualization technology.
Microservices solve development bottlenecks but create operational nightmares. Learn how to implement a Service Mesh with Linkerd to rein in latency and gain observability before the GDPR deadline hits.
Microservices solved your code velocity problems but broke your network reliability. In this guide, we deploy Linkerd (v1.0) to handle service discovery and circuit breaking without polluting application code. Current as of March 2017.
A battle-hardened guide to tuning your API Gateway for maximum throughput and minimal latency using 2017's best practices. From sysctl kernel tweaks to upstream keepalives, we dissect the stack.
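For readers who want a starting point before the full article, the sketch below (Python, Linux only) reads a handful of the sysctls discussed and compares them against illustrative targets; the target values are assumptions chosen to demonstrate the check, not tuning recommendations.

```python
from pathlib import Path

# Illustrative target values only; tune against your own load tests.
TARGETS = {
    "net.core.somaxconn": "65535",
    "net.ipv4.tcp_tw_reuse": "1",
    "net.ipv4.ip_local_port_range": "1024 65535",
    "fs.file-max": "2097152",
}

def current(key: str) -> str:
    """Read the live value of a sysctl from /proc/sys (Linux only)."""
    path = Path("/proc/sys") / key.replace(".", "/")
    return path.read_text().splitlines()[0].strip()

if __name__ == "__main__":
    for key, want in TARGETS.items():
        have = current(key)
        status = "ok" if have.split() == want.split() else "differs"
        print(f"{key}: have={have!r} want={want!r} ({status})")
```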
When 'adding more RAM' stops working, you need a strategy. We dissect database sharding architectures relevant to high-traffic European workloads in 2017.
Is your deployment pipeline an excuse for a coffee break? We dissect the I/O bottlenecks killing your build times, implement ephemeral Docker agents, and optimize Jenkins 2.0 pipelines for the Nordic infrastructure landscape.
When vertical scaling hits the ceiling, sharding is the only way out. We explore practical sharding strategies using MySQL 5.7 and ProxySQL, tailored for low-latency infrastructure in Norway.
When your monolithic database hits the vertical ceiling, sharding is the nuclear option. We explore hash-based vs. range-based strategies, implementation patterns in MySQL 5.7, and why low-latency infrastructure in Oslo is critical for distributed data consistency.
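To make the hash-based vs. range-based distinction concrete, here is a small Python sketch of both routing functions; the shard count, the ranges, and the customer_id key are hypothetical examples, not a schema the article prescribes.

```python
import hashlib

SHARD_COUNT = 4  # hypothetical number of MySQL 5.7 shards

def hash_shard(customer_id: int) -> int:
    """Hash-based routing: even key distribution, but resharding moves most keys."""
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return int(digest, 16) % SHARD_COUNT

# Range-based routing: cheap range scans, but hot ranges can overload one shard.
RANGES = [(0, 1_000_000), (1_000_000, 2_000_000),
          (2_000_000, 3_000_000), (3_000_000, float("inf"))]

def range_shard(customer_id: int) -> int:
    for shard, (low, high) in enumerate(RANGES):
        if low <= customer_id < high:
            return shard
    raise ValueError("customer_id outside configured ranges")

if __name__ == "__main__":
    for cid in (42, 1_500_000, 9_999_999):
        print(cid, "hash ->", hash_shard(cid), "range ->", range_shard(cid))
```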
With the USD strengthening against the NOK, public cloud bills are spiraling. Learn how to audit your infrastructure, leverage PHP 7 performance, and right-size your stack to stop bleeding budget.
Microservices are breaking your network stability. Learn how to implement a Service Mesh using Linkerd on Kubernetes 1.5 to handle service discovery, retries, and latency without code changes.
Your microservices might be fast, but your gateway is likely the bottleneck. A deep dive into kernel tuning, NGINX optimization, and why hardware choices in 2017 dictate your API's survival.
It is late 2016, and if you are still clicking buttons in the Jenkins UI, you are doing it wrong. We explore moving to Pipeline-as-Code, fixing I/O bottlenecks with NVMe, and keeping your intellectual property compliant within Norwegian borders.
The 'Castle and Moat' security strategy is dead. In this guide, we dismantle the perimeter and implement strict access controls, 2FA SSH, and encrypted tunnels on Ubuntu 16.04, ensuring your data in Norway remains untouchable.
Is your API latency killing your mobile app retention? We dive deep into Nginx 1.10 tuning, Linux kernel optimization, and TCP stack tweaks on Ubuntu 16.04 to handle massive concurrency. No fluff, just raw performance.
For Scandinavian user bases, hosting in Frankfurt is a compromise you can't afford. We dive into TCP stack tuning, NIX peering, and why 'Edge' means local hardware in 2016.
Vertical scaling has a ceiling. When your MySQL instance starts choking on write-heavy loads, it's time to talk about sharding. We explore consistent hashing, topology planning, and why network latency in Oslo matters more than you think.
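Since consistent hashing is the centrepiece here, a minimal Python ring helps illustrate why it limits data movement when shards are added; the shard names and virtual-node count are assumptions for the example.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring: adding a shard remaps only ~1/N of the keys."""

    def __init__(self, shards, vnodes=100):
        self._ring = []  # sorted list of (hash point, shard name)
        for shard in shards:
            self.add(shard, vnodes)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def add(self, shard: str, vnodes: int = 100) -> None:
        # Place several virtual nodes per shard to smooth the distribution.
        for i in range(vnodes):
            bisect.insort(self._ring, (self._hash(f"{shard}#{i}"), shard))

    def locate(self, key: str) -> str:
        # Walk clockwise to the first point at or after the key's hash.
        idx = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["oslo-db1", "oslo-db2", "oslo-db3"])
print(ring.locate("customer:42"))  # deterministic shard assignment
```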
With the Safe Harbor framework invalidated and new EU regulations looming, manual security hardening is a liability. Learn how to automate server compliance using Ansible on CentOS 7 to satisfy auditors and secure your Nordic infrastructure.
It is October 2016, and everyone is rushing to containerize. But default Docker settings are a security nightmare waiting to happen. Here is how to harden your stack using namespaces, capabilities, and KVM isolation.
Centralized clouds are failing real-time applications. We explore how deploying logic closer to Norwegian users—using local KVM VPS and TCP tuning—solves the latency crisis.
Latency is the silent killer of conversion rates. In this guide, we strip away the marketing fluff to build a raw, effective monitoring stack using Nginx, MySQL 5.7, and the ELK stack, optimized for Norwegian data sovereignty.
It is late 2016. Safe Harbor is dead, ransomware is rampant, and your RTO is likely a lie. Here is how to build a battle-tested Disaster Recovery plan using KVM, NVMe, and Norwegian data sovereignty.
Default configurations are killing your API performance. We dive deep into kernel tuning, HTTP/2 optimizations, and connection pooling on Ubuntu 16.04 to handle thousands of concurrent requests without melting your CPU.
Is your API gateway choking under load? We dissect the Linux kernel parameters and Nginx configurations required to handle massive concurrency in 2016, specifically focusing on the Norwegian hosting landscape.
Is your Jenkins build queue stalling your release cycle? We analyze the impact of disk I/O on Docker builds, the transition to Jenkins 2.0 pipelines, and why data residency in Norway is critical following the new EU-US Privacy Shield agreement.
Stop relying on basic ping checks. Learn how to monitor I/O wait, steal time, and Nginx metrics to ensure your Norwegian VPS infrastructure survives high loads without melting down.
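As a concrete example of the first two metrics, this Python sketch (Linux only) samples /proc/stat twice and reports iowait and steal as a share of total CPU time over the interval; the one-second interval is an arbitrary choice.

```python
import time

FIELDS = ("user", "nice", "system", "idle", "iowait",
          "irq", "softirq", "steal", "guest", "guest_nice")

def cpu_times():
    """Read aggregate CPU jiffies from the first line of /proc/stat (Linux only)."""
    with open("/proc/stat") as f:
        values = f.readline().split()[1:]
    return dict(zip(FIELDS, map(int, values)))

def sample(interval=1.0):
    """Return (iowait, steal) as fractions of total CPU time over the interval."""
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    delta = {k: after[k] - before[k] for k in before}
    total = sum(delta.values()) or 1
    return delta["iowait"] / total, delta["steal"] / total

if __name__ == "__main__":
    iowait, steal = sample()
    print(f"iowait={iowait:.1%} steal={steal:.1%}")
```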
Monoliths are safe; distributed systems are chaotic. We explore the 2016 microservices landscape, from Service Discovery with Consul to handling NIX latency, and why fast I/O is non-negotiable.
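For the Consul piece, a basic discovery lookup can be as small as the Python sketch below; it assumes a local Consul agent on the default port 8500, the requests library, and a hypothetical service name.

```python
import requests

def healthy_instances(service: str, consul: str = "http://127.0.0.1:8500"):
    """Return (address, port) pairs for passing instances of `service`
    via Consul's health API."""
    resp = requests.get(f"{consul}/v1/health/service/{service}",
                        params={"passing": "1"})
    resp.raise_for_status()
    return [(e["Service"]["Address"] or e["Node"]["Address"], e["Service"]["Port"])
            for e in resp.json()]

print(healthy_instances("api-gateway"))  # hypothetical service name
```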
Is your Jenkins build giving you enough time to drive from Oslo to Bergen? We dissect the I/O bottlenecks killing your deployment speed and show you how to fix them using Jenkins 2.0, Docker, and NVMe infrastructure.
Is your API gateway becoming the bottleneck? We dive deep into kernel tuning, Nginx configuration, and the hardware reality required to handle high-concurrency traffic in 2016.
Is your application slow or is it just the network? Without proper APM, you are just guessing. We break down how to build a robust monitoring stack using ELK, Nginx, and system-level profiling on high-performance NVMe infrastructure.
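Before reaching for a full APM suite, latency percentiles can be pulled straight from Nginx access logs; the Python sketch below assumes a custom log_format that appends $request_time as the last field, which is not the stock combined format.

```python
import sys

def request_times(path):
    """Yield per-request latency, assuming $request_time is the final log field."""
    with open(path) as log:
        for line in log:
            try:
                yield float(line.rsplit(None, 1)[-1])
            except (ValueError, IndexError):
                continue  # skip lines without a trailing numeric field

def pct(sorted_vals, p):
    """Nearest-rank percentile on an already-sorted list."""
    idx = min(len(sorted_vals) - 1, int(len(sorted_vals) * p / 100))
    return sorted_vals[idx]

if __name__ == "__main__":
    times = sorted(request_times(sys.argv[1]))
    if times:
        print(f"requests={len(times)} p50={pct(times, 50):.3f}s "
              f"p95={pct(times, 95):.3f}s p99={pct(times, 99):.3f}s")
```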