Microservices in Production: 3 Architecture Patterns That Won't Wake You Up at 3 AM
Let’s be honest for a second. Most "microservices" architectures I audit in Norway are just distributed monoliths. They have all the complexity of a distributed system—latency, eventual consistency, tracing nightmares—with none of the decoupling benefits. I recently watched a generic e-commerce platform in Oslo implode during a flash sale, not because the code was bad, but because network latency between their poorly segmented services created a cascade of timeouts that no amount of caching could fix.
If you are deploying microservices in 2022 without a solid grasp of failure domains, you aren't an architect; you're a gambler. We are seeing a massive shift right now with Kubernetes 1.24 removing the dockershim and the industry finally taking service meshes seriously. But tools don't fix bad design.
Here is how to structure microservices so they survive production, specifically tailored for the high-compliance, high-performance needs of the Nordic market.
1. The Strangler Fig Pattern: Don't Rewrite, Re-route
The biggest mistake CTOs make is the "Big Bang" rewrite. It never works. Instead, we use the Strangler Fig pattern. You keep your legacy monolith running but place a proxy in front of it. You gradually strip out functionality, rewrite it as a microservice, and update the proxy to route traffic to the new service.
We use NGINX heavily for this at the edge. It’s battle-tested and adds negligible overhead if you configure the buffers correctly.
Here is a production-ready snippet for nginx.conf that splits traffic based on URI, allowing you to strangle the monolith one endpoint at a time:
upstream legacy_monolith {
    server 10.0.0.5:8080;
    keepalive 32;
}

upstream new_inventory_service {
    server 10.0.0.6:3000;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name api.yourservice.no;

    # Certificate paths are placeholders; point these at your own certs
    ssl_certificate     /etc/nginx/ssl/api.yourservice.no.crt;
    ssl_certificate_key /etc/nginx/ssl/api.yourservice.no.key;

    # SSL optimization for low latency
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # The new microservice handles inventory
    location /api/v1/inventory {
        proxy_pass http://new_inventory_service;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Everything else falls back to the monolith
    location / {
        proxy_pass http://legacy_monolith;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Pro Tip: Notice the `keepalive 32;` in the upstream block? If you omit this, NGINX opens a new connection for every request to your backend. On a high-traffic site, you will exhaust your ephemeral port range and hit `TIME_WAIT` limits faster than you can say "502 Bad Gateway".
2. The Circuit Breaker: Failing Fast
In a distributed system, slow is worse than down. If your Pricing Service hangs, it shouldn't take down the Checkout Service. In 2022, we are moving logic out of the application and into the sidecar (Envoy/Istio), but many teams in Norway still run lean setups without a full mesh.
If you are running Go, you implement this at the client level. If you are waiting 30 seconds for a timeout, you've already lost the user. Fail fast, return a default value or a cached response, and recover.
Here is a resilience pattern using the sony/gobreaker library. It protects your resources from being tied up by a dying dependency:
// Circuit breaker for the Pricing Service, using github.com/sony/gobreaker
package main

import (
    "fmt"
    "io"
    "net/http"
    "time"

    "github.com/sony/gobreaker"
)

// Short client timeout: fail fast instead of waiting out a hung dependency
var httpClient = &http.Client{Timeout: 2 * time.Second}

// Circuit breaker configuration
var cb = gobreaker.NewCircuitBreaker(gobreaker.Settings{
    Name:        "PricingService",
    MaxRequests: 3,                // probes allowed while half-open
    Interval:    5 * time.Second,  // window for clearing the failure counts
    Timeout:     30 * time.Second, // how long the breaker stays open before probing
    ReadyToTrip: func(counts gobreaker.Counts) bool {
        failureRatio := float64(counts.TotalFailures) / float64(counts.Requests)
        return counts.Requests >= 5 && failureRatio >= 0.6
    },
})

// fetchPrice wraps the HTTP call; once the breaker trips, calls fail
// immediately instead of tying up goroutines on a dying dependency.
func fetchPrice() ([]byte, error) {
    body, err := cb.Execute(func() (interface{}, error) {
        resp, err := httpClient.Get("http://pricing-service/get-price")
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 500 {
            return nil, fmt.Errorf("server error: %d", resp.StatusCode)
        }
        return io.ReadAll(resp.Body)
    })
    if err != nil {
        return nil, err
    }
    return body.([]byte), nil
}
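To actually "return a default value or a cached response", wrap the call site. Here is a minimal sketch that continues from the snippet above (`cachedPrice` is a hypothetical helper standing in for your own cache lookup, and the standard `errors` and `log` packages are assumed to be imported):
// Fallback at the call site: when the breaker is open, gobreaker returns
// gobreaker.ErrOpenState without touching the network, so we can serve a
// cached price immediately instead of letting the checkout flow hang.
func getPriceWithFallback() []byte {
    body, err := fetchPrice()
    if err != nil {
        if errors.Is(err, gobreaker.ErrOpenState) {
            log.Println("pricing breaker open, serving cached price")
        }
        return cachedPrice() // hypothetical cache/default lookup
    }
    return body
}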
3. Database-per-Service (and the I/O Reality)
This is the hardest pill to swallow. Shared databases are comfortable, but they create tight coupling. If Service A locks a table, Service B waits. In a proper architecture, the Inventory Service has its own Postgres instance, and the Order Service has its own MySQL instance. They communicate via events (Kafka or RabbitMQ), not SQL joins.
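To make that concrete, here is a minimal sketch of the event-driven handoff using the github.com/segmentio/kafka-go client. The broker address, topic name, and payload shape are my own placeholders, not a prescription:
package main

import (
    "context"
    "log"

    "github.com/segmentio/kafka-go"
)

func main() {
    // The Inventory Service publishes a domain event instead of letting the
    // Order Service reach into its tables with a SQL join.
    w := &kafka.Writer{
        Addr:     kafka.TCP("kafka.internal:9092"), // assumed broker address
        Topic:    "inventory-events",               // assumed topic name
        Balancer: &kafka.LeastBytes{},
    }
    defer w.Close()

    err := w.WriteMessages(context.Background(), kafka.Message{
        Key:   []byte("sku-4711"),
        Value: []byte(`{"event":"stock_reserved","sku":"sku-4711","qty":2}`),
    })
    if err != nil {
        log.Fatalf("publishing inventory event: %v", err)
    }
}
The Order Service consumes these events at its own pace and updates its own tables; neither service ever touches the other's schema.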
However, running 12 different database instances on cheap hardware is suicide. This is where infrastructure choice dictates architecture success. Databases are I/O heavy. If you run them on standard SATA SSDs or shared cloud storage with noisy neighbors, your microservices will bottleneck at the disk layer.
Database Optimization for Small Instances
If you are deploying a dedicated Postgres instance for a microservice on a 4GB VPS, you must tune postgresql.conf. Defaults are too conservative.
# PostgreSQL 14 tuning for a 4GB RAM VPS
shared_buffers = 1GB                   # ~25% of RAM
effective_cache_size = 3GB             # ~75% of RAM, planner hint only
maintenance_work_mem = 256MB
checkpoint_completion_target = 0.9     # spread checkpoint I/O over time
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1                 # random I/O is nearly free on NVMe
effective_io_concurrency = 200         # NVMe handles deep I/O queues
work_mem = 10MB                        # per sort/hash operation, keep it modest
min_wal_size = 1GB
max_wal_size = 4GB
Setting `random_page_cost` to 1.1 tells the query planner that random reads are almost as cheap as sequential ones on fast NVMe storage, making it more likely to choose index scans over sequential scans.
The Infrastructure Bottleneck
Architecture is abstract; hardware is reality. In the Nordics, we deal with specific challenges. Data sovereignty (Schrems II) means you can't just dump everything into a US-owned cloud bucket without risking a legal headache from Datatilsynet.
Furthermore, latency matters. If your users are in Oslo or Bergen, routing traffic through Frankfurt adds 20-30ms round trip. For a microservice chain calling 5 services deep, that latency compounds to 150ms of pure network wait time.
| Metric | Standard Cloud VPS | CoolVDS NVMe Instance |
|---|---|---|
| Storage Type | Networked SSD (Ceph/SAN) | Local NVMe |
| IOPS (Random 4k Write) | 3,000 - 5,000 | 50,000+ |
| Virtualization | Often Container-based | KVM (Kernel-based Virtual Machine) |
| Compliance | Usually US jurisdiction | 100% Norwegian/European |
This is why we architect on CoolVDS. We use KVM virtualization which guarantees that resources are hard-fenced. When I allocate 4 vCPUs for a Kubernetes worker node on CoolVDS, I don't have to worry about a neighbor stealing CPU cycles. Plus, the NVMe storage allows those decoupled databases to perform like they are on bare metal.
Security & Compliance in 2022
With the current geopolitical climate and the invalidation of Privacy Shield, keeping data within Norway is no longer optional for many sectors (Health, Finance, Gov). Hosting your microservices on CoolVDS ensures that the physical bits reside in Oslo, keeping you compliant with GDPR and local data residency laws. Also, standard DDoS protection filters out the noise before it hits your ingress controller.
Final Thoughts
Microservices solve organizational scaling problems, but they introduce technical ones. Don't fight the network and the disk at the same time. Adopt the Strangler pattern to migrate safely, use circuit breakers to keep the system standing when parts fail, and host it on infrastructure that respects your need for I/O and sovereignty.
If you are ready to stop fighting latency, spin up a KVM instance on CoolVDS today. It takes less than a minute, and you’ll finally see what your database can actually do.