Stop SSH-ing into Production: A 2025 Guide to GitOps in the Nordics
If I catch you typing kubectl apply -f from your laptop, I’m revoking your access. I'm serious. In 2025, manual cluster manipulation isn't just "bad practice"—it's a liability. I’ve spent the last decade cleaning up "quick fixes" that turned into permanent outages. The only way to manage modern infrastructure, especially when you are dealing with the strict data compliance requirements here in Norway, is GitOps.
But GitOps isn't just installing ArgoCD and calling it a day. It is an architectural discipline. It requires infrastructure that doesn't buckle under the weight of constant reconciliation loops. Here is how we build battle-tested GitOps workflows that satisfy both the Datatilsynet (Norwegian Data Protection Authority) and your on-call engineers.
The "Source of Truth" Fallacy
The core promise of GitOps is simple: Git is the single source of truth. If it's not in the repo, it doesn't exist. If someone manually changes a resource in the cluster, the agent (ArgoCD or Flux) should detect the drift and revert it immediately.
However, this creates a heavy I/O tax. Your GitOps controller is constantly polling your Git repositories and your Kubernetes API server. I've seen control planes on budget VPS providers melt because they couldn't handle the etcd throughput required for managing 500+ microservices. Latency matters. If your API server is sluggish, your deployments hang.
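You can tune how hard the controller hammers your repos. ArgoCD reads its global reconciliation interval from the `timeout.reconciliation` key in the `argocd-cm` ConfigMap; the default is 180 seconds. A sketch of relaxing it (the 300s value is illustrative, not a recommendation for every workload):

```yaml
# Sketch: slow the global reconciliation loop to reduce Git polling I/O.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Default is 180s. Raising this trades drift-detection speed
  # for fewer repo fetches per hour.
  timeout.reconciliation: 300s
```

Pair this with webhook-triggered syncs so pushes still deploy instantly; the interval then only governs the fallback polling.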
The Stack: 2025 Standards
For a robust setup in the current landscape, we rely on the "Pull Model." We are avoiding CI-driven deployments (Push Model) because they expose credentials to the CI runner.
- Orchestration: Kubernetes 1.31+
- GitOps Controller: ArgoCD v2.12 (Stable)
- Secret Management: External Secrets Operator
- Base Metal: KVM-based Virtualization (Essential for kernel isolation)
1. Configuring the Application Manifest
Don't just use the defaults. You need to configure the sync policy to be aggressive but safe. Here is a production-ready Application manifest we use for high-traffic workloads:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nord-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:coolvds-ops/payment-gateway.git'
    targetRevision: HEAD
    path: k8s/overlays/oslo-prod
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ApplyOutOfSyncOnly=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
```
Pro Tip: Always enable selfHeal: true. Without it, you are just doing fancy CI/CD. With it, you have an automated immune system that rejects unauthorized changes.
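One caveat: selfHeal will also fight controllers that legitimately mutate resources, the classic case being a HorizontalPodAutoscaler changing replica counts. The escape hatch is `ignoreDifferences`, paired with the `RespectIgnoreDifferences=true` sync option so automated syncs honor it. A sketch against the example Application above (the Deployment name is illustrative):

```yaml
# Added to the Application spec: let the HPA own replica count
# while ArgoCD heals everything else.
spec:
  syncPolicy:
    syncOptions:
      - RespectIgnoreDifferences=true
  ignoreDifferences:
    - group: apps
      kind: Deployment
      name: nord-payment-gateway
      jsonPointers:
        - /spec/replicas
```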
Infrastructure: The Invisible Bottleneck
This is where many engineers fail. They design a beautiful software architecture and deploy it on shared hosting or oversold cloud instances. GitOps is resource-intensive. The reconciliation loop hits the disk hard.
When running this on CoolVDS, we utilize KVM virtualization backed by NVMe storage. Why? Because etcd (the brain of Kubernetes) is extremely sensitive to disk write latency. If fsync takes too long, the cluster leader election fails, and your GitOps syncs start timing out. We consistently see write latencies under 2ms on our Oslo NVMe instances, which is critical for maintaining stability during heavy deployment waves.
Comparison: Push vs. Pull Architecture
| Feature | Push (Jenkins/GitLab CI) | Pull (ArgoCD/Flux) |
|---|---|---|
| Security | CI needs full cluster admin access (Risky) | Agent lives inside cluster, pulls changes (Secure) |
| Drift Detection | None. Only checks during deploy. | Continuous. Reverts manual changes instantly. |
| Scalability | Complex firewall rules needed for multi-cluster. | Scales natively with cluster count. |
Solving the "Secret" Problem in Norway
You cannot commit `secrets.yaml` to Git. That is a GDPR violation waiting to happen, especially if you are handling Norwegian citizen data (Fødselsnummer). In 2025, the standard is the External Secrets Operator (ESO).
Instead of encrypted files in the repo (like SealedSecrets), ESO fetches secrets from a secure vault at runtime. Here is how we map a Vault secret to a Kubernetes secret without ever exposing the data in Git:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: backend
spec:
  refreshInterval: "1h"
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-secret
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: secret/data/production/db
        property: username
    - secretKey: password
      remoteRef:
        key: secret/data/production/db
        property: password
```
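The ExternalSecret above references a `vault-backend` store, which has to exist first. A minimal ClusterSecretStore sketch for a Vault setup using Kubernetes auth (the Vault address, role, and service account names are placeholders for your environment):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.internal.example:8200"  # placeholder address
      path: secret        # KV mount point
      version: v2         # KV v2 engine
      auth:
        kubernetes:
          mountPath: kubernetes
          role: external-secrets        # placeholder Vault role
          serviceAccountRef:
            name: external-secrets-sa   # placeholder service account
```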
This approach ensures that your data stays resident in the secure environment. When you host with CoolVDS in our Oslo facility, you add another layer of compliance: data sovereignty. The data never leaves the physical borders of Norway, satisfying the strictest interpretations of Schrems II.
The CI Pipeline Integration
GitOps handles delivery, but you still need CI for integration. Your CI pipeline should build the Docker image, push it to a registry, and then update the manifest repository. It should never touch the cluster directly.
Here is a snippet of a clean .gitlab-ci.yml build stage:
```yaml
build_image:
  stage: build
  image: docker:27.0.3
  services:
    - docker:27.0.3-dind
  script:
    # --password-stdin keeps the token out of the process list,
    # unlike passing it with -p.
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  only:
    - main
```
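The second half of the pipeline, bumping the image tag in the manifest repository, might look like the sketch below. The repo URL, token variable, and image name are placeholders; we assume a Kustomize overlay matching the `k8s/overlays/oslo-prod` path used in the Application above:

```yaml
update_manifests:
  stage: deploy
  image: alpine/k8s:1.31.0   # placeholder image shipping git + kustomize
  script:
    # MANIFEST_REPO_TOKEN is a placeholder CI variable with write access.
    - git clone "https://oauth2:${MANIFEST_REPO_TOKEN}@github.com/coolvds-ops/payment-gateway.git" manifests
    - cd manifests
    # Point the overlay at the freshly built image tag.
    - (cd k8s/overlays/oslo-prod && kustomize edit set image "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA")
    - git commit -am "Deploy $CI_COMMIT_SHA"
    - git push origin main
  only:
    - main
```

From here ArgoCD takes over: it notices the new commit on its next poll (or webhook) and rolls the change out, so the CI runner never holds cluster credentials.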
Network Latency and the NIX
If your GitOps controller is in Frankfurt but your users and developers are in Oslo, you are fighting physics. Pulling large container images across the continent slows down your Mean Time To Recovery (MTTR).
By peering directly at the NIX (Norwegian Internet Exchange), CoolVDS ensures that when ArgoCD pulls a new image, it travels over high-bandwidth, low-latency local routes. We are talking about reducing image pull times from 45 seconds to 4 seconds for heavy Java or .NET Core applications.
Handling Stateful Workloads
Stateless apps are easy. Databases are hard. When defining `StatefulSets` in a GitOps repo, you must ensure your PersistentVolumeClaims (PVCs) bind to high-performance storage classes.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvme-local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```
On CoolVDS, we map these storage classes directly to local NVMe arrays, avoiding network-attached storage overhead for database workloads. This is crucial for high-transaction systems.
Final Thoughts
GitOps is the only way to manage complexity at scale, but it exposes the weaknesses in your underlying infrastructure. You can have the perfect ArgoCD config, but if your virtualization layer has "noisy neighbors" stealing your CPU cycles, your reconciliation loops will lag.
Don't let slow I/O kill your deployment velocity. If you are building for the Nordic market, you need infrastructure that respects your need for speed and sovereignty.
Ready to harden your pipeline? Deploy a high-performance KVM instance on CoolVDS in Oslo today and experience the difference of raw NVMe power.