Stop Cowboy Coding: A Battle-Tested GitOps Workflow for Norwegian High-Availability Clusters

If you are still SSH-ing into your production nodes to edit a config file, stop. If you are running kubectl apply -f deployment.yaml from your laptop, you are a liability. I’ve seen entire clusters melt down because a developer panicked and manually patched a hotfix, only for the CI system to overwrite it ten minutes later during a scheduled run. It’s messy, it’s unprofessional, and in a strict regulatory environment like Norway, it’s practically illegal.

GitOps isn't just a buzzword used to sell conference tickets. It is the only sane way to manage infrastructure at scale. In 2025, if your infrastructure state doesn't match your Git repository, your infrastructure effectively doesn't exist.

The Core Principle: Drift is the Enemy

The concept is simple: Git is the single source of truth. Not your memory, not the dashboard, and definitely not the current state of the etcd database. We utilize a reconciliation loop (usually an operator inside Kubernetes) to force the cluster state to match Git.

Why does this matter specifically for Norwegian deployments? Audit trails.

When Datatilsynet (The Norwegian Data Protection Authority) comes knocking asking who changed the ingress rules that exposed customer data, pointing at a commit hash signed with a GPG key is your safety net. "Jeff logged into the server" is not an acceptable audit trail.
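A GitOps controller like ArgoCD (more on the tooling below) can even enforce this at sync time. A minimal sketch of an AppProject that refuses to sync unsigned commits; the key ID is a placeholder, and this assumes ArgoCD's GPG verification feature is configured:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: prod
  namespace: argocd
spec:
  sourceRepos:
    - https://gitlab.com/org/gitops-repo.git
  destinations:
    - server: https://kubernetes.default.svc
      namespace: '*'
  # Only commits signed by these GPG key IDs will sync
  signatureKeys:
    - keyID: 4AEE18F83AFDEB23   # placeholder: your release manager's public key ID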

The Stack: ArgoCD vs. Flux

In 2025, the war is mostly between ArgoCD and Flux v2. Both are CNCF graduated projects. I’ve run both on bare metal and virtualized substrates. Here is the pragmatic breakdown:

Feature          ArgoCD                                       Flux v2
UI/Visibility    Excellent dashboard, visualizes topology.    Minimalist, CLI-focused.
Multi-Tenancy    Native SSO integration (OIDC).               Relies primarily on Kubernetes RBAC.
Resource Usage   Heavy (needs decent memory).                 Lightweight (Go controllers).

For most teams operating out of Oslo or Stockholm, I recommend ArgoCD because the visual feedback loop is critical during incidents. However, ArgoCD is resource-hungry. It demands a stable control plane. This is where your underlying infrastructure matters. We host our ArgoCD control planes on CoolVDS NVMe instances because the I/O latency on standard HDD VPS providers causes the Redis cache to choke during massive reconciliation waves.
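With ArgoCD, each deployable unit is declared as an Application pointing at a path in the Git repo. A minimal sketch, using the repo URL and overlay layout from the next section; selfHeal is the flag that actually reverts manual drift:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.com/org/gitops-repo.git
    targetRevision: main
    path: overlays/prod-oslo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual changes made outside Git

With selfHeal enabled, even a well-intentioned kubectl edit gets stomped within seconds. That is the point.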

Structuring the Repository: The Separation of Concerns

Do not put your application source code and your Kubernetes manifests in the same repository. It creates a CI loop from hell where a config change triggers a binary rebuild. Split them.

The Directory Tree

Here is the structure I use for clients deploying strictly within the EU/EEA zone:

gitops-repo/
β”œβ”€β”€ base/
β”‚   β”œβ”€β”€ nginx-ingress/
β”‚   β”œβ”€β”€ cert-manager/
β”‚   └── my-app/
β”‚       β”œβ”€β”€ deployment.yaml
β”‚       β”œβ”€β”€ service.yaml
β”‚       └── kustomization.yaml
β”œβ”€β”€ overlays/
β”‚   β”œβ”€β”€ dev/
β”‚   β”‚   β”œβ”€β”€ kustomization.yaml
β”‚   β”‚   └── replicas_patch.yaml
β”‚   └── prod-oslo/
β”‚       β”œβ”€β”€ kustomization.yaml
β”‚       β”œβ”€β”€ resource_limits_patch.yaml
β”‚       └── ingress_patch.yaml
└── clusters/
    β”œβ”€β”€ oslo-01/
    └── bergen-dr/

Using Kustomize allows us to keep a DRY (Don't Repeat Yourself) base and overlay environment-specific configuration. For the prod-oslo overlay, we specifically tune for the hardware.
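For illustration, the prod-oslo overlay's kustomization.yaml might look like this; the replica count and patch file names are examples matching the tree above:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/my-app
  - ../../base/nginx-ingress
  - ../../base/cert-manager
patches:
  - path: resource_limits_patch.yaml
  - path: ingress_patch.yaml
replicas:
  - name: my-app
    count: 3   # example: prod runs more replicas than the base defines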

The Pipeline: CI to GitOps Handoff

Your CI pipeline (GitLab CI or GitHub Actions) should not touch kubectl. Its only job is to build the container, push it to the registry, and then commit the new tag to the GitOps repository.

Here is a stripped-down GitLab CI job demonstrating this secure handoff:

update_manifest:
  stage: deploy
  image: alpine/git:v2.45.2
  script:
    # alpine/git ships only git; install kustomize from Alpine's community repo
    - apk add --no-cache kustomize
    # git requires both a name and an email to commit
    - git config --global user.name "CI Bot"
    - git config --global user.email "ci-bot@coolvds.com"
    - git clone https://oauth2:${GIT_TOKEN}@gitlab.com/org/gitops-repo.git
    - cd gitops-repo/overlays/prod-oslo
    # We use kustomize to set the new image tag cleanly
    - kustomize edit set image my-app=registry.coolvds.com/app:${CI_COMMIT_SHORT_SHA}
    - git commit -am "Bump image to ${CI_COMMIT_SHORT_SHA}"
    - git push origin main
  only:
    - main

Once this commit lands, ArgoCD detects the drift and syncs the cluster. No credentials for the production cluster ever exist inside the CI runner. This is crucial for security compliance.
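The only credential in play is a read token for the Git repo, registered on the cluster side. In ArgoCD, that registration can itself be declarative, as a labeled Kubernetes Secret; the token value here is obviously a placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: gitops-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository   # tells ArgoCD this Secret defines a repo
stringData:
  type: git
  url: https://gitlab.com/org/gitops-repo.git
  username: argocd-bot
  password: glpat-REDACTED   # placeholder: a read-only GitLab access token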

Pro Tip: Network latency kills reconciliation speed. If your Git repo is hosted on GitHub (US) but your cluster is in Norway, you are adding 100ms+ to every fetch. We mirror our critical GitOps repos to a private GitLab instance hosted on CoolVDS in Oslo to keep fetch times under 5ms.

Secrets Management: The "Schrems II" Headache

You cannot check secrets into Git. Obvious, right? But storing them in a US-hosted SaaS vault creates GDPR data-transfer problems under the Schrems II ruling. You need the secrets to live and die in Norway.

I rely on the External Secrets Operator (ESO) coupled with a local HashiCorp Vault or a secure K8s secret store. However, for smaller deployments, Sealed Secrets by Bitnami is still the standard for "Git-friendly" encryption.
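As a sketch of the ESO approach, an ExternalSecret pulls the value from the local Vault at runtime, so only a reference ever reaches Git; the store name and Vault path here are hypothetical:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-creds
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-oslo          # hypothetical ClusterSecretStore pointing at a Vault in Norway
    kind: ClusterSecretStore
  target:
    name: db-creds            # the plain Secret that ESO materializes in-cluster
  data:
    - secretKey: password
      remoteRef:
        key: prod/db          # path within the KV mount configured on the store
        property: password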

To seal a secret locally before pushing:

kubectl create secret generic db-creds \
  --from-literal=password=SuperSecure!123 \
  --dry-run=client -o yaml | \
  kubeseal --controller-name=sealed-secrets \
  --format=yaml > sealed-secret.yaml

Now sealed-secret.yaml is safe to commit. Only the controller running inside your cluster can decrypt it.
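The committed file looks roughly like this (ciphertext abridged):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: default
spec:
  encryptedData:
    password: AgB7pX...   # opaque blob, useless without the controller's private key
  template:
    metadata:
      name: db-creds
      namespace: default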

The Hardware Reality: Why Virtualization Matters

You can have the most beautiful YAML in the world, but if the underlying hypervisor steals CPU cycles, your Ingress Controller will drop packets. GitOps controllers like the ArgoCD application controller poll constantly; they are essentially infinite reconciliation loops.

On oversold shared hosting, these loops get throttled. I've debugged clusters where ArgoCD took 5 minutes to recognize a change because the host CPU was choked by a noisy neighbor.
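Whatever you run on, at least give the application controller guaranteed resources so the scheduler cannot starve it. A sketch of a strategic-merge patch against the standard ArgoCD install; the numbers are illustrative, not a sizing recommendation:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: argocd-application-controller
  namespace: argocd
spec:
  template:
    spec:
      containers:
        - name: argocd-application-controller
          resources:
            requests:          # requests == limits gives the pod Guaranteed QoS
              cpu: "1"
              memory: 2Gi
            limits:
              cpu: "1"
              memory: 2Gi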

This is why for production workloads, we specify CoolVDS. We use KVM (Kernel-based Virtual Machine) virtualization which provides strict isolation. Unlike OpenVZ or LXC containers used by budget providers, KVM ensures that the memory you allocate to your GitOps controller is actually yours. When we route traffic through the Norwegian Internet Exchange (NIX), we need the virtualization layer to get out of the way.

Essential Commands for Verification

When you set up your GitOps controller, verify the latency to your Git source immediately:

kubectl exec -it -n argocd argocd-server-xyz -- curl -w "%{time_total}\n" -o /dev/null -s https://gitlab.com

If that number is high, your reconciliation loop will lag. Also ensure your etcd storage is fast enough to handle the constant state updates; what etcd actually cares about is fsync latency, so include fdatasync in the test:

fio --name=random-write --ioengine=libaio --rw=randwrite --bs=4k --numjobs=1 --size=1g --iodepth=1 --fdatasync=1 --runtime=60 --time_based

On CoolVDS NVMe instances, we consistently see fsync latencies well within etcd's guideline of a 99th percentile under 10ms, with none of the dreaded "etcdserver: leader changed" warnings.

Final Thoughts

GitOps is about discipline. It removes the "human error" variable from deployment, but it adds an "infrastructure reliability" variable. Your control plane becomes mission-critical.

Don't build a robust logical pipeline on top of a fragile physical foundation. Ensure your cluster runs on hardware that respects data sovereignty and provides consistent I/O performance.

Ready to stabilize your control plane? Deploy a high-performance KVM instance on CoolVDS today and get your latency to Oslo down to single digits.