The Era of "Click-Ops" is Dead
If you are still SSHing into your production servers to restart a service, or manually running kubectl apply -f deployment.yaml from your laptop, you are a liability. I say this not to be harsh, but because I have seen entire platforms vanish due to fat-finger errors during Friday afternoon deployments. In the Nordic market, where reliability is non-negotiable and Datatilsynet (The Norwegian Data Protection Authority) watches data flows like a hawk, you need a workflow that is auditable, reversible, and automated.
Enter GitOps. It is not just a buzzword; it is the only sanity-preserving way to manage infrastructure in 2022. By using Git as the single source of truth, we ensure that the state of our infrastructure is versioned, peer-reviewed, and automated. But implementing GitOps isn't just about installing ArgoCD and walking away. It requires underlying hardware capable of handling the reconciliation loops and a network architecture that respects data sovereignty.
The Architecture: Pull vs. Push
Traditional CI/CD relies on a "Push" model. Your Jenkins or GitLab CI runner builds the artifact and then pushes the configuration to the cluster. This is a security nightmare because it requires giving your CI tool god-mode credentials to your production environment.
In a properly architected GitOps workflow (the "Pull" model), an operator inside the cluster pulls its own configuration from Git; nothing outside the cluster needs administrative access to it. This matters in a post-Schrems II world, because it shrinks the attack surface dramatically. If your CI runner is hosted on a US-owned cloud but your production is in Norway, credential leakage becomes a massive compliance risk. Running your control plane on local, high-performance VDS instances mitigates this.
Pro Tip: When hosting your GitOps operator (like ArgoCD), latency matters. A reconciliation loop that takes 5 seconds due to slow I/O or network hops adds up when managing hundreds of microservices. We run our control planes on CoolVDS NVMe instances because the KVM isolation prevents "noisy neighbor" CPU stealing, keeping sync times under 200ms.
Tool Selection: The 2022 Landscape
Right now, the battle is primarily between ArgoCD and Flux v2. While Flux is fantastic for headless setups, ArgoCD provides a UI that developers appreciate for visibility.
| Feature | ArgoCD | Flux v2 |
|---|---|---|
| Architecture | Centralized Control Plane | Controller-based (microservices) |
| UI | Excellent, out-of-the-box | Limited (requires add-ons) |
| Multi-tenancy | Strong (AppProjects + RBAC) | Native via multi-tenancy lockdown |
| Resource Usage | Moderate to High | Low |
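For a sense of how the controller-based approach looks in practice, here is a minimal Flux v2 sketch (v1beta2 APIs, current at the time of writing); the repository URL and paths are illustrative and mirror the config repo used later in this article:

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: infra-config
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@github.com/my-org/infra-config.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: production
  namespace: flux-system
spec:
  interval: 5m
  path: ./k8s/overlays/production
  prune: true
  sourceRef:
    kind: GitRepository
    name: infra-config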
Implementation: The Repository Structure
Separation of concerns is vital. Do not keep your application source code and your Kubernetes manifests in the same repository. If you do, a simple README update triggers a deployment pipeline unnecessarily.
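A typical split looks roughly like this; the directory names are illustrative, and the overlay paths match the ArgoCD example further down:

myapp/                          # App repo: source code, Dockerfile, CI
├── src/
├── Dockerfile
└── .github/workflows/build.yaml

infra-config/                   # Config repo: desired cluster state
└── k8s/
    ├── base/
    │   ├── deployment.yaml
    │   ├── service.yaml
    │   └── kustomization.yaml
    └── overlays/
        ├── staging/
        │   └── kustomization.yaml
        └── production/
            └── kustomization.yaml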
1. The App Repo (Source Code)
This repo contains your Go, Python, or Node.js code and a Dockerfile. The CI pipeline (GitHub Actions or GitLab CI) runs tests, builds the image, and pushes it to a container registry.
# .github/workflows/build.yaml
name: Build and Push
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Log in to registry
        # Registry credentials are assumed to be stored as repository secrets
        run: echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.coolvds.com -u "${{ secrets.REGISTRY_USER }}" --password-stdin
      - name: Build and push Docker image
        run: |
          docker build -t registry.coolvds.com/myapp:${{ github.sha }} .
          docker push registry.coolvds.com/myapp:${{ github.sha }}
2. The Config Repo (Manifests)
This is where GitOps shines. This repository contains your Helm charts or Kustomize files. The final step of your CI pipeline should be to update the image tag in this repository, not touch the cluster directly.
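As a sketch of that final CI step, assuming Kustomize overlays and the image name from the workflow above (the deploy key, git identity, and branch name are up to you):

# Clone the config repo, bump the image tag in the production overlay, push
git clone git@github.com:my-org/infra-config.git
cd infra-config/k8s/overlays/production
kustomize edit set image registry.coolvds.com/myapp=registry.coolvds.com/myapp:${GITHUB_SHA}
git commit -am "deploy: myapp ${GITHUB_SHA}"
git push origin main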
Here is an example of an ArgoCD Application manifest that watches your config repository. Note the specific sync policy settings to ensure self-healing:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nordic-payment-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:my-org/infra-config.git'
    targetRevision: HEAD
    path: k8s/overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: payment-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
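Commit that manifest to the config repo (or apply it once to bootstrap), and ArgoCD takes it from there. A quick way to inspect the result from the CLI, assuming you are logged in and saved the manifest as application.yaml:

kubectl apply -n argocd -f application.yaml
argocd app get nordic-payment-gateway
argocd app sync nordic-payment-gateway    # rarely needed once selfHeal is on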
Handling Secrets (The "Zero Trust" Way)
You cannot check secrets.yaml into Git. It is the fastest way to get fired. In late 2022, the standard approach is to use Sealed Secrets (by Bitnami) or the External Secrets Operator integrated with a backend such as HashiCorp Vault.
For smaller teams deploying on CoolVDS, Sealed Secrets is efficient. It uses asymmetric encryption. You encrypt the secret using a public key (safe to commit to Git), and the controller inside the cluster decrypts it using the private key (which never leaves the cluster).
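As a sketch (the secret name and value are illustrative), generate the plain Secret manifest locally without ever applying or committing it, and fetch the controller's public certificate for offline sealing:

# Never commit this file; it only exists on your workstation
kubectl create secret generic db-credentials \
  --namespace production \
  --from-literal=password='example-only' \
  --dry-run=client -o yaml > secret.yaml

# Fetch the cluster's public certificate (controller namespace may differ)
kubeseal --fetch-cert --controller-namespace kube-system > pub-cert.pem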
To seal a secret locally:
kubeseal --format=yaml --cert=pub-cert.pem < secret.yaml > sealed-secret.yaml
And here is what you commit to your repository:
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  encryptedData:
    password: AgBy3...<long encrypted string>...==
  template:
    metadata:
      name: db-credentials
      labels:
        app: postgres
    type: Opaque
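Once this lands in Git and syncs, the in-cluster controller unseals it into a regular Secret. A quick sanity check (namespace and key as above):

kubectl -n production get secret db-credentials -o jsonpath='{.data.password}' | base64 -d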
Infrastructure Performance Matters
GitOps relies heavily on the Kubernetes API server and etcd. Every time ArgoCD checks Git against the cluster state, it queries the API. If your underlying storage is slow, your etcd latency spikes, leading to timeouts and failed synchronizations.
We see this constantly with providers who oversell resources. If the disk I/O wait is high, Kubernetes becomes unstable. This is why for our managed Kubernetes and self-hosted clusters on CoolVDS, we strictly use NVMe storage backends. We need fsync operations to complete in microseconds, not milliseconds.
To verify your etcd disk performance, drop into your node and run:
fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
The etcd maintainers' guideline is that the 99th percentile of fdatasync latency should stay below 10 ms; if your disk can't hit that, a 3-node etcd cluster will struggle and your GitOps workflow will crumble under load.
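On a running node you can also read etcd's own view of this from its Prometheus metrics endpoint. A sketch assuming kubeadm-style certificate paths and etcd listening on localhost (adjust for your distribution):

# Pull the WAL fsync latency histogram straight from etcd
curl -s --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/server.crt \
     --key /etc/kubernetes/pki/etcd/server.key \
     https://127.0.0.1:2379/metrics | grep wal_fsync_duration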
Conclusion: Latency and Sovereignty
Implementing GitOps is about more than just YAML files; it is about architectural integrity. By hosting your GitOps control plane in Norway (or localized European regions), you reduce latency to your end-users and simplify GDPR compliance. Using CoolVDS allows you to leverage the raw power of KVM and NVMe without the noisy neighbor issues typical of budget containers, ensuring your reconciliation loops happen instantly.
Don't let slow I/O kill your deployment velocity. Deploy a high-performance instance on CoolVDS today and build a pipeline that actually scales.