GitOps in Production: Stop Manual kubectl apply Before You Wreck Your Cluster
If you are still SSHing into your production server to edit an Nginx config, or running kubectl apply -f deployment.yaml from your local laptop, you are creating a ticking time bomb. It's 2022. We have moved past the era of "pet" servers and manual intervention. I've seen entire clusters in Oslo go dark because a senior engineer manually tweaked a resource limit to fix a hot issue, went on vacation, and then the CI pipeline overwrote the fix on Monday morning. Total chaos.
The solution isn't just "more automation." It is GitOps. But implementing GitOps isn't just about installing ArgoCD and calling it a day. It requires a fundamental shift in how you view infrastructure, especially here in Europe where Schrems II has made data sovereignty a legal minefield.
The Architecture of Truth
In a proper GitOps workflow, Git is the only source of truth. If it is not in the repo, it does not exist. This eliminates configuration drift, a silent killer of uptime. There are two primary ways to handle this: the Push model (traditional CI/CD pipelines) and the Pull model (an operator inside the cluster).
For high-security environments, like those we host for fintech clients in Norway, the Pull model is superior. Why? Because your CI system (Jenkins, GitLab CI) never needs credentials for your production cluster. Instead, an agent inside the cluster (ArgoCD or Flux) pulls the changes.
The Stack: What Works in 2022
- VCS: GitLab (Self-hosted or SaaS)
- Controller: ArgoCD v2.2+
- Templating: Kustomize (simpler than Helm for pure ops)
- Infrastructure: CoolVDS KVM Instances (NVMe backed)
Step 1: The Repository Structure
Do not mix your application source code with your deployment manifests. I see this mistake constantly. It creates a loop where a config change triggers a binary build. Separate them.
```
├── apps/
│   └── backend-api/
│       ├── base/
│       │   ├── deployment.yaml
│       │   ├── service.yaml
│       │   └── kustomization.yaml
│       └── overlays/
│           ├── production/
│           │   └── kustomization.yaml
│           └── staging/
│               └── kustomization.yaml
└── cluster-config/
```
Here is how a clean kustomization.yaml looks for a production overlay. We are overriding the replicas and resource limits specifically for the high-performance nodes provided by CoolVDS.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patchesStrategicMerge:
  - |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backend-api
    spec:
      replicas: 5
      template:
        spec:
          containers:
            - name: app
              resources:
                requests:
                  memory: "1Gi"
                  cpu: "500m"
                limits:
                  memory: "2Gi"
                  cpu: "1000m"
```
Step 2: The ArgoCD Application Definition
Once your manifests are in Git, you define an Application CRD. This tells the controller where to look and where to deploy.
Pro Tip: Always set prune: true. If you delete a file in Git, it should be deleted from the cluster. If you don't enable this, you leave orphaned resources consuming memory that you are paying for.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: backend-api-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:your-org/infra-manifests.git'
    targetRevision: HEAD
    path: apps/backend-api/overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
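You apply this Application manifest once by hand to bootstrap; from then on, the controller owns the loop. A quick sketch, where the file name app-prod.yaml is just an illustration:

```bash
# One-time bootstrap: register the Application with ArgoCD
kubectl apply -n argocd -f app-prod.yaml

# Inspect sync state (requires the argocd CLI and a logged-in session)
argocd app get backend-api-prod
```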
Step 3: Handling Secrets (The "Schrems II" Factor)
You cannot commit raw secrets to Git. In 2022, this is non-negotiable. Especially if you are operating under Norwegian Datatilsynet jurisdiction, leaking customer database credentials is a GDPR nightmare.
We recommend Sealed Secrets by Bitnami for mid-sized deployments. It uses asymmetric encryption. You encrypt with a public key locally, commit the gibberish, and the controller inside the cluster decrypts it with the private key.
Workflow:
- Developer creates secret.yaml locally.
- Run kubeseal < secret.yaml > sealed-secret.json.
- Commit sealed-secret.json to Git.
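In practice that looks roughly like the following. The secret name db-credentials and its contents are purely illustrative:

```bash
# Generate the Secret manifest locally; --dry-run=client never touches the cluster
kubectl create secret generic db-credentials \
  --from-literal=password='not-a-real-password' \
  --dry-run=client -o yaml > secret.yaml

# Encrypt against the controller's public key, then discard the plaintext
kubeseal < secret.yaml > sealed-secret.json
rm secret.yaml

# Only the encrypted artifact ever reaches Git
git add sealed-secret.json
```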
The Infrastructure Reality Check
GitOps is heavy on the control plane. ArgoCD is constantly polling your Git repo and diffing it against the cluster state. If your underlying infrastructure suffers from high I/O latency or CPU steal time, your reconciliation loops will lag. You will push code, and nothing will happen for 5 minutes.
This is where the "noisy neighbor" problem on cheap shared hosting kills you. We built CoolVDS on KVM with local NVMe storage specifically to eliminate this bottleneck. When you run kubectl get pods on our infrastructure, the response is instant because the I/O wait is virtually zero.
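You do not have to take that on faith; any node can be checked with standard tools. A quick sketch (iostat ships with the sysstat package):

```bash
# 'st' = CPU steal, 'wa' = I/O wait; sustained non-zero values mean noisy neighbors
vmstat 1 5

# Per-device latency; high await on the etcd volume slows every reconciliation
iostat -x 1 3
```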
Comparison: Hosting for GitOps
| Feature | Budget VPS | CoolVDS NVMe |
|---|---|---|
| Disk I/O | SATA/SSD (Shared) | Enterprise NVMe (High IOPS) |
| Virtualization | Container (LXC/OpenVZ) | KVM (Kernel-based) |
| Kernel Access | Restricted | Full (Required for eBPF tools) |
| Location | Often Unknown | Oslo/EU (GDPR Compliant) |
Latency Matters: The Norwegian Context
If your Git repository (e.g., GitLab) is hosted in Europe and your cluster is in the US, every sync pays the transatlantic round trip. But more importantly, if you are processing Norwegian user data, transferring that data across the Atlantic relies on Standard Contractual Clauses (SCCs), which have faced intense scrutiny since the Schrems II ruling in 2020.
By hosting your Kubernetes nodes on CoolVDS in Norway or strictly within the EEA, you simplify your compliance posture. Plus, the latency to NIX (Norwegian Internet Exchange) is typically under 2ms from our datacenter. Speed isn't just about CPU; it's about network topology.
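If you want hard numbers rather than claims, time the operation ArgoCD performs on every polling cycle from a node in your cluster, using the repo URL from the Application above:

```bash
# ArgoCD resolves the target revision with an ls-remote on each poll
time git ls-remote git@gitlab.com:your-org/infra-manifests.git HEAD
```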
Implementation Checklist
Ready to refactor? Start here:
- Audit your current state: Export all running configs to YAML (a sketch of this and step 4 follows the list).
- Spin up a Management Cluster: Use a CoolVDS instance to host your ArgoCD control plane. Isolate it from the workload clusters.
- Encrypt everything: Set up Sealed Secrets before migrating the first app.
- Test the break: Manually delete a deployment in the cluster and watch ArgoCD bring it back. If it doesn't, your config is wrong.
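Steps 1 and 4 are one-liners. A rough sketch, where the resource types, namespace, and names are illustrative:

```bash
# Step 1: snapshot what is actually running today (extend the type list as needed)
kubectl get deployments,services,configmaps --all-namespaces -o yaml > cluster-audit.yaml

# Step 4: delete something ArgoCD owns and confirm selfHeal recreates it
kubectl delete deployment backend-api -n production
argocd app get backend-api-prod   # should return to Synced/Healthy within seconds
```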
GitOps is the standard for modern infrastructure. It brings sanity to the chaos of microservices. But software is only as good as the hardware it runs on. Don't let I/O wait be the reason your deployment hangs.
Deploy your GitOps control plane on a platform engineered for performance. Spin up a CoolVDS NVMe instance today.