Stop Touching Production: The Case for Strict GitOps in 2020
If you are still SSHing into a server to restart a service, or worse, running kubectl edit deployment directly against your production cluster, you are the liability. It is late 2020. We have tools that prevent human error, yet I still see senior engineers manually patching live environments.
The philosophy is simple: Git is the single source of truth. If it is not in the repo, it does not exist. This methodology, GitOps, isn't just a buzzword; it's the only way to manage complexity without waking up at 3 AM because someone made an undocumented change to the nginx.conf two weeks ago.
But there is a local angle here often ignored in generic tutorials. Following the Schrems II ruling earlier this year, relying on US-based managed Kubernetes services has become a legal minefield for Norwegian companies. You need control. You need sovereignty. You need to build this stack on infrastructure that resides legally and physically in Norway.
The Architecture: Push vs. Pull
Traditional CI/CD pipelines use a "Push" model. Jenkins or GitLab CI builds the container, tests it, and then runs kubectl apply to the cluster. This works, but it has a massive security flaw: your CI server needs god-mode access to your production cluster.
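To make the risk concrete, here is a hypothetical GitLab CI deploy job of the kind the push model requires (the job name, variable name, and image tag are illustrative, not from any real pipeline):

```yaml
# .gitlab-ci.yml (push model): the CI runner holds a cluster-admin kubeconfig.
# If this runner -- or the KUBE_CONFIG_B64 variable -- is compromised,
# so is the production cluster.
deploy-production:
  stage: deploy
  image: bitnami/kubectl:1.19
  script:
    - echo "$KUBE_CONFIG_B64" | base64 -d > kubeconfig
    - export KUBECONFIG=$PWD/kubeconfig
    - kubectl set image deployment/api api=registry.example.com/api:$CI_COMMIT_SHA
```

Every runner that can execute this job is effectively a production admin. That is the attack surface the pull model removes.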
A better approach, and the one we advocate for on CoolVDS instances, is the "Pull" model using an operator like ArgoCD.
The Pull Workflow
- Developer commits code.
- CI builds Docker image and pushes to registry.
- CI updates the deployment repo with the new image tag.
- ArgoCD (running inside the cluster) detects the change in Git.
- ArgoCD pulls the manifest and applies it to the cluster.
This way, the cluster keys never leave the cluster. Here is a standard ArgoCD Application manifest we use to sync our internal tooling:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@gitlab.com:your-company/infra-manifests.git'
    targetRevision: HEAD
    path: k8s/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: backend
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
Pro Tip: Enable selfHeal. If a cowboy engineer manually changes the replica count on the cluster, ArgoCD will immediately revert it to the state defined in Git. Ruthless consistency.
Handling Secrets without Leaking Credentials
The biggest blocker to GitOps is secrets. You cannot commit .env files to Git. In 2020, the most pragmatic solution for small to mid-sized teams is Bitnami Sealed Secrets. It uses asymmetric encryption. You encrypt locally with a public key, push the "sealed" secret to Git, and the controller inside the cluster (which holds the private key) decrypts it.
Stop treating base64-encoded Secret manifests as secure. Base64 is not encryption; it is obfuscation. Here is the workflow:
```bash
# 1. Generate the raw Secret locally (--dry-run so nothing touches the cluster)
kubectl create secret generic db-creds \
  --from-literal=password=SuperSecureP@ssw0rd \
  --dry-run=client -o json > secret.json

# 2. Seal it with the controller's public key
kubeseal --format=yaml < secret.json > sealed-secret.yaml

# 3. Commit sealed-secret.yaml to Git, safe and sound
```
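For reference, the committed file looks roughly like this. The encryptedData value below is a shortened placeholder, not real ciphertext, and the namespace is illustrative; only the controller's private key can decrypt the actual blob:

```yaml
# sealed-secret.yaml -- safe to commit; ciphertext here is a placeholder
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: backend
spec:
  encryptedData:
    password: AgBy3i4OJSWK...   # several hundred characters of ciphertext in reality
  template:
    metadata:
      name: db-creds
      namespace: backend
```

Note that sealed secrets are namespace-scoped by default: a secret sealed for one namespace cannot be decrypted in another, which limits the blast radius of a leaked manifest.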
The Infrastructure Layer: Terraform on CoolVDS
GitOps handles the software, but what handles the server? You need Infrastructure as Code (IaC) for the base layer. We use Terraform. Since CoolVDS provides standard KVM virtualization, we treat our nodes as immutable resources.
While hyperscalers obscure the hardware, running on CoolVDS gives you raw access. This is critical for etcd performance: etcd is extremely sensitive to disk write (fsync) latency, and when that latency spikes, the control plane misses heartbeats, leader elections churn, and the cluster becomes unstable.
We specifically configure our CoolVDS instances to utilize local NVMe storage rather than network-attached block storage for the etcd members. The I/O wait times on network storage can kill a cluster during high load.
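Before trusting a node with an etcd member, measure its synchronous write latency. A minimal sketch (file name and write count are arbitrary): each 512-byte write below is followed by a data sync, mimicking etcd's write-ahead log, where the usual guidance is a 99th-percentile fsync under roughly 10 ms.

```shell
#!/bin/sh
# Write 1000 x 512 B blocks, syncing after each one (oflag=dsync),
# then report the total wall time. On local NVMe this completes in
# a couple of seconds; tens of seconds means the storage is too slow
# to host an etcd member safely.
start=$(date +%s%N)
dd if=/dev/zero of=etcd-disk-test bs=512 count=1000 oflag=dsync status=none
end=$(date +%s%N)
rm -f etcd-disk-test
echo "1000 synced writes took $(( (end - start) / 1000000 )) ms"
```

For a proper percentile breakdown, fio with fdatasync enabled gives more detail, but this one-liner catches the worst offenders.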
```hcl
resource "libvirt_domain" "k8s_master" {
  name   = "k8s-master-01"
  memory = "4096"
  vcpu   = 2

  disk {
    volume_id = libvirt_volume.os_image.id
  }

  # Direct NVMe passthrough or virtio-scsi tuning keeps
  # latency low for etcd fsync operations.
  xml {
    xslt = file("optimize-disk-io.xsl")
  }
}
```
Why Location Matters: The Oslo Factor
Let's talk about latency and law. In a GitOps workflow, your "Control Loop" is constant. But more importantly, your data ingress/egress is subject to physics and legislation.
1. Physics: If your users are in Norway, hosting in Frankfurt adds 20-30 ms of round-trip time, and US-East adds 90 ms or more. Hosting on CoolVDS in Oslo connects you directly to NIX (the Norwegian Internet Exchange), keeping your round-trip latency minimal.
2. Law (GDPR/Schrems II): As of July 2020, the Privacy Shield is dead. Moving personal data of Norwegian citizens to US-controlled clouds is risky compliance-wise. By deploying your GitOps worker nodes on CoolVDS, you ensure data persistence happens on Norwegian soil, under Norwegian law.
Implementation Checklist
Ready to move from "ClickOps" to GitOps? Here is your roadmap for this weekend:
| Component | Tool Recommendation (2020) | Why? |
|---|---|---|
| CI Pipeline | GitLab CI or GitHub Actions | Linting and building images only. No deployment permissions. |
| CD Controller | ArgoCD v1.7+ | Visualizes the graph, handles drift detection automatically. |
| Secrets | Sealed Secrets | Low operational overhead compared to HashiCorp Vault. |
| Infrastructure | CoolVDS NVMe Instances | High IOPS for etcd, low latency to NIX, GDPR compliance. |
Don't let configuration drift take down your production environment. Lock your state in Git, secure your secrets, and run it on hardware that respects your data sovereignty.
Need a compliant, low-latency target for your clusters? Spin up a CoolVDS high-performance instance in Oslo. Deployment takes less than 60 seconds.