Stop `kubectl apply`ing to Production: A GitOps Manifesto for Norwegian DevOps

If you are still SSHing into a server to update a docker-compose file, or running kubectl apply -f . from your laptop, you are the single point of failure. It's harsh, but it's the truth. I remember a specific incident in 2022 where a senior engineer deployed a hotfix to a payment gateway running on a generic cloud provider. It worked on his machine. It worked in staging. In production, the pods went straight into a crash loop.

Why? Because he had manually tweaked a ConfigMap directly in the cluster three months prior to handle a load spike and forgot to commit that change to Git. When the new deployment rolled out, it overwrote the manual tweak. The result? Three hours of downtime and a very angry CTO.

That is why we do GitOps. Not because it's a buzzword, but because manual intervention is a liability. In this guide, we are tearing down the "push" model and replacing it with a robust, pull-based GitOps workflow suitable for high-compliance Norwegian environments.

The Architecture: Pull vs. Push

Most CI/CD pipelines use a push model: Jenkins or GitHub Actions builds a container and then runs a script to deploy it to the cluster. This is a security risk. It requires your CI runner to have cluster-admin or high-level write access to your production environment.

The Pull Model (GitOps) flips this. Your cluster tracks a Git repository. When the repo changes, an agent inside the cluster (like ArgoCD or Flux) pulls the changes and applies them. The cluster keys never leave the cluster.

Pro Tip: For Norwegian companies dealing with Datatilsynet audits, GitOps is a savior. The Git commit log becomes your audit trail. You can prove exactly who changed what, and when, satisfying strict GDPR accountability requirements.
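
As a quick illustration, that audit trail is nothing more exotic than standard Git tooling. A minimal sketch (the overlay path is a placeholder for whatever layout your config repo uses):


# Who touched the production manifests, and when
git log --pretty=format:'%h %an %ad %s' --date=iso -- overlays/production/

# Exactly what changed in a given commit
git show <commit-sha>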

Step 1: The Repository Structure

The biggest mistake teams make is mixing application code and infrastructure manifests in the same repo. Don't do it. Use a "Configuration Repo" pattern.

  • App Repo: Source code, Dockerfile, Unit Tests.
  • Infra/Config Repo: Helm charts, Kustomize files, YAML manifests.

Here is a battle-tested directory structure for a Kustomize-based setup:


config-repo/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
├── overlays/
│   ├── staging/
│   │   ├── kustomization.yaml
│   │   └── patch-replicas.yaml
│   └── production/
│       ├── kustomization.yaml
│       └── patch-resources.yaml
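
To make the tree concrete, here is a minimal sketch of the two kustomization.yaml files that glue it together (the resource and patch names mirror the structure above; adjust them to your app):


# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
  - ../../base
patches:
  - path: patch-resources.yaml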

Step 2: Configuring ArgoCD

We use ArgoCD because the UI is fantastic for visualizing drift. Installing it is straightforward, but running it reliably requires underlying stability.
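
The bootstrap itself is the one and only kubectl apply you should need; everything after it flows through Git. The standard install from the upstream manifests looks like this:


# One-time bootstrap of the ArgoCD control plane
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml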

While you can run ArgoCD on any cluster, the control plane's responsiveness depends heavily on I/O. When ArgoCD reconciles hundreds of applications, it hits the disk hard. This is where CoolVDS shines. We run our control planes on CoolVDS NVMe instances because the high IOPS ensure that the Redis cache used by ArgoCD doesn't become a bottleneck during massive sync operations. If your controller lags, your deployments lag.

Here is a declarative Application manifest to monitor your production overlay:


apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-gateway-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'git@github.com:my-org/infra-config.git'
    targetRevision: HEAD
    path: overlays/production
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Note the selfHeal: true flag. If someone manually changes a resource in the cluster (like our friend in the intro), ArgoCD will immediately detect the drift and revert it to the state defined in Git. Ruthless? Yes. Necessary? Absolutely.
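
To see self-healing in action, simulate the same kind of out-of-band drift and watch the controller stamp it out (the deployment name here is hypothetical; the app name matches the manifest above):


# Simulate drift: scale the deployment by hand, outside of Git
kubectl -n production scale deployment payment-gateway --replicas=10

# Within the reconciliation interval, ArgoCD reports OutOfSync and reverts to the Git state
argocd app get payment-gateway-prod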

Step 3: Handling Secrets without Leaking Them

You cannot commit secrets.yaml to Git. You have two solid options in 2024:

  1. External Secrets Operator: Syncs secrets from an external store such as Azure Key Vault or AWS Secrets Manager into native Kubernetes Secrets.
  2. Sealed Secrets (Bitnami): My preferred method for smaller setups or strictly on-prem/VPS environments.

With Sealed Secrets, you encrypt the secret locally using a public key exposed by the controller running in your cluster. Only the controller can decrypt it. It is safe to commit the encrypted output to a public repo.


# Install kubeseal client
brew install kubeseal

# Encrypt a secret
kubectl create secret generic db-creds \
  --from-literal=password=SuperSecret123 \
  --dry-run=client -o yaml | \
  kubeseal --controller-name=sealed-secrets-controller \
  --controller-namespace=kube-system \
  --format=yaml > sealed-secret.yaml
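
The output is a SealedSecret custom resource that only the in-cluster controller (holder of the private key) can decrypt back into a regular Secret. Trimmed, and with illustrative ciphertext and an assumed namespace, it looks roughly like this:


apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-creds
  namespace: default        # kubeseal scopes the ciphertext to a namespace
spec:
  encryptedData:
    password: AgBy8hCi...    # opaque ciphertext, safe to commit
  template:
    metadata:
      name: db-creds
      namespace: default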

Step 4: The CI Handoff

Your CI pipeline (GitHub Actions/GitLab CI) should not touch the cluster. Its only job is to:

  1. Run tests.
  2. Build the Docker image.
  3. Push the image to a registry.
  4. Update the image tag in the Config Repo.

Here is a snippet for the final step using kustomize edit inside a GitHub Action:


- name: Update Image Tag
  run: |
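    # Assumes an earlier step checked the config repo out into ./config-repo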
    cd config-repo/overlays/production
    kustomize edit set image my-app=my-registry/my-app:${{ github.sha }}
    git config user.name "CI Bot"
    git config user.email "ci@coolvds.com"
    git add kustomization.yaml
    git commit -m "Deploying image ${{ github.sha }}"
    git push origin main

Once this commit lands, ArgoCD picks it up, and the deployment happens automatically.
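
If you want confirmation from the CLI instead of the UI, the sync status is a read-only query away:


# Check sync and health status of the application
argocd app get payment-gateway-prod

# Block until the app reports Healthy (handy in a post-deploy smoke test)
argocd app wait payment-gateway-prod --health --timeout 300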

Infrastructure Matters: Latency and NIX

When operating in Norway, latency matters. If your GitOps controller runs in Frankfurt while your Git repository and container registry are mirrored locally (or vice versa), every poll and image pull pays the round trip, and syncs lag behind. Hosting your Kubernetes nodes on CoolVDS in Oslo gives you direct peering through NIX (Norwegian Internet Exchange), which shortens the time it takes to pull heavy container images from local mirrors and cuts mean time to recovery (MTTR) when a pod needs to restart.

Furthermore, running your own K8s cluster on standard VPS instances often leads to "noisy neighbor" issues where CPU steal affects the API server. CoolVDS utilizes KVM isolation to guarantee that the resources you pay for are the resources you get. For a GitOps controller that constantly polls repositories and reconciles state, consistent CPU performance is non-negotiable.

Conclusion

GitOps is not just a tool change; it is a culture shift. It moves the "source of truth" from the unpredictable state of a live server to the version-controlled safety of a Git repository. It satisfies the strict compliance needs we face in Europe and eliminates the "works on my machine" excuse.

But software is only as good as the hardware it runs on. A fragile network or overloaded hypervisor will turn your automated workflow into a bottleneck. Don't let slow I/O kill your operational efficiency. Deploy a test instance on CoolVDS in 55 seconds and give your K8s cluster the foundation it deserves.