GitOps in 2025: Stop Breaking Production with ClickOps

I still see it happen. It's May 2025, and last week I walked into a client's office in Oslo, a respected fintech, mind you, and saw a senior engineer ssh-ing into a production node to "hotfix" a config file. He typed vim /etc/nginx/nginx.conf. My heart skipped a beat.

If you are still doing "ClickOps" or manual SSH interventions, you are not managing infrastructure; you are gambling. In the era of ephemeral containers and immutable infrastructure, the only source of truth must be Git. If it's not in the repo, it doesn't exist.

Let's cut through the marketing noise. This isn't about buying a fancy SaaS dashboard. This is about a rigorous GitOps workflow that survives real-world chaos, strict Norwegian compliance requirements, and the need for speed.

The Architecture of Truth

GitOps is simple in theory: your Git repository contains the declarative description of your infrastructure. An automated operator (like ArgoCD or Flux) ensures the live state matches Git. But the devil is in the details.

1. The Repository Strategy: Monorepo vs. Polyrepo

For most European SMEs I work with, the Environment Repository Pattern yields the best balance between control and chaos.

  • App Repo: Source code + Dockerfile + Helm Chart (or basic manifests).
  • Config Repo (The GitOps Repo): The actual state definition for Dev, Staging, and Prod.

Separating them prevents a CI loop from triggering an infinite deployment spiral. Here is the directory structure that doesn't scale into a nightmare:

├── apps/
│   ├── payment-service/
│   │   ├── base/
│   │   └── overlays/
│   │       ├── dev/
│   │       └── prod/
├── infrastructure/
│   ├── ingress-controllers/
│   ├── monitoring/
│   └── cert-manager/
└── cluster-config/
    └── production-norway.yaml
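If you pair this layout with Kustomize, each overlay stays tiny. A minimal sketch of what apps/payment-service/overlays/prod/kustomization.yaml could look like, with a hypothetical image tag and replica count:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Inherit the shared manifests from base/
resources:
  - ../../base

# Prod-only tweaks live here, nothing else
patches:
  - target:
      kind: Deployment
      name: payment-service
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3

images:
  - name: payment-service
    newTag: "1.4.2"   # hypothetical tag; in practice your CI stamps this

The point of the split: base/ never knows about environments, and an overlay that grows past a screenful is a smell.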

The Engine: ArgoCD and Bare-Metal Performance

In 2025, ArgoCD remains the gold standard for Kubernetes delivery. However, I've seen Argo instances crawl because the underlying VPS had poor I/O wait times. When Argo is reconciling 500 applications, it hammers the Kubernetes API server, and the etcd disk underneath it, with constant reconciliation traffic.

We run our control planes on CoolVDS NVMe instances. Why? Because when you have 50 developers pushing commits simultaneously, you cannot afford for your GitOps operator to lag 5 minutes behind due to CPU stealing or slow storage. Latency matters. If your servers are in Oslo, your control plane should be too.
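Hardware is only half of it: give the application controller enough workers to chew through the queue. In stock ArgoCD these knobs live in the argocd-cmd-params-cm ConfigMap; the values below are starting points to test against your own load, not gospel:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # More workers polling application status (default 20)
  controller.status.processors: "50"
  # More workers executing sync operations (default 10)
  controller.operation.processors: "25"
  # Give slow Git providers room before timing out (default 60)
  controller.repo.server.timeout.seconds: "120"

Restart the application controller after changing these; it reads them at startup.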

Configuring the Application

Don't use the UI to create apps. That defeats the purpose. Define your applications declaratively using the ApplicationSet controller.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservices
  namespace: argocd
spec:
  generators:
  - git:
      repoURL: https://github.com/my-org/gitops-config.git
      revision: HEAD
      directories:
      - path: apps/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/my-org/gitops-config.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

Pro Tip: Always enable prune: true. If you delete a resource in Git, it must die in the cluster. Leaving orphaned resources is a security risk and a resource leak.
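One related flag worth knowing: the destination namespaces generated by that ApplicationSet must exist before the first sync. Rather than pre-creating them by hand (ClickOps again), let Argo handle it with a sync option in the template:

      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true   # Argo creates the target namespace if it is missing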

Handling Secrets without Leaking to GitHub

This is where I see people fail Schrems II and GDPR compliance. Never commit raw secrets. Just don't. In 2025, we have mature tools for this.

I prefer External Secrets Operator (ESO). It fetches secrets from a secure vault (like HashiCorp Vault or a managed secret manager) and injects them as Kubernetes secrets.
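A sketch of what that looks like, assuming a ClusterSecretStore named vault-backend already points at your Vault instance; the store name and key path here are hypothetical:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: payment-service
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend              # hypothetical store pointing at Vault
  target:
    name: db-credentials             # the Kubernetes Secret ESO will create
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/payments/db # hypothetical Vault path
        property: password

Git only ever sees this manifest; the actual password never leaves Vault until the operator fetches it inside the cluster.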

However, for smaller setups on CoolVDS, Sealed Secrets is a pragmatic choice. You encrypt the secret locally, commit the encrypted blob, and only the controller in the cluster can decrypt it.

Workflow:

  1. Developer creates the secret manifest locally: kubectl create secret generic db-pass --from-literal=password=SuperSecret -o yaml --dry-run=client > secret.yaml
  2. Developer seals it: kubeseal --format yaml < secret.yaml > sealed-secret.yaml
  3. Commit sealed-secret.yaml (see the sketch below), then delete the plaintext secret.yaml. Safe.
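What lands in Git is ciphertext only. The committed file looks roughly like this; the encrypted blob below is placeholder text:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-pass
  namespace: default
spec:
  encryptedData:
    # Only the controller's private key inside the cluster can decrypt this
    password: AgB3kx...<truncated placeholder>
  template:
    metadata:
      name: db-pass
      namespace: default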

The CI/CD Split: Performance Bottlenecks

GitOps handles the CD (Continuous Delivery). But your CI (Continuous Integration) is the heavy lifter. Running tests, compiling Go binaries, or building Docker images requires raw CPU power.

I recently migrated a client's Jenkins runners from a generic cloud provider to CoolVDS High-Performance instances. Their build times dropped from 12 minutes to 4 minutes.

Resource     Generic Cloud VPS          CoolVDS NVMe           Impact
Disk I/O     ~150 MB/s                  ~2500 MB/s             Faster Docker builds
CPU Steal    Variable (5-15%)           Near Zero              Consistent test runs
Network      Public Internet Routing    Local Peering (NIX)    Faster registry push

When your CI pipeline is slow, developers context-switch. That kills productivity. Optimization here isn't a luxury; it's a TCO requirement.

Norwegian Context: Data Sovereignty

Operating in Norway means respecting Datatilsynet. If you are handling personal data, you need to know exactly where your bits live. By hosting your GitOps control plane and your production workloads on local Norwegian infrastructure like CoolVDS, you simplify your compliance posture significantly. You aren't worrying about a US CLOUD Act subpoena touching your master encryption keys.

The "Break-Glass" Procedure

GitOps is great until GitHub goes down. It happens. You need a contingency plan.

  1. Local Admin Credentials: Keep an emergency admin.conf kubeconfig in a physical safe or an air-gapped vault.
  2. Image Registry Mirror: If Docker Hub or GHCR is unreachable, can you still pull images? Run a pull-through cache on your CoolVDS cluster; a minimal sketch follows this list.
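Here is that cache, using the stock registry:2 image in proxy mode; the host port, cache path, and mirror IP are assumptions to adapt:

# Run a Docker Hub pull-through cache on a dedicated node
docker run -d --name registry-mirror \
  --restart=always \
  -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v /srv/registry-cache:/var/lib/registry \
  registry:2

# Then point each node's Docker daemon at it, e.g. in /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://10.0.0.5:5000"] }   # hypothetical mirror IP

Every image your cluster has ever pulled stays warm in the cache, so an upstream registry outage stops being an incident.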

Final Thoughts: Consistency is King

A half-assed GitOps implementation is worse than none at all. It gives you a false sense of security while leaving backdoors open for manual changes. Lock down your cluster API server. Allow access only from your GitOps controller and your break-glass VPN.
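What that lockdown looks like depends on your stack. On a plain VPS fronted by ufw, a sketch might be the following; both CIDRs are hypothetical, and ufw matches rules in the order they were added, so the allows must come first:

# Allow the in-cluster pod network, so the GitOps controller keeps API access
ufw allow from 10.244.0.0/16 to any port 6443 proto tcp
# Allow the break-glass VPN subnet
ufw allow from 10.8.0.0/24 to any port 6443 proto tcp
# Deny everyone else on the API port
ufw deny 6443/tcp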

Infrastructure should be boring. It should work. If you are tired of debugging latency issues or worrying about where your data actually sits, it is time to upgrade the foundation.

Ready to build a pipeline that flies? Deploy a CoolVDS High-Performance instance in Oslo today and stop waiting on I/O.