Git-Driven Operations: The End of Manual SSH
It is Friday, 16:00 CET. You are staring at a terminal window, sweat forming on your brow. You just ran `apt-get dist-upgrade` on your production load balancer, and now the latency has spiked to 400ms. Why? Because the configuration file on the server drifted from what was in your documentation three months ago. Somebody tweaked a buffer setting manually and didn't commit it.
This is the reality for too many teams in Norway. We pride ourselves on engineering quality, yet we treat our server infrastructure like pets rather than cattle. We log in, we patch, we pray.
The solution isn't just "more backups." It is a fundamental shift in how we touch servers. We need to stop touching them altogether. This is the era of Git-Driven Operations (what some pioneers are starting to call "Operations by Pull Request"). If it isn't in Git, it doesn't exist.
The Philosophy: Single Source of Truth
In a traditional setup, the "truth" of your infrastructure is whatever happens to be running on the disk at /etc/nginx/nginx.conf right now. That is dangerous. In a Git-driven workflow, the repository is the only source of truth. The server is just a projection of that state.
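A quick way to make this concrete: Ansible's check mode reports the delta between the state described in Git and what is actually on the machine, without changing anything. A minimal sketch, assuming the `inventory.ini` and `deploy.yml` introduced later in this article (note that not every module fully supports check mode):

```bash
# Dry run: show what would change if we re-applied the Git-defined state.
# Any "changed" result means the server has drifted from the repository.
ansible-playbook -i inventory.ini deploy.yml --check --diff
```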
To achieve this in 2017, we combine three powerful tools:
- GitLab CI: For orchestration (hosted here in Europe or self-hosted).
- Ansible (v2.2): For configuration management.
- Docker (v1.13): For immutable application packaging.
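Pin these versions and verify them on the machine that will execute your pipeline; a mismatch between the runner's toolchain and your playbooks is a classic source of "works on my laptop". A quick sanity check:

```bash
# Confirm the toolchain on the CI runner before the first deploy.
ansible --version   # expect 2.2.x
docker --version    # expect 1.13.x
git --version
```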
Why Latency Matters in Orchestration
When you run a CI/CD pipeline that pushes changes to production, network stability is non-negotiable. If you are serving customers in Oslo or Stavanger, your build artifacts and your target servers should be close. Hosting your runners on CoolVDS KVM instances ensures you are sitting directly on the Norwegian backbone. I have seen pipelines fail simply because a US-based CI runner timed out trying to push a 2GB Docker image to a server in Europe during peak hours.
The Architecture
Here is the workflow we are enforcing:
- Developer commits code + config change to `master`.
- GitLab CI triggers a build.
- Docker image is built and tagged.
- Ansible is triggered to apply the new state to the CoolVDS instances.
1. The Application Definition
Stop relying on the server's environment. Define it explicitly. Here is a standard Dockerfile for a Python application. Notice we pin the base image tag rather than pulling `latest`, so a rebuild next month does not surprise us.
FROM python:2.7-slim
# Install dependencies
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
# Copy app code
COPY . /app
WORKDIR /app
# Expose port
EXPOSE 8000
# Default command
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8000", "main:app"]
2. The Orchestration Logic
Here is where the magic happens. We use a .gitlab-ci.yml file to instruct the runner. This pipeline builds the container and then deploys it using Ansible. Note the use of SSH keys injected via environment variables—never store secrets in the repo.
stages:
  - build
  - deploy

build_image:
  stage: build
  script:
    - docker build -t my-registry.com/app:$CI_COMMIT_SHA .
    - docker push my-registry.com/app:$CI_COMMIT_SHA

deploy_prod:
  stage: deploy
  image: williamyeh/ansible:ubuntu16.04
  script:
    - echo "$SSH_PRIVATE_KEY" > /tmp/key && chmod 600 /tmp/key
    - ansible-playbook -i inventory.ini deploy.yml --private-key /tmp/key
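The `$SSH_PRIVATE_KEY` variable belongs in GitLab's secret variables, not in the repository. You can set it in the project settings UI, or script it against the GitLab API; a sketch, where the key path, project ID and token are placeholders (adjust the API path to match your GitLab version):

```bash
# Store the deploy key as a secret CI variable via the GitLab API.
curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --form "key=SSH_PRIVATE_KEY" \
  --form "value=$(cat ~/.ssh/deploy_key)" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/variables"
```

The matching public key goes into the deploy user's `authorized_keys` on every target node, and nowhere else.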
3. The Ansible Glue
This is the piece most people get wrong. They write complex shell scripts. Don't. Use Ansible modules. They are idempotent. If the state matches, Ansible does nothing. If it differs, it fixes it.
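If you want to see idempotency with your own eyes, run the same module twice in ad-hoc mode; the second run changes nothing. A throwaway example (the package is arbitrary):

```bash
# First run installs the package and reports "changed".
ansible webservers -i inventory.ini -b -m apt -a "name=htop state=present"
# Second run finds the desired state already present and reports "ok".
ansible webservers -i inventory.ini -b -m apt -a "name=htop state=present"
```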
This playbook updates the running Docker container on your CoolVDS node without downtime (if you have a load balancer upstream, which you should).
---
- hosts: webservers
  become: true
  tasks:
    - name: Ensure Docker python library is installed
      apt:
        name: python-docker
        state: present

    - name: Pull latest application image
      docker_image:
        name: "my-registry.com/app:{{ lookup('env','CI_COMMIT_SHA') }}"
        state: present

    - name: Re-create container with new code
      docker_container:
        name: production_app
        image: "my-registry.com/app:{{ lookup('env','CI_COMMIT_SHA') }}"
        state: started
        restart: yes
        ports:
          - "8000:8000"
The Hardware Reality: I/O is the Bottleneck
We talk a lot about software, but let's talk about the metal. When you move to a containerized, Git-driven workflow, you are generating massive disk I/O. Every time a pipeline runs, you are pulling layers, extracting files, and destroying containers.
On a standard budget VPS with spinning rust (HDD) or shared SATA SSDs, this pipeline crawls. I have benchmarked `docker pull` operations that take 45 seconds on cheap hosting and 4 seconds on CoolVDS. Why? NVMe storage.
Pro Tip: Check your `iowait` during a deployment. If it spikes above 10%, your hosting provider is stealing your productivity. CoolVDS guarantees dedicated I/O throughput because we don't oversell our storage backend.
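To put a number on it, watch the CPU and per-device statistics while a deployment is running. Assuming the `sysstat` package is installed on the host:

```bash
# CPU utilisation, including %iowait, once per second for ten seconds.
iostat -c 1 10
# Extended per-device stats to see which disk is actually hurting.
iostat -xd 1 10
```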
Security and Compliance (The Boring but Vital Part)
In Norway, data protection requirements are already strict, and with the EU's GDPR enforcement deadline approaching, you cannot afford to be lax. A Git-driven workflow helps here in three ways:
- Audit Trails are Automatic: `git log` tells you exactly who changed the firewall rule and when.
- Rollbacks are Instant: `git revert` triggers the pipeline to redeploy the previous safe image (see the sketch below).
- Data Sovereignty: By deploying on CoolVDS instances located in Oslo/EU, you ensure that customer data processed by these containers stays within the correct legal jurisdiction.
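In practice a rollback is just another commit. A sketch, assuming the bad change is the most recent commit on `master`:

```bash
# See who changed what, and when.
git log --oneline -5
# Revert the offending commit; the new commit triggers the same
# pipeline, which rebuilds and redeploys the last known-good state.
git revert --no-edit HEAD
git push origin master
```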
Putting it into Practice
Transitioning to this workflow is scary. Start small. Do not try to containerize your monolithic legacy PHP app on day one. Start with a stateless microservice or a worker script.
- Spin up a CoolVDS KVM instance (Ubuntu 16.04 is the rock-solid choice right now).
- Install the GitLab Runner agent on it.
- Write a simple Ansible playbook to install Nginx.
- Commit a change to the `nginx.conf` in Git and watch the server update itself, as shown below.
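Step four looks like this from the developer's side; the file path is an example and assumes your playbook templates `nginx.conf` onto the server. No SSH session is involved at any point:

```bash
# Edit the config in the repo, not on the server.
vim roles/nginx/templates/nginx.conf      # e.g. bump worker_connections
git add roles/nginx/templates/nginx.conf
git commit -m "Raise worker_connections to 4096"
git push origin master
# The pipeline takes it from here: build, deploy, done.
```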
Once you see that first automated config change propagate from a `git push` to a live server in under 30 seconds, you will never type `ssh root@...` again.
Ready to build a pipeline that doesn't break? Deploy a high-performance NVMe instance on CoolVDS today and stop fighting your infrastructure.