I have recently deployed several static sites I manage, such as this blog, onto my new Kubernetes cluster. I have got it to the point that the build and deployment process is fully automated, so the only interactions required to update them are git commit and git push.
Each of the static sites is built differently, so I’ll focus on this blog, which is in Jekyll because that seemed like the current hotness when I started it. I have long intended to migrate it to a static site generator with less Ruby, but have so far failed to actually do so. To build this site, I simply run jekyll build (after installing the correct plugins via gem). So I built a little multi-stage Dockerfile that builds the HTML assets with Jekyll, then copies them into an nginx container:
# Build stage: render the site with Jekyll
FROM jekyll/jekyll
RUN apk --no-cache add py-pygments python
RUN gem install jekyll-paginate jekyll-assets jekyll-minifier pygments.rb
COPY . /srv/jekyll/
# /srv/jekyll is declared as a volume upstream, so stash the output elsewhere (see below)
RUN jekyll build -tV && cp -r _site /tmp/_site

# Final stage: just nginx serving the rendered HTML
FROM nginx
EXPOSE 80
COPY --from=0 /tmp/_site /usr/share/nginx/html
This produces a minimal final image containing just nginx and my HTML; Jekyll, the various plugins, and so on are not part of it.
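Before wiring it into CI, the image can be smoke-tested locally; something like the following should serve the site on port 8080 (the blog tag is just a placeholder):

docker build -t blog .
docker run --rm -p 8080:80 blog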
Something that tripped me up was the resulting _site folder mysteriously disappearing after the build command ran. It turned out to be caused by the upstream jekyll/jekyll image, which declares /srv/jekyll as a volume, apparently causing Docker not to retain new files created there from step to step. So I changed the command to also copy the rendered output to a different folder, which can then be accessed by later steps of that stage of the build, or by later stages (as I do at the very end).
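You can check which paths an image declares as volumes with docker inspect (assuming the image has already been pulled):

docker inspect -f '{{ .Config.Volumes }}' jekyll/jekyll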
The next step is automatic deployment. I host all of this on my friend’s GitLab instance, which has continuous integration set up. I made a .gitlab-ci.yml file in the root of this blog’s repo that looks like this:
variables:
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

build:
  tags:
    - docker-builder
  image: docker:latest
  stage: build
  script:
    - docker info
    - docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}
    - docker build -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA:0:8} .
    - docker tag ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA:0:8} ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}
    - docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA:0:8}
    - docker push ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}

deploy_staging:
  stage: deploy
  image: ${CI_REGISTRY}/finn/k8s-deployer/deployer:41aec2f4
  variables:
    KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: static-site-deployer
  script:
    - deploy finn-io nginx ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA:0:8} default
  environment:
    name: production
    url: https://finn.io
  only:
    - master
This does a couple of different things: first, it builds the Docker image with the Dockerfile shown above, then uploads it to the container registry on the GitLab instance. The last step only occurs when CI is running in response to a change on the master branch: it runs a little tool I wrote that tells the Kubernetes cluster to update the nginx container in the finn-io deployment to use the current version of the image rather than whatever it was previously using. The tool is packaged in a Docker image, but it’s just a simple shell script:
#!/bin/sh
# Usage: ./deploy deployment-name container-name image-name:label namespace
set -e
if [ "$DEBUG" = "true" ]; then set -x; fi

# Service account credentials mounted into every pod
K8S_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Strategic merge patch: set the named container's image on the deployment
PATCH="{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"$2\",\"image\":\"$3\"}]}}}}"
URL="https://kubernetes.default.svc.cluster.local/apis/extensions/v1beta1/namespaces/$4/deployments/$1"

echo "Submitting patch:"
echo
echo "$PATCH" | jq .
echo
echo "To URL:"
echo
echo "$URL"
echo

curl --fail -sSX PATCH --cacert "$CA" -H "Authorization: Bearer $K8S_TOKEN" -H "Accept: application/json" -H "Content-Type: application/strategic-merge-patch+json" -d "$PATCH" "$URL"
It accepts four command-line arguments: the deployment name, the container name, the image to use, and finally the namespace to deploy into, as shown in the gitlab-ci config above.
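For reference, the same strategic merge patch can be applied from a workstation with kubectl; this is a rough equivalent of what the script does, with a made-up image name:

kubectl --namespace default patch deployment finn-io \
  --patch '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"registry.example.com/finn/blog:abc12345"}]}}}}'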
Finally, I created a service account called static-site-deployer and told GitLab to use it for the runner (with the line KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: static-site-deployer in .gitlab-ci.yml). I also created a role and a rolebinding to allow the static-site-deployer service account to patch deployments in the default namespace:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: static-site-deployer
rules:
  - apiGroups: ["extensions"]
    resources: ["deployments"]
    verbs: ["patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: static-site-deployer-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: static-site-deployer
subjects:
  - kind: ServiceAccount
    name: static-site-deployer
    namespace: gitlab-runners
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-site-deployer
  namespace: gitlab-runners
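After applying those manifests (e.g. with kubectl apply -f rbac.yaml, assuming they live in a file by that name), the binding can be sanity-checked by impersonating the service account:

kubectl auth can-i patch deployments --namespace default \
  --as system:serviceaccount:gitlab-runners:static-site-deployer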
Note that this is a ClusterRole rather than a normal Role because it has to operate across namespaces, as far as I understand it. I’m somewhat fuzzy on all the RBAC stuff, but as best I can tell this is the most locked-down set of permissions I can grant that still allows patching deployments. Eventually I may look into restricting it further to a specific list of deployments, but at this point 100% of deployments in the default namespace are static sites deployed this way, so I might just move them to their own namespace.
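If I do move them, I believe the ClusterRole could be swapped for a plain Role in that namespace; a rough sketch, assuming a hypothetical static-sites namespace (the RoleBinding would move there too):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: static-site-deployer
  namespace: static-sites
rules:
  - apiGroups: ["extensions"]
    resources: ["deployments"]
    # resourceNames could pin this down to specific deployments
    verbs: ["patch"]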