I really like creating demo applications. I partly put together demo apps so I have something to demo whatever feature or product I happen to be working on at the time, but I mainly build them so I can experiment with some interesting tool or another. I’ve been writing a bunch of demo apps recently which I might write about separately; in this post I wanted to talk about the hello world demo, and why I like to go a little further.
I’m enjoying experimenting with GitHub Actions, and have now used the new beta for a number of projects and tasks. I’ve mainly been interested in Actions for ad-hoc automation, either based on a schedule or on an event like pushing to master. Several of those projects have involved having an Action generate some files and commit those back to GitHub. At the moment doing so requires some jumping through hoops, so I thought it would be worthwhile documenting.
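The hoops mostly amount to giving Git an identity inside the runner and authenticating the push with the workflow token. A minimal sketch of such a step (the bot identity and commit message are placeholders, and this assumes `GITHUB_TOKEN` has been exposed to the step via the workflow's secrets):

```shell
# inside a workflow step, after the files have been generated
git config user.name "github-actions"
git config user.email "github-actions@users.noreply.github.com"
git add -A
# only commit and push if something actually changed
git diff --cached --quiet || {
  git commit -m "Regenerate files"
  git push "https://${GITHUB_ACTOR}:${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git" HEAD:master
}
```

The `git diff --cached --quiet` guard avoids failing the workflow on runs where the generated files are unchanged.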
I really like Tekton. The project is at a fairly early stage, but is building the primitives (around Tasks and Pipelines) that other higher-level (and user facing) tools can be built on. It’s plumbing, with lots of potential for sharing and abstraction. So my kind of project.
Now Tekton’s abstractions are similar to GitHub Actions. Tasks broadly map to Actions and Pipelines to Workflows. I’d love to see this broken out of the separate implementations and a general standard emerge.
Falco is a handy open source project for intrusion and abnormality detection. It nicely integrates with Kubernetes as well as other linux platforms. But trying it out with Docker Desktop (for Mac or Windows), either using Docker, the built-in Kubernetes cluster, or with Kind requires a new kernel module.
This is slightly complicated by the fact that the VM used by Docker Desktop is not really intended for direct management by the end user.
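For reference, this is roughly the standard invocation for running Falco in a container on a plain Linux host (adapted from the Falco documentation; exact mounts vary between versions), which assumes the kernel module can be built against the host's kernel headers:

```shell
docker run --rm -i -t --privileged \
  -v /var/run/docker.sock:/host/var/run/docker.sock \
  -v /dev:/host/dev \
  -v /proc:/host/proc:ro \
  -v /boot:/host/boot:ro \
  -v /lib/modules:/host/lib/modules:ro \
  -v /usr:/host/usr:ro \
  falcosecurity/falco
```

On Docker Desktop those host paths belong to the hidden VM rather than your Mac or Windows machine, which is why the kernel module needs to be built separately.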
I maintain a few open source projects that help with testing configuration, namely Kubeval and Conftest. Recently I’ve been hacking on various integrations for these tools, the first of which are plugins for Helm.
Validate Helm Charts with Kubeval

Kubeval validates Kubernetes manifests against the upstream Kubernetes schemas. It’s useful for catching invalid configs, especially in CI environments or when testing against multiple different versions of Kubernetes. Lots of folks have been using Kubeval with Helm for a while, mainly using helm template and piping to kubeval on stdin.
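That pipeline looks something like the following (the chart name is a placeholder; `--strict` and `--kubernetes-version` are optional Kubeval flags):

```shell
# render the chart locally and validate the resulting manifests on stdin
helm template mychart | kubeval --strict --kubernetes-version 1.16.0
```

The Helm plugin wraps this same flow up so it can be run as a single `helm` subcommand against a chart directory.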
For the past few months I’ve been hacking on Conftest, a tool for writing tests for configuration, using Open Policy Agent. I spoke about Conftest at KubeCon but am only now getting round to writing up a quick introduction.
The problem

We’re all busy writing lots of configuration. There are already more than 1 million Kubernetes config files public on GitHub, and about 70 million YAML files too. We’re using a range of tools, from CUE to Kustomize to templates to DSLs to general purpose programming languages to writing YAML by hand.
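As a sketch of what using Conftest looks like: by default it loads Rego policies from a `policy/` directory and tests the given files against them (the file name here is illustrative):

```shell
# policy/ contains Rego deny/warn rules, e.g. "containers must not run as root"
conftest test deployment.yaml
```

Any matching `deny` rules cause a non-zero exit code, which makes it straightforward to wire into CI.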
Building on the previous post, I found myself wanting to grab quick clusters for experimenting with Helm Charts and with Kubernetes Operators.
Helm

```makefile
helm-%: cluster-%
	kubectl -n kube-system create sa tiller
	kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
	helm init --service-account tiller --kubeconfig=$$(kind get kubeconfig-path --name $(NAME))
	helm repo --kubeconfig=$$(kind get kubeconfig-path --name $(NAME)) add incubator https://kubernetes-charts-incubator.storage.googleapis.com
```

The above target for our Makefile makes it easy to grab a new Kind cluster and instantiate Helm.
Skopeo is a handy tool for interrogating OCI registries. You can inspect image manifests and copy images between various stores. I found myself wanting to use Skopeo in the context of a container, and having searched on Hub mainly found either out-of-date images or images designed for a slightly different purpose. Maintaining images is non-trivial, and sometimes it’s better to just let folks know how to build their own. So for anyone else in need of such a thing, here is a Skopeo image based on Alpine Linux.
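Assuming you’ve built the image from the Dockerfile (tagged here as `skopeo`, which is just a local name), usage looks something like:

```shell
# build the image locally from the Dockerfile
docker build -t skopeo .
# inspect a remote image's manifest without pulling the image
docker run --rm skopeo inspect docker://docker.io/library/alpine:latest
```

Because the entrypoint is the `skopeo` binary itself, any Skopeo subcommand can be passed straight to `docker run`.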
There are a number of options for persistent local Kubernetes clusters, but when you’re developing tools against the Kubernetes APIs it’s often best to be throwing things away fairly regularly. Enter Kind. Originally designed as a tool for testing Kubernetes itself, Kind runs a working Kubernetes cluster on top of Docker.
At its simplest we can create a new Kubernetes cluster like so:
```shell
$ kind create cluster
Creating cluster "kind" ...
```
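The throwaway workflow then amounts to creating a cluster, pointing kubectl at it, and deleting it when done (the cluster name is arbitrary; `kind get kubeconfig-path` matches the Kind releases current at the time of writing):

```shell
kind create cluster --name scratch
# point kubectl at the new cluster
export KUBECONFIG="$(kind get kubeconfig-path --name scratch)"
kubectl cluster-info
# throw it all away when done
kind delete cluster --name scratch
```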
With a new job (at Snyk) comes the opportunity to set up a new machine from scratch. I’ve always taken a perverse pleasure in building development machines for myself. It’s also the first time I’ve been back on a Mac (a MacBook Pro 13” to be precise) for a while, so there are always new things to play with.
Homebrew

The reality is I don’t use much beyond a web browser and a terminal (partly a conscious decision; it makes it easier to move between different operating systems and computers).
We now have a nice configuration language in which to author our configs (CUE), and a way of validating and testing that configuration using Kubeval and Conftest.
Next we want to wrap that in a little automation. When writing our configuration we probably want to be running it against a Kubernetes cluster as we make changes. For that I’m going to use Tilt.
Before we jump into the Tilt configuration I’ll add another useful CUE command.
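The command in question dumps the generated objects as a YAML stream, so other tools can consume them on stdin. A sketch of the tool file (saved as something like `dump_tool.cue`; the exact syntax has shifted between CUE versions, so treat this as illustrative):

```cue
package kube

import (
	"encoding/yaml"
	"tool/cli"
)

// invoked as: cue dump ./...
// prints every object in the objects list as a YAML stream
command: dump: {
	task: print: cli.Print & {
		text: yaml.MarshalStream(objects)
	}
}
```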
In the first two posts we have written some configuration using CUE and validated it against the Kubernetes schemas using Kubeval. In this post we’re going to expand testing to include custom assertions.
As mentioned before, I’m using Kubernetes configuration as an example here. The same approach is just as valid for other structured data configs like CloudFormation, Azure Resource Manager templates, Circle CI configs, etc.
Syntactically valid configuration doesn’t make it correct.
In the previous post I introduced using CUE for managing Kubernetes configuration. In this post we’ll start building a simple workflow.
One of the features of CUE is a declarative scripting language. This can be used to add your own commands to the cue CLI. Script files are named *_tool.cue and are automatically loaded by the CUE tooling.
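As a minimal illustration of the shape of these files (this example is not from the original post, just a hello-world sketch; syntax varies a little between CUE versions):

```cue
// hello_tool.cue
package kube

import "tool/cli"

// invoked as: cue hello
command: hello: {
	print: cli.Print & {
		text: "hello from cue"
	}
}
```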
First let’s lay some of the groundwork. This is taken from the Kubernetes tutorial and converts our map of Kubernetes objects to a list.
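The relevant snippet looks something like this (the concrete object names in `objectSets` depend on your own config; this follows the upstream CUE Kubernetes tutorial):

```cue
package kube

// gather every Kubernetes object defined in this package into a single list
objects: [ for v in objectSets for x in v { x } ]

objectSets: [deployment, service]
```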
My interest in Kubernetes has always been around the API. It’s the potential of a unified API and set of objects that keeps me coming back to hacking on Kubernetes and building tools around it. If I think about it, I find that appealing because I’ve also spent time automating the management of operating systems which lack anything like a good API for doing so.
I’m also a little obsessed with domain specific languages and general configuration management topics.
garethr.dev is the blog of Gareth Rushgrove. I’m a professional technologist, mainly specialising in infrastructure, automation and information security.
I’m currently Director of Product at Snyk, working on application security tools for developers.
I have previously worked as:
- A group Product Manager for Docker, responsible for a range of the developer-facing tools like Docker Desktop, as well as work on the Cloud Native Application Bundles (CNAB) specification.
- A Principal Engineer at Puppet, working on configuration management tooling for cloud infrastructure, Kubernetes and various source code analytics projects.