Qusic
Containerized CLI Tools
Jul 11 2020
development

Where it starts

The Apple MacBook Pro is not really a good productivity option for me anymore. The price is undoubtedly high, but what you get at that price is barely satisfactory. Intel making little progress, macOS adding irrelevant clutter apps each year, and especially the notorious thermal throttling finally pushed me to the other world: a Windows PC.

For less than half the cost, I can get equivalent or better performance, not only on theoretical specs but also in practice, thanks to Zen 2's 7 nm process and much more rational cooling designs. I also get many more ports, and I have started to question how I lived with all that inconvenience for so long.

So, back to the title: I have been thinking about and experimenting with my WSL 2 setup, and I have finally reached a state close to my expectations. The interesting part is using CLI tools from containers.

Why containers

The top thing I am seeking is easy reproducibility.

Software packages offered by the Ubuntu repositories are usually outdated. However, Ubuntu LTS is the most supported and widely used WSL distribution. I don't want to go through a series of manual steps to install another distribution, especially when I would have to repeat them every time I set up a new PC.

Clearly, using VS Code dev containers would be the most reproducible solution. But that doesn't cover scenarios where I only need to interact with a few command-line tools and spinning up a full-fledged dev container is unnecessary.

Craft executables

First, we need a convenience script to handle the common parts. Let's name it _run:

#!/bin/sh
run="docker run --user $(id -u):$(id -g) --rm -i"
if [ -t 0 ] && [ -t 1 ]; then run="$run -t"; fi
run="$run -w /data -v $PWD:/data"
exec $run "$@"
  • We run a one-off container with the --rm flag, so it is removed as soon as it exits.
  • Run interactively, but only allocate a TTY when both stdin and stdout are terminals, which lets the script handle pipes and redirects.
  • Mount the current working directory, so that the program can see files passed in as arguments.
  • Run as the current user, so that files created by the program aren't owned by root on the host OS.
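For _run to be callable like a regular command, it needs to be executable and on PATH. One possible setup, where the install directory is my assumption rather than anything from the post:

```shell
# Hypothetical install location; any directory on PATH works.
bindir="$HOME/.local/bin"
mkdir -p "$bindir"

# Write the helper verbatim -- the quoted heredoc prevents expansion here.
cat > "$bindir/_run" << 'EOF'
#!/bin/sh
run="docker run --user $(id -u):$(id -g) --rm -i"
if [ -t 0 ] && [ -t 1 ]; then run="$run -t"; fi
run="$run -w /data -v $PWD:/data"
exec $run "$@"
EOF
chmod +x "$bindir/_run"

# Ensure the directory is on PATH, e.g. in ~/.profile:
case ":$PATH:" in
  *":$bindir:"*) ;;
  *) PATH="$bindir:$PATH" ;;
esac
```

The wrapper scripts below can be installed the same way, next to _run.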

Then, an easy example is node:

#!/bin/sh
_run node:alpine node "$@"
  • Use alpine-based images for smaller size.
  • "$@" to pass arguments as is.

It's quite powerful, and works as expected in most cases:

node app.js
echo 'console.log(new Date())' | node
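The TTY check in _run is what makes the pipe above behave correctly. It can be exercised on its own; this tiny function is just an illustration, not part of the setup:

```shell
# Mirror the check from _run: request a TTY only when both
# stdin and stdout are terminals.
tty_flags() {
  if [ -t 0 ] && [ -t 1 ]; then
    echo "-it"
  else
    echo "-i"
  fi
}

# In a pipeline, stdout is not a terminal, so no TTY is requested:
tty_flags | cat   # prints "-i"
```

Skipping -t in that case matters: with a TTY allocated, docker run rewrites line endings in piped output and mishandles redirected stdin.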

My two most-used tools are kubectl and helm. Their scripts look similar to node, but this time we also need to mount some local config files into the container.

_run -v "$HOME/.kube:/.kube" \
  dtzar/helm-kubectl kubectl "$@"
_run -v "$HOME/.kube:/.kube" \
  -e XDG_CONFIG_HOME=/.kube \
  -e XDG_DATA_HOME=/.kube \
  -e XDG_CACHE_HOME=/.kube \
  dtzar/helm-kubectl helm "$@"
  • dtzar/helm-kubectl is an unofficial image I find useful. You can also build your own if needed.
  • Most images don't create any non-root users, so the --user flag makes the program run as a user without a $HOME, in which case it falls back to /.
  • Where appropriate, use environment variables to merge or reduce the directories we need to mount.
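As an illustration of that last point: helm honors the XDG base-directory variables, so pointing all three at the mounted /.kube keeps its config, data, and cache under a single volume. A sketch of what the wrapper effectively sets up inside the container (the helm subdirectory names are assumed from its defaults):

```shell
# Inside the container, all three XDG roots collapse into one mount:
export XDG_CONFIG_HOME=/.kube   # helm config -> /.kube/helm
export XDG_DATA_HOME=/.kube     # helm data   -> /.kube/helm
export XDG_CACHE_HOME=/.kube    # helm cache  -> /.kube/helm
```

Without these, helm would scatter state across several dot-directories under a $HOME that doesn't exist for our user.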

Try something fancy this time:

kubectl run alpine --image=alpine -it --rm --restart=Never \
--overrides='
{
  "spec": {
    "containers": [{
      "name": "alpine",
      "image": "alpine",
      "stdin": true,
      "tty": true,
      "workingDir": "/data",
      "volumeMounts": [{
        "name": "data",
        "mountPath": "/data"
      }]
    }],
    "volumes": [{
      "name": "data",
      "persistentVolumeClaim": {
        "claimName": "data"
      }
    }]
  }
}
'
helm install ingress-nginx ingress-nginx/ingress-nginx \
--values - << EOF
controller:
  service:
    externalTrafficPolicy: Local
  resources:
    requests: {cpu: 100m, memory: 128Mi}
    limits: {cpu: 100m, memory: 128Mi}
EOF

Shell completions

kubectl completion and helm completion offer their corresponding completion scripts. But saving their output to files is a bad idea, because it adds maintenance work to keep them updated whenever new images are pulled. So instead, we can generate the completion code on the fly:

#compdef kubectl
source <(kubectl completion zsh)
#compdef helm
source <(helm completion zsh)

Since each stub is executed only once per shell session, the first time its completion is invoked, we don't have to worry about it slowing things down.
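For zsh to pick these stubs up, each #compdef snippet has to live in its own file on fpath. One possible arrangement, where the directory name is my assumption:

```shell
# Hypothetical completion directory; any directory on zsh's fpath works.
compdir="$HOME/.zsh/completions"
mkdir -p "$compdir"

# One stub per command, named with a leading underscore as zsh expects.
printf '%s\n' '#compdef kubectl' 'source <(kubectl completion zsh)' \
  > "$compdir/_kubectl"
printf '%s\n' '#compdef helm' 'source <(helm completion zsh)' \
  > "$compdir/_helm"

# Then in ~/.zshrc, before compinit runs:
#   fpath=(~/.zsh/completions $fpath)
#   autoload -Uz compinit && compinit
```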

Put together

I use GNU Stow to manage these scripts together with various profile config files. This way, setting up a new machine is as easy as:

  • Install Docker Desktop.
  • Clone my dotfiles repo into $HOME.
  • Run stow * in the clone.

Then I am ready to go, as Docker will pull missing images automatically when needed.
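The repo layout these steps assume can be sketched like this; the package names are hypothetical, since stow treats each top-level directory as a package and symlinks its contents into the target:

```shell
# Build a toy dotfiles layout to show how stow maps packages into $HOME.
mkdir -p dotfiles/scripts/.local/bin dotfiles/zsh
printf '#!/bin/sh\n' > dotfiles/scripts/.local/bin/_run
: > dotfiles/zsh/.zshrc

# With the repo cloned into $HOME, running `stow *` inside it symlinks
# each package's files into $HOME, e.g. ~/.zshrc and ~/.local/bin/_run.
```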

WSL 2 limitations

This setup doesn't have to be used with WSL 2, but if you want to try WSL 2, it's worth mentioning that it's not flawless. I did encounter several annoying issues, any of which may be a deal breaker for you.

#4769
Only one port can be accessed by the host through localhost at the same time

This one made WSL 2 rather useless, because connecting from Windows to servers running in WSL 2 was really cumbersome. Fortunately, it has been fixed.

#4150
[WSL 2] NIC Bridge mode 🖧 (Has TCP Workaround🔨)

The opposite of the previous issue: you cannot easily access servers running on Windows from WSL 2. The workaround is too tedious, so I choose to avoid running any services on Windows.
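For reference, the TCP workaround mentioned in the issue title boils down to addressing the Windows host through its virtual NIC. Under WSL 2's default DNS setup, that address is the nameserver WSL writes into /etc/resolv.conf; the helper name below is my own:

```shell
# Print the Windows host's address as seen from WSL 2. With default
# networking, WSL records the host's virtual-NIC IP as the nameserver.
winhost() {
  awk '/^nameserver/ { print $2; exit }' "${1:-/etc/resolv.conf}"
}

# e.g. curl "http://$(winhost):8080/" to reach a server on Windows
```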

#4177
Winsock module breaks WSL2

Basically, this means you cannot use Winsock-based network tools such as Proxifier with programs running in WSL 2. I tried configuring proxies via environment variables, but it got too complicated once the WSL-based Docker engine came into play. Solutions built on the Windows TAP driver and go-tun2socks can be good alternatives.