Make, Fix, Know, Write

I code. I rarely write things. #TeamOxfordComma.

Current Podcasts: Accidental Tech Podcast, Back to Work, Reconcilable Differences, Cortex, Roderick on the Line

Current Books: Revelation Space, Network Programmability and Automation, Atomic Habits

On Setting Goals

- Aug 9, 2020

A recent exercise at work has us setting goals for ourselves. Wanting to commit to them in a broader way, I thought I would post them here. Goals are set and re-evaluated on a six-month window to keep them reasonably measurable.

Accomplishment Goal

Shipping something of merit above what you normally do.

Collaborate in a PI planning session as a product owner (By Nov 2020)

I recently stepped up to the role of a “Technical Product Owner” for one of our clients. For those who practice Agile, you’ll know a PI (Program Increment) as a longer form of goal-post setting. For this project we are balancing the PIs between several teams from different companies, which can be a challenging feat. My first session as a product owner, and not just a technical lead, will kick off in September. I’ve given myself until November, however, as I may need to lean on our existing PO at first; I want to meet this goal by running the event solo.

Efficiency Goal

Getting more value out of a single hour by working smarter.

Better balance of consulting/coaching/self-development time (End Sept 2020)

Currently I’m checking email at 7AM and reviewing things until at least 6PM, sometimes even jumping back on later in the evening. At the moment I’m still shaking off some previous technical responsibilities, learning new aspects of my new responsibilities, and haven’t refined my day-to-day process to quickly turn around quality work. Ideally, around 6 hours of my day would be focused on client work, an hour on coaching, and an hour on self-development. I’m going to try more delegation, better dashboards, and process improvement to meet this goal.

Development Goal

Improving yourself and your skills.

Develop and host a “Being an Effective Consultant” training course (Dec 2020)

I have been sketching out some thoughts on how to be a good consultant. There are differences between operating as a consultant and operating as a full-time employee. Originally I wanted to just ship this as a blog post, but I decided to try a different format so the content could be reused in other ways. Building a training course would force me to think more methodically about content generation, along with pushing me to practice teaching. I would like to have at least one two-hour basics lesson and an HLD (high-level design) for the broader course done.

Community Goal

Giving back to the collective knowledge/community.

Participate in Hacktoberfest again (Oct 2020)

It was a great itch-scratcher for deep technical work I don’t always get to do. Last year I contributed just enough to meet the bare minimum for swag, challenging myself to make each contribution in a different language. This year I would like to give back to all of the same communities, plus one additional one: Rust. That would bring me up to 6 successfully merged PRs, with the languages being:

  • TypeScript
  • Python
  • Go
  • Ruby
  • JavaScript
  • Rust (new)

I would also like to help organize and kick-off Network to Code’s first participation in Hacktoberfest.

At least one more blog post by the end of the year (Dec 2020)

This one doesn’t really count. I’m looking to ship another helpful blog post, a more basic how-to. No traffic metric will define success; it succeeds if I like it and it’s well received by those I respect.

NTC on Network Collective – Navigating Enterprise Process

- Apr 8, 2020

Some of my brilliant coworkers and I were guests of the “Network to Code on Network Collective” podcast series. We talked about many things regarding enterprise processes (people and technical) and how to not only deliver network automation in those environments, but overall evolution as well.

They (Daryn Johnson, Rick Sherman, and Jordan Martin) said some really smart stuff. I just tried to keep up.

You should check it out!

And while you’re there you should subscribe to Network Collective. There’s a bunch of great stuff there that Jordan Martin is cooking up.

Build Monorepo of Docker Images with Make & GitHub Actions

- Mar 30, 2020

Typically I’m a big fan of every app having a separate repo. However, in the early phases of a project a monorepo can often make more sense, while a small team is building out the core infrastructure of a platform.

Several GitHub Actions already exist to build a repo’s Docker image, but in large part they assume everything lives in ., aka the current, top-level directory.

The strategy I will show you will additionally allow you to build the monorepo of Docker images from any command line by running just make.

Directory Structure

Let’s start with the directory structure and then we’ll walk through the components:

➜ tree
├── .github
│   └── workflows
│       └── main.yml
├── Makefile
├── app
│   └── example
│       ├── Dockerfile
│       └── ...
└── platform
    ├── proxy
    │   ├── Dockerfile
    │   └── ...
    ├── service-mon
    │   ├── Dockerfile
    │   └── ...
    └── ...

Keeping the location and depth of the Dockerfiles consistent is key to how this works. In this setup, the names of the Docker images are inferred from their folder paths. So in the example above, three images will be made:

  • app-example
  • platform-proxy
  • platform-service-mon

If you have already published these images into DockerHub, you might want to adapt this setup to source a name from a dot file in the folders themselves. There is nothing special when it comes to the Dockerfiles themselves.
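If you go the dot-file route, here is a small shell sketch of the idea. The .image-name file and the image_name_for helper are hypothetical (my own names for illustration, not part of this setup); the fallback mirrors the folder-path-to-dashes naming described above.

```shell
# Hypothetical: prefer a .image-name dot file in the folder, otherwise
# fall back to turning the folder path into a dash-separated name.
image_name_for() {
  dir=$1
  if [ -f "$dir/.image-name" ]; then
    cat "$dir/.image-name"
  else
    # Mirrors the Makefile's $(subst /,-,$@)
    echo "$dir" | tr '/' '-'
  fi
}
```

With this in place, image_name_for app/example yields app-example unless app/example/.image-name says otherwise.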

The first piece of the magic is the Makefile, something all projects should get back into the habit of having. In this example the default target (just running make) will build these images, but you can easily adapt it into a named target of the make command.

The Makefile

GIT_SHA1 = $(shell git rev-parse --verify HEAD)
IMAGES_TAG = $(shell git describe --exact-match --tags 2> /dev/null || echo 'latest')
IMAGE_PREFIX = my-super-awesome-monorepo-

IMAGE_DIRS = $(wildcard app/* platform/*)

# All targets are `.PHONY`, i.e. always need to be rebuilt
.PHONY: all ${IMAGE_DIRS}

# Build all images
all: ${IMAGE_DIRS}

# Build, tag, and push a single image
${IMAGE_DIRS}:
	$(eval IMAGE_NAME := $(subst /,-,$@))
	docker build -t ${DOCKERHUB_OWNER}/${IMAGE_PREFIX}${IMAGE_NAME}:${IMAGES_TAG} -t ${DOCKERHUB_OWNER}/${IMAGE_PREFIX}${IMAGE_NAME}:latest $@
	docker push ${DOCKERHUB_OWNER}/${IMAGE_PREFIX}${IMAGE_NAME}:${IMAGES_TAG}
	docker push ${DOCKERHUB_OWNER}/${IMAGE_PREFIX}${IMAGE_NAME}:latest

There is only one variable expected to be set in this Makefile and that’s DOCKERHUB_OWNER. This is the top-level nesting of where these Docker images will be uploaded to. This will be supplied by the GitHub Actions workflow later, but if you intend to also run this on the command line occasionally, you should not only log in to the registry by running docker login but also set DOCKERHUB_OWNER, either as a personal environment variable or via something like direnv.
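For the direnv route, a minimal .envrc sketch might look like the following. The owner value is a placeholder; substitute your own DockerHub user or organization.

```shell
# .envrc at the repo root -- direnv loads this whenever you cd into the repo
# (run `direnv allow` once to trust it)
export DOCKERHUB_OWNER=my-dockerhub-user
```

After that, a plain make on the command line pushes to the same owner the CI workflow does.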

Depending on your directory structure, you will likely need to change IMAGE_DIRS to include the top level directory where your Dockerfiles will be.

Another customizable piece is the IMAGE_PREFIX. If you would like to keep all of your monorepo images together visually, you can supply a prefix to be prepended to the image name. With everything defined as above, the images would be named:

  • my-super-awesome-monorepo-app-example
  • my-super-awesome-monorepo-platform-proxy
  • my-super-awesome-monorepo-platform-service-mon

Images will be built and pushed with both the tag latest as well as a Git version tag.
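The Git version tag comes from the IMAGES_TAG line in the Makefile; you can see its behavior straight from the shell:

```shell
# Exact tag if HEAD is tagged, otherwise fall back to "latest"
# (the same expression the Makefile uses for IMAGES_TAG)
git describe --exact-match --tags 2> /dev/null || echo 'latest'
```

On an untagged commit this prints latest; on a commit tagged v1.2.0 it prints v1.2.0, which is how release tags end up publishing versioned images.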

Now we could stop here, and you already have the ability to rapidly build and push Docker images. However, if you are working on a team you likely want this done by a somewhat central authority to ensure the builds are consistent and secure.

That’s where GitHub Actions comes in.

GitHub Actions Workflow File

name: CI

on:
  push:
    branches: [ master ]
    tags: [ v* ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2

    - name: Login to DockerHub Registry
      run: echo ${{ secrets.DOCKERHUB_PASSWORD }} | docker login -u ${{ secrets.DOCKERHUB_USERNAME }} --password-stdin

    - name: Build & Push Docker Images
      run: make
      env:
        DOCKERHUB_OWNER: ${{ secrets.DOCKERHUB_USERNAME }}

This will trigger the GitHub Actions workflow on all pushes to the master branch (note: PR merges count as pushes to the target branch; don’t push to master directly) as well as on any pushed tag beginning with v. The triggers can be customized.

This leverages the secrets storage in GitHub to securely store the username and password for the DockerHub login.

Since I am publishing this under my account in this example, I just mapped the DOCKERHUB_OWNER environment variable to my username from the secrets, but this could easily be overridden to an organization or other owner.

Put it all together and you can now neatly and quickly build a monorepo of Docker images with make and GitHub Actions!

Since this solution largely uses make to do the building and pushing, you can also use other CI/CD pipelines like CircleCI or GitLab.

Make Bash/Zsh Faster With Temporary Amnesia

- Feb 20, 2020

In my persistent pursuit to make new shell sessions load faster (sub-20ms) on all of my machines, one thing I’ve noticed is that a very long scrollback/history can cause startup slowness. This could be a by-product of me running ohmyzsh, but I have yet to be willing to give up its niceties in this mission.

There is an eventual limit to how relevant an old command in my history-based autocomplete is, but I’ve also been bitten when running things like certbot, scrounging around for how I made the call last time, so I don’t want to simply throw away the oldest N commands.

So I added a tiny startup call and function to my zsh profile to keep my Zsh-aware history small while letting me quickly search the archives.

# History Cleaner
if [[ $(wc -l < ~/.zsh_history) -ge 1000 ]]; then
  mkdir -p ~/.zsh_histories/
  mv ~/.zsh_history ~/.zsh_histories/$(date +%s).zsh_history
fi

# Search Zsh Histories
function zshh() {
  if [ -n "$1" ]; then
    grep -rwh ~/.zsh_histories ~/.zsh_history -e "$1"
  fi
}

The # History Cleaner section above checks the default .zsh_history file and, if it has 1000 or more lines, moves it to a timestamped file under ~/.zsh_histories/.

The # Search Zsh Histories function is where the real magic lies. If I can’t find a command in my history, I can quickly search the archives by running zshh the-command. (Hint: zshh is short for “Zsh H(istory)”.) Because it uses grep it’s pretty fast, requires no additional plugins or installations, and serves its purpose for me. I’ve seen other implementations that use a SQLite DB, but that felt too heavy-handed in my opinion, and less portable.

This can be adapted to bash as well:

# History Cleaner
if [[ $(wc -l < ~/.bash_history) -ge 1000 ]]; then
  mkdir -p ~/.bash_histories/
  mv ~/.bash_history ~/.bash_histories/$(date +%s).bash_history
fi

# Search Bash Histories
function bashh() {
  if [ -n "$1" ]; then
    grep -rwh ~/.bash_histories ~/.bash_history -e "$1"
  fi
}

The nice part is that this also searches anywhere in the command that was called, so it can be helpful if you don’t remember the exact syntax you ran before:

➜ zshh go
: 1570823071:0;brew install go
: 1570823199:0;go build
: 1570823203:0;which go
: 1570823317:0;go build
: 1570823385:0;go get
: 1570823493:0;mkdir -p ~/Workspace/go
: 1570824778:0;go get -v
: 1570824901:0;go get
: 1570824965:0;go build

Some future improvements would be to allow for a rolling history, where only lines beyond the most recent 1000 are moved into the archives. It can be quite jarring when the rollover happens, but for my workflows it hasn’t been an issue. The rollover happens on any new session, so as long as the same session is open you can still access prior commands.
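A sketch of that rolling approach, assuming the same paths as above. The roll_history helper name is my own, and this is untested against ohmyzsh’s own history writing:

```shell
# Move everything except the newest $3 lines of history file $1 into a
# timestamped archive under directory $2, leaving the recent lines in place.
roll_history() {
  hist=$1 archive_dir=$2 keep=$3
  total=$(wc -l < "$hist")
  if [ "$total" -gt "$keep" ]; then
    mkdir -p "$archive_dir"
    # Archive the oldest (total - keep) lines...
    head -n "$((total - keep))" "$hist" >> "$archive_dir/$(date +%s).zsh_history"
    # ...then keep only the newest $keep lines in the live history
    tail -n "$keep" "$hist" > "$hist.tmp" && mv "$hist.tmp" "$hist"
  fi
}

# e.g. in .zshrc: roll_history ~/.zsh_history ~/.zsh_histories 1000
```

Unlike the wholesale move above, this never empties the live history, so the rollover is invisible to autocomplete.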

Secure Messages Between Rails and NodeJS

- Jan 13, 2020

Digging around some old code I wrote, I stumbled across a way to decrypt the encrypted messages created with Rails’ MessageEncryptor. This can be useful for securely exchanging secrets between a Rails app and an AWS Lambda function when you don’t trust the transport mechanism, for example a user’s web browser. It’s helpful because you won’t need to translate a bunch of code to JavaScript, open up direct connections to a database, or create an API endpoint for the serverless function to call.

This example will use AES-256-GCM, which held up as fairly secure in my (admittedly brief) research: here, here, and here. This example creates a new initialization vector (IV) for every new message (thanks, Rails!), so it covers one of GCM’s weaknesses, IV reuse. There is, however, a cipher flag set on both sides for you to change as you choose.

Setup Rails Message Encryptor

This portion you would store somewhere inside your Rails app, possibly a controller to encrypt messages for you.

NOTE: ENC_PASSWORD should be grabbed from an environment variable or config file. Never hard code your secrets.

ENC_PASSWORD = 'k15XSjo1f6GKBfu0WbZkyC5DJgbsJyd9' # TODO: Securely access this, maybe from ENV?
message_encryptor =
    cipher: "aes-256-gcm",
    serializer: ActiveSupport::MessageEncryptor::NullSerializer
  )

Note the NullSerializer usage. Rails likes to use the Marshal serializer, which isn’t necessary for this simple plaintext message exchange. If you don’t use the NullSerializer, you will be left with additional data on the output of your decrypted string later.

Generate Encrypted Message

Generating the encrypted message is as simple as calling the encrypt_and_sign method on the newly created message_encryptor.

NOTE: Unless you define a signing key when initializing the message encryptor, only the integrity of the decrypted message is checked. For brevity I did not include the signing procedure/validation in this example. You may not need authenticated signing, but use this example without it at your own risk.

message_encryptor.encrypt_and_sign("tacotime")
=> "3m0TdDwpMQY=--AZweF8B22KJ5q01K--6zD/a8c9k7ve1o2VM8+cEA=="

Voila! An encrypted message you can safely use to transport secrets over an untrusted medium. You can optionally pass in a Base64-encoded version of your input string if you expect special or non-standard characters. It will decrypt fine on the other side, and Base64 has the added benefit of being fairly easy to verify visually while reducing your character set when encoding and decoding.

That would look like this:

message_encryptor.encrypt_and_sign(Base64.strict_encode64("tacotime"))
=> "JttrhFXvH1mXtnUl--Cqc2n0nyV8wAKAas--debSS7YBLfcsEB/82BUwUQ=="

You should use strict_encode64, as it won’t put a newline at the end of the encoded string.

NOTE: If you are going to be transporting this payload via a GET/URL param, you should Base64 encode this output above as it contains non-url-safe characters, namely =.

Setup NodeJS Message Decryptor

Now we’re on the NodeJS side of things. Again, you should be securely storing and accessing your encryption password, enc_password below. You’ll likely stash this into its own module and import it as needed; for brevity’s sake it’s just set up as a function here. This uses only the standard ‘crypto’ library from NodeJS.

let crypto = require('crypto'),
  algorithm = 'aes-256-gcm',
  enc_password = 'wxYIj9V5jBZ9pJTEST8qjHpXRrS8sOAA'; //TODO: Securely access this, maybe from ENV?

function digest_and_decrypt(digest) {
  let [encryptedValue, iv, authTag] = digest.split('--');
  let decipher = crypto.createDecipheriv(algorithm, enc_password, Buffer.from(iv, 'base64'));
  decipher.setAuthTag(Buffer.from(authTag, 'base64'));
  let dec = decipher.update(encryptedValue, 'base64', 'utf8');
  dec +='utf8');
  return dec;
}

As the payload from Rails is a concatenation of the encrypted string, initialization vector, and auth tag (which verifies the integrity of the decrypted message), we have everything we need to decrypt the text except the secret.

Decrypt Encrypted Message

To decrypt the encrypted message, simply call the digest_and_decrypt function. It returns just the string of the decoded message if everything decrypted correctly. It will throw an Error should the auth tag check fail, decryption fail, or the IV be incorrect, so you should wrap this call in a try {...} catch {...} for proper error handling.

digest_and_decrypt("3m0TdDwpMQY=--AZweF8B22KJ5q01K--6zD/a8c9k7ve1o2VM8+cEA==")
=> "tacotime"

If you Base64 encoded before or after you encrypted the payload in Rails, be sure to reverse that process when decrypting.

Encode after encrypting because you used it in a URL parameter? Your call should look like this:

digest_and_decrypt(Buffer.from("cWxKUVYwSVo5K0k9LS1kTFY0dFhSVGErWmJDNTl4LS05ckQ3S1lMZGR4UTdwL21RcVk1U09RPT0=", 'base64').toString('utf-8'))
=> "tacotime"


  • Don’t copy and paste this into production code. Please. It’s missing a bunch of error handling and edge cases.
  • Store your secrets securely. Seriously.
  • Novel solutions can minimize a ton of code for relatively simple problems.

Ansible Vault: A Primer

- Jun 18, 2019

I wrote an introduction to Ansible Vault over at Network to Code. Head on over to check it out. Ansible is a great way to automate many frequent activities, and combining it with Ansible Vault can help keep credentials from leaking into repositories.

Opus Zsh Plugin

- Jun 6, 2019

The zsh plugin previously named workon is now known as Opus. I have started working with more Python and ran into name collisions with virtualenv’s workon command.

Why Opus? Well, the definition of Opus is “a work.” It may not be my magnum opus, but it didn’t seem to have too many shell name collisions. Thankfully, if you don’t like the name you can just alias it to whatever you like.

You can get it on Github.

I’ve renamed the previous repository but have tagged this release as v2.0.


- Mar 14, 2018

I just released a zsh plugin to help you quickly jump between projects without having to scan through your history for previous cd commands.

It’s called workon.plugin.zsh.