Getting started with LetsEncrypt

LetsEncrypt really changed the SSL game by offering free certificates, but more than that, by offering them in a programmatic way, paving the way for a decent automation story. However, the official client, now known as certbot, is lacking certain features. Luckily there are a slew of alternative clients that speak the ACME protocol. After fiddling around with a few of them, I wound up settling on a client written in Go named Lego.

Obtaining the cert

I wanted a central location to manage my certificate lifecycle, as well as a single repository to handle the orchestration of deploying those certificates. As such, the default mechanism of dropping a challenge file in a webroot wouldn't work, and a few of the things I run don't lend themselves to such an auth mechanism anyway. Instead, I decided to leverage the dns-01 challenge.

I like things tidy, so I keep everything inside of a directory structure as follows in /opt/:

├── ansible // Where I keep the installation automation playbooks
│   └── roles
│       ├── host1
│       │   ├── files
│       │   ├── handlers
│       │   └── tasks
│       ├── host2
│       │   ├── files
│       │   ├── handlers
│       │   └── tasks
│       └── host3
│           ├── files
│           ├── handlers
│           └── tasks
├── bin // Lego bin lives here and misc scripts
└── data // Where Lego writes its goods
    ├── accounts
    │   └── <acme server>
    │       └── <account email>
    │           └── keys
    └── certificates // Here be certs

Since I am using the dns-01 challenge with AWS Route53, the following environment variables must be defined: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Ensure there is a proper IAM role defined for this task, as well as a corresponding policy. The Lego README provides an example policy which will get you going:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "route53:GetChange",
                    "route53:ListHostedZonesByName"
                ],
                "Resource": [
                    "*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "route53:ChangeResourceRecordSets"
                ],
                "Resource": [
                    "arn:aws:route53:::hostedzone/<your hosted zone id>"
                ]
            }
        ]
    }

If you have multiple domains, just add a second ARN to the second resource block in a comma-separated list. Or, if you are less particular about doing things right and want to be looser on security, use arn:aws:route53:::hostedzone/* to allow modifications to all hosted zones.
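As a sketch, that second resource block with two hosted zones would look like the following (the zone IDs here are made up):

```json
"Resource": [
    "arn:aws:route53:::hostedzone/Z1AAAAAAAAAAAA",
    "arn:aws:route53:::hostedzone/Z2BBBBBBBBBBBB"
]
```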

With that done, we can finally get our certs! This is as simple as:

AWS_ACCESS_KEY_ID="<accessid>" AWS_SECRET_ACCESS_KEY="<secretkey>" /opt/lego/bin/lego -a --path="/opt/lego/data/" --email="<email>" --domains="<domain>" --dns route53 run

The arguments are:

-a        Acknowledges that you agree to the current LetsEncrypt terms of service
--path    Where to stick the certs and account information
--email   The identity you want to register the cert with; they send you things like expiration notices
--domains The domain(s) you want to get the cert for
--dns     Specifies the DNS challenge provider, in this case route53

You will notice that it both creates the DNS resource record to satisfy the challenge and, if everything went swimmingly, cleans up said record, leaving things nice and tidy.

Renewing the certs

With the cert obtained, we need to ensure it gets renewed within the standard 90-day lifetime of a LetsEncrypt certificate. This is as simple as changing run to renew and adding --days 30, which renews the cert once it is within 30 days of expiration.

But since we want to automate this, let's make a little script to do it for us:

#!/bin/bash
# Fill in with the list of domains to renew
DOMAINS="<domain1> <domain2>"

for domain in $DOMAINS; do
    /opt/lego/bin/lego -a --path="/opt/lego/data/" --email="<email>" --domains="$domain" --dns route53 renew --days 30
done
This will iterate through the list in $DOMAINS and renew each one. I threw this in a cronjob to run every night, but a systemd timer is nice too if you swing that way.
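For reference, a crontab entry for this might look like the following; the script path is an assumption (wherever you saved the loop above), and remember the AWS credentials need to be available in cron's environment:

```
# renew certs nightly at 3am
0 3 * * * /opt/lego/bin/renew-certs.sh
```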

Installing the certs

There probably is a more elegant way of approaching this, but Ansible seemed perfect for what is being done here: it ensures the certs are placed on the remote servers, executes actions if an update has happened, and noops otherwise. A basic boilerplate requires your inventory defined; I call mine hosts.ini. In my playbook I define each host as a role to customize how each server needs to be handled. My playbook certificates.yaml looks as follows:

- hosts: host1
  sudo: yes
  roles:
      - {role: 'roles/host1'}
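The hosts.ini inventory mentioned above can be as minimal as this sketch; the group names mirror the roles and the hostnames are hypothetical:

```ini
[host1]
host1.example.com

[host2]
host2.example.com

[host3]
host3.example.com
```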

Inside of each role's files directory I then symlink the cert and key from /opt/lego/data/certificates/ and define the specific installation plays in tasks.
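The symlinking step can be sketched as a small helper; the paths and the domain below are hypothetical examples, not my exact layout:

```shell
#!/bin/sh
# Link a domain's cert and key from Lego's output directory into an Ansible
# role's files/ directory, so the role always ships the current cert.
link_certs() {
    cert_dir=$1
    role_files=$2
    domain=$3
    mkdir -p "$role_files"
    ln -sf "$cert_dir/$domain.crt" "$role_files/$domain.crt"
    ln -sf "$cert_dir/$domain.key" "$role_files/$domain.key"
}

# For the host1 role this would be something like:
# link_certs /opt/lego/data/certificates /opt/lego/ansible/roles/host1/files example.com
```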

Once your playbook looks and acts reasonably, cron it out:

ansible-playbook -i /opt/lego/ansible/hosts.ini /opt/lego/ansible/certificates.yaml > /dev/null

Installation for Subsonic

Since Subsonic runs on Java, we have to deal with the goofy keytool shenanigans. So the task I have defined for this server resembles:

- name: Install certs
  copy: src={{ item }} dest=/opt/subsonic/ssl/{{ item }} mode=0600
  with_items:   # your cert and key files from Lego
      - <domain>.crt
      - <domain>.key
  notify:
      - generate keystore
      - restart subsonic

With a handler definition resembling the following:

- name: generate keystore
  shell: /opt/subsonic/ssl/

- name: restart subsonic
  service: name=subsonic state=restarted

The script invoked by the generate keystore handler is just a simple incarnation of the commands to convert the PEMs into the format that Java is happy with:


/usr/bin/openssl pkcs12 -in /opt/subsonic/ssl/<domain>.crt -inkey /opt/subsonic/ssl/<domain>.key -export -out /opt/subsonic/ssl/subsonic.pkcs12 -password pass:subsonic

/usr/bin/keytool -importkeystore -srckeystore /opt/subsonic/ssl/subsonic.pkcs12 -destkeystore /opt/subsonic/ssl/subsonic.keystore -srcstoretype PKCS12 -srcstorepass subsonic -deststorepass subsonic -srcalias 1 -destalias subsonic

/usr/bin/zip -j /opt/subsonic/subsonic-booter-jar-with-dependencies.jar /opt/subsonic/ssl/subsonic.keystore

/bin/rm /opt/subsonic/ssl/subsonic.keystore /opt/subsonic/ssl/subsonic.pkcs12

Installation for weechat

The very capable IRC client weechat has a relay protocol that allows remote access to the client from other frontends, such as Glowing Bear, a web client I use to access IRC from my iOS devices.

This assumes the weechat relay is already set up; to keep it encrypted with a current cert programmatically, we need a task defined similar to:

- name: Install certs for weechat
  copy: src={{ item }} dest=/home/taco/.weechat/certs/{{ item }} mode=0600
  with_items:   # your cert and key files from Lego
      - <domain>.crt
      - <domain>.key
  notify:
      - reload weechat certs

And a handler such as:

- name: reload weechat certs
  shell: /home/taco/.weechat/

Since the script will send a /relay sslcertkey command via the FIFO channel, ensure your weechat has the fifo plugin enabled with plugins.var.fifo.fifo = on. If it is, inside your .weechat directory you will find a file resembling weechat_fifo_123, with the numeric suffix indicating the pid.


cat /home/taco/.weechat/certs/ /home/taco/.weechat/certs/ > /home/taco/.weechat/certs/relay.pem
for fifo in /home/taco/.weechat/weechat_fifo_*; do
    printf '%b' '*/relay sslcertkey\n' > "$fifo"
done

This will send the reload to all running weechat instances, but is mostly harmless if the certpaths are configured correctly.


Music playback on machines with tiny storage

With the advent of solid state storage, the once massive drives of spinning rust that shipped in laptops got faster but tinier. While streaming services (Spotify, Google Music, Pandora, etc.) solved this for most people, I'm a bit more traditional, relying on my own library. I generally enjoy listening to music whilst working on the compute box, but I had three primary requirements:

  1. Not iTunes (are there non-muggles who like iTunes?)
  2. Leverage central storage for the actual music data, so as not to duplicate it and to save disk on the workstation
  3. Allow the media control keys on the macbook keyboard to continue working

I wanted a solution for being on the local network and another when remote.


Wanting a lightweight solution, I decided to fall back on MPD controlled by ncmpcpp. The MPD instance on the macbook gets its metadata from an MPD instance on the NAS, and plays the FLACs natively from the NFS store on the NAS.

Since the NAS has no soundcard, I configured the null audio output and it just hangs out scanning for new media and presents the database to the network.
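For reference, the null output stanza in the NAS's mpd.conf looks something like this (the name string is arbitrary):

```
audio_output {
    type "null"
    name "null out"
}
```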

On the macbook I installed MPD via brew with the NFS option: brew install mpd --with-libnfs and configured MPD to act as a satellite with the following configs:

music_directory "nfs://nas.local/mnt/music/"

database {
    plugin "proxy"
    host "nas.local"
}

Now the local mpd/ncmpcpp plays all the FLACs natively just fine from the NAS. But not having media keys was driving me crazy! When in doubt, just go to GitHub and search for random projects to see if anyone has already hacked something together. From there, I found osxmpdkeys. Once I pointed it at the local MPD instance, the media keys just magically started working! It's a simple service that captures the keypresses and sends them to the daemon. Brilliant.


On the NAS I also run an instance of Subsonic for use on my phone, and for playback when I'm out and about doing computing things. I used Clementine, a thick client player, for Subsonic playback for a while, but it was a little too heavy. Since I actually like the Subsonic web interface, all I had to do was launch BeardedSpice and I had media keys again.


If I end up doing more remote computing, I reckon I will set up Mopidy with a Subsonic backend to maintain a consistent interface for playback. Also, BeardedSpice seems like a pretty squared away project, so I may try writing an MPD handler.
