Rust2018 and the Great CLI Awakening

From Rust2018 and the Great CLI Awakening by Vitiral:

spoiler: simultaniously[sic] refreshing and painful

Rust is a fantastic language for writing a Command Line Application (CLI). For the ergonomics of hacking, it has one of the best argument parsers ever, has seriously the best serialization library ever and it compiles to almost any target and goes fast when it runs.

I had a very similar experience rewriting tcc in Rust. The first iteration was written in JavaScript, and was essentially just a wrapper around the Node title-case package using oclif. In my defence, it took (no exaggeration) fifteen minutes to make; however, running it felt like it took fifteen minutes to complete (slight exaggeration). I’ve been intrigued by Rust since I first heard about it a few years ago, and have been following its development at a distance. I never felt that I had a good excuse to try it, though, until I finally got fed up with tcc 1.0’s performance and took the plunge. I recently saw a fantastic list of excellent resources for learning Rust, and can’t wait to find out all of the anti-patterns I stumbled into in the development of tcc’s second iteration.

❋❋❋

New Year, New Password Manager, Part Deux

Back during the Before Times™, I set out on a quest to free my passwords and various other credentials from the walled garden of the Apple ecosystem, and was fairly satisfied with my first attempt at a solution. Six months later, and, well, you can probably guess from the fact that you’re reading this that things were not so peachy, and you’d be right. So what went wrong? A few minor quibbles here and there, with one annoyance substantial enough to drive me back to the drawing board: that Pandora’s box we all unwittingly unleash upon ourselves the moment we find ourselves with more than one device: synchronization.

In my first post about KeePass, I naïvely hand-waved away the problem of synchronization onto the clients, blissfully unaware of how ill-equipped many were for the task. The principal problem, I believe, lies in the fact that KeePass stores everything in a single binary file. A change to one password affects every password, from the perspective of a client. While backing up one’s database is as simple as copying a file, there was nonetheless much apprehension whenever a sync was required. So, back at the drawing board, I’ve added a requirement of easy conflict resolution. Reviewing my list of available options, I eased my requirement of widespread availability, and settled on pass, "the standard unix password manager".

pass is made with the Unix philosophy in mind; it does one thing, and it does it well, namely retrieving passwords, and it orchestrates other tools to save (any editor you’d like), encrypt (gpg), and version (git) them. Oh, and passwords are saved as individual files, because everything is a file. With version control being handled by git, I think it’s safe to say that the “easy conflict resolution” box can be checked.
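
For a sense of what that looks like in practice, here’s a quick sketch of day-to-day usage; the entry names (and the key ID) are just hypothetical examples:

$ pass init "GPG-KEY-ID"            # create the store, encrypted to your key
$ pass git init                     # put the store under version control
$ pass insert Email/example.com     # add a password (prompts for it)
$ pass generate Social/mastodon 20  # generate a 20-character password
$ pass -c Email/example.com         # copy a password to the clipboard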

Synchronization across devices works in a similar fashion to multiple people collaborating on a project with Git; each device holds a local copy of the password store, pushing and pulling updates to and from one another. For simplicity’s sake, I’ve opted to establish a central repository hosted on a private server using NGINX.
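
Concretely, the initial setup looks roughly like this; the repository name passwords.git is a placeholder of my own choosing, and the domain matches the NGINX configuration below:

# On the server: create a bare repository to act as the central copy
$ git init --bare /var/www/pass/passwords.git

# On each device: point the local store at it and push
$ pass git remote add origin https://my.domain.com/passwords.git
$ pass git push -u origin master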

Providing Password Protection

Although our passwords are individually encrypted, it’s probably still a good idea to password-protect the repository endpoint on the server. There are a variety of ways this can be accomplished, one of the simplest being htpasswd. Add a user like so (you may omit the -c flag if a credential file already exists):

$ htpasswd -c /etc/nginx/.htpasswd <username>

Serving the Password Store

Serving a Git repository with NGINX is relatively straightforward. We can simply pass any requests to git-http-backend via FastCGI. As with password protection, it’s probably a good idea to secure the endpoint with SSL as well. Thankfully, this can be achieved with virtually no effort thanks to Let’s Encrypt. I’ll leave that part of the process as an exercise for the reader.

server {
    listen 443 ssl;
    server_name my.domain.com;
    root /var/www/pass;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    access_log /var/log/nginx/pass.access.log;
    error_log /var/log/nginx/pass.error.log;

    auth_basic "Authentication required";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        client_max_body_size 0; # git pushes can be massive, make sure nginx doesn't abruptly cut the connection
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/libexec/git-core/git-http-backend;
        fastcgi_param GIT_HTTP_EXPORT_ALL "";
        fastcgi_param GIT_PROJECT_ROOT /var/www/pass; # directory containing repositories
        fastcgi_param PATH_INFO $uri;
        fastcgi_param REMOTE_USER $remote_user;
        fastcgi_pass unix:/var/run/fcgiwrap.socket; # pass the request to fastcgi
    }
}
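
With the server in place, bringing another device into the fold is just an (authenticated) clone away; again, passwords.git here is just my placeholder name from earlier:

$ git clone https://<username>@my.domain.com/passwords.git ~/.password-store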

Tug of War

Git relies on the concepts of “pushing” and “pulling” to update others and receive updates from others, respectively. Unfortunately, both are manual processes. Since pass automatically commits changes, though, we can automate pushing through the use of commit hooks. Hooks are simply executables, such as a shell script, that Git runs at various stages of the commit process. For our purposes of automatically pushing updates, we can use a post-commit hook. To do so, add an executable file at /path/to/your/.password-store/.git/hooks/post-commit containing the following:

#!/bin/sh
git push origin master
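
And don’t forget to mark it executable, or Git will silently ignore it (assuming the default store location here):

$ chmod +x ~/.password-store/.git/hooks/post-commit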

Automating pulling updates is a little trickier. Perhaps a scheduled task could regularly run git pull, or some solution using push notifications from the server could be architected. I’ll leave this as an exercise for the reader or my future self.
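
As a stopgap, a minimal sketch of the scheduled approach with cron might look something like this; it assumes the default store location and that credentials for the remote are already cached or stored:

# crontab entry: pull down any new changes every 15 minutes
*/15 * * * * cd "$HOME/.password-store" && git pull --rebase --quiet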

Déjà-vu All Over Again

Because I like the idea of symmetry with the first post, here’s a list of clients I’m currently using:

  • pass, obviously.

  • passff, a Firefox addon for web integration.

  • Pass, an iOS client.

Conclusion

I’m excited to continue taking steps towards a more private, personal infrastructure for my data. In particular, pass, with its philosophy of being a focused, composable application, represents freedom not only for my data, but for my workflow and the tools supporting it.

❋❋❋

In Praise of Tinkering

If you’re reading this, it’s a good bet that you’re a tinkerer like me. I love tweaking and tailoring just about everything I can, to the point that it’s probably confusing that I’m not a Plasma user. To which I say, I believe that form follows function, not that function negates the need for form completely (you can send your complaints to $NAME_OF_OTHER_PODCAST_HOST). Although it’s a perfectly defensible position, epistemically speaking, I don’t believe that everyone except me is a bot; instead I tend to believe that my fellow developers like to tinker as well, to the point that I’m always rather surprised and a little hurt when I hear others say that they’d rather stick to the beaten path; that they don’t want to customize their operating systems more than they absolutely have to. Don’t get me wrong, I’m not saying you’re a bad person if you’ve never felt the desire to make your windows wobbly (@Plasma users), I’m with you there, but I’m not going to subject myself to using bash as my default shell just because I have to SSH into servers at least once a week and am worried about being slightly uncomfortable in that context. That’s a fraction of my time, and I’m significantly more productive using Fish for the rest of it. Not only that, but my comfort with Fish was what encouraged me to dive deeper into shell scripting, something I had previously tried to avoid to a fault, writing ridiculous Python scripts that could have been implemented trivially with a handful of pipes in a one-liner.

This is a common manifestation of the most frequently cited reason I see given against tinkering: the fear of losing the fruits of one’s labor, be it because one has been forced into a situation of using a stock system, such as in the case of the aforementioned SSH session, or from data loss, such as after setting up a new OS. To the former I say, don’t copy-paste configs if you don’t know what they do. Any tinkering you do ought to be done reversibly, i.e. only do it if you at least generally understand what you’ve done and how your modified configuration differs from the original, and have some kind of version control in place (more on that below). Don’t remap “:” to “;” if you’re worried that you won’t be able to remember how to use vanilla vi.

To the people worried about losing their configurations: this is a little more understandable, as it requires some actual effort to rectify, but not much. There are a myriad of solutions available for backing up your configurations (also known as “dotfiles” because they’re frequently stored in hidden files and directories), such as GNU Stow or (my personal favorite) Dotdrop. You can also write installation scripts for the parts that can’t be handled by symlinking dotfiles from a git repo. I have a small collection for mine, and GitHub has an entire page dedicated to documenting some of the most masterful dotfile setups. Even if you aren’t a DevOps engineer, maintaining a git repo and drafting a few shell scripts isn’t bad practice for the kinds of hard skills that every developer needs.
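
To give a flavor of what such a script might look like, here’s a minimal, hypothetical sketch; the file names and the ~/dotfiles path are stand-ins for whatever your own repo contains:

#!/bin/sh
# Symlink a handful of dotfiles from the repo into $HOME,
# backing up anything that's already there.
DOTFILES="$HOME/dotfiles"

for file in .vimrc .gitconfig .config/fish/config.fish; do
    mkdir -p "$(dirname "$HOME/$file")"
    [ -e "$HOME/$file" ] && mv "$HOME/$file" "$HOME/$file.bak"
    ln -sf "$DOTFILES/$file" "$HOME/$file"
done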

Finally, I’d like to address the idea of our machines being mere tools, disposable if anything goes wrong. I completely agree with the latter half of this sentiment. I don’t know if I should be admitting this, but I tend to hose my machine at least once or twice a year. I do something so incredibly stupid that I either have to completely reinstall the OS or, worse, I accidentally delete it while trying to reformat another partition or something. It used to be a horrific event, with repercussions rippling outward days into the future as I slaved over restoring every bit back to its proper place, clicking checkboxes and re-downloading installers. Now, everything I need is backed up onto multiple media in multiple physical locations, and my current configuration is a shell script away. I could throw my computer into a lake and be able to pretend like nothing ever happened within a couple of hours. My computer has been effectively disposable for the past year and a half now, and tinkering with configuration management is the reason why.

To the people who want to see their machines as mere tools, I say they already are. They always have been! They’re functionally equivalent to a hammer in the grand scheme of things; the only difference is that instead of providing mechanical leverage, theirs is computational. As an implementation, they’re actually worse than their physical counterparts. If you’ve ever handled old tools, you’ll have noticed how they wear. Their handles erode in such a way that your hand fits them perfectly. They’ve been sculpted over years of use. Computers, namely operating systems, don’t exhibit such a property. Without that effort, their sharp edges will remain as sharp as the day they were compiled, until the bit rot finally breaks them. They’re almost brittle in this regard. We can smooth those edges, though, with a little effort and a bit of tinkering.

Since we’re all stuck inside for an indeterminate length of time into the future, use some of it to sand some sharp edges. Try a new shell! Make some aliases for frequently-used commands! Write a program that nags you when you inevitably don’t use them for the first few weeks! Ruminate about why you didn’t do this sooner!

But above all, stay safe. We’ll see you on the other side.

#andràtuttobene 🇮🇹