Book Review: Efficient Linux at the Command Line

The book Efficient Linux at the Command Line was a relief to read. I was already sold on the idea of the command line, and when I heard about this book, it caught my attention immediately. I read it without expecting to learn much, just hoping to pick up at least one new trick. In the end, I learned things that significantly improved my productivity, including `CDPATH`, and now I cannot imagine how I lived so long without it.
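For anyone who has not met `CDPATH` before, it gives `cd` a list of extra parent directories to search. A minimal sketch (the `/tmp/demo` paths are only for illustration, not from the book):

```shell
# CDPATH lists extra parent directories that `cd` searches when the
# target is not found relative to the current directory.
mkdir -p /tmp/demo/projects/my-repo
export CDPATH=.:/tmp/demo/projects

# From anywhere, this now jumps to /tmp/demo/projects/my-repo
cd my-repo
pwd
```

With a real setup you would point `CDPATH` at the directories where your projects live, so `cd some-project` works from anywhere.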

The book is probably most impactful for people who are not yet deep into command-line tools, but, in my opinion, it would not get their attention in the first place. It is a bit ironic that the ones who would benefit the most from the book will probably never read it. Nevertheless, even if you are super familiar with command-line tools, this book will teach you important lessons that will make you more productive, or at least consolidate your knowledge in some aspects.

In my case, for example, I had never fully understood the use of `$()` on the command line to run subcommands, and it now seems much clearer to me. For example, I wanted to review whether all the book-review posts on this blog had the correct tags, so I needed to find all files that did not contain the tag `book-review` and then edit all of them with Neovim. I combined two commands to find all the buggers I needed to edit: the simple command `grep -L "book-review" _posts/** | grep book` returns all files with `book` in the name that do not contain the word `book-review`. Next, I needed to pass that list as arguments to `nvim`, so the final command looks like `vim $(grep -L "book-review" _posts/** | grep book)`, and there are all the files I needed to edit.
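The same pattern can be tried on throwaway files; a small sketch with made-up post names (not my real blog files):

```shell
# $(...) runs the inner pipeline and substitutes its output in place.
# Set up a few fake posts (illustrative names only).
mkdir -p /tmp/posts
printf 'tags: book-review\n' > /tmp/posts/book-a.md
printf 'tags: other\n'       > /tmp/posts/book-b.md
printf 'tags: other\n'       > /tmp/posts/recipe.md

# grep -L lists files that do NOT contain the pattern; the second grep
# keeps only the file names containing "book".
grep -L "book-review" /tmp/posts/*.md | grep book
# prints: /tmp/posts/book-b.md

# So `vim $(grep -L "book-review" /tmp/posts/*.md | grep book)`
# would open exactly the files that still need the tag.
```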

Process substitution is also a powerful tool, but I see it being used less often, so I may forget it over time and will need to come back to it to see what it can do. Hopefully, next time I need something similar, I will at least remember the keywords. It consists of using the output of commands as if they were files. For example, to read the listing of a folder in Vim, you can use `vim <(ls -al)`. The important point is that this syntax only works in bash (and zsh); if you use fish like I do, it has an alternative called `psub`, and the command in fish looks like `vim (ls -al | psub)`. Most of the applications I have seen for this were to diff the results of two commands. Let's see what I will use it for in the future.
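That diff use case can be sketched in one line (bash/zsh syntax, with two trivial stand-in commands):

```shell
# <(cmd) exposes cmd's output through a file-like path, so diff can
# compare the output of two commands without temporary files.
diff <(printf 'a\nb\n') <(printf 'a\nc\n')
```

Any two commands work in place of the `printf`s, e.g. `diff <(ls dir1) <(ls dir2)` to compare two directory listings.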

The author also uses the `awk` command in very interesting ways, for example, to compare numbers in a command's output. Putting it all together in a single command: when I am analysing commands in Nun-db that take more than 100µs, I can first find all command lines in a log file with `grep Server primary.log`:

[2023-08-19T13:40:53.562857000Z nundb::process_request] [6183153664] Server processed message 'unwatch-all' in 176.041µs
[2023-08-19T13:40:54.904943000Z nundb::process_request] [6183153664] Server processed message 'watch featureToggle' in 36.875µs
[2023-08-19T13:40:54.905037000Z nundb::process_request] [6183153664] Server processed message 'watch lastEvent' in 17.625µs
[2023-08-19T13:40:54.905148000Z nundb::process_request] [6183153664] Server processed message 'set-safe lastState -1 {"_id":1692452181129,"value":{"todos":[],"visibilityFilter":"show_all"}}' in 31.583µs
[2023-08-20T20:12:28.095893000Z nundb::process_request] [6183153664] Server processed message 'unwatch-all' in 87.834µs
[2023-08-20T20:13:26.964202000Z nundb::process_request] [6211055616] Server processed message 'unwatch-all' in 65.083µs
[2023-08-20T20:13:34.676154000Z nundb::process_request] [6206763008] Server processed message 'unwatch-all' in 334.375µs

then use `sed` to clean up the lines, `grep Server primary.log | sed 's/.*message//i' | sed 's/µs//i'`, which gives:

 'unwatch-all' in 176.041
 'watch featureToggle' in 36.875
 'watch lastEvent' in 17.625
 'set-safe lastState -1 {"_id":1692452181129,"value":{"todos":[],"visibilityFilter":"show_all"}}' in 31.583
 'unwatch-all' in 87.834
 'unwatch-all' in 65.083
 'unwatch-all' in 334.375
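The two substitutions can be checked against a single synthetic log line (the line below is made up, just shaped like the real ones):

```shell
# The first sed strips everything up to and including "message";
# the second removes the µs suffix, leaving the command and the number.
echo "[ts] [id] Server processed message 'watch x' in 36.875µs" \
  | sed 's/.*message//' | sed 's/µs//'
```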

And finally use the command `awk` to find the slow ones, feeding it the result of the pipeline through process substitution: `awk -F" " '$NF > 100 { print "Command: " $1 "Time: " $NF}' (grep Server primary.log | sed 's/.*message//i; s/in/\t\t/i' | sed 's/µs//i' | psub)`

Command: 'unwatch-all'Time: 176.041
Command: 'unwatch-all'Time: 334.375
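Since `psub` is fish-only, the same final pipeline can be written in bash with `<()`. A sketch against one synthetic log line (I also added a space before "Time:" for readability):

```shell
# Bash version of the fish pipeline above: <() replaces psub.
# One made-up log line shaped like the real Nun-db output.
printf "[ts] [id] Server processed message 'unwatch-all' in 176.041µs\n" > /tmp/primary.log

# $NF is the last field (the duration); keep lines where it exceeds 100.
awk -F" " '$NF > 100 { print "Command: " $1 " Time: " $NF }' \
  <(grep Server /tmp/primary.log | sed 's/.*message//; s/µs//')
# prints: Command: 'unwatch-all' Time: 176.041
```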

I also learned in this example that with `-F` I can specify the field separator awk uses to process files.
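A quick illustration of `-F` with a different separator, on made-up colon-separated data:

```shell
# -F sets awk's input field separator; here fields are split on ":".
# Print the first field of every line whose third field is >= 1000.
printf 'root:x:0\nalice:x:1000\n' | awk -F":" '$3 >= 1000 { print $1 }'
# prints: alice
```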


This book is worth reading whether you are a command-line Jedi or know very little about it; you will get faster and be eager to try new things. It is worth reading and keeping in your library to revisit occasionally. It made me think about reading other books about tmux and the other tools I use every day. Maybe in the near future, I will publish here if I ever get to read another book about them. For now, thanks for reading, and give this book a try.

Where to find

Written on August 29, 2023