I’ve been experimenting with the tools I use on a regular basis lately – switching up my shell, terminal multiplexer, and even trying out other editors. Today, I’d like to focus on my experiments with my shell.
My old setup
Before this, I had been using a minimal zsh setup for a long time, with only built-in features and a handmade prompt. Zsh is a good shell, probably one of the best POSIX shells out there, and I still use it when a POSIX shell is needed.
However, I got tired of the endless footguns that POSIX shell scripting imposes: easy-to-make errors around quoting, word splitting, and escaping, the sort of thing that makes shellcheck necessary.
I played around with fish for a few days, but it shares many of the same fundamental design choices (chiefly, being ‘stringly typed’) that make POSIX shells such a displeasure to work with.
A Nu shell
While googling around for alternative shells, I stumbled across nushell, a shell that claimed to work with structured data instead of just strings. This was exactly what I was looking for, and I installed it immediately. I decided to work with it for around a month, to give myself enough time to really use it: to see not only how it felt in ordinary usage, but also to give myself the time and opportunity to construct a few pipelines and scripts in it.
All that said, the month is up, and I’ve been collecting examples, thoughts, and some criticisms along the way.
Piping structured data
One of the core features of nushell is that commands return structured data instead of plain strings. Pipelines can pass lists, records, or tables, and individual entries can be one of several built-in datatypes, including rich ones like datetimes, durations, and filesizes.
Nushell can also open many filetypes and turn them into nushell-native data structures to work with, including csv, json, toml, yaml, xml, and sqlite files, and even Excel and LibreOffice Calc spreadsheets.
Once you have your data in nushell datastructures, you can do all sorts of manipulations on it. It feels like an interesting mix of functional programming and SQL, but it actually works really, really well. You can sort, filter, and aggregate the data, use a SQL style join statement between two tables, and use functional programming patterns to manipulate tables.
Some examples of what nushell enables by passing structured data through pipelines include:
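For instance, a couple of illustrative pipelines (the thresholds and limits here are invented for the example):

```nu
# the five largest files modified within the last week
ls
| where modified > ((date now) - 1wk)  # datetime and duration types compare directly
| sort-by size --reverse
| first 5

# processes using more than 500 MiB of memory, largest first
ps
| where mem > 500mb  # 500mb is a filesize literal, compared against a filesize column
| sort-by mem --reverse
```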
All of these can be one-liners, but have been broken up in order to insert explanatory comments.
Parsing non-nu tools
But what if our tool or text file isn’t in a format nushell understands? Thankfully, for most formats, parsing is relatively straightforward. Let’s take this NGINX server log as an example (not a log of real traffic, just a sample log I found).
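A typical line in the common NGINX access-log format looks something like this (the addresses and paths here are made up):

```
203.0.113.42 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326
203.0.113.17 - - [10/Oct/2023:13:55:41 +0000] "GET /static/style.css HTTP/1.1" 200 482
```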
We can parse it into a nu table like so:
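A sketch of such a parse, assuming the common log format shown above and a file named access.log:

```nu
open access.log
| lines
| parse '{ip} - {user} [{date}] "{method} {uri} {protocol}" {status} {bytes}'
| update status {|row| $row.status | into int }
| update bytes {|row| $row.bytes | into int }
| update date {|row| $row.date | into datetime --format '%d/%b/%Y:%H:%M:%S %z' }
```

The parse pattern names each column, and the update steps convert the captured strings into proper integer and datetime values.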
Now that we have it in nushell tables, we can bring all of nushell’s tools to bear on the data.
For example, we could plot a histogram of the most common IPs, just by piping the whole thing into histogram ip.
We could easily calculate the average bytes sent per request.
We could group the records by the day or hour they happened, and analyze each of those groups independently.
And we can do all of that after arbitrarily filtering, sorting, or otherwise transforming the table.
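Sketches of those calculations, assuming the parsed table from above has been stored in a $log variable:

```nu
# average bytes sent per request
$log | get bytes | math avg

# group requests by the day they happened, then count how many landed on each
$log
| group-by {|row| $row.date | format date '%Y-%m-%d' }
| transpose day requests
| update requests {|row| $row.requests | length }
```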
While it would be a pretty long one-liner if we decided to put it on a single line, it’s still quite easy and straightforward to write. Most log formats and command outputs are similarly straightforward to parse.
Defining custom commands, with built-in arg parsing
Nushell has a feature called custom commands, which fill the same purpose as functions in other shells and programming languages, but are a bit more featureful than traditional POSIX shell functions.
First of all, nushell custom commands specify the number of positional arguments they take.
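For example, a hypothetical greet command that takes exactly two positional arguments:

```nu
def greet [greeting, name] {
    $"($greeting), ($name)!"
}

greet "Hello" "world"  # => Hello, world!
```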
You can optionally give the arguments a type.
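The same hypothetical command, with type annotations; passing an argument of the wrong type is now reported as an error:

```nu
def greet [greeting: string, name: string] {
    $"($greeting), ($name)!"
}
```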
You can give an argument a default value, making it optional (this can be combined with a type annotation).
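In this sketch, greeting becomes optional, falling back to "Hello" when omitted:

```nu
def greet [name: string, greeting: string = "Hello"] {
    $"($greeting), ($name)!"
}

greet "world"  # => Hello, world!
```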
Flag parsing, complete with short flags, is included as well. (A flag without a type is treated as a boolean switch, set by its presence or absence.)
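Continuing the hypothetical example, here is one flag that takes a value and one boolean switch with a short form:

```nu
def greet [
    name: string
    --greeting: string = "Hello"  # a flag that takes a value
    --shout (-s)                  # untyped flag: a boolean switch
] {
    let message = $"($greeting), ($name)!"
    if $shout { $message | str upcase } else { $message }
}

greet -s "world"  # => HELLO, WORLD!
```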
And finally, you can add a rest parameter at the end, allowing the command to take a variable number of arguments.
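A sketch of a rest parameter, which collects all remaining positional arguments into a list:

```nu
def greet-all [greeting: string, ...names: string] {
    $names | each {|name| $"($greeting), ($name)!" }
}

greet-all "Hello" "Alice" "Bob"  # a list of two greetings
```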
All of the specified parameters are automatically added to a generated --help page, along with any documentation comments, so that the following code block:
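Here is a sketch of a definition that would produce the help page below; the body, and in particular parsing the cutoff with into datetime, is my guess at one possible implementation:

```nu
# display recently modified files
def recently-modified [
    --cutoff: string = '1 week ago'  # cutoff to be considered 'recently modified'
    ...paths                         # paths to consider
] {
    $paths
    | each {|path| ls $path }
    | flatten
    | where modified > ($cutoff | into datetime)
}
```

The comment above the def becomes the command’s description, and the comments after each parameter become the parameter descriptions in the generated help.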
results in a help page that looks like this:
> recently-modified --help
display recently modified files
Usage:
> recently-modified {flags} ...(paths)
Flags:
--cutoff <String> - cutoff to be considered 'recently modified' (default: '1 week ago')
-h, --help - Display the help message for this command
Parameters:
...paths <any>: paths to consider
Input/output types:
╭───┬───────┬────────╮
│ # │ input │ output │
├───┼───────┼────────┤
│ 0 │ any   │ any    │
╰───┴───────┴────────╯
(the input/output table at the bottom has to do with how the command is used in a pipeline, and is covered in more detail in the book)
This addition of easy argument parsing makes it incredibly convenient to add command line arguments to your scripts and functions, something that is anything but easy in POSIX shells.
Error messages
Nushell brings with it great error messages that explain where the error occurred. In bash, if we have a loop like:
$ for i in $(ls -l | tr -s " " | cut --fields=5 --delimiter=" "); do
echo "$i / 1000" | bc
done
This prints the size of each file in kB. But what if we typo something?
$ for i in $(ls -l | tr -s " " | cut --fields=6 --delimiter=" "); do
echo "$i / 1000" | bc
done
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
This error tells you nothing about what went wrong, and your only option is to start print debugging.
The equivalent in nushell would be:
> ls | get size | each {|item| $item / 1000}
If we typo the size column, we get a nice error telling us exactly what we got wrong, and where in the pipeline the error and value originated. Much better.
> ls | get szie | each {|item| $item / 1000}
Error: nu::shell::column_not_found
  × Cannot find column
   ╭─[entry #1:1:1]
 1 │ ls | get szie | each {|item| $item / 1000}
   · ─┬       ──┬─
   ·  │         ╰── cannot find column 'szie'
   ·  ╰── value originates here
   ╰────
What’s not there yet
Now, nushell is not finished yet. As I write, I am running version 0.91 of nu. Similar to fish, nu not being a POSIX shell means you still need to drop into bash or zsh to source env files, in order to, for example, use a cross-compiling C/C++ SDK. (Thankfully, Python virtualenvs already come with a nu script for you to source, so doing Python dev will not require you to launch a POSIX shell.)
Additionally, while you can write nu script files, invoking them from within nu treats them as external commands, meaning they take in and pass out plain text, rather than the structured data that you would get with a proper custom command or nu builtin.
The best workaround I’ve found so far is, instead of making scripts that you run directly, to define a custom command in the script file, use that file, and then run the custom command, like this:
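A hypothetical recently-modified.nu, reusing the command from earlier; with export def main, the module’s own name becomes the callable command after a use (the body is still my guess at one implementation):

```nu
# recently-modified.nu

# display recently modified files
export def main [
    --cutoff: string = '1 week ago'  # cutoff to be considered 'recently modified'
    ...paths                         # paths to consider
] {
    $paths
    | each {|path| ls $path }
    | flatten
    | where modified > ($cutoff | into datetime)
}
```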
> use recently-modified.nu
> recently-modified --cutoff '2 weeks ago' ./
It’s certainly not the most ergonomic, but it seems to be the best way at the moment to make ‘scripts’ that are integrated with the rest of nushell.
So, overall, is it worth it?
Nushell is certainly a promising project, and I will almost certainly continue using it as my daily shell. It can’t do everything, but dropping into zsh for a task or two every once in a while isn’t that big a deal for me, and having access to such a powerful shell by default has made other tasks much easier. If you regularly use pipelines in your default shell, consider giving Nushell a try.