The Heart of Unix
Despite all of its warts, I like working in Linux. I've used it for 15 years and I've never been as productive in another environment. Most people claim that it's the configurability of Linux that keeps users coming back. That may have attracted me at first, but what attracts me now is its programmability.
Let me be very clear. I'm not saying that Linux is great because I can patch the source code to grep and recompile it. In all my years of Unix, I've never done anything like that. And I'm not saying that Linux is a great workstation for programmers because it helps you program better. Those are topics for another essay.
Unix is a programmable environment
I am saying that Unix is a programmable environment. When you interact with the shell, you are writing programs to be interpreted. You can easily extend the Unix system by writing a shell script, copying it to a directory in your PATH, and making it executable. Boom. You've got a new program.
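Here is a minimal sketch of that workflow, assuming a directory like ~/bin is already on your PATH (the command name is made up):

```bash
# Create a new command: a tiny filter that upper-cases its input.
cat > ~/bin/shout <<'EOF'
#!/bin/sh
tr '[:lower:]' '[:upper:]'
EOF
chmod +x ~/bin/shout

# And now it composes with everything else:
echo 'hello, unix' | shout
```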
What's more, that program, if it follows certain simple conventions, is now able to work with other programs. Those conventions are simple, and they are summed up well by Doug McIlroy, the inventor of Unix pipes:
This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.
If your program reads text lines from standard in and writes text lines on standard out, it is likely to do well on Unix.
Programs on your path are like pure functions in the higher-level language called the shell
Not all programs are so pure. But the vast majority of the programs that are so typically Unixy are: grep, awk, sed, wc, pr, etc.
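To see those "functions" composing, here is a hypothetical pipeline over a web server log (the file name and log format are assumptions; the tools are not):

```bash
# Count the distinct IP addresses that requested /index.html.
# Each stage is a small program; the shell is the glue.
grep 'GET /index.html' access.log | awk '{print $1}' | sort -u | wc -l
```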
Unix is a multi-lingual environment
I must have compilers or interpreters for 30 languages on my machine. Maybe more. All of these languages are invited to the party. They can all call each other (through the shell). And of course their stdins and stdouts can be piped together.
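A contrived sketch of what that looks like in practice:

```bash
# Three languages cooperating in one pipeline:
# Python emits the squares of 0..9, awk keeps the odd ones,
# and Perl labels each surviving line.
python3 -c 'print("\n".join(str(n * n) for n in range(10)))' \
  | awk '$1 % 2 == 1' \
  | perl -ne 'print "odd square: $_"'
```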
You really can use the best tool for the job. I've got Bash scripts, awk scripts, Python scripts, some Perl scripts. What I program in at the moment depends on my mood and practical considerations. It is a little crazy that I don't have to think about what language something is written in when I'm at the terminal.
Unix provides a universal interface with a universal data structure
It needs to be stated that there is a reason all of these languages can work together. There is a standard data structure that programs are invited to use: text streams, meaning sequences of characters. Text streams are cool because they're simple and flexible. You can impose a structure on top of the flat sequence. For instance, you can break it into a sequence of sequences of characters by splitting it on a certain character (like newline). Then you can split those lines into columns. In short, text is flexible.
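/etc/passwd is a handy illustration: a flat stream of characters that becomes rows and columns the moment you split on newlines and then on colons.

```bash
# The flat text stream...
head -3 /etc/passwd
# ...viewed as rows (split on newline) and columns (split on ':').
cut -d: -f1,7 /etc/passwd               # user name and login shell
awk -F: '{print $1}' /etc/passwd | sort | head -5
```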
Unix is homoiconic
There's another property that I think is rarely talked about in the context of Unix. In Lisp, we often are proud that code is data. You can manipulate code with the same functions that you manipulate other data structures. This meta-circularity gives you a lot of power.
But this is the same in Unix. Your programs are text files and so can be grep'd and wc'd and anything else, if you want to. You can open up a pipe to Perl and feed it commands, if you like. And this feeds right back into Unix being programmable.
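For instance (~/bin is just where I keep my scripts; your location may differ):

```bash
# My programs are text, so the text tools apply to them too.
grep -rl 'pandoc' ~/bin      # which of my scripts call out to pandoc?
wc -l ~/bin/*                # and how long is each one?

# Code is data in the other direction too: generate a program and pipe it in.
echo 'print 2 ** 16, "\n";' | perl
```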
Functional + universal data structure + homoiconic = power
All of this adds up to synergy. When you write a program that follows the Unix conventions of stdin/stdout with text streams, it can work with thousands of programs that are already on your computer. What's more, your program has to do less work itself, because so much of the hard work can be done better by other programs.
On the file system, hierarchical names point to data objects
And this synergy extends well beyond just using text streams. I have this tendency to look to databases as storage solutions for my personal projects. They have some nice properties, like ACID and SQL. But by using a database, I'm missing out on joining the Unix ecosystem. If I use the file system to store my data---meaning text files in directories---I can use all of Unix to help me out. I can use find, grep, head, tail, etc., just because I chose to use the measly file system instead of some fancy database.
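Suppose my "database" is a pile of text files under ~/notes (the layout is made up; the tools are standard):

```bash
find ~/notes -name '*.txt' -mtime -7     # which notes changed this week?
grep -rl 'TODO' ~/notes                  # which ones still have open TODOs?
head -1 ~/notes/projects/*.txt           # the title line of every project
tail -5 ~/notes/journal.txt              # the most recent journal entries
```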
Blog example
A good example of the synergy I'm talking about is the blog you are reading now. Here's how my blog works:
I store everything on the file system. I have an src/ directory with drafts/, posts/, pages/, and links/. I wrote a Python script (currently at 183 well-commented lines) that reads src/ and spits out the final product to build/. The Python uses a few libraries, but the meat of it is done by calling other programs. The rendering of Markdown to HTML is done by pandoc, which happens to be written in Haskell. I also do a call out to the shell to copy a directory (cp -rp) because I was too lazy to figure out how to do it in pure Python.

I sync build/ to Amazon S3 with a Ruby program called s3sync. I edit my entries in Emacs. If I need to delete a post, I run rm. If I need to list my posts, I run ls. If I'd like to change the name of a post, I use mv.
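At the shell level, the moving parts boil down to commands roughly like these (the file names are placeholders, and my script's exact pandoc options may differ):

```bash
# Render one post and copy the static assets (names are placeholders).
pandoc -f markdown -t html src/posts/some-post.md -o build/posts/some-post.html
cp -rp src/static build/static

# Day-to-day management is just the ordinary file commands.
ls src/posts/
mv src/drafts/new-idea.md src/posts/
rm src/posts/bad-idea.md
```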
It may not be the best interface for writing a blog. But notice all of the stuff I didn't have to write to get started. I'm already writing posts and publishing them. Compare that to the reams of PHP and Javascript it takes to get the same functionality in Wordpress. That's the power of small tools working together.
Unix is old
Now that I've expressed how great Unix is, allow me to speak about its numerous shortcomings. I can't say for sure, but I would guess that most of the shortcomings are due to the long history of Unix starting on underpowered machines.
For instance, the fact that your programs have to be manually stored to disk using file system operations so that your dynamic shell language can have access to them seems awfully quaint. But when Unix was developed, disk space, RAM, and computation were expensive. Everything was expensive. So the strategy was to cache your compiler output to disk so you wouldn't have to do a costly compile step each time you ran a program.
If I want to write a new program, even a short one, I have to open up a text file in Emacs (make sure it's in the path!), write the program, save it, switch to the terminal, and chmod +x it. Compare that to Clojure, where you constantly define and redefine functions at the REPL. Or, if you like, a Smalltalk system where you can open up the editing menu of anything you can see and change the code, which will then be paged out to disk at a convenient time. Unix clearly has room to grow in that respect.
The file system
The file system is archaic, too. It's reliable, but a little feature-poor. It's one of the reasons I think first about a database before remembering the synergy available with the file system. It doesn't provide any kind of ACID properties. The metadata available is laughable (permissions, owner/group, date, and filesize?). A more modern file system would give a little more oomph to compete with other forms of storage.
The terminal
The terminal is just old. It's all text. The editing is sub-primitive. The help it gives you is the bare minimum. One of its biggest shortcomings is how opaque it is. It doesn't do much to help you learn commands. It's not very good with huge dumps on stdout. Multiline commands? Supported with \. I think we can do better.
Text streams
The world of computers has grown up a lot since the early days of Unix. There has been a Cambrian explosion in the number of file formats. Lots of them are binary formats. Lots are structured text, like XML or JSON. Unix can handle those kinds of files, but it has failed to find a lever to help the Unix user master them with the same synergy you see with flat text files.
Wrong turns
Unix has a long history. Some of that history was kind, some was unkind. Most of the development of Unix was just practical people doing their best with the tools they had.
What's unfortunate is that we now have better tools and we see what could be done, but to do it would break backwards compatibility. And so we continue with sub-optimal tools.
Layering instead of evolving
One thing I think is unfortunate in the world of Unix today is layering. Modern Linux distributions are midden piles of configuration daemons to manage permissions daemons to give your configuration GUI access to the configuration daemons. Or we find ourselves installing a database to manage a few kilobytes of metadata.
The problem is Unix has not evolved in those areas. The permission system has changed very little. Modern distributions want to provide a modern and unintrusive interface to protected resources, so they add a layer of indirection onto the primitive permissions model instead of evolving the permission system itself. The Unix permissions system is solid and has worked for years. Maybe it should stay. But instead of giving us small programs that do one thing well to let us become masters of the permissions system, we get obtuse, opaque daemons that also need to be learned.
The file system, though much improved in terms of capacity, stability, and reliability, still has the same basic features: hierarchical directories containing files, accessed by name. If you want something more, you have to add a layer like BerkeleyDB or SQLite. These tools are great, but I'd like to see a more Unixy solution that allows for the synergy you get from existing programs made to run with files on the disk.
Megacommands
Command bloat is terrible. Rob Pike and Brian Kernighan have written about this. I'll merely refer you to their excellent paper. The gist is that having n commands gives you O(n²) ways of combining them. Having fewer, bigger, "more powerful" programs does not give you this combinatorial and synergistic advantage.
If you look at it the right way, all of these little programs that do one thing are like functions in the higher-level language that is Unix. We see that languages like Perl and Python have huge numbers of libraries for doing all sorts of tasks. Those libraries are only accessible through the programming language they were developed for. This is a missed opportunity for the languages to interoperate synergistically with the rest of the Unix ecosystem.
The road ahead
I've given a bit of a taste of some of the non-Unixy directions we're going in. Now I'd like to end with some right directions.
I mentioned before that saving a compiled binary to disk is done to cache what used to be an expensive operation. With modern hardware, a short utility C program could be read in, parsed, compiled, and run very quickly. Probably with no noticeable delay. It's something to consider when thinking about the division between static programs and dynamic scripting languages and the role of the compiler.
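You can approximate this today with a small wrapper; this sketch (the name crun is made up) compiles a short C file to a temporary binary and runs it in one step:

```bash
#!/bin/sh
# crun: compile a small C program and run it immediately.
# Usage: crun hello.c [args...]
src="$1"; shift
bin=$(mktemp)                          # temporary home for the compiled binary
cc -O2 -o "$bin" "$src" && "$bin" "$@"
status=$?
rm -f "$bin"
exit $status
```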
Talking to Unix
Foreign Function Interfaces between programming languages are considered very difficult to work with because of the semantic mismatch between any two languages. But Unix provides a universal interface for programs to interoperate without the need for FFI. I hope to see more "sugar" in languages to take advantage of calling out to other programs for help. Perl's backticks come to mind.
You might say that this is expensive. Well, yes and no. Yes, there is much more overhead in reading in who-knows-how-many files to execute some script on disk than in just calling some library function. I argue, though, that the time difference is becoming small enough not to matter; and the operating system should evolve to make it more practical.
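The shell already has its own version of that sugar in command substitution, which treats another program's output as a value (the paths here are just illustrative):

```bash
# Capture other programs' output as values, then use them like any variable.
today=$(date +%Y-%m-%d)
posts=$(ls src/posts | wc -l)
echo "As of $today there are $posts posts."
```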
Evolving stdin/stdout
Stdin/stdout with text streams is the closest thing we have to a universal, language-agnostic interface. It defines a minimal "constitution" with which programs can interact. Can this interface be improved on without destroying it? I wouldn't doubt it. There are lots of "data flow" patterns besides input and output. Pub/sub, broadcast, dispatch, etc., should be explored.
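Some of those patterns can already be faked with today's tools; for example, bash's process substitution gives a crude broadcast, and a named pipe gives a crude channel between unrelated processes (a rough sketch, not a proposal):

```bash
# Crude broadcast: one stream feeding several consumers at once (bash-specific).
cat access.log | tee >(wc -l > line-count.txt) >(grep ' 404 ' > not-found.txt) > /dev/null

# Crude channel: a named pipe connecting a producer and a consumer.
mkfifo /tmp/events
grep 'ERROR' app.log > /tmp/events &    # producer
cat /tmp/events                         # consumer
```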
Text streams, evolved
Unix was designed for flat text and the existing Unix tools operate on text. We need new tools to bring structured text and binary into the Unix world to join the party. I don't think this would be hard. I've written programs that read in JSON and write it out with one JSON object per line. That lets you grep it to find the one you want, or wc -l it to count the objects.
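I used my own scripts for that, but jq (mentioned here only as a stand-in) can do the same flattening, after which the ordinary tools take over; the file and field names are made up:

```bash
# Flatten a JSON array into one object per line, then use the flat-text tools.
jq -c '.[]' posts.json > posts.ndjson
grep '"status":"draft"' posts.ndjson    # find the objects you want
wc -l posts.ndjson                      # count the objects
```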
Another thing I've been working on is defining a dispatch mechanism for common operations on files of different types. Take, for instance, metadata that is stored in a file. An HTML file has a title, sometimes it has an author (in a meta tag), etc. A JPEG file has metadata in the EXIF data. Is there some way we can unify the access of that metadata? I think there is and I'm working on it. The same command would dispatch differently based on mime-type.
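A rough sketch of the kind of dispatch I mean (the command name and the handler table are hypothetical; file, grep, and exiftool are real tools):

```bash
#!/bin/sh
# meta: print whatever title-ish metadata a file carries, dispatching on mime-type.
f="$1"
case $(file --brief --mime-type "$f") in
  text/html)  grep -o -i '<title>[^<]*</title>' "$f" ;;   # the HTML <title> element
  image/jpeg) exiftool -Title -Artist "$f" ;;             # EXIF/XMP fields (needs exiftool)
  *)          echo "no metadata handler for this type" >&2; exit 1 ;;
esac
```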
21st Century Terminal
How can we improve the terminal? I think it's a hard problem but not impossible. Part of the issue with the terminal is that as X Windows developed, people started using menus to run monolithic programs instead of piping things with the shell. So the usefulness of the terminal will be improved, without changing the terminal itself, by breaking those monolithic programs up into composable programs. For instance, a program which displays all of the thumbnails of the files listed on stdin would be much more useful to me than a mouse-oriented file browser.
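Something in that spirit can be cobbled together already; this sketch leans on ImageMagick's montage and assumes file names without spaces:

```bash
#!/bin/sh
# thumbs: read image file names on stdin and write one contact-sheet image.
# Usage: ls ~/photos/*.jpg | thumbs sheet.png   (needs ImageMagick)
out="$1"
files=$(cat)                 # slurp the file names from stdin
montage -thumbnail 128x128 -geometry +4+4 $files "$out"
```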
The terminal is about text. I don't think that could or should change. But does it have to be only about text? Explorations are underway.
The Shell
The last improvement I want to touch on is the shell language itself. Bash is ugly. There. I said it. A lot of good work has been done in programming language design and I'd like to see some of it make its way to the shell. What if we take the idea of Unix programs as pure functions over streams of data a little further? What about higher-order functions? Or function transformations? Combinators? What if we borrow from object-oriented languages? Can we have message passing? What about type-based dispatching?
Conclusions
Unix has always been practical and it has proven itself over the years. It's 40 years old and it's still being used. Furthermore, Unix is the closest thing to a personal computing experience[1] that is practical today.
People tend to contrast Unix with systems like the Lisp Machine and Smalltalk. But I see more similarities than differences: Code as data. Everything is programmable. Dynamic language prompt. Universal data structure. A propensity for "dialects" or "distributions". Garbage collection.[2] Unix just made a lot of compromises to make it practical on the limited hardware that was available.
Unfortunately, those compromises have stuck. A lot of work went into workarounds and a lot of software has been built on top of those design decisions. The question is: where to go from here?
My own personal choice is to go back to the roots. Often, when we want to make a change, we must look to what has worked in the past. What has brought us this far? What were the things that made Unix special? Unix was built by individuals all adding their own practical knowledge and hard work into one cohesive system. Their individual work was multiplied by the synergy of a common interface. If we want to evolve Unix (and I do), that common interface---the heart of Unix---is the place to start.
1. When I say "personal computer", I'm referring to Alan Kay's vision:

   What then is a personal computer? One would hope that it would be both a medium for containing and expressing arbitrary symbolic notions, and also a collection of useful tools for manipulating these structures, with ways to add new tools to the repertoire.

2. Unix has a limited form of garbage collection. Short-running programs (like those executed at the terminal) need not concern themselves with freeing allocated memory since the OS will free everything when they exit.