Computer Programming as a Liberal Art

One of the college majors most widely pursued these days is computer science. This is largely because it’s generally seen as a ticket into a difficult and parsimonious job market. Specific computer skills are demonstrably marketable: one need merely review the help wanted section of almost any newspaper to see just how particular those demands are.

As a field of study, in other words, its value is generally seen entirely in terms of employability. It’s about training, rather than about education. Just to be clear: by “education”, I mean something that has to do with forming a person as a whole, rather than just preparing him or her for a given job, which I generally refer to as “training”. If one wants to become somewhat Aristotelian and Dantean, it’s at least partly a distinction between essence and function. (That these two are interrelated is relevant, I think, to what follows.) One sign of the distinction, however, is that if things evolve sufficiently, one’s former training may become irrelevant, and one may need to be retrained for some other task or set of tasks. Education, on the other hand, is cumulative. Nothing is ever entirely lost or wasted; each thing we learn provides us with a new set of eyes, so to speak, with which to view the next thing. In a broad and somewhat simplistic reduction, training teaches you how to do, while education teaches you how to think.

One of the implications of that, I suppose, is that the distinction between education and training has largely to do with how one approaches it. What is training for one person may well be education for another. In fact, in the real world, probably these two things don’t actually appear unmixed. Life being what it is, and given that God has a sense of humor, what was training at one time may, on reflection, turn into something more like education. That’s all fine. Neither education nor training is a bad thing, and one needs both in the course of a well-balanced life. And though keeping the two distinct may be of considerable practical value, we must also acknowledge that the line is blurry. Whatever one takes in an educational mode will probably produce an educational effect, even if it’s something normally considered to be training. If this distinction seems a bit like C. S. Lewis’s distinction between “using” and “receiving”, articulated in his An Experiment in Criticism, that’s probably not accidental. Lewis’s argument there has gone a long way toward forming how I look at such things.

Having laid that groundwork, therefore, I’d like to talk a bit about computer programming as a liberal art. Anyone who knows me or knows much about me knows that I’m not really a programmer by profession, and that mathematical studies were not my strong suit in high school or college (though I’ve since come to make peace with them).

Programming is obviously not one of the original liberal arts. Then again, neither are most of the things we study under today’s “liberal arts” heading. The original liberal arts numbered seven: grammar, dialectic, and rhetoric, all of which were about cultivating precise expression (and which were effectively a kind of training for ancient legal processes); and arithmetic, geometry, music, and astronomy. Those last four were all mathematical disciplines; music and astronomy, in particular, bore virtually no relation to what is taught today under those rubrics. Music was not about pavanes or symphonies or improvisational jazz: it was about the division of vibrating strings into equal sections, and the harmonies thereby generated. Astronomy was similarly not about celestial atmospheres or planetary gravitation, but about proportions and periodicity in the heavens, and the placement of planets on epicycles. Kepler managed to dispense with epicycles, which are now of chiefly historical interest.

In keeping with the spirit, if not the letter, of that original categorization, we’ve come to apply the term “liberal arts” today to almost any discipline that is pursued for its own sake — or at least not for the sake of any immediate material or financial advantage. Art, literature, drama, and music (of the pavane-symphony-jazz sort) are all considered liberal arts largely because they have no immediate practical application to the job of surviving in the world. That’s okay, as long as we know what we’re doing, and realize that it’s not quite the same thing.

While today’s economic life in the “information age” is largely driven by computers, and there are job openings for those with the right set of skills and certifications, I would suggest that computer programming does have a place in the education of a free and adaptable person in the modern world, irrespective of whether it has any direct or immediate job applicability.

I first encountered computer programming (in a practical sense) when I was in graduate school in classics. At the time (when we got our first computer, an Osborne I with 64K of memory and two drives with 92K capacity each), there was virtually nothing to do with classics that was going to be aided a great deal by computers or programming, other than using the very basic word processor to produce papers. That was indeed useful — but had nothing to do with programming from my own perspective. Still, I found Microsoft Basic and some of the other tools inviting and intriguing — eventually moving on to Forth, Pascal, C, and even some 8080 Assembler — because they allowed one to envision new things to do, and project ways of doing them.

Programming — recreational though it originally was — taught me a number of things that I have come to use at various levels in my personal and professional life. Even more importantly, it has taught me fundamental things about the nature of thought and the way I can go about doing anything at all.

Douglas Adams, the author of the Hitchhiker’s Guide books, probably caught the most essential truth about programming in Dirk Gently’s Holistic Detective Agency:

…if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that’s really the essence of programming. By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve learned something about it yourself.

I might add that not only have you yourself learned something about it, but you have, in the process, learned something about yourself.

Adams also wrote, “I am rarely happier than when spending an entire day programming my computer to perform automatically a task that it would otherwise take me a good ten seconds to do by hand.” This is, of course, one of the drolleries about programming. The hidden benefit is that, once perfected, that tool, whatever it is, allows one to save ten seconds every time it is run. If one judges things and their needs rightly, one might be able to save ten seconds a few hundred thousand or even a few million times. At that point, the time spent on programming the tool will not merely save time, but may make possible things that simply could never have been done otherwise.
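
To put rough numbers on that joke (the figures below are illustrative assumptions, not anything from Adams), a full day spent automating a ten-second task breaks even after a few thousand runs, and over a million runs it returns months of saved time:

```python
# Back-of-the-envelope arithmetic for the ten-second joke.
# All figures here are illustrative assumptions.
seconds_saved_per_run = 10
cost_of_writing_tool = 8 * 60 * 60            # one working day, in seconds

break_even_runs = cost_of_writing_tool / seconds_saved_per_run
print(f"Breaks even after about {break_even_runs:,.0f} runs")        # ~2,880 runs

runs = 1_000_000                               # "a few million times", conservatively
net_saved = runs * seconds_saved_per_run - cost_of_writing_tool
print(f"Net saving over {runs:,} runs: about {net_saved / 86_400:.0f} days")  # ~115 days
```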

One occasionally hears it said that a good programmer is a lazy programmer. That’s not strictly true — but the fact is that a really effective programmer is one who would rather do something once, and then have it take over the job of repeating things. A good programmer will use one set of tools to create other tools — and those will increase his or her effective range not two or three times, but often a thousandfold or more. Related to this is the curious fact that a really good programmer is probably worth a few hundred merely adequate ones in terms of productivity. The market realities haven’t yet caught up with this — and it may be that they never will — but it’s an interesting phenomenon.

Not only does programming require one to break things down into very tiny granular steps, but it also encourages one to come up with the simplest way of expressing those things. Economy of expression comes close to the liberal arts of rhetoric and dialectic, in its own way. Something expressed elegantly has a certain intrinsic beauty, even. Non-programmers are often nonplussed when they hear programmers talking about another programmer’s style or the beauty of his or her code — but the phenomenon is as real as the elegance of a Ciceronian period.

Pursuit of elegance and economy in programming also invites us to try looking at things from the other side of the process. When programming an early version of the game of Life for the Osborne, I discovered that simply inverting a certain algorithm (having each live cell increment the neighbor counts of its adjacent spaces, rather than having each space count its live neighbors) achieved an eight-to-tenfold improvement in performance. Once one has done this kind of thing a few times, one starts to look for such opportunities. They are not all in a programming context.
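
A minimal sketch of that inversion, in modern Python rather than the original Osborne-era code: each live cell pushes a count to its eight neighbors, and the next generation is read off the resulting counts, so empty regions of the board never need to be scanned at all.

```python
from collections import defaultdict

def step(live_cells):
    """Advance Conway's Game of Life one generation from a set of (row, col) cells."""
    neighbor_counts = defaultdict(int)
    # Inverted approach: each live cell increments the counts of its eight
    # neighbors, instead of every cell on the board counting its live neighbors.
    for (r, c) in live_cells:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    neighbor_counts[(r + dr, c + dc)] += 1
    # Standard rules, applied only to cells that received any counts.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "glider" pattern, advanced one generation.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
print(sorted(step(glider)))
```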

There are general truths that one can learn from engaging in a larger programming project, too. I’ve come reluctantly to realize over the years that the problem in coming up with a really good computer program is seldom an inability to execute what one envisions: it’s much more likely to be a problem of executing what one hasn’t adequately envisioned in the first place. Not knowing what winning looks like, in other words, makes the game much harder to play. Forming a really clear plan first is going to pay dividends all the way down the line. One can find a few thousand applications for that principle every day, both in the computing world and everywhere else. Rushing into the production of something is almost always a recipe for disaster, a fact explored by Frederick P. Brooks in his brilliantly insightful (and still relevant) 1975 book, The Mythical Man-Month, which documents his own blunders as the head of the IBM System/360 project, and the costly lessons he learned from the process.

One of the virtues of programming as a way of training the mind is that it provides an objective “hard” target. One cannot make merely suggestive remarks to a computer and expect them to be understood. A computer is, in some ways, an engine of pure logic, and it is relentless and completely unsympathetic. It will do precisely what it’s told to do — no more and no less. Barring actual mechanical failure, it will do it over and over again exactly the same way. One cannot browbeat or cajole a computer into changing its approach. There’s a practical lesson there, and probably a moral one as well. People can be persuaded; reality just doesn’t work that way — which is probably just as well.

I am certainly not the first to have noted that computer programming can have this kind of function in educational terms. Brian Kernighan (someone well known to the community of Unix and C programmers over the years, as part of the Bell Labs team that created C and Unix) has argued that it is precisely that, in a New York Times article linked here. Donald Knuth, one of the magisterial figures of the first generation of programming, holds forth on its place as an art here, too. In 2008, members of the faculties of Williams College and Pomona College (my own alma mater) collaborated on a similar statement, available here. Another reflection on computer science and mathematics in a pedagogical context is here. And of course Douglas Hofstadter adumbrated some of the more important issues in his delightful and bizarre 1979 book, Gödel, Escher, Bach: An Eternal Golden Braid.

Is this all theory and general knowledge? Of course not. What one learns along the line here can be completely practical, too, even in a narrower sense. For me it paid off in ways I could never have envisioned when I was starting out.

When I was finishing my dissertation — an edition of the ninth-century Latin commentary of Claudius, Bishop of Turin, on the Gospel of Matthew — I realized that there was no practical way to produce a page format that would echo what normal classical and mediaeval text editions typically show on a page. Microsoft Word (which was what I was using at the time) supported footnotes — but typically these texts don’t use footnotes. Instead, the variations in manuscript readings are keyed not to footnote marks, but to the line numbers of the original text, and kept in a repository of textual variants at the bottom of the page (what is called in the trade an apparatus criticus). In addition, I wanted to have two further sets of notes at the bottom of the page, one giving the sources of the earlier church fathers that Claudius was quoting, and another giving specifically scriptural citations. I also wanted to mark in the margins where the foliation of the original manuscripts changed. Unsurprisingly, there’s really not a way to get Microsoft Word to do all that for you automatically. But with a bit of Pascal, I was able to write a page formatter that would take a compressed set of notes indicating all these things, and parcel them out to the right parts of the page, in a way that would be consistent with RTF and University Microfilms standards.
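
In spirit, the parceling worked something like the following sketch. The real formatter was written in Pascal and emitted RTF; the compressed note syntax here (a line number, a register letter, and the note text, separated by “|”) is invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical register codes: A = apparatus criticus, P = patristic sources,
# S = scriptural citations. The real tool's input format differed.
SECTIONS = {"A": "apparatus criticus", "P": "patristic sources", "S": "scriptural citations"}

def parcel(notes):
    """Group line-keyed notes into the registers at the foot of the page."""
    page = defaultdict(list)
    for note in notes:
        line_no, kind, text = note.split("|", 2)
        page[SECTIONS[kind]].append(f"{line_no} {text}")
    return page

notes = [
    "3|A|legit B : legat C",        # invented variant reading
    "3|S|Matt. 5:17",               # invented scriptural citation
    "7|P|Hieron. In Matth. 1.3",    # invented patristic source
]
for section, entries in parcel(notes).items():
    print(f"{section}: " + "; ".join(entries))
```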

When, some years ago, we were setting Scholars Online up as an independent operation, I was able, using JavaScript, PHP, and MySQL, to write a chat program that would serve our needs. It has done pretty well since: it’s robust enough that it hasn’t seriously failed, and we now have thousands of chats recorded, supporting various languages, pictures, audio and video files, and so on. I didn’t set out to learn programming to accomplish something like this. It was just what needed to be done.

Recently I had to recast my Latin IV class to correspond to the new AP curriculum definition from the College Board. (While it is not, for several reasons, a certified AP course, I’m using the course definition, on the assumption that a majority of the students will want to take the AP exam.) Among the things I wanted to do was to provide a set of vocabulary quizzes to keep the students ahead of the curve, and reduce the amount of dictionary-thumping they’d have to do en route. Using Lee Butterman’s useful and elegant NoDictionaries site, I was able to get a complete list of the words required for the passages in question from Caesar and Vergil; using a spreadsheet, I was able to sort and re-order these lists so as to catch each word the first time it appeared, and eliminate the repetitions; using regular expressions with a “grep” utility in my programming editor (BBEdit for the Macintosh) I was able to take those lists and format them into GIFT format files for importation into the Moodle, where they will be, I trust, reasonably useful for my students. That took me less than a day for several thousand words — something I probably could not have done otherwise in anything approaching a reasonable amount of time. For none of those tasks did I have any training as such. But the ways of thinking I had learned by doing other programming tasks enabled me to do these here.
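
The same pipeline can be sketched in a few lines of Python (the word list and glosses below are invented examples; only the “prompt {=answer}” short-answer pattern belongs to Moodle’s GIFT format):

```python
def first_occurrences(words):
    """Keep each word the first time it appears, preserving reading order."""
    seen = set()
    ordered = []
    for w in words:
        if w not in seen:
            seen.add(w)
            ordered.append(w)
    return ordered

def to_gift(vocab):
    """Turn (word, gloss) pairs into GIFT short-answer questions for Moodle."""
    return "\n\n".join(f"{word} {{={gloss}}}" for word, gloss in vocab)

reading_order = ["arma", "vir", "cano", "arma", "Troia"]          # invented sample
glosses = {"arma": "arms, weapons", "vir": "man", "cano": "I sing", "Troia": "Troy"}

vocab = [(w, glosses[w]) for w in first_occurrences(reading_order)]
print(to_gift(vocab))
```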

Perhaps the real lesson here is that there is probably nothing — however mechanical it may seem to be — that cannot be in some senses used as a basis of education, and no education that cannot yield some practical fruit down the road a ways. That all seems consistent (to me) with the larger divine economy of things.



One response

  John Esposito

    Agreed. Three riffs:

    1. Debugging is a different kind of general-use mental exercise than the creative expression of enacting a vision in code. It’s not easy to see the full logical entailments of things you’ve asserted (written, programmed), or of things others have asserted (written, programmed), or of your and others’ assertions together, or of any change to any of those assertions. The old Medieval obligationes (https://www.jstor.org/stable/20118602) were training for just this kind of entailment-expansion. When a program does something you don’t expect, this means (barring physical failure) that some entailment — of something you wrote, or of something someone else wrote, or of some combination — obtains (in logical reality) that your human mind didn’t grasp. Debugging is figuring out what entailed the thing you didn’t expect. And of course, because logical entailments obtain whether anyone notices or not (and similarly — though not identically — causal, physical entailments), the ability to navigate those entailments ‘beyond the brain’ gives you general exploratory-kubernetic mastery over the world. But also — one doesn’t really understand x until one sees its full entailment set; so this kind of navigation also habituates you to pure understanding.

    2. As literary virtues of programming — to elegance and economy let’s add expressiveness. (Perhaps this is entailed on some understandings of elegance, but I’d like to call it out explicitly.) Consider for instance: what’s wrong with being an ‘if and for’ programmer (i.e. the kind of programmer who does everything by means of conditional branches and loops)? Certainly the code may execute inefficiently (though with modern, cleverly-optimizing compilers and interpreters this is actually far less likely than it was a few decades ago); certainly such code is inelegant, ham-fisted, flat. But also: ifs and loops often don’t express the underlying abstract thought very well. For example: what is a poorer expression of translation than ‘if you say x in language l then you should say y in language m in order to say the same thing’? Even a dictionary — a simple lookup table — better expresses translation than this (see the small sketch after these riffs). Yet bad programmers (and bad expert systems) write oceans of ifs in just this way. Such code is hard to read because it doesn’t really express the thought behind it. Of course the same is true of much non-code expression: someone who expresses everything in terms of a few simple logical structures is hard to understand. And, perhaps more importantly: because the written/scratched/physical output (linguistic or otherwise) carries its own momentum, inexpressive physical outputs carry the expresser’s own brain forward much less than expressive outputs do — killing the mental leverage that externalized writing/painting/playing provides.

    3. Liberal arts are thrilling mainly (I think) because they provide generic ‘freedom to’ — that is, they let your mind access stuff far beyond the stuff it accessed during liberal arts training. But programming is so amazingly broad an example of this. Anything stepwise-thinkable is expressible in a Turing-complete language. So far, so Pythagorean-Platonic — a beautiful floating mind. But programming re-Aristotelianizes thought, too. There is great animal virtue in learning by bashing your head against walls; there is vast angelic (Thomistic sense?) virtue in flying your brain far beyond the flammantia moenia mundi; but the physical realization of thought (that is: code at runtime) grants both of these to your hylomorphic self. Programming gives you the absolute thought-freedom of mathematics along with the humbling physical feedback of breaking badly at runtime. (The specific mechanics run the Plato-Aristotle gamut pretty nicely: say, from heavyweight build+compiled to pure REPL programming.)
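
A tiny illustration of the contrast in the second riff (a chain of ifs versus a simple lookup table), using a few invented Latin-English word pairs:

```python
def translate_with_ifs(word):
    # The 'if and for' style: every fact is buried in control flow.
    if word == "canis":
        return "dog"
    elif word == "felis":
        return "cat"
    elif word == "equus":
        return "horse"
    return None

# The same knowledge expressed as data: the structure itself says "this is a mapping".
LEXICON = {"canis": "dog", "felis": "cat", "equus": "horse"}

def translate_with_table(word):
    return LEXICON.get(word)

assert translate_with_ifs("felis") == translate_with_table("felis") == "cat"
```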