So here we are in 2012, the Year of Code, and we should all be learning to code! Shouldn’t we? Especially if we belong to this community known as Digital Humanities, a field that is endlessly wrestling with its self-definition. Who’s in, who’s out? Is it really necessary to code? Don’t we have to know our stuff, computationally, if we are to understand what computers can do for us? Does coding culture exclude women, and is this imperative therefore sexist? Wouldn’t we be better off concentrating on being better humanists?
As a historian (since, arguably, 1999) and a coder (since 1996), I have to tell you: it’s not easy. Sure, the ability to make things, to dream up a system and watch it take shape, to save yourself three days of work with five minutes of command-line scripting, is wonderfully empowering, and I wouldn’t have it any other way. But along the way, to get to the triumph of having your tests pass and having your program actually work, there is a lot of grunt work, even more frustration, and a lot of time spent looking to your flanks, chasing after problems that aren’t directly related to your actual goal.
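To give a concrete (and entirely invented) flavour of that five-minute script: suppose you want the ten most frequent words across a folder of transcription files. The folder name and file pattern here are assumptions for illustration, not anything from a real project:

```python
# A throwaway script of the kind that replaces days of hand-tallying:
# count word frequencies across a folder of transcription files.
# (The "transcriptions" folder and *.txt pattern are invented.)
import re
from collections import Counter
from pathlib import Path

counts = Counter()
for path in Path("transcriptions").glob("*.txt"):
    # lowercase everything and treat runs of letters as words
    words = re.findall(r"[a-z]+", path.read_text().lower())
    counts.update(words)

for word, n in counts.most_common(10):
    print(f"{n:5d}  {word}")
```

Five minutes to write, and of course it hides a dozen decisions (what counts as a word? what about accents, or hyphenation across lines?) that the prose description "count the words" never mentions.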
The gritty reality of learning to code
Or you run into a problem that you haven’t solved before, but it seems so obvious and so necessary that you know it must have been done. And indeed, you will find eventually that it has been done, but as it is not part of a standard library and the problem is so integrated and/or specific, no one has seen fit to design and release a general-purpose solution for it (which would involve far too much overhead anyway).
My apologies to anyone whom I lost in the preceding pair of paragraphs. The point I am trying to make actually got a name, long ago in Internet history:
You see, yak shaving is what you are doing when you’re doing some stupid, fiddly little task that bears no obvious relationship to what you’re supposed to be working on, but yet a chain of twelve causal relations links what you’re doing to the original meta-task. [Source]
Yak Shaving is the last step of a series of steps that occurs when you find something you need to do. “I want to wax the car today.”
“Oops, the hose is still broken from the winter. I’ll need to buy a new one at Home Depot.”
“But Home Depot is on the other side of the Tappan Zee bridge and getting there without my EZPass is miserable because of the tolls.”
“But, wait! I could borrow my neighbor’s EZPass…”
“Bob won’t lend me his EZPass until I return the mooshi pillow my son borrowed, though.”
“And we haven’t returned it because some of the stuffing fell out and we need to get some yak hair to restuff it.”
And the next thing you know, you’re at the zoo, shaving a yak, all so you can wax your car. [Source]
In fact, I wonder how many budding coders fully realize how prevalent this is. You aren’t three levels deep in browser tabs looking for help on some odd jQuery problem you’re having just because you’re inexperienced; you’re there because all coders are there, at some time or another, and the need to do this never goes away.
You may not even be looking for help. Fundamentally, computer programming is a very low-level task, and the “do what I mean” language has never been invented. You might be able to describe the thing you want to do in a single sentence, but then you have to break it down into a series of computer statements, and you have to break some of those down even further, and you have to be ultra-precise in your interpretation. At some point you will realize that there is some detail of the system that you intended to disregard, but that turns out to be important. There is a parallel to be drawn here with the transcription or translation of manuscript texts: it earns you no credit to speak of; nobody likes doing very much of it; we take shortcuts and then desperately wish we hadn’t, because now we have to re-do some of the work; and we all wish we could pass it off to enthusiastic but cheap helpers. Unless the work gets done, though, you will have nothing to show for your actual idea.
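To make the “break it down” point concrete, here is a sketch (with invented names) of how even the one-sentence task “sort these names alphabetically” dissolves into exactly the sort of details you had intended to disregard:

```python
# "Sort these names alphabetically": one sentence to say, several
# decisions to code. (The names are invented for illustration.)
import unicodedata

names = ["Éluard", "abelard", "Zola", "de Vigny"]

# Attempt 1: raw code-point order puts capitals before lowercase
# and sends 'É' past 'Z'.
print(sorted(names))                 # ['Zola', 'abelard', 'de Vigny', 'Éluard']

# Attempt 2: folding case helps, but accented letters still sort last.
print(sorted(names, key=str.lower))  # ['abelard', 'de Vigny', 'Zola', 'Éluard']

# Attempt 3: strip accents too, so 'Éluard' sorts as 'eluard'.
def fold(name: str) -> str:
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(c for c in decomposed if not unicodedata.combining(c)).lower()

print(sorted(names, key=fold))       # ['abelard', 'de Vigny', 'Éluard', 'Zola']
```

And even this is not finished: a cataloguer would file “de Vigny” under V, which means handling surname particles. The details never quite run out.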
I would even say that the problem is worse, the more interesting the task you are trying to do–and let’s face it, the whole reason you’re a digital humanist is that you want to do interesting things that involve the computer, right? The whole point is to try things that (hopefully) have never been tried before, and certainly to try things you have never tried before. Unlike software contractors who might be providing Solution A for Company Z with a few improvements learned along the way, nearly everything you do is (or ought to be) in an exploratory direction. You will constantly run into situations that you don’t understand, you will write and rewrite and refine the precise set of statements that reflect the concept you thought you had adequately coded six months ago, and you will never feel like an expert at this whole programming business.
Bring on the collaboration
Well, it’s time to bring in the experts then, isn’t it? Here is where we come to another issue that DH (and before that, humanities computing, and before that, academic programming) has been facing for a long time. What does it mean to collaborate?
The answer to this question, in fact, might depend on your answer to the question “does a digital humanist need to learn to code?” The answers that I have seen tend to fall into two categories:
- No, as long as you can think systematically and understand the possibilities that digital methods open to humanities research, who cares if you know how to run a compiler? That’s what collaboration is for.
- Of course you have to learn to code, because otherwise you will never fully understand the possibilities, and anyway you will simply not get anywhere if you sit around waiting for others to provide the tools for your specific problems.
So it is clear in both of these answers that the two themes of methodological theory and programming skill are relevant, and in one answer they are more intertwined than in the other. But how far can collaboration really take us, today, in digital humanities research?
As Andrew Prescott most recently pointed out, in most collaborations between the academic and the programmer, the academic considers him- or herself the lead partner, and it is the responsibility of the programmer to realize the vision that will lead to a successful research outcome. The vision may well have been shaped by the programmer, but the primary goal was the academic one all along. This dynamic has not disappeared with the establishment of dedicated Departments of Digital Humanities and DH academic programs. The “traditional” humanist still tends to call the shots; the digital humanist supplies the hired help, and it is then up to him or her to find some means of extracting academic credit for the substantial work that is nevertheless not considered to be academic output worthy of record. In this model, while equal partnerships can happen, they are exceedingly rare. (That said, a properly equal partnership of this form does usually indicate a truly innovative project, since it implies that there is something there that is academically interesting to multiple fields.)
So to make any headway on the tenure track, it seems, the digital humanist must often take the driver’s seat of the project (that is, work mostly on the humanities side) and seek collaboration with one or more programmers. This is the model of collaboration implied by those who see no need for digital humanists to do the coding themselves. But in this case there is no balance to be struck. Both the research result and the methodological credit will go to the non-coding humanist, digital or otherwise, who will simply have contracted out the grunt work necessary to build the actual tools. Now the coder is in the same position that the digital humanist occupied in the first scenario, only with even less of the academic credit; it is usually assumed that the coder is not really an academic at all. The work becomes just another programming job, albeit one that makes for good dinner conversation. Thus, while this is a fine model for employment if the humanist can afford it, it is not academic collaboration either.
The fundamental problem with humanities computing (if I may return to the slightly outdated phrase, and revive it to refer specifically to the practice of writing computer programs to solve problems in the humanities) is that an awful lot of the work has an awful lot of yak hair stuck to it. True, the end product might be spectacular. The methodological concepts behind the code might be mind-bendingly innovative. But how many academics can afford either the time to carry these projects through, or the money to hire people who can?
So by all means, get out there, learn to code. Find out what is possible. But understand that the things you want to do are still going to be hard, and forbiddingly time-consuming, without any sort of guarantee that the investment will pay off. If every digital humanist who doesn’t already know how to code gets out there tomorrow and signs up for a class, if the doors to this field are trampled down by techies and early dot-com retirees who really are code wizards and want a change of pace, what then? How will we explain to funders that we haven’t written any papers for the last six months because we were too busy trying to build a computational model for the evolution of Greek iconography from the tenth to the sixteenth centuries, and ran into some problems with databases along the way, and realized halfway through that the model needed to be re-designed to include UV identification of ink types? Put another way, how is our field going to bridge the gap between what we would like to do and what we are able to do?