Did Steve Jobs simply achieve what technology and society would have brought about anyway? That’s like saying modern music would have been the same without Wagner.
Engelbart was trained as an engineer, and based his designs on engineering logic. What if he had thought in terms of musical logic? In terms of harmony rather than of architecture, for example?
Is bebop the musical parallel of networked interaction?
Are these new virtual classroom technologies a dead-end and a waste? That’s like trashing the first piano. We have to learn how to play it.
I’m paraphrasing here, but all of these statements were made in today’s New Media Faculty Seminar – and none of them by the musicologist in the room (i.e. me). This is a very exciting development. It shows how useful thinking about music can be as we grapple with the transformational technologies of our day, where they came from, and what’s next. To understand technological “genius,” how metaphors shape design, how our technologies might be other than they are, and our experiences with new technologies, we do well to think about composers, musical instruments, musical styles – the history of music.
If the user can think his problem through in advance, symbiotic association with a computing machine is not necessary…One of the main aims of man-computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems. [emphasis added]
What does “man-computer symbiosis” sound like? The question might conjure the vocoder or auto-tune – sonic mixtures of man and computing machine that, when functioning to augment expressivity, I’ve called hyperhuman. When computer scientist J. C. R. Licklider wrote about the potential for “man-computer symbiosis” in 1960, however, he had something different in mind – a use of computing machines not just to help perform work, but to help formulate the work to be done. For Licklider, man-computer symbiosis was an “expected development,” requiring major advances in computer memory, programming languages, and interfaces. Today, it’s a reality – and it sounds like the music of David Cope and Emily Howell.
In 1980, David Cope had composer’s block. A friend suggested he write a computer program to help him compose; Cope took the suggestion, and the result was EMI (Experiments in Musical Intelligence): a computer program that analyzed a composer’s “musical DNA” and produced new works in that composer’s style.
David Cope discusses EMI on Radiolab, with musical examples:
EMI was a highly sophisticated realization of an old idea: composition by algorithm, using combinatorial procedures. Athanasius Kircher was one of the first to use combinatorial procedures to mechanize musical composition. In 1650, he described a box containing wooden strips covered with sequences of numbers and rhythmic values; by selecting and combining sequences on these strips according to Kircher’s rules, anyone – even those with no musical knowledge – could compose a hymn in four-part counterpoint. Kircher called this box his “arca musarithmica,” or “music-making ark,” and presented it as a musical marvel to astound his royal patrons.
Kircher's music-making ark
In the eighteenth century, composers turned composition by algorithm into a popular diversion by publishing musical fragments together with instructions for combining them into pieces. In 1757, for example, C. P. E. Bach published an “Invention by which Six Measures of Double Counterpoint can be Written without a Knowledge of the Rules.” Bach instructed readers to invent two rows of six digits each, and explained how to cross-reference these digits with the tables of notes he provided. Following his procedure produced one of over two hundred billion possible short pieces of two-voice invertible counterpoint.
C. P. E. Bach, Table 2 from "Invention by which Six Measures of Double Counterpoint…"
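For the curious, the arithmetic of Bach’s invention can be sketched in a few lines of code. The tables below are placeholders, not Bach’s actual note tables, and I’m assuming the digits run from 1 to 9 – an assumption that happens to reproduce his advertised count, since 9^12 = 282,429,536,481, or just over two hundred billion.

```python
# A sketch of C. P. E. Bach's "Invention" (1757), with hypothetical tables.
# Bach's actual tables pair specific notes with each digit; here each table
# entry is just a labeled placeholder standing in for a measure of music.

# Assumed: digits run 1-9, which reproduces the advertised count:
# 9 choices per digit x 12 digits = 9**12 = 282,429,536,481 possible pieces.
UPPER = {d: f"upper-voice figure {d}" for d in range(1, 10)}  # stand-in table
LOWER = {d: f"lower-voice figure {d}" for d in range(1, 10)}  # stand-in table

def compose(upper_row, lower_row):
    """Cross-reference two invented rows of six digits against the tables."""
    assert len(upper_row) == len(lower_row) == 6
    return [(UPPER[u], LOWER[l]) for u, l in zip(upper_row, lower_row)]

piece = compose([3, 1, 4, 1, 5, 9], [2, 7, 1, 8, 2, 8])
for measure, (upper, lower) in enumerate(piece, start=1):
    print(f"m.{measure}: {upper} over {lower}")

print("Possible pieces:", 9**12)  # 282,429,536,481 - over two hundred billion
```

The reader supplies nothing but twelve arbitrary digits; all the counterpoint lives in the tables – which is exactly the point made below about who does the “formulating.”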
More common were procedures involving dice, which produced brief minuets or other dance pieces. In a musical dice game attributed to Mozart, the player rolls a pair of dice to obtain a number (2-12), looks this number up on a chart, under the column for the measure he is composing, to obtain another number (1-176), then cross-references this latter number with a table of 176 measures of music to identify the next measure of his 16-measure minuet. Repeating the procedure with one die and a table of 96 measures for the 16-measure trio, the player produces one of over 10^29 possible minuet-and-trio pairs.
Mozart's Dice Game: The iPhone App
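Here is a minimal sketch of the game’s machinery. The lookup charts are arbitrary stand-ins for the published tables; only the combinatorial structure is faithful: two dice give eleven possible totals for each of the minuet’s 16 measures, one die gives six for each of the trio’s, and 11^16 × 6^16 works out to roughly 1.3 × 10^29 possible pieces.

```python
import random

# A sketch of the dice game's combinatorial logic. The real game uses
# published charts; these are hypothetical stand-ins that simply map each
# (dice total, measure position) pair to a measure number.

MEASURES = 16  # both the minuet and the trio are 16 measures long

# 11 possible two-dice totals (2-12) x 16 positions -> measures 1-176
minuet_chart = [[row * MEASURES + col + 1 for col in range(MEASURES)]
                for row in range(11)]
# 6 single-die faces (1-6) x 16 positions -> measures 1-96
trio_chart = [[row * MEASURES + col + 1 for col in range(MEASURES)]
              for row in range(6)]

def compose_minuet():
    """Roll two dice for each of the minuet's 16 measure positions."""
    return [minuet_chart[random.randint(1, 6) + random.randint(1, 6) - 2][col]
            for col in range(MEASURES)]

def compose_trio():
    """Roll one die for each of the trio's 16 measure positions."""
    return [trio_chart[random.randint(1, 6) - 1][col]
            for col in range(MEASURES)]

print("Minuet measures:", compose_minuet())
print("Trio measures:  ", compose_trio())
print("Possible pieces:", 11**16 * 6**16)  # ~1.3 x 10^29
```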
All of these devices suggest that musical composition can be mechanized – that even human invention is essentially mechanical. Yet in each case, one need only scratch the surface to see that machines are not participating in the “formulative parts” of musical composition. All of the musical material has been formulated by a human composer, with constraints placed around it to enable its algorithmic recombination into acceptable new music.
Cope’s latest project, however, is different. Cope calls EMI’s successor program Emily Howell, and the humanizing name is telling. Cope composes with Emily Howell in a cooperative – we could say symbiotic – relationship. As Ryan Blitstein reports:
Instead of spitting out a full score, it converses with Cope through the keyboard and mouse. He asks it a musical question, feeding in some compositions or a musical phrase. The program responds with its own musical statement. He says “yes” or “no,” and he’ll send it more information and then look at the output. The program builds what’s called an association network — certain musical statements and relationships between notes are weighted as “good,” others as “bad.” Eventually, the exchange produces a score, either in sections or as one long piece.
In Licklider’s terms: dice games and EMI mechanically extend the composer whose music they recombine; Emily Howell enables composer-computer symbiosis. By cooperating with Emily Howell on compositional decisions, Cope has effectively brought a computing machine into the “formulative parts” of musical composition.
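To make the shape of that exchange concrete, here is a toy version of the yes/no loop Blitstein describes – emphatically not Cope’s code, with invented names throughout. The “association network” is reduced to a dictionary of weights on transitions between placeholder “musical statements,” nudged up on approval and down on rejection.

```python
import random
from collections import defaultdict

# A toy version of the feedback loop Blitstein describes - not Cope's code.
# "Musical statements" are placeholder strings; the "association network"
# is reduced to weights on (previous, candidate) transitions.
weights = defaultdict(float)

def propose(previous, candidates):
    """Prefer the candidate whose transition from `previous` weighs most."""
    return max(candidates,
               key=lambda c: (weights[(previous, c)], random.random()))

def respond(previous, statement, approved):
    """The composer's yes/no feedback adjusts the association network."""
    weights[(previous, statement)] += 1.0 if approved else -1.0

# Toy session: the "composer" approves any statement developing motif A.
previous = "opening phrase"
candidates = ["motif A, inverted", "motif B, retrograde", "motif A, sequenced"]
for _ in range(10):
    statement = propose(previous, candidates)
    respond(previous, statement, approved="motif A" in statement)

for (prev, cand), w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{prev!r} -> {cand!r}: {w:+.1f}")
```

Everything “musical” here is a placeholder; what matters is the shape of the exchange – propose, respond, reweight – which lets the machine share in formulation rather than mere recombination.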
I’m currently participating in a seminar entitled “Awakening the Digital Imagination.” Developed by Gardner Campbell at Virginia Tech but now a “networked faculty-staff seminar” at institutions across the country, the course concerns today’s new media – its history, theory, practice, and particularly its applications to education. A premise of the course is that understanding new media requires using new media. Hence blogging is one of our main activities, and for the next 8 weeks or so I’ll be using Spooky & the Metronome to post my post-class thoughts.
This does not mean, however, that I’ll stop talking about music. As a blog about music and media, old and new, Spooky & the Metronome is a perfect venue for connecting our seminar discussions to music and musical culture. So let us begin.
This week, we read Vannevar Bush’s “As We May Think” (1945). Writing as the Second World War drew to a close, Bush turned to the question: to what ends should we apply technology in peacetime? Seeing around him a superabundance of information, Bush identified access as the major roadblock to the advancement of knowledge, and he described a system of information organization and retrieval that pointed the way toward the computer, the web, and the hyperlink. Bush’s essay is thus significant as a landmark in the prehistory of digital media. More generally instructive, however, is Bush’s critical and productive engagement with the question of what to do with our “new and powerful instrumentalities” – the question of “how to use technology intelligently,” as one person put it in class.
Bush wanted to advance knowledge; let’s say we want to widen access to, and deepen appreciation of, classical music. How can we use our new and powerful instrumentalities to these ends? Michael Tilson Thomas has shepherded into reality numerous responses to this question, from the YouTube Symphony Orchestra to the New World Center. Others have rejected the question, arguing that the value of classical music lies in the freedom it offers from a digitally enmeshed world. The latter stance worries me, not because I think the anxieties producing it are ill-founded, but because I fear music that doesn’t interface with our dominant communications media will become a tree fallen in the forest – no longer even making a sound. Perhaps we don’t want smart phones in the concert hall; but perhaps we do want concerts on our smart phones. We can only have that discussion, however, if our starting place is not “how do we save classical music from our new technologies?” but rather “how do we apply our new technologies to classical music intelligently?” This is the question I’m asking myself as I teach the “History of the Symphony” this semester and participate in the “Awakening the Digital Imagination” seminar – and which I invite you, gentle reader, to ponder as well.
The microphone as we know it – a device for turning acoustic vibrations into electrical signals – was first conceived in the 1850s, when Charles Bourseul described its application to making the voice audible at a distance. What Bourseul described was a telephone, and the component we call the microphone was intended to pick up the sounds of speech so they could be reproduced at the same time in another place. The component acquired the name microphone, however, thanks to David Edward Hughes, who in 1878 showed that it could be used to make quiet sounds louder – an application he demonstrated upon the footsteps of a fly. This way of conceiving the microphone – as a device for listening in on the tiny or hidden – predated electroacoustics. In 1827, Charles Wheatstone, unaware of the stethoscope invented about a decade earlier, reinvented the device but gave it the more appropriate name “microphone.” Over a hundred years before that, in 1684, the clergyman Narcissus Marsh observed that the ear trumpet should rightly be called a “microphone” on analogy with the “microscope”: it was an acoustical magnifier – a device one put to one’s ear in order to perceive sounds that would otherwise remain inaudible. The microphone, in this sense, was not a device for transmitting one’s voice, but for extending one’s hearing.
Listening to Sad Songs for Cell Phones, we can experience the microphone in its original sense. Though recorded by the microphone built into a cell phone, the songs were not sung to be heard at a distance. The microphone is instead the device that allows us to hear – the ear trumpet we’ve turned upon a hitherto inaudible phenomenon. We are not there in the room where these songs were sung, but the microphone – the technological extension of our hearing – is. This is what we can experience in the lo-fidelity of these recordings, and in the monaural listening they invite.