Gebrauchsmusik constitutes something with which one has dealings in the way one has dealings with things of everyday use…
– Heinrich Besseler (1925)
“Gebrauchsmusik” is a German compound adopted by English speakers to name something for which they had no single word. It means “music for use,” “utility music,” or, as the Oxford English Dictionary defines it, “music intended primarily for practical use and performance.” We might wonder about this word: is not all music “for use”? The term was coined in the 1920s, however, as an antonym to the default musical category of the time, namely, “concert music” – music to be presented by professionals and silently contemplated by everyone else.
The concept of “Gebrauchsmusik” points back to before the nineteenth century, when composers wrote much of their music for amateurs or particular occasions, and music had not yet acquired the rarefied status it took on as growing middle-class audiences supported ever more specially trained musicians. For twentieth-century composers such as Paul Hindemith, writing Gebrauchsmusik meant writing music with a utilitarian function, for children or amateurs to play, and for new media (radio in his case) to disseminate. Hindemith’s Music Day at Plön (1932), for example, provided a full day of music-making for a children’s camp, divided into “Morning Music,” “Table Music,” “Cantata” and “Evening Concert.” The work has been compared to the medieval “office” – the cycle of psalms, prayers and lessons chanted by monks on a daily basis, according to the hours of a liturgical day.
Today, we frequently hear that new technologies are democratizing access to music making. We can now compose a symphony without reading music, and record a song without singing. By enabling us to do such things, new technologies are encouraging us to emulate professionals – to make concert music. We don’t need new technologies, however, to help us sing in the shower, in our cars, or in our heads as we decide what to do for lunch. Perhaps we just need some modern-day Gebrauchsmusik: some practical songs for everyday use.
I’m currently participating in a seminar entitled “Awakening the Digital Imagination.” Developed by Gardner Campbell at Virginia Tech but now a “networked faculty-staff seminar” at institutions across the country, the course concerns today’s new media – its history, theory, practice, and particularly its applications to education. A premise of the course is that understanding new media requires using new media. Hence blogging is one of our main activities, and for the next eight weeks or so I’ll be using Spooky & the Metronome to post my post-class thoughts.
This does not mean, however, that I’ll stop talking about music. As a blog about music and media, old and new, Spooky & the Metronome is a perfect venue for connecting our seminar discussions to music and musical culture. So let us begin.
This week, we read Vannevar Bush’s “As We May Think” (1945). Writing at the close of the Second World War, Bush turned to the question: to what ends should we apply technology in peacetime? Seeing around him a superabundance of information, Bush identified access as the major roadblock to the advancement of knowledge, and he described a system of information organization and retrieval – the “memex” – that pointed the way toward the computer, the web, and the hyperlink. Bush’s essay is thus significant as a landmark in the prehistory of digital media. More generally instructive, however, is Bush’s critical and productive engagement with the question of what to do with our “new and powerful instrumentalities” – the question of “how to use technology intelligently,” as one person put it in class.
Bush wanted to advance knowledge; let’s say we want to widen access to, and deepen appreciation of, classical music. How can we use our new and powerful instrumentalities to these ends? Michael Tilson Thomas has shepherded into reality numerous responses to this question, from the YouTube Symphony Orchestra to the New World Center. Others have rejected the question, arguing that the value of classical music lies in the freedom it offers from a digitally enmeshed world. The latter stance worries me, not because I think the anxieties producing it are ill-founded, but because I fear that music that doesn’t interface with our dominant communications media will become a tree fallen in the forest – no longer even making a sound. Perhaps we don’t want smartphones in the concert hall; but perhaps we do want concerts on our smartphones. We can only have that discussion, however, if our starting place is not “how do we save classical music from our new technologies?” but rather “how do we apply our new technologies to classical music intelligently?” This is the question I’m asking myself as I teach the “History of the Symphony” this semester and participate in the “Awakening the Digital Imagination” seminar – and one I invite you, gentle reader, to ponder as well.
The microphone as we know it – a device for turning acoustic vibrations into electrical signals – was first conceived in the 1850s, when Charles Bourseul described its application to making the voice audible at a distance. What Bourseul described was a telephone, and the component we call the microphone was intended to pick up the sounds of speech so they could be reproduced at the same time in another place. The component acquired the name microphone, however, thanks to David Edward Hughes, who in 1878 showed that it could be used to make quiet sounds louder – an application he demonstrated upon the footsteps of a fly. This way of conceiving the microphone – as a device for listening in on the tiny or hidden – predated electroacoustics. In 1827, Charles Wheatstone, unaware of the stethoscope invented about a decade earlier, reinvented the device but gave it the more appropriate name “microphone.” Over a hundred years before that, in 1684, the clergyman Narcissus Marsh observed that the ear trumpet should rightly be called a “microphone” on analogy with the “microscope”: it was an acoustical magnifier – a device one put to one’s ear in order to perceive sounds that would otherwise remain inaudible. The microphone, in this sense, was not a device for transmitting one’s voice, but for extending one’s hearing.
Listening to Sad Songs for Cell Phones, we can experience the microphone in its original sense. Though recorded by the microphone built into a cell phone, the songs were not sung to be heard at a distance. The microphone is instead the device that allows us to hear – the ear trumpet we’ve turned upon a hitherto inaudible phenomenon. We are not there in the room where these songs were sung, but the microphone – the technological extension of our hearing – is. This is what we can experience in the lo-fi sound of these recordings, and in the monaural listening they invite.
Auto-Tune: it’s the ubiquitous digital effect that gives pop singers’ voices that robotic quality, that you may know from the YouTube phenom Auto-Tune the News, and that recently prompted Alex Pappademas to start a three-part New York Times blog series with the question: “really now, what’s so bad about auto-tune pop?”
Pappademas speaks to the many who find auto-tune repugnant, and especially to those who justify their (dis)taste for the effect as a matter of standards. As he observes in installment #3:
the biggest criticism Auto-Tune’s critics level against it is that it’s the sonic equivalent of plastic surgery or ‘roids, a digital fix that lets lousy singers skip over that whole learning-to-carry-a-tune thing (boring!) and cut straight to pop stardom’s V.I.P. room.
Auto-tune, in short, is a musical cheat, to which Pappademas replies:
The truth is that artists and producers have been using technology (reverb, overdubbing, electronic harmonizers) to change the sound of their voices for decades. The link between “organic” live performance and recorded music was broken in the late ‘40s when Les Paul popularized multitrack recording.
Indeed. But the arguments surrounding Auto-Tune aren’t unique to vocal effects, and they long predate recorded music. In fact, the same arguments have perennially attended tone-modifying devices of all sorts: many technologies for altering the sound of an instrument were once regarded as crutches but went on to gain acceptance as expressive resources, ultimately becoming part of what it means to know how to play or compose for a given instrument. This is the story of two tone-modifying devices whose detractors have fallen silent: the violin mute and the sustaining pedal.
The Violin Mute
The violin mute, invented in the seventeenth century, is a device applied to the bridge of the violin to dampen its sound. The French composer Lully was one of the first to call for muted violins in a written score, pairing them with recorders in a depiction of enchanting, soporific murmurings in the opera Armide (1686):
By the middle of the eighteenth century, however, French critics regarded the mute as a crutch for violinists incapable of playing softly. As one writer remarked in an article on performance in the Encyclopédie:
It is well known that in Lully’s lifetime the violinists needed to resort to mutes in order to play softly enough in certain passages.
If this view of the mute as a substitute for skill had persisted, the mute no doubt would have become obsolete. Elsewhere in Europe, however, composers and listeners perceived the mute as giving the violin a special tonal quality that unmuted violins could not match. And so composers continued to call for mutes, often combining them with slow, flowing music to produce the peaceful or dreamy atmosphere pioneered by Lully. Today, critics reserve their venom for violinists who fail to use mutes when they should. Speaking of the violin mutes in Haydn’s Il mondo della luna (1777), period-performance specialist Nikolaus Harnoncourt recently remarked:
Players are lazy and think it’s enough just to play very softly. But when composers write ‘con sordino’ — ‘with mute’ — they want a very particular, different sound…The idea was just to hear quivering air.
Nikolaus Harnoncourt conducts “Vado, vado” from Il mondo della luna – with mutes
The Sustaining Pedal
The piano was one of the major technological breakthroughs of the eighteenth century, trumping the harpsichord with its touch-sensitivity and the clavichord with its greater volume. Early on, piano manufacturers experimented with ways of modifying the piano’s tone, eventually arriving at pedals as a hands-free way to apply tone-modifying devices. Many players, however, considered piano technique a matter of fingers only, and pianists who used pedals were commonly charged with charlatanism – with resorting to technological trickery to mask a lack of keyboard skill. Such was the case with the sustaining (or damper) pedal, which, when depressed, lifted the dampers from the strings so that they continued to resonate even after the finger had been lifted from the key. The sustaining pedal was criticized as a cheat for finger legato, and also for producing a muddled blur of sound. As late as 1828, the pianist-composer Hummel maintained:
a truly great artist has no occasion for the pedals…Neither Mozart, nor Clementi, required these helps to obtain the highly deserved reputation of the greatest, and most expressive performers of their day.
By this time, however, the tides were turning. As Beethoven’s student Carl Czerny observed:
by means of the pedal, a fullness can be attained which the fingers alone are incapable of producing.
Initially, composers used the sustaining pedal primarily for special effects, reserving it for unusual passages that stood apart from their surrounding context. One of Beethoven’s comparatively rare indications for sustaining pedal, for example, occurs in the opening of the Tempest Sonata, where it sets off slowly accumulating chords from the hurried Allegro theme.
Beethoven, Piano Sonata Op. 31 No. 2 (Tempest), first movement, first edition (Bonn, 1802)
The mid-1800s, however, witnessed a period of “pedal mania” during which pedaling became ubiquitous, and composer-pianists developed not only modern pedal technique but also new musical styles premised on use of the sustaining pedal. Chopin and Liszt were two of the chief architects of this development. The nocturne style that became Chopin’s trademark, for example, required the pedal to sustain the down-beat bass notes while the left hand moved up to play mid-register chords, and to continue the cantabile legato across large (highly expressive) leaps in the right-hand melody:
Arthur Rubinstein plays Chopin’s Nocturne Op. 9 No. 2
What we’re seeing with Auto-Tune, then, isn’t just a debate about taste masquerading as a debate about standards. It’s part of the process by which new technologies are incorporated into musical practice, transforming from crutches (which replace something old) into expressive resources that enable new musical styles and require new musical skills.
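A postscript for the technically curious: at its core, pitch correction is arithmetic. Measure the pitch the singer actually produced, find the nearest “legal” note of the equal-tempered scale, and shift the audio to match; the robotic sound comes from snapping instantly rather than gliding. The sketch below is not Antares’s actual algorithm – just an illustration of the snapping step, with made-up function names and example frequencies:

```python
import math

A4 = 440.0  # reference tuning pitch, in Hz

def snap_to_semitone(freq_hz: float) -> float:
    """Quantize a detected pitch to the nearest equal-tempered semitone.

    Illustrative sketch only: real pitch correction also has to detect
    the pitch in the first place and resynthesize the audio. The
    characteristic "robotic" sound comes from snapping instantly
    (zero "retune speed") instead of gliding between notes.
    """
    semitones_from_a4 = 12 * math.log2(freq_hz / A4)
    return A4 * 2 ** (round(semitones_from_a4) / 12)

# A slightly flat A4 (435 Hz) is pulled up to 440 Hz;
# a pitch partway to B-flat (455 Hz) jumps to ~466.16 Hz.
print(snap_to_semitone(435.0))  # 440.0
print(snap_to_semitone(455.0))  # ~466.16
```

A voice sliding smoothly between notes, run through this with no smoothing, lands on a staircase of discrete pitches – the very effect that critics call a cheat and that Cher, T-Pain, and their successors turned into a style.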