  • A Dynamic Medium for Creative Thought: Beethoven’s Erard Piano

    For most of his early career, Beethoven played on German and Viennese pianos.  With the light action, clear attack, and rapid decay of these instruments, he composed themes such as this, from the concluding movement of Op. 49, No. 1:

    Op. 49, No. 1 – Rondo: Allegro; Zvi Meniker playing a reproduction of an Anton Walter piano, c. 1790

    Beethoven, Op. 49, No. 1, third movement (first edition)

    In 1803, Beethoven received a new French piano from the maker Sébastien Érard.  This piano responded differently to his touch, used a foot pedal rather than a knee lever to lift the dampers, and produced different sonorities.  It was – as Tilman Skowroneck has discussed – a new tool with which to conceive musical ideas.  With the less distinct attack and longer decay of its tones, Beethoven explored new possibilities and abandoned old ones.  One result was themes such as this, from the last movement of Op. 53 (the Waldstein Sonata):

    Sonata No. 21, Op. 53 – Rondo: Allegretto moderato – Prestissimo, Bart van Oort playing a c. 1815 Salvatore Lagrassa piano

    Beethoven, Op. 53, third movement (first edition)

    The long slurs, the slow pace of harmonic change, the rippling accompaniment in the right hand while the left hand alternates between resounding bass notes and the treble-register melody, the bell-like quality of this theme – all are products of Beethoven’s interaction with his new Érard piano, the medium of his creative thought.

    In “Personal Dynamic Media” (1977), Alan Kay and Adele Goldberg heralded a new “dynamic medium for creative thought” in the form of the Dynabook, a predecessor to the notebook computer.

    Kay and Goldberg described the Dynabook as an active medium (or really, a “metamedium” that can be all other media), which they saw as essentially unprecedented.  “For most of recorded history,” they wrote, “the interactions of humans with their media have been primarily nonconversational and passive in the sense that marks on paper, paint on walls, even ‘motion’ pictures and television, do not change in response to the viewer’s wishes.”

    Yet the piano, and indeed all musical instruments, are responsive media.  Some are more responsive than others – in 1796, Beethoven complained that a Streicher-made piano deprived him of “the freedom to create my own tone.”   But all musical instruments respond to the “queries and experiments” (to use Kay and Goldberg’s language) of their users.

    Why did Kay and Goldberg exclude musical instruments from the prehistory of the Dynabook?  Not out of neglect.  As Kay and Goldberg state, “one of the metaphors we used when designing such a system was that of a musical instrument, such as a flute, which is owned by its user and responds instantly and consistently to its owner’s wishes.”  Here, the reason for the exclusion becomes clear: Kay and Goldberg conceived musical instruments as interfaces, not as media.

    Recently, Kay has suggested that musical instruments and computers belong to the same category. In a 2003 interview, he remarked, “the computer is simply an instrument whose music is ideas.” This sounds like a statement from a culture in which musical instruments are primarily vehicles for already-composed music.  It is as if music exists prior to instruments, simply waiting to be accessed.  That musical instruments and computers are now the same for Kay may reflect the failure of one of the dreams behind the Dynabook: the dream that everyone would become computer “literate.”  Discussing the thinking behind the programming language he developed for the Dynabook, Kay explained, “the ability to ‘read’ a medium means you can access materials and tools generated by others. The ability to ‘write’ in a medium means you can generate materials and tools for others. You must have both to be literate.”  The early environments developed using Kay’s language emphasized the “writing” side of literacy: they were for such activities as painting, animation, and composing.  On the Dynabook, kids wouldn’t learn how to play a musical instrument – they would create their own musical instruments, and write music with them.  With the Dynabook, Kay and Goldberg hoped, “acts of composition and self-evaluation could be learned without having to wait for technical skill in playing.”

    Music on the Dynabook prototype.  On the right, “a musical instrument is created.”


    But let’s look at how Kay and Goldberg conceptualized media in 1977: “external media serve to materialize thoughts and, through feedback, to augment the actual paths the thinking follows.”  That, to me, sounds like a good description of media.  And it sounds like an excellent description of Beethoven’s Érard piano.  Which should teach us that no technology can be a shortcut to our ideas – but any can be a medium for creative thought.

  • Transmitting Knowledge: Two Views

    Theodor H. Nelson, “No More Teachers’ Dirty Looks” (1970)

    We can now build computer-based presentational wonderlands, where a student (or other user) may browse and ramble through a vast variety of writings, pictures and apparitions in magical space, as well as rich data structures and facilities for twiddling them.

    “Face to Face: Alan Kay Still Waiting for the Revolution” (2003)

    It’s like missing the difference between music and instruments. You can put a piano in every classroom, but that won’t give you a developed music culture, because the music culture is embodied in people.

    On the other hand, if you have a musician who is a teacher, then you don’t need musical instruments, because the kids can sing and dance. But if you don’t have a teacher who is a carrier of music, then all efforts to do music in the classroom will fail—because existing teachers who are not musicians will decide to teach the C Major scale and see what the bell curve is on that.

    The important thing here is that the music is not in the piano. And knowledge and edification is not in the computer. The computer is simply an instrument whose music is ideas….

    So computers are actually irrelevant at this level of discussion—they are just musical instruments. The real question is this: What is the prospect of turning every elementary school teacher in America into a musician? That’s what we’re talking about here. Afterward we can worry about the instruments.

  • The New Applied Musicology

    Did Steve Jobs simply achieve what technology and society would have brought about anyway?  That’s like saying modern music would have been the same without Wagner.

    Engelbart was trained as an engineer, and based his designs on engineering logic.  What if he had thought in terms of musical logic?  In terms of harmony rather than of architecture, for example?

    Is bebop the musical parallel of networked interaction?

    Are these new virtual classroom technologies a dead-end and a waste?  That’s like trashing the first piano.  We have to learn how to play it.

    I’m paraphrasing here, but all of these statements were made in today’s New Media Faculty Seminar – and none of them by the musicologist in the room (i.e. me).  This is a very exciting development.  It shows how useful thinking about music can be as we grapple with the transformational technologies of our day, where they came from, and what’s next.  To understand technological “genius,” how metaphors shape design, how our technologies might be other than they are, and our experiences with new technologies, we do well to think about composers, musical instruments, musical styles – the history of music.

    It’s the new applied musicology.

  • The Music of Man-Computer Symbiosis

    If the user can think his problem through in advance, symbiotic association with a computing machine is not necessary…One of the main aims of man-computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems. [emphasis added]

    -J. C. R. Licklider, “Man-Computer Symbiosis” (1960)

    What does “man-computer symbiosis” sound like?  The question might conjure the vocoder or auto-tune – sonic mixtures of man and computing machine that, when functioning to augment expressivity, I’ve called hyperhuman.  When computer scientist J. C. R. Licklider wrote about the potential for “man-computer symbiosis” in 1960, however, he had something different in mind – a use of computing machines not just to help perform work, but to help formulate the work to be done.  For Licklider, man-computer symbiosis was an “expected development,” requiring major advances in computer memory, programming languages, and interfaces.  Today, it’s a reality – and it sounds like the music of David Cope and Emily Howell.

    In 1980, David Cope had composer’s block.  A friend suggested he write a computer program to help him compose, Cope took the suggestion, and the result was EMI: a computer program that analyzed a composer’s “musical DNA” and produced new works in that composer’s style.

    David Cope discusses EMI on Radiolab, with musical examples:

    EMI was a highly sophisticated realization of an old idea: composition by algorithm, using combinatorial procedures.  Athanasius Kircher was one of the first to use combinatorial procedures to mechanize musical composition.  In 1650, he described a box containing wooden strips covered with sequences of numbers and rhythmic values; by selecting and combining sequences on these strips according to Kircher’s rules, anyone – even those with no musical knowledge – could compose a hymn in four-part counterpoint.  Kircher called this box his “arca musarithmica,” or “music-making ark,” and presented it as a musical marvel to astound his royal patrons.

    Kircher's music-making ark

    In the eighteenth century, composers turned composition by algorithm into a popular diversion by publishing musical fragments together with instructions for their combination into pieces.   In 1757, for example, C. P. E. Bach published an “Invention by which Six Measures of Double Counterpoint can be Written without a Knowledge of the Rules.”  Bach instructed readers to invent two rows of numbers with six digits each, and explained how to cross-reference these numbers with the tables of notes he provided. Following his procedure produced one of over two hundred billion possible short pieces of two-voice invertible counterpoint.

    C. P. E. Bach, Table 2 from "Invention by which Six Measures of Double Counterpoint…"

    More common were procedures involving dice, and producing brief minuets or other dance pieces.  In a musical dice game attributed to Mozart, the player rolls a pair of dice to obtain a number (2-12), looks this number up on a chart to obtain another number (1-176), then cross-references this latter number with a table of 176 measures of music to identify the next measure for his 16-measure minuet.  Repeating the same procedure with one die and a table of 96 measures for the 16-measure trio, the player would produce one of over 10^29 possible minuets with trios.

    Mozart's Dice Game: The iPhone App
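    The dice-game procedure is simple enough to sketch in a few lines of code.  In the sketch below, the lookup chart is a hypothetical placeholder filled with arbitrary measure numbers (the historical game pairs each measure position and dice total with a specific pre-composed measure); the combinatorial count, however, follows directly from the rules: eleven possible dice totals for each of 16 minuet measures, and six for each of 16 trio measures.

```python
import random

random.seed(0)

# Hypothetical chart (placeholder values): for each of the 16 minuet
# measure positions, map each possible two-dice total (2-12) to one of
# the 176 pre-composed measures.
MINUET_CHART = {
    position: {total: random.randint(1, 176) for total in range(2, 13)}
    for position in range(1, 17)
}

def roll_two_dice():
    """Roll a pair of dice and return their sum (2-12)."""
    return random.randint(1, 6) + random.randint(1, 6)

def compose_minuet():
    """Select 16 measure numbers by rolling dice and consulting the chart."""
    return [MINUET_CHART[pos][roll_two_dice()] for pos in range(1, 17)]

# Eleven outcomes per minuet measure, six per trio measure:
combinations = 11**16 * 6**16
print(compose_minuet())       # a sequence of 16 measure numbers
print(combinations > 10**29)  # True - "over 10^29 possible minuets with trios"
```

    Note that the combinatorics alone make the point: the player never composes a note, only selects among measures Mozart (or whoever devised the game) already wrote.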

    All of these devices suggest that musical composition can be mechanized – that even human invention is essentially mechanical.  Yet in each case, one need only scratch the surface to see that machines are not participating in the “formulative parts” of musical composition.  All of the musical material has been formulated by a human composer, with constraints placed around it to enable its algorithmic recombination into acceptable new music.

    Cope’s latest project, however, is different.  Cope calls EMI’s successor program Emily Howell, and the humanizing name is telling.  Cope composes with Emily Howell in a cooperative – we could say symbiotic – relationship.  As Ryan Blitstein reports:

    Instead of spitting out a full score, it converses with Cope through the keyboard and mouse. He asks it a musical question, feeding in some compositions or a musical phrase. The program responds with its own musical statement. He says “yes” or “no,” and he’ll send it more information and then look at the output. The program builds what’s called an association network — certain musical statements and relationships between notes are weighted as “good,” others as “bad.” Eventually, the exchange produces a score, either in sections or as one long piece.

    In Licklider’s terms: dice games and EMI mechanically extend the composer whose music they recombine; Emily Howell enables composer-computer symbiosis.  By cooperating with Emily Howell to make compositional decisions, Cope has effectively brought a computing machine into the formulative parts of technical problems.
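    The conversational loop Blitstein describes – proposal, yes/no judgment, reweighting – can be caricatured in a few lines of code.  Everything below is an illustrative assumption, not Cope’s actual program: a toy “association network” of note-to-note weights, where a phrase the composer approves strengthens its transitions and a rejected phrase weakens them, so that future proposals drift toward what has been judged “good.”

```python
import random

random.seed(1)
NOTES = ["C", "D", "E", "F", "G", "A", "B"]

# Toy association network: a weight for each ordered pair of notes.
# (Hypothetical structure - a stand-in for Cope's association network.)
weights = {(a, b): 1.0 for a in NOTES for b in NOTES}

def propose(length=8, start="C"):
    """Generate a phrase, sampling each next note in proportion to its weight."""
    phrase = [start]
    for _ in range(length - 1):
        prev = phrase[-1]
        w = [weights[(prev, n)] for n in NOTES]
        phrase.append(random.choices(NOTES, weights=w)[0])
    return phrase

def feedback(phrase, good):
    """A 'yes' strengthens the phrase's transitions; a 'no' weakens them."""
    factor = 1.5 if good else 0.5
    for a, b in zip(phrase, phrase[1:]):
        weights[(a, b)] *= factor

# One round of the conversation: the program proposes, the composer judges.
p = propose()
feedback(p, good=True)  # composer says "yes"
```

    Crude as it is, the sketch shows what separates this from a dice game: the composer’s judgments feed back into the generative process itself, which is what makes the relationship symbiotic rather than merely combinatorial.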

    From the album Emily Howell: From Darkness, Light