Vol. 37 Issue 2 Reviews
Two Reviews of Hannah B. Higgins and Douglas Kahn (eds.): Mainframe Experimentalism: Early Computing and the Foundations of the Digital Arts
Hardcover, 2012, ISBN 978-0-520-26837-1; softcover, ISBN 978-0-520-26838-8; 376 pages, edited volume with introduction, 24 essays, 30 illustrations, and index; available from University of California Press, 2120 Berkeley Way, Berkeley, California 94704-1012, USA; telephone: (510) 642-4247; electronic mail orders@cpfsinc.com; http://www.ucpress.edu/.

1. Reviewed by Hubert Howe

Mainframe Experimentalism is a big book, with several sections and chapters by different authors covering work done in many of the arts in the early days of computing. Music is only a small part of the book, which focuses on just three composers and their activities: James Tenney at Bell Labs in the early 1960s, John Cage and Lejaren Hiller and their collaboration on the production of HPSCHD at the University of Illinois in the late 1960s, and Alvin Lucier’s North American Time Capsule from 1967. The book has extensive footnotes and has clearly been well researched. The early days of mainframe computing are now so far behind us that much of their history is hopelessly buried, and this book reminds me that what people see from today’s perspective may be distorted. One topic I did not see discussed in the parts of the book I read was the importance of the IBM Corporation to the development of modern computing. In the early days, a computer was an “IBM machine.” While the company intended to automate the business world and greatly profit from doing so (which it did), the greater impact may have been on universities and research labs. Both IBM and Bell Labs were among the few institutions in the world that carried out pure research, exploring science without having to worry about the impact on their companies’ bottom lines. The authors of the main articles in this section are all scholars in the fields of art, philosophy, culture, and media, but not music.
As a result, I often found their perspectives somewhat askew to my own understanding of the ideas they discuss.

I. James Tenney at Bell Labs

There is no question that James Tenney’s work was central to the development of computer music, but the contributions of the others in this book are more questionable. HPSCHD was one of John Cage’s “happenings,” and there was no reason it had to involve a computer. Lucier’s work depended on the vocoder, an instrument with a long and interesting history that is partly described in the chapter, but his use of it was more in the manner of an analog device. There were many other examples the authors might have found in which the computer was a more integral and necessary part of the work. Douglas Kahn, the author of the chapter on Tenney and one of the editors of the book, knew Tenney personally and attended some of his presentations at the time, and he writes with a sense of authority. James Tenney was one of the first professional composers to work in the environment of Bell Telephone Laboratories, where Max Mathews, the legendary “father of computer music,” developed the first software for music synthesis. Tenney was also an exceptionally thoughtful and inquisitive individual who sought to understand the meaning and historical context of what he was doing. The book describes how Tenney would listen to the noises of the highway and environment on his way from New York City to Murray Hill, New Jersey, where Bell Labs was located, and try to hear all that noise as music. It also describes his uneasiness at being in an environment where music was not a central activity and most of the others were self-described amateurs with only a passing interest in it. The central problems Tenney discovered in his work were that using the computer was extraordinarily difficult for someone without scientific training, and that it was very hard to produce sounds of any sonic depth and quality.
It wasn’t just that the early digital-to-analog converters at Bell Labs could work at only 10,000 samples per second, allowing frequencies only up to 5 kHz, but that there were no known methods for generating interesting sounds. He complained that “it is not only possible but necessary to specify quantitatively every parameter of the sound-material to be used in composition.” It is no wonder that his first composition done there was his Noise Study, which had a rather limited sound vocabulary. Tenney learned that he had to focus on the properties of musical tone production that were less well understood, such as vibrato, timbre, and modes of attack. In doing this, he was one of the first to realize that very little was understood about these aspects, and that much of what was written about them in books was wrong. Even though he made great strides from where he had begun, the results still sound quite primitive from our perspective looking back on those times. Nevertheless, Tenney’s work reminds us that the value of all early computer music lay in clarifying our ideas about what needed to be done and in getting feedback from doing it. This process still continues today. While Tenney’s work was certainly at a higher level than most of the previous work done at Bell Labs, it is unfortunate that a book that seeks to document the history of mainframe computer work in music failed to consider some of the even better composers who followed Tenney. These would include J. K. Randall, a professor (full disclosure: one of my teachers) who worked at Princeton in 1964-65 but brought his tapes to Bell to be converted to sound, and Jean-Claude Risset, who worked at Bell in 1967-68 in much the same capacity as Tenney and who produced some of the most memorable music in the early history of computer music, Computer Suite from “Little Boy” and Mutations.

II. John Cage and HPSCHD

John Cage is one of the most enigmatic of composers, as we have been reminded by the many celebrations held over the past year in honor of the centennial of his birth. By the time he came to work on the project that became HPSCHD, he had progressed from experimenting with new sounds from innovative use of percussion instruments such as tin cans and conch shells, to inventing the “prepared” piano, and finally, in 1952, to abandoning the use of sounds altogether in his composition 4'33", in which the entire conception of the piece is what happens when a pianist sits on stage and does nothing. Naturally, audiences erupted, some perhaps not realizing that this was the whole point. From that time on, Cage began designing “happenings,” in which all kinds of crazy events occurred, little of which had anything to do with music in the traditional sense. This kind of thing found some favor, more in Europe than in the United States, with audiences who were becoming alienated from the inaccessible and complex atonal and serial music of the mid-20th century. HPSCHD, presented in a barn-like building on the campus of the University of Illinois in Urbana-Champaign, “allowed the audience ... to circulate freely amid a willfully chaotic performance that consisted of seven amplified harpsichord soloists and fifty-two amplified tape recorders” [p. 148]. The composers divided the octave in all ways between 5 and 56, and then further allowed 64 microtonal inflections, as determined by the I Ching, resulting in a potential reservoir of some 885,000 pitches, about which Cage remarked, “This breaks the scale into such small components that at times the listener cannot detect tone differences” [p. 148]. The seven harpsichord soloists then played randomly selected fragments of music by Mozart, Beethoven, Chopin, Schumann, Gottschalk, and Busoni, as well as earlier work by Cage and Hiller.
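The basic arithmetic behind these equal divisions of the octave is easy to sketch. The snippet below is my own illustration, not the Cage-Hiller program: it enumerates the scale degrees produced by dividing the octave into every number of equal parts from 5 to 56, using the standard equal-temperament relation f(k) = f0 · 2^(k/n). The reference frequency is an arbitrary assumption, and the far larger 885,000-pitch reservoir quoted above comes from the additional microtonal inflections layered on top of these base divisions.

```python
# Illustration only, not the actual HPSCHD program: enumerate the scale
# degrees of one octave divided into n equal parts for n = 5 .. 56.
REF_HZ = 261.63  # assumed reference frequency (middle C), purely for illustration

def equal_division(ref_hz, n):
    """Frequencies of one octave divided into n equal steps."""
    return [ref_hz * 2 ** (k / n) for k in range(n)]

tunings = {n: equal_division(REF_HZ, n) for n in range(5, 57)}  # 52 tunings
total_degrees = sum(len(v) for v in tunings.values())
print(total_degrees)  # 1586 base scale degrees before microtonal inflections
```

For n = 12 this reduces to ordinary equal temperament; every other n yields a microtonal scale unavailable on a conventionally tuned harpsichord.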
(It might be noted that none of these classical composers ever wrote music for the harpsichord.) The result was something like a circus, right down to the projected images and the popcorn, candied apples, and T-shirts that were sold. There was no reason that a computer had to be used to create such a spectacle. One of the main advantages of using one was that it provided a more efficient way of producing random numbers than the I Ching. Though there may be some value in examining the type of randomness this produced, there is no reason to suspect that it had any relevance to the success or even the character of the piece. Branden W. Joseph, the author of this chapter, knows a great deal about Cage and has written a book about random order, although one that applies more to art than to music. He goes on to describe how Cage’s work on HPSCHD related to his increasing interest in political anarchy, his fear of policemen, his sexuality, and how the experience shaped his views on how happenings should be staged in the future. A reader looking back at these times gets, in my opinion, a rather distorted picture of what was happening in computer music in the 1960s from this focus on Cage. The topic of randomness, for example, has numerous applications in modern computer music, ranging from noise composition to stochastic music and granular synthesis, and the groundwork for these processes was being laid during these very years by other researchers. The first version of Xenakis’s Musiques formelles was published before the Cage-Hiller collaboration.

III. Alvin Lucier’s North American Time Capsule

The vocoder is an instrument with a long history, some of which is chronicled in Christoph Cox’s chapter on Alvin Lucier’s North American Time Capsule. The device goes back to 1928, and one of the earlier versions was demonstrated at the 1939 World’s Fair.
It was invented at Bell Telephone Laboratories (again!), where the company was interested in the process as a way of lessening the cost and complexity of long-distance telephone calls. It was thus designed primarily for speech rather than music, and one of its goals was to reproduce speech intelligibly even over a channel of limited bandwidth. The device consists of a bank of bandpass filters, each feeding an envelope follower; the envelope measured in each band can then be imposed on the corresponding band of another signal. In the original application, that other signal was at the opposite end of the telephone line. Over the years, many different versions of the device have been made, and once it became available through manufacturers such as Robert A. Moog, it was widely used, although more for popular music than for classical. It was never, as the book claims, a “staple of electronic music” [p. 174]. Thus, the vocoder is primarily an analog instrument, although there are now, of course, many digital versions of it. Even though some of the research that initially went into its creation used mainframe computers, it is hard to see why this instrument, or the piece Lucier created with it, is relevant to the book’s topic. Alvin Lucier is an interesting composer whose main focus is often on very subtle changes in musical sounds that evolve over a long period of time. Probably his most famous piece is I Am Sitting in a Room, in which a speaker reads a text into a tape recorder and then plays the recording over and over, each time making a new recording of the playback. After several minutes, the speech becomes less and less intelligible and begins to take on the resonances of the acoustic space in which it is played.
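The bandpass-and-envelope design described above can be sketched in a few lines. This is a schematic, FFT-based channel vocoder of my own devising, not the Bell Labs or Sylvania circuit: the band count, frame size, and root-mean-square energy measure are all assumptions, and the original devices did this with analog filter banks rather than spectral frames.

```python
import numpy as np

def vocode(modulator, carrier, sr, n_bands=16, frame=512):
    """Impose the modulator's band-by-band envelope on the carrier, frame by frame.

    A toy stand-in for a channel vocoder: parameters are illustrative choices,
    not those of any historical instrument.
    """
    length = min(len(modulator), len(carrier))
    out = np.zeros(length)
    edges = np.linspace(0.0, sr / 2, n_bands + 1)  # band boundaries in Hz
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    for start in range(0, length - frame + 1, frame):
        M = np.fft.rfft(modulator[start:start + frame])
        C = np.fft.rfft(carrier[start:start + frame])
        shaped = np.zeros_like(C)
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = (freqs >= lo) & (freqs < hi)
            if band.any():
                env = np.sqrt(np.mean(np.abs(M[band]) ** 2))  # modulator band energy
                shaped[band] = C[band] * env                   # imposed on the carrier
        out[start:start + frame] = np.fft.irfft(shaped, frame)
    return out
```

Driving this with a recorded voice as the modulator and broadband noise as the carrier produces the familiar "talking noise" effect; in the telephone application, the analysis envelopes alone were transmitted and the carrier was resynthesized at the far end of the line.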
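The process behind I Am Sitting in a Room also lends itself to a simple model. In signal-processing terms, each re-recording convolves the sound with the room's impulse response one more time, so the room's resonant frequencies are reinforced at every generation while everything else decays. The following toy simulation is my sketch, with an invented single-resonance "room"; Lucier of course worked with tape recorders and a real acoustic space.

```python
import numpy as np

def rerecord(signal, room_ir, generations):
    """Repeatedly 'play back into the room': convolve with its impulse response."""
    out = np.asarray(signal, dtype=float).copy()
    for _ in range(generations):
        out = np.convolve(out, room_ir)[: len(signal)]  # one more pass through the room
        peak = np.max(np.abs(out))
        if peak > 0:
            out = out / peak  # crude stand-in for resetting the tape level
    return out

# A hypothetical room with a single resonance near 500 Hz:
sr = 8000
t = np.arange(int(0.05 * sr)) / sr
room_ir = np.exp(-t / 0.01) * np.cos(2 * np.pi * 500 * t)

np.random.seed(1)
speech_like = np.random.randn(sr // 2)  # broadband stand-in for the spoken text
result = rerecord(speech_like, room_ir, generations=8)
# After eight generations the spectrum is dominated by the room resonance,
# just as the intelligible speech in Lucier's piece dissolves into ringing tones.
```

Each generation multiplies the spectrum by the room's frequency response once more, which is why the effect compounds so dramatically over the course of the piece.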
In North American Time Capsule the sound source was the Brandeis University Chamber Choir, and the singers were not given a score but were instead told to “prepare a plan of activity using speech, singing, musical instruments, or any other sound producing means that might describe – to beings very far from earth’s environment either in space or in time – the physical, social, spiritual, or any other situation in which we find ourselves at the present time” [pp. 171 and 191]. The singers responded as you might well imagine, producing a wide variety of sounds that included speaking in other languages and using not just musical instruments but things like electric shavers and toothbrushes. No doubt some of the resulting cacophony is due to the actions of the vocoder, but it is also due to the non-musical diversity of the input signals.

2. Reviewed by Jeffrey Trevino

Sometimes, the most important work you can do in your field is to share something with people outside of it. Mainframe Experimentalism: Early Computing and the Foundations of the Digital Arts extends Hannah B. Higgins’ and Douglas Kahn’s research into the FORTRAN programming workshop that composer James Tenney offered his friends – Philip Corner, Dick Higgins, Alison Knowles, Jackson Mac Low, Max Neuhaus, Nam June Paik, and Steve Reich – in the fall of 1967, at the Chelsea (Manhattan) office of Something Else Press, the home of Knowles and Higgins. The book focuses on the works of these, and other kindred, mainframe experimenters during the 1960s, via an introduction and six sections of essays titled Discourses, Centers, Music, Art and Intermedia, Poetry, and Film and Animation. Original essays, as well as primary source texts by artists working in the 1960s, portray and analyze the aesthetic, political, and social milieu of early computer art through detailed discussion of specific works, artists, scenes, and trends, accompanied by photographs and program flowcharts.
Fittingly, the prose style in these essays on collisions between art and science ranges from hyper-detailed technical history (e.g., Robert A. Moog’s history of the vocoder) to poststructuralist analysis (e.g., a political reading of Alison Knowles’ House of Dust by Benjamin H. D. Buchloh). By working outward from a single historical event, Higgins and Kahn have compiled a multi-disciplinary introduction to computer artwork of the 1960s unrivaled in its breadth, detail, and critical perspective. The editors hope to correct history by rehabilitating ignored and maligned work. In the introduction, Kahn summarizes the historiography of digital media in a paragraph and concludes that the present volume puts the “art” back into early digital art by considering early works outside of established narratives of computer art, new media, and digital culture: The early digital arts seemed degraded as art; they seemed to be about workshopping technological possibilities, or in the case of musicology seemed constrained to academic computer music. More recently, histories of digital arts have been channeled through prescient moments in the development and social uptake of digital technologies. Where once commentators followed the lines of Miró or Klee and found digital art wanting, later historians were drawn to the lines of Douglas Engelbart’s mouse; where once canonical artists were housed in the domain of museums of modern art and the commodity culture of collectors, they are now housed in the computer architectures of John von Neumann and Silicon Valley design centers. The vanguard art world that New York had stolen from Paris during World War II had been digitally rerouted to Palo Alto, if we follow the migrations of the discourse (p. 2). Beyond a desire to rehabilitate these works and authors “in their own right” (ibid.), the introduction hints at a broader, but ultimately unsubstantiated, significance.
By constructing early digital works as foreshadowing precursors to the current state of the digital arts, the book asserts significance while sidestepping the question of whether these early experiments actually influenced later digital art. After surveying the ways recent history has ignored these works, the book’s first essay focuses on the way public and artistic institutions first dismissed them. In “The Soulless Usurper,” Grant Taylor complements the introduction’s narrative of derision and exclusion with a history of public suspicion, disgust, and aggression toward computer art. With the aid of pleading artists, hurled eggs, and prejudiced curators, whose responses to computer art ranged from exclusion at worst to disclaimed ambivalence at best, he argues that computer art is “possibly the most maligned art form of the twentieth century” (p. 18). Taylor sympathetically suggests that this attitude resulted from widespread suspicion toward alliances between C. P. Snow’s increasingly divergent “Two Cultures.” Or perhaps it was a bad idea to resort to metaphors of human intelligence when describing machines designed to run the United States’ ballooning military-industrial complex. Metaphors aside, it may also have been due to the 1960s’ association between computers and oppressive, centralized government and bureaucracy, which left critics deriding computer art’s “mechanical sterility” (ibid.), despite contemporary aesthetic interest in automatic and algorithmic art made without the assistance of the computer. Taylor gives the lowest marks of all to the visual arts and highlights the absurdity of prejudice with a brilliant comparison between two nearly identical works: Sol LeWitt’s Variations of Incomplete Open Cubes, executed without a computer beginning in 1974, and Manfred Mohr’s Cubic Limit series, realized with computer assistance beginning in 1973.
Both artists sought to eliminate emotional content from their work, and both identified seriality, incompleteness, and cubes as central concerns. Their back-to-back presentation in the book invites the reader to contemplate a striking pair of accidental twins. Most telling of all is Taylor’s account of these works’ historical reception: “The critical debate surrounding LeWitt’s series established it as one of the key works of the decade. By contrast, when Mohr developed his Cubic Limit series in Paris in the early 1970s, he endured taunts for employing the computer in what was viewed as a corruption of art” (p. 27). The remaining essays in the Discourses section discuss two cases of creativity influenced by broader social discourse. David Bellos surveys the works of author Georges Perec, telling a before-and-after tale of an artist whose conceptual interest in constraint-based creativity led him from Oulipo (a group of authors, poets, and mathematicians interested in extreme literary formalism) into a collaboration with a computer programmer at the behest of the Humanities Computing Center of the French National Research Council. The latter project resulted in The Art of Asking Your Boss for a Raise, a work of algorithmic literature (eventually adapted for the stage) represented in Bellos’ essay as a beautifully arranged flowchart, with the original French and an English translation on opposing pages. Edward A. Shanken finishes the section with an analysis of curator Jack Burnham’s 1970 Software exhibition at the Jewish Museum, which creatively extends the dichotomy between hardware and software to a range of aesthetic, social, and political implications. The Music section recounts James Tenney’s work at Bell Labs; HPSCHD, the collaborative project between John Cage and Lejaren Hiller; and North American Time Capsule, Alvin Lucier’s collaboration with Sylvania Electronic Systems.
Kahn’s essay on Tenney’s work at Bell Labs is a loving tribute to his departed friend’s pioneering contributions, and it details the link between the composer’s radical aesthetic (one of “attentive listening” [p. 138]), his acoustic research, and Bell Labs’ pursuit of an increasingly nuanced approach to musical timbre, which created some friction with Bell Labs’ musically conservative, but ultimately supportive, engineers. Branden W. Joseph’s “HPSCHD – Ghost or Monster?” departs from Cornelius Cardew’s 1972 critique of Cage’s work, arguing that Cage approached technology optimistically as an ethical assertion of social power, glorifying it in his collaboration with Hiller in the form of overwhelming sensory input and hagiographic imagery of the nascent American space program. In this section of the book the Lucier piece receives the most attention. Christoph Cox explicates the piece via the writings of Jacques Lacan and Friedrich Kittler to argue that, because technology cannot help but record “the real,” Lucier’s speech-focused works (Cox proposes a grouping of several of Lucier’s pieces from the late 1960s) exude a kind of transcendent naturalism, which is, as Lucier points out in I Am Sitting in a Room, more than “the demonstration of a physical fact.” Moog and Lucier himself provide detailed technical commentaries on the specifics of the Sylvania vocoder used in the composition, as well as the collaborative actions described in the composition’s notation. The Art and Intermedia section shows that the computer was a potent site for experiments in new art forms that straddled boundaries between existing disciplines. Alison Knowles’ House of Dust, as described by Hannah B. Higgins and analyzed by Buchloh, began as a computer-generated poem but yielded artistic responses in architecture and performance art.
While Higgins provides a fascinating historical account, Buchloh’s garbled poststructuralist argot adds little to the reader’s knowledge or understanding of Knowles’ work, as it strives unconvincingly to position artists as Marxist saviors who subversively “rupture [the] collectivization of silence and prohibition,” whatever that means. (Readers brave enough to wade into these tangled nominalizations will be rewarded with a lovely digression on the list as a formal device in contemporary art.) Simon Ford introduces three texts by artist Gustav Metzger, which describe a computer-automated, self-destructing sculpture designed as a subversive appropriation of military-industrial technology, to be installed as an art object and quasi-musical-instrument in the central courtyard of a housing block. To finish the section, William Kaizen contextualizes Paik’s interactive works within an American discourse that positions television as a canary in the coal mine of technology’s educational potential. Although Paik created basic interactive artworks and imagined technologies similar to the Internet, his Cage-like technophilia fades into distinctively non-interactive works for multiple televisions that “exaggerate the pleasures and terrors of one-way information flow” (p. 238). Like the treatment of Lucier’s work, the Poetry section balances analytical overviews with technical descriptions by the artists themselves. Christopher Funkhouser describes early experiments in algorithmic poetry, followed by primary source texts by Nanni Balestrini and Emmett Williams. Higgins then surveys the works of Eric Andersen and Dick Higgins, her father, followed by primary sources from these authors, and Mordecai-Mark Mac Low finishes the section with his take on the computational poems of his father, Jackson Mac Low.
The presence of top-down, hierarchical models of entire artistic forms – the easily modeled templates of the sonnet and haiku, for example – strikingly contrasts with the largely emergent strategies of formal design in use in the previously discussed works. Perhaps the versified concision of traditional poetic forms invited the first experimental computer poets to think more concretely about large-scale form. In the Film and Animation section, Gloria Sutton details the Poemfields collaborations between Stan VanDerBeek and Ken Knowlton at Bell Labs, created using Knowlton’s BEFLIX system for computer-generated animation, and Zabet Patterson analyzes James Whitney’s Lapis – created with a modified M5 antiaircraft gun director – and Permutations as subversive reappropriations of military technology and its accompanying visual logic. Through an overview of cybernetics and Martin Heidegger’s concept of “emplacement,” the notion that society responds to the breakneck speed of technological change with obsessive regard for determining and fixing the position of objects and bodies, Patterson offers a creative analysis of anti-aircraft weaponry as a formal and functional elaboration of the human eye. By eliminating a stable point of visual orientation, Patterson argues, Whitney overturns a militarized visuality bent on placing objects and relentlessly orienting the observer. This analysis seems forced at times, especially as the constant implication of concentric circles in Lapis implies axial rotation around a fixed point. However, it is a politically attractive analysis and an impetus for artists critical of the military use of potentially artistic technology, a recurrent theme in this collection. The essays in this book are a unique opportunity to take inspiration from the histories of other arts that rely on the computer, while comparing their origins to those of computer music.
Beyond the surfeit of technical prowess and artistic accomplishment on display here, this survey invites the reader to compare the past with the present. When was the last time a high-profile technology company from your country commissioned an artist to create a work in order to communicate new technology to an educated public, as happened on several occasions detailed in these essays? Which potentially artistic technologies of the present might we regard as the current analogues of these room-filling mainframes? Most importantly, as attested by these tributes to the pioneers of computer art, who will be brave enough to step forward, read the manuals, and make art with unruly machines? As each generation asks these questions in the face of rapid technological change, the past may be an invaluable aid.