
Audio Art
Golo Föllmer

http://mediaartnet.org/themes/overview_of_media_art/audio/

In the last quarter of the nineteenth century, a comprehensive mechanization of music began which contained three radically new principles: the transmission, storage and synthesis of sound. These basic media technologies enabled new forms of designing and generating sound, changing the way in which music is heard. After being limited to a confined space and a physically present audience until well into the nineteenth century, the scope of music expanded greatly at the turn of the century. The gramophone and radio enabled it to become omnipresent, because from now on it was confined neither to a particular space nor to a particular time. Finally, the technical media even broke away from their hitherto reproductive function by producing their own sounds.

In the subsequent phase, which began in the mid-twentieth century, the basic technologies of electronic media were integrated into the creative techniques, now making it possible to process a variety of other subject matter. Intermedia connections, space as a musical determinant, media-specific forms of narration, detemporalization, virtualization and dehierarchization will be discussed by way of example. In this second phase, ‹musical art› no longer meant just music. Artistic ways of dealing with sounds developed that burst the traditional understanding of music and called for the coining of a new term. While the term ‹sound art› has established itself for the general, non-media-specific expression of this phenomenon,[1] in the present context ‹audio art› stands for sound art for whose production technical media are either essential or necessary.

The main part of the present contribution introduces the development and the spectrum of audio art. This is followed by a comparison of the techniques and motifs of its use of media with those of mechanical musical instruments, the historic precursors of electric music media, allowing the clear identification of the radical change that separates audio art from the traditional understanding of music.

Transmission

Radio made music and other acoustic cultural assets freely available. Its appearance occurred at a time in which a significant number of German composers were searching for a new berth for their music. Using terms such as ‹utility music› and ‹colloquial music,›[2] they experimented with integrating popular musical elements into art music and involving music in everyday situations in a functional way.

Participation

In 1929, Bertolt Brecht, Paul Hindemith and Kurt Weill produced the radio opera «Lindberghflug,» which was designed to include the listeners sitting at home at their radio receivers. For the stage performance in Baden-Baden, Brecht placed a shirtsleeved representative of the listeners on stage, who took over Lindbergh's singing part. For later productions, Brecht envisioned that, for example, classes of schoolchildren would become familiar with the piece and then complete a version of it broadcast without the part of the aviator. «The radio would be the finest possible communication apparatus in public life … if it knew how to receive as well as to transmit, … how to bring [the listener] into relationship instead of isolating him.»[3]

Brecht was not aiming at aesthetic arrangement, but rather at social educational value, which amongst other things was criticized by Theodor W. Adorno. In Adorno's view, any kind of music that engages with elements of popular music, i.e. music with commodity character, did not achieve its goal of reflecting life in an unadulterated way.[4] Brecht, on the other hand, judged Adorno's position to be the expression of an arrogant elite that secures its integrity (amongst other things through music) while reproachfully—but de facto idly—looking at an ideologically blinded mass of music listeners under the tight control of the culture industry.

The medium of radio presented structural obstacles to Brecht's far-reaching utopias. Technologically and organizationally speaking, it had already developed into a mass medium[5] that lacked an effective transmission channel for its recipients.[6] In the 1960s, when Hans Magnus Enzensberger criticized that the mass media artificially separated the producer from the consumer,[7] Max Neuhaus had just begun working on a series of pieces for radio that demonstrated the potential for openness. In «Public Supply I,» produced in New York at WBAI in 1966, Neuhaus did a live mix of incoming telephone calls from ten lines. Although he considered himself to be the designer of the technical configuration, at this point he was merely the host of a musical event. The listeners who called in were the broadcasters. In 1977, for «Radio Net» he withdrew even further as an artist by leaving the arrangement up to an automatic electronic system. At the same time he thematized the dimension and the immanent aesthetics of the technical system of the radio by wiring up the circuit of the American radio network NPR, which spanned the entire continent, in such a way that the sounds of the signals it contained were transformed through feedback.[8]

Aesthetization

Radio also had an aesthetic influence on music. By potentially opening up the entire world, it released a fascination for hearing global, alien, multi-shaped things and stimulated the imagination of artists. In 1936, Rudolf Arnheim wrote that radio virtually had a consciousness-expanding effect: «In radio, the sounds and voices of reality revealed their sensual affinity with the word of the poet and the tones of music ….»[9] Radio listeners discovered that noises possessed an aesthetic quality they had hardly taken notice of before. The radio play theorist Richard Kolb attributed this effect to the disembodiment of sound, which invariably leads the listeners to become more involved mentally. «The less we are bound to a particular idea about time, place, costume, character, the more scope is left to our imagination, with whose aid we can form an idea that is befitting us. In this way the effect of the word approaches music ….»[10] This altered perception of noises did not first begin with the radio, but had already begun with the advent of industrialization. The Italian Futurists considered the rhythm of machines to be an aesthetic expression of their epoch, and thus in 1913 the painter Luigi Russolo proclaimed the ‹art of noise›: «Ancient life was all silent. In the nineteenth century, with the invention of machines, Noise was born. … We will amuse ourselves by orchestrating together in our imagination the din of rolling shop shutters, the varied hubbub of train stations, iron works, thread mills, printing presses, electrical plants, and subways. … We want to give pitches to these diverse noises, regulating them harmonically and rhythmically.»[11] Russolo constructed special mechanical noise generators and demonstrated these ‹intonarumori› at events attended by important artists and musicians of the time. Edgard Varèse, John Cage and others were influenced by Russolo's art of noise and were the first to implement percussion instruments, which had hitherto primarily been used in art music for the purpose of rhythmic accentuation, as carriers of a music consisting of timbres of noise.

Radio art

With his piece «Imaginary Landscape No. 4,» in 1951 John Cage was the first person to perform the peculiarities of the radio—the cheeping and hissing, the accidental juxtaposition of language, music and noise on the waveband—in a composition. He not only used natural noise sounds, which from a traditional point of view are perhaps only just acceptable; he also used the side effects of technical media, which are typically absolutely undesirable in music, as musical material. A specific ‹radio art› developed out of this approach in the 1960s that thematized the aesthetic effects of the transmission and perception of sound via radio as well as the social conditions of radio production and consumption. In radio collages consisting of audio fragments, Negativland, for example, show the aesthetic and social effects of the merchandising control of media content and from this—as did John Oswald—they derive their demand for the preservation of creative scope when dealing with technology.[12]

Storage

The storage of sound through the phonograph and the gramophone enabled the unlimited reproduction of music. Whereas sheet music was only disseminated amongst the bourgeoisie, the record was the first musical medium to reach listeners of all classes. Like transmission, sound recording also changed production and reception as the two areas were now separated in terms of both time and space. Because listeners were no longer dependent on musicians, for the first time they were able to integrate music into their daily lives. Music had, so to speak, become a ubiquitous source of nourishment.

Musique concrète

Artistic experiments with reproduction technology were a long time coming. Although the gramophone had already been developed in 1877 and was widespread at the latest by the turn of the century, concrete suggestions for its artistic-musical use were not made until the 1910s. Around 1917 the later documentary film pioneer Dziga Vertov attempted to create a montage of noise, but his plan fell through because of the state of technology at the time.[13] In 1923 the Hungarian Bauhaus artist László Moholy-Nagy suggested «to change the gramophone from a reproductive instrument to a productive one, so that on a record without prior acoustic information, the acoustic phenomena itself originates by engraving the necessary ‹Ritzschriftreihen› (etched grooves).»[14] In the 1990s, the sound artist Paul DeMarinis[15] described Moholy-Nagy's idea that a graphic ‹etched alphabet› could be found by reading sound grooves optically as a misjudgment owing to the dominance of the visual in Western culture. In the mid-1920s, Paul Hindemith experimented with ‹gramophone music› by creating a montage of recordings and playing them backwards at different speeds. He did not get beyond the experimental stage. For the first successful noise composition experiment, in 1930 Walter Ruttmann did not use the unwieldy gramophone, but rather the optical sound technology that had been developed for film a year prior to this. Film sound, which could be cut with scissors and taped back together, enabled the creation of the first rigorously constructed sound montage. At great technical expense, Walter Ruttmann collected sound recordings over a weekend in Berlin. The montage he produced, «Weekend,» changes between narrative and sound portrait—an art of listening inspired by photography. Although he attempted to structure the montage according to musical standpoints such as pitch and rhythm, the characteristic style of «Weekend» is narrative throughout; timbre, rhythm and pitch merely organize the narrative.[16]

It was not until 1948—eighteen years after «Weekend» and seventy-one years after the invention of sound storage—that Pierre Schaeffer's approach to discovering a way to compose specifically with the gramophone came to fruition. The compositional attitude responsible for this was based on two aspects: Firstly, Schaeffer concentrated solely on the aesthetic qualities of the sound material, thus largely eliminating the event it connoted. Secondly, he did not force a preformed, superordinate structure onto the material. He stressed that ‹musique concrète,› a name he chose in order to distinguish it from ways of composing that proceed from abstract ideas, is always based on the experience of concrete musical material: While traditional composition moves from the intellectual concept via notation to interpretation, Schaeffer takes the reverse path for his music: from listening to the collected material, through sketch-like experiments, to the final composition of the material, which is recorded as a finished sound carrier.[17]

In his opinion, sound material can be everything: the primarily noise-like sound occurrences in the environment, linguistic utterances, as well as conventional music. Sounds become so-called ‹objets sonores› when they are recorded technically, but they do not become ‹objets musicaux› until they have been processed in a special way. According to Schaeffer, these methods include the cutting of individual sounds, the variation of speed, playing from specially manufactured closed record grooves, playing backwards, and the layering of several sounds. The record player becomes a musical instrument the moment creative methods are derived from its specific possibilities. In Schaeffer's first piece, the «Étude aux chemins de fer,»[18] composed in 1948, ‹musique concrète› anticipates the later DJ methods of cutting, cueing, and in part scratching.
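
The basic operations Schaeffer describes (cutting, speed variation, closed grooves, reversal, layering) translate directly into simple signal manipulations. The following sketch is not from the source text; the sample rate, durations and the synthetic 'recording' are assumptions made for illustration, showing roughly how these operations might look applied to a mono sound held in a NumPy array.

```python
# Minimal sketch (not from the source text) of the basic musique-concrete
# operations described above, applied to a mono sound stored as a NumPy array.
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def reverse(sound):
    """Play the recorded sound backwards."""
    return sound[::-1]

def change_speed(sound, factor):
    """Vary playback speed; factor > 1 raises pitch and shortens the sound."""
    idx = np.arange(0, len(sound), factor)
    return np.interp(idx, np.arange(len(sound)), sound)

def closed_groove(sound, start_s, length_s, repeats):
    """Imitate a locked record groove: repeat one short fragment."""
    a = int(start_s * SR)
    b = a + int(length_s * SR)
    return np.tile(sound[a:b], repeats)

def layer(*sounds):
    """Superimpose several sounds of possibly different lengths."""
    n = max(len(s) for s in sounds)
    mix = np.zeros(n)
    for s in sounds:
        mix[:len(s)] += s
    return mix / len(sounds)

# Example with a synthetic 'recording' (two seconds of shaped noise):
recording = np.random.randn(2 * SR) * np.hanning(2 * SR)
etude = layer(reverse(recording),
              change_speed(recording, 0.5),
              closed_groove(recording, 0.25, 0.1, 20))
```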

Sound

Pop music also received decisive impulses with the introduction of the tape recorder. The playback method, which consists of recording the instruments in a piece of music one after the other, causes reproduction to become the primary instance of music. Now even live performances have to sound like the record. Because more and more complex studio technology is used, the ‹song› in the sense of a certain melody and harmony sequence counts less and less. Instead, ‹sound› has become the central criterion of music styles.[19] This begins with cover versions, for example Jimi Hendrix's version of the American national anthem played on a feedback electric guitar. The significance of sound reaches a new level with the audio electroquote techniques in DJ mixes, and then again with the dissemination of digital sampling in the 1990s. Now not only a song is quoted, but the sound itself. When John Oswald recomposes Beethoven and Michael Jackson using the same means, then what primarily counts is the processing technique—melody, harmony, formal structure or lyrics only prompt a sound realization.[20] The sound is the music.

In 1993, Christian Marclay, who as an artist and art DJ thematized the history of the sound carrier, assembled a collage of music from a variety of stylistic and geographic origins in his «Berlin Mix.» What was unusual about it was that he did not use any technical media, rather he assembled the original sound sources in an auditorium and conducted them using cardboard signs. The physical presence of more than 180 musicians made the usual eclectic dealing with samples seem absurd. Marclay's action showed that music can be more than just sound and how much we have become accustomed to getting by with just a fraction of the substance from the media we receive.[21]

Principles of chance

With his composition «Imaginary Landscape No. 1,» in 1939 John Cage applied techniques similar to those of Pierre Schaeffer; however, he used test records with sine tones, thus keeping to the ‹musical tone.› With the publication of his manifesto-like text «The Future of Music: Credo» in 1937, he had already predicted that the use of noises and the complete control of the overtone structure of all sounds with the aid of audio technologies would shape the music of the future.[22] In 1952 Cage started from the assumption that every sound and every noise is musical unto itself, and he manifested this in his first tape composition, «Williams Mix.» For him, the advantage of tape technology was that one could penetrate into the micro-time of the sound and create a high degree of complexity. «What was so fascinating about tape possibility was that a second, which we had always thought was a relatively short space of time, became fifteen inches.»[23] In a nearly 500-page score drawn up according to principles of chance, the way the tape has to be cut is presented graphically, much like a cutting pattern. The score specifies in which form, for how long and which of six types of sound are to appear in the montage. In one case a section of tape a quarter of an inch long (one-sixth of a second) had to be assembled out of 1,097 tape particles. Cage applies the specific characteristics of technology in order to discover unconventional structures during the transformation of an idea into sounding reality.
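
How a chance-determined cutting plan of the kind described for «Williams Mix» might be generated can be sketched in a few lines. The categories, length ranges and splice shapes below are placeholders chosen for illustration, not Cage's actual chance operations.

```python
# Illustrative sketch of a chance-determined splicing plan in the spirit of
# «Williams Mix»: category, duration and splice shape of each tape fragment
# are drawn at random. Categories and value ranges are assumed, not Cage's.
import random

CATEGORIES = ["city", "country", "electronic", "manual", "wind", "small"]  # six source types
SPLICES = ["straight", "diagonal", "curved"]                               # cut shapes

def splicing_plan(total_inches=15.0, min_len=0.05, max_len=1.0, seed=4):
    random.seed(seed)
    plan, position = [], 0.0
    while position < total_inches:
        length = random.uniform(min_len, max_len)        # fragment length in inches of tape
        plan.append({
            "start_inch": round(position, 3),
            "length_inch": round(length, 3),
            "category": random.choice(CATEGORIES),
            "splice": random.choice(SPLICES),
        })
        position += length
    return plan

# One 'second' of montage at fifteen inches of tape per second:
for fragment in splicing_plan():
    print(fragment)
```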

In 1963 the Fluxus artist Nam June Paik extended Cage's principle of indeterminacy[24] by placing Schaeffer's technologies into an installation situation at his «Exposition of Music—Electronic Television.» «In most indeterministic pieces of music the composer grants the decision of will or freedom to the interpreter, but not to the audience.»[25] «Random Access,» for example, enabled listening to tapes, which had been stuck to the wall, with a freely movable tape head. For «Schallplattenschaschlik,» visitors could scan simultaneously rotating records with the stylus of a phono pickup. Paik's sculptures had a refreshingly contradictory effect because they were created out of profane consumer media in a crude, handcrafted fashion, while their interactive operation so obviously stood in the way of the one-way communication of mass media.

Musique d'ameublement

More than forty years prior to Paik, the French composer Erik Satie drew up a similarly critical scenario. In pamphlets which have since become famous, he suggested an extremely functional music intended to fill embarrassing pauses in conversations during dinner or to cover up unpleasant interfering sounds. Satie criticized that department store music, which at the time was still played live by musicians, was a simplified adaptation of concert music. In a letter written in March 1920 he took up the musical climate of his piano piece «Vexations» (1893), which allowed for 840 repetitions of two rows of notes. «We now want to introduce music that satisfies the ‹useful› needs. Art does not belong to these needs. ‹Musique d'ameublement› generates vibrations; it has no other purpose; it performs the same role as light, warmth—and comfort in every form.»[26] On March 8, 1920 in the Barbazanges Gallery in Paris, Satie used fragments from pieces by Ambroise Thomas and Camille Saint-Saëns to produce such ‹musique d'ameublement.› According to an account written by Darius Milhaud, the experiment went wrong: Satie could not keep the visitors from listening to the music.[27]

Sound installation and ambient music

Two central concepts from the second half of the twentieth century make reference not only to Satie's experiments, but also to Cage and Paik: sound installation and ambient music. The sound installation, developed at the end of the 1960s by Max Neuhaus, Maryanne Amacher and others, pursues amongst other things two of the objectives emphasized by Satie: Firstly, not simply to adapt music conceived for a performance situation to casual forms of reception, but to conceive the tonal design of space fundamentally as an integration into a specific place. Secondly, not to occupy the attention of the listeners, but to leave the listeners scope to determine what kind of attention they choose to lend to the tonal design. In 1975 Brian Eno, a commuter between art and pop music, transferred these avantgarde techniques onto the format and sound aesthetics of the pop record and coined the genre term ‹ambient music.› In the 1990s many other musicians, for example The Orb and Aphex Twin, developed Eno's idea further into the electronic ‹ambient› style.[28]

Synthesis

Around 1930, the electron tube made it possible to develop the first promising electronic musical instruments, amongst others Leon Theremin's «etherophone,»[29] Jörg Mager's «spherophone,» Friedrich Trautwein's «trautonium,» and Maurice Martenot's «Ondes Martenot.» They proved that the laws of physical mechanics could be circumvented in an ‹electric music›[30] and that this meant the dawning of a new musical era. Composers hoped for new timbres from sound synthesis, a substitute for unpredictable human interpreters, as well as the opportunity to overcome the twelve-tone scale, which they perceived as constricting. However, for the most part these instruments had been conceived out of a traditional understanding of music, with the etherophone, for example, imitating a romantic espressivo style.

Sound composition

When Karlheinz Stockhausen produced «Studie I» in the newly equipped NWDR studio in 1953, he did not use the available musical instruments—a melochord and a monochord—working instead with awkward sound generators originally developed for the purpose of transmission measurement. These new technical possibilities were meant to enable ‹composing› the individual sound as well as the musical form of a composition, including its spectral details.[31] Serialism, which dominated Europe's musical avantgarde at the time and in which all of the parameters of a composition are organized according to a central principle of construction, required precise planning. This meant that it was contrary to Schaeffer's approach, which rested on intuition and the reverse method of composition from the material to the structure.

Score synthesis

In 1956 Lejaren Hiller and Leonard Isaacson conducted the first experiment involving the reproduction of human decision-making processes with regard to music on the computer: they had the mainframe ILLIAC 1 synthesize a four-movement score for string quartet, the «Illiac Suite.» The first three movements were based on formalizations of conventional rules of composition (simple polyphony, counterpoint, serial techniques); the fourth movement, however, was based on the mathematical principle of so-called «Markov chains.» Composers such as Iannis Xenakis later frequently took up the use of these ‹non-musical› means by implementing mathematical disciplines such as game theory or chaos theory for score synthesis.
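
The Markov-chain principle used in the fourth movement can be illustrated with a minimal melody generator: each next pitch depends only on the current one and is drawn from a table of transition probabilities. The pitch set and probabilities below are invented for the example and are not taken from the «Illiac Suite.»

```python
# Minimal first-order Markov chain melody generator, illustrating the kind of
# score synthesis described above. The pitch set and transition probabilities
# are invented for the example, not taken from the «Illiac Suite».
import random

TRANSITIONS = {
    # current pitch: [(next pitch, probability), ...]
    "C4": [("D4", 0.4), ("E4", 0.4), ("G4", 0.2)],
    "D4": [("C4", 0.3), ("E4", 0.5), ("F4", 0.2)],
    "E4": [("D4", 0.4), ("F4", 0.3), ("G4", 0.3)],
    "F4": [("E4", 0.6), ("G4", 0.4)],
    "G4": [("C4", 0.5), ("E4", 0.3), ("F4", 0.2)],
}

def markov_melody(start="C4", length=16, seed=1):
    random.seed(seed)
    pitch, melody = start, [start]
    for _ in range(length - 1):
        choices, weights = zip(*TRANSITIONS[pitch])
        pitch = random.choices(choices, weights=weights, k=1)[0]
        melody.append(pitch)
    return melody

print(markov_melody())
```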

Today's advanced music programs such as Max/MSP and SuperCollider integrate sound and score synthesis in a single system. As Stockhausen's electronic music had postulated, timbre and compositional form can thus be processed with the same tools and therefore more easily according to the same principles. At the same time, the idea of the computer has changed fundamentally since then. While Hiller started out from the image of the unbroken formalization of human behavioral knowledge, which was typical for the early stage of research on artificial intelligence,[32] computers were now not meant to replace human beings, but rather to confront them as interactive partners. The notion of the machine transformed from a human surrogate into a cooperative counterpart, which was impressive less due to its perfection than its uniqueness. Interactive systems became pools of ideas; ‹interactive composing›[33] firmly established the process orientation of experimental music[34] in the domain of music electronics and computers.

The repercussions of interactive, process-oriented computer technology are becoming wider and wider. In addition to the sound installation and electro-acoustic music out of the research studios, in the 1990s club music as well was pervaded by process-like methods of creation. Autechre regard their CDs as sections of continuing processes.[35] Farmer's Manual elucidate the process idea by unpretentiously breaking off a performance by pulling the audio cable out of the laptop—to emphasize the fact that the music being heard is a segment from endless automatic design processes in the computer.[36] Markus Popp sees his aesthetics as a result of digital means of design. In his view, ‹electronic listening music,› one of many genre terms for the products of the ‹laptop scene› around the year 2000, requires an understanding of music that pays tribute to the immense technical influence on design processes—delinearized time, the split-second reshaping of sounds, the ergonomics of software operation. «[T]he concept of ‹music› itself is almost tragically overshadowed by assumed notions of creativity, authorship, and artistic expression.»[37] Crackling sounds caused by a faulty CD player, the noises coming from computer hardware and caused by the (frequently intentional) incorrect use of software characterize the sound material. «Indeed, ‹failure› has become a prominent aesthetic … , reminding us that our control of technology is an illusion … .»[38]

The duo Granular Synthesis applies a sound synthesis technique to video recordings. In acoustic granular synthesis, new timbres are created from existing samples by iterating extremely short sound fragments according to different patterns. Using this method on sound and image synchronously, since 1991 Granular Synthesis have been presenting image-sound collages that could work like technical simulations of cerebral malfunctions—though it remains unclear whether it is the person being portrayed who cannot coordinate his or her movements or the audience that cannot coordinate its perception as usual—see for example «Pole» (1998) with Diamanda Galas. In reality one is seeing an extreme example of everyday media manipulation, i.e. what happens when the material extracted from the medium is not defamiliarized in itself, but rather is prevented from its full scope of movement by leaps in time made possible by technology.[39] In «MODELL 5» (1994), the movements of the portrayed performer, Akemi Takeya, appear dehumanized. Here it becomes clear that the characteristic qualities of a person should not be sought in the substance of the individual image, but rather in the person's movements.
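
A minimal sketch of the acoustic side of granular synthesis, as described above: very short, windowed grains are read from a source sample and overlap-added in a new order. Grain length, density and the synthetic source signal are assumptions chosen only to make the sketch self-contained.

```python
# Minimal granular-synthesis sketch: very short, windowed grains are taken
# from a source sample and overlap-added in a new order. All parameter values
# (grain length, density, sample rate) are illustrative assumptions.
import numpy as np

SR = 44100

def granulate(source, grain_ms=30, n_grains=400, spray=0.5, seed=0):
    rng = np.random.default_rng(seed)
    glen = int(SR * grain_ms / 1000)
    window = np.hanning(glen)
    hop = glen // 4                                   # dense overlap between grains
    out = np.zeros(n_grains * hop + glen)
    for i in range(n_grains):
        # the read position wanders forward through the source, with random jitter
        centre = int(i / n_grains * (len(source) - glen))
        jitter = int(rng.uniform(-spray, spray) * glen)
        start = int(np.clip(centre + jitter, 0, len(source) - glen))
        grain = source[start:start + glen] * window
        out[i * hop:i * hop + glen] += grain
    return out / np.max(np.abs(out))

# Synthetic source: one second of a 220 Hz tone with vibrato.
t = np.arange(SR) / SR
source = np.sin(2 * np.pi * 220 * t + 3 * np.sin(2 * np.pi * 5 * t))
texture = granulate(source)
```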

Intermedia

Intermedia forms of expression seek correspondences between phenomena in different areas of perception. Technical transformations are highly efficient in this respect, because once a mechanism is configured it can be applied to arbitrary inputs. In the process, it turns out that the translation code is the actual problem of intermedia: the question arises of which rules should be applied to transform sound into image, spatial movement into timbre, or harmony into color. As early as 1729, Louis-Bertrand Castel built the ‹optical cembalo,› an instrument that translated sounds into color. Amongst others, Kastner's «pyrophone» (1870) and Rimington's «color organ» (1910) pursued this idea further.[40] After about 1910, the associative transference of musical-spatial forms into painting became more frequent.[41] It was not until after 1900 that technologies were developed which allowed flexible transference between areas of perception. When, for instance, the poem for orchestra «Prométhée—Le Poème du feu» by the mystic and synaesthetician Aleksandr Skrjabin premiered in 1911, the two voices for colored light had to be produced using simple light bulbs.[42] Film involved new technologies and suggested that fine art, which in the nineteenth century was understood purely as spatial art, could come closer to the temporal art of music. Walter Ruttmann's composition «Lichtspiel Opus I,» which premiered in 1921, mobilizes abstract visual forms and colors in a characteristically musical style. The introduction of the optical sound recording principle enabled the analogies between image and music to be drawn even closer by means of technical coupling. Similar to what Moholy-Nagy had suggested for the record, Oskar Fischinger took up the technically conditioned visual manifestation of sound: the relief-like jagged script of the optical soundtrack.[43] By painting the optical soundtrack for «Tönende Ornamente» by hand, in 1932 Fischinger attempted to prove that there is an aesthetic correspondence between visual and acoustic forms. However, synaesthetic theories, which presuppose these kinds of unambiguous relationships between hearing and seeing, were soon identified as subjective perceptive phenomena. They were replaced by the machine and its unique, technically conditioned rules of transformation.
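
The problem of the translation code can be made concrete with one deliberately arbitrary rule, sketched below: the dominant frequency of a short audio frame is mapped to hue and its loudness to brightness. Nothing in this mapping is given by nature or by any of the works mentioned; it is one invented rule among countless possible ones, which is precisely the point made above.

```python
# One arbitrary 'translation code' between hearing and seeing, of the kind the
# paragraph above describes: spectral frequency is mapped to hue, loudness to
# brightness. The mapping is a free design decision, not a natural law.
import colorsys
import numpy as np

SR = 44100

def sound_to_color(frame):
    """Map one short audio frame to an (R, G, B) triple in 0..1."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / SR)
    peak = freqs[np.argmax(spectrum)]                        # dominant frequency in Hz
    hue = np.clip(np.log2(max(peak, 20) / 20) / 10, 0, 1)    # 20 Hz..20 kHz -> 0..1
    loudness = float(np.sqrt(np.mean(frame ** 2)))           # RMS level
    value = np.clip(loudness * 5, 0, 1)
    return colorsys.hsv_to_rgb(hue, 1.0, value)

# A 440 Hz tone becomes one particular color under this (arbitrary) rule:
t = np.arange(2048) / SR
print(sound_to_color(0.3 * np.sin(2 * np.pi * 440 * t)))
```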

Le Corbusier summarized the visual design, music and architecture of the Philips Pavilion he created for the World's Fair in Brussels in 1958 under the title «Poème électronique.» The blending of the three levels of image/light, sound and structure was intended to express how electric technologies connect the levels of perception in a new way and make it necessary for human beings to reorient themselves.[44] Two tape compositions were created for this occasion: «Poème électronique» by Edgard Varèse was aimed at an intense fusion of space and sound experience. The synthetic and concrete sounds used were set into motion as lines and volumes in space to Le Corbusier's film/light projection with the aid of lavish loudspeaker technology. Iannis Xenakis' ‹intermission piece› «Concrete PH» was formally based on parabolic and hyperbolic curves, which had also lent the structure its extraordinary form. Xenakis thus interpreted principles of mathematics as general truths that could express themselves in different media and form a link between them.[45]

«Fontana Mix» (1958) counts as one of the early examples of graphic notation. John Cage created a kind of generative score out of transparent graphics, which allowed an arbitrary number of realization scores to be created. In 2002, Matthew Rogalsky, Anne Wellmer and Jem Finer used «Fontana Mix» for «FontanaNet,» a performance for networked computers in which the lines of the generative score were traced on a graphics tablet and sound events were then negotiated between the participating computers according to complex rules. Artistic practices that combine the different levels of expression and take mutual advantage of the possibilities of transforming visual, acoustic, haptic, spatial or other data have become more and more widespread with the dissemination of electronic and digital technologies. Intermedia techniques have been adopted into the repertoire of the graphic languages of form and the montage practices of the pop music video clip. They also occur as decorative, abstract visuals shown in the chillout rooms of clubs and raves as an optical counterpart to varieties of ambient music. Representatives of the ‹laptop scene› such as 242 Pilots and the commuter between art and music, Carsten Nicolai, interweave the design of sound and image with special hardware and software.[46]

Space

In the twentieth century, the spatiality of sound gained new significance. Spatial locations and movements had long not been treated as design parameters in the theoretical reflection of music, although they had, for example, already been specifically implemented by Andrea and Giovanni Gabrieli in sixteenth-century Venice. Following attempts made by Gustav Mahler and Charles Ives, Edgard Varèse elevated space to a central category by striving to physically materialize his music in each individual sound. By deploying the orchestra in a special way he allowed music to move in space, thus bringing it close to sculptural and choreographic works.

Even before Varèse allowed sound masses and surfaces to be electronically mobilized in the Philips Pavilion, Karlheinz Stockhausen treated space as a design parameter equal to pitch, volume, duration and timbre—in 1956 in his five-channel tape piece «Gesang der Jünglinge,» and in 1955–1957 in «Gruppen,» in which three orchestras were distributed around the audience.[47] At his suggestion, the German Pavilion at the 1970 World's Fair in Osaka was built as a spherical auditorium, in which sounds could be moved electro-acoustically in three dimensions.[48]

In 1967 Max Neuhaus reversed the customary direction of thought, thus attaining a new kind of musical space: the sound installation. Music should not be enriched by adding a new dimension to it; rather, it should primarily start out from space: «Traditionally, composers have located the elements of a composition in time. One idea which I am interested in is locating them, instead, in space, and letting the listener place them in his own time.»[49] For «Drive In Music,» Neuhaus installed sound sources which could be heard along a road via the car radio, thus subordinating time to space. For the first time in the history of music, musical form was no longer primarily temporal art, but rather based on space. Temporal sequence ensues from three factors: the distribution of sound sources (mostly loudspeakers) in space; the individual path of the user, which in installations in public space is molded by everyday needs; and the temporal structure frequently underlying the sounds, often obtained from environmental influences, for example when brightness, volume or physical movements influence the development of the sound via sensors.

Christina Kubisch also works with the temporalization of real space. «Klang Fluß Licht Quelle» (1999) is part of a series of sound installations in which visitors wearing special induction headphones hear sounds emanating from cable structures and assemble them into an individual sound composition. In addition, Kubisch often refers to the historic content or the background elements of existing rooms by using sounds that could once be heard in them or by accentuating an atmosphere unique to a particular place.[50]

David Rokeby's «Very Nervous System» depicts motion in Euclidean space in musical dimensions—and therefore in a non-Euclidean space. In this respect his work represents a continuation of the attempts at intermedia transformation, yet it possesses a further level. In the version installed in 1995 in the «Eisfabrik» (independent exhibition venue for media art—ed.) in Hanover, one could turn the ticking of a free-hanging alarm clock into roaring feedback loops by setting it swinging. The crux here was to fathom an invisible, intangible space that remains consistently elusive, because any transformation can only be arbitrary.

Gordon Monahan's performance «Speaker Swinging» (1994) describes the way back from electronic into physical space. Static sine tones are rotated in space by three performers swinging loudspeakers on long ropes around them in a circle. The monotone sine tones obtain an unimaginable vitality through the Doppler effect and the complexly varying reflections and interference patterns. This is enhanced by the corporeality of the perspiring performers and the menacing character of the misused loudspeakers tearing through the room. Monahan demonstrates that musical three-dimensionality means considerably more than occupying points in a three-dimensional system of coordinates.[51]
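
The vitality Monahan draws from the Doppler effect can be estimated with a short calculation: for a loudspeaker swung in a circle, the frequency heard by a stationary listener rises as the speaker approaches and falls as it recedes. The listener position, rope length and swing rate below are assumed values chosen only to illustrate the physics, not details of the performance.

```python
# Sketch of the Doppler shift heard from a loudspeaker swung in a circle, as
# in «Speaker Swinging». Listener position, rope length and swing rate are
# assumed values, chosen only to illustrate the physics.
import numpy as np

C = 343.0          # speed of sound in air, m/s
F0 = 440.0         # sine tone emitted by the loudspeaker, Hz
R = 3.0            # rope length, m
REV_PER_S = 1.0    # swings per second
LISTENER = np.array([8.0, 0.0])   # listener position, m (performer at origin)

def heard_frequency(t):
    """Instantaneous frequency at the listener for times t (array, seconds)."""
    w = 2 * np.pi * REV_PER_S
    pos = np.stack([R * np.cos(w * t), R * np.sin(w * t)], axis=-1)      # speaker path
    vel = np.stack([-R * w * np.sin(w * t), R * w * np.cos(w * t)], axis=-1)
    to_listener = LISTENER - pos
    unit = to_listener / np.linalg.norm(to_listener, axis=-1, keepdims=True)
    v_radial = np.sum(vel * unit, axis=-1)      # speed of approach toward the listener
    return F0 * C / (C - v_radial)              # moving source, stationary listener

t = np.linspace(0, 1, 5)
print(heard_frequency(t))   # the frequency sweeps above and below 440 Hz once per swing
```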

Media narration

In the great forms of media narration such as the book, film and radio, design techniques have developed that are familiar to us as being specifically novel-like, cinematic or ‹funkisch› (radioesque).[52] Laurie Anderson uses these stereotypes in her media narratives. At the same time she describes their origin and their everyday meaning. In her performance «United States I–IV» (1983), Anderson's voice guides us through everyday stories and underlays them with a changing multimedia accompaniment. Although her performances implement a lavish multimedia apparatus, they do not reflect high technology, but rather the experience of profane everyday media.[53] The performer Laetitia Sonami takes up Anderson's virtuoso style of media narration and replaces its centrally controlled multimedia presentation with physical interaction with a technical system made up of motion sensors. During her narratives, she navigates through a pool of sounds, noises, melodies and harmonies by moving her body. Anderson tells of the myths of the media world; Sonami's choreographically narrated pieces demonstrate a ritual-like engagement with the mystery of technical media.[54]

Paul DeMarinis deals with media history. The installation «The Edison Effect» (1989–1993) reflects the mystical components of the technical achievements of sound storage. Instead of being played with a stylus, a wax cylinder and shellac discs are scanned contact-free by a laser technology the artist developed himself. DeMarinis virtually stops time because, in contrast to digital storage technologies, mechanical record playback persistently erodes what has been stored each time the recording is played; it even writes the moment of playback into the storage medium, because the noises present in the room are engraved into the groove via the stylus when the record is played back.[55] Using the example of a grooved clay cylinder from ancient Jericho, DeMarinis points out that Edison's simple invention of mechanical sound storage could already have been made centuries earlier. Original recordings of Bach and Mozart would have been preserved, and their music would be different for us today.

Detemporalization

With DeMarinis, what narration chiefly consists of begins to dissolve: narration follows a line, steers along a dramaturgy, prescribed or even developed ad hoc, towards an end, and often aims at resolution or release. If one removes this line from a narrative structure, then what remains is a detemporalized gesture of showing. Detemporalized does not have to mean that duration plays no role, but that the focus is not on the logical sequence from beginning to end. Temporal duration merely provides ‹space› for a lengthened snapshot or a multi-perspective view, allowing concentration on a single phenomenon, a kind of close-up or distillation of it.

Alvin Lucier's performance «I Am Sitting in a Room» (1969) appears to be based on a constant development from one state to another. In reality, however, what we are hearing is only different stages of one and the same phenomenon: the specific resonance of a space. Lucier plays a recording of his voice into the room through a loudspeaker and records the result again and again, until the resonance frequencies of the space render the text unrecognizable. The text to be spoken is libretto, score, performance instruction and comment in one.[56] By reversing a relation, the perspective changes: we normally understand spatial reverberation as the coloring appendage of objects expressing themselves sonically. Now, however, the space expresses itself in the reverberation of a sounding object whose own sonic quality is only a coloring addition to the experience of space. The space changes from surrounding context into object.
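
The re-recording process can be modelled, very roughly, as repeated convolution of the voice with the room's impulse response: with each generation the room's resonant frequencies are reinforced and the articulation of the speech smears away. The sketch below uses an artificial impulse response and a noise burst in place of the spoken text; it is an approximation for illustration, not Lucier's procedure.

```python
# The re-recording process in «I Am Sitting in a Room» can be modelled,
# roughly, as repeated convolution of the voice with the room's impulse
# response: with each pass the room's resonances grow and the speech smears.
# The impulse response below is artificial; a real room would be measured.
import numpy as np

SR = 16000

def artificial_room_ir(resonances=(180.0, 310.0, 505.0), decay=1.2, length_s=0.5):
    """A toy impulse response: a few decaying resonant modes."""
    t = np.arange(int(SR * length_s)) / SR
    ir = sum(np.exp(-decay * t * (i + 1)) * np.sin(2 * np.pi * f * t)
             for i, f in enumerate(resonances))
    return ir / np.max(np.abs(ir))

def iterate_room(voice, ir, generations=8):
    signal = voice
    versions = [signal]
    for _ in range(generations):
        signal = np.convolve(signal, ir)          # 'play it back into the room'
        signal = signal / np.max(np.abs(signal))  # re-recording keeps the level constant
        versions.append(signal)
    return versions

# A short noise burst stands in for the spoken text.
voice = np.random.randn(SR // 2)
versions = iterate_room(voice, artificial_room_ir())
```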

La Monte Young's installations allow time to stand still to different degrees. In 1962 he conceived of the «Dream House» as a kind of laboratory; in the 1980s he used it to investigate the long-term effects of purely tuned intervals of sine tones on the psyche. The series of «Drift Studies» explores the sublime phenomenon of a minimally out-of-tune pure interval. Later installations with large sets of minutely tuned sine tones use interference to form infinitely complex volume distributions of the individual frequencies in space. Each location in the space contains a different combination of tones. If the listener moves, he/she hears a thunderstorm of alternating sound patterns; if he/she is still, the music stands still in time.[57]
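
The spatial distribution of such sine tones follows from simple interference: a tone fed to two loudspeakers is loud where the path-length difference reinforces it and nearly cancelled where it does not, and each frequency produces its own pattern. Speaker positions, listening points and frequencies in the sketch below are assumed values, not those of any Dream House installation.

```python
# Sketch of the spatial interference such installations exploit: a sine tone
# fed to two loudspeakers is loud at some points in the room and nearly
# cancelled at others, and each frequency has its own pattern.
# Speaker positions and frequencies are assumed values.
import numpy as np

C = 343.0
SPEAKERS = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]   # two sources, 4 m apart

def amplitude_at(point, freq):
    """Relative steady-state amplitude of one sine tone at a listening point."""
    k = 2 * np.pi * freq / C                        # wavenumber
    total = 0j
    for s in SPEAKERS:
        r = np.linalg.norm(point - s)
        total += np.exp(1j * k * r) / max(r, 0.1)   # phase delay and 1/r decay
    return abs(total)

# Moving the head half a metre changes the balance of the two tones:
for x in (2.0, 2.5):
    p = np.array([x, 3.0])
    print(x, round(amplitude_at(p, 480.0), 3), round(amplitude_at(p, 500.0), 3))
```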

Virtualization

Technical media reproduce. The transmission, storage and synthesis of sound are based on semiotic systems which respectively reproduce those features of a phenomenon in plastic, magnetic, optical, electrical and digital representations that appear relevant to us in a particular context. The daily experience that a representation can never reproduce a phenomenon in all of its aspects and thus alters the reality experienced points out that even unreal, virtual phenomena can be represented with the aid of fictitious semiotic systems.

The focal point of Bernhard Leitner's work is the virtual construction of space. Like many of Leitner's other works, the permanent installation «Ton-Raum TU-Berlin» (since 1984) provides acoustic versions of architecturally constructed forms.[58] Leitner liquefies dimensions and architectural characteristics such as proportion, tension and weight by temporalizing their features. Sound movements follow conceivable architectural forms; the lines of a structure are plastically adapted to become lines of sound. Conversely, architectural coordinates lend structure to a sound event that can definitely be understood as a musical occurrence. Leitner blends musical and architectural systems of symbols to create a new aesthetic symbolic language.

In his installation «Klangbrücke Köln/San Francisco,» Bill Fontana combined the displacement of sounds halfway around the globe with distortions of spatial form and dimension. He transmitted prominent urban sounds from all over San Francisco live to Cologne's Cathedral Square (and vice versa), thus pulling together in one place a field of sound that originally extended over kilometers. With respect to the parameter of spatial extension, the acoustic representation of real space is, so to speak, decoded with a false multiplier.[59]

In his computer installation «SMiLE,» Klaus Gasteier virtualized time by using the hypertext principle to represent a musical myth. The more than one hundred music fragments from a legendary, never-released album by the Beach Boys were played back—semi-automatically and semi-controllable by the listener—via a graphic interface. In the process, possible links between individual fragments were derived from musical similarities and from legends circulating around the album and entered into a database. Time is normally understood to be that one-dimensional ‹space› in which the structure of music is fixed. Here, technical means and the chosen semiotic system transform it into a multidimensional space of possibilities.
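
The hypertext principle described here can be reduced to a small graph model: fragments are nodes, weighted links encode similarity or legend, and playback is a semi-automatic walk through the graph in which the listener may override the automatic choice. Fragment names and weights below are invented for illustration and have nothing to do with the actual database of «SMiLE.»

```python
# Sketch of the hypertext principle described for «SMiLE»: fragments are
# nodes, weighted links encode musical similarity or legend, and playback is
# a (semi-automatic) walk through the graph. Fragment names and weights are
# invented for the illustration.
import random

LINKS = {
    "fragment_A": {"fragment_B": 0.8, "fragment_C": 0.2},
    "fragment_B": {"fragment_A": 0.3, "fragment_D": 0.7},
    "fragment_C": {"fragment_D": 1.0},
    "fragment_D": {"fragment_A": 0.5, "fragment_C": 0.5},
}

def walk(start="fragment_A", steps=8, choose=None, seed=7):
    """Semi-automatic traversal: 'choose' may override the weighted random step."""
    random.seed(seed)
    node, path = start, [start]
    for _ in range(steps):
        options = LINKS[node]
        if choose is not None and choose(node) in options:
            node = choose(node)                       # the listener intervenes
        else:
            node = random.choices(list(options), weights=list(options.values()))[0]
        path.append(node)
    return path

print(walk())                                          # fully automatic
print(walk(choose=lambda n: "fragment_D"))             # listener keeps steering toward D
```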

Dehierarchization

Audio art frequently endeavors to dissolve hierarchies. The network presents itself as an environment and structural model for this purpose, which is why examples with this focus have occurred more and more frequently since the genesis of the Internet. But the approach is older.

With reference to John Cage, as early as the 1950s David Tudor began building indeterministic electronic systems whose components were interwoven in such a way that he could not predict their behavior. At the end of the 1970s the «League of Automatic Music Composers»[60] transferred the concept to three locally networked ‹KIM-1› computers, the first affordable precursors of the PC. Each composition consisted of a system of rules, according to which each individual computer (and its performer) responded to the different information coming from the other two, in turn influencing them in different ways. «One can conceive of a computer system as a framework for embodying systems offering complexity and surprise …. Under this paradigm, composition is the design of a complex, even wild, system whose behavior leaves a trace: this trace is the music.»[61] There are no clear relationships of power between the performers and the computers, or even amongst these themselves. The pieces are different models of music that are created discursively between participants—including the machines—of equal status.
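
The kind of rule system described can be sketched as three interdependent processes, each deriving its next pitch from the data sent by the other two, so that no single node determines the result. The concrete rule below is invented for the illustration and is not one of the League's actual compositions.

```python
# Sketch of the kind of rule system described for the «League of Automatic
# Music Composers»: each of three machines derives its next pitch from the
# data sent by the other two, so no single node controls the result. The
# concrete rule is invented for the illustration.

def next_pitch(own, others):
    """One invented rule: react to the other two machines' last pitches."""
    a, b = others
    if a == b:                      # the others agree: contradict them
        return (a + 7) % 12
    return (own + a - b) % 12       # otherwise drift by their difference

def run(steps=12, state=(0, 4, 7)):
    history = [state]
    for _ in range(steps):
        state = tuple(next_pitch(state[i], [state[j] for j in range(3) if j != i])
                      for i in range(3))
        history.append(state)
    return history

for step in run():
    print(step)      # three interdependent pitch streams, none of them 'leading'
```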

Since the mid-1990s, similar concepts have developed in association with ORF Kunstradio in Vienna, in this case motivated by experiments in the field of telecommunication art.[62] In 1994, «State of Transition» by Andrea Sodomka, Martin Breindl, Norbert Math and x-space depicted processes of data movement. Different electronic data paths were used between Graz and Rotterdam: the performers communicated using, amongst other things, audio, MIDI and HTML via ISDN, radio station transmission paths, normal telephone lines and Internet connections. Listeners could play sounds into the two independent concerts by telephone and trigger sound events in the concert halls while navigating through websites on the topic of ‹migration› on the Internet.[63] It was impossible for either the listeners or the performers to identify all of the different sub-actions. It was also uncertain to what extent one's own actions were integrated into the context at the remote location. So it could not be a matter of synchronizing individual occurrences; the system had to be coordinated in its entirety via the stimulation and correction of sub-systems.

«nebula.m81» by Netochka Nezvanova is a network for the Internet and a single user. The software constructs audiovisual output out of found material; the player merely sets it going. HTML code and other data formats found in the Net are transformed into sound, and sound is transformed into visual form. Text, graphics and sound have equal status. The user influences the automatic mechanisms, can listen into individual audio particles and trigger vaguely defined transformation processes. However, the dynamics and aesthetics of music, image and text arise primarily from the interaction between the program, the data and technical processes in the network. Nezvanova takes Gregory Bateson literally: «All that is not information, not redundancy, not form and not restraints is noise, the only possible source of new patterns.»[64] None of these three examples of dehierarchized networks is limited to the production of new aesthetics. They also serve to depict intangible technical processes in a sensory way, to represent and criticize the social and political significance of these communicative processes, and to develop alternative models thereof.
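
One elementary way in which HTML code and other found data can be transformed into sound is a direct byte-to-pitch mapping, sketched below. The mapping (one byte, one short sine tone) is an assumption made for this sketch and is not Nezvanova's actual algorithm.

```python
# Toy sonification of found data, in the spirit of «nebula.m81»: bytes of an
# HTML document are read as a sequence of pitches and rendered as sine tones.
# The mapping (one byte = one short tone) is an assumption made for this
# sketch, not the work's actual algorithm.
import numpy as np

SR = 22050
TONE_S = 0.05      # 50 ms per byte

def bytes_to_sound(data: bytes):
    t = np.arange(int(SR * TONE_S)) / SR
    envelope = np.hanning(len(t))
    tones = []
    for byte in data:
        freq = 110.0 * 2 ** (byte / 64.0)   # map byte values 0..255 onto ~4 octaves above 110 Hz
        tones.append(envelope * np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)

html = b"<html><body><p>all that is not information is noise</p></body></html>"
audio = bytes_to_sound(html)
```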

Audio Art as a phenomenon of the modern age

Music did not first begin being shaped by media in the twentieth century, but centuries before that. As media, musical instruments and the written notation of music determine how music is made, how it is heard and thus what makes up music. However, it was not until the emergence of mechanical musical instruments that music could be conveyed completely by media, as it was now no longer bound to its concretization by a human being. Three central concepts molded the manner in which mechanical musical instruments were handled. The first one can be found in the oldest automatic musical instruments: the aeolian harp and wind chimes, whose strings or chimes are caused to vibrate accidentally by the movement of the air, creating a kind of natural, ‹organic› music. Diederich Nikolaus Winkel's «componium,» constructed in 1821, which could derive more than fourteen quintillion variations from an incoming theme, was a machine that developed this idea.[65] The second central concept of mechanical music machines is the aesthetic representation of higher laws. The carillons in astronomical clocks (for example in the Strasbourg Cathedral, circa 1354) represented divine principles and their connection with science, for instance the idea of the harmony of the spheres.[66] The third central concept implies that human beings can be perfected by a mechanism able to reproduce their abilities or even surpass them. Jacques Vaucanson's flute-playing satyr from 1738 embodied this striving for exact reproduction and greater control.

These three central concepts can also be found in audio art. The first one—the extraction of ‹scores› from processes alien to art—is present in score synthesis and is widespread in audio art. It accepts not only nature and mathematics, but also technical and communicative processes as sources of design rules. We encounter the second central concept, the representation of higher laws, amongst other things in the intermedia connections between the arts. However, these are seldom unbrokenly directed towards metaphysical ideas, but rather more towards phenomena of perception.

The central issue of audio art is the third concept: gaining control and the technically determined feasibility of what was previously unachievable. The storage, transmission and synthesis of sound as well as intermedia transformation and virtualization are amongst these new possibilities, and as the examples show they are consistently at the core of the musical examination of technical media. As the examples likewise document, artistic value does not solely unfold through an increase in control, extended playability or new sound perspectives. As was the case with the first electronic instruments or one or the other interactive installation, the mechanical orchestrion was nothing more than a technical attraction.

Wolfgang Amadeus Mozart shared this view. He used the technical extraordinariness of the instrument in a commissioned composition for an automatic organ clock, however he felt that the result was somewhat frivolous.[67] It was furthest from his mind to thematize the technical medium itself, because as a musician in the pre-modern age his measure of all things was music played by human beings. Even the inventor of noise music, Luigi Russolo, could not conceive of taking this step. By wanting to «give pitches to these diverse noises, regulating them harmonically and rhythmically,»[68] he did not seek the rules of design in the new material or its origin (the machines), rather he engaged an understanding of music that had been cultivated on traditional instruments and traditional ways of making and listening to music—and which the Futurists actually wanted to overcome.

Igor Stravinsky hoped to gain increased control through mechanical instruments. However, one of the reasons he took so much notice of the player piano was that the specific problems caused by composing for it enriched his work.[69] The «Studies for Player Piano,» which the American composer Conlon Nancarrow began writing around 1950, constitute the first complete body of musical work to consistently place the possibilities of a technical medium in the foreground.

On the one hand, audio art has hopes of gaining control through the use of technical media. Media convey information where conveying information was previously impossible; they make greater amounts of data available, which in turn can only be researched and navigated with the aid of media; they enable the control of the temporal, spatial and structural details of processes, which without these aids would be neither perceptible to nor controllable by human beings. On the other hand, the deliberate loss of control is being implemented to counter this gain of control. Not only the potential of the technical advantages, but also the alleged technical disadvantages of the media used are being exhausted: mechanical rigidity, amateur-like construction, and ‹unnatural› dimensions of space and time. A comparative examination of a second domain of mechanical musical instruments still needs to be made. Besides the three central aesthetic concepts mentioned, considerable social effects of music-making machines can also be made out: the representation of influence, power and wealth in the technical work of wonder; the synchronization of social groups during the course of days and years; the reflection of the everyday in depictions figured by crafts, dance, etc.; the comfort of independent background music at court and later in middle-class households and entertainment facilities; the widespread dissemination of popular tunes and operetta hits by hurdy gurdies, street pianos and music boxes.

These aspects, too, are articulated in audio art: not as a secondary effect of the social or economic processes of art, but on the contrary, frequently as the true focus of a work. Technical media are used to re-experience everyday perceptions of body, history, space or time in an aestheticized form. However, they are also critically reflected on with regard to their social potential for and effect on the individual.

Three fundamentally new ways of implementing technical media thus distinguish audio art from the traditional understanding of music as manifested in the use of mechanical musical instruments. These differences define audio art as a phenomenon of Modernity. Firstly, audio art accepts the structural peculiarities of media as the source of aesthetic rules of design. Secondly, it accepts the task of experimentally investigating media-specific phenomena of perception. Thirdly, it uses media both critically and playfully against media themselves by deliberately seeking the loss of control, because the plurality of access and the unpredictability of the results are considered to be the condition of development.

 

Translation by Rebecca van Dyck

© Media Art Net 2004