Etymology is deceptive in this case: data are never ‹given›: they are produced and manipulated. Archiving, like the technologies it operates with, is now confronted with processes of fictionalization [1] and transience. It faces structural problems that undermine any overall concept and erode it from the inside. Data, like media, are dying by the year, by the month, by the day; we can already trace a long history of «Dead Media». [2] On the one hand, the much-mentioned transience of the electronic media is technically based, as the pressure to innovate does not admit criteria like longevity, but transience is also built into the archive's own structure as soon as it exceeds a critical volume threshold. Data loss has always been a process inherent in the archive. Experience of handling archives and databases shows that any system comes up against the gaps and breaks imposed by practice. Even archivists, those guarantors of reliable document administration, can get tangled up in the snares of storing, sorting, discarding and not finding again. But the electronic media promise an availability potential that played a crucial part even in the early days of scientific imagination, as shown in Vannevar Bush's groundbreaking essay «As We May Think». [3] Here comprehensive knowledge on call is finally combined with the vision of a machine, but without reflecting the history of knowledge production, with all its social and historical conditions. [4]
But this desire to have everything that is worth knowing (or just ‹everything›) on call is not something that was dreamed up with the Internet and then realized in a special way; it is rooted in the enlightenment-driven motivation of the Encyclopaedists, continues in the processes of historicizing knowledge, and combines in the 19th century with the end of the book's role as the preferred place for storing produced knowledge: «The gradual substitution of card indexes for fixed book-oriented memory locations is linked with the process known as the historicization of knowledge. The reference systems of knowledge, the ordering systems of knowledge itself, are seen as historical entities, and the temporary nature and permanent revision of all
knowledge is postulated.» [5] Knowledge becomes mobile, extensible, re-combinable. The card index as a medium is a step towards computer-supported knowledge. It made its most impressive literary impact on the theory of knowledge in Walter Benjamin's major work, which remained a fragment, the «Passagen-Werk,» essentially intended as a reading of the 19th century. Benjamin's work was already able to take advantage of structuring by themes and headings, sorted by the alphabet alone, as the only ‹way out› of the hopeless abundance of material that could and should no longer be forced into any linear and coherent literary or theoretical narrative. [6] The card index fitted in with this theory and aesthetic of the fragment, at the same time also undermining what Michel Foucault later identified as the disciplining of knowledge, effective in every respect. But the card index's ability to be modulated and recombined still does not offer a pragmatic model for linking knowledge beyond individual concepts. Benjamin was already aware that under modern conditions knowledge could not be managed conceptually and with taxonomies alone. To understand social processes, and the character of phenomena as goods, it was also necessary to work on the basis of interpreting images using an approach similar to Freud's dream analysis. His concept, related to Aby Warburg in this respect, was fundamentally rooted in a concept of visual constellation—and thus still provides a bridge to the media arts today. [7] Benjamin focused on avant-garde experiments in Russia, not on other contemporary artists like James Joyce with his linguistically and formally universalist text «Ulysses» or the filmmakers influenced by Dada and Surrealism, with their associative montage of moving images. Peter Greenaway reveals an interest in other systems when he writes in the synopsis of his film «The Falls» (1980): «Selection by alphabet is random enough, for what other system could put Heaven, Hell, Hitler, Houdini and Hampstead in one category?» Here a linguistic system meets cinematic narrative. Whatever medium Modernist artists used, they were operating with radical concepts of alterity and difference that have now found a form in keeping with our electronic age in the electronic media. So this essay will also address the
question of how these Modern and then Postmodern experiences inscribed themselves on the history of archives and encyclopaedic concepts in the 20th century.
«The technical structure of the archiving archive also determines the structure of the archivable content in the very way it is created and in its relation to the future.» [8] Here the technical structure relates both to the actual recording system and to decisions made by the «árchontes,» i.e., the gatekeepers. We must still ask with Vilém Flusser and Marshall McLuhan whether the isolating archive, thinking in terms of typifying and standardizing, has not now long been replaced by a «grainy» electromagnetic culture that withdraws from this arrangement into discrete elements. [9]
Artistic strategies for revealing the archive's powers of definition start here. For example, Antoni Muntadas, in his collaborative Internet project «The File Room» (1994), responds to the connection between exclusion and (art-)political censorship by collecting cases of censorship from all over the world via the Internet and making them available there to anyone as a collection of documents. What emerges here, in a particular field, is a counter-archive to the official writing of history.
The «Order of Things» (Michel Foucault) as a categorical and indexical problem, with its infinite seriality of digits, can be cited only as concept work (as in On Kawara's contribution to Documenta XI, for example), or, alternatively, by accumulating marginal, unassuming things and events. Peter Piller collects newspaper photographs and arranges them in series like «Car touching,» «Thumbs up» or other surprising motifs from pictorial history, but they have lost their link with a newspaper item, a report about a real event or a real place. It is difficult, or even impossible, to devise a conceptual approach to reality in different media, something we increasingly see attempted in contemporary artistic production. But this is evidence that the archive's categorial topography continues to be relevant, whether in an art collection, a database or a catalogue. Douglas Blau provides an example relating to the topos of the index with his «Index from ‹The Naturalist Gathers›» (1992-97). [10] This text was
created to accompany his photo-installation, which draws on Aby Warburg's pictorial atlas principles. It presents a complex academic index for a collection catalogue that does not exist, but could emerge as a permutation of the index, much as in Dan Graham's «Poem-Scheme». This reveals a dual strategy: firstly, we are not ‹reading› a text through its main text only, but rather through its periphery and specific textures like the apparatus of notes, the selection of pictures, the quotations and references, the imprint, the binding or the context of an essay. Academic texts present this subtext and context apparatus very consciously. The second part of the strategy is that the index, relieved of its referential quality, has now become the main text.
These artists ‹liberate› images (Piller) and words (Blau) from their original indexicality of reference to an original system, so that they can be re-ordered and opened up to a new way of reading. The generative quality of the text apparatuses and the logic of the library (as a store for all reference structures) make the archive into a producer and into an archive of potential texts. Text and image are not just placed in the archive as an ‹Akte› (document) but become ‹Akteure› (actors) in their own right. It is misleading to talk about a knowledge store when in fact we are dealing with a knowledge generator. The Russian Constructivists were quick to realize the potential of new distribution paths for information, right down to the new concept of a book as an image store: «The traditional book was torn into separate pages, enlarged a hundred-fold, coloured for greater intensity, and brought into the street as a poster. […] If today a number of posters were to be reproduced in the size of a manageable book, then arranged according to theme and bound, the result would be the most original book. The cinema and the illustrated weekly magazine have triumphed. We rejoice at the new media which technology has placed at our disposal.» [11] Russian Constructivism was just as concerned with dynamizing the distribution process as was American capitalism, with the difference, as El Lissitzky notes, that the Americans brought posters into the public sphere specifically for the passing
motorist's fleeting glance. For Borges the world was still a book or a library; now it is a picture store that is starting to become mobile. Once images began to circulate, they had to be ‹captured› again. Forty years later, Nam June Paik imagined a «Center for Experimental Arts» that he thought would also house a video archive, and Stan VanDerBeek came up with his «MovieDrome» as a place with a universally available picture gallery. [12] Following on from Vannevar Bush, and seconded by Marshall McLuhan, the Expanded Cinema of the 1960s helped us to see the world as a gigantic audiovisual warehouse. This has indeed been realized today as a gigantic server network that can be sifted through by search engines, for artistic or commercial purposes (see Bill Gates' company Corbis, which holds the global rights to Otto L. Bettmann's collection of 16 million photographs). [13] The search engines in their turn work with gigantic storage capacities and within the parameters of intelligent database structures. Lev Manovich [14] asserts that the database, as the cultural form of the 21st century, occupies more and more technicians and archivists today, but interests artists as well.
Two artistic database projects introduce this section. Agnes Hegedüs' interactive work «Things Spoken» challenges users to research a personal memorabilia database and activate objects. Two narratives can then be called up: the artist's narrative about each object, and one by another person, a close friend or family member, who gives their view of things, their interpretation of a particular object in relation to its owner. This establishes two category planes: formal keywording as a process of often absurd meta-information, and narrative contextualization from two different perspectives. In an extension of this concept, visitors could develop this work as a participative installation by having a personal object of their choice scanned and telling a story about it, which was recorded and saved. It was possible to access the growing archive of objects and their oral history at the same time, on the spot and via a computer. So in time this collection of data created links between things and narratives, fragmentary, anecdotal and yet expressive, but with no coherent
take on the figure of the collector.
Eric Lanz had already realized a related concept in 1994 with his CD-ROM «Manuskript». Hegedüs' work illuminates an object's horizon of meaning; Lanz is interested in the ‹language of things›. Multi-media transposition of video material into a visual text on a pictorial panel enables him to present an exemplary visual history of the concrete use of hand tools. It functions as an artistic project without any reference structure or database and without any accompanying explanatory text. The objects are neither identified nor embedded in the context they come from. Comenius' «Orbis Pictus,» the famous predecessor of encyclopaedic picture atlases, relied on explanatory links between image, plot description and alphabetical order, while «Manuskript» transposes a kind of visual topology of tools in a multi-media way. The charm of this reading matter, which is also poetic, lies in the often surprising difference between outward appearance and actual use. By connecting text and animated reality, Lanz achieves a vividness that an encyclopaedia entry, however lavishly illustrated, could never match. So «Manuskript» demonstrates that it is possible to go beyond iconographic art history and not just conceive the topological constitution of an objective universal history, but also implement it exemplarily.
Quantitative restriction to a small, finite number of ‹things in use› could now be expanded beyond an artist's limited resources into a dimension that was just as universal. The Institut für wissenschaftlichen Film (Institute for Scientific Film, IWF) in Göttingen had made a start on an «Encyclopedia Cinematographica» project as a comprehensive cinematic documentation of movement sequences from 1952 onwards: «A matrix is intended to record all movement forms of all genres and present these exemplarily as movement specimens lasting for about two minutes.» [15] The hubris of an enterprise of this kind can be seen immediately, as the researchers involved call their «cinematograms» «specimens.» The project initiator, Gotthard Wolf, actually thought he would prepare hundreds of thousands of such «specimens» in order to be able to classify movements universally. [16]
A project for a database of visual lexemes was
bound to founder given encyclopaedic ambitions on this scale. But isolating movement sequences lexically is unsuitable from the outset for researching the contextual constitution of «behavior» and reality. Grammar does not admit statements about use and contextual semantics, and «cinematograms» cannot go beyond the realms of a limited visual syntax. And yet it is a fascinating vision: working on an encyclopaedia including time-based documents, possibly also configured dynamically.
According to Lev Manovich, the database is the cultural form of the 21st century. But here the database does not provide prefigured systems, only lists and arrangement preferences, which Manovich sees as amounting to a central paradigm shift. In traditional theory, the syntagmatic plane presents an explicit narration; the paradigmatic plane of choice possibilities (for narrative forms) was present only implicitly. The relationship is reversed in the computer age: the options are explicit, but the resulting narrations are only implicitly present. [17] Manovich introduces a whole series of artworks to support his theory, from Dziga Vertov to Peter Greenaway—see also Manovich's randomly generated database cinema, «Soft Cinema» (2002). But a glance at home computers is enough to confirm the theory. Computers present the library as one tool among others on the screen (the «desktop»). «Libraries» and «picture archives» are pre-installed on every home computer, as users now consume and produce masses of picture files as well. Practical work on a computer does not start with creating files, but with learning what possibilities are offered by the reference structures and storage systems within the computer as a universal machine.
If a database is accessible to a wider public (as an intranet or on the Internet), multi-media objects are confronted with a diverse range of ordering, rating and intervention practices. Thus networking, as a dynamic production structure, offers an exponentiation of meaning—though of course the converse is also implied: meaning, or sense, can also change into over- or sub-complex nonsense (cf. the range of textual permutations in Daniela Alina Plewe's Internet work «General Arts,» 2003). [18]
The media archive (and not just the database) is the ‹backbone› of globalized culture and a concrete expression of the fact that people are living in the above-mentioned «era of picture exchange.» The «iconic turn,» as diagnosed by W. J. T. Mitchell and Gottfried Boehm [19] in the mid-1990s, is evident in the new media's view of the world, in the omnipresence of technical images in the natural sciences, but also in the expanding use of digital cameras, webcams, MMS and other picture generators, which fill the computers' ever-growing memories. The physical act of storage is accomplished so quickly and simply by a click that content-related decisions of an archival or curatorial nature are postponed, at least until the hard disk is full or relieves us of the problem by crashing completely. And yet every user would love to know what they are storing the data for. One of the answers could be: so that they can be sent off again into the infinite circulation of signs on the Internet. The images acquire exchange value, and not just utility value.
When greater bandwidth and larger memories increase the quantity of data, machine-to-machine processing becomes necessary if we are to filter and sort. This leads to an exponentially climbing curve in terms of information quantities, which can be handled only with ‹intelligent› tools. [20] Over the 1990s, powerful search engines (from Lycos to Google) increasingly revealed themselves as the actual ‹strategists› of the New Media boom. They make it possible to navigate within an immense expanse of data that has never existed before in this form. If we have learned to appreciate artists' antennas, it is because they often recognize this development sooner than others, and respond to it with their own counter-strategies even before the whole impact of the development can be recognized—see Cornelia Sollfrank's «Net Art Generator» (1999) or Christophe Bruno's «non-weddings» (2002).
But can algorithms achieve complex semantic data indexing, or create meaningful indices for images? Is an automatic archivist in prospect, or does the automatically indexed data-set with headwords still have to be checked by an expert eye? The Geneva computer scientist Stéphane Marchand-Maillet is researching these problems. It is difficult not to suspect that it is only possible to describe pictures
technically if they were produced within the necessary technical parameters—a consequence of the «illusion of a universal (because technicized) intelligibility of images»? [21] So, with these questions in the background, the phenomenon of navigating and mapping cyberspace, but also images and texts, moves centre stage.
In his publications since 1983, Edward R. Tufte has summed up his experiences as a graphic designer making visual presentations and maps. Here he confronts the Modernist dogma of «less is more» (Mies van der Rohe) with Robert Venturi's «less is a bore» as a data-display dogma: «Well-designed small multiples are inevitably comparative; deftly multivariate; shrunken high-density graphics based on a large data-matrix drawn almost entirely with data-ink; efficient in interpretation; often narrative in content; showing shifts in the relationship between variables as the index variable changes (thereby revealing interaction or multiplicative effects).» [22] But artistic and scientific applications in particular can make it clear how much the map depends not just on design, in other words on creative artistic input, but also on users and the deployment of dynamic presentation options. Being able to generate maps mechanically and algorithmically and also present them in electronic form underlines the fact that the map is only ever temporary, manifesting itself as a fleeting point in time.
Mapping, the process of making a map or superimposing two different areas, and navigation, the exploration of a space (a stretch of road), are two complementary modes of the «arts of action» (Michel de Certeau). A number of artworks have addressed the theme of maps in order to track down events and actions and locate themselves topographically. They have formulated a «liberated cartography strategy,» [23] without creating a specific view of the link between digitalization and electronic networking. This is all the more astonishing as scaling, the fact that the scale can be changed, is directly analogous to the zoom function used in optical techniques, and is thus always mediated digitally and telematically. These connections will be presented in
relation to the practice of mapping text and images in context, artistically and algorithmically.
A classical literary text by Shakespeare is interpreted in different ways, but is always textually present in linear mode. Textual analysis takes place only on the meta-plane of interpretation. So what would happen if the text were to manifest itself in a different form? The first example therefore looks at the various states a text can exist in. Benjamin Fry's «Valence» software brings algorithm and narration into a strikingly new relationship. The crucial conceptual difference lies in the interaction between elements. A recursive process starts up: image becomes text (code) and text becomes image. A kind of textual software sculpture is produced, and can be compared with other graphic 3D forms ‹at a glance›. But this dynamic object could also be described as a special form of textual mapping (see also Bradford Paley, «TextArc,» and David Link, «Poetry Machine 1.0»). Something that «Valence» demonstrates word for word is presented visually in what is probably the most frequently cited mapping project in Internet art, «Web Stalker,» 1997, by the I/O/D artists' collective. This alternative browser can present the static link structure of any website abstractly, in order to create an image that can be compared with other structures in its turn. Correlations can be recognized at a glance, and visual comparatistics appear on the horizon. Linear text uses the universally applicable convention of line structure, but for dataspace this is a form that is much more difficult to standardize. Perceptible, intelligible division of information into space and time is a prerequisite for the balance between non-information and too much information. [24] Fry destroys the linearity of the text to construct a different textual form that has nothing to do with earlier literary techniques like cut-up (see William Burroughs), for example, but with quantitative textual analysis. As a comparatistics tool, «Valence» could be cited as a visual ‹signature› for a text, and contribute to the information condensation process.
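A minimal sketch may make this notion of a quantitative ‹signature› concrete. The Python fragment below is not Fry's algorithm; it merely illustrates the underlying reduction of a linear text to word frequencies and adjacencies, which a renderer could then lay out spatially. The file name is a placeholder.

```python
import re
from collections import Counter
from itertools import pairwise  # Python 3.10+

def text_signature(text: str, top: int = 50):
    """Reduce a linear text to a quantitative structure: word
    frequencies (nodes) and adjacency counts between consecutive
    words (edges). Laid out spatially, such a structure yields a
    rough visual 'signature' of the text, not its narrative."""
    words = re.findall(r"[a-z]+", text.lower())
    nodes = Counter(words).most_common(top)
    edges = Counter(pairwise(words)).most_common(top)
    return nodes, edges

# 'ulysses.txt' stands in for any plain-text source.
nodes, edges = text_signature(open("ulysses.txt", encoding="utf8").read())
```

Two texts reduced in this way can be compared structure against structure, which is all that the talk of a visual comparatistics presupposes.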
The cursor, blinking to invite us to type text, is one of the appealing elements of the electronic writing process. As a dividing line, it embodies the place
between sign and emptiness, but in an interface also the place where we begin: it is possible to type here—this cannot be taken for granted. An empty page can be filled in a number of ways. But it is new for the writing process to be involved in constructing a home: each typed sentence helps to generate an «Apartment» (2001)—that is the title of the work—in Marek Walczak's and Martin Wattenberg's software art. The phrase «Sex and the City,» to quote a popular American TV series, is translated into semantic units that are then arranged to form the ground plan of an apartment: ‹sex› goes to the «bedroom,» but ‹city›—and this is already interpretative semantics—becomes «window.» This project represents a simple, immediately comprehensible and ‹intelligible› solution to the problem of handling text for display: how does the machine ‹interpret› the text? This can obviously be learned quickly here, in that the user recognizes relations more clearly as more material is typed in. The words presented can also form animated loops, and their frequency also determines the size and position of the room—so the apartment is in a constant state of flux. But another aspect of the interface appears when the
Walczak and Wattenberg have already programmed a whole series of innovative applications. Each of them offers a different solution to the problem of integrating current data into a dynamically generated interface and thus achieving a representation of real time itself. [27] In this respect the Internet projects described so far illustrate an aspect that embodies thinking out alternatives as a visual process. In the history of exploration and geography, mapping an open field has meant deleting «spaces.» But artists and scientists now use the concept of maps and mapping to arrive at new ways of presenting geographical, social, political and artistic fields in the light of new data.
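The semantic mapping that «Apartment» performs can be imagined, in deliberately reduced form, as a lookup from typed words to rooms, weighted by frequency. The toy sketch below is purely illustrative: the word-room table is invented, and the work's actual semantic rules are far richer.

```python
from collections import Counter

# Invented word-to-room associations; «Apartment»'s actual
# semantics are far more elaborate than this lookup table.
ROOMS = {"sex": "bedroom", "food": "kitchen", "city": "window",
         "book": "study", "bath": "bathroom"}

def floor_plan(typed_text: str) -> dict[str, int]:
    """Map typed words onto rooms; word frequency stands in for
    room size, so the plan shifts with every sentence typed."""
    counts = Counter(typed_text.lower().split())
    return {room: counts[word] for word, room in ROOMS.items()
            if counts[word]}

print(floor_plan("sex and the city sex book"))
# -> {'bedroom': 2, 'window': 1, 'study': 1}
```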
When it comes to displaying a text, the data are already given and unambiguously identifiable (usually on the basis of a word). But what is to be done if these data still have to be generated, as when dealing with images? How can images be recorded formally and descriptively when so far no descriptive systems for electronic images have been developed (despite art history)? How can moving images be indexed? If images are analysed as data packages, our text-based approach takes us as far as Google shows us we can go: searching for images by examining the text associated with them. The fundamental question is: why is it still at all necessary to continue with the «structuralist-linguistic paradigm» (cf. Wolfgang Ernst, «Beyond the Archive: Bit Mapping») of subordinating the image to the word? We do not have to address this generally in order to ask the simple question of what happens if the image, the data package or the video has no text attached to it. Any media archaeology that does not already know what it is looking for deductively, but constructs classifications and relations through inductive cluster analysis and iterative processes, has to face these questions.
In television archives, editors are already working with the first forms of automatic sequencing of video items. These software solutions work with algorithmically manufactured storyboards, so it is simple for the editors to search for material that can be used again. This pictorial analysis is not applicable only to narrative, but makes it possible to analyse any amount of visual data in relation to structures, textures, colour values and other parameters. [28] Behind this already lies the concrete practice, not just the vision, of searching for images with visual references, not on the basis of text. Images look for other images, images recognize text, especially handwriting, videos are broken down into visual indexes. In all these cases, patterns and structures help comparative analysis. These image search engines effect a shift from simple meta-data to complex annotations and to the semantics of the image. By analogy with full text searches, this could be called a full image search. [29] But what Wolfgang Ernst and Harun Farocki now call the «Visual Archive of Cinematic
Topoi,» [30] following the «Encyclopedia Cinematographica» project, no longer means analysing images in terms of art history and semantics, but a surprising, unexpected algorithmic image analysis. Understanding images as data that can be ‹read› by the computer ultimately also means addressing not just images through a computer, but the individual «picture elements,» known as «pixels.» This analysis rejects semantics in favour of a media archaeology or a «techno-image archaeology» that uses mathematical, intelligent machine-related agents to analyse and map images and thus create a visual grammar. One way or another, the images are registered and identified, or one might almost say, handled as if by an intelligence service. Thus mapping of this kind is linked with the literally archaeological concept of data-mining or displaying data.
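How such a ‹full image search› might operate at the level of pixels can be suggested by an elementary example: comparing images by their colour distributions alone. The following sketch (using the Pillow and NumPy libraries) implements only one of the parameters mentioned above, colour values, and ignores structure and texture entirely; the paths and archive contents are placeholders.

```python
import numpy as np
from PIL import Image  # Pillow

def colour_histogram(path: str, bins: int = 8) -> np.ndarray:
    """A crude global descriptor: a normalized RGB histogram.
    Real image search engines add texture, shape and local features."""
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def most_similar(query: str, archive: list[str]) -> str:
    """Images looking for other images: return the archive entry whose
    histogram intersection with the query image is largest."""
    q = colour_histogram(query)
    return max(archive, key=lambda p: np.minimum(q, colour_histogram(p)).sum())

print(most_similar("query.jpg", ["a.jpg", "b.jpg", "c.jpg"]))
```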
Before we move on to the current practice of data display and mapping, it is worth recalling three historical references to the paradigm of the map as a reference to geography. Tufte cites copying maps as an example: «A 1622 map depicting California as an island was reproduced in 182 variants, as the distinctive mistake traces out a disturbingly long history of rampant plagiary. The last copyist published in 1745, after which California cartographically rejoined the mainland.» [31] In this case the practice of copying is the media-historical condition, including variants, and thus mistakes as well. So a map is never more than an approximate model. If texts generate other texts, this applies equally to maps and images.
The next two examples also crucially changed our understanding of maps as geographical references: the first is Charles Joseph Minard's map of troop movements during Napoleon's Russian campaign, which translated temporal data into spatial parameters; the second is the map of a London district made by John Snow in 1855 to record the incidence of cholera there. This made it possible to use the spatial distribution to conclude that the local cause was one pump in one street, even though the predominant view at the time was that the epidemic was transmitted by air. Closing the pump down contained the epidemic, thus proving the opposite. So these maps were not so much about the geography of
a place, but about events in time. For 150 years, maps have been tools for localizing and displaying links and hypotheses, and not only those relating to spatial topography. So in terms of the growing importance of databases, maps are a strategy, not a fixed format for understanding data. [32]
In the «conceptual and programmatic view of the virtual» (see Christine Buci-Glucksmann) we find not just the cartographer liberated by Icarus' mobile view (with all the problems the mythological link suggests), [33] but the map reader as well, not just in terms of artistic forms, but also of topology. In this way, maps open themselves up to a wide range of display modes and narratives, and also to disturbances and deviations. If these are then linked with the term ‹constellation›, then the processual aspect of the link between image and text becomes more three-dimensional. But at the same time Vilém Flusser's melancholy tale about the end of atlases sharpens our view of the losses linked with lost «atlas naïveté»: «The aim was to design history on the back of geography. The result was the opposite of what was intended. Anyone who cracked the code of these maps was not inside history any more, but outside it. He could flick through the pages of history and recognize them as a code. Post-history had begun.» [34] This led to historical and encyclopaedic atlases that Flusser felt contributed to the «death of humanism,» but at the same time produced a «new imagination.» He saw this imagination at work in the codification of human beings to make them the content of atlases. It also affects the tools used by geopolitical strategists with their power-political urge to control access to electronic mapping. Only someone who can locate peoples and things as precisely as possible on a monitor is in a position to win wars today. Following Paul Virilio, this view of things has been explored in many forums and publications since the Gulf War. However correct these analyses might be, what is interesting here is the potential the ‹new› imagination has for designing other maps for more democratic and participatory use beyond the military sphere.
The idea of a new way of perceiving images is not new (see the «Gallery Pictures» by David Teniers the
Younger, 1651). The serial accumulation of pictures in a space had of course been run through a whole range of variations in endless exhibitions, private rooms or even chambers of curiosities. But the crucial media break comes in the 20th century with the mass spread of catalogues of art and other things, and of picture atlases. The basis for this is mass-media photographic reproduction, as analysed by Walter Benjamin in his investigations. The work of art in the age of its technical reproducibility takes the form, alongside film, of catalogue production, which becomes a key medium in a new history of pictures and art, located now not in spaces devoted to art, but in lecture theatres or private studios and homes. These media conditions led to Aby Warburg's famous image constellations in his «Mnemosyne Atlas» («Atlas of Memory»), which uses the photographic catalogue in the same way as André Malraux was to do later for his «musée imaginaire,» which he researched from 1935 and published in 1947 as a book with black-and-white illustrations, and as Marcel Duchamp did for his «Boîte-en-valise» (1942). Constructing meaning through alternative displays and algorithmic transformations of image into text and vice versa was examined above; Warburg, by contrast, stresses the contextual constellation of history as a visual process presented in three dimensions. Here Warburg was trying «to fuse the systemic ordering function of a typology, the historical ordering function of a type history and the geographical ordering function of a ‹Mediterranean basin event› in a single tableau.» [35] Warburg was concerned with problems of sorting, arranging and displaying relations that we are familiar with today in the concept of a multimedia user interface. The aim was to use specific and complex constellations of photographic reproductions to display relations that were different in each case in such a way that the hidden structures and connections were identifiable visually, without textual explanation. So Warburg's picture atlases can also be read as data and relations revealing quite new structures beyond visual or historical-textual evidence, clarifying the media-historical prerequisites of images, for example. [36]
Claus Pias highlighted Warburg's difficulties with representing different relations graphically within an order and on a tableau as «non-contradictory evidence.» [37] But beyond these immanent
representation problems, from today's point of view the fundamental problem still remains that relations, priorities and interpretations always present themselves differently for each viewer or user. As each user has different hierarchies of interest, it is worth investigating the extent to which this can also lead to individualized presentation options. An early example of a differentiated option is John Simon's «Archive Mapper,» which presents a given number of websites as a discrete quantity and makes the graphic presentation dependent on the users' decisions. They can submit variables on file size and date (on the horizontal axis) or subjective variables (on the vertical axis). After that, the «Archive Mapper» presents a scatter-cluster of coloured pictograms as a non-hierarchical information set. This was a very early attempt to arrive at individualized presentation forms for connections, one that already identified the filtering problem visually: how can users filter out redundant information so that the (subjectively) relevant data are in the foreground?
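What such an individualized scatter presentation involves can be indicated in a few lines. The sketch below (using matplotlib) is not Simon's applet; the sites, sizes and ‹subjective› ratings are invented, standing in for the objective and user-supplied variables of the «Archive Mapper».

```python
import matplotlib.pyplot as plt

# Invented records: (file size in KB, subjective relevance 0-10),
# standing in for the objective and subjective axis variables.
sites = {"siteA": (120, 7), "siteB": (45, 2), "siteC": (300, 9)}

xs, ys = zip(*sites.values())
plt.scatter(xs, ys, c=range(len(sites)), cmap="tab10")  # non-hierarchical scatter
for name, (x, y) in sites.items():
    plt.annotate(name, (x, y))
plt.xlabel("file size (KB)")        # objective axis
plt.ylabel("subjective relevance")  # user-supplied axis
plt.show()
```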
Christine Buci-Glucksmann has been examining the map's relationship with the virtual and with the map-reader's view for years, and links it with the concept of the plateau. [38] The plateau (a reference to Deleuze/Guattari's «Mille Plateaux») is a multiple-access action field. We can say more: it is also an action field for socially linked protagonists and agents. But as a rule these social structures remain invisible. Now a central criterion for mapping in digital space is to display invisible relations in relation to statistics, subjective perceptions, discourses or social networks, which will be discussed below using artists' projects as examples.
1. Using data to transform objects: John Klima's «EARTH» offers an impressive 3D zoom function for the geography of the USA, but without ever being more than an elaborate design innovation for navigation on a geographical map, one which also unquestioningly accepts the problematical aspect of data-mining in its networked variant as a surveillance function. [40] In contrast with this, Ingo Günther's series of globes picks up the globe shape in order to generate an abundance
of interpretative maps of the world, in a critique of the predominant view taken by the political world map: global data—often military in origin—are displayed in a graphically simple way and produce new constellations and representations of ‹world›. [41]
2. Transformation of real space on a map: Michael Pinsky's «In Transit» (2001) project on the relativity of distances in a big city is based on travel times from A to B, so that the geography of a city (London in this case) seems variable at different times—the city as a function of experienced time, as it were.
3. Mapping data onto real space—augmented reality: recording real perceptions can create a real map here in the form of a psychogeography of the city and its mental spaces. This was based on the practice of the Situationists, who in 1957 published the «Guide psychogéographique de Paris» with its «discours sur les passions de l'amour.» It was demonstrated more recently by the «PDPal» (2002) project by Scott Paterson, Marina Zurkow and Julian Bleecker.
4. Mapping data in data space: as well as mapping real space, the concept of ‹mapping› also applies to distributing data within a given system of co-ordinates that does not necessarily have to have a spatial, physical counterpart. Ismael Celis's «InterMaps» (2003) project maps communication within a social network of relationships between friends or colleagues, dynamically and in real time, as a multi-user map. But each participant's view of this network is individualized, and centred on their own IP address. Thus they all move in the same data space, but each sees the relationships presented on their screen differently (see the sketch below).
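The principle of one shared data space with individually centred views can be reduced to a minimal sketch: the network data below are invented, and the actual work renders such relations dynamically rather than as static distances.

```python
from collections import deque

# One shared relationship graph (invented data); every user sees
# the same relations, re-centred on their own node.
relations = {"ana": ["ben", "eva"], "ben": ["ana", "carl"],
             "carl": ["ben"], "eva": ["ana"]}

def ego_view(me: str) -> dict[str, int]:
    """Breadth-first distances from 'me': an individualized map
    of the common data space."""
    dist, queue = {me: 0}, deque([me])
    while queue:
        node = queue.popleft()
        for friend in relations.get(node, []):
            if friend not in dist:
                dist[friend] = dist[node] + 1
                queue.append(friend)
    return dist

print(ego_view("ana"))   # {'ana': 0, 'ben': 1, 'eva': 1, 'carl': 2}
print(ego_view("carl"))  # the same network, differently centred
```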
The banal fact that we may well see a conversation quite differently according to whether we are deeply involved in it or just listening leads to linguistic analyses that have high hopes of new insights from graphic presentation. For example, some projects display spaces for electronic discourse. One of these is Warren Sack's «Conversation Map,» which is interested in content and semantics. It is a complex illustration of a Usenet newsgroup discussion during the American presidential campaign between George W. Bush and Al Gore, and can thus become a self-reflection tool: «I propose the design of technologies like the ‹Conversation Map› as technologies of the self (cf. Foucault): a means for a group to reflect on its
discussions, lexical repetitions (i.e., what Deleuze explaining Foucault's methodology has called ‹statements›) and possible (dis)agreements: i.e., as pictures of the group's ‹public opinion›.» [42]
In all these cases we do not at first establish whether mapping of this kind is effective or not. But every map has the great advantage of being legible as well as visible. Thus two access modalities are available. Even so, the map can contain too much or too little information. The relation between these two poles depends not only on the design, but also on the user. This brings us back to the question of how much the user can influence creating both content and presentation forms. Artistic projects, like the work of Knowbotic Research, especially the «IO_dencies» series (1997), or didactic and exploratory mediation projects like «DataCloud» (since 1998) by the Rotterdam V2_Lab, attach great value to the collective and discursive process of generating and presenting data. In this respect a project like «DataCloud» is not just a tool for representing relationships dynamically, but also for strengthening links within a group or community. So the question arises of the extent to which display and interface design open up new semantic horizons or simply remain a new design tool. Graham Harwood has also addressed this mapping problem. He is a member of the Mongrel group of artists, who worked on «Nine(9),» a project presenting the Bijlmer community, organized by the Waag Society in Amsterdam. The software was developed here for workshops and working in groups, i.e., it arose from specific social practice. The relevant headwords in the media context here are usually «open access,» «open source» and «democratic participation.» But Mongrel was faced with the experience that no one is interested in a completely open system. For this reason they introduced limitations, linking the open editing system for users with clear instructions on how to proceed: ‹Choose 9 images/sounds/videos and add 9 texts to them›. Given that an open archive will probably run wild, «Nine(9)» offers a convincing solution guaranteeing coherence and self-determination without a map that functions through categorial classifications, as these were obviously not appropriate to the community's ideas of self-presentation, as revealed by comments during the
production process. Graham Harwood therefore emphasizes how much the mapping pattern has been co-determined in this case by interaction with the individual ‹cartographers›. The Mongrel project contributes to mapping relations and the subjectivity of visual clusters. The story that will be told by nine images, texts or videos remains open in each case. But generally Net producers feel committed to an open, anti-hierarchical processuality that is not remotely interested in anything as ‹conservative› as archiving. All production is committed to increasing the intensity of the moment, but not to the kind of relativity that would stem from knowledge of the past (even though that is not excluded in principle). In that respect, community mapping is first of all ‹community building›.
With Lisa Jevbratt's «1:1» we are looking at the Internet as a whole, and have to realize that it is not so much a map for better navigation as a list of addresses with gaps, culs-de-sac and access constraints. Though in reality the Web crawler's «softbots» captured only about 2% of the overall number of IP addresses for «1:1» at the given time, this was random-driven, non-linear sampling, and thus conveys an authentic image of the Internet. In other words, the Internet is also what has been called a «Deep Web,» accessible to the public only in excerpts.
The «softbots'» performance, as is also the case with Google, reflects a presence, but it is a one-way street. Jevbratt's system scans what is available and displays it in different ways in each case, depending on the interface as well. Thus these programs promote our understanding of the Internet but do not redeem its promise of communicative presence. It was John Cage and Nam June Paik who introduced the participatory concepts of two-way communication, uncertainty and Random Access into art when art was starting to address recording and broadcasting media in the 1950s and 60s. Academics as well as artists are working on applying and implementing these concepts today, trying to find new answers through ‹shared authorship›. The path leads from the static constellation, beyond which Aby Warburg could not move in his «Mnemosyne-Atlas,» to a dynamic, open but also controlled configuration of knowledge, relying in many cases on «fuzzy logic.» [45] Performativity and data-mining are key concepts for looking out over the dynamic, networked archive.
The implications of profiling will not be discussed here, though the interesting feature of this for our present context is that a mapping system functions as a recording device only when it operates ‹secretly› (heimlich), and is thus linked with the ‹uncanny› (unheimlich), as Warren Sack points out. Claus Pias also expresses a suspicion that any image search project is consciously or unconsciously part of research conducted by intelligence services or the police, and thus has some highly problematical aspects. The «Firefly-Agent,» [46] developed by Pattie Maes at the MIT Media Lab, generates patterns on the basis of simple ratings that are then presented to the user for
a second rating, and possibly for a third, etc. But the subjective decisions are accompanied by data from other people who have selected similar intersections as positive. In this way, the users' open and active participation builds up algorithmic knowledge about intersubjective preferences (here with reference to taste in music). In principle this works in precisely the same way as the rating of image results in the Viper project. Behind this is the vision of a semantic network of references that was not developed according to archival, curatorial or other institutional (including police) criteria, but by the users' saving and adjustment of the data. Here, over and above the populist rating, the aim is also to explore meaningful contingencies via a statistical database correlation program: «In other words the machine does not know that James Taylor belongs in the ‹Soft Rock› corner, but only that other users who like Tracy Chapman had time for James Taylor.» [47]
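The logic of this quotation can be reduced to a few lines: the machine records nothing but co-occurrences of positive ratings. A minimal sketch of such collaborative filtering, with invented data:

```python
from collections import defaultdict

# Positive ratings only: the machine knows nothing about genres,
# just which users liked which artists (data purely illustrative).
likes = {"u1": {"Tracy Chapman", "James Taylor"},
         "u2": {"Tracy Chapman", "James Taylor"},
         "u3": {"Tracy Chapman", "Suzanne Vega"}}

def recommend(seed: str) -> list[str]:
    """Rank artists by how often they co-occur with 'seed' in other
    users' ratings: intersubjective preference, not semantics."""
    scores = defaultdict(int)
    for rated in likes.values():
        if seed in rated:
            for artist in rated - {seed}:
                scores[artist] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("Tracy Chapman"))  # ['James Taylor', 'Suzanne Vega']
```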
This can also be applied to neural nets: they do not ‹know› the semantics of the memory path, but they adjust to previous patterns, creating coherence and cultural continuity. [48] A practice is inscribed in an archiving program and so becomes a monument or document, both concepts that produce cultural continuity; but at the same time this act changes the practice and the social process. We are in a cybernetic system of cycles and recursive processes, similar to the self-organizing community illustrations demonstrated by the Celis or Mongrel projects.
But examining language also shows how alterity and co-presence create a possibility-space in literature or also within the psychological processes of suppression and actualization of suppressed content, a space that operates with connotations and replacements. This emphasizes the paradigmatic axes of meaning. Thus every word has a meaning-volume that—and this is important for our concept of a dynamic archive—is always realized fragmentarily and differently. [49]
Jacques Derrida goes a step further by questioning «whether contradiction between the act of memory or archiving on the one hand and suppression on the other remains irreducible. As though one could not remember and archive precisely what one is suppressing, archive it by suppressing it (as suppression is archiving).» [50] Thus the archive is no longer a ‹given›,
but a process of actualization, interpretation and re-impression, as Derrida calls it. So the data collection process takes place beyond conscious ordering. Showing this in its media performance is one of the key aspects of all artistic work with databases, archives and displays.
Thinking of the archive as an open, dynamic system also means replacing the intransitive term with the transitive, processual ‹archiving›, and the ‹store› with the ‹generator›; it means «[…] following a (one's) inventory- or catalogue-structure, which is open in principle—an index form that IT long since rediscovered as a hypercard.» [51] But there is still a long way to go to an interlinked map of transmedial archiving processes, and it is a way that will always come up against the inertia of people and language. This resistance reminds us not to pursue the hubris of linking everything with everything else.
Translation by Michael Robinson