
What is Computer Art?
An attempt towards an answer and examples of interpretation
Matthias Weiß

«No instrument plays itself or writes its own music.» Per Cederqvist[1]

«That which is programmable must also be computable.» Frieder Nake[2]

«It is understood that the artistic goal of THE SOFTWARE is to express conceptual ideas as software. It is also understood that THE SOFTWARE is partially automated, and its output is a result of its process. Despite the process being an integral part of THE SOFTWARE, this does not imply nor grant the status of artwork on the output of THE SOFTWARE. This is the sole responsibility of YOU (the USER).» Signwave Auto-Illustrator LICENSE AGREEMENT (for Version 1.2), § 7.3

A. Towards an Ahistorical Assessment of the Computer Art Scene

The history of reflection on the artistic use of computer software and hardware did not begin with transmediale.01. For artists who employ computer programming, however, that event—the first international festival of its kind, whose origin, like that of virtually all efforts in art history involving new procedures in technology-based artistic strategies and production methods, lies in video technology—provided an impetus for new, more extensive explorations of software programmes, which found their most recent and broadest platform to date at Ars Electronica 2003. [3] Classical art appreciation, however, generally ignores the fact that the computer is, and has been, both a tool and a component of art for nearly as long as the machine itself has existed. A reappraisal of this history, one that attempts to place it in an art history context, is still needed. [4] This perspective shifts when we consider the international art scene and the many varieties of computer art that closely follow developments in the technological domain. Accordingly, in what follows, two stimuli will be provided for looking at computer art in an art history context. The first clarifies the historicity of the phenomenon by stages. In the second, I will emphasize the role that description plays, in order to show, on the one hand, that close examination facilitates differentiation, so that comparisons between older and newer works become possible, and, on the other, that it opens up the possibility of a more profound understanding of computer art. Going against the trend of using ever more new ‹categories,› based on different technologies, to classify art movements, I suggest making use of the traditional and comprehensive term ‹computer art› to refer to the use of digital methods in the arts. For this reason, I will proceed in an historical fashion in the first part and, by means of a loose narration, point out connections that have been neglected to date.
In the second part, I will examine four works from different time periods that, in my opinion, are especially good examples for illustrating the history of computer art. First of all, in order to focus on and understand the phenomenon of computer art, a setting is required. It is important to note that the following definition contains no requirement for a type or category. Precisely for this reason, however, I will favour the term computer art over software art, because the former implies an historically integrated factor that permits a comparative investigation of computer art. Using this approach, I will establish connections between the latest phenomena, which have achieved great popularity, for example during transmediale.01, and works from the sixties and seventies—connections which cannot generally be seen by looking at the testimonials and essays on the computer art of the present. [5] For the first time, these connections make it possible, by looking at the artistic use of computers from an historical perspective, to draw closer to an appreciation of its contexts and their importance for and within art as a system. [6] Contrary to conventional practice, I will draw no type distinctions between immersive artificial worlds, which are generated using computers, and ‹software art› programmes. In recent art history, this division has led to a less than fruitful connection between interactive environments and video art, which, as a supposedly logical consequence of the history of film, [7] has prescribed the digital as the medium of the future with a «Gestus des Advents.» [8] By computer art, I understand that artistic activity that would not be possible without computers, and those works that would not have any meaning without the computer. Therefore, there are various meaningful areas where the use of computers arises for and within a work of art. [9]
Whether we are dealing with a specific script that can run on any ordinary computer (and that actually requires it for the desired performance [10]), or with a remotely connected installation which generates creatures (e.g. «Life Spacies») on distant computers over the Internet from local user input data—creatures that are then shown in a projection room [11] without the user being able to follow the reactions of the system—both situations are determined by the utilisation of computer systems and a communication structure, and would be unthinkable without these components. These technologies are necessary and constitute meaning for the work of art. If one looks at more recently published surveys on the history of art in the twentieth century, little or no information about the computer as a tool for generating art will be found. [12] At the same time, the early years appear to be very well documented. At the beginning of the seventies, writers and artists alike, in addition to their artistic and scientific works, presented their ideas on art theory and—as in the case of Herbert W. Franke, Georg Nees or Kurd Alsleben—separately and repeatedly wrote down their ideas for a practical aesthetics. [13] By looking at the formal similarities with the works of the pioneers, who themselves appealed to a cybernetically marked aesthetics of information, it becomes clear how close computer art is to other systems of art. In the classical arts, there were several points of contact between concrete and constructive tendencies and Op Art or kinetic art. Within the framework of a general interest in the theoretical paradigm of cybernetics, there were, even in the sixties and even in the arts, intensive discussions of its core ideas. [14] By way of example, we might consider the art machine of Victor Vasarely, which produced a different work daily, or the works of Gerhard von Graevenitz, who considered the role that chance plays in the fine arts.
[15] This also applies to Herman de Vries, who, in his early work, dealt both theoretically and artistically with chance and the concept of information. [16] Especially in the implementation of chance elements as structure-forming working principles, there were some artists who worked with traditional materials but then became computer users no later than in the modern era. [17] If one looks at the professions from which computer artists come, it is conspicuous that only a few genuine artists work with computers. Manfred Mohr is one of them. Trained as a gold- and silversmith, this jazz musician studied painting in Pforzheim, which no doubt explains why his work is more often discussed in the context of the classical arts. In 1998, to redress the balance, the Bottrop Josef Albers Museum Quadrat, which was particularly outstanding in the concrete/constructive arts, presented an extensive individual exhibition of his work from the sixties to the end of the nineties. [18] Although Mohr used computers to perform the calculations, he actually produced the images using classical techniques, such as painting, which then made them compatible with the classical market. Using graphics, the first computer artists—far removed from theoretically colored prophecies such as those formulated by Herbert W. Franke—addressed a ‹classical› observer in the same way that digital literature addressed a reader by means of a book.

Erwin Steller: «Computers and Art»

In 1992, Erwin Steller described and systematized the relationship between «Computers and Art.» His text is a draft of a lecture given at the University of Karlsruhe, and he concerns himself with the more recent history of art. He describes computer art as a consequence of the great movements that revolutionized the fine arts at the beginning of the twentieth century. [19] He then compares works resulting from these movements to works produced by computer artists.
First of all, he defines photography as the first «interface» to technology-based art (p. 15ff.). The focus of his discussion is on whether images generated using technology, up until the discovery of the camera obscura, are art. He then presents the distrust felt by many towards technical aids with the question: «Can […] objectifying such a ‹poor imitation of reality› still be art? Other than the arrangement of a still life, the posing of subjects or the search for a suitable detail in a landscape, can anything be designed actively?» [20] Steller does not undertake an investigation of specific correspondences or of any ‹line of succession› between the earlier technology-based arts and computer art. He simply compares assertions, noting that photography too was rejected in its early days, just like computer art. In a later chapter, Steller describes the radical changes in art in the twentieth century as reflecting the movement from the concrete to the abstract on the one hand, and the ‹discovery› of «concretion» (according to the definition by Theo van Doesburg [21]) on the other. He methodically connects picture syntax, as presented in the form of an elementary principle by Kandinsky in his writings on the theory of art, with the graphics of the computer artists. [22] Steller sees a further element for comparison in Op Art. In his opinion, systematic and mathematically formulated imaging processes are especially suited to automated production. [23] This is basically similar to the view advocated by Franke, suggesting that the status of the equipment is raised to great heights because of its intrinsic capacity for precision, but that it is understood only as a tool. [24] In addition, both chance and the symbolic nature of works of art developed with computers are chosen as central themes.
In his chapter on generative and informational aesthetics, Steller discusses the theoretical foundations of cybernetically influenced aesthetics and, above all, how the artists mentioned previously wanted to put them into effect. Here he criticizes the artists' intention of making artefacts and their effects quantifiable, and therefore calculable, as being a matter of taste. [25] Without discussing the changed contexts of new art movements, he links the trends in computer simulation to Pop Art and other realisms, although it is not clear how appropriate it is to speak of art with respect to the examples given. [26] The author speaks about the visualization of mathematical formulae and their alienation on the basis of fractal computations and their variants, which were in fashion at the beginning of the nineties, as well as the charm to be found in mathematical oscillations. In his summary, he leaves the world of art and attempts a critique of the machine itself. It therefore follows that he can only criticize the «duality of high tech and the computer as mere devices.» [27] Nonetheless, in his book he attempts to catalogue computer art systematically from the perspective of a more general history of art. However, he ignores the positions of artists like Myron Krueger, whose interactive environment «Videoplace» has less to do with video art than with computer art. [28] The concept of artwork that Steller implicitly hands down is thus tied to the materiality of products from the palettes of those media used by classical, visual art. He subsumes strategies—such as performances—which are distinguished by the fact that they can be captured within a definite time frame, in just a single paragraph under ‹the immaterial,› a concept that was current at the time, and one that Florian Rötzer investigated shortly before the appearance of his two-volume work on art forums, which Steller also cites.
[29]

Three phases in the history of computer art

The history of computer art can be classified into three larger phases, defined by and dependent on what was technically feasible at the time. [30] In the first phase, computer art fed back into a practical aesthetics, which in turn developed out of the two models of abstract art—abstraction and concretion. This phase ended around the middle of the seventies. The output consisted of graphics, along with works like «Videoplace» by Myron Krueger, for example. As computing capabilities increased and industry, from mechanical engineering to film, discovered simulation, immersive artificial worlds suddenly began to be used for technical and artistic experimentation. This was a trend which characterized prominent institutions like the Center for Art and Media Technology in Karlsruhe or the Ars Electronica, and therefore also the media arts scene. With the recognition that contemporary physics had called into question the role of philosophy as the primary field for the development of world views, models like chaos theory, which motivated artists interested in mathematics and cybernetics, like Karl Gerstner, to create new imagistic worlds, began to be seen as sources of inspiration for the art scene. [31] At that time, the technical arts became institutionalized at institutes of technology along the lines established at the Center for Advanced Visual Studies (CAVS) at MIT. Frieder Nake summarizes: «Computer generated images are booming. Activity in computer art during the sixties was a trifle compared with the attention that has been paid to it since the middle of the eighties. Exhibitions, awards, books, programs, products.
There was hardly a department or school of fine arts in the USA that would not have had a few computers sitting around.» [32] The concentration on images—the constant insistence on the production of two-dimensional visualizations on the one hand, and the extremely expensive development of three-dimensional image machines, like the Caves [33], on the other—led to a schism in computer art. While the artists of the sixties were working under a paradigm of an art of precision, they were rather insensitive to those arts which dedicated themselves more to communicative or politicizing actions. Performative work could not be taken up by artists who were object-oriented. For this reason, no direct influence from the artists and pioneers of the technically oriented arts of the earlier period is detectable on the contemporary computer art scene. [34] As an example of this ahistoricity, let us consider the case of the Munich Make-World Festival of 2001. Organized by Olia Lialina and Florian Schneider, the show was nothing more than an aperçu. Graphics by Herbert W. Franke were exhibited, but no connections were made between these works and contemporary, partially animated works. In addition, the thematic focus of the festival was activism, and it was only his contextualization as an expert that allowed Franke's graphics, as the work of an artist-engineer, to be appreciated. [35] At present, at least on the Internet, there are two initiatives worth looking at, both dedicated to using computer code as the raw material for artistic creation, but each proposing a completely different project. The more prominent Web site of the two grew out of festival practice: since festivals usually function as the main medium for presentation in the media arts, a festival was organized after contemporary works in computer art had been collected, a festival whose structure differentiates itself from other exhibitions by virtue of its annual change of location.
The second project goes back to an initiative by Adrian Ward and Alex McLean. But even here, historical precursors from the same domain are not to be found, which is especially surprising because, after Franke, the Anglo-American scene had essentially been characterized by continuity. Yet another factor makes writing a history of computer art difficult. Although the development of the computer itself has, of course, become worthy of a museum—because of the ubiquity of the digital world—early computer art has not. Denied the conventional means of exposure via museums and public displays, computer art can seem confusing. Without the benefit of the relevant computer code, and in the absence of technicians, machines or competent art historians with the ability to read such source code, only the material artifacts of computer art are visible. [36] What is astonishing about this discontinuity in art history is not so much the realisation that there have been users with artistic interests ever since the first electronic calculating machines, but that computer art could not, for a long time, spread as widely as, for example, video technology, which appeared at about the same time, or other storage media such as audiotape. The reason for this is not only the complexity of the machines that had to be used and the fact that, at first, specialized personnel were needed to operate them. Restricted access to machines, which in the early days of computing was completely different from what it is now in the PC age, was the main reason why there was little acceptance on the part of the critics. Computers were usually rare and extremely expensive pieces of equipment, and computing facilities offered work places only for top experts in their special fields. The machines ran constantly in research environments and in critical business domains, where the risk of failure had to be kept to a minimum due to time and cost considerations.
In addition, the size of the first machines has to be kept in mind—they occupied entire rooms. The late democratisation occurred after the development of personal computers such as the Altair 8800 (around 1974), the Apple I (1976) or the Sinclair ZX80 (1980). Computer use spread especially when the Commodore C64 came on the market in 1982. [37] Despite the availability of the early game consoles, it took some time before there was broader interest in computer hardware. [38] In addition, from around 1970 onward, skepticism towards machines and their role in society grew. The naive belief in the technically possible as a cure and support for the deficiencies of the human condition dissipated, finally leading to an anti-technology, anti-computer stance in political activism. [39] During that period, with the academic field of art history marked by conservatism, there was no interest in reappraising how technology related to the fine arts of the present. With the far-reaching fame of Joseph Beuys and Andy Warhol, other paradigms had become important in art criticism, whereby the production of material by graphic artists was no longer acceptable—something which had been done earlier as a concession to the art market. [40] Dieter Daniels describes this with the help of the concept of interaction: «During the sixties, the interaction between the public, the artists and the works themselves became a characteristic element of the new forms of art, outside the established categories and institutions. ‹Intermedia› was the term for the ideal of surmounting types and technologies. In place of being inaccessible, Happenings and ‹Fluxus› offered the audience the opportunity to determine its own experiences with art to a great extent.
The goal of erasing the borders between artists and the audience, and the removal of the differences between production and reception, had many parallels with the political demands of the 1968 uprisings, after the means of production had been occupied by those who were consumers of the products.» [41] For this reason, static computer graphics, which comprised most of the computer art of phases one and two, became part of the system of art only in an irregular manner, because art had already long been trying to change social practices through artistic action. In addition, given that the technology did not make possible what could already be achieved in the analogue world—with its mail, fax and copy-machine networks—it is understandable that computer art made no lasting impression on the aesthetics of information.

B. Descriptive Analysis

In the computer art of the first two phases, there is a conceptual gap between the experimental and multimedial efforts of American artists and the graphically oriented forms of German provenance. Two positions will make these differences clear. In the following, I will contrast the «Videoplace» of Myron Krueger, whom I consider to belong to phase two, with a phase one work by Georg Nees called «Schotter.» [42]

1. Georg Nees: «Schotter»

«Schotter» («Gravel») by Georg Nees is a portrait-format graphic assembled from twelve columns of twenty-two squares each, all squares having sides of equal length. Read from left to right and top to bottom, as one would read a European language, it shows disorder that increases down the sheet. [43] The visible presupposes an order that is not itself shown in the picture: an ideal state in which the squares lie along a horizontal line, each set precisely beside the next, so that the upper and lower edges form straight lines. This state is not seen in the picture. Row by row, the state of disorder successively increases down to the lower border of the picture. The program creates disorder by rotating each square about the intersection of its diagonals, and by increasingly displacing the squares within the graphic space. This graphic raises a number of questions about the relationship between an image that is constructed and one that is computed. For this reason, the limited insight to be gained through viewing alone has to be assessed by comparison with a work of classical constructive art. [44] In that case, by means of observation, the intent of the picture, or its inherent logic, can be recreated. But then, what is the actual content of the picture? It can be said that the picture illustrates the relationship between order and disorder. This, however, arises through prior knowledge that is already interpreted, and that then leads to the above description. In this way, the rather orthogonal section of ordered squares in rows next to each other (up to about row six) can be evaluated as a state of higher order compared to the lower section of the image. By mere viewing, however, it cannot be exactly determined which processes produce the increase in disorder. The coordinates and inclinations of the squares could, of course, be measured.
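The generative principle described above—row-wise increasing random rotation and displacement—can be made concrete with a short sketch. The following Python fragment is an illustrative reconstruction, not Nees's original code (which was written in a different language on plotter hardware); the grid dimensions follow the description of the work, while the perturbation ranges and the seed are assumptions.

```python
import random

def schotter(rows=22, cols=12, side=1.0, seed=1):
    """Return (x, y, angle) triples for a Schotter-like grid.

    The top row is perfectly ordered; rotation (about the square's
    centre, i.e. the intersection of its diagonals) and positional
    drift grow linearly with the row index. All numeric ranges are
    illustrative, not Nees's original parameters.
    """
    rng = random.Random(seed)
    squares = []
    for row in range(rows):
        disorder = row / (rows - 1)  # 0.0 at the top, 1.0 at the bottom
        for col in range(cols):
            angle = rng.uniform(-45, 45) * disorder  # rotation in degrees
            dx = rng.uniform(-0.4, 0.4) * disorder   # horizontal drift
            dy = rng.uniform(-0.4, 0.4) * disorder   # vertical drift
            squares.append((col * side + dx, row * side + dy, angle))
    return squares
```

Plotting the returned squares with any 2D graphics library reproduces the effect the text describes: a perfectly aligned top row dissolving, line by line, into rotated and drifting squares. Measuring the coordinates and inclinations of the printed squares would amount to inverting such a procedure.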
This would already mark a boundary that cannot be crossed by inspection alone. On the contrary, it must be assumed that the observer either sees past the sense of the picture or extracts each of its senses visually without having them supported contextually. With additional viewings, a spatial effect is seen: an optical illusion of a gentle turning from the inside out in the center left and lower right of the image area. If one then draws in the context—namely, that this is a graphical realization of a mathematical model coded by means of a formal language—then the question arises as to what the image is above and beyond a specific visualization randomly generated by a machine. This leads to the core question: is the depiction a picture, a diagram, a technical drawing, or something in between? At first, as a result of successive examinations of the image from top to bottom, the impression of increasing deviation from the system of order arises, as described above. Upon further examination, structures appear even as disorder increases; structures that cross over from the formal context of individual pieces, through their respective positions relative to each other on the surface, to new non-rigid geometrical figures. An interpretation allows us to claim that the condition of increasing disorder allows ordered structures to appear within the image, without clearly fixing them into definite geometrical forms. This effect can be described in terms of information theory by saying that super-symbols are being formed in the region of disorder. [45] They can be described as having dynamic and contingent qualities. The upper portion of the image, by contrast, is static. This leads to the realization that, by interpreting the results of observation on a higher plane, dependent elements of order become visible within the realm of disorder. This does not occur in the region of the image with higher order.
Here it is evident that an additive lining up of squares can, in turn, lead only to the formation of other squares or rectangles. [46] We still have to consider the programming that actually gave rise to the image. The optical evidence for simultaneous states of order without the generation of formally divergent super-symbols, with a relatively gradual transition into disorder—a disorder that evokes contingent and formally divergent super-symbols—points to a feature of the programming. For this reason, the role of programmed chance in the parameters would have to be taken into consideration. Following the artist's description of the process [47], it can thus be concluded that the meaning of the image—a meaning that adds value to what the work has as the diagram of a formula—can only be deduced if an integrated investigative model is applied, comprising both observation and investigation of the computational foundations. [48] In a logically deterministic computer program, this approach then imposes a relationship between an experience in observation and knowledge about the abstraction of a problem. [49] It is therefore seen that understanding is not determined exclusively by a unilateral investigation of the source code on the one hand, or by the sheer examination of the image alone on the other. This is because, in contrast to a composition created in the traditional way by the visual calculations of an artist in a series of trials, or simply through the creation of an image however it came about, an examination of the source code shows that «Schotter» exists in the form of one of the n possible graphical states of the program. In terms of computer art, this is the key element of this work. [50]

2. Myron Krueger: «Videoplace»

By contrast, Myron Krueger's «Videoplace» is a ‹dynamic› work in both function and genesis, a work that, at first glance, is difficult to compare to a graphic such as the example above.
Under no circumstances should the visual aspects be compared; what should be compared is the way the computer itself is used. «Videoplace» is a work in progress, one on which this computer scientist has been working since about 1974. [51] Krueger's primary goal is the development of ‹user interfaces› for man–machine combinations. He pursues an approach oriented towards the physical and communicative range of the human extremities and sense organs. In his environments, no parallel worlds in which people «immerse themselves» are developed. Instead, the actor in his installation is not constrained by technology applied to the body, but possesses complete freedom of movement in responding to visual and acoustic stimuli. As a rule, the work consists of a surveillance camera linked to computers through feedback systems. The computers calculate the movements of the users and the reactions of the system to the input data in real time. Because of this, a fundamentally different understanding of the computer and its suitability as a tool for the artist is conveyed. Krueger describes the motivation behind his work as follows: «As I observed how artists stood in relation to their traditional tools, I noticed what they were doing with computers around the end of the sixties. I found that they were making art in a truly time-honored fashion. That seemed wrong to me. If the computer was to revolutionize art, it had to define new forms that would be impossible without it, and not simply help create traditional works.» [52] What appeared unthinkable at the time when Nees' «Schotter» was created, for Krueger became a driving force in achieving the above-cited re-assessment of ideas on computer art: reactions in ‹real time.› The complexity of the switching, using tactile and visual sensors, serves not just to control and trigger certain functions. His description from 1990 shows that the system possesses two functional modes.
In one of them, the machine alone decides what type of interaction, from among the range of possible sequences, will be run. In the other mode, a human ‹teammate,› an ‹operator,› takes control. In this second mode the computer is used as a switch, whereas in the first mode the complexity of the human reaction is determined and interpreted within an environment made up of a group of machines. For Krueger, writing in 1990, the second mode was merely a transitional phase until the technology had developed far enough that complex forms of integration became possible through the machine alone.

C. Software Art – Computer Art

Now, in a third phase, are we experiencing a renaissance of computer art as software art? [53] Will it once again be the art of the programmer that is fed back into a contemporaneously developed idea of artistic creation on the basis of code at the festivals of activist artistic activity? Once again, the impetus comes less from the art academies than from other professions, such as software development (Antoine Schmitt) and media layout and design (W. Bradford Paley). This connection possesses an Archimedean point in the free software movement, which has had the greatest influence on the present-day scene of the artist-programmer. [54] Along with this, there have been accompanying debates over copyright, concepts of the artwork, the role of the artist, and software code and how these relate to the legal system and the arts. Beyond all these concerns, the computer art scene also has a political and critical element, which has become the main theme of the software developed by Adrian Ward, for example. The works presented here share a conceptual aspect which can clarify where the differences with the earlier works reside. [55] In contrast to the works of Nees and Krueger, both works show a high degree of self-reference. In these works, which do not simply rely on pseudo-randomly generated graphics, the degree of generative contribution is evident from their conceptual viewpoints, and illustrates other modes of use.

1. Alex McLean: «Forkbomb»

«The ENTER key has acquired power that corresponds better to the meaning of the word in poetry, that is: ‹to make›—than all of the poetry and literature in history.» Friedrich A. Kittler [56]

The «Forkbomb,» which Alex McLean wrote in 2001 in the Perl [57] scripting language, is essentially a thirteen-line program that found its way into the art community through transmediale.02, where it won a prize. [58] In describing the function and action of the script, a certain radical quality is apparent, coupled with the claim that this piece of software is art: in the end, it is nothing other than an uncontrolled system halt. Through mechanisms that will be described here, the code gradually paralyzes the system on which the interpreter [59] executes the script. This occurs through a so-called ‹process› that branches out more and more, launching an avalanche of identical processes, and continues until—depending on the capabilities of the computer—the system resources are exhausted and a system halt results. Along the way, output is produced as a bit pattern of zeros and ones. On the homepage of transmediale, we can read the following: «The pattern in which these data are presented can be said, in one sense, to represent the algorithm of the code, and, in another way, to represent the operating system in which the code is running. The result is an artistic expression of a system under stress.» [60] —see the microanalysis of «Forkbomb.» In general, the script initializes a cascade of loops which, although they follow a programmed logic, use the inherent logic of the system itself in a way it was not intended to be used. When the program is started, a succession of zeros and/or ones can be seen on the standard output device, which nowadays is usually a monitor screen. From this, the part that the ‹while› statement has already executed can be recognized. [61] The computer gradually becomes paralyzed. As this happens, the output changes.
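The exponential branching that exhausts the machine can be modelled safely, without creating any real processes. The following Python sketch is a toy reconstruction of the mechanism, not McLean's Perl script: each simulated process spends one unit of ‹strength› per fork, the child branch emits a 0, the parent branch a 1, and both branches keep forking until their strength is spent.

```python
def fork_cascade(strength, bits=None):
    """Toy model of a bounded fork bomb (no real processes are created).

    Each call represents one fork: the child branch records a 0, the
    parent branch a 1, and both branches continue forking with one unit
    of 'strength' less. The output length therefore doubles with each
    extra unit of strength.
    """
    if bits is None:
        bits = []
    if strength <= 0:
        return bits
    bits.append(0)                    # child branch announces itself
    fork_cascade(strength - 1, bits)  # child continues the cascade
    bits.append(1)                    # parent branch announces itself
    fork_cascade(strength - 1, bits)  # parent continues as well
    return bits

print("".join(map(str, fork_cascade(3))))  # 14 bits: 2**(3+1) - 2
```

The pattern of zeros and ones is exactly the kind of trace the transmediale description refers to, and the exponential growth—doubling with each unit of strength—is why a real, uncapped fork bomb saturates the process table of its host.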
[62] The software can also be interpreted as a random generator. [63] Here, however, it does not fulfill the function that it had in the work of Nees, for example. In any case, the program can also be understood as displaying the finite nature of the computer, contrary to the attributes ascribed to it by industry, which, in advertisements, has elevated the machine to mythic levels of capability and possibility. The program is efficiently written and so fulfills the requirement of a ‹normal› computer program. In the way it works, however, it overturns the paradigm of functioning. In principle, it is programmed taboo-breaking. If an attempt were made to use the program in a productive context, there would be no more productivity because, most likely, the system would have to be restarted again and again. In this respect, it is something very different from that which, by means of norms and other controls, is brought under control, classified as art and, at least in theory, remains controllable. [64] In the digital day-to-day world, it is tempting to compare this to a virus. [65] By placing the section of code in an artistic context, another arrangement for both the code and its developer becomes appropriate. As a rule, the legal system steps in quickly to safeguard normalcy: lawsuits are brought against programmers who do not follow the dominant paradigms of the respective programming languages, but instead use these languages intentionally for destructive purposes. [66] If the functionality is described metaphorically as a virus, and is interpreted as such, then there is room for discussion. For this, only a limited analysis of the code, such as the one undertaken above, is needed. But then the work falls apart into code/effects and the context noted above, without any conclusions being drawn concerning possibly significant formalisms or conventional subjects.
On the positive side, this would, however, contradict the definition of computer code which, being clear and unambiguous, excludes any semantic relationship to its elements. [67] In a way, every higher-level language offers the possibility of semantically charging symbols. These are the variables, whose naming is arbitrary. McLean calls the core variable of the program ‹strength.› As described earlier, a ‹my› has to stand in front of this variable, since all variables must be properly declared. This produces the phrase ‹my strength.› If a lyrical ‹I› were read into the text, then some interpretation would be needed to provide meaning. The label ‹twist› can be viewed similarly: a word that, in the context of programming, could be chosen arbitrarily, and which forms the anchor point for the ‹goto› instruction. It does, however, seem that the relationship of three semantically charged symbols to the formal arrangement of the code is rather arbitrary. The probability that there is a subtext behind what is explicitly stated is therefore small.

2. Adrian Ward: «Auto-Illustrator»

A completely different approach is adopted by Adrian Ward, who has written support programs for products from the software maker Adobe, whose name immediately brings to mind completely conventional software for the construction and revision of pictures and drawings. «Autoshop» and «Auto-Illustrator» have been released by a company called Signwave. The prefix ‹Auto› already betrays the fact that the user is partially incapacitated. Ward’s programs are modeled on Adobe’s standard image manipulation program «Photoshop» and the vector graphics program «Illustrator,» respectively. Both programs show how the tools that are employed control the appearance of any possible image that can be constructed. Unlike the satirized software, both offer functionalities that influence usage. As a result, the user gets the feeling that he is losing the possibility of control—the very feature that large software companies use to attract customers. By going beyond the merely satirical, the programs reveal—and exceed—the way that manufacturers of proprietary software conduct themselves. The user is spoon-fed with the usual built-in, attention-grabbing mechanisms: for every update, an Internet connection is opened. Users who do not wish to register are regularly faced with the request to enter a serial number (see Fig. 8-10). On the other hand, in order to navigate the programs freely, registered users have to enter a yard-long chain of characters. Yet even before any type of work can begin with «Auto-Illustrator,» the user is forced to enter into an end-user licensing agreement. While this is normally a rather complicated undertaking because of the nature of the ‹license› text variety, one that can hardly be understood in its entirety by those without legal training, [68] Ward provides the user with something to read that deserves attention. This text is a contract between the viewer/user and the artist, an agreement in the sense stated by Ernst H.
Gombrich, who maintained that «only in exceptional cases are the illusions of art illusions of our actual surroundings.» [69] Ideas of controllability sold by contemporary production software break down here, as almost all of the tools behave contrary to expectations. In any case, even designers have already recognized that in creating images they can rely and fall back on machine support. Accordingly, a clause in the license also states that the user acknowledges the software itself as the ‹creator› of any printed or otherwise published work. But the core of the software is not there to allow the artist to use the different random generators that swallow up images, hunt down bugs on screen or write by themselves. [70] Much more important is the context, whose entire breadth is artistically, subversively and discursively under discussion. While the designer tries to make the tool his own, unwieldy though it may be, something unexpected happens: the means of production are infiltrated by art. [71] Because of all of the copyright restrictions and limits placed on the user, all the symptoms of a paternalistic group of producers of digital content become virulent here, and ever more evident the more intensively one becomes involved with the software, and this also means reading the licenses, README files and ‹abouts,› and interpreting the menus and dialog boxes.
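The principle of a tool that behaves contrary to expectations can be sketched in the abstract. The following Python fragment is a hypothetical toy, not code from Signwave's «Auto-Illustrator»: a ‹pen› that superimposes its own deterministic wobble on every point the user draws, so the result is co-authored by the tool and the promised controllability quietly evaporates.

```python
# Hypothetical toy illustrating a "misbehaving" drawing tool: the
# user supplies a path, but the pen injects its own generative
# distortion that cannot be switched off. This is only a sketch of
# the principle of partial user incapacitation described above.

import math

def wayward_pen(path, amplitude=3.0):
    """Return the path as actually drawn: each user point is shifted
    horizontally by a wobble the tool itself decides on."""
    drawn = []
    for i, (x, y) in enumerate(path):
        wobble = amplitude * math.sin(i * 1.7)  # the tool's own "will"
        drawn.append((round(x + wobble, 2), round(y, 2)))
    return drawn

if __name__ == "__main__":
    straight_line = [(float(i), 0.0) for i in range(5)]
    # the user asks for a straight line; the tool delivers a wave
    print(wayward_pen(straight_line))
```

The distortion here is deterministic only to keep the sketch reproducible; the text above attributes actual random generators to Ward's software.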

D. Conclusions

For the moment, this software is the tentative culmination of the development of computer art and, from the perspective of art history, the protagonists have no other links to each other beyond the utilization of a machine whose processes are controlled «before and after every text» by text. [72] And this is disregarding any outputs. In the beginning, there was an image, whether in motion or not, which, by means of symbolic script, could be programmed to be under the control of a computer. In the end, Ward chooses mimicry and not mimesis. Although Nees, Nake, Noll and the other ‹pioneers› still had to deal with the burden of sluggish machines, they nevertheless tried out many processes for creating images. And, with the introduction of methods of inspection and description, these results produced meaning. Here the role of the programming of images as something applied is clear. Switching and reactive systems are complex, as Krueger's «Videoplace» example should have shown. The role of the computer there is completely different, even if it is used to produce images in real time. «Forkbomb» should have shown how computer code can be used as an object of investigation. Although the machine's pressure for precision always carries with it a restriction of the possibilities of expression as a requirement for its functioning, it is in principle possible for the code, under certain circumstances determined by comments, variables or the use of other solution methods, to be active in numerous ways. In the end, Adrian Ward points to an entire range of phenomena that today are bound up with the culture surrounding the computer. He places these in an artistically subversive context that he himself has created, in a way that has been unique until now, in order to create a tool out of his work. With this, the circumstances of reception change dramatically.
Although Nees could still deal with the ‹contemplative observer,› this has changed already for Krueger, because his observer has space-image experiences and is prompted to undertake different behaviors in different situations of human-machine interaction. To be sure, the «Forkbomb» provides an extremely symbolic output as a ‹result,› consisting of a changing sequence of zeros and ones that rather blatantly refer to the mathematical placeholders for the two possible states of the universal machine. However, the aesthetic limits are called into question: the script can be changed, and the source code is supplied along with it. It can be put into any context whatever and run under different operating systems. This transparency permits the user-recipient direct access to the artistic material. Besides being an implicit and quasi-real machine parody, the work also stands for a culture of free software, which is also indirectly called for by Ward with his «Auto-Illustrator.» This software also outlines the consequences of excluding the user, who is incapacitated legally, systemically and practically. The works he produces do not belong to him; nor is he able to see how all the processes work, and, because of the random generators, he has to give up control over the software. In all three phases, the generative factor, which has been written in subconsciously, is subjected to increasing restrictions. It completely determines the appearance of the image and, through its meaning, constitutes a constructive means of exchange itself (Nees). The modern experience of contingency is a condition for the sensation of freedom in the installation by Krueger. The «Forkbomb» can also be understood as a random generator. «Auto-Illustrator» possesses generative elements as its core functions. The computer art of the present creates more room for interpretation than was the case in the early years. The essential means of interpreting works of art was description.
It is thus clear that description, even in the case of computer art, produces differences and therefore meaning, because it establishes comparability. It therefore makes sense to speak of computer art and not software art. A history of computer art must therefore move forward in an integrating fashion by describing and interpreting its subjects, and not merely by contextualizing them.

© Media Art Net 2004