The Open Work: Participatory Art Since Silence

Judith Rodenbeck


In the introduction to his 1998 book, Relational Aesthetics (English translation 2002; essays dating as far back as 1992), the curator Nicolas Bourriaud writes that currently, “the liveliest factor that is played out on the chessboard of art has to do with interactive, user-friendly and relational concepts.” (8) The resulting works Bourriaud thinks of as “hands-on utopias” (9)—“the artwork of the 1990s turns the beholder into a neighbor,” he writes (43)—and it is the object of his small book not only to bring some coherence to the category they define but also to elucidate some of the terms of that category. “The artwork is presented as a social interstice within which these experiments and these new ‘life possibilities’ appear to be possible.” (43) Bourriaud’s claim is that this new work “in no way draw[s] sustenance from any reinterpretation of this or that past aesthetic movement.” (44) Neither utopian nor formalist, relational aesthetics emphasizes immediacy, contingency, and service, and has at its core what Claire Bishop has glossed as a “DIY, microtopian ethos” that is fundamentally political. (Bishop 2004, 54) And for Bourriaud, “the very first question…has to do with the material form of these works. How are these apparently elusive works to be decoded, be they process-related or behavioural[,] by ceasing to take shelter behind the sixties art history?” (7)
Though the rubric of “relational aesthetics” has provided an extremely useful conceptual frame through which to understand the gambits of a number of important contemporary projects, it is a frame that needs a little interrogating. I want to ask what Bourriaud’s defensive move might have obscured in that sixties work, and what it might tell us about contemporary practice, and I want to do so by putting those terms Bourriaud has retrofitted—interactive, user-friendly, relational—back into the (or an) historical perspective that Bourriaud has willfully rejected.

Overview of Paper
This talk will be addressed loosely to three overlapping sets of ideas: 1) participation; 2) the open work and, more specifically, what Eco calls the “work in motion;” and 3) the problematic presented by contemporary artistic autonomy.

I want first to propose a loose typology of participation. It is possible to think about participation in at least three ways, each of which proposes a different idea of competence.
Philosopher David Novitz, writing on “participatory art and appreciative practice,” provides a useful weak definition. His interest is in an underappreciated set of artworks, “those largely neglected art forms that cannot adequately be appreciated, and cannot function properly, unless the viewer is physically present in the artwork itself or a performance of it, and, while there, participates in certain activities that arise out of and are required by these works.” (Novitz, 153) For Novitz, participation is another word for a kind of immersive physiological engagement.
A second, stronger definition understands participation in a more active sense as an extension of engagement. While this kind of participation does involve some degree of conscious navigation it nevertheless lacks a sense of responsibility beyond the immediate obligation to twiddle or observe.
Finally, the strongest definition necessitates a more precise and active notion of participation. Under this rubric participation involves a conscious decision-making, action-taking on the part of the participant in such a way that the structure of the work itself is shaped by that activity. This kind of participation—what Umberto Eco calls “an oriented insertion” (Eco, 19)—yields the actual reconfiguration of the work. The work is iterable; no two performances will be the same. The artist’s embrace of chance extends itself to the received form of the work itself.

Cage & Silence
I’d like to retell a familiar story. It is the story of John Cage’s silent piece, 4’33”, which was composed (or rather, given a score) in 1952 and premiered one rainy evening that same year in Woodstock, New York. The piece—three strictly measured movements of silence—marked an endpoint to certain compositional explorations, introducing indeterminacy into the compositional repertoire. Though chance operations (reading a modified tarot) were used to determine the duration of each movement, the performance of the piece—for it was a piece to be performed—was left to the performer. As the sound of the everyday crept in to fill the measures of Cage’s score, it became clear to his audience that the three precisely timed movements of chance-determined length, framed by the concert hall, designated a “content” that would always be indeterminate.
Cage’s silent piece, in part because it was so dramatically visual, presented to visual artists a new understanding of the field of relations available. From 1957 to 1958 Cage intermittently taught a composition class at the New School for Social Research in New York. Two of the most extraordinarily productive outgrowths of the Cage course at the New School were the time-based arts that emerged under the aegis first of happenings and then of Fluxus. Both were rule-bound, intense investigations of time and its spatialization (as well as of the definition of art, materials, and production): radically material, immersive, hybrid, and performative, they were funky, amateurish, and fundamentally social.
Out of these two practices and their engagements with process and, importantly, with behavior, grew the twinned projects of conceptual and systems art. Charlie Gere has suggested that the latter disappeared for a complex of reasons: failures of quality and of exhibitions, skepticism about its industrial and technocratic linkages, problems with instrumentalism, and the difficulty of commodifying such work. And arguably a similar array of problems taxed conceptual art projects throughout the 1970s as they morphed over time from critical to bureaucratic to, finally, institutionalized practices, gradually losing their urgency.
The question is: to what degree had these developments already been foreseen in neo-Dada collective and collaborative projects of the 1960s? To what degree had Eco’s concept of the “work in motion” been a kind of prophylactic against precisely the kind of bureaucratizing, institutionalizing, and devitalizing—the taming—that took place? And finally, most poignantly, to what extent has the theorizing of “relational aesthetics” obviated critical address to those difficulties?

Works Cited
Bishop, Claire. “Antagonism and Relational Aesthetics,” October 100 (Fall 2004): 51-79.

Bourriaud, Nicolas. Relational Aesthetics (Paris: Les Presses du Réel, 2002).

Eco, Umberto. The Open Work, trans. Anna Cancogni (Cambridge, Mass.: Harvard University Press, 1989).

Gere, Charlie. Digital Culture (London: Reaktion Books, 2002).

Novitz, David. “Participatory Art and Appreciative Practice,” Journal of Aesthetics and Art Criticism (2001): pp tk.


Some Exploratory Notes on Produsers and Produsage

Dr Axel Bruns                     


In recent years, various observers have pointed to the shifting paradigms of cultural and societal participation and economic production in developed nations. These changes are facilitated (although, importantly, not solely driven) by the emergence of new, participatory technologies of information access, knowledge exchange, and content production, many of which are associated with Internet and new media technologies. As early as the 1970s, futurist Alvin Toffler foreshadowed such changes by coining the term ‘prosumer’ (Toffler, 1971): highlighting the emergence of a more informed, more involved consumer of goods who would need to be kept content by allowing for a greater customisability and individualisability of products; this indicated the shift from mass industrial production of goods to a model of on-demand, just-in-time production of custom-made items. Going beyond this, Charles Leadbeater has introduced the notion of ‘pro-am’ production models (Leadbeater & Miller, 2004) – alluding to a joint effort of producers and consumers in developing new and improved commercial goods. Similarly, other industry observers speak of a trend towards ‘customer-made’ products (2005a), while J.C. Herz has described the same process as ‘harnessing the hive’ (2005) – that is, the harnessing of promising and useful ideas, generated by expert consumers, by commercial producers (and sometimes under ethically dubious models which appear to exploit and thus hijack the hive as a cheap generator of ideas, rather than merely harnessing it in a benign fashion).
Such models remain somewhat limited still, however, in their maintenance of a traditional industrial value production chain: they retain a linear producer -> distributor -> consumer model. Especially where what is produced is of an intangible, informational nature, a further shift away from such industrial, and towards post-industrial or informational economic models can be observed. In such models, the production of ideas takes place in a collaborative, participatory environment which breaks down the boundaries between producers and consumers and instead enables all participants to be users as well as producers of information and knowledge, or what I have come to call produsers (also see Bruns 2005a). These produsers engage not in a traditional form of content production, but are instead involved in produsage – the collaborative and continuous building and extending of existing content in pursuit of further improvement. Key examples of such produsage can be seen in the collaborative development of open source software, the distributed multi-user spaces of the Wikipedia, or the user-led innovation and content production in multi-user online games (some 90% of content in The Sims, for example, is prodused by its users rather than the game publisher Maxis; see Herz 2005: p. 335). Further, we also see produsage in collaborative online publishing, especially in news and information sites from the technology news site Slashdot to the world-wide network of Independent Media Centres, the renowned and influential South Korean citizen journalism site OhmyNews, and beyond this in the more decentralised and distributed environments of the blogosphere (Bruns 2005b).
While there are elements of boosterism in the coverage of such trends, the identification of the participants behind such produsage phenomena as a new ‘Generation C’ is nonetheless useful (2005b). In this context, ‘C’ stands in the first instance for ‘content creation’, as well as for ‘creativity’ more generally (and Generation C therefore appears closely related to Richard Florida’s idea of a creative class; see Florida 2002); if the outcomes of such creativity are popularly recognised, this can also lead to another ‘C’-word, ‘celebrity’. But it should also be noted that Generation C poses a significant challenge to established modes and models of content production, and importantly, therefore, the ‘C’ can also refer to issues associated with both ‘control’ and the ‘casual collapse’ of traditional approaches.

Some Common Characteristics of Produsage
Across the various domains in which produsage occurs, some common traits can be observed. Necessarily, produsage takes somewhat different forms depending on the object of the produser effort, and the community which is engaged in that effort, but these fundamental traits are nonetheless present in varying balance in each case.

User-Led Content Production
The core object of produsage is to involve users as producers, and these user-produsers often take the lead in the development of new content and ideas. Whether instigated by the operators of produsage sites, or out of their own motivation, users create content. In many cases (including the Wikipedia or various open news sites), the sites themselves act as tools for content production; in several others (especially where content produsage for computer game environments is concerned), the sites provide or point to useful tools and offer hints, guidelines, and frameworks for effective produsage.

Collaborative Engagement
Produsers tend to collaborate rather than work by themselves as individual content producers; indeed, in order to be a produser (rather than producer) it is necessary also to be a user of other participants’ content. Use often leads to the identification of opportunities for further extension and improvement of existing material. Produsage environments frequently encourage collaborative engagement by providing tools or informational structures which are preconfigured for collaboration between individual produsers; this can be seen for example in the distributed discussion functionality present across the blogosphere, or the placemark sharing and discussion tools available within Google Earth.

Palimpsestic, Iterative, Evolutionary Development
Engagement with existing content provides produsers with a motivation to further improve upon it; this evolutionary development may lead to a new iteration of existing versions (for example, the generation of a new revision of an open source software package) or to the remixing of content in the development of a new branch (whether in the form of a new remixed version of artistic material, or the forking of an open source project in different directions of development). Many produsage spaces are also their own archives, enabling users to trace the evolution of content through its various stages, so that the continuous development of new versions of content leads to the creation of a palimpsest: a repeatedly over-written, multi-layered document. This is evident for example in the Wikipedia with its elaborate page history tools, or in the ability to trace the genesis of a music track on the ccMixter produsage site.

Alternative Approaches to Intellectual Property
Iterative engagement with content in a continuous process of evolutionary development requires new approaches to the recognition and enforcement of intellectual property rights. A strict enforcement of such rights will tend to stifle the ability of later produsers to build on the work of their predecessors, and many produsage environments utilise open source- or creative commons-style licensing frameworks. At the same time, a complete release of content into the public domain, amounting to produsers giving up their legal and moral rights to be recognised and acknowledged as the creators of intellectual property, would often turn out to be counterproductive, since one of the motivations for produsers still remains the ability to be seen as a contributor to distributed produsage efforts. Produsage sites therefore must negotiate a middle path between IP regimes which enable as far as possible their participants’ engagement with one another’s content, and approaches which maintain individuals’ rights to be acknowledged as content contributors.

Heterarchical, Permeable Community Structures
Sites of produsage flourish if they can attract a large number of engaged and experienced participants who adhere to the ideals of the site. This requires a balance between openness and structure – if sites are seen as being controlled by a closed in-group of participants, they are unlikely to attract new produsers into the fold, as these are likely to feel alienated; on the other hand, if anyone can participate without any sense of oversight by individuals or the established community as a whole, then cohesion is likely to be lost. Many produser sites have therefore instituted heterarchical regimes of one form or another – in many open news sites, for example, community members are chosen at random or based on seniority and given the right to moderate their peers’ contributions; in some of the Wikimedia Foundation projects, groups of administrators have been created by vote of the overall community; while some open source development projects are led by a group of ‘benevolent dictators’ who have emerged from the community (and have limited powers, as development can always be forked into new projects if there is disagreement). Each of these models can be described as heterarchical: showing neither purely hierarchical organisational traits, nor operating simply as a leaderless anarchy.

Emerging Questions for the Produsage Model
The success of open source software development and of other collaborative produsage spaces, such as the Wikipedia, points to the fact that produsage models are in the process of being more widely adopted across a number of content production domains. As this mainstreaming of produsage takes place, the model must also encounter a number of significant questions – especially as it attempts to find points of connection and coexistence with existing, production/consumption-based approaches. Answers to these questions have not yet been fully formulated, and may vary depending on a number of other factors, but it is important to foreshadow some of the areas of contestation already.

As the emergence of software companies formulated around an open source software development model has already shown, produsage and the commercial exploitation of the intellectual property generated through it are not necessarily mutually exclusive. Open source software firms often operate along one of two related models – ‘selling bottled water’, that is, selling a convenient package and framework for what is otherwise a freely available resource (such as, for example, Red Hat’s ready-to-install CD-ROMs of open source software packages), or offering expertise and consultancy around that resource. Both models complete the move from selling products to selling services which is characteristic of a post-industrial economy. However, both models rely on and exploit the continued free availability of the core resource around which services are offered; to ensure this availability, it is important that a portion of the proceeds generated from service provision is fed back into the protection and maintenance of that resource (and many open source software providers do in fact allow and encourage their staff to be active participants in and produsers of open source software projects on company time).
At the same time, computer game publishers like Maxis (producer of The Sims) do appear to profit more directly from selling the produser-generated resource itself, rather than offering ancillary services. Where Red Hat, for example, sells a useful but not crucial service to open source users (who are always able to directly access the open source package itself from its development site), the Sims game package is an indispensable prerequisite for entering the game universe of The Sims. In essence, then, Sims users pay Maxis for the privilege of being granted the ability to become produsers of game content, and as produsers subsequently continue to generate games assets which through their richness will attract further potential users and produsers to the game. In some cases of such proprietary spaces for produsage, end-user licence agreements (EULAs) even grant the games publisher ownership of and rights to incorporate any content generated by the user during their engagement with the game. Such models could be described more as hijacking than harnessing the hive, as they lock produser creativity into proprietary environments and deny users any ability to profit from the outcomes of produsage other than as sanctioned by the commercial operator of the environment. (It is therefore incumbent on produsers to become more aware of the rights granted to them as a condition of their participation within specific produsage environments.)

Such potential commercial exploitation of produsage, without direct rewarding of produsers as the collective originators of content, also points to questions around the sustainability of produsage environments. As produsers become aware of attempts to exploit their work without reward, their attitudes towards the produsage environment will rapidly deteriorate, slowing the rate of content produsage and undermining further development. Some reported cases of dissent within massively multi-player online role-playing game environments, as players encountered overly restrictive EULA arrangements, are already instructive in this regard, and it is likely that more are to follow. It is possible that such cases might motivate participants to develop alternative produsage spaces operated by the community rather than by commercial entities (and some community-run online gaming servers do in fact already exist) – indeed, this would mirror the genesis of open source software itself, which also in good part emerged out of a sense of disenchantment with the poor customer relations in the existing software industry – but in the case of resource-intensive spaces of produsage (e.g. in online gaming) the cost of community-run development might be prohibitive.
Even where there is no overt commercial exploitation, however, the sustainability of produser communities can be questioned. Community-led content produsage has so far built its success on a classic model where the value of the prodused resource is greater than the sum of its parts; on average, any participating produser has been able to receive more value from the collaborative project than they had invested themselves. However, the time spent contributing to such projects must still be financed somehow, and entirely volunteer-based produsage models may not be able to be sustained in the longer term. The model of open source service providers cross-subsidising the resource upon which they depend by allowing their staff to participate in development projects on company time may be able to be extended to other domains of produsage.
At the same time, new economic models which are built entirely around produsage as a core practice must also be explored – and some of the ideas gathered on trend-spotting sites may be instructive in this regard (while also indicating potential avenues for further exploitation of produser communities). It is likely that where such new models turn out to be successful we will see a repeat of the bitter battles already being fought between the traditional software industry and its new open-source rivals, and that much rhetoric aimed at undermining the perceived quality of the opponent will be exchanged (in a more restrained way, this is now also taking place between supporters of the Wikipedia and the producers of traditional encyclopedias).
Finally, a different, but related sustainability question also arises at the earliest stages of produsage projects: as such projects emerge and communities around them are beginning to form, how can they be guided to gather the critical mass and momentum needed to sustain development in this first, crucial phase? At such stages, projects often rely on a small number of highly engaged contributors, and it is crucial for them to both convey a sense of purpose and drive for the project as well as create an environment which invites participation from new contributors.

Many of the core traits of produsage spaces are organised around practices of repurposing, remixing, and redeveloping existing content. As noted, this requires innovative internal intellectual property schemes; however, beyond this many produsage spaces are also externally focussed and rely on an engagement with materials from outside of their own environment. Open news sites, for example, depend on their ability to cite and comment on news reports which have been identified from other news sources through the practice of gatewatching (see Bruns 2005b); the Wikipedia builds on knowledge drawn from an even wider variety of sources; while audio- and video-based produsage sites might also incorporate (or hope to incorporate) external elements into their own creative output.
Operating fundamentally on a principle of iterative content evolution within the produsage space, then, which assumes a right to incorporate available materials in the produsing of new content, produsers are often tempted to apply the same approach also to materials drawn from outside (and therefore often available under significantly different licence schemes or traditional copyright frameworks). This raises the potential of widespread intellectual property infringements – and indeed, commercial news operators would likely be able to identify a raft of infringements against their copyright very readily, for example, were they to examine the content of the news-related blogosphere or of many open news publications.

This omnivoracity of participating produsers could present a significant threat to produsage spaces, therefore, as they could be subject to prosecution for copyright infringements. Legal responsibilities are yet to be clarified in such cases – and it may be important for the sustainability of produsage approaches to apply a legal framework not unlike that which governs Internet service providers (ISPs) in many jurisdictions: here, the ISP usually cannot be held responsible for content hosted on user Websites as long as they take down infringing content as soon as it is reported. However, this may also require specific organisational frameworks for produsage spaces (potentially reintroducing a stronger hierarchical organisation once again), which in turn could also affect the feasibility of the space itself.
Such legal questions are not limited only to intellectual property, of course; the quality and reliability of content which has been collaboratively prodused must also be questioned. Misinformation in some of this content (for example, in a collaboratively prodused self-help site on medical issues) may have some very serious consequences, and it is easy to imagine legal action from those who have been negatively affected by it – in such cases, who should be held responsible?

One answer to such questions would also stress that any collaboratively produced content, or indeed any content at all, should always be taken with a grain of salt, of course – indeed, that a caveat of ‘use at your own risk’ should apply to all outcomes of produsage. This may be especially important also because the iterative and evolutionary model of content produsage must by its very nature lead to eternally incomplete outcomes; the point of produsage is that it is always possible to further improve on what is already available.
This realisation should not be seen as undermining produsage overall; instead, it merely indicates a need to further educate participants in produsage as well as users of produsage outcomes: all products continue to contain room for improvement, and so it is not produsage with its continuing, ever-incomplete development of content and artefacts, but industrial production with its artificial separation of development outcomes into distinct ‘complete’ product models and editions, which presents an aberration from the norm. And paradoxically, by always presenting the latest update to the artefact (and always enabling users as produsers to contribute further updates right then and there), produsage frequently offers a more recent, more ‘complete’ version of the artefact than traditional production models are able to do.

Cultural, Social, and Political Implications of Produsage
As noted above, today we are experiencing the emergence of produsage models across a wide range of domains of content development and exchange. This phenomenon appears to be part of a wider paradigm shift, which is supported in part also by the rise of new media technologies. Media play an important part in shaping our consciousness and understanding of the world around us, as well as our place within it, of course, and in this case the very shape of the media as it has shifted away from mostly passive, mass reception to more interactive, individualised modes of active engagement can be shown to have an effect. Advancing even beyond this, especially Internet-based media forms have begun to take on elements of intercreativity (see Berners-Lee 1999), and as this mode of collaborative, productive engagement with content is becoming more prevalent it creates the groundwork for the expansion of produsage environments.
While it is too early to predict the full implications of this change, it already seems evident that one key development is likely to be the expansion of grassroots or vernacular (see Burgess 2005) creativity; this will necessarily have a significant effect on the existing structure and position of the creative industries. At the same time, it must also be recognised that the skills and socioeconomic and technological requirements for becoming a produser in whatever domain are not distributed evenly throughout societies, much less global society as a whole; therefore, there is also a risk that a further digital divide – in this case, specifically a participatory or creative divide – might open up between the more and less privileged strata of society. Such trends must be addressed and reversed through government and non-government intervention at as early a stage as is possible; education at all levels also plays a crucial role here, and must prepare its students to become effective produsers in a wide range of environments.
Ultimately, then, if a widespread adoption of produsage approaches can be engendered across society, this could also come to have a profound effect on civic participation and democratic engagement as a whole. Again, we might note that the media affect our consciousness, and our understanding of the world as well as of the societies we live in, and the mass media traditions from which we have emerged may have also had a significant impact on our understanding of democracy – and so, in many developed countries citizens relate to their democratic environment much as they do to the mass media: democracy has become a spectacle produced by political parties and interest groups and moderated and distributed by journalists and pundits, with citizens as audiences who occasionally switch channels by voting in elections (or generally tune out and regard politics as nothing more than background noise).
If prodused media become a credible and wide-spread alternative to produced media forms, however, then this might ultimately also have an effect on citizens’ understandings of how they relate to their local, national, and global environments – and as regards democracy, it could rekindle a desire on their part to once again become active produsers of democracy, rather than mere passive audiences. Exactly what form this produsage approach to democracy might take remains yet to be seen, as does whether the transition can be a smooth one – but the potential for change which it enables makes produsage an important phenomenon to follow.


Berners-Lee, Tim (1999) Weaving the Web, London: Orion Business Books.
Bruns, Axel (2005a). “Axel Bruns at iDC,” Institute for Distributed Creativity, (accessed 31 Oct. 2005).

——— (2005b). Gatewatching: Collaborative Online News Production, New York: Peter Lang.

Burgess, Jean (2005, 26 Mar.). “Mapping vernacular creativity v0.1,” Creativity/Machine, (accessed 1 Nov. 2005).

Farmer, James (forthcoming in 2006). “Blogging to basics: How blogs are bringing online education back from the brink,” in Axel Bruns and Joanne Jacobs (eds.), Uses of Blogs, New York: Peter Lang.

Florida, Richard (2002). The Rise of the Creative Class: And How It's Transforming Work, Leisure, Community and Everyday Life, New York: Basic Books.

Herz, JC (2005). “Harnessing the hive,” in John Hartley (ed.), Creative Industries, Malden, Mass.: Blackwell, pp. 327-41.

Leadbeater, Charles, and Paul Miller (2004). The Pro-Am Revolution: How Enthusiasts Are Changing Our Economy and Society, London: Demos. Also available at (accessed 31 Oct. 2005).

Toffler, Alvin (1971). Future Shock, London: Pan.

Trendwatching.com (2005a). “Customer-made,” CUSTOMER-MADE.htm (accessed 31 Oct. 2005).

——— (2005b). “Generation C,” (accessed 31 Oct. 2005).


Working On and With Eigensinn

Media | Art | Education [1]

by Giaco Schiesser
Translated by Tom Morrison

The article outlines a media and art education in pace with the times through a new approach: the conception of the Eigensinn (approximately: wilful obstinacy) of media and of artists is developed in detail as the crucial artistic and media-productive force. Offering an insight into some central influences, demarcations and transformations of different art media in the twentieth century, it proposes that a forward-looking media art education in pace with the times could rest on three pillars: 1. training in individual, collective and collaborative media authorship; 2. working on and with the Eigensinn of media (e.g. film, photography, computers/networks and the fine arts); 3. art as process, art as technique. These three pillars are worked out and presented in detail.

Biography Giaco Schiesser
Giaco Schiesser is a professor of the theory and history of media and culture, with a focus on "Media Culture Studies", and head of the Department Media & Art at the University of Art and Design, Zurich (Hochschule für Gestaltung und Kunst Zürich, HGKZ).

Giaco Schiesser studied philosophy and German literature at the Freie Universität (FU) in Berlin. From 1997 to 2002 he conceived and realized the establishment of the university's new media department, with its focus on digital agency, connective interfaces and collaborative environments, at the University of Art and Design Zurich, serving as head of that department. From 1999 to 2002 he was a member of the department's directorate (together with Knowbotic Research and Margarete Jahrmann).
His work focuses on the culture, aesthetics and Eigensinn of media; on ideology and democracy; and on the constitution of the subject and everyday life.
Giaco Schiesser has lectured as a guest professor in Switzerland, Germany, Austria, the Netherlands, Japan and the U.S.A.

Zurich, July 2004 / October 2005
In memory of Hans-Jürgen Bachorski (1950-2001) [2]

Since to talk about something inevitably means to keep silent about many other subjects, I wish to begin by stating what I will not be talking about.
1. I will not discuss "broad" or "narrow" definitions of art, or indeed propose a normative definition. You will hear nothing about notions of art as a "Gesamtkunstwerk" along the lines first formulated by Richard Wagner, then democratized by Joseph Beuys, and recently updated by artists like Roy Ascott. Nor will you hear anything about Umberto Eco's definition of the "open work", or about notions of art that attempt to establish a work's character as art exclusively on aesthetic grounds, by means of its internal structure or semantic compression and the resultant "surplus value" of a picture, a novel or a film.
2. I will not discuss "broad" or "narrow" definitions of the concept of media, either. That means you will hear nothing about the meaning and implications of definitions that, in line with Herbert Marshall McLuhan, count cars and trains alongside the media of literature, photography and film, or about the even broader concepts that, following Niklas Luhmann, include money and love as media. I will also keep silent about very specific understandings of media such as are the basis, for instance, of Claude E. Shannon's mathematical information models.

However, there are five things I do want to talk about:

1. that which I am attempting to describe with the notion of the "Eigensinn of a medium";

2. the meaning of the terms "art as technique" and "art as method";

3. several historically recurring processes in the emergence of a new medium, and the implications of these processes for the arts;

4. a few conclusions for an art and media education in pace with the times;

5. and finally, the prospective central importance of art and media in what is problematically termed the "information society", the era now underway.

1.    Eigensinn – Meaning and potential of a concept

At a time when the major narratives to which we had bid conclusively farewell have become possible once more, I wish to begin with a small but magnificent story:
"Once upon a time there was a child who was wilful, and would not do as her mother wished. For this reason God had no pleasure in her, and let her become ill, and no doctor could do her any good, and in a short time she lay on her death-bed. When she had been lowered into her grave, and the earth was spread over her, all at once her arm came out again, and stretched upwards, and when they had put it in and spread fresh earth over it, it was all to no purpose, for the arm always came out again. Then the mother herself was obliged to go to the grave, and strike the arm with a rod, and when she had done that, it was drawn in, and then at last the child had rest beneath the ground."
This "tale" (no. 117) is by far the shortest of those included in the 1819 collection of fairytales by the Brothers Grimm. It is entitled Das eigensinnige Kind [3] ("The Wilful Child", Grimm 1884, p. 125).
More than 150 years later, that particular fairytale was the subject of a lucid interpretation in Geschichte und Eigensinn, a book co-authored by the renowned writer, filmmaker and television producer Alexander Kluge and the sociologist Oskar Negt (Kluge/Negt 1981, pp. 765-769). Kluge and Negt worked out the rich lexical substance of the term "Eigensinn" (along with the adjectival noun "Eigensinnigkeit") - a word and motif core existing solely in the German-speaking countries - and made the extended, transformed term the strategic pivot of their individual- and species-historical developmental analysis. They define "Eigensinn" 1) as a focus in which history can be comprehended as the centre of conditions of dialectic gravitation, 2) as a result of dire distress ("bitterer Not"), 3) as a reaction to the duress of a given context, 4) as the protest, condensed in one point, against the expropriation of one's own senses leading to the external world, and 5) as the further working of motifs expelled or retired from society at the place where they have most protection, namely in the subject (see Kluge/Negt 1981, p. 765ff.).
For Negt and Kluge, the Eigensinn of individuals represents an intertwining of two different processes. On the one hand, it is the place of repressed, unlived desires (Ort der verdrängten, nicht gelebten Wünsche) that accumulate in the course of an individual and social life - of something yet to be settled ("ein Unabgegoltenes") which, because it cannot be stifled, insidiously and recurrently makes itself noticed (the hand of the wilful child that repeatedly emerges from the grave after the child's death, because the child finds no rest). On the other hand, Eigensinn is the point of departure of all social and individual processes (Ausgangspunkt aller gesellschaftlichen und individuellen Prozesse): the social starting point for every political and cultural project, the individual starting point for a self-determined life lived according to its own sense (eigen-sinnig). Eigen-Sinn, "own sense, ownership of the five senses, through that capability of perceiving what happens in the world around oneself" (Kluge/Negt 1981, p. 766), is the place which must recurrently be worked out in the course of an individual biography, and from which a life of one's own can and/or must develop under the given conditions of a historical conjuncture. In everyday life, people not only fulfil externally imposed requirements but also pursue their own objectives by evading - sometimes consciously, sometimes unconsciously - those things which they are economically, politically or culturally required to do: with surprising, peculiar ("eigen-artig": of its own kind) and obstinate attitudes they undermine them, ignore them, trample them underfoot, oppose them and cut across them. [4]
The Eigensinn of individuals is best described by this conscious-unconscious, sometimes bizarre and often contradictory will to do that which they want to do, under whatever conditions, by their self-determined actions, their mentalities and their recalcitrance, and by the desires recurrently articulated in a form that goes against the grain. [5]
Due to the semantic richness of the words Eigensinn / Eigensinnigkeit, I have proposed that they be adopted as loan words in English.

2.    Excursus: The two paradigms of the concept Eigensinn / Eigensinnigkeit - superbia vs. productive force
In German, the words Eigensinn / Eigensinnigkeit possess a lexical aura encompassing at least four layers of meaning:

1) in the most current everyday usage, with clearly negative connotations: stubbornness, headstrongness, obstinacy, wilfulness, sometimes madness;

2) the literal meaning: "with the specific sense a person gives to him- or herself and with which he or she interprets/maps their environment";

3) again, literally: with one's own five senses, that is to say with one's own sensibility/sensuality (in German, Sinn, "sense", and Sinnlichkeit, "sensibility", share a common etymological root), with the logic and/or structure according to which a person behaves;

4) as positively connoted attributes, Eigensinn / Eigensinnigkeit mean independence, originality, perseverance, self-confidence, an original way of looking at things.
The subdominant, repressed and suppressed tradition of the conception of Eigensinn as something positive, as a productive force, was disclosed only in the 19th century, with the Grimm brothers' transcription of the tale of The Wilful Child. The conception which appears here deserves to be worked out in more detail - for reasons of space I can only indicate the direction here - by linking it to Sigmund Freud's conceptions of "repression" and "condensation" (Verdrängung / Verdichtung) (Freud 2001), to Jacques Lacan's conception of the "split subject" (gespaltenes Subjekt) (Lacan 1975, 1991), and to Antonio Gramsci's concept of the "bizarre", highly contradictorily composed "everyday mind" (Alltagsverstand) (Gramsci 1970, p. 130f.).
The predominant, opposite tradition, which stresses the negative meaning of the conception of Eigensinn, goes back a long time and can be found very early in the ancient and the German languages. [5a] Augustine's more ambivalent concept of "voluntas propria" (Lat. also "consilium proprium": a person's own will) became a definitely negative concept under the influence of Neoplatonism. From that time on, "voluntas propria" was regarded as the origin of original sin, and the concept became a battle concept (Kampfbegriff) in the fight for the order willed by God. In the mysticism of the late Middle Ages the concept was translated as "eigen meinunge" (a person's own opinion) by Meister Eckhart and Tauler. Luther was the first to translate it as "Eigensinn". For both Luther's Protestantism and the Catholic spirituality of the 16th and 17th centuries (e.g. Ignatius of Loyola, Teresa of Avila), "voluntas propria" marked the totality of individual existence and was therefore to be fought rigorously. Rousseau took up this thread, secularising the term but still using it in a negative way (volonté particulière vs. volonté générale).
This secularised tradition is taken up, transformed and reformulated in the work of G.W.F. Hegel, [5b] which had a great impact on subsequent times. For Hegel, Eigensinnigkeit is a stage of the Unhappy Consciousness of the servant, which has to be sublated (aufgehoben): "Der eigene Sinn ist Eigensinn, eine Freiheit, die noch innerhalb der Knechtschaft stehen bleibt" ("One's own sense is Eigensinn, a freedom which still remains within servitude"). In the framework of her theory of subjection, Judith Butler has recently picked up the negatively connoted concept of Eigensinnigkeit, approving it uncritically, with direct references to Hegel (Butler 1997, chapter 1): "Indeed self-feeling [of the servant, G.S.] refers only and endlessly to itself (a transcendental form of eigensinnigkeit), and so is unable to furnish knowledge of anything other than itself." (Butler 1997, p. 47)
The history and development of the dominant conceptions of Eigensinn / Eigensinnigkeit make clear that the individuality of a person - rooted in his or her sensibility (Sinnlichkeit), in his or her own senses (Sinne), and in his or her own, developed sense (Sinn) of being in the world - was first excluded under the verdict of the pregiven order willed by God, and then subjugated to the majestic dignity of the (world-)mind. End of the excursus.

3.    Eigensinn of the Media as Productive Force
I have proposed that the notions of Eigensinn and Eigensinnigkeit be used in analyzing media and the arts, as well as in producing art (Schiesser 2002, 2003a,b, 2005). In other words: I propose to consider the Eigensinn / Eigensinnigkeit of media as a productive force of its own.
It is this collision of the Eigensinn of the media with the Eigensinnigkeit of their creators that initiates and perpetuates a significant and paradoxical process. The artist is subjected to the Eigensinn inscribed in the media; yet as a creator who is himself eigen-sinnig, the artist also incessantly tries to make the Eigensinn of the media yield to his own will. Art has always derived its subjects, aesthetics and future from this process which, because it cannot be resolved, is interminable. I am proposing, in other words, that we talk about the Eigensinn of the media as a productive force.

Everything we are able to say, apprehend and know about the world is presented, recognized and known with the help of media. Ever since the half-blind Friedrich Nietzsche clear-sightedly found that the typewriter was "also working on our thoughts" (an understatement, from the contemporary stance), or at the latest since Herbert Marshall McLuhan's much-quoted aperçu that "the medium is the message", we have known that media do not merely serve to convey messages but are - somehow - involved in the substance of the message. It is therefore necessary to ascribe to media the power of co-producing, and not just transporting, meaning, if not to join Roman Jakobson in declaring meaning to be a product of the material (sensory) attributes of the medium itself. In other words, media (by which, in the present context, I mean merely those media which have historically earned special significance for art production, that is to say: literature, music, theatre, photography, film, video, television, computer and networks) possess a meaning of their own (einen eigenen Sinn) - Eigensinn. [6]
Talk of the specific Eigensinn of different media initially makes it clear that media and their codification are never neutral tools for transporting ideas, images and sounds, especially when these media and codes are being used for academic or artistic purposes. They are inscribed with material, semantic, syntactic, structural, historical, technological, economic and political Eigensinnigkeiten and their history (one need only think of what we have learned about the Eigensinn of language from writers like Saussure, Nietzsche, Freud, Marshall McLuhan, Lacan and Laclau), of which their users have only partial conscious command. In every contemporary medium being used for artistic purposes, then, its entire cultural history is inscribed, sometimes as "dead labour", sometimes as "living labour" (Alexander Kluge, taking up a notion introduced by Karl Marx). Every medium possesses a specific materiality, specific technological prerequisites, specific structural attributes, different traditions and semantic charges, and requires different techniques and modes of proceeding of which the artist is only partially aware. Therefore, every medium contains different potentialities and boundaries, and is furthermore defined in its type and effect by economic, political and cultural factors. That which is able to be written in a literary work differs from that able to be shown in a film. That which photography records or places in scene is different from that expressed by a piece of music.
Each of these mediums is unique and irreplaceable. In the history of each medium an ongoing repertoire of aesthetics developed, often strictly separated from, or in contradiction to, those of the others. In film, for instance, this repertoire ranges from the silent-film aesthetic of somebody like Georges Méliès, by way of the first and second French avant-garde movements and Italian Neo-Realism, to the contemporary splatter movie. In literature it stretches from the aventiure novel of Walther von der Vogelweide (or, in the Anglo-Saxon context, from Beowulf), by way of Dadaism and the écriture automatique of the Surrealists, to the collaboratively authored Net literature of the present. In music it ranges from medieval pentatonics and Italian opera, by way of twelve-tone music and jazz, up to punk, hip-hop and ambient - to name but a few examples.
Let me specify a few aspects of the Eigensinn of a medium on the basis of three mediums in extensive artistic use: literature, Net art, and painting. The basic material processed by literature is language. Language is a time-based, mono-aesthetic medium. Whatever literature wishes to express must be presented in linear, sequential form. As a general rule, the reader reads literature in the form of a book, linearly, from top left to bottom right, page by page. A very different situation applies in the case of works of Net art: they too are time-based, but they are synaesthetic as opposed to mono-aesthetic, since text, image and sound can be present in equal measure. Second, works of Net art are a polyphonic medium: text, image and sound may also occur simultaneously. And, third, Net artworks are fundamentally non-linear in design. They therefore demand from the spectator what I call "structural interaction", which may differ in quite a number of ways from the "interaction" of somebody who is reading a book or looking at a painting. Imagine, as the third example, that you enter the Louvre armed with a paint pot and brush, place yourself in front of the painting entitled Mona Lisa and attempt actively to alter the painting with your brush and paint. At the very least, you would have to reckon with legal proceedings and a psychiatric assessment.
These examples must suffice to demonstrate the fundamental differences in materiality, authorship, status of the artwork and the necessary behaviour of recipients. In the one case we have individual authorship, a finished work of art, and a recipient who, in order to enjoy the art, must read a book or view a picture. In the other case we often have before us a collective, sometimes collaborative, authorship and a "work in movement" (Umberto Eco), along with, ideally, recipients who - translocally distributed and synaesthetically solicited - must first actively co-create the work of art as actual co-authors, for if they do not act interactively, nothing happens: no work of art comes into being. And the converse holds true: if the artwork comes to a standstill, if nobody is interactively manipulating it, then it might be "completed", but it is dead at the same time.
I must immediately stress that, from the historical perspective, the Eigensinn specific to a particular medium - which was always a central theme of artistic production - has always emerged in a process of dissociation combined with reciprocal influence. The separation of established media from new mediums always entailed the transformation of the former. After the invention of photography, for example, the hitherto important genre of portrait painting receded into the background. Photography was now the medium of portraiture - until, after a renewed transformation, portrait painting became current once more in an innovative form, as for example in Cindy Sherman's untitled photo-portrait series of the 1980s. As a second example I would point to montage, a technique which filmmakers adopted from literature and then further developed, differentiated and transformed in the film medium, producing a process of reciprocal interaction which has endured up to the present day. [7]

4.    Influences | Demarcations | Transformations - On the History of the Media and the Arts
The varied history of the media and the arts makes the following processes more clearly discernible, at least since the invention of photography:
1. Artists working in and with a newly emergent medium must initially have recourse to the established aesthetics and methods of old media. [8] They try out, experiment, and only gradually work out the potentialities of the new medium. In some cases - like literature - the development of adequate, media-authentic aesthetics takes centuries, whereas in other cases - like film - it takes merely a few decades. In the early days of film, for instance, the medium as a matter of course took up established aesthetic elements of literature (such as the narrative structure of the story or the figure of the hero), of theatre (actors, dialogue, set), of dance (choreography, rhythm), and of fine art (panorama, close-up, long shot). [9]
2. "Old", that is to say established, media are plunged into crisis by the emergence of a new medium, and are required to alter their focus and differentiate their strongpoints and unique attributes in a new way within the dispositif of their particular, historically different media and art productions. [10] I have pointed out the altered focuses in the case of portrait painting in the field of fine art. Since the mid-1990s, it has been possible to witness a clear demonstration of the same process in the case of the theatre.
Due to the rise of the new media, the theatre has been in crisis for several years, and has recognized this situation. What answers has it found so far? On the one hand, we have seen the emergence of theatre that radically returns to and brings into focus one of its specific attributes, its physicality (as in the work of the Catalan group La Fura dels Baus, or in contemporary post-dramatic theatre). On the other hand, theatre has emerged that attempts to reflect upon the new media (computer, networks) and to deploy them not merely as tools but as mediums for renewed, transformed theatre forms (for instance, the Japanese group Dumb Type, the Canadian director Robert Lepage, the Swiss director Stefan Pucher, the German playwright Ulrike Syha or, within the last few years, also La Fura dels Baus). [11] In art-historical terms, the alternatives grasped are recurrently either to recall and focus upon a specific attribute of the old medium, or to integrate reflectively the medium which is new at a particular time. Even if the consequences of the two methods differ, both bring about a transformation of the established medium.
3. Once the Eigensinn of a new medium has to some degree been recognized, tried out and developed, the new artistic methods and possibilities have an effect on the old media. Soon after the invention of photography and film, for instance, these media began to exercise a strong influence on literature, and very recently we have been able to witness a similar influence exercised by the new media: the attempt to explode the linearity of the language that has defined the literary work in its four-hundred-year tradition can be traced from Dadaism, by way of the montage novel and écriture automatique, up to Concrete Poetry and the contemporary attempts to make the non-linear link structure fundamental to the Internet useful for printed literature.
4. Hybrid forms emerge that co-exist with the mono-media art forms. Historical examples would be ready-mades, experimental films, Happenings, art interviews, film essays, video installations.

5.    "Art as Technique" | "Art as Method"

As I will demonstrate, art as technique and art as method are two different aspects of one and the same process. I will begin with "art as technique".
It would be possible to connect the following considerations to current art discourse by referring to somebody like the French philosopher Jacques Rancière, a recognized authority on literature and film, who articulated his view of art as follows: "Like knowledge, art (...) creates fictions, i.e. material redistributions of signs and images of the relationships between what one sees and what one says, and also between what one does and what one can do" (cited after David 2001, p. 195). Or by referring to Jean-François Lyotard's thesis that the work of art "tries to present the fact that there is an unpresentable" (Lyotard 1984, p. 101). This attempt - the ultimate driving force in art - is a "task of derealization" (Lyotard 1986, p. 79) of the images, the representations, the ordering frame of reference. However, I wish to go back further in time and deploy the historical formula of "art as technique", which is rhizomatically linked to the analyses of Rancière and Lyotard. The hugely influential notion of "art as technique" dates back to the Russian literary theorist Viktor Shklovsky. In his 1916 essay "Art as Technique" (Shklovsky 1994) [12] he attempted to comprehend the objective of art, and in particular the objective of the image, while at the same time establishing a clear distinction from the aesthetic of mimesis predominant at the time of writing.
"'If the whole complex lives of many people go on unconsciously, then such lives are as if they had never been.' And art exists that one may recover the sensation of life; it exists to make one feel things [and not, as in science, to recognize them, G.S.], to make the stone stony. The purpose of art is to impart the sensation of things as they are perceived and not as they are known. The technique of art is to make objects 'unfamiliar' [ostranenie: making strange, G.S.], [13] to make forms difficult, to increase the difficulty and length of perception because the process of perception is an aesthetic end in itself and must be prolonged. Art is a way of experiencing the artfulness of an object; the object is not important." (Shklovsky 1994, emphasis G.S.)
Shklovsky is essentially concerned with two things. First, by means of abbreviated (stunted), automated perceptions - "habitual associations" (Brecht) - people rapidly and transiently reduce the wealth of objects and facts in their everyday lives to recognizable schemata (cf. Shklovsky 1994). Art, by contrast, destroys these automatic mechanisms. By various techniques, objects and circumstances are abruptly severed from their customary associations, decontextualized, "made strange", so that the process of perception is prolonged and/or made more difficult, and the object is not merely recognized, but "felt" and, as if for the first time, "seen". The core concept in Shklovsky's considerations is that of the necessity to break through the "automatism of perception" by "various means" (Shklovsky 1994).
The technique of art stressed by Shklovsky has consequences for the aesthetics both of production and of reception; in the present context, the production-aesthetic consequences are especially interesting: if the "making of a thing itself" and the "form made difficult", that is to say the "making strange" by "various means", become the central focus of art, then the question of the medium, of its Eigensinn, is immediately on the agenda: of the undiscovered possibilities and obstinacies sketched out above. For the "form made difficult" and the "various means" are directly dependent on the materiality, structure, and technology specific to the chosen medium.

On the second aspect: art as method.
Art as method means to place the experimental in the foreground. But in contrast to the natural sciences, in which falsification and verifiability are the decisive criteria leading to proofs and verifiable results, the ultimate target towards which artistic practice is oriented is not the fixation on results but the process-based character of creative activity. Artistic experimentation is concerned explicitly with the "conditions of what is possible" (Philippe Lacoue-Labarthe, cit. after David 2001, p. 185), not with the foundations of the feasible. As a procedure of artistic practice, experimentation means to develop strategies of innovation. This, however, presupposes something that might be described as an attitude of inner productivity. This attitude - which any academic media and art education must play an essential role in conveying to its students - is expressed in curiosity, willingness to take risks and refusal to compromise with regard to one's own subjects and interests and with regard to the work on and with the Eigensinn of the media. Admittedly, it is possible to reflect theoretically upon the possibilities of a specific medium and also, in the case of media with histories as long as those of literature, theatre, dance and music, to define them more precisely by analysis. However, in order to investigate, try out, test to the limit and transform a medium, in order to undermine it, hybridize it, go against its grain, in order to make it sensorially experienceable as an artefact, it is necessary to practice art on and with the particular medium.
Let me illustrate the above with two examples from film history. In the 1960s, the filmmaker Jean-Luc Godard withdrew to Lyon and for several years (as a member of the Groupe Dziga Vertov) was almost exclusively preoccupied with video, at that time a new and exciting medium. The result was a series of videos (Six fois deux, British Sounds, Pravda et al.) in which the video medium is investigated experimentally, and new contents, new techniques, new methods and modes of perception are tried out. Finally, Godard put the experience thus gained to use in film by integrating the investigated formats, methods and findings (non-linear dramatic structure, the splitting up of one large screen into several small ones, the aesthetic of the image) into films such as his Numéro 2 (1975), and so expanded the possibilities of film by transforming the medium. Or take the writer, filmmaker and TV maker Alexander Kluge, who attempts to make the television medium go against the grain, to wrest new possibilities from it, and in this way to enable viewers to have new experiences, experiences that simultaneously presuppose and promote intense sensorial activity on the part of the audience (in broadcasts such as News and Stories, 10 vor 11, Bekanntmachung! et al.). Kluge accomplishes this by using a number of different aesthetic procedures, techniques and structural elements adopted from the rich history of film, music and literature and adapted for television: minute-long close-ups, original sound, slowness, inserted text panels, or the mounting of "classical lenses" on electronic cameras. "We use," states Kluge, "a Debrie camera from 1923, for instance, and program the electronic computers to obey the rules that long-dead cameramen fed to this Debrie camera. In this way, we recall a piece of dead work from film history, and program it into the broadcast." (Cit. after Schiesser/Deuber 2000, p. 363f.) [14]

6.    A Media and Art Education in Pace with the Times
The Eigensinn of the mediums, art as technique / art as method - these are the focal themes on which I trained my sights in the foregoing. I chose these aspects of the wide field of "media and art" because I consider them to be the strategic factors, or problematics, in a model of media and art education on a level with the times. Individual, collective and collaborative authorship is the third, equivalent factor that joins the two already stated. What would the Eigensinn of the mediums amount to without the Eigensinnigkeit and the Scharfsinnigkeit (the acumen) of artistic authorship!
A media and art education in pace with the times - an education thought out in terms of the future while at the same time taking seriously and working through traditional experience - will place territories of experimentation at the disposal of students. In these territories students will be expected and encouraged to carry out curious, radical and uncompromising work - both individually and collectively, and eigensinnig at all events - on self-chosen or biographically inscribed interests, contents and subjects, as well as on and with the Eigensinn of various single and hybrid mediums.
Today, transmedia education is part of media training. Transmedia education means that students are empowered to work in and with one medium, and at the same time to learn how to devise and make artistic use of the interfaces to other media. In a media- and technology-based age like the post-industrial present, authorship means not only individual or collective authorship, to which everybody contributes his or her specific components, but collaborative authorship, in which everyone is capable of networking his or her specific skills with those of the others, and over and over again emerges from this process fundamentally transformed. Alongside the development of social, communicative and, in increasing measure, analytic competence, however, this requires in-depth knowledge of one's own medium and knowledge of the other media. The significance of an education that intensifies this mindfulness of the nature of media and simultaneously encourages transmedia networking - an education that must inevitably extend beyond the subjects offered by an art and media academy - lies in the fact that it enables students to make their way as artists on a level with their times, or as the flexible and versatile media authors increasingly and urgently required by the "information society". In either case, they will be capable, as individuals and as members of a team, of assuming responsibility for content, conception, implementation, production processes and budgeting.
If it is true that a new medium exercises a dual influence on old media insofar as it forces the latter to reassess their possibilities in the light of new conditions, and at the same time transforms them, then an important challenge and opportunity for media and art education lies also, and particularly, in the enabling and furtherance of hybrid or crossover artworks, be they interactive audio installations, video essays, media architecture, transmedia interfaces in urban spaces, DJ events, digital poetry, new aesthetics of the performative, SMS visuals for clubs and parties, intercity streams of DJ events, Net TV, cultural software, radio concerts for mobile phones - or, or, or. Transmedia or hybrid art demands - and in the mid-term this is the central challenge for art education - the working out, communication and use of a series of complex specialist areas such as neurophysiology, the cognitive sciences, architecture, nanotechnology, information theory, aesthetics, cognitive and perception theory, and the life sciences. At present, these subjects are taught not at one but at several different universities - a situation essentially due to the striking leap forward taken by the media as a result of digitalization, even if they had become increasingly technology-based from the invention of photography onward. Thus, for art too, the dispositif has changed fundamentally and dramatically. [15] Some years ago, Hans-Peter Schwarz, the former director of the Media Museum at ZKM Karlsruhe, published a richly informative article in which he reconstructed the changing history of the various arts and of technology since the eighteenth century, and established the inescapable significance of technologies for contemporary and future media arts (Schwarz 1997, p. 11ff.).
The linkage of the arts, technologies and sciences - a linkage that during the brief, historic epoch of the Renaissance took place as a matter of course - has today undeniably become a prerequisite for future art and media work, and for that reason also for adequate training in that field.

7.    Art Subjects | Immaterial Labour | Post-Postmodernism

"Postmodernism", "Hi-Tech Capitalism", "Postfordism", "Information Age", "Cyber Society", "Network Society" or even "Post-Information Age" - however probing, boldly assertive or normatively defining these current concepts variously are, and however divergent their implications, they all point to the fact that a transition is taking place from one era to another. Among all the differing viewpoints in the specialist literature, there is agreement on one point: that digitalization and the concomitant computerization and networking will fundamentally change all areas of society, politics, economics and culture, and in part have done so already. At the same time, it is becoming increasingly clear that what for some time now has been discussed under the rubric "immaterial labour" is gaining strategic importance. (See Negri et al. 1998 for an introduction.) Whereas "Fordism" or "industrial society" required working subjects who, in line with the principle of the division of labour and integrated in a regular working day within a system of vertical hierarchy, went about their specific duties and clearly separated their working time from their leisure time, Postfordism demands working subjects of a wholly new order. At least in the fields in which graduates of an art and media academy will work, Postfordism already requires extremely creative subjects: subjects who are active, have multifarious interests, are "rich in knowledge" (as the Italian social theorist and post-operaist Toni Negri puts it), and preferably can demonstrate "hybrid CVs" (as Josef Brauner, former CEO of Sony Deutschland, put it as early as the mid-1990s).
Or, in the words of Maurizio Lazzarato, a leading theorist of "immaterial labour": subjects who are capable of combining "intellectual capabilities, craft skills, creativity, imagination, technical expertise and manual dexterity," of making "entrepreneurial decisions, of intervening within the framework of the social conditions, and of organizing social co-operation" (Lazzarato 1998, p. 46) - in other words, subjects who have taken to heart the principle of art as method.
That the above does not automatically lead to an affirmation of the social status quo, as some of you may fear and others may hope, becomes clear if you remind yourself that the critique of society, not to mention its transformation, never originates from a single location (there is no Archimedean point), but takes place in several places simultaneously. It needs artists who, with their aesthetic works, their sensory artefacts, offer us new modes of perceiving and thinking and new models of experience, and place in our hands new instruments for drawing up maps and navigating. And in equal measure it needs media workers who - because they have developed their powers of authorship and throughout their studies battled with the Eigensinn of one or more mediums - as filmmakers are capable of making better television than the programmes we see every day, as photographers are capable of deploying their medium in innovative fashion in newspapers, magazines, books and advertising, or as new-media specialists are capable of trying out and implementing cultures of play beyond conventional shooter games, as well as new learning environments or the machinic platforms whose potential has so far hardly been fathomed.
The obstinate, wilful (eigensinnigen) members of society will perhaps not thank the graduates or the art colleges, but they will certainly need artists and media products of this kind, and will know how to use them for the greatest of all the arts: the art of their own life.

1) This article was revised and enlarged for the English translation. The original version dates back to 2003, when a first, abridged version was published in the catalogue of the Ars Electronica Festival Linz 2003: "Medien | Kunst | Ausbildung. Arbeit am und mit dem Eigensinn. Das Departement Neue Medien an der Hochschule für Gestaltung und Kunst Zürich", in Code - The Language of Our Time. Code = Law, Code = Art, Code = Life, Ars Electronica 2003, German/English, ed. Gerfried Stocker and Christine Schöpf, Ostfildern: Hatje Cantz 2003, pp. 368-370 (English), pp. 371-373 (German).
The present English version was then revised and enlarged again for a new German publication, which appeared in full in October 2005: "Medien | Kunst | Ausbildung - Über den Eigensinn als künstlerische Produktivkraft", in Schnittstellen, ed. Sigrid Schade, Thomas Sieber and Georg Christoph Tholen (= Basler Beiträge zur Medienwissenschaft, vol. 1), Basel: Schwabe 2005.
My thanks go to Matthew Fuller and the Piet Zwart Institute of the Willem de Kooning Academy, Rotterdam, for making possible this publication in English.
2) A scholar of Old German literature based in Berlin and Potsdam, with whom I started thinking and talking about the problematic of the Eigensinn of man and of the media in the 1980s.
3) The celebrated Brothers Grimm (Jakob and Wilhelm) collected a wide range of German fairy tales in the early nineteenth century, and published them under the title of "Grimms Märchen". The collection immediately became famous, and has since been a standard on the bookshelves of every German-speaking household. Just as most British children will have heard episodes from "Alice in Wonderland" over and over again, children in Germany, Switzerland and Austria are familiar with "Grimms' Fairy Tales".
4) Eigensinn / Eigensinnigkeit is one focus of the whole work of Alexander Kluge, if not the main one. See e.g. his early works Lebensläufe (Kluge 1962) and Der Luftangriff auf Halberstadt (Kluge 1977), as well as his recent works Chronik der Gefühle (Kluge 2000) and Die Lücke, die der Teufel hinterlässt (Kluge 2003).
5) The concept of the Eigene (one's own) and its compounds is mostly understood - even by Negt/Kluge - in an essentialist way. In this understanding, Eigensinn becomes the Archimedean point of the unquestionable authenticity of individuality. I propose to think of the notion of the Eigene and its compounds in a non-essentialist way: the Eigene and the Eigensinnigkeit of a person are effects of conscious and unconscious agencies and experiences. A person has to work through his or her agencies and experiences again and again, and has to construct and organize his or her Eigensinn anew, over and over - in the sense of Michel Foucault's "aesthetics of existence".
5a) For this point, and for the overview up to Rousseau, see Fuchs 1972; additions by the author, GS.
5b) See especially the Chapter "Lordship/Mastery and Bondage/Servitude" in his "Phenomenology of Spirit"; Hegel 1977, pp. 178ff.
6) Sibylle Krämer gives an impressive analysis of these matters in Das Medium als Spur und Apparat (Krämer 2000). In opposition to Marshall McLuhan ("the medium is the message") and to positions drawing on Niklas Luhmann ("the medium is nothing, it does not inform, it contains nothing"), she argues that "the medium is not simply the message; rather the message keeps the trace (die Spur) of the medium" (Krämer 2000, p. 81, my translation). This trace, which in everyday life we perceive only in cases of disturbance, is a crucial part of every artistic production - a fact of which, amazingly, Krämer is not aware.
A thoroughgoing theoretical connection of the conception of the Eigensinnigkeit of media (rooted in the framework of cultural studies, media analysis and discourse analysis) with the conception of the trace (rooted in linguistics and psychoanalysis) as a "present absence" in the sense of Derrida has yet to be accomplished.
7) See, in terms of literature, the work of writers as dissimilar as Alfred Döblin, John Dos Passos, Alfred Andersch and Alexander Kluge, as well as the books of Marshall McLuhan, which may well be regarded as literature; and, in terms of filmmakers, for instance the work of Sergei M. Eisenstein, Dziga Vertov, Jean-Luc Godard or Alexander Kluge.
8) An example that speaks for itself is the title of Walter Ruttmann's influential 1919 article Malerei mit Zeit (Painting with Time), in which he tried to capture the novelty of the new art form of film in a striking formula (see Goergen 1989, p. 74).
9) Photography furnishes a further example. "In early photography, the shots were often composed like paintings (...); the 'random' appearance of the snapshot, the caught moment, were not yet used." (Bell 2001, p. 116).
10) Impressive evidence for this thesis is provided by the catalogue Autour du Symbolisme (2004), in which the interplay between painting and photography in the early days of photography is worked out in detail. This interplay extends from the legendary reaction of the painter Paul Delaroche on first seeing photography - "La peinture est morte" - to the poignant similarity between Gustave Courbet's L'Origine du monde and the stereoscopic photography of Auguste Belloc.
Furthermore, every given historical cycle is characterized by articulation through media with a dominant factor or dominant factors. At present, television remains the dominant factor.
11) La Fura dels Baus have started to discover the net as a new platform for their interactive street theatre. See e.g. their interactive audio-net project F@ust 0.3 of 1998. (Further information and links concerning this project can be found online.) The example of La Fura dels Baus shows that realising the two possibilities of dealing with a crisis of an art medium does not mean an either/or: both possibilities may be chosen by the same authors.
12) Shklovsky, Viktor Borisovic, "Art as Technique" in Russian Formalist Criticism: Four Essays, ed. Lee T. Lemon and Marion J. Reis, Lincoln: University of Nebraska Press, 1965, pp. 3-24.
13) Here I follow Renate Lachmann's rendering of the Russian term "ostranie" as "making strange". See Lachmann 1970, pp. 226-249.
14) The history of film, like that of all technology-based mediums, is rich in artists who worked not only on but explicitly with the Eigensinn of the medium.
Just some of the many other deserving names not mentioned so far are, with respect to film: Georges Méliès, the filmmakers of the first and second French avant-garde (such as Germaine Dulac and Élie Faure), the exponents of "absolute film" (Walter Ruttmann, Viking Eggeling, Hans Richter), Guy Debord, the "documentary filmmaker" Chris Marker, and Stan Brakhage, the American filmmaker who died in 2003. Concerning music, recall, among others, artists as different as Kurt Weill with his "Absolute Music", John Cage, Frank Zappa, Prince, Eugene Chadbourne or Fred Frith; for literature, e.g. James Joyce, the Dadaists, the exponents of concrete poetry, Arno Schmidt, William Burroughs or Thomas Pynchon; for video art, Nam June Paik, Isidore Isou or Karl Gerstner, to mention just some of the first generation; for computers and networks as art mediums, among others, Jodi, I/O/D, Margarete Jahrmann, Knowbotic Research or the Chaos Computer Club.
Television is the only medium that has hardly ever become an art format. "Television is indeed the most hopeless medium of all for the arts. (…) There was scarcely a phase, when everything was open, allowing creative investigation to define the medium." (Daniels 2004, p. 58). In spite of the experiments of Otto Piene / Aldo Tambellini, Gerry Schum, Peter Weibel, Valie Export and the WGBH-TV station in Boston, it remains a "medium without art" (ibid., p. 59) - with the exception of music video clips, which, however, were developed for other purposes.
An impressive and richly documented insight into the development of the tight interplay between media and the arts, from the invention of photography in 1839 to the present, is given by the German-English omnibus volume Frieling/Daniels 2004.
15) Here it is necessary to recall something "remaining to be settled" ("ein Unabgegoltenes") in "material aesthetics", which strongly emphasized "art as a specific mode of production" and, in doing so, simultaneously referred art to the fact that it depends on the general development of the productive forces and must reflect upon these for the sake of its own development. A comprehensive insight into the history and projects of material aesthetics is offered by Mittenzwei 1977, pp. 695-730.

- 1460 Antworten auf die Frage: Was ist Kunst?, hrsg. v. Andreas Mäckler, Köln 2000.
- Autour du Symbolisme. Photographie et peinture au XIXe siècle, Bruxelles: Palais des Beaux-Arts 2004.
- Bell, Julian, What is Painting? Representation and Modern Art, London, 1999.
- Brauner, Joseph / Bickmann, Roland, Die multimediale Gesellschaft, Frankfurt a.M. 1994.
- Butler, Judith, The Psychic Life of Power. Theories in Subjection, Stanford: Stanford University Press, 1997.
- Daniels, Dieter, »Television – Art or Anti-Art? Conflict and cooperation between the avant-garde and the mass media in the 1960s and 1970s«, in: Frieling/Daniels 2004, pp. 58 – 79.
- David, Catherine, "Kunst und Arbeit im Informationszeitalter", in Daniel Libeskind et al., Alles Kunst? Wie arbeitet der Mensch im neuen Jahrtausend, und was tut er in der übrigen Zeit?, Reinbek 2001, pp. 183-200.
- Freud, Sigmund, Über den Traum (1901), in id., Gesammelte Werke. 18 Bde., Frankfurt 2001.
- Frieling, Rudolf / Daniels, Dieter, (Hrsg.), Medien – Kunst – Netz, Bd. 1: Medienkunst im Überblick / Media – Art –Net, vol. 1: Survey of Media Art, Wien /New York: Springer 2004.
- Fuchs, H.-J., "Eigenwille, Eigensinn", in Historisches Wörterbuch der Philosophie, Bd. 2: D-F, ed. Joachim Ritter, Basel/Stuttgart: Schwabe 1972, pp. 342-345.
- Goergen, Jeanpaul, (Hrsg.), Walter Ruttmann, eine Dokumentation, Berlin 1989.
- Gramsci, Antonio: Philosophie der Praxis, ed. by H. Riechers, Frankfurt 1967.
- Grimm, Brothers: see "The Wilful Child".
- Hegel, Georg Wilhelm Friedrich, Phenomenology of Spirit, transl. A.V. Miller, Oxford: Oxford University Press, 1977.
- Kafka, Franz, "A Report for an Academy", in id., Metamorphosis and Other Stories, New York, 1966.
- Kluge, Alexander / Negt, Oskar, "Antigone und das eigensinnige Kind", in id., Geschichte und Eigensinn, Frankfurt a.M. 1981, pp. 765-769.
- Kluge, Alexander, Lebensläufe, Stuttgart 1962.
- Kluge, Alexander, "Der Luftangriff auf Halberstadt am 8. April 1945", in id., Neue Geschichten. Hefte 1-18. »Unheimlichkeit der Zeit«, Frankfurt a.M.: Suhrkamp 1977, pp. 33-106.
- Kluge, Alexander, Chronik der Gefühle, 2 vol., Frankfurt: Suhrkamp 2000.
- Kluge, Alexander, Die Lücke, die der Teufel hinterlässt. Im Umfeld des neuen Jahrhunderts, Frankfurt: Suhrkamp 2003.
- Krämer, Sibylle, "Das Medium als Spur und Apparat", in id. (ed.), Medien, Computer, Realität. Wirklichkeitsvorstellungen und Neue Medien, Frankfurt a.M.: Suhrkamp 2000.
- Lacan, Jacques, "Das Spiegelstadium als Bildner der Ich-Funktion, wie sie uns in der psychoanalytischen Erfahrung erscheint", in id., Schriften, Bd. 1, Baden-Baden 1975, pp. 61-70.
- Lacan, Jacques, Le Séminaire, Livre XVII: L'envers de la psychanalyse, Paris 1991.
- Lazzarato, Maurizio, "Immaterielle Arbeit. Gesellschaftliche Tätigkeiten unter den Bedingungen des Postfordismus", in Negri, Toni et al., Umherschweifende Produzenten. Immaterielle Arbeit und Subversion, Berlin 1998, pp. 39-52.
- Lachmann, Renate, "Die 'Verfremdung' und das 'Neue Sehen' bei Viktor Sklovskij", in Poetica, Bd. 3, H. 1-2, 1970, pp. 226-249.
- Lyotard, Jean-François, "The Sublime and the Avant-Garde", in id., The Inhuman: Reflections on Time, Cambridge: Polity, 1991, p. 89ff.
- Lyotard, Jean-François, "Answering the Question: What is Postmodernism?", in id., The Postmodern Condition: A Report on Knowledge, trans. G. Bennington and B. Massumi, Manchester 1984, p. 71f.
- Mittenzwei, Werner, "Brecht und die Schicksale der Materialästhetik", in Wer war Brecht. Wandlung und Entwicklung der Ansichten über Brecht im Spiegel von Sinn und Form, hrsg. und eingeleitet von Werner Mittenzwei, Berlin 1977, pp. 695-730.
- Schiesser, Giaco / Deuber, Astrid, "In der Echtzeit der Gefühle. Gespräch mit Alexander Kluge", in Die Schrift an der Wand. Alexander Kluge: Rohstoffe und Materialien, hrsg. v. Christian Schulte, Osnabrück 2000, pp. 361-370.
- Schiesser, Giaco, " Connectivity, Heterogeneity and Distortions - Productive Forces for our Times. The xxxxx connective force attack: open way to the public project of Knowbotic Research +cf (KRcF)", in Aussendienst. Kunstprojekte in öffentlichen Räumen Hamburgs, German/English, ed. Achim Könneke and Stephan Schmidt-Wulffen, Freiburg 2002, pp. 233-237.
- Schiesser, Giaco, "The wilful obstinacy of man - the wilful obstinacy of machines. An Introduction", in, Jahrmann, Margarete / Moswitzer, Max, Nybble Engine. A Nybble is Four Bits or Half of a Byte, Storage DVD, Wien 2003a.
- Schiesser, Giaco, "Media | Art | Education - Working on and with Eigensinn", in Code - The Language of Our Time. Code = Law, Code = Art, Code = Life, Ars Electronica 2003, German/English, ed. Gerfried Stocker and Christine Schöpf, Ostfildern: Hatje Cantz 2003b, pp. 368-370.
- Schiesser, Giaco, „Medien | Kunst | Ausbildung – Über den Eigensinn als künstlerische Produktivkraft“, in, Schnittstellen, ed. by Sigrid Schade, Thomas Sieber, Georg Christoph Tholen. (= Basler Beiträge zur Medienwissenschaft. Bd. 1). Basel: Schwabe 2005.
- Schwarz, Hans-Peter, "Medien - Kunst - Geschichte", in, Medien - Kunst - Geschichte, hrsg. v. Hans-Peter Schwarz / ZKM Karlsruhe, München / New York 1997, pp. 11-88.
- Shklovsky, Viktor Borisovic, "Art as Technique", in, Russian Formalist Criticism: Four Essays, ed. Lee T. Lemon and Marion J. Reis, Lincoln: University of Nebraska Press, 1965, pp. 3-24.
- "The Wilful Child", in, Jakob and Wilhelm Grimm, Household Tales, trans. Margaret Hunt, London 1884, vol. 2, p. 125.


New York Prophecies by Richard Barbrook

'Biological intelligence is fixed, because it is an old, mature paradigm, but the new paradigm of non-biological computation and intelligence is growing exponentially. The crossover will be in the 2020s and after that, at least from a hardware perspective, non-biological computation will dominate...'

At the beginning of the 21st century, the dream of artificial intelligence is deeply embedded within the modern imagination. From childhood onwards, people in the developed world are told that computers will one day be able to reason - and even feel emotions - just like humans. In science fiction stories, artificial intelligences have long been favourite characters. Audiences have grown up with images of loyal robot buddies like Data in Star Trek TNG and of pitiless machine monsters like the cyborg in The Terminator. These science fiction fantasies are encouraged by confident predictions from prominent computer scientists. Continual improvements in hardware and software will eventually lead to the creation of artificial intelligences more powerful than the human mind. Commercial developers are looking forward to selling sentient machines which can do the housework and help the elderly. Some computer scientists even believe that the invention of artificial intelligence is a spiritual quest. In California, Ray Kurzweil and his colleagues are eagerly waiting for the Singularity: the First Coming of the Silicon Messiah. Whether inspired by money or mysticism, all these advocates of artificial intelligence share the conviction that they know the future of computing - and their task is to get there as fast as possible.

Despite its cultural prominence, the meme of sentient machines is vulnerable to theoretical exorcism. Far from being a free-floating signifier, this prophecy is deeply rooted in time and space. Not surprisingly, contemporary boosters of artificial intelligence rarely acknowledge the antiquity of the concept itself. They want to move forwards, not look backwards. Yet it is over forty years since the dream of thinking machines first gripped the American public's imagination. The future of computing has a long history. Analysing this original version of the prophecy of artificial intelligence is the precondition for understanding its contemporary variants. With this motivation in mind, let's go back to the second decade of the Cold War, when the world's biggest computer company put on a show about the wonders of thinking machines in the financial capital of the most powerful and wealthiest country on the planet...

A Millennium Of Progress

On the 22nd April 1964, the New York World's Fair was opened to the general public. During the next two years, this modern wonderland welcomed over 51 million visitors. Every section of the American elite was represented at the exposition: the federal government, US state governments, large corporations, financial institutions, industry lobbies and religious groups. The World's Fair proved that the USA was the leader in everything: consumer goods, democratic politics, show business, modernist architecture, fine art, religious tolerance, domestic living and, above all else, new technology. A 'millennium of progress' had culminated in the American century.

Not surprisingly, this fusion of hucksterism and patriotism was most pronounced among the pavilions of big business. Pepsi hired Disney to build a theme-park ride. The U.S. Rubber Company built a Pop Art big wheel in the shape of 'a giant whitewall tire'. Although they were very popular, these exhibits never became the stars of the show. What really impressed the millions of visitors to the exposition were the awe-inspiring displays of new technologies. Writers and film-makers had long fantasised about travelling to other worlds. Now, in NASA's Space Park, visitors could admire the huge rockets which had taken the first Americans into earth orbit. Ever since the Russians launched the Sputnik satellite in 1957, the two superpowers had been engaged in the 'space race': a competition to prove technological supremacy by carrying out spectacular feats outside the earth's atmosphere. By the time that the first visitors arrived in NASA's Space Park, America was on the verge of overtaking its rival. Despite its early setbacks, the USA was still Number One.

The corporate exhibitors also promised that the technological achievements of the present would soon be surpassed by the triumphs of tomorrow. General Motors' Futurama looked forward to a world of giant skyscrapers, underwater settlements and, best of all, holiday resorts on the moon. At its Progressland pavilion, General Electric predicted that electricity generated by nuclear fusion would be 'too cheap to meter'. For many corporations, the most effective method of proving their technological modernity was showcasing a computer. While most of the mainframes at the World's Fair were used as hi-tech gimmicks, IBM dedicated its pavilion exclusively to the wonders of computing as a distinct technology. For over a decade, this corporation had been America's leading mainframe manufacturer. In 1961, one single product - the IBM 1401 - had accounted for a quarter of all the computers operating in the USA. In the minds of most visitors, IBM was computing.

Just before the opening of the World's Fair, the corporation launched a series of products which would maintain its dominance over the industry for another two decades: the System/360. Seizing the opportunity for self-promotion offered by the exposition, the bosses of IBM commissioned a pavilion designed to eclipse all others. Eero Saarinen - the renowned Finnish architect - supervised the construction of the building: a white, corporate-logo-embossed, egg-shaped theatre which was suspended high in the air by 45 rust-coloured metal trees. Underneath this striking feature were interactive exhibits celebrating IBM's contribution to the computer industry. For the theatre itself, Charles and Ray Eames - the couple who epitomised American modernist design - created the main attraction at the IBM pavilion: 'The Information Machine'. After taking their places in the 500-seat 'People Wall', visitors were elevated upwards into the egg-shaped structure. Once inside, a narrator introduced a 'mind-blowing' multi-media show about how the mainframes exhibited in the IBM pavilion were forerunners of the sentient machines of the future. Computers were in the process of acquiring consciousness: artificial intelligence.

For over a decade, prominent computer scientists in the USA had been convinced that machines would sooner or later become indistinguishable from humans. Language was a set of rules which could be codified as software. Learning from new experiences could be programmed into computers. With the launch of the System/360 series, mainframes were now powerful enough to construct the prototypes of thinking machines. At the 1964 World's Fair, IBM proudly announced that the dream of artificial intelligence was about to be realised. In the near future, every American would have their own Robby the Robot.

'Duplicating the problem-solving and information-handling capabilities of the [human] brain is not far off; it would be surprising if it were not accomplished within the next decade.'

The IBM pavilion's stunning combination of avant-garde architecture and multi-media performance was a huge hit with both the press and the public. Alongside space rockets and nuclear reactors, the computer had confirmed its place as one of the three iconic technologies of modern America. Most visitors to the New York World's Fair understood the ideological message of the machines on display: the present was the future in embryo. Within the IBM pavilion, computers existed in two time frames at once. On the one hand, the current models on display were prototypes of the sentient machines of the future. On the other hand, the dream of artificial intelligence showed the true potential of the mainframes exhibited in the IBM pavilion. At the New York World's Fair, new technology was displayed as the fulfilment of science fiction fantasy: the imaginary future.

Exhibiting New Technology

When the New York World's Fair opened, Americans had good reasons for feeling optimistic about their prospects. During the previous fifty years, their nation had out-fought, out-produced and out-smarted all of its imperial rivals. By 1964, the USA had become an economic and military superpower without comparison. Above all, America was the global leader in the three most important new technologies: space rockets, nuclear reactors and mainframe computers.

The New York World's Fair demonstrated that the USA not only owned the future, but also the past. For over a century, cities across the world had been organising international expositions. Some were little more than glorified trade fairs. Others had been major cultural events. What united all of them was their common inspiration: the 1851 Great Exhibition of the Works of Industry of All Nations. Flush with the wealth and power which flowed from owning the 'workshop of the world', the British elite had organised an international celebration of the wonders of economic progress. The Crystal Palace - a futuristic iron and glass building - was erected in a central London park. During its six months of operation, around one-fifth of the entire British population went to see the Great Exhibition. Once there, visitors were treated to a dazzling display of new products from the factories and exotic imports from the colonies. For most visitors, the stars of the show were the machines which were powering the world's first industrial revolution: cotton looms, telegraphy systems, farm equipment, rotary printing presses and, best of all, steam engines. The message of the technology exhibits was clear. Britain was the richest and most powerful nation on the planet because the British invented the best machines.

The promoters of the 1851 Great Exhibition declared that their event would give '...coherence to the idea of liberalism.' By wandering around the Crystal Palace, visitors would learn to admire the achievements of British industry. The layout of the exhibits of raw materials, machinery and finished goods was designed to give an overview of the manufacturing process. Despite this pedagogical intent, the displays at the Great Exhibition systematically ignored the lives of the people who had created the products on show. The silk dresses betrayed no traces of the horrors of the sweatshops where they were made. The glassware from Ireland contained no reminders of the terrible famine which had recently devastated the country. Public display was - paradoxically - the most effective method of social concealment: 'World exhibitions were places of pilgrimage to the fetish Commodity.'

Although the Crystal Palace was filled with manufactured goods, none of them were directly on sale to the general public. Commodities became more than just commodities when on show at the Great Exhibition. With their labour hidden and their price irrelevant, the symbolic role of industrial products took centre stage. The commodity was transformed into an artwork. Use value and exchange value had been temporarily superseded by a more esoteric social phenomenon: exhibition value.

Within the space of the Crystal Palace, new technologies easily won the competition for public attention. Yet, the organisers of the Great Exhibition had originally envisaged a very different focus for their event: the promotion of high-quality British design. When the Crystal Palace was laid out, the prime location in the middle of the main hall was allocated to an exhibit of Gothic Revival furniture and religious items. Although inspired by English patriotism, this faux-medieval look deliberately avoided any aesthetic affinity with the foundations of the nation's domination over the world: the industrial revolution. Crucially, this retro-style also shaped the politics of Victorian England. The ruling elite took delight in disguising their hi-tech commercial republic as a romantic medieval monarchy. In the most modern nation in the world, the latest industrial innovation masqueraded as an archaic feudal custom: the invented tradition.

'[England's] essence is strong with the strength of modern simplicity; its exterior is august with the Gothic grandeur of a more imposing age.'

For its organisers, the Great Exhibition's primary purpose was the promotion of aesthetic nostalgia. Like the railway stations of Victorian England, new products in the Crystal Palace were supposed to be disguised as ancient artefacts. Yet, despite the best efforts of the organisers, it was the machinery hall which became the most popular section of the Crystal Palace. Gothic Revival furniture couldn't match the emotional impact of the noise and energy of working steam engines. More importantly, the machinery hall proudly celebrated the new technologies which had turned England into an economic and military superpower. Instead of disguising innovations as antiquities, the present was identified with better times to come. Invented tradition had lost out to the imaginary future.

Inside the Crystal Palace, new technology became the icon of modernity. Separated twice from its origins in human labour - first through the market and then through the exposition - machinery was materialised ideology. Since the moment of production had disappeared from view, the specific ideology materialised in new technology was open to interpretation. Both bourgeois liberals and working class socialists found confirmation of their political beliefs in the steam engines of the Great Exhibition. Despite their deep differences about the ideological meaning of new technologies, the two sides agreed on one thing: defining the symbolism of machinery meant owning the imaginary future.

This political imperative also provided the impetus behind the world exposition movement. After the triumph of the Great Exhibition, other countries quickly organised their own industrial festivals to break the British ideological monopoly over the future. Within only two years, New York had held its first World's Fair and, a couple of years later, Paris had hosted its inaugural exposition. Like the Great Exhibition, these imitators were much more than just trade fairs. The 1893 Chicago Columbian Exposition had more than 21 million visitors and the 1900 Paris Universal Exposition attracted nearly 48 million spectators. Whether as tourists, professionals or activists, huge numbers of people from many different nations and cultures came together at these events. World expositions were prefiguring world peace.

Despite such hopes, these expositions were also intensely nationalistic occasions. The main motivation for inviting foreigners to the Great Exhibition was so they could witness the economic supremacy of the British empire with their own eyes. When other countries subsequently put on their own expositions, the organisers always prioritised demonstrations of national technological excellence. The 1889 Paris Universal Exposition was immortalised by the superb engineering achievement of the Eiffel Tower. However, by the time that this exhibition opened, the European powers were already falling behind the rapid pace of innovation taking place in the USA. Only a few years after the Eiffel Tower was built, the Palace of Electricity at the Chicago Columbian Exposition provided spectacular proof of the technological superiority of US industry over its European rivals. America was taking ownership of the future.

During the first half of the twentieth century, the disparity between the two continents became ever more obvious. In the late-1930s, their diverging fortunes were dramatically demonstrated by the expositions held in Paris and New York. Visitors to the 1937 Paris International Exhibition were confronted with a sombre image of the world: the two massive pavilions of Nazi Germany and Stalinist Russia championing their rival versions of the totalitarian imaginary future. The political and ideological divisions driving Europe towards catastrophe were starkly symbolised in brick and concrete. In complete contrast, the icons of the 1939 New York World's Fair were Democracity - the main attraction of the organisers' Perisphere building - and Futurama - a diorama inside the General Motors' pavilion. Both exhibits promoted a utopian vision of an affluent and hi-tech America of the 1960s. In this imaginary future, the majority of the population lived in family homes in the suburbs and commuted to work in their own motor cars. The USA was about to become a consumer society.

Facing such strong competition for the attention of visitors, other corporations resorted to displaying sci-fi fantasy machines. The star exhibit of the Westinghouse pavilion was Elektro: a robot which '... could walk, talk, count on its fingers, puff a cigarette, and distinguish between red and green with the aid of a photoelectric cell.' This gimmick provided the inspiration for the imaginary future of artificial intelligence. Until the 1939 World's Fair, robots in science fiction stories were usually portrayed as emotionless monsters intent on destroying their human masters. Only a year after the exposition closed, Isaac Asimov decided to change this negative image. Just like Elektro in the Westinghouse pavilion, his fictional robots were safe and friendly products of a large corporation. During the 1950s, this change of image led to artificial intelligence becoming one of the USA's most popular imaginary futures. In both science fiction and science fact, the robot servant was the symbol of better times to come.

Cold War Computing

For most visitors to the 1939 New York World's Fair, its imaginary future of consumer prosperity must have seemed like a utopian dream. The American economy was still recovering from the worst recession in the nation's history and Europe was on the brink of another devastating war. Yet, by the time that the 1964 World's Fair opened, the most famous prediction of the 1939 exposition had been realised. The Democracity and Futurama dioramas had portrayed a future where most workers were living in the suburbs and commuting to work in motor cars. However sceptical visitors might have been back in 1939, this prophecy seemed remarkably accurate twenty-five years later. By the early-1960s, America was a suburban-dwelling, car-owning consumer society. Exhibition value had become everyday reality.

'The motor car ... directs [social] behaviour from economics to speech. Traffic circulation is one of the main functions of a society ... Space [in urban areas] is conceived in terms of motoring needs and traffic problems take precedence over accommodation ... it is a fact that for many people the car is perhaps the most substantial part of their 'living conditions'.'

Since the most famous prophecy of the 1939 exposition had largely come true, visitors to the 1964 New York World's Fair could have confidence that its three main imaginary futures would also be realised. Who could doubt that - by 1989 at the latest - the majority of Americans would be enjoying the delights of space tourism and unmetered electricity? Best of all, they would be living in a world where sentient machines were their devoted servants. The American public's confidence in these imaginary futures was founded upon a mistaken sense of continuity. Despite being held on the same site and having many of the same exhibitors, the 1964 World's Fair had a very different focus from its 1939 antecedent. Twenty-five years earlier, the centrepiece of the exposition had been the motor car: a mass produced consumer product. In contrast, the stars of the show at the 1964 World's Fair were state-funded technologies for fighting the Cold War. Computers calculated the trajectories which would send American nuclear missiles to destroy Russian cities and their unfortunate inhabitants. While its 1939 predecessor had showcased motorised transportation for the masses, the 1964 World's Fair celebrated the machines of atomic armageddon.

In earlier expositions, the public display of new products had intensified the effects of commodity fetishism. Exhibition value added another degree of separation between creation and consumption. Above all, this social phenomenon concentrated the public's attention on the symbolic role of new technologies. The present was portrayed as the immediate precursor of the imaginary future. Inside its 1939 pavilion, General Motors' latest products played a supporting role to the Futurama diorama which portrayed the corporation's ambition to turn the majority of the US population into suburban-dwelling, car-owning consumers. But, despite its prioritisation of exhibition value, this exposition couldn't totally ignore the use value of new technology. Almost everyone at the 1939 World's Fair had at some point travelled in a motor car. Although it might obscure the social origins of products, the imaginary future expressed the potential of a really-existing present.

The 1964 New York World's Fair needed a much higher level of fetishisation. For the first time, exhibition value had to deny the principal use value of new technologies. Whatever their drawbacks, motor cars provided many benefits for the general public. In contrast, space rockets, nuclear reactors and mainframe computers had been invented for murdering millions of people. Although the superpowers' imperial hegemony depended upon nuclear weapons, the threat of global annihilation made their possession increasingly problematic. Two years earlier, the USA and Russia had almost blundered into a catastrophic war over Cuba. Despite disaster being only narrowly averted, the superpowers were incapable of stopping the arms race. In the bizarre logic of the Cold War, the prevention of an all-out confrontation between the two blocs depended upon the continual growth in the number of nuclear weapons held by both sides. The ruling elites of the USA and Russia had difficulties in admitting to themselves - let alone to their citizens - the deep irrationality of this new form of military competition. In a rare moment of lucidity, American analysts invented an ironic acronym for this high-risk strategy of 'mutually assured destruction': MAD.

Not surprisingly, the propagandists of both sides justified the enormous waste of resources on the arms race by promoting the peaceful applications of the leading Cold War technologies. By the time that the 1964 New York World's Fair opened, the weaponry of genocide had been successfully repackaged into people-friendly products. Nuclear power would soon be providing unmetered energy for everyone. Space rockets would shortly be taking tourists for holidays on the moon. Almost all traces of the military origins of these technologies had disappeared. Exhibition value completely covered up use value.

Like nuclear reactors and space rockets, computers had also been developed as Cold War weaponry. ENIAC - the first mainframe ever built in America - was a machine for calculating firing tables to improve the accuracy of artillery guns. From the early-1950s onwards, IBM's computer division was focused on winning orders from the American government. Using mainframes supplied by the corporation, the US military prepared for nuclear war, organised invasions of 'unfriendly' countries, directed the bombing of enemy targets, paid the wages of its troops, ran complex war games and managed its supply chain. Thanks to American taxpayers, IBM became the technological leader of the computer industry.

When the 1964 New York World's Fair opened, the corporation was still closely involved in a wide variety of military projects. Yet, just like the displays of fission reactors and space rockets, the computing exhibits at the 1964 World's Fair carefully avoided showing the military applications of this new technology. Although IBM had grown rich from government contracts, the corporation's pavilion was dedicated to promoting the sci-fi fantasy of thinking machines. Like the predictions of unmetered energy and space tourism, the imaginary future of artificial intelligence distracted visitors at the World's Fair from discovering the original motivation for developing IBM's mainframes: killing millions of people. Visitors were supposed to admire the achievements of US industry, not to question its dubious role in the arms race. The horrors of the Cold War present had to be hidden by the marvels of the imaginary futures.

Cybernetic Supremacy

At the 1964 World's Fair, imaginary futures temporarily succeeded in concealing the primary purpose of its three iconic technologies from the American public. But even the finest-crafted exhibition values couldn't hide dodgy use values for ever. As the decades passed, none of the predictions made at the World's Fair about the key Cold War technologies were realised. Energy remained metered, tourists didn't visit the moon and computers never became intelligent. Unlike the prescient vision of motoring for the masses at the 1939 World's Fair, the prophecies about the star technologies of the 1964 exposition seemed almost absurd twenty-five years later. Hyper-reality had collided with reality - and lost.

Like the displays of nuclear reactors and space rockets, the computer exhibits at the 1964 World's Fair also misread the direction of technological progress. Yet, there was one crucial difference between the collapse of the first two prophecies and that of the last one. What eventually discredited the predictions of unmetered electricity and holidays on the moon was their failure to appear over time. In contrast, scepticism about the imaginary future of artificial intelligence was encouraged by exactly the opposite phenomenon: the increased likelihood of people having personal experience of computers. After using these imperfect tools for manipulating information, it was much more difficult for them to believe that calculating machines could evolve into sentient superbeings.

Despite the failure of its prophecy, IBM suffered no damage. In stark contrast with nuclear power and space travel, computing was the Cold War technology which successfully escaped from the Cold War. Right from the beginning, machines made for the US military were also sold to commercial clients. By the time that IBM built its pavilion for the 1964 World's Fair, the imaginary future of artificial intelligence had to hide more than the unsavoury military applications of computing. Exhibition value also performed its classic function of concealing the role of human labour within production. Computers were described as 'thinking' so the hard work involved in designing, building, programming and operating them could be discounted. Above all, the prophecy of artificial intelligence diverted attention away from the role of technological innovation within American workplaces.

The invention of computers came at an opportune moment for big business. During the first half of the twentieth century, large corporations had become the dominant institutions of the American economy. Henry Ford's giant car factory became the eponymous symbol of the new social paradigm: Fordism. When profitable, corporations replaced the indirect regulation of production by markets with direct supervision by bureaucrats. As the wage-bill for white-collar employees steadily rose, businesses needed increasing amounts of equipment to raise productivity within the office. Long before the invention of the computer, Fordist corporations were running an information economy with tabulators, typewriters and other types of office equipment. However, by the beginning of the 1950s, the mechanisation of clerical labour had stalled. Increases in productivity in the office were lagging well behind those in the factory. When the first computers appeared on the market, corporate managers quickly realised that the new technology offered a solution to this pressing problem. The work of large numbers of tabulator operators could now be done by a much smaller group of people using a mainframe. Even better, the new technology of computing enabled capitalists to deepen their control over their organisations. Much more information about many more topics could now be collected and processed in increasingly complex ways. Managers were masters of all that they surveyed.

Almost from its first appearance in the workplace, the mainframe was caricatured - with good reason - as the mechanical perfection of bureaucratic tyranny. In Asimov's sci-fi stories, Mr and Mrs Average were the owners of robot servants. Yet, when the first computers arrived in America's factories and offices, this new technology was controlled by the bosses not the workers. In 1952, Kurt Vonnegut published a sci-fi novel which satirised the authoritarian ambitions of corporate computing. In his dystopian future, the ruling elite had delegated the management of society to an omniscient artificial intelligence.

'EPICAC XIV ... decided how many [of] everything America and her customers could have and how much they would cost. And it ... would decide how many engineers and managers and ... civil servants, and of what skills, would be needed to deliver the goods; and what I.Q. and aptitude levels would separate the useful men [and women] from the useless ones, and how many ... could be supported at what pay level...'

For business executives, Vonnegut's nightmare was their computer daydream. As mainframes increased in power, companies were able to automate more and more clerical tasks. According to the prophets of artificial intelligence, the computerisation of clerical work was only the first step. For its new System/360 machines, IBM had constructed the world's most advanced computer-controlled assembly-lines to increase the productivity of its high-skill, high-wage employees. When thinking machines were developed, mainframes would completely replace most forms of administrative and technical labour within manufacturing. The ultimate goal was the creation of the fully-automated workplace. In the imaginary future of artificial intelligence, the corporation and the computer would be one and the same thing.

As the US military had already fortuitously discovered, machinery could operate much more efficiently without any human intervention. By building predetermined responses into the design, an inanimate weapon acted according to 'feed-back' from its environment. According to Norbert Wiener, these self-regulating technologies had been forerunners of the computer. In turn, the advent of the mainframe heralded the remoulding of the whole of society in the image of a new technological paradigm: cybernetics.

'The notion of programming in the factory had already become familiar through the work of Taylor ... on time study, and was ready to be transferred to the machine. ... The consequent development of automatisation ... [is] one of the great factors conditioning the social and technical life of the age to come...'

The corporate vision of cybernetic Fordism meant forgetting the history of Fordism itself. This economic paradigm had been founded upon the successful co-ordination of mass production with mass consumption. Ironically, since their exhibition value was more closely connected to social reality, Democracity and Futurama in 1939 provided a much more accurate prediction of the development path of computing than the IBM pavilion did in 1964. Just like motor cars twenty-five years earlier, this new technology was also slowly being transformed from a rare, hand-made machine into a ubiquitous, factory-produced commodity. IBM's own System/360 series of computers - launched in the same month as the 1964 World's Fair opened - was at the 'cutting edge' of this process. Like Ford's motor cars before them, IBM's mainframes were manufactured on assembly-lines. These opening moves towards the mass production of computers anticipated what would be the most important advance in this sector twenty-five years later: the mass consumption of computers.

The imaginary future of artificial intelligence was a way of avoiding thinking about the likely social consequences of the widespread ownership of computers. In the early-1960s, Big Brother mainframe belonged to big government and big business. Above all, 'feedback' was knowledge of the ruled monopolised by the rulers. However, as Norbert Wiener himself had pointed out, Fordist production would inevitably transform expensive mainframes into cheap commodities. In turn, increasing ownership of computers was likely to disrupt the existing social order. For the 'feedback' of information within human institutions was most effective when it was two-way. By reconnecting conception and execution, cybernetic Fordism threatened the social hierarchies which underpinned Fordism itself.

'... the simple coexistence of two items of information is of relatively small value, unless these two items can be effectively combined in some mind ... which is able to fertilise one by means of the other. This is the very opposite of the organisation in which every member travels a preassigned path...'

At the 1964 World's Fair, this possibility was definitely not part of IBM's imaginary future. Rather than aiming to produce ever greater numbers of more efficient machines at cheaper prices, the corporation was focused on steadily increasing the capabilities of its computers to preserve its near-monopoly over the military and corporate market. Instead of room-sized machines shrinking down into desktops, laptops and, eventually, mobile phones, IBM was convinced that computers would always be large and bulky mainframes. The corporation fervently believed that - if this path of technological progress was extrapolated - artificial intelligence must surely result. Crucially, this conservative recuperation of cybernetics implied that sentient machines would inevitably evolve into lifeforms which were more advanced than mere humans. The Fordist separation between conception and execution would have achieved its technological apotheosis.

Not surprisingly, IBM was determined to counter this unsettling interpretation of its own futurist propaganda. At the 1964 World's Fair, the corporation's pavilion emphasised the utopian possibilities of computing. Yet, despite its best efforts, IBM couldn't entirely avoid the ambiguity inherent within the imaginary future of artificial intelligence. This fetishised ideology could only appeal to all sections of American society if computers fulfilled the deepest desires of both sides within the workplace. Therefore, in the exhibits at its pavilion, IBM promoted a single vision of the imaginary future which combined two incompatible interpretations of artificial intelligence. On the one hand, workers were told that all their needs would be satisfied by sentient robots: servants who never tired, complained or questioned orders. On the other hand, capitalists were promised that their factories and offices would be run by thinking machines: producers who never slacked off, expressed opinions or went on strike. Robby the Robot had become indistinguishable from EPICAC XIV. If only at the level of ideology, IBM had reconciled the social divisions of 1960s America. In the imaginary future, workers would no longer need to work and employers would no longer need employees. The sci-fi fantasy of artificial intelligence had successfully distracted people from questioning the impact of computing within the workplace. After visiting IBM's pavilion at the 1964 World's Fair, it was all too easy to believe that everyone would win when the machines acquired consciousness.

Inventing New Futures

Forty years later, we're still waiting for the imaginary future of artificial intelligence. In the intervening period, we've been repeatedly promised its imminent arrival. Yet, despite continual advances in hardware and software, machines are still incapable of 'thinking'. The nearest thing to artificial intelligence which most people have encountered is the characters in video games. But, as the growing popularity of on-line gaming demonstrates, a virtual opponent is a poor substitute for a human player. Looking back at the history of this imaginary future, it is obvious that neither the optimistic nor the pessimistic versions of artificial intelligence have been realised. Robby the Robot isn't our devoted servant and EPICAC XIV doesn't control our lives. Instead of evolving into thinking machines, computers have become consumer goods. Room-sized mainframes have kept on shrinking into smaller and smaller machines. Computers are everywhere in the modern world - and their users are all too aware that they're dumb.

Repeated failure should have discredited the imaginary future of artificial intelligence for good. Yet, its proponents remain unrepentant. Four decades on from the 1964 World's Fair, IBM is still claiming that computers are on the verge of acquiring consciousness. The persistence of this fantasy demonstrates the continuing importance of exhibition value within the computer industry. As in the early-1960s, artificial intelligence still provides a great cover story for the development of new military technologies. Bringing on the Singularity seems much more friendly than collaborating with American imperialism. Even more importantly, this imaginary future continues to disguise the impact of computing within the workplace. Both managers and workers are still being promised technological fixes for socio-economic problems. The dream of sentient machines makes better media copy than the reality of cybernetic Fordism. At the beginning of the 21st century, artificial intelligence remains the dominant ideological manifestation of the exhibition value of computing.

The credibility of this imaginary future depends upon forgetting its embarrassing history. Looking back at how earlier versions of the prophecy were repeatedly discredited encourages deep scepticism about its contemporary iterations. Our own personal frustrations with computer technology should demonstrate the improbability of its transformation into the Silicon Messiah. Forty years after the New York World's Fair, artificial intelligence has become an imaginary future from the distant past. What is needed instead is a much more sophisticated analysis of the potential of computing. The study of history should inform the reinvention of the future. Messianic mysticism must be replaced by pragmatic materialism. Above all, this new image of the future should celebrate computers as tools for augmenting human intelligence and creativity. Exhibition value must give way to use value. Praise for top-down hierarchies of control must be superseded by the advocacy of two-way sharing of information. Let's be inspired and passionate about imagining our own visions of the better times to come.


Isaac Asimov, I, Robot, Panther, London 1968.

Isaac Asimov, The Rest of the Robots, Panther, London 1968.

Stephen Ambrose, The Rise to Globalism: American foreign policy 1938-1970, Penguin, London 1971.

Jeffrey Auerbach, The Great Exhibition of 1851: a nation on display, Yale University Press, New Haven 1999.

Walter Bagehot, The English Constitution, Fontana, London 1963.

Richard Barbrook, 'Cyber-communism: how the Americans are superseding capitalism in cyberspace', Science as Culture, No. 1, Vol. 9, 2000, pp. 5-40.

Richard Barbrook and Pit Schultz, 'The Digital Artisans Manifesto', ZKP 4, nettime, Ljubljana 1997, pp. 52-53.

James Bell, 'Exploring the 'Singularity''.

James Beniger, The Control Revolution: technological and economic origins of the information society, Harvard University Press, Cambridge Mass 1986.

Walter Benjamin, 'The Work of Art in the Age of Mechanical Reproduction', Illuminations, Fontana, London 1973, pp. 211-244.

Walter Benjamin, 'Paris - the capital of the nineteenth century', Charles Baudelaire: a lyric poet in the era of high capitalism, Verso, London 1973, pp. 155-176.

Edmund Berkeley, The Computer Revolution, Doubleday, New York 1962.

Robert Brain, Going to the Fair: readings in the culture of nineteenth-century exhibitions, Whipple Museum of the History of Science, Cambridge 1993.

James Cameron (director), The Terminator, MGM/United Artists, 1984.

Paul Ceruzzi, A History of Modern Computing, MIT Press, Cambridge Mass 2003.

Urso Chappell, 'Expomuseum: World's Fair history, architecture and memorabilia'.

Robert Dallek, John F. Kennedy: an unfinished life 1917-1963, Penguin, London 2003.

Richard Thomas DeLamarter, Big Blue: IBM's use and abuse of power, Pan, London 1986.

Charles Eames and Ray Eames, A Computer Perspective: a sequence of 20th century ideas, events and artefacts from the history of the information machine, Harvard University Press, Cambridge Mass 1973.

Editors of Time-Life Books, Official Guide New York World's Fair 1964/5, Time, New York 1964.

Exposition Publications, Official Guide Book of the New York World's Fair 1939, Exposition Publications, New York 1939.

Henry Ford in collaboration with Samuel Crowther, My Life and Work, William Heinemann, London 1922.

Igor Golomstock, Totalitarian Art in the Soviet Union, the Third Reich, Fascist Italy and the People's Republic of China, Collins Harvill, London 1990.

Eric Hobsbawm and Terence Ranger (eds.), The Invention of Tradition, Cambridge University Press, Cambridge 1983.

Honda, 'Asimo'.

Jeremy Isaacs and Taylor Dowling, Cold War: for 45 years the world held its breath, Bantam, London 1998.

Herman Kahn, On Thermonuclear War, Princeton University Press, Princeton 1960.

Ray Kurzweil, 'The Intelligent Universe'.

Fritz Lang (director), Metropolis, Eurekavideo, 2003.

Henri Lefebvre, Everyday Life in the Modern World, Transaction Publications, New Brunswick 1984.

Henry Luce, The American Century, Time, New York 1941.

F.S.L. Lyons, Ireland Since the Famine, Fontana, London 1985.

Karl Marx, Capital Volume 1: a critique of political economy, Penguin, London 1976.

Marvin Minsky, 'Steps Towards Artificial Intelligence'.

Marvin Minsky, 'Matter, Mind and Models'.

Emerson Pugh, Building IBM: shaping an industry and its technology, MIT Press, Cambridge Mass 1995.

Emerson Pugh, Lyle Johnson and John Palmer, IBM's 360 and Early 370 Systems, MIT Press, Cambridge Mass 1991.

Julie Rose, 'Reactions to the Fair'.

James Schefter, The Race: the definitive story of America's battle to beat Russia to the moon, Century, London 1999.

Herbert Simon, The Shape of Automation for Men and Management, Harper, New York 1965.

Robert Sobel, IBM: colossus in transition, Truman Talley, New York 1981.

Jeffrey Stanton, 'Best of the World's Fair'.

Jeffrey Stanton, 'Building the 1964 World's Fair'.

Jeffrey Stanton, 'Showcasing Technology at the 1964-1965 New York World's Fair'.

Robert A. M. Stern, Thomas Mellins and David Fishman, New York 1960: architecture and urbanism between the Second World War and the Bicentennial, Benedikt Taschen, Köln 1997.

Kurt Vonnegut, Jr., Player Piano, Panther, St. Albans 1969.

Immanuel Wallerstein, The Politics of the World-Economy: the states, the movements and the civilisations, Cambridge University Press, Cambridge 1984.

Norbert Wiener, The Human Use of Human Beings: cybernetics and society, Avon Books, New York 1967.

Fred Wilcox (director), Forbidden Planet, Turner Entertainment, 1999.

Wikipedia, 'Data (Star Trek)'.

Ray Kurzweil, 'The Intelligent Universe', p. 3.

See Wikipedia, 'Data (Star Trek)'; and James Cameron, The Terminator.

See Honda, 'Asimo'.

See James Bell, 'Exploring the "Singularity"'.

See Jeffrey Stanton, 'Building the 1964 World's Fair'; and 'Best of the World's Fair'.

'A Millennium of Progress' was one of the three feel-good themes used to promote the World's Fair. The publisher Henry Luce had announced the advent of the American century in 1941. See Jeffrey Stanton, 'Building the 1964 World's Fair'; and Henry Luce, The American Century.

See Editors of Time-Life Books, Official Guide New York World's Fair 1964/5, pp. 94, 96.

See Editors of Time-Life Books, Official Guide New York World's Fair 1964/5, p. 212.

See Editors of Time-Life Books, Official Guide New York World's Fair 1964/5, p. 208.

See James Schefter, The Race, pp. 145-231.

See Editors of Time-Life Books, Official Guide New York World's Fair 1964/5, pp. 52-53, 220, 222.

See Editors of Time-Life Books, Official Guide New York World's Fair 1964/5, pp. 90-92; and Jeffrey Stanton, 'Showcasing Technology at the 1964-1965 New York World's Fair'.

See Emerson Pugh, Building IBM, pp. 265-267.

In the early 1960s, IBM had a 70% share of the mainframe market and was making over 40% profit on some of its machines. See Paul Ceruzzi, A History of Modern Computing, pp. 110-112; and Richard Thomas DeLamarter, Big Blue, pp. 47-49.

See Richard Thomas DeLamarter, Big Blue, pp. 54-146.

See Editors of Time-Life Books, Official Guide New York World's Fair 1964/5, pp. 70-74; Jeffrey Stanton, 'Showcasing Technology at the 1964-1965 New York World's Fair'; and Robert Stern, Thomas Mellins and David Fishman, New York 1960, p. 1046.

See Marvin Minsky, 'Matter, Mind and Models'; and 'Steps Toward Artificial Intelligence'.

Robby the Robot was the devoted mechanical servant in Fred Wilcox, Forbidden Planet.

Herbert Simon, The Shape of Automation for Men and Management, p. 39. This confident prediction was made in 1960.

See Immanuel Wallerstein, The Politics of the World-Economy; and Stephen Ambrose, The Rise to Globalism.

See Jeffrey Auerbach, The Great Exhibition of 1851, pp. 47-53, 137-140.

See Robert Brain, Going to the Fair, pp. 97-103; and Jeffrey Auerbach, The Great Exhibition of 1851, pp. 104-108.

Jeffrey Auerbach, The Great Exhibition of 1851, p. 31.

See Jeffrey Auerbach, The Great Exhibition of 1851, pp. 100-104, 132-134.

See F.S.L. Lyons, Ireland Since the Famine, pp. 42-46.

Walter Benjamin, 'Paris - the capital of the nineteenth century', p. 165. Also see Karl Marx, Capital Volume 1, pp. 163-177.

See Walter Benjamin, 'The Work of Art in the Age of Mechanical Reproduction', pp. 218-219.

See Jeffrey Auerbach, The Great Exhibition of 1851, pp. 17-23, 113-118.

See Eric Hobsbawm and Terence Ranger, The Invention of Tradition.

Walter Bagehot, The English Constitution, p. 65.

See Robert Brain, Going to the Fair, p. 10.

See Jeffrey Auerbach, The Great Exhibition of 1851, pp. 161-189.

See Urso Chappell, Expomuseum.

See Julie Rose, 'Reactions to the Fair'.

See Igor Golomstock, Totalitarian Art, pp. 132-137.

See Exposition Publications, Official Guide Book of the New York World's Fair 1939, pp. 42-45, 207-209.

Charles Eames and Ray Eames, A Computer Perspective, p. 105.

For a famous 1920s example of these malevolent artificial beings, see Fritz Lang, Metropolis.

See Isaac Asimov, I, Robot; and The Rest of the Robots.

Henri Lefebvre, Everyday Life in the Modern World, p. 100.

See Jeremy Isaacs and Taylor Downing, Cold War, pp. 230-243.

See Robert Dallek, John F. Kennedy, pp. 535-574.

See Jeremy Isaacs and Taylor Downing, Cold War, pp. 230-243; and Herman Kahn, On Thermonuclear War, pp. 119-189.

See Paul Ceruzzi, A History of Modern Computing, p. 15.

See Emerson Pugh, Building IBM, pp. 167-172.

See Edmund Berkeley, The Computer Revolution, pp. 56-57, 59-60, 137-145.

In the late 1950s, a US air force think-tank estimated that an all-out nuclear war between the two superpowers would kill around 90 million Americans. In the worst-case scenario, 160 million would have lost their lives. Herman Kahn, On Thermonuclear War, pp. 109-114.

See Emerson Pugh, Building IBM, pp. 152-155.

See Henry Ford, My Life and Work.

See James Beniger, The Control Revolution, pp. 291-425.

See Robert Sobel, IBM, pp. 95-184.

'A modern computer can calculate more in ten minutes than a man [or woman] can calculate in fifty years, even if the man [or woman] is using a desk calculating machine.' Edmund Berkeley, The Computer Revolution, p. 5.

Kurt Vonnegut, Player Piano, p. 106.

See Emerson Pugh, Lyle Johnson and John Palmer, IBM's 360 and Early 370 Systems, pp. 87-105, 204-210.

'... we will soon have the technological means ... to automate all management decisions.' Herbert Simon, The Shape of Automation for Men and Management, p. 47.

See Charles Eames and Ray Eames, A Computer Perspective, pp. 128-129, 146-149.

Norbert Wiener, The Human Use of Human Beings, pp. 204-205.

See Emerson Pugh, Lyle Johnson and John Palmer, IBM's 360 and Early 370 Systems, pp. 87-105, 204-210.

See Norbert Wiener, The Human Use of Human Beings, pp. 210-211.

See Norbert Wiener, The Human Use of Human Beings, pp. 67-73.

Norbert Wiener, The Human Use of Human Beings, p. 172.

See James Bell, 'Exploring the "Singularity"', p. 2.

See Richard Barbrook and Pit Schultz, 'The Digital Artisans Manifesto'; and Richard Barbrook, 'Cybercommunism'.
