W14 Reading Response Requirement:
Part one and part two: each part needs two paragraphs, four paragraphs in total.
First paragraph: summarize the readings. Often this will be a summary of more than one reading, so a good idea is to find the common link(s) between them and lay that out.
Second paragraph: do one of several things: explain a key concept in the readings in more detail, question something in the readings, or connect them to either previous readings or external examples.
Again, this isn’t meant to be high-pressure writing; my goal here is simply iterative reading and writing practice in support of discussion in class.
Part one’s reading response: read Natale and Ballatore – 2017 – Imagining the thinking machine Technological myth.pdf
Part two’s reading response: read Damjanov_2017_Of Defunct Satellites and Other Space Debris.pdf
Imagining the thinking machine: Technological myths and the rise of artificial intelligence
Simone Natale, Loughborough University, UK
Andrea Ballatore, Birkbeck, University of London, UK
This article discusses the role of technological myths in the development of artificial intelligence
(AI) technologies from the 1950s to the early 1970s. It shows how the rise of AI was accompanied by
the construction of a powerful cultural myth: The creation of a thinking machine, which would be
able to perfectly simulate the cognitive faculties of the human mind. Based on a content analysis of
articles on AI published in two magazines, the Scientific American and the New Scientist, which were
aimed at a broad readership of scientists, engineers and technologists, three dominant patterns in
the construction of the AI myth are identified: (1) the recurrence of analogies and discursive shifts,
by which ideas and concepts from other fields were employed to describe the functioning of AI
technologies; (2) a rhetorical use of the future, imagining that present shortcomings and limitations
will shortly be overcome and (3) the relevance of controversies around the claims of AI, which we
argue should be considered as an integral part of the discourse surrounding the AI myth.
Keywords: Artificial Intelligence, cybernetics, history of computing, intelligent machines, media imaginary, new media, scientific controversies, software studies, technological myth
As historians of media and technology have shown, a new technology is always a field onto which
a broad range of hopes and fears are projected (Corn, 1986; Natale and Balbi, 2014; Sturken et al.,
[Corresponding author: Simone Natale, Loughborough University, Department of Social Sciences, Loughborough, Leicestershire LE11 3TU, UK. Published in Convergence: The International Journal of Research into New Media Technologies, © The Author(s) 2017.]
With the emergence of new media studies as a field of enquiry, scholars addressed the
cultural discourses surrounding digital technologies in terms of ‘imaginaire’ (Flichy, 2007) or
‘modern myths’ (Mosco, 2004). As happened with previous communication technologies, the
public discourse on digital media, such as personal computers, e-readers, smartphones and the
Internet, is strongly informed by speculations, fantasies and references to the future (Ballatore,
2014; Boddy, 2004).
What we call ‘new media’, however, have a long history, whose study is necessary to understand
today’s digital culture (Park et al., 2011). This article aims to contribute to this endeavour by
illuminating the emergence of a crucial component of the digital imaginary: The speculations and
fantasies about artificial intelligence (AI), which characterized the development of computing
technologies during their early development. It focusses on the emergence, from the 1950s to the early
1970s, of the AI myth, broadly defined as the ensemble of beliefs about digital computers as thinking
machines, as a key moment in which to study the patterns characterizing the construction of
technological myths and the digital imaginary. Based on a content analysis of articles on AI
published in two magazines, the Scientific American (SA) and the New Scientist (NS), we
identified three dominant patterns in the construction of the AI myth: (1) the recurrence of
analogies and discursive shifts, by which ideas and concepts from other fields were employed to
describe the functioning of AI technologies; (2) a rhetorical use of the future, imagining that
present shortcomings and limitations will shortly be overcome and (3) the relevance of controversies
around the claims of AI, which we argue should be considered as an integral part of the
discourse surrounding the AI myth. The recognition of these patterns may provide useful hints
for examining the rise not only of the specific AI myth but also of technological myths constructed
in other contexts.
The presence of controversies since the early history of AI, in particular, is revealing of the
dynamics through which technological myths emerge and proliferate. Pointing to the key role of
controversy in fields, such as parapsychology, we argue that scepticism and criticism added to AI’s
capacity to attract attention and space in scientific debates and in the public arena. The AI myth
originated and developed not only as the result of the discourse produced by those who professed to
believe in the possibility of building a thinking machine, but rather through a dialogic relationship
that involved supporters as well as critics of this vision. The functional role of controversies helps
to explain the persistence of the myth, which continues to centre on the same overarching questions
and tropes characterizing early debates on AI.
By examining each of these patterns in the context of early AI research, this article has three
main goals. First, it aims to contribute to a better understanding of the key features of the rise of AI
and its cultural impact. Second, it aims to provide a relevant case study for the analysis of the
rhetorical and discursive strategies accompanying the emergence of technological myths. Third,
our analysis also points to the necessity to re-evaluate claims about the history of AI. In particular,
we contrast the simplistic view according to which the rhetoric of the AI myth in popular culture
and the public sphere was counteracted by the computer scientists’ attempt to provide an accurate
image of the potential and the problems of these technologies. We demonstrate, on the contrary,
that the basic tenets of the AI myth can be found in the interventions of key researchers of the field,
published in magazines, such as the SA and the NS.
The remainder of this article is organized as follows. First, we discuss technological myths as
useful frameworks to discuss techno-scientific developments. Second, after having briefly
described the usefulness of choosing SA and NS to conduct our survey, we discuss the three main
rhetorical and discursive patterns (analogies, projected futures and controversy) characterizing the
emergence of the AI myth since the early 1950s. In the conclusion, we contend that the discourses
set in motion by AI represent a powerful technological myth that still deeply influences and shapes
the current digital imaginary.
What is a ‘technological myth’, and why do we employ this concept to re-frame the emergence of
AI? The term ‘myth’ resonates widely in the foundations of European cultural and media studies,
particularly in the intellectual legacy of French semiotician Roland Barthes, who described
‘modern mythologies’ as the dominant cultural ideologies of our time, at the core of our relationship
to technology (1957). More recently, Vincent Mosco (2004: 3) stated that ‘myths are
stories that animate individuals and societies by providing paths to transcendence that lift people
out of the banality of everyday life’. In contemporary societies, these paths are often embodied by
technologies, such as digital computers and the Internet, pointing us to a ‘digital sublime’. In a
similar vein, Dourish and Bell (2011) in their study on ubiquitous computing define technological
myths as powerful ‘organising visions’ on how a new technology will fit in the world.
As Mosco underlines, the theoretical advantage in using this term in relation to digital
technologies is connected to the fact that, despite the popular pejorative usage of the term ‘myth’,
technological myths are not necessarily untruthful and deceitful. More precisely, their status of
truth or falsity does not interfere with their nature of myths. As he puts it, myths ‘are not true or
false, but living or dead’ (Mosco, 2004: 3). In this sense, it is not important if a belief corresponds
or not to reality, but rather what it reveals about the cultural context from which it originated. A
living technological myth may have deep effects, even if its tenets turn out to be grossly
incorrect. Indeed, this is coherent with the characterization of the AI myth provided by information
scientist Hamid Ekbia, who defined it as the ‘embodiment of a dream – a kind of dream
that stimulates inquiry, drives action, and invites commitment, not necessarily an illusion or
mere fantasy’ (2008: 2).
The fact that popular narratives and representations of technology may or may not correspond to
actual events has, as argued elsewhere (Natale, 2016: 440–443), an important methodological
implication for scholars interested in the study of technological myths: All technological myths
have to be taken in consideration and researched in the same way, notwithstanding considerations
about their accuracy or truthfulness. To use an expression conceptualized within the history and
sociology of science, research into technological myths requires the application of the principle of
symmetry, according to which the same type of causes should explain both ‘true’ and ‘false’ beliefs.
How does a technological myth become one that affects culture and society? A potential answer
to this question lies in the narrative character of myths. Approaches to storytelling (e.g. Cavarero,
2000) have shown that one of the characteristics of narratives is their capacity to circulate, following
narrative patterns that are repeated again and again. The same applies to technological myths,
whose capacity to become influential in specific societies and cultures is closely related to their
nature of narrative tropes that are repeated and circulated over and over again and are used in
multiple contexts to represent the functioning, impact and promise of technology (Ballatore and
Natale, 2016; Natale, 2016).
The early history of AI is deeply intertwined with the emergence of a technological myth,
centred around the possibility of creating thinking machines by using the tools provided by digital
computing. C. Dianne Martin (1993) has discussed a prominent aspect of the imaginary surrounding
computers, that is, the vision of the computer as an ‘awesome thinking machine’. During
the early years of the digital revolution, primarily in the 1950s and early 1960s, a large segment of
public opinion came to see the emergent computers as ‘intelligent brains, smarter than people,
unlimited, fast, mysterious, and frightening’ (Martin, 1993: 122). Martin’s contention, based on a
body of poll-based sociological evidence and content analysis of newspapers, is that mainstream
media journalists shaped the public imagination of early computers through misleading metaphors
and technical exaggerations. By contrast, according to Martin, computer scientists attempted to
counteract this narrative and to avoid exaggerations about the new devices (129). As computers
moved into the workplace and into the daily lives of workers in the early 1970s, claims Martin, the
myth of the awesome computing machine lost part of its credibility, but still affected a large
segment of the American population. Two decades later, although further reduced, the myth was
still present, particularly in its negative forms. Yet, Martin’s analysis downplays the importance of
such myths not only among the general public, but also among technologists and researchers in
computer science. As a result, the role of the AI field in establishing these beliefs is left unaccounted
for, a gap that we fill in the next sections.
The construction of the AI myth: A content analysis
As Ortoleva (2009: 2) notes, technological myths condition not only the perception of technology
within the public but also ‘the professional culture of those who have produced the technical
innovations and helped their development’. In this sense, in order to understand the AI myth, it is
essential to look also at the professional and techno-scientific milieux of technologists beyond the
inner circle of AI scientists. For this purpose, we carried out preliminary research on the period of
study (1950–1975) to identify significant magazines where the development of the discipline was
widely discussed at a technical level. This thematic inspection was conducted on a sample of
articles containing the words computer, cybernetics and intelligence. As a result, we selected two
widely read magazines, the United States-based Scientific American and the British New Scientist,
while we did not identify enough thematic relevance in others, such as Communications of the
ACM and Popular Mechanics.
Although far from comprehensive, this material provides insight on how the results and the
promises of AI research were presented to an informed readership. In fact, these magazines were –
and still are – aimed at a broad readership of scientists and engineers. Discussing techno-scientific
innovation across disciplines, they can be used as a proxy to investigate the visions, fears, desires
and fantasies triggered by AI research and to obtain clues about how an entire society debated the
introduction of a new medium. Crucially, these magazines were a platform where key researchers
in the AI field published articles aimed at a broader readership than scientific papers and through
which they were able to contribute to wider discussions about the potential and the future of AI.
Our use of these sources follows a methodological proposal for studying the history of media
and technology that was developed by media historian Carolyn Marvin. By examining magazines
that mainly targeted expert readers and to which professionals and engineers contributed
articles and letters, Marvin documented the way these groups, whose ranks included scientists,
electrical engineers, but also cadres of operatives from machine tenders to telegraph operators,
directed their efforts in the engineering, improvement, and promotion of the new media of their
age (1988). A further benefit of employing this approach is that it provides an opportunity for
comparison and corroboration with other research in media history and new media studies
employing popular scientific magazines as sources to unveil the dynamics of representations and
myth-making in the reception of new media. For instance, Vanobberghen (2010) has used
Marvin’s methodology to explore reactions to the introduction of radio in a Belgian radio
amateur magazine. For what concerns digital media, Stevenson (2016) has recently unveiled
patterns of myth-making in the examination of what he calls ‘belief in the new’ by looking at
how cybercultural magazines Mondo 2000 and Wired contributed to the construction of mythical
narratives about Internet and the Web.
Following Marvin’s approach, we undertook a close reading of articles in the SA and NS that
addressed issues and concepts relevant to the AI field, such as cybernetics, systems theory,
computational linguistics, operations research and automata theory. In the case of SA, we obtained
1240 articles from the magazine’s index, while for NS, we screened all issues from the first issue of
the magazine in 1956 to 1975, identifying about 600 articles. This corpus was then analysed, and
about 100 highly relevant articles per magazine were selected for close reading. This thematic
analysis led us to identify three recurring themes (analogies, future orientation and controversies)
as central in the corpus across the two magazines.
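The corpus-screening step described above can be approximated computationally. The following sketch is purely illustrative: the theme keywords, threshold and helper names are our own assumptions, and the original study relied on manual close reading rather than automated filtering.

```python
# Hypothetical sketch of keyword-based thematic screening of a text corpus.
# THEME_KEYWORDS and the min_hits threshold are illustrative assumptions,
# not the authors' actual procedure (which was a manual close reading).

THEME_KEYWORDS = {
    "analogy": {"brain", "mind", "think", "life", "neuron"},
    "future": {"future", "forecast", "predict", "within", "years"},
    "controversy": {"sceptic", "critic", "controversy", "doubt"},
}

def score_themes(text: str) -> dict:
    """Count how many keywords of each theme occur in an article's text."""
    words = set(text.lower().split())
    return {theme: len(words & kws) for theme, kws in THEME_KEYWORDS.items()}

def select_relevant(articles: list[str], min_hits: int = 2) -> list[str]:
    """Keep articles whose strongest theme has at least min_hits keyword hits."""
    return [a for a in articles if max(score_themes(a).values()) >= min_hits]
```

A screening pass like this could reduce a large index to a shortlist for close reading, which is roughly the funnel the authors describe (1240 and ~600 articles narrowed to ~100 per magazine).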
Beside its strengths, our methodology also has limitations that should be taken into full
account. First, it is impossible to identify with precision the readership of SA and NS across
25 years. Yet, although their readership was probably broad and diversified, studies made
during the same time frame confirm at least for the case of SA that the magazine targeted
especially expert readers (Funkhouser, 1969). Second, and conversely, since the construction
of technological myths is performed within the public sphere, one might wonder if the SA and
NS readership might instead be too limited to account for such a phenomenon. However, as we
pointed out in our discussion of the relationship between technological myths and narrative,
technological myths entail the construction of narrative tropes that circulate within a number
of contexts in the public sphere (Natale, 2016). In this regard, magazines with a strong focus
on science and technology constitute useful resources to identify contexts where technological
myths are constructed and made available to be repeated and disseminated also in other
milieux and through other channels.
Analytic philosopher John Searle proposed a broadly discussed distinction between weak
and strong AI. ‘Strong AI’, in Searle’s view, purports to devise general, human-like
intelligence. ‘Weak AI’, on the other hand, aims at creating highly specialized tools that
mimic specific cases of intelligent human behaviour (Searle, 1980). John Haugeland (1985)
labelled the Strong AI approach ‘Good Old-Fashioned Artificial Intelligence’, which
dominated the field until the 1970s. While weak AI applications are ubiquitous and go
largely unnoticed, the AI myth emerged around the possibility of strong AI. In the magazines
considered in our case study, the emergence of AI was discussed as an innovation that
promised not only exciting applications but also drastic changes in the relationship between
humans and machines.
The examination of how AI was represented to the readers of SA and the NS reveals three main
patterns that characterized the construction of the AI myth. The first pattern is based on a practice
that we propose to call ‘discursive shift’, by which concepts and ideas from other fields and
contexts are used as analogies to describe concepts in AI. The second pattern is based on the
construction of a mythical future, by which goals that are not met by AI at its present state are
projected into the future, turning the shortcomings of AI research into potential developments.
Finally, the third pattern is the recurring presence of controversies about the claims of AI, which, as
we will see, played a constitutive and instrumental role in the construction of the AI myth. Let us
see more closely how these different patterns and strategies worked and how they informed the
representation of AI research within the public sphere.
Discursive shifts and analogies
The first pattern characterizing the construction of the AI myth is the recurrence of discursive shifts
by which concepts and categories from other fields and disciplines are adapted to describing the
functioning of computing technologies. Hamid Ekbia points out a fundamental tension in AI
history between science and engineering. AI pioneers have engaged in engineering, scientific and
discursive practices, through a number of paradigms (Ekbia, 2008: 5). The discursive practices
entailed linking the workings of engineering artefacts, such as computer programmes and automated
devices, to broad scientific claims on the human mind, intelligence and behaviour, relying
on daring analogies among humans, animals and machines. While the usage of analogies is
widespread in scientific discourses and is not unique to this field (Bartha, 2013), it is particularly
prominent in the transdisciplinary research approach adopted by AI researchers.
Although some authors trace its foundations to the roots of Western philosophy in a teleological
manner (McCorduck, 1979; Russell et al., 2010), AI sprang up in the mid-20th century at the
junction of cybernetics, control theory, operations research, psychology and new-born computer
science. American neurophysiologist Warren McCulloch and logician Walter Pitts published in
1943 ‘A logical calculus of the ideas immanent in nervous activity’ (1943), formulating a mathematical
model of neural activity. Their theory brought together seminal work in logic by Rudolf
Carnap, David Hilbert, Bertrand Russell and Alfred N. Whitehead and the computability theories
by Alonzo Church and Alan Turing. In 1948, Wiener published ‘Cybernetics’, a best-selling
monograph that widely disseminated the idea of intelligent machines (Wiener, 1948).
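The McCulloch-Pitts model mentioned above reduces a neuron to a binary threshold unit: it fires when the weighted sum of its binary inputs reaches a threshold. A minimal sketch (with illustrative weights and thresholds of our choosing) shows how such units can realize logical operations, which is what made the analogy between nervous activity and logic so compelling:

```python
# A minimal McCulloch-Pitts threshold neuron (1943 model): the unit outputs 1
# ("fires") iff the weighted sum of its binary inputs meets a fixed threshold.
# Weights and thresholds below are illustrative choices, not from the paper.

def mp_neuron(inputs: list[int], weights: list[int], threshold: int) -> int:
    """Fire (1) iff the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND and OR emerge from the same unit with different thresholds.
def and_gate(a: int, b: int) -> int:
    return mp_neuron([a, b], [1, 1], threshold=2)

def or_gate(a: int, b: int) -> int:
    return mp_neuron([a, b], [1, 1], threshold=1)
```

Because networks of such units can compute any logical function, the model lent early plausibility to treating the brain as a computing machine.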
In 1950, on the other side of the Atlantic, British mathematician Alan Turing published the
article ‘Computing machinery and intelligence’, in which he outlined several influential ideas,
such as natural language processing, machine learning and genetic computing. This article also
described the much-discussed Turing test, in which the intelligence of a machine is assessed in its
ability to produce a plausible conversation indistinguishable from that of a human (Turing, 1950).
In this phase, the computer as a metaphor for the mind gained credibility, along with the centrality
of information as a core element of reality (Floridi, 2008).
While the development of cybernetics is usually associated with the Macy Conferences in New
York, the formal birth of AI can be located in another academic conference, the Dartmouth
Summer Research Project on AI, held in 1956 in New Hampshire. The conference was conceived
as an attempt to ‘find how to make machines use language, form abstractions and concepts, solve
kinds of problems now reserved for humans, and improve themselves’ (McCarthy, 2006). In this
sense, from its inception, AI exhibited the ambitious goal of integrating diverse research areas
towards the implementation of human intelligence in general, applicable to any domain of human
activity, such as language, vision and problem solving, which fell outside the somewhat narrow
scope of control theory and operations research. As a consequence, AI technologies were often
described in the SA and the NS with terms that usually apply to human or animal behaviour. This
resulted in discursive shifts by which concepts migrated from different contexts through analogical
arguments, carrying with them their own cultural associations and meanings, and often resulting in
misleading cross-domain translations (Ekbia, 2008: 5).
The articles and commentaries published in the SA and the NS often focussed on the analogy
between the computer and the brain and between machines and biological life. In a 1950 article for
SA, W. Grey Walter noted that ‘there is an intense modern interest in machines that imitate life’,
and even suggested that ‘engineers who have designed our great computing machines adopted this
system without realizing that they were copying their own brains’ (1950: 43). Analogously, the
Hungarian-American mathematician John George Kemeny observed that the human brain could be
itself compared to a machine. According to his view,
a normal human being is like the universal machine. Given enough time, he can learn to do anything.
… [T]here is no conclusive evidence for an essential gap between man and a machine. For every
human activity we can conceive of a mechanical counterpart. (Kemeny, 1955: 63)
The comparison between artificial and biological life could go so far as to include elements of
humanity that surpassed the boundaries of mere rational thinking, to include feelings and emotions.
In 1971, for instance, an article in the NS was titled ‘Japanese Robot Has Real Feeling’. On closer
reading, one could see that the experiments concerned not so much human emotions as
the capacity of a robot to simulate tactile perception by gaining
information about an object through contact (Anon., 1971). Playing with the semantic ambiguity of
the words feeling/feelings, and alluding to human emotions well beyond basic tactile stimuli, the
author added a considerable amount of sensationalism to his report. Other common attempts to
anthropomorphize computers and robots were based on references to children, whose behaviour
and learning strategies were regarded by some as a promising way to address the question of how a
computer could learn through experiences and trial-and-error (Robertson, 1975; see also Selfridge
and Neisser, 1960). Similar discursive shifts appear often in many other reports on AI research
published in the SA and the NS, the most common being the idea that machines can ‘think’, which
ultimately turned the focus from computing technologies to the discussion of psychological issues,
such as what does it mean to think or to perceive (e.g. Kemeny, 1955; Selfridge and Neisser, 1960;
Walter, 1950). Concepts from fields such as medicine (Anon., 1960), developmental psychology
(Robertson, 1975) and biology (Moore, 1964), among others, were appropriated and absorbed into the discourse of AI.
As studies in the history of technology have shown, the construction of semantic fields is often
instrumental in the constructions of disciplinary fields and communities of researchers that work
under a common paradigm (Kline, 2006; Oldenziel, 2006). As observed by Ruth Oldenziel (2006:
478), ‘words serve as weapons to frame the social realities in which some communities are invited
to participate and others are not’. The introduction and adaptation of new concepts help in the
creation of shared meaning that is entailed in boundary work (Gieryn, 1983). In the case of AI,
discursive shifts that employed concepts and keywords from different contexts provided ground for
the creation of shared meaning within the communities of scientists and engineers involved in AI
research. The analogies blurred the boundary between the human mind and machines, contributing
to the emergence of particular expectations and imaginaries regarding the future of AI.
Projecting the future
The second pattern characterizing the construction of the AI myth is the strong reliance on claims
about future developments of the field. Predictions and visions of the future are one of the main
ways in which mythical ideas about technologies substantiate into particular cultural and social
imaginaries (Natale, 2014). As historians of technology have shown, future-oriented discourse in
techno-scientific environments may contribute to shifting the emphasis from the present state of
research towards an imagined prospect in which the technology will be successfully implemented.
Such a ‘sociotechnical projectory’ contributes to creating a community of researchers, introducing a
shared objective or endpoint that informs and organizes the work of scientists, technologists and
engineers involved in such community (Messeri and Vertesi, 2015).1
In the case of AI research, the call to future developments was a common staple by which
present shortcomings in the applications of AI research were redirected towards a seemingly
proximate future in which these failings would be overcome. Numerous articles in SA and NS
explicitly addressed future developments: Writers reported on the potential applications of AI in
fields, such as transportation (Glanville, 1964; Lighthill, 1964), robotics (Taylor, 1960), medicine
(Anon., 1960), among many others. Predictions often included estimates of the time
required for the development of new fields of application: For instance, Glanville (1964: 684) was
confident that ‘the control by a single computer of the road traffic in the busier parts of our cities’
would, ‘without doubt, be in operation within twenty years’. Even when current research was
presented, contributors frequently highlighted its impact in terms of future opportunities. In
discussing the results of his research, for instance, the director of the Department of Machine
Intelligence at Edinburgh University Donald Michie acknowledged that ‘no single technique is
going to bring about magic transformation’, but at the same time suggested that ‘the consequences
of effective methods for representing chess knowledge could be great’ (1972: 371–372).2
The initial swift achievements in several areas characterized what AI historian Crevier (1993)
defined as the ‘golden years’ of AI. Such encouraging short-term advances brought with them predictions
about the development of this field that were exceedingly optimistic, fuelling the plausibility
of the myth of AI. Formal games, such as checkers and chess, provided a fertile test bed for
AI applications. Since 1952, Arthur L. Samuel at the IBM research department had been working
on a programme that was able to learn how to play checkers, choosing promising moves based on a
heuristic score of the pieces positions on the board (1959). Associated with high intelligence in
popular culture, chess attracted notable contributions from leading AI scientists (Newell et al.,
1958). In an article published in the NS, Donald Michie dedicated a section to the topic of ‘the
future’, in which he attempted to examine the prospective improvements in the methods to
reproduce expert knowledge in chess (1972). Others speculated that writing chess programmes
might result in the future in a better understanding of how a human brain actually works (Zobrist
and Carlson, 1973). Eventually, as an article in the SA pointed out as early as 1952, the application
of research on games could open the way for ‘future automatic machines which will make decisions
in business and military operations’ (King, 1952: 147). Yet, in the early 1970s, an article in
the NS had to admit that, despite some encouraging progress, a conference held in Britain had
proved that there was still a long way to go before a computer capable of beating an international
chess master could be designed (Anon., 1973). It was only in 1997 that a chess computer programme,
Deep Blue, succeeded in beating an established grand master, Garry Kasparov (Campbell
et al., 2002).
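The heuristic-scoring idea behind Samuel’s checkers programme can be sketched in miniature: evaluate each candidate position with a weighted count of features, then pick the move leading to the best-scoring position. The features and weights below are illustrative assumptions, far simpler than Samuel’s learned scoring polynomial:

```python
# Toy version of heuristic position scoring in the spirit of Samuel's
# checkers work: score a position by material balance, then choose the move
# whose successor position scores highest. Feature names and weights are
# hypothetical simplifications, not Samuel's actual evaluation function.

def evaluate(board: dict) -> int:
    """Score a position from the mover's perspective: men = 1, kings = 3."""
    return (board["my_men"] + 3 * board["my_kings"]
            - board["opp_men"] - 3 * board["opp_kings"])

def best_move(successors: dict) -> str:
    """Choose the move leading to the highest-scoring successor position."""
    return max(successors, key=lambda m: evaluate(successors[m]))
```

Samuel’s actual programme went further, adjusting the weights of its evaluation function from experience, which is why it could ‘learn’ to play better than its author.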
Predictions about the future were not only a way to imagine the potential of AI research but also
a specific area of technological development within the field. In 1958, the NS reported on the
possibility of using computers that could make effective forecasts. Although many improvements
in this context have been effectively made in the subsequent decades, the article suggested
practical applications that were yet to be developed. As the magazine reported, the Russian scientist
Leonid Krushinski had claimed to have discovered a new type of reflex, whose study ‘would
help mathematicians to create machines capable of effecting forecasts on a scale inaccessible to the
human brain’ (Anon., 1958). Some years later, the magazine also dedicated a long series of articles
to technological forecasting, collected under the science-fiction-like title of ‘The World in 1984’.
In this context, the examination of many predictions, such as how roads and traffic (Glanville,
1964) or the aviation network (Lighthill, 1964) would be 20 years later, emphasized the potential
of the use of intelligent computers to perform duties usually executed by human workers. Further
subjects of prediction focussing on AI-related technologies included the applications of automation
in the farming industry (Morgan, 1961), the designing of techniques to mechanize haute couture
(Macqueen, 1963), and the construction and workings of an intelligent chemical plant (Ridenour,
1952). In 1960, an article pointed out that AI might even help discover a cure for cancer. This
hopeful claim was based on the consideration that ‘cancer could be defined, cybernetically, as an
error in the controlling system; that is to say, as misinformation or an error in a feedback system’ (Anon., 1960).
It is interesting to note that, similarly to the discursive shifts discussed above, this second
pattern also entails a shift between different contexts: The results of AI research are in fact moved
forward from the horizon of the present to the horizon of the future. This rhetorical move, which
often characterizes techno-scientific research in new and promising areas (Borup et al., 2006), is a
recurring pattern of the way AI research was represented in magazines, such as the SA and the NS
during the period examined. The construction of the AI myth involved an act of conceptual shift by
which concepts and ideas from different fields were translated and applied to the description of AI
research or results in AI research were moved from the examination of the present state towards the
imagination of future horizons and developments.
The role of controversies
The third main pattern emerging from our analysis of the construction of the AI myth in the pages
of the NS and the SA is the strong presence of controversies regarding the claims of (strong) AI.
Since at least the early 1960s – in a period of prevailing optimism regarding the prospects of AI –
sceptics and critics actively challenged the community, rejecting optimistic predictions as
groundless and pointing to the conceptual problems surrounding the core tenets of AI (Moore,
1964; Ulam, 1964). In both the NS and the SA, enthusiastic claims about the potential of AI
technologies came hand in hand with critical interventions. Researchers were particularly sceptical
or nuanced about the possibility that a computer might equal the functioning of a human mind,
mainly because of technical limitations (e.g. Albus and Evans, 1976; Voysey, 1974). American
physicist Louis N. Ridenour calculated that, given the state of computer technology at the time, if a
vacuum-tube machine as complex as the brain were made, it would require ‘a skyscraper to house it, the
power of Niagara to operate it and the full flow of water over the falls to keep it cool’ (1951: 17).
Researchers also realized, very early after the emergence of the AI field, that the dream of a thinking
machine had started long before the development of cybernetics but had not delivered convincing
results thus far (Moore, 1964). Concerns about the possible consequences of automation, such as
mass unemployment, were also expressed, pinpointing the ethical and social problems
involved in the applications and developments of AI (Voysey, 1975).
This tendency to attract controversies and criticism accompanied the field throughout its
development. At different times, authors, commentators and scientists embraced, qualified or
rejected the AI myth. In the inner circle of AI research, although a certain consensus on the core
tenets of the discipline existed, critics rejected the assumptions of the discipline as simplistic and
philosophically naïve (e.g. Taube, 1961). As early AI projects relied on abstract, disembodied
symbol processing, carried out through formal languages, the lack of a physical and perceptual
dimension to ground reasoning was soon identified by AI critics as one of its main methodological
flaws. Notably, phenomenologist and Heideggerian scholar Hubert Dreyfus launched open attacks
on AI, which resulted in his ostracism from the research community (1965, 1992). Between the
absolute belief in the AI myth of Minsky and Dreyfus’ radical scepticism, a spectrum of fluid and
nuanced positions existed.
Natale and Ballatore 9
While some rejected the central metaphor of the brain as a computer as
unsound, other scientists still accepted the possibility of strong AI (Lighthill, 1973).
How can we reconsider the role of controversies in the construction of the AI myth?
Historians of AI have most often privileged a ‘rise and fall’ narrative to describe the rise of
the paradigm in the 1950s–1960s and its apparent demise in the following two decades
(Crevier, 1993; Russell et al., 2010). According to this established narrative, as AI researchers
obtained early successes, unrealistic expectations spread and sustained the belief that fully
fledged thinking machines were on the verge of being created. The hype hit its peak in the
late 1960s. At the beginning of the 1970s, the gap between the real outcomes of AI research
and the wild visions of thinking machines resulted in the so-called AI winter, damaging the
credibility of AI enthusiasts and resulting in a general loss of credibility and funding. The
narrative of hype and disillusionment, while adequate with respect to research funding cycles,
fails to capture how AI has always been – not only during or since its
‘winter’, but also during its ‘golden age’ and in the most recent developments – a field
characterized by a high degree of controversy around the question of whether a thinking machine is
possible. Criticism was not, or at least not only, a consequence of the hype; it was an
element that entered into and shaped the AI myth since its very beginning. Rather than
framing controversies within a rise-and-fall narrative, we might therefore ask whether and to
what extent they were a functional and integral component of the construction of the AI myth.
Indeed, although scientific controversies are often regarded as an element that hinders the
development of a scientific theory or field (Besel, 2011; Ceccarelli, 2011), scholars in history
of science and technology – most notably, Gieryn (1983) and, within the Social Construction
of Technology framework, Pinch and Bijker (1987) – have underlined the functional character of
controversies in scientific and technological innovations (see also Engelhardt and Caplan, 1987).
Adopting a similar approach may be useful to comprehend how controversies have been a
structural component of the AI field.
The myth of the thinking machine emerged as a body of claims, theories and technologies that
constitutionally invited scepticism and criticism. Historians of AI have sometimes argued that the
heightened tendency to stimulate controversies was ingrained in the very name given to the discipline.
Russell et al. (2010) suggest that the term ‘Artificial Intelligence’, coined by John
McCarthy in 1955, contributed to heightening expectations to an unhealthy degree, explicitly setting
the target of an artificial human-like intelligence.
Extensive and apparently endless controversy, observable throughout the history of AI, also
characterizes other highly debated contexts, including parapsychology, a field of inquiry concerned
with the investigation of so-called ‘paranormal’ phenomena.3 Addressing the case of fringe
science, sociologist of science David J. Hess proposes that parapsychologists and their opponents
are not mere antagonists, but rather participants in a wider discourse whose very existence is based
on the incessant controversy that surrounds paranormal phenomena. He notes that scepticism is
constantly evoked not only by those who criticize the irrationalism of fringe science, but by the
parapsychologists themselves, who proclaim their scepticism against the corporate world, official
science, the medical establishment, as well as against the claims made by other parapsychologists,
New Agers or spiritualists. In a context where ‘scientists engage in boundary-work to distinguish
science from nonscience, but also [ … ] a variety of other groups construct boundaries (and
consequently themselves as groups) not only with respect to more orthodox scientists and sceptics
but with respect to each other’, controversies provide the ground and the condition for existence of
the field (Hess, 1993: 145).
Looking at the case of how religious beliefs are assessed and challenged in the public arena may
also provide useful interpretative tools for addressing the role of controversies in technological
myths. Indeed, Robert Geraci (2008) has argued that there is a striking resemblance between
the AI myth and religious thinking. Studies in religion and media studies have shown that religious
beliefs and practices not only coexist with scepticism but may even require it (Taussig, 1998;
Walker, 2013). Although science is, of course, very different from religion, the way beliefs are
simultaneously invited and challenged in such contexts may provide useful keys to an understanding
of how beliefs in scientific theories can be characterized by similar dynamics. It is, in fact,
within a dialectic that the AI myth emerged and progressed, grounded in the incessant dispute
between its opponents and its supporters.
As we noted at the beginning of this article, technological myths are defined by their capacity to
be present and pervasive in a particular society and culture (Mosco, 2004). In this sense, controversies
are an integral and important part of the myth of the thinking machine because they
contribute to its liveliness, to its capacity to attract attention and space in scientific debates and
the public arena. The controversy on AI was inflated and reinforced in the public sphere by the
mass media. Jason Delborne (2011) has convincingly argued that scientific controversies are a
context through which specific paradigms, theories and fields construct their audience within the
scientific world as well as in the public and popular arena. Partly exploiting the allure of the
limelight for scientists, the popular press used sensationalistic representations of AI
projects to shape the popular perception of digital computers as ‘Electronic Super Brains’, even ‘faster than
Einstein’ (Russell et al., 2010: 9). This tendency is also evident in the pages of popular science
magazines, such as NS and SA, where controversy was one of the key ways through which the AI
myth was discussed, assessed and ultimately constructed.
Conclusion: The rise and persistence of the AI myth
The analysis presented in this article contributes to the study of the imaginary around digital
technologies by framing the emergence of AI technologies as a technological myth. Indeed, it is
difficult to comprehend the present cultural significance of computing technologies without
considering the impact of AI, which dominated a crucial period of their development between the
1950s and the 1970s. Yet, contrary to what the narrative of ‘AI winters’ implies, the myth of AI
did not cease to exert a strong influence after this period. In fact, this myth continues to characterize several aspects
of the contemporary imaginary connected to new media technologies. While the myth seemed to
have exhausted its credibility in the 1970s, it was by no means dead, and AI has survived
many winters, finding new surprising avenues and manifestations. While much recent scholarship
in new media studies has mostly focussed on the Web as the leading technological myth of our age
(Flichy, 2007), the symbolic and imaginative importance of the computer as a machine that
replicates the human mind in our present day is still one of the dominant aspects within the narrative
of ‘new media’.
On the one hand, new computing approaches and technologies reignited the hope to implement
general intelligence and attracted research funding. The wave of ‘expert systems’ in the 1980s
generated viable and profitable applications, but fostered at the same time novel expectations about
AI. In the same decade, the Japanese launched a new 10-year plan to build intelligent machines
called the ‘Fifth Generation’ project, followed by equivalent American and British efforts, which
all failed (Russell et al., 2010: 24–25). Neural networks also experienced a re-birth, generating the
so-called connectionist approach to AI as a major alternative to symbol manipulation. In more
recent decades, the availability of large amounts of data and major increases in computing power and
storage triggered remarkable advances in the areas of data mining, machine learning and natural
language processing, developing earlier AI methods into successful research enterprises.
On the other hand, the myth of AI still exerts its influence well beyond the technical sphere, and
is an essential component to a strand of philosophy called ‘transhumanism’, whose principal tenet
is the possibility of enhancing the human condition with advanced technologies, and that has been
particularly influential among computer technologists since the 1980s (Hayles, 1999). Following
Minsky’s speculations, roboticist Hans Moravec envisages that human life will be superseded by
intelligent machines by 2040 (Moravec, 1988). Futurist Raymond Kurzweil has developed the
theory of Technological Singularity, a moment in which AI will have overcome human capabilities
(2005). Extrapolating from alleged exponential advances in information technology, Kurzweil
imagines an impending radical change in civilization, when intelligent machines will merge with
humans to unleash unprecedented possibilities. More recently, philosopher Nick Bostrom has been
discussing the risks of super-intelligent agents emerging from AI research (2012). Robert M.
Geraci has aptly named this strand of beliefs ‘Apocalyptic AI’, showing that the thinking machines
promised by AI provided fertile ground to re-cast religious dreams of purity, perfection and
immortality, heralding the ‘victory of intelligent computation over the forces of ignorance and
inefficiency’, reaching computer-generated heavens (2008: 159).
While Apocalyptic AI is its most radical manifestation, the myth of AI also
resurfaces in the utopian undertones of more moderate theories. The spread of personal computing and
networking fuelled a plethora of new technological myths that re-cast the myth of AI in novel
forms, dominated by the idea of network-based collective intelligence. What has not occurred on
the large and clumsy mainframes of the mid-20th century will occur in the context of ubiquitous
computing and the densely connected communication networks of the 21st century. In this strand
of ‘networking AI’, authors follow the utopian visions fostered by previous advances in telecommunications
and consider the Internet as the final stage of human interconnectedness, in which
interactions between individuals and machines increase collective intelligence to unprecedented
levels. The Web is seen as a ‘global brain’ which can bring humans to a new level of consciousness
(Heylighen, 2004). Media theorist Pierre Lévy acknowledges the limitations of traditional AI and
proposes to transform the current ‘opaque global brain’ into a collective ‘Hypercortex’ (2011).
The three patterns that we have identified as characterizing the construction of the AI myth in
the SA and NS from the 1940s to the 1970s emerge distinctly in contemporary versions of the AI myth,
too. Discursive shifts continue to epitomize the way AI-related research is inserted into a wider
imagination that tends to humanize technology as well as to connect it with superhuman or even
supernatural powers (El Kaliouby and Robinson, 2004). Likewise, the rhetorical shift from the
examination of the present state towards the imagination of future horizons and developments still
characterizes contemporary AI myths.4 Finally, the controversies about the possibility of creating
‘intelligent machines’ are still very much alive, as the extent of contemporary debate about the possibility
of AI demonstrates.
Our examination of the AI myth, therefore, is also meant as an encouragement to give more
emphasis to the way this cultural vision reverberates in contemporary discourses on digital
technology and culture. Technological myths that today play a paramount role in the discussion of
digital media and culture, such as transhumanism and singularity, derive much of their claims and
tenets from the discourse which emerged in the 1940s to 1970s in connection to research on AI.
Furthermore, the myth of AI finds fertile ground in the dream of collective intelligence, in which
the idea of the thinking machine interacts and is combined in many ways with the imaginary of
networked communication and the Web (Flichy, 2007). This imaginary is largely based, just like
the AI myth emerged in the post-war period, on the recurrence of three distinctive patterns: The use
of ideas and concepts from other fields and contexts to describe the functioning of AI technologies,
the mingling of the examination of present research results with the imagination of potential
future applications and horizons of research, and the strong relevance of controversies in public
discussions of the concept and its application.
As Park, Jankowski and Jones observe, ‘the history of new media presents us with something
more significant than merely another opportunity to see familiar distinctions being reasserted’; it
also provides us with new insights to look at the present configurations of digital culture (2011: xi).
The important role played by the myth of AI offers relevant insights to better understand how
technological myths contribute to shape the social presence of today’s digital media.
Notes
1. The case of Moore’s Law is a good example within the field of computer science for the ways projections
of future accomplishments may also act as an incitation for specific research communities to project their
expectations towards certain standards and, interestingly, also within defined boundaries. See, among
others, Brock and Moore (2006).
2. On the role of computer chess software in shaping research agendas and expectations within the AI
community, see Ensmenger (2012).
3. It is worth noting, in this regard, that some AI discourses, such as the Singularity, have been often regarded
by critics as pseudo-science, not too differently from parapsychology.
4. See, among many possible instances, the numerous articles on AI-related technologies which appeared in
the ‘Future Thinking’ columns featured on the BBC’s website (http://www.bbc.com/future/columns/
References
Albus JS and Evans JM (1976) Robot systems. Scientific American 234: 76–86.
Anon. (1958) Another type of reflex? New Scientist 3: 32.
Anon. (1960) Cancer and cybernetics. New Scientist 8: 1078.
Anon. (1971) Japanese robot has real feeling. New Scientist 52: 90.
Anon. (1973) Computer vs grand masters. New Scientist 58: 567.
Ballatore A (2014) The myth of the digital earth between fragmentation and wholeness. Wi: Journal of Mobile
Media 8: 1–20. Available at: http://wi.mobilities.ca/myth-of-the-digital-earth/ (accessed 4 June 2017).
Ballatore A and Natale S (2016) E-readers and the death of the book: Or, new media and the myth of the
disappearing medium. New Media & Society 18: 2379–2394.
Bartha P (2013) Analogy and analogical reasoning. In: Zalta EN (ed), The Stanford Encyclopedia of Philosophy.
Stanford, CA: Metaphysics Research Lab. Available at: https://plato.stanford.edu/archives/
win2016/entries/reasoning-analogy (accessed 5 June 2017).
Barthes R (1957) Mythologies. Paris, France: Editions du Seuil.
Besel RD (2011) Opening the ‘Black Box’ of climate change science: Actor-network theory and rhetorical
practice in scientific controversies. Southern Communication Journal 76: 120–136.
Bloor D (1976) Knowledge and Social Imagery. London, UK: Routledge.
Boddy W (2004) New Media and Popular Imagination: Launching Radio, Television, and Digital Media in
the United States. Oxford, UK: Oxford University Press.
Borup M, Brown N, Konrad K, et al. (2006) The sociology of expectations in science and technology.
Technology Analysis & Strategic Management 18: 285–298.
Bostrom N (2012) The superintelligent will: Motivation and instrumental rationality in advanced artificial
agents. Minds and Machines 22: 71–85.
Brock DC and Moore GE (2006) Understanding Moore’s Law: Four Decades of Innovation. London, UK:
Chemical Heritage Foundation.
Campbell M, Hoane AJ, and Hsu F (2002) Deep blue. Artificial Intelligence 134: 57–83.
Cavarero A (2000) Relating Narratives: Storytelling and Selfhood. London, UK: Routledge.
Ceccarelli L (2011) Manufactured scientific controversy: Science, rhetoric, and public debate. Rhetoric &
Public Affairs 14: 195–228.
Corn JJ (1986) Imagining Tomorrow: History, Technology, and the American Future. Cambridge, MA: MIT Press.
Crevier D (1993) AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: Basic Books.
Delborne JA (2011) Constructing audiences in scientific controversy. Social Epistemology 25: 67–95.
Dourish P and Bell G (2011) Divining a Digital Future: Mess and Mythology in Ubiquitous Computing.
Cambridge, MA: MIT Press.
Dreyfus HL (1965) Alchemy and Artificial Intelligence. Santa Monica, CA: Rand Corporation.
Dreyfus HL (1992) What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.
Ekbia HR (2008) Artificial Dreams: The Quest for Non-biological Intelligence. Cambridge, MA: Cambridge University Press.
El Kaliouby R and Robinson P (2004) Mind reading machines: Automated inference of cognitive mental
states from video. In: 2004 IEEE international conference on systems, man and cybernetics, IEEE, pp.
Engelhardt HT and Caplan AL (1987) Scientific Controversies: Case Studies in the Resolution and Closure of
Disputes in Science and Technology. Cambridge, MA: Cambridge University Press.
Ensmenger N (2012) Is chess the drosophila of artificial intelligence? A social history of an algorithm. Social
Studies of Science 42: 5–30.
Flichy P (2007) The Internet Imaginaire. Cambridge, MA: MIT Press.
Floridi L (2008) Artificial intelligence’s new frontier: Artificial companions and the fourth revolution.
Metaphilosophy 39: 651–655.
Funkhouser GR (1969) Levels of science writing in public information sources. Journalism Quarterly 46:
Geraci RM (2008) Apocalyptic AI: Religion and the promise of artificial intelligence. Journal of the American
Academy of Religion 76: 138–166.
Gieryn TF (1983) Boundary-work and the demarcation of science from non-science: Strains and interests in
professional ideologies of scientists. American Sociological Review 48: 781–795.
Glanville W (1964) Roads and traffic in 1984. New Scientist 22: 684.
Haugeland J (1985) Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.
Hayles NK (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics.
Chicago, IL: University of Chicago Press.
Hess DJ (1993) Science in the New Age: The Paranormal, Its Defenders and Debunkers, and American
Culture. Science and Literature. Madison, WI: University of Wisconsin Press.
Heylighen F (2004) Das Globale Gehirn als neue Utopia. In: Maresch R and Ro¨tzer F (eds), Renaissance der
Utopie: Zukunftsfiguren des 21. Jahrhunderts. Frankfurt am Main: Suhrkamp, pp. 92–112.
Kemeny JG (1955) Man viewed as a machine. Scientific American 192: 58–67.
King GW (1952) Information. Scientific American 140: 132–148.
Kline RR (2006) Cybernetics, management science, and technology policy: The emergence of ‘information
technology’ as a keyword, 1948–1985. Technology and Culture, The Johns Hopkins University Press and
the Society for the History of Technology 47: 513–535.
Kurzweil R (2005) The Singularity Is Near: When Humans Transcend Biology. London, UK: Penguin Books.
Levy P (2011) The Semantic Sphere 1: Computation, Cognition and Information Economy. London, UK:
Lighthill MJ (1964) Aviation in 1984. New Scientist 22: 328–330.
Lighthill J (1973) Artificial intelligence: A general survey. In: Artificial Intelligence: A Paper Symposium,
Science Research Council, UK.
Macqueen KG (1963) Cybernetics and haute couture. New Scientist 17: 450–452.
Martin CD (1993) The myth of the awesome thinking machine. Communications of the ACM 36: 120–133.
Marvin C (1988) When Old Technologies Were New: Thinking about Electric Communication in the Late
Nineteenth Century. New York, NY: Oxford University Press.
McCarthy J (2006) A proposal for the dartmouth summer research project on artificial intelligence, August
31, 1955. AI Magazine 27: 12–14.
McCorduck P (1979) Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial
Intelligence. San Francisco, CA: W.H. Freeman.
McCulloch WS and Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. Bulletin of
Mathematical Biology 5: 115–133.
Messeri L and Vertesi J (2015) The greatest missions never flown: Anticipatory discourse and the ‘projectory’
in technological communities. Technology and Culture 56: 54–85.
Michie D (1972) Programmers’ gambit. New Scientist 55: 32–329.
Moore EF (1964) Mathematics in the biological sciences. Scientific American 211: 148–164.
Moravec H (1988) Mind Children. Cambridge, MA: Cambridge University Press.
Morgan K (1961) The future of farm automation. New Scientist 11: 581–583.
Mosco V (2004) The Digital Sublime: Myth, Power, and Cyberspace. Cambridge, MA: MIT Press.
Natale S (2014) Introduction: New media and the imagination of the future. Wi: Journal of Mobile Media 8:
Natale S (2016) Unveiling the biographies of media: On the role of narratives, anecdotes and storytelling in
the construction of new media’s histories. Communication Theory 26: 431–449.
Natale S and Balbi G (2014) Media and imaginary in history: The role of the fantastic in different stages of
media change. Media History 20: 203–218.
Newell A, Shaw JC, and Simon HA (1958) Chess-playing programs and the problem of complexity. IBM
Journal of Research and Development 2: 320–335.
Oldenziel R (2006) Introduction: Signifying semantics for a history of technology. Technology and
Culture, The Johns Hopkins University Press and the Society for the History of Technology 47:
Ortoleva P (2009) Modern mythologies, the media and the social presence of technology. Observatorio (OBS)
Journal 3: 1–12.
Park DW, Jankowski N, and Jones S (2011) The Long History of New Media: Technology, Historiography,
and Contextualizing Newness. Digital Formations. New York, NY: Peter Lang.
Pinch TJ and Bijker WE (1987) The social construction of facts and artifacts: Or how the sociology of science
and the sociology of technology might benefit from each other. In: Bijker WE, Hughes TP, and Pinch TJ
(eds), The Social Construction of Technological Systems: New Directions in the Sociology and History of
Technology. Cambridge, MA: The MIT Press, pp. 17–50.
Ridenour LN (1951) A revolution in electronics. Scientific American 185: 13–17.
Ridenour LN (1952) The role of the computer. Scientific American 187: 116–130.
Robertson M (1975) What computer can learn from children. New Scientist 68: 210–211.
Russell SJ, Norvig P, and Canny JF (2010) Artificial Intelligence: A Modern Approach. Saddle River, NJ: Prentice Hall.
Samuel AL (1959) Some studies in machine learning using the game of checkers. IBM Journal of Research
and Development 3: 210–229.
Searle JR (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3: 417–457.
Selfridge OG and Neisser U (1960) Pattern recognition by machine. Scientific American 203: 60–68.
Stevenson M (2016) The cybercultural moment and the new media field. New Media & Society 18:
Sturken M, Thomas D, and Ball-Rokeach S (2004) Technological Visions: The Hopes and Fears That Shape
New Technologies. Philadelphia, PA: Temple University Press.
Taube M (1961) Computers and Common Sense: The Myth of Thinking Machines. New York, NY: Columbia University Press.
Taussig M (1998) Viscerality, faith, and skepticism: Another theory of magic. In: Dirks NB (ed), In Near
Ruins: Cultural Theory at the End of the Century. Minneapolis: University of Minnesota Press, pp.
Taylor WK (1960) Adaptive robots. New Scientist 8: 846–848.
Turing A (1950) Computing machinery and intelligence. Mind 59: 433–460.
Ulam SM (1964) Computers. Scientific American 211: 202–216.
Vanobberghen W (2010) The marvel of our time. Media History 16: 199–213.
Voysey H (1974) Programming without programmers. New Scientist 63: 390–391.
Voysey H (1975) Slaves to a computerised task-master. New Scientist 68: 684–685.
Walker D (2013) The Humbug in American religion: Ritual theories of nineteenth-century spiritualism.
Religion and American Culture: A Journal of Interpretation 23: 30–74.
Walter WG (1950) An imitation of life. Scientific American 182: 42–45.
Wiener N (1948) Cybernetics. Paris: Hermann.
Zobrist AL and Carlson FR (1973) An advice-taking chess computer. Scientific American 228: 92–105.
Simone Natale is a Lecturer in Communication and Media Studies at Loughborough University, UK. His
research focuses on the relationships between media and the imagination, on digital media and culture, and on
media archaeology. He is the author of a monograph, Supernatural Entertainments: Victorian Spiritualism
and the Rise of Modern Media Culture (Pennsylvania State University Press, 2016) and of articles published
in journals such as the Journal of Communication, New Media and Society, Communication Theory, and
Media, Culture and Society.
Andrea Ballatore is a Lecturer in GIS and Big Data Analytics at the Department of Geography at Birkbeck,
University of London. His research interests include Internet geography, volunteered geographic information,
and new media studies. His current work focuses on how digital geographic information is consumed online
through search engines and crowdsourcing platforms.
Of Defunct Satellites and Other Space Debris: Media Waste in the Orbital Commons
Defunct satellites and other technological waste are increasingly occupying
Earth’s orbital space, a region designated as one of the global commons.
These dilapidated technologies that were commissioned to sustain the
production and exchange of data, information, and images are an extraterrestrial
equivalent of the media devices which are discarded on Earth.
While indicating the extension of technological momentum in the shared
commons of space, orbital debris conveys the dark side of media materialities
beyond the globe. Its presence and movements interfere with a gamut
of governmental, commercial, and scientific operations, contesting the
strategies of its management and control and introducing orbital uncertainty
and disorder in the global affairs of law, politics, economics, and
techno-science. I suggest that this debris formation itself functions as media
apparatus —it not only embodies but also exerts its own effects upon the
material and social relations that structure our ways of life, perplexing
dichotomies between the common and owned, governed and ungovernable,
wealth and waste. I explore these effects of debris, framing its situation
in the orbital commons as a vital matter of concern for studies of the human
relationship with media technologies and their waste.

Keywords: outer space, orbital debris, media technology, waste, global commons, law,
policy, governance, environment, postorbital futures

Katarina Damjanov, School of Social Sciences, University of Western Australia, M257, 35 Stirling
Highway, Crawley 6009, Perth, Australia.
Science, Technology, & Human Values 2017, Vol. 42(1) 166-185. © The Author(s) 2016.
In 1957, an earth-born object made by man was launched into the universe,
where for some weeks it circled the earth according to the same laws of
gravitation that swing and keep in motion the celestial bodies—the sun, the
moon, and the stars. To be sure, the man-made satellite was no moon or star,
no heavenly body which could follow its circling path for a time span that to
us mortals, bound by earthly time, lasts from eternity to eternity. Yet, for a
time it managed to stay in the skies; it dwelt and moved in the proximity of
the heavenly bodies as though it had been admitted tentatively to their sublime
company. (Arendt 1998, 1)
The arrival of the first artificial satellite in Earth’s orbital space marked a
groundbreaking moment in human history. Arendt (1998, 1) declared
Sputnik’s exploit ‘‘second in importance to no other’’ event, a promethean
occasion in which techno-science upended the terrestrial confines of the
human condition. But the moon, stars, and other heavenly bodies were not
Sputnik’s only companions: the four-ton, thirty-meter long final stage of its
launcher inadvertently joined the satellite in orbit during its flight
(Australian Space Academy n.d.). This piece of hardware became the first
object of technological waste left in the wake of human advances in
outer space—what is today, for want of an official name, referred to as
space debris or space junk. Human-made technologies and their waste have
become increasingly present beyond the globe; they are to be found on the
surfaces and in the orbits of celestial bodies across our solar system, but
their highest concentration is by far in the orbital environs of our own
planet. Sputnik de-orbited and burned up in the atmosphere three months
after its launch, but more than 6,000 other satellites have followed its trail,
and many of them have been detained by the laws of celestial mechanics to
circle the Earth long after they ceased to operate.1 Accompanied by fragments
of their launchers and a range of items intentionally or accidentally
discarded during space missions, they have been further disintegrating and
colliding into an ever-accumulating formation of orbital debris. Many now
orbit the globe at speeds approaching 50,000 km/hr. Space agencies are
currently able to track about 22,000 pieces that are larger than ten centimeters
in diameter, and it is estimated that there are also about 500,000
fragments between one and ten centimeters and millions of smaller particles
that are all too minute to be pursued (Liou 2013). Depending on their
altitude, these remainders of human space exploration could keep ‘‘dwelling
and moving’’ around the earth for years, or even millennia, as a
substantial testament to our ventures.
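The orbital mechanics behind these figures can be made concrete with a back-of-the-envelope calculation. The sketch below (the constants are standard values; the 400 km altitude is an illustrative low-Earth-orbit choice, not drawn from the article) computes circular orbital speed and period; note that the ~50,000 km/hr figure above is best read as a relative closing speed, since two objects in crossing orbits can approach at nearly twice an individual object's orbital speed:

```python
import math

# Standard values: Earth's gravitational parameter (m^3/s^2) and mean radius (m).
MU_EARTH = 3.986004418e14
R_EARTH = 6_371_000.0

def orbital_speed(altitude_m: float) -> float:
    """Circular orbital speed (m/s) at a given altitude above Earth's surface."""
    r = R_EARTH + altitude_m
    return math.sqrt(MU_EARTH / r)

def orbital_period(altitude_m: float) -> float:
    """Circular orbital period (s), from Kepler's third law."""
    r = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(r**3 / MU_EARTH)

# A debris fragment at ~400 km altitude moves at roughly 27,600 km/h,
# completing one revolution of the Earth about every 92 minutes.
v_kmh = orbital_speed(400_000) * 3.6
t_min = orbital_period(400_000) / 60
print(f"speed: {v_kmh:,.0f} km/h, period: {t_min:.0f} min")
```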
Defunct satellites and other orbital debris, these remnants of technologies,
which once sustained the global production and exchange of data,
information, and images, are an extraterrestrial equivalent of the media and
communication devices that are discarded on Earth—better known as electronic
or e-waste. They revolve around the planet as the dilapidated outpost
of telecommunication and broadcasting networks, geospatial positioning
systems, remote sensing and climate monitoring facilities, mapping, imaging,
surveillance, and defense complexes. Scattered across millions of cubic kilometers,
orbital debris constitutes the largest known waste formation in the
universe and its creation is symptomatic of the increasing centrality of
media technologies to human ways of life. Its presence represents the extraterrestrial
effects of strategic investments in technical media—advancing
them as a key platform from which to know more, have more, and be more.
Similar to e-waste in constituent materials and performance, orbital debris
is the end result of the construction, management, and disposal of media
technologies and attests to the global environmental depredations of contemporary
techno-industrial processes (Maxwell and Miller 2008; Parikka
2011; Sterne 2007).
Orbital media waste provides an extraterrestrial perspective from which
to debunk the myth of the ''immateriality'' of the contemporary ''high-tech''
mode of the capitalist pursuit of knowledge and wealth that is
grounded in the supposedly ‘‘thing-less’’ production, exchange, and consumption
of intangible data but nevertheless depends upon the material
economies of extraction and depletion of natural resources. Media technologies
and their waste could be said to occupy orbital space as a part of the
historical arc that Jussi Parikka has framed as ‘‘medianature,’’ in that they
have facilitated the extraterrestrial dimensions of this ‘‘continuum between
mediatic apparatuses and their material contexts in the exploitation of
nature’’ (2011, 3). And like e-waste disposed of in landfills that becomes
an organic part of ‘‘nature’’ (Gabrys 2011; Parikka 2011), space waste has
become a constituent part of our orbital environment. As the waste footprint
marking the human technological conquest of space, orbital debris conveys
the dark side of our medianatures beyond the globe.
Rather than ending up on the outskirts of cities, or recycling slums in the
‘‘developing world,’’ orbital debris resides in an inhuman extraterrestrial
region that in 1967 was declared ‘‘the province of all mankind’’ by the
Treaty on Principles Governing the Activities of States in the Exploration
and Use of Outer Space, including the Moon and Other Celestial Bodies
(United Nations 1967), also known as the Outer Space Treaty (OST). This
designation of international law, which codified the changed circumstances
of the human, previously Earth-bound condition, situates objects of debris
in an area conceived as one of the ‘‘global commons,’’ a domain beyond
territorial and ownership claims and whose exploration ‘‘shall be carried out
for the benefit and interest of all countries’’ (United Nations 1967, 13,
article 1). The concept of the commons—that which is shared, whether as
given by nature or fashioned by humans—is central to inquiries into the
political economies surrounding the global restructuring of states, markets,
and communities around vital resources (de Angelis 2001, 2007; Dyer-Witheford
1999; Hardin 1968; Hardt and Negri 2009). Their location in
the commons of outer space places objects of orbital debris at the crux of
contemporary concerns about the productive and destructive effects of
media technologies on the ongoing transformation of a shared world. Yet,
international space law is blind to their existence. The OST and accompanying
agreements sought to enforce patent laws over space technologies and
demarcated all ‘‘man-made’’ items beyond the globe as ‘‘space objects’’
that are property of the governments or corporations on whose behalf they
were launched—irrespective of whether they are operational and their components
still joined together (United Nations 1972). Since the OST neither
addresses debris as a distinctive form of space object nor restricts
launches, media technologies and their waste will continue to populate
the orbital commons. Their progressive accumulation might cause damage
on a planetary scale, a situation similar to Garrett Hardin’s ‘‘tragedy of the
commons,’’ a calamity brought about by unlimited and unregulated access
to jointly used resources and which consequently ''brings ruin to all''
(Hardin 1968).

Remote and invisible to international law, orbital debris is making significant
impressions on human societies. Initially deemed to be of no particular
importance, its growth has gradually come ‘‘to matter’’ (Barad
2003). In the late 1970s, Donald Kessler’s (2009) pioneering study suggested
that its continuing accumulation increases the probability of further
collision and fragmentation and could make future uses of orbital regions
increasingly difficult. By the twenty-first century, orbital debris was identified
as a substantial security risk for mediatic practices and services that
depend upon the uninterrupted functioning of multibillion dollar satellite
infrastructure. Military–industrial complexes have been enlisted to counter
its movements as a key focus of the Space Situational Awareness (SSA)
agenda that stipulates global attentiveness to near-Earth objects. Specialized
centers such as National Aeronautics and Space Administration’s
(NASA) Orbital Debris Program Office and the European Space Agency’s
(ESA) Space Debris Office have been established to facilitate its remote,
dirt-free management and national defense departments have initiated similar
programs such as the US Space Surveillance Network. In 2007, the
United Nations General Assembly endorsed the comprehensive Space Debris
Mitigation Guidelines (United Nations 2010), and in 2010, the United
States of America’s National Space Policy acknowledged orbital debris as a
crucial link in the maintenance of global security. Consequently, the juridical
presence and prospects of these waste objects have become topics of
extensive conjecture (see, e.g., Jakhu 2007; Weeden 2011), as are technical
scenarios required for their potential removal (see, e.g., Anselmo and Pardini
2008; Bonnal, Ruault, and Desjean 2013). Graphic images of ‘‘orbital
ruins’’ are widely circulated (Parks 2013), news headlines advise that there
is an ‘‘urgent need to remove space debris’’ (Amos 2013), these objects are
portrayed in blockbusters such as Wall-E (2008) and Gravity (2013),
publicly tracked via applications such as Heavens Above (n.d.), and
destroyed in video games such as Space Debris (2000). While satellite
infrastructure has stoked the global courses of politics, economics, science,
defense, media, and everyday life, its functioning is now profoundly
affected by the waste of the very technologies that made it possible.
Having captured the attention of its global ‘‘public,’’ orbital debris is now
cemented into governmental and epistemic registers and fabrics of cultural
imaginaries not just as a ‘‘matter of fact’’ but as a common ‘‘matter of
concern’’ (Latour 2004).
One of the ‘‘effects’’ of our technological evolution, the unceasing accumulation
of orbital debris has itself thus far outstripped the development of
related systems of law and knowledge able to confront it effectively. Interfering
with a gamut of governmental, commercial, and scientific media
operations and restraining our future prospects in space, orbital debris
brings uncertainty and disorder into politicoeconomic systems of power
and control. Its disturbing presence thus indicates that it is not only technologies
that are inscribed with an affinity toward interference (Hacking
1986) but also their waste. In this sense, orbital technologies persist as a
global media apparatus after they become defunct: their waste not only
continues to embody the material and social relations that structure our
ways of life but also continues to shape those relations.
This paper probes the sociomaterial effects produced by media technologies
and their waste, suggesting that their unique situation in the orbital
commons demands framing as a vital politico-ethical matter, one raising
many concerns for studies of the human relationship with technology.
Not least of these are questions of responsible technological innovation and
the creation of environmentally informed policy approaches to the regulation
and management of media waste, and our attitudes toward the idea of
the ‘‘commons.’’ The formation of orbital debris is simultaneously the
result of the extension of human medianatures beyond the planetary boundaries
and a resource for the further development of this historical interplay
between media technologies and environments in which they operate, a
dynamic conferred by its location in the commons. As such, orbital media
waste not only has the potential to participate in the prescription of legal,
political, and social action but may also contribute to the technological
alteration of the human condition itself.
Responding to the interfering presence of orbital debris is not merely
housekeeping on an extraplanetary scale; it also requires fundamental adjustments
to the ways in which we consider and express our relations with media
technologies and their waste. The aim of this paper is to offer some preliminary
thoughts on this problem, to begin marking out the conceptual coordinates
of a field of investigation that revaluates the empirical problem of
orbital debris. It proposes that both the matter and effects of this waste need
to be negotiated with regard to its situation in the orbital commons, circumstances
that may alter our capacity to decide upon cooperative responses to
the consequences of our medianatures. Accepting Parikka’s invitation to
consider media waste as a ‘‘living dead material’’ that ‘‘is not just waste but
also a form of life’’ (2011, np), compels policy and collective action to take
into account the active participation of orbital debris in determining the future
direction of human endeavors. To this end, this paper frames orbital debris as
part of our evolving medianatures, suggesting that the ways in which we
articulate our relationship with this waste should take into account its social
and material contexts, in particular its place in the commons. A pressing
subject of productive contradictions that confounds our current approaches
to the regulation, management, and control of wasted media technologies,
orbital debris is a ‘‘political question of the first order’’ (Arendt 1998, 3). It
interferes in distinctions between common and owned, governed and unruly,
security and risk, and demands that we keep pace with the techno-scientific
momentum of our media advances on and off the Earth.
Governing the Commons of Waste
The operational lives of satellites terminate for different reasons. Sometimes
exposure to the unforgiving extraterrestrial elements or old age causes
their vital functions to stop. Occasionally meteorites will strike them, or
they will collide with another human-made object and explode, or perhaps
their mission will be completed and they will simply be abandoned. Whatever
their particular fate, the transition from being ‘‘operational’’ to being
‘‘defunct’’ is not a change recognized in the statutes of the OST. Legally,
each fragment—down to a minute speck of paint—remains classified as a
space object (United Nations 1972), the exclusive property of its rightful
owner. However, once satellites stop performing their intended functions
and their connections with ground control collapse, they become part of a
waste formation whose behavior is difficult to police, much less establish
ownership rights over its individual constituents. Sovereign and property
rights can be exercised over functional satellites—they can be steered, made
to de-orbit, and even self-destruct—but their waste evades any issued command;
the only law wasted space objects obey is that of physics, the centripetal
forces, and the gravity that retains them in orbit as ‘‘living dead.’’2
While the growth of media waste on Earth increasingly poses a challenge to
governance—with some rogue objects of e-waste slipping from the radar
and becoming ''non-governable'' (Rossiter 2009, 37)—in orbit, all media
waste is not only unregulated and thus ungoverned but essentially impossible
to ''govern.'' Although it technically never escapes its lawful owners,
it is literally beyond the reach of the global grid of governance. As the OST
neither prohibits the creation of debris nor requires owners to recover it,
the status of its individual fragments as ''property'' is practically made
redundant and dispensed with, only to reemerge as the shared, collective
problem of ''waste in the commons.''
The accumulating orbital debris signposts the curious etymological
interplay between the words waste and commons; in Middle English,
‘‘waste’’ (from the Latin vastus, meaning empty or desolate) designated
the unoccupied, uncultivated land that was to be shared as commons. Just as
the commonly shared European ‘‘wastelands’’ were gradually enclosed and
transformed into property between the sixteenth and eighteenth centuries—
an acquisition that was pivotal in enabling the initial, primitive accumulation
of capital (Marx 2007)—orbital space has been similarly
appropriated by states and corporations. The tendency to enclose shared
resources and environments is apparently continuous with the logics of
capital’s imperative to grow (de Angelis 2001), and the placement of media
technologies in the orbital commons seems a clear extension of this historical
trajectory. The technological progress that supports capitalism’s covetous
tendencies is accompanied by destruction (of technical objects, of
environment, and our former socioeconomic ways of life); pollution and
waste have long been the reminders of the tensions engendered by exploitation
of the commons, and in many ways, space debris is little different.
But in the vastus of space, it does invoke a unique set of common concerns
and perhaps even acquires the qualities of a commons in its own right.
There is, at least, a clear
suggestion that the environmental impressions left by our global medianatures
in outer space become bound up with the production and destruction
of the commons.
The commons context of space waste is both predetermined by its orbital
position and a collective expression of the entanglement of humans and
technologies engaged in global media practices. Its accumulation is facilitated
by states and markets but also indirectly by all consumers who depend
on the services that operational space objects provide—from satellite-supported
television, Internet, and telephony, via Google Maps and the
Global Positioning System (GPS), to meteorological data and Hubble Telescope
images. As an appendix to the Earth, this waste formation could thus
be seen as a collective, species-defining activity, suggesting the extraterrestrial
scope of the proposition of an ‘‘anthropocene,’’ the idea that the
impact of our techno-industrial activities has pushed the Earth into a new
geological period—the human age. Owned to the last minuscule particle,
orbital debris undermines the very notion of the orbit’s commonness, yet
not only its accumulation but also its subsequent ‘‘aftereffects’’ are something
we now all have in common, something which as Hardt and Negri put
it ‘‘we all share and participate in’’ (2009, viii). As such, space waste could
be seen as emerging from the mediatic enclosure of the orbital commons as
a kind of a ‘‘bad commons’’ that encumbers the orbit as a globally shared
burden. While its existence validates human medianatures as an extraplanetary
force and frames it in terms of a collective responsibility toward the
planet, this kind of species-oriented explanation might induce a paralysis in
policy and action: if it is a common problem, then it is the problem of no
one. The bad commons of orbital debris highlights the problems of participation
in the technologically driven processes of production and consumption
that may redefine our conceptualizations of human collectivities,
though it might only have solutions that lie outside our current forms of
administration and organization. While interfering with our demands for
media services on a global scale, it may prompt new questions of joint
ownership and governance, not merely with respect to economic equations
or juridical redefinitions but also with respect to politico-ethical decisions
concerning the ways in which we partake in a shared media.
Lying at the physical and conceptual limits of how we conceive of the
concurrency of wealth and waste, the ‘‘interference’’ that orbital debris
introduces into terrestrial and orbital trajectories of medianatures is essentially
transformative; it distils what we share in common. In their Commonwealth,
Hardt and Negri (2009) suggest that the complex material–social
context imbuing the commons in contemporary culture revolves around an
articulation of ‘‘the common’’ itself. In their definition, the arena of the
common encompasses not only ‘‘the common wealth of the material
world—the air, the water, the fruits of the soil, and all nature’s
bounty’’—but also ‘‘those results of social production that are necessary
for social interaction and further production, such as knowledges, languages,
codes, information, affects, and so forth’’ (2009, viii). This conception
of the common, as they suggest, ‘‘does not position humanity separate
from nature, as either its exploiter or its custodian, but focuses rather on the
practices of interaction, care, and cohabitation in a common world, promoting
the beneficial and limiting the detrimental forms of the common’’
(2009, viii). Hardt and Negri suggest the current composition of the common
primarily involves the production and circulation of communicative
commonalities, yet the shared ambits of social life increasingly rely upon
the material infrastructure of technical media, such as satellites, to sustain
them. The very materiality of space waste suggests that, contra Hardt and
Negri, the configuration of a common exceeds the immaterial dimension of
the social interaction facilitated by media and communication technologies.
It indicates how the ''matter'' of these technologies, situated between
the exploitation of natural resources and the sustenance of social life,
becomes decisive to the creation and maintenance of shared ecological and
socioeconomic ambits. At once an outcome and a platform for further
mediation of the common, the accumulation of space debris stimulates new
modes of human–technological collectivities and cultivates new morphologies
of the ‘‘global’’ and the commons beyond terrestrial Earth.
While Hardt and Negri celebrate the potentialities of the unrestrained
production of social life, the unremitting physical growth of space waste is
apparently in need of restraint. Hardin (1968) identified the enlargement of
the human population as the major cause of the tragedy of the commons and
the reason why open access to the commons should be fenced by either
governmental regulation or the sanctity of private property. (In this he
advocated the restriction of human breeding as a grand solution that could
ease the overuse of shared resources.) In the case of the orbital commons,
which are empty of human inhabitants but overcrowded by nonhumans that
are owned by states and corporations and lacking in law and regulation, the
question arises as to what status and governance should be given to objects
of orbital debris? Their legal rank is that of ‘‘thinghood,’’3 and they cannot
be governed as either subjects or corporations. Yet what governmental
mechanisms could be implemented over a population of nongovernable
things? The common dilemma that orbital debris represents might be imagined
as a ‘‘tragedy’’ common to all humankind, but it is also represented as
an unmanageable ‘‘population of objects’’ whose extortions can only be
addressed collectively. The problem of waste in the orbital commons is
potentially instrumental in envisioning new modes of collaborative governance,
where legal and ethical considerations of protection and preservation
might extend the innate inclusiveness of the common to nonhuman participants.
This, however, implies that the conduct of debris must be managed, rather
than dictated, and its formation (both as a noun and as a verb) accepted as a
decisive trait of our orbital medianatures.
The Logistic Affairs of Debris
Satellite technologies are crucial for sustaining global media capital to such
a degree that their own life (and afterlife) becomes subjected to logistical
approaches and mechanisms that seek to secure their efficient and undisturbed
distribution and operation. Their orbital presence and movements are
among the main foci of a global Space Situational Awareness agenda,
whose strategic purpose is to compile ‘‘comprehensive knowledge of the
population of space objects, of the space environment, and of the existing
risks and threats’’ (European Security and Defence Assembly 2009). Their
logistical management involves tracking satellites from their inception and
takeoff to the end of their missions and thereafter as waste. Satellite
launches require institutional authorization, allocation of an orbital niche,
and technical and procedural compliance with national and international
protocols regarding their deployment; once in orbit, their activities are
continuously monitored, assessed, and adjusted by their ground control. It
is a strange destiny for human-made satellites: dispatched to collect and
disseminate various data and information, they are themselves a target of
such procedures even after they break down. Yet once they become waste,
they are subjected to administrative and executive procedures separate from
those that managed them while they were functional. Specialized divisions
of space agencies monitor and assess debris. The Inter-Agency Space Debris
Coordination Committee coordinates the distinct approaches and instruments
needed to gather, evaluate, store, and distribute information on debris conduct.
The data on space waste are collected by ground and space-based
telescopes, debris radars, and through examination of the surfaces of spacecraft
that have returned to Earth (NASA Orbital Debris Program Office
2009). After being processed by analytic algorithms, this information is
catalogued in databases such as the ESA’s Database and Information System
Characterizing Objects in Space or the US military operated Space
Surveillance Network (SSN). Through such programs, the afterlife of orbital
media undergoes the kind of surveillance that it facilitated while operational,
itself becoming an object of extensive scrutiny.
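Catalogues such as the SSN's distribute their tracking data in the fixed-column two-line element (TLE) format. The sketch below extracts orbital elements from line 2 of a TLE; the element set is illustrative, loosely approximating Vanguard 1 (the oldest satellite still in orbit), not live tracking data:

```python
def parse_tle_line2(line2: str) -> dict:
    """Extract orbital elements from line 2 of a two-line element set.

    Fixed-column layout: catalog number cols 3-7, inclination cols 9-16,
    RAAN 18-25, eccentricity 27-33 (implied leading decimal point),
    mean motion cols 53-63 (revolutions per day). Checksum not verified.
    """
    return {
        "catalog_number": int(line2[2:7]),
        "inclination_deg": float(line2[8:16]),
        "raan_deg": float(line2[17:25]),
        "eccentricity": float("0." + line2[26:33].strip()),
        "mean_motion_rev_per_day": float(line2[52:63]),
    }

# Illustrative element set only, approximating Vanguard 1 (catalog no. 5).
line2 = "2 00005  34.2500  69.2000 1845686 158.0000 217.0000 10.85143005 35353"
elements = parse_tle_line2(line2)
# Mean motion in rev/day gives the orbital period directly.
period_min = 1440.0 / elements["mean_motion_rev_per_day"]
print(f"inclination {elements['inclination_deg']}°, period {period_min:.1f} min")
```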
The interfering orbital ‘‘population’’ of wasted technologies has the
potential to disrupt myriad human activities and so information about its
behavior finds wide application on Earth. The SSN, for example, is used to:

1. Predict when and where a decaying space object will re-enter the Earth's atmosphere;
2. Prevent a returning space object, which to radar looks like a missile, from triggering a false alarm in missile-attack warning sensors of the U.S. and other countries;
3. Chart the present position of space objects and plot their anticipated orbital paths;
4. Detect new man-made objects in space;
5. Produce a running catalog [sic] of man-made space objects;
6. Determine which country owns a re-entering space object;
7. Inform NASA whether or not objects may interfere with the space shuttle or …space station orbits. (US Strategic Command 2008)
Space Track, another provider of ‘‘space situational awareness services
and information,’’ supplies data about the origins, physical properties, and
pathways of debris to ‘‘U.S. and international satellite owners/operators,
academia and other entities’’ (Space Track n.d.). For example, it is used to
power open-access interactive graphic interfaces such as Stuff in Space
(n.d.), which uses tracking data to image the real-time location, orbit, and
speed of every functional technology and waste object in orbit. What these
strategies of data gathering, apprehension and exchange reveal are the
social dynamics at work beneath the practices of science and technology
and how the need to maintain ‘‘stability’’ and a semblance of order manifests
across governments, industries, and communities. Such an approach
to orbital debris comprises the management of the shared risk that these
objects entail and essentially relies upon the ability to anticipate, detect, and
prevent fatal intersections with their trajectories.
Yet despite elaborate logistics management, space waste–related disasters
do occur. For example, France’s spy satellite Cerise was severely
damaged by waste from an Ariane rocket (Ward 1996), and an Airbus
A340 jet airliner that was flying above New Zealand narrowly missed
flaming debris from a Russian satellite (Lewis 2007). In other words, the
present situation of space waste within global logistics systems and networks
implies a boundary condition between highly ‘‘managed’’ and essentially
uncontrollable. While its monitoring and assessment reveal efforts to
position these chaotic objects within a normative system of techno-science,
its overall ‘‘informaticization’’—the translation of its material properties
and behavioral characteristics into intangible data—suggests numeric
attempts to reduce waste objects to statistical facts and draw them into a
realm of abstraction. Such a strategy runs its own risk of dematerializing
space waste, as rendering it as a series of numbers risks eliding the fullness
of its social and material impact. Correspondingly, the entire purview of
debris logistics emerges as a reflexive response, a defensive strategy aimed
at protecting the global circulation of information and knowledge and maintaining
the mediatic courses of ‘‘life as we know it.’’ In their attempts to
manage the risks presented by space waste, the awareness systems of logistical
management in fact help create the conditions in which defunct technologies
acquire the status of an unruly population; far from addressing the
removal of debris, this form of management appears to encourage negotiation
of its material presence and behavior. Faced with the thinghood of this
waste, the logistical control of debris confronts these objects much as it
would a natural disaster: orbital debris is being managed as if it has a life
of its own. Contesting its control, this media waste encourages new strategies
aimed at refining risk assessment and prediction and at avoiding disaster. A
range of logistics protocols intended to abate the effects of orbital debris has
been developed at both national and international levels. For example, in
2007, Inter-Agency Space Debris Coordination Committee’s Space Debris
Mitigation Guidelines instructed that space agencies follow a range of measures
designed to avoid the break up of objects launched into space: fluids
should be depleted, batteries well designed, high-pressure vessels should be
vented, and ‘‘self-destruct systems should be designed not to cause unintentional
destruction’’ (Inter-Agency Space Debris Coordination Committee
2007, 8). The focus of such guidelines is not the reduction of orbital launches
but rather the specification of technical requirements that objects placed in
orbit should possess and the correct procedures regarding their disposal, so
that the risk of them breaking up, exploding, and creating more debris is
reduced. This emphasis suggests that mitigation strategies aim to design and
engineer bodies of space objects and conceive of them in a manner that would
later enable the efficient management of their remains.
One example is the proposal of the company Interorbital Systems to
offer the public the possibility of constructing and launching ''orbit-friendly''
satellites; their ‘‘TubeSat Personal Satellite Kit’’ will enable
customers to build relatively cheap personalized satellites that operate for
‘‘several weeks’’ and then burn up in the atmosphere, thus not contributing
‘‘to the long-term buildup of orbital debris’’ (Interorbital Systems n.d.).
Here, managing the risk of interference is embedded in innovation. However,
these are as much political and economic as they are technical
inventions—the construction of cheap, personalized, ephemeral satellites
is aimed at democratizing their consumption beyond states and corporations
and inhibiting further accumulation of their waste. Forcing satellites to disintegrate
and become immaterial—as if their waste is guided out of sight by
an ‘‘invisible hand’’—is suggestive of intentions to uphold the perpetual
production of social life through mediatic enclosures of the orbital commons.
To determine the provisions of orbital media technologies’ material existence,
space waste mitigation strategies strive to secure the future unfolding
of the processes of techno-capitalism by reducing the ''bad matter'' in orbital
space.

Efforts to control, regulate, or alleviate space debris have reified into
guidelines and instructions aimed at organizing the ways in which it is
created and used. Logistical management is sought through the now traditional
methods of calculation, measurement, and surveillance, but, so
far, all space waste mitigation schemes highlight the double-bind problematic
around which global high-tech capitalism is centered: they work to
eradicate obstacles hindering the exploitation of natural resources and
make way for the infinite ‘‘immaterial’’ production of social interactions
and relations, but in doing so encounter the irreducible materiality of
waste. These approaches to dealing with the affairs of waste in orbit have
all centered on forms of ‘‘risk management,’’ attempting to minimize and
mitigate against uncontrollable ‘‘environmental’’ disturbances. The management
of waste in the commons has perhaps too quickly become about
the management of the risk that it presents, passing over the more fundamental
problem of the rapacious exploitation of the orbital commons in
the first place.
178 Science, Technology, & Human Values 42(1)
Waste Wars and Postorbital Futures
While the creation of technologies that de-orbit themselves or become
recycled into the operation of future orbital economies appears to be the
most convincing long-term solution to this global problem, the debris
already in orbit will still require dispersal or collection. The question of
how to eradicate this formation has led to various techno-scientific solutions.
Described as ‘‘catch-all space sweepers, fishing nets and harpoons,
tethers, laser blasts, big and small space tugs,’’ these innovations include:
1. using ground-based lasers to fire pulses at pieces of debris in order to
quickly move them into orbits where the pull of gravity will facilitate
their eventual decay and de-orbit,
2. designing solar sail arrays (or devices) that would robotically attach
themselves to larger pieces of space debris in low Earth orbit in
order to facilitate their decay over time,
3. deploying tethered nets around smaller pieces of space debris in
order to speed up their rate of decay and de-orbit,
4. spraying frozen gas mists into low Earth orbits from specially
deployed satellites for the purpose of gathering up and bringing
down smaller pieces of orbital debris,
5. using specially designed technological robots to clamp onto pieces
of space debris and to throw them into orbits where their rapid
degradation will be guaranteed, and
6. shooting very sticky adhesive balls composed of substances such as
resins or aerogels unto larger pieces of debris so as to alter their
orbits and eventually bring them into orbits where they will rapidly
decay (International Interdisciplinary Congress on Space Debris Remediation 2011).
Firing lasers, spraying gases, throwing nets, shooting, and bringing waste
objects down to decay. The rhetoric of these plans implies maneuvers
geared to amass an arsenal to ‘‘wage war’’ on waste until the final extinction
of its living dead matter. Even the International Space Station is to be fitted
with a weapon: a group of international researchers suggests mounting a
cannon that fires an intense laser beam at the debris (Ebisuzaki et al.
2015). Here the irony of sending more technologies to clean the waste of
other technologies is stark.
However, at present the development of an antiwaste arsenal is still in its
early stages, and many such ideas are not yet technologically
or economically feasible. Aside from demanding extensive investments of
capital and scientific knowledge, the idea of combating the formation of
orbital debris also has legal barriers—only their lawful owners have the
right to interfere with space objects and the task of determining jurisdiction
over all the particular items cluttered into the waste formation appears
unfeasible. And yet the option of not commencing a war on orbital debris
or losing such a war due to the inefficient ordinances of law, budgets, and
techno-science, would have severe consequences. We appear to be fast
approaching a point at which the ‘‘Kessler syndrome’’ manifests as reality,
a point at which the density of objects in orbit creates a cascade of collisions
that makes space travel and satellite use impossible. With the imminent
prospect of human postorbital conditions, space waste is imbricated in the
management of the future as a material force that must be placated in order
to secure our mediatic ventures in orbit and preserve the possibility of our extraterrestrial futures.
A little over fifty years after the launch of Sputnik—an event that
Arendt anticipated would precipitate unique prospects for humankind—
the consequences of our extraterrestrial aspirations threaten to undermine
humanity’s technological progress and keep us forever confined to Earth.
The accumulation of space waste reveals the human condition as unchanging,
a viral impulse to become and acquire more through satisfying the
appetites of our medianatures. However, the unwanted by-product of this
urge, the persistent formation of orbital debris, opens up possibilities for
altering the conditions under which our global media infrastructure, services,
and practices can exist. By gradually placing the processes of
techno-mediation beneath them under siege, barricading them within terrestrial
space, this population of living dead objects may come to have an influence
over our technological evolution. It may, on the other hand, be a
material indication of the global suffocation that the impulse to become—
and acquire—more entails. Either way, the persistent accumulation of this
debris is a matter quite likely to have a profound impact upon the human condition.
The waste in orbit invites questions such as whether the social, political–
economic, and species-defining problems of the commons need to be
resolved before we can resolve the problems engendered by the interfering
effects of media technologies and their waste material. Or is the debris a
productive provocation, a means for working toward solving wider conundrums
of the commons? Collective cooperation and agreement on the future
use of orbital space, although often tenuous and difficult to achieve, is
perhaps the first move toward developing practical policies that prevent
such disempowering devolution. Proposing that any satellite must not be
allowed to remain in orbit for more than twenty-five years after the conclusion
of its mission, the ‘‘25-year rule’’ is one such tentative step; it is, to
use Hardin’s formulation, a solution to the problem of the commons
based on ‘‘mutual coercion mutually agreed upon’’ (1968, 1247) and
one which NASA’s Chief Scientist for orbital debris, Jer Chyi Liou,
suggests is the ‘‘most cost-effective and most efficient plan for dealing
with orbital debris and the first line of defence against generating more
debris in the future’’ (quoted in Patel 2015). Yet, perhaps what is
required first is the development of an inclusive mode of governance
that is able to articulate a coherent collective response to the common
problem of this debris. The difficulty is in part a problem of the commons
as such but also (and more pertinently) a particular problem of the orbital commons.
To avoid a tragedy of the orbital commons may require policy forged
from a dynamic synthesis of approaches to science, technology, and the
environment that is able to implement a substantial shift in our current
systems of value; as Hardin wrote, it ‘‘requires a fundamental extension
in morality,’’ for there is no technical solution: ‘‘A technical solution may
be defined as one that requires a change only in the techniques of the natural
sciences, demanding little or nothing in the way of change in human values or
ideas of morality’’ (1968, 1243).
By altering our values, by changing the way we live, the disruptive
potential of the accumulation of orbital debris might in turn surface questions
concerning our relationship with what we deem to be common to all.
Complications associated with the unlimited or irresponsible use of the
commons, which are made so apparent by orbital debris, also raise the
question of what we have in common, and the related question of who and
what this ‘‘we’’ includes. While our governance of waste in orbit is
grounded in political and economic rationalities and constrained by our
techno-scientific capacities, also needed are instruments of policy and planning
that touch upon things that are deeply human. The matter of orbital
debris demands fundamental developments in ideas about politico-ethical
practices, the life and death of technologies, and the status of what is shared.
If these demands are not met, an alteration in the human condition may well
be imposed upon us by waste itself. Whichever version of the ‘‘waste wars’’
eventuates, its outcome will redefine our relationship with defunct media
technologies, and once again we will find ourselves accounting for what we
have and will become. In the meantime, orbital debris is already sculpting
our common futures on and beyond the globe.
Acknowledgments
I would like to thank the editor, the managing editor, and the anonymous reviewers
for their help—it is much appreciated.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or
publication of this article.
Notes
1. The exact number of launched satellites is unknown since not all of the information
is stored in the United Nations’ Register of Space Objects Launched into
Outer Space; for example, spy satellites are not listed. It is estimated that more
than 6,000 satellites have been placed in orbital space (European Space Agency
‘‘Our Activities—Clean Space’’) and the Union of Concerned Scientists online
database indicates that there were 1,265 operational satellites orbiting the Earth
as of January 1, 2015 (Union of Concerned Scientists 2015).
2. Attempts to impact debris through physical force seem counterproductive, for
example, in 2007, China conducted an antisatellite missile test and deliberately
destroyed the Fengyun-1C weather satellite, which significantly increased the
amount of orbital debris (Liou 2013).
3. Steven M. Wise writes that the term ‘‘legal thinghood’’ refers to ‘‘an entity with
no capacity for legal rights,’’ such objects are ‘‘treated as property about which
legal persons have legal rights and duties’’ (1996, 472).
References
Amos, J. 2013. ‘‘‘Urgent Need’ to Remove Space Debris,’’ BBC News, April 25.
Accessed March 14, 2014. http://www.bbc.com/news/science-environment22299403.
Anselmo, L., and C. Pardini. 2008. ‘‘Space Debris Mitigation in Geosynchronous
Orbit.’’ Advances in Space Research 41 (7): 1091-99.
Arendt, H. 1998. The Human Condition. Chicago, IL: University of Chicago Press.
Australian Space Academy. n.d. ‘‘A Guide to Orbital Space Debris.’’ Accessed
March 21, 2014. http://www.spaceacademy.net.au/watch/debris/gsd/gsd.htm.
Barad, K. 2003. ‘‘Posthumanist Performativity: Toward an Understanding of How
Matter Comes to Matter.’’ Signs 28 (3): 801-31.
Bonnal, C., J.-M. Ruault, and M.-C. Desjean. 2013. ‘‘Active Debris Removal:
Recent Progress and Current Trends.’’ Acta Astronautica 85:51-60.
de Angelis, M. 2001. ‘‘Marx and Primitive Accumulation: The Continuous Character
of Capital’s ‘Enclosures’.’’ The Commoner 2. Accessed January 5, 2013.
de Angelis, M. 2007. The Beginning of History: Value Struggles and Global Capital.
London, England: Pluto.
Dyer-Witheford, N. 1999. Cyber-Marx: Cycles and Circuits of Struggle in High
Technology Capitalism. Chicago: University of Illinois Press.
Ebisuzaki, T., M. N. Quinn, S. Wada, L. W. Piotrowski, Y. Takizawa, M. Casolino,
M. E. Bertaina, P. Gorodetzky, E. Parizot, T. Tajima, R. Soulard, and G. Mourou.
2015. ‘‘Demonstration Designs for the Remediation of Space Debris from the
International Space Station.’’ Acta Astronautica 112:102-13.
European Space Agency. n.d. ‘‘Our Activities—Clean Space.’’ Accessed April 17,
European Security and Defence Assembly. 2009. ‘‘Recommendation 841 on Space
Situational Awareness (2009).’’ Accessed April 17, 2014. http://ueo.cvce.lu/en/
Gabrys, J. 2011. Digital Rubbish: A Natural History of Electronics. Ann Arbor:
University of Michigan Press.
Gravity. 2013. Directed by Alfonso Cuarón. Sherman Oaks, CA: Esperanto Filmoj
and Heyday Films.
Hacking, I. 1986. ‘‘Culpable Ignorance of Interference Effects.’’ In Values at Risk,
edited by Douglas MacLean, 136-54. Totowa, NJ: Rowman and Allanheld.
Hardin, G. 1968. ‘‘The Tragedy of the Commons.’’ Science 162 (3859): 1243-48.
Hardt, M. and A. Negri. 2009. Commonwealth. Cambridge, MA: Harvard University Press.
Heavens Above. n.d. Accessed April 30, 2014. http://www.heavens-above.com.
Inter-Agency Space Debris Coordination Committee. 2007. ‘‘IADC Space Debris
Mitigation Guidelines.’’ Accessed April 21, 2014. http://orbitaldebris.jsc.nasa.gov/
International Interdisciplinary Congress on Space Debris Remediation. 2011. ‘‘Preliminary
Program.’’ Accessed April 21, 2014. http://www.mcgill.ca/files/iasl/
Interorbital Systems. n.d. ‘‘TubeSat Personal Satellite Kit.’’ Accessed August 18,
Jakhu, R. 2007. ‘‘Legal Issues of Satellite Telecommunications, the Geostationary
Orbit, and Space Debris.’’ Astropolitics: The International Journal of Space
Politics & Policy 5 (2): 173-208.
Kessler, D. J. 2009. ‘‘The Kessler Syndrome as Discussed by Donald J. Kessler.’’
Accessed February 2, 2014. http://webpages.charter.net/dkessler/files/KesSym.html.
Latour, B. 2004. ‘‘Why Has Critique Run Out of Steam? From Matter of Fact to
Matters of Concern.’’ Critical Inquiry 30: 225-48.
Lewis, P. 2007. ‘‘Burning Russian Space Junk Whizzes Past Auckland-bound Plane,’’
ABC News, March 29. Accessed June 17, 2013. http://www.abc.net.au/news/200
Liou, J.-C. 2013. ‘‘Engineering and Technology Challenges for Active Debris
Removal,’’ Progress in Propulsion Physics 4:735-48.
Marx, K.  2007. ‘‘The So-called Primitive Accumulation.’’ In Capital: A
Critique of Political Economy. Vol. 1, edited by Friedrich Engels, 784-849. New
Maxwell, R. and T. Miller. 2008. ‘‘Ecological Ethics and Media Technology.’’
International Journal of Communication 2:331-53.
NASA Orbital Debris Program Office. 2009. ‘‘Orbital Debris Measurements.’’
Accessed April 21, 2014. http://orbitaldebris.jsc.nasa.gov/measure/measureme
Parikka, J. 2011. ‘‘Introduction: The Materiality of Media and Waste.’’ In Medianatures:
Materiality of Information Technology and Electronic Waste, edited by
Jussi Parikka, n.p., Open Humanities Press. Accessed September 30, 2013. http://
Parks, L. 2013. ‘‘Orbital Ruins.’’ Necsus. Accessed July 13, 2015. http://www.necsu
Patel, N. V. 2015. ‘‘Averting Space Doom,’’ IEEE Spectrum, 28 January. Accessed
September 7, 2015. http://spectrum.ieee.org/aerospace/satellites/averting-spacedoom-solving-the-orbital-junk-problem.
Rossiter, N. 2009. ‘‘Translating the Indifference of Communication: Electronic
Waste, Migrant Labour and the Informational Sovereignty of Logistics in
China.’’ International Review of Information Ethics 11:36-44.
Space Debris. 2000. Developed by Rage Software PLC. San Mateo, CA: Sony.
Space Track. n.d. Accessed May 17, 2015. https://www.space-track.org/perl/login.pl.
Sterne, J. 2007. ‘‘Out with the Trash: On the Future of New Media.’’ In Residual
Media, edited by Charles R. Acland, 17-31. Minneapolis: University of Minnesota Press.
Stuff in Space. n.d. Accessed May 17, 2015. http://stuffin.space.
Union of Concerned Scientists. 2015. ‘‘UCS Satellite Database.’’ Accessed July 3,
United Nations. 1967. ‘‘Treaty on Principles Governing the Activities of States in
the Exploration and Use of Outer Space, Including the Moon and Other Celestial
Bodies.’’ Accessed September 12, 2016. http://www.unoosa.org/pdf/gares/
United Nations. 1972. ‘‘The Convention on International Liability for Damage
Caused by Space Objects.’’ Accessed November 7, 2013. http://www.oosa.
United Nations. 2010. ‘‘The Space Debris Mitigation Guidelines of the Committee
on the Peaceful Uses of Outer Space.’’ Accessed April 17, 2014. http://orbitalde
US Strategic Command. 2008. ‘‘USSTRATCOM Space Control and Space Surveillance
Fact Sheet.’’ Accessed April 17, 2014. http://www.stratcom.mil/files/
The United States of America. 2010. National Space Policy 2010. Accessed October
21, 2013. http://www.whitehouse.gov/sites/default/files/national_space_policy_
Wall-E. 2008. Directed by Andrew Stanton. Burbank, CA: Walt Disney Pictures and
Pixar Animation Studios.
Ward, M. 1996. ‘‘Satellite Injured in Space Wreck,’’ New Scientist, August 24.
Accessed March 14, 2014. http://www.newscientist.com/article/mg15120440.
Weeden, B. 2011. ‘‘Overview of the Legal and Policy Challenges of Orbital Debris
Removal.’’ Space Policy 27:38-43.
Wise, S. M. 1996. ‘‘The Legal Thinghood of Nonhuman Animals.’’ Boston College
Environmental Affairs Law Review 23 (3): 471-546.
Katarina Damjanov completed her PhD at the University of Melbourne and she is
currently a lecturer in the School of Social Sciences at the University of Western
Australia. Her research interests revolve around considerations of the material and
social dimensions of technological progress on and off the Earth. Some of her recent
work features in Environment and Planning D: Society and Space, Leonardo, M/C
Journal, and Fibreculture (forthcoming).