Where did the “cyber” in “cyberspace” come from? Most people, when asked, will probably credit William Gibson, who famously introduced the term in his celebrated 1984 novel, Neuromancer. It came to him while watching some kids play early video games. Searching for a name for the virtual space in which they seemed immersed, he wrote “cyberspace” in his notepad. “As I stared at it in red Sharpie on a yellow legal pad,” he later recalled, “my whole delight was that it meant absolutely nothing.”
How wrong can you be? Cyberspace turned out to be the space that somehow morphed into the networked world we now inhabit, and which might ultimately prove our undoing by making us totally dependent on a system that is both unfathomably complex and fundamentally insecure. But the cyber- prefix actually goes back a long way before Gibson – to Norbert Wiener’s 1948 book, Cybernetics: Or Control and Communication in the Animal and the Machine.
Cybernetics was the term Wiener, an MIT mathematician and polymath, coined for the scientific study of feedback control and communication in animals and machines. As a “transdiscipline” that cuts across traditional fields such as physics, chemistry and biology, cybernetics had a brief and largely unsuccessful existence: few of the world’s universities now have departments of cybernetics. But as Thomas Rid’s absorbing new book, Rise of the Machines: The Lost History of Cybernetics, shows, it has had a long afterglow as a source of mythic inspiration that endures to the present day.
This is because at the heart of the cybernetic idea is the proposition that the gap between animals (especially humans) and machines is much narrower than humanists believe. Its argument is that if you ignore the physical processes that go on in the animal and the machine and focus only on the information loops that regulate these processes in both, you begin to see startling similarities. The feedback loops that enable our bodies to maintain an internal temperature of 37C, for example, are analogous to the way in which the cruise control in our cars operates.
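To make the analogy concrete, here is a minimal sketch of such a negative-feedback loop – an illustrative toy of my own, not an example from Wiener or Rid, with arbitrary values chosen for the setpoint, the controller gain and the disturbance:

```python
# A minimal negative-feedback loop: sense the current value, compare it
# with a setpoint, correct proportionally, repeat. The same skeleton
# describes a thermostat, the body's temperature regulation and a car's
# cruise control.

SETPOINT = 37.0    # target "body temperature" in degrees C (illustrative)
GAIN = 0.5         # how strongly the controller corrects each step
DISTURBANCE = 0.2  # constant heat loss pulling the system off target

temperature = 31.0  # start well below the setpoint
for step in range(12):
    error = SETPOINT - temperature   # feedback: measure the deviation
    temperature += GAIN * error      # actuate: correct toward the target
    temperature -= DISTURBANCE       # the environment pushes back
    print(f"step {step:2d}: {temperature:5.2f} C")
# The loop settles near 36.6 C: close to the setpoint, offset slightly by
# the disturbance -- the signature of simple proportional control.
```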
Dr Rid is a reader in the war studies department of King’s College London, which means that he is primarily interested in conflict, and as the world has gone online he has naturally been drawn into the study of how conflict manifests itself in the virtual world. When states are involved, we tend to call it “cyberwarfare” – a term of which I suspect Rid disapproves, on the grounds that warfare is intrinsically “kinetic” (like Assad’s barrel bombs), whereas what’s going on in cyberspace is much more sinister, elusive and intractable.
In order to explain how we’ve got so far out of our depth, Rid has effectively had to compose an alternative history of computing. And whereas most such histories begin with Alan Turing and Claude Shannon and John von Neumann, Rid starts with Wiener and wartime research into gunnery control. For him, the modern world of technology begins not with the early digital computers developed at Bletchley Park, Harvard, Princeton and the University of Pennsylvania but with the interactive artillery systems developed for the US armed forces by the Sperry gyroscope company in the early 1940s.
From this unexpected beginning, Rid weaves an interesting and original story. The seed crystal from which it grows is the idea that the Sperry gun-control system was essentially a way of augmenting the human gunner’s capabilities to cope with the task of hitting fast-moving targets. And it turns out that this dream of technology as a way of augmenting human capabilities is a persistent – but often overlooked – theme in the evolution of computing.
The standard narrative about the technology’s history focuses mostly on technical progress – processing power, bandwidth, storage, networking, etc. It’s about machines and applications, companies and fortunes. The underlying assumption is that the technology is empowering – which of course in principle it can be. What, after all, is the web but a memory aid for people? What the dominant narrative conveniently ignores, though, is that the motive force for most tech industry development is not human empowerment but profit. Which is why Facebook wants its 1.7 billion users to stay within its walled garden rather than simply being empowered by the open web.
The dream of computing as a way of augmenting human capabilities, however, takes empowerment seriously rather than using it as a cover story. It is, for example, what underpinned the life’s work of Douglas Engelbart, the man who came up with the computer mouse and the windowing interface that we use today. And it motivated JCR Licklider, the psychologist who was, in a way, the godfather of the internet and whose paper Man-Computer Symbiosis is one of the canonical texts in the augmentation tradition. Even today, a charitable interpretation of the Google Glass project would place it firmly in the same tradition. Ditto for virtual reality (VR).
Given that he starts from cybernetics, the trajectory of Rid’s narrative makes sense. It takes him into the origins of the concept of the “cyborg” – the notion of adapting humans to their surroundings rather than the other way round – an idea that was first explored by Nasa and the US military. Thence he moves into the early history of automation, with startling tales about ambitious early attempts to create robots that might be useful in combat. In 1964, for example, US army contractors built the Pedipulator, an 18ft-tall mechanical figure that “looked like a prototype of a Star Wars biped”. The idea was to create some kind of intelligent full-body armour that would turn troops, in effect, into walking tanks.
From there, it’s just a short leap to virtual reality – also, incidentally, first invented by the US military in the early 1980s. Rid’s account of the California counter-culture’s obsession with VR is fascinating, and includes the revelation that Timothy Leary, the high priest of LSD, was an early evangelist. Leary and co thought that VR was better than LSD because it was inherently social whereas an LSD trip was just chemically induced isolation. Then Rid moves on to the arrival of public-key cryptography, which put military-grade encryption into the hands of citizens for the first time (and which had been secretly invented at GCHQ, so one can imagine its discombobulation when civilian geeks independently came up with it).
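To give a flavour of what putting “military-grade encryption into the hands of citizens” means in practice, here is a toy sketch of a Diffie–Hellman-style key exchange – the public-key idea that GCHQ had developed in secret and that civilian researchers independently published in the 1970s. The prime and generator below are deliberately tiny, insecure demonstration values; real deployments use numbers hundreds of digits long:

```python
import secrets

# A toy Diffie-Hellman-style exchange: two parties derive a shared
# secret over a public channel without ever transmitting it.

P, G = 2087, 5  # small public prime and generator (toy values only)

alice_secret = secrets.randbelow(P - 2) + 1  # kept private
bob_secret = secrets.randbelow(P - 2) + 1    # kept private

alice_public = pow(G, alice_secret, P)  # exchanged in the clear
bob_public = pow(G, bob_secret, P)      # exchanged in the clear

# Each side combines its own secret with the other's public value and
# arrives at the same number, which can then seed a symmetric cipher.
shared_a = pow(bob_public, alice_secret, P)
shared_b = pow(alice_public, bob_secret, P)
assert shared_a == shared_b
print("shared secret:", shared_a)
```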
The final substantive chapter of Rise of the Machines is about conflict in cyberspace, and contains the first detailed account I’ve seen of the “Moonlight Maze” attack on US networks. Rid describes this as “the biggest and most sophisticated computer network attack made against the United States in history”. It happened in 1996, which means that it belongs in prehistory by internet timescales. And it originated in Russia. The attack was breathtaking in its ambition and comprehensiveness. But it was probably small beer compared with what goes on now, especially given that China has entered the cyberfray.
In some ways, Rid’s chapter on conflict in cyberspace seems orthogonal to his main story, which is about how Wiener’s vision of cybernetics functioned as an inspirational myth for innovators who were interested in what Licklider and Engelbart thought of as “man-machine symbiosis” and human augmentation. If this absorbing, illuminating book needs a motto, it is an aphorism of Marshall McLuhan’s friend, John Culkin. “We shape our tools”, he wrote, “and thereafter our tools shape us.”
Thomas Rid Q&A: ‘Politicians would say “cyber” and roll their eyes’
How did you become interested in cybernetics?
The short word “cyber” seemed everywhere, slapped in front of cafes, crime, bullying, war, punk, even sex. Journalists and politicians and academics would say “cyber” and roll their eyes at it. Sometimes they would ask where the funny phrase actually came from. So every time my boss introduced me as, “Hey, this is Thomas, he’s our cyber expert,” I cringed. So I thought I should write a book. Nobody, after all, had properly connected today’s “cyber” to its historic ancestor, cybernetics.
Initially I wanted to do a polemic. But then I presented some of the history at Royal Holloway, and to my surprise, some of the computer science students warmed to “cyber” after my talk, appreciating the idea’s historical and philosophical depth. So I thought, yes, let’s do this properly.
You teach in a department of war studies, so I can see that cyberwar might be your thing. But you decided that you needed to go way back – not only to Norbert Wiener and the original ideas of cybernetics, but also to the counter-cultural background, to personal computing, virtual reality (VR) and computer conferencing. Why?
War studies, my department, is an open tent. Crossing disciplinary boundaries and adding historical and conceptual depth is what we do. So “machines” fits right in. I think understanding our fascination with communication and control today requires going back to the origins, to Wiener’s cybernetic vision after the second world war. Our temptation to improve ourselves through our own machines – “big brains” in the 50s, or artificial intelligence today – is hardwired into who we are as humans. We don’t just want to play God, we want to beat God, building artificial intelligence that’s better than the non-artificial kind. This hubris will never go away. So one of our best forms of insurance is to study the history of cybernetic myths, the promise of the perennially imminent rise of the machines.
How long did the book take to research and write?
It took me about three years. It wasn’t hard to stay focused – the story throughout the decades was just too gripping: here was the US air force building touch-sensitive cybernetic manipulators to refuel nuclear-powered long-range bombers, and there was LSD guru Timothy Leary discovering the “cybernetic space” inside the machines as a mind-expansion device even better than psychedelic drugs – better, by the way, because the “machine high” was more creative and more social than getting stoned on psilocybin.
One group that’s missing from your account is the engineers who sought to implement old-style cybernetic ideas in real life – for example, the Cybersyn project that Stafford Beer led in Chile for Salvador Allende. Did you think of including stuff like that? If not, why not?
The cybernetic story is expansive. I had to leave out so much, especially in the 50s and 60s, the heyday of cybernetics. For example, the rise of cybernetics in the Soviet Union is a story in itself, and almost entirely missing from my book, as is much of the sociological work that was inspired by Norbert Wiener’s vision (much of it either dated or impenetrable). Cybersyn has been admirably covered, in detail, by Eden Medina’s Cybernetic Revolutionaries. I would also mention Ronald Kline’s recent book, The Cybernetics Moment.
Your account of the Moonlight Maze investigation (of a full-on state-sponsored cyberattack on the US) is fascinating and scary. It suggests that – contrary to popular belief – cyberwarfare is not just a distant possibility but a baffling and terrifying reality. It is also – by your account – intractable. Aren’t we (ie society) out of our depth here? Or, at the very least, aren’t we in a position analogous to where we were with nuclear weapons in, say, 1946?

“Cyberwar”, if you want to call it that, has been going on since at least 1996 – as I show – without interruption. In fact, state-sponsored espionage, sabotage and subversion have escalated drastically over the past two decades. But meanwhile we’ve been fooling ourselves, expecting blackouts and explosions and planes falling out of the sky as a result of cyberattacks. Physical effects happen, but they have been a rare exception. What we’re seeing instead is even scarier: an escalation of cold war-style spy-versus-spy subversion and sabotage – covert, hidden and very political, not open and military in nature like nuclear weapons. Over the last year we have observed several instances of intelligence agencies breaching victims, stealing files and dumping sensitive information into the public domain, often through purpose-created leak forums or indeed through WikiLeaks.
Russian agencies have been leading this trend, most visibly by trying to influence the US election through hacking and dumping. They’re doing very creative work there. Although the forensic evidence for this activity is solid and openly available, the tactic still works impressively well. Open societies aren’t well equipped to deal with covert spin-doctoring.
We’re currently experiencing a virtual reality frenzy, with companies like Facebook and venture capitalists salivating over it as the Next Big Thing. One of the interesting parts of your story is the revelation that we have been here before – except last time, enthusiasm for VR was inextricably bound up with psychedelic drugs. Then, it was tech plus LSD; now it’s tech plus money. The same cycle applies to artificial intelligence. So cybernetics isn’t the only field to have waxed and waned.
Absolutely not. I was often writing notes in the margins of my manuscript in Fernandez & Wells in Somerset House, where London fashion week used to happen. Technology is a bit like fashion: every few years a new craze or trend comes around, drawing much attention, money and fresh talent. Right now, it’s automation and VR, a bit retro-60s and retro-90s respectively. Of course our fears and hopes aren’t just repeating the past, and the technical progress in both fields has been impressive. But we’ll move on before long, and the next tech wave will probably have a retro flavour again.
At a certain moment in the book you effectively detach the prefix “cyber” from its origins in wartime MIT and the work of Norbert Wiener and use it to build a narrative about our networked and computerised existence – cyborgs, cyberspace, cyberwar etc. Your justification, as I see it, is that there was a cybernetic moment and it passed. But had you thought that a cybernetic analysis of our current plight in trying to manage cyberspace might be insightful? For example, one of the big ideas to come out of early cybernetics was Ross Ashby’s Law of Requisite Variety – which basically says that for a system to be viable it has to be able to cope with the complexity of its environment. Given what information technology has done to increase the complexity of our current environment, doesn’t that mean that most of our contemporary systems (organisations, institutions) are actually no longer viable? Or is that pushing the idea too far?
You’re raising a fascinating question here, one that I struggled with for a long time. First, I think “cyber” detached itself from its origins, and degenerated from a scientific concept to an ideology. That shift began in the early 1960s. My book is merely chronicling this larger history, not applying cybernetics to anything. It took me a while to resist the cybernetic temptation, if you like: the old theory still has charm and seductive force left in its bones – but of course I never wanted to be a cyberneticist.
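Since the exchange above invokes Ashby’s Law of Requisite Variety, a concrete toy may help fix the idea. The sketch below is my own construction, not something from Rid’s book: it states the law in its simplest combinatorial form – a regulator that can choose among R responses cannot squeeze the outcomes of D distinct disturbances into fewer than ⌈D/R⌉ values – and verifies it by brute force over an arbitrary illustrative outcome table.

```python
import math
from itertools import product

# Toy check of Ashby's Law of Requisite Variety. A regulator picks one
# of R responses for each of D disturbances; the outcome depends on
# both. Whatever policy it uses, the outcomes cannot be squeezed into
# fewer than ceil(D / R) distinct values: only variety in the regulator
# can absorb variety in the environment.

D, R = 9, 3  # disturbance variety and regulator variety (illustrative)

def outcome(d, r):
    # An arbitrary outcome table in which each disturbance row contains
    # R distinct outcomes, so the regulator has real choices to make.
    return (d + r * (D // R)) % D

# Exhaustively try every regulator policy (one response per disturbance)
# and record the smallest achievable number of distinct outcomes.
best = min(
    len({outcome(d, policy[d]) for d in range(D)})
    for policy in product(range(R), repeat=D)
)
print(best, math.ceil(D / R))  # prints "3 3": the bound is tight here
```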