Stanford has undertaken an important effort: envisioning the implications of artificial intelligence over a 100-year span, to “anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play.”
But there is a problem, potentially fundamental enough that the team may want to revisit its first report or adjust its approach going forward: the report’s relatively weak coverage of the urban, human security implications of AI.
Why the light treatment of security, barely more than seven paragraphs in a report of almost four dozen pages? According to the purpose statement, this first study focuses on the implications of AI in 2030 in the “typical North American city.” I suppose the thin treatment of security may derive from the huge assumption that North American cities will remain peaceful and secure, and thus that AI and intelligent machines won’t carry significant human security implications. (As if our oceans and borders will protect our cities from such considerations? Recall that we debunked the theory that oceans provided population security in the nuclear age, and, I would say, again in the age of terror in which we now live.)
I have a sinking feeling, growing stronger by the day, that AI and smart swarms may profoundly upset security assumptions in modern cities, including many in North America. In this age of terror, urban areas may be the ONLY places in North America where insurgencies or powerful criminal elements can be sustained, and criminal groups may turn to autonomous machines and AI, perhaps combined with strategies of cybercrime, to hold hostage large swaths of urban communities. (Note: AI and cybercrime get passing reference in this report, in the context of credit card crime.)
Are my concerns misplaced? The history of the computational sciences includes many innovations directly connected to killing and to avoiding being killed, and many of these inventions have targeted urban populations. Recall that the earliest analog military computers emerged in fire control systems on warships in 1916. Analog computer bombsights on the B-17 helped air fleets obliterate cities. The digital breakthrough of the Electronic Numerical Integrator and Computer (ENIAC) was designed for gunnery ballistics. Microchips helped intercontinental ballistic missiles target cities; the Advanced Research Projects Agency Network (forerunner of the internet) was created to provide command and control in the event of nuclear war. Drones and lower-level AI already play a crucial role in the human-insurgent wars of the 21st century.
A last point. The future of most academic endeavors is generally thought to be moving in an interdisciplinary direction. Yet the project leaders plan to break down the implications of A.I. into fairly narrow fields of focus, perhaps as many as 18 “topics of interest.” There may be security implications in many of these fields. For example, earlier this year malicious computer code attacked hospitals outside Washington, D.C., paralyzing healthcare for hundreds of patients, an unprecedented attack on civil society through its healthcare providers.
So, to the project leaders: consider beefing up the security aspects of most, if not all, of these efforts. I just finished serving on the Defense Science Board Summer Study of Autonomy (autonomy is shorthand for A.I.-directed machines), and there is a wealth of cognitive and military expertise grappling with this problem. I hope that in the next installment of the Stanford study the authors reconsider “the typical North American city,” include a rigorous subsection, perhaps titled “Human Security of the City in the Age of AI and Intelligent Swarms,” and invite additional study members who bring interdisciplinary science-tech-military expertise.
I know the authors of the report appreciate that the future of A.I. has a dark side. But I believe the security implications are far more significant than is currently appreciated. I suspect the city of the future will require a major effort to provide security for humans who will be out-thought and outmaneuvered by the AI-directed smart swarms of the next 100 years. I suspect the citizens of the future may need their own friendly AI to protect them from criminal elements that control malevolent AI of their own. But that is just a hunch. I hope the Stanford study takes a harder look at human security and proves my fears misplaced.
Dr. Mark Hagerott, CAPT, USN (Ret.), is the chancellor of the North Dakota University System and a New America Cybersecurity Fellow. In 2015 he served on the Defense Science Board Summer Study of Autonomy, a report from which was recently released. He previously served as deputy director and distinguished professor at the Center for Cyber Studies at the U.S. Naval Academy. He is a graduate of the U.S. Naval Academy, was a Rhodes Scholar, and wrote his doctoral dissertation at the University of Maryland on the historical evolution of technology and military organizations.