The history of artificial intelligence (AI) dates back to antiquity – intelligent robots appear in the myths of many ancient societies, including Greek, Arabic, Egyptian and Chinese. Today, the field of artificial intelligence is more vibrant than ever, and some believe we're on the threshold of discoveries that could change human society irreversibly, for better or worse.
To truly understand what AI is, though, you need to get to grips with the jargon being thrown around right now. For instance, artificial intelligence is not the same as machine learning, even though the two terms are regularly used interchangeably. The key difference to remember is that machine learning is simply a process by which a computer can learn a skill, whereas artificial intelligence refers to a computer that can "think" for itself without being programmed to do so.
Facebook's head of AI research, Yann LeCun, explains what AI is rather eloquently in an introduction to AI education.
Now we've cleared that up, here are ten things you need to know about AI before the robots take over the world:
1. Artificial intelligence is developing faster than you think, and speeding up exponentially
Humans tend to think in straight lines, but technological progress is actually accelerating – and AI is no exception. Futurist Ray Kurzweil calls this the “Law of Accelerating Returns”, and presents evidence that an amount of progress equal to the entire 20th century's gains was attained between 2000 and 2014. He also argues that the same amount will happen again before 2021. Grasping the exponential nature of progress – and resisting the intuitive assumption that things will keep improving at today's rate – is key to understanding how fast scientific advances will come in the future.
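If it helps to see the shape of that claim, here's a minimal sketch in Python. The doubling period and "units of progress" are purely illustrative assumptions of mine, not Kurzweil's figures; it just compares a straight-line projection with one where the rate of progress doubles every decade.

```python
# Illustrative sketch only: the doubling period and "units of progress"
# are assumptions chosen to show the shape of the curves, not Kurzweil's data.

LINEAR_RATE = 1        # assumed: 1 unit of progress per year, forever
DOUBLING_PERIOD = 10   # assumed: the rate of progress doubles every decade

for horizon in range(0, 41, 10):                     # look 0-40 years ahead
    linear = LINEAR_RATE * horizon
    # Accelerating case: each year contributes a rate that has doubled
    # once per elapsed decade; sum the yearly contributions.
    accelerating = sum(2 ** (year // DOUBLING_PERIOD) for year in range(horizon))
    print(f"+{horizon:2d} years: linear = {linear:3d}, accelerating = {accelerating:3d}")
```

After 40 simulated years the straight-line column has reached 40 units while the accelerating one has passed 150 – which is exactly the gap our linear intuition tends to miss.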
2. You use artificial intelligence all day, every day
Siri, Google Now, and Cortana are obvious examples of artificial intelligence, but AI is actually all around us. It can be found in vacuum cleaners, cars, lawnmowers, video games, Hollywood special effects, e-commerce software, medical research and international finance markets – among many other examples. John McCarthy, who originally coined the term “artificial intelligence” in 1956, famously quipped: “As soon as it works, no-one calls it AI anymore.”
3. Robots are definitely going to take your job
Yeah, I know you're a special flower and everything, but the work you do is either already automatable or will be very soon. How soon? Most jobs will be done by robots within 30 years, says professor Moshe Vardi of Rice University, leading to unemployment rates greater than 50%. That might sound bad, but many academics studying the field believe that technological unemployment will open the door to a future where work is something people do for pleasure, not out of necessity. Proposals such as universal basic income are the beginnings of a societal support structure that could eventually allow this to become a reality.
4. About half of the AI community believes computers will be as smart as humans by 2040
In 2013, two researchers surveyed hundreds of AI experts on when they thought there was a 50/50 chance that human-level artificial intelligence will arrive. The median answer was 2040 – only 24 years from now. The average life expectancy in the UK is 82 years, meaning that there's a heads/tails chance that a 58-year-old today will see computers as smart as humans in their lifetime. Another recent study from author James Barrat simply asked researchers when human-level AI would be achieved – by 2030, by 2050, by 2100, after 2100, or never. The largest group, 42% of respondents, said by 2030.
5. A lot of smart people think developing artificial intelligence to human level is a dangerous thing to do
Once machines are as intelligent as humans, a lot of worrying things could happen. There's little chance that AI development would stop at that point (the AI would almost certainly begin working on improving itself), and many very smart people – including Stephen Hawking and Elon Musk – think that this situation would be very scary indeed. “If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk during a recent interview. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
6. Once artificial intelligence gets smarter than humans, we've got very little chance of understanding it
Tim Urban, at Wait But Why, explains this really well in his pair of enormous articles on artificial intelligence, so I'll quote him here: “A chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans,” he writes. “We will never be able to even comprehend the things a [superintelligent AI] can do, even if the machine tried to explain it to us – let alone do it ourselves. It could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.”
7. There's no such thing as an “evil” artificial intelligence
Contrary to what we see in sci-fi movies, AI can't be evil. That's a human concept. An AI can do unspeakably horrible things, but it doesn't do them out of sheer wickedness – it does them simply because that's what it has been programmed (intentionally or accidentally) to do. Stephen Hawking explained this concept recently in an AMA on Reddit. “A super-intelligent AI will be extremely good at accomplishing its goals,” he said, “and if those goals aren't aligned with ours, we're in trouble.”
8. There are three ways a superintelligent artificial intelligence could work
AI expert Nick Bostrom, in his fantastic book Superintelligence: Paths, Dangers, Strategies, describes three ways in which a superintelligence could operate. An “oracle” would answer questions with a good degree of accuracy. A “genie” would do anything it is commanded to do and then await the next command, while a “sovereign” would be assigned an overarching goal and then be allowed to operate in the world, deciding for itself how best to accomplish it. For the reasons above, the first of these is much less scary than the last.
9. Artificial intelligence could be the reason why we've never met aliens
Further up, Elon Musk described AI as an “existential threat” to humanity, meaning that it could erase mankind from the universe entirely. This ties in with ideas of a “Great Filter” that kills off alien civilisations that reach a certain level of technological development. It's entirely possible that the reason we've never met aliens is because they invented artificial intelligence before they could build spaceships capable of interstellar travel, and that discovery caused their extinction.
10. Basically, there's a good chance we'll be extinct or immortal by the end of the century
The world of AI research is roughly split into optimists and pessimists. The optimists hope that we'll one day invent a superintelligence that solves every problem we can imagine and leads us into a utopian future where all of mankind's needs are met and everyone lives happily ever after. The pessimists are concerned that one tiny mistake along the way will lead to the swift end of the human race – as an AI programmed to solve climate change, for example, identifies that humans are the number one obstacle to doing so. There are also scenarios in between, of course – where would a reality such as The Matrix, in which humans are cultivated as a fuel source in a state of perfect happiness, lie on that scale?
For now, you can leave your opinion in the comments section below. But it likely won't be very long until we find out for real.