ON THE WEST coast of Australia, Amanda Hodgson is launching drones out towards the Indian Ocean so that they can photograph the water from above. The photos are a way of locating dugongs, or sea cows, in the bay near Perth—part of an effort to prevent the extinction of these endangered marine mammals. The trouble is that Hodgson and her team don’t have the time needed to examine all those aerial photos. There are too many of them—about 45,000—and spotting the dugongs is far too difficult for the untrained eye. So she’s giving the job to a deep neural network.
Neural networks are the machine learning models that identify faces in the photos posted to your Facebook news feed. They also recognize the questions you ask your Android phone, and they help run the Google search engine. Modeled loosely on the network of neurons in the human brain, these sweeping mathematical models learn all these things by analyzing vast troves of digital data. Now, Hodgson, a marine biologist at Murdoch University in Perth, is using this same technique to find dugongs in thousands of photos of open water, running her neural network on the same open-source software, TensorFlow, that underpins the machine learning services inside Google.
As Hodgson explains, detecting these sea cows is a task that requires a particular kind of pinpoint accuracy, mainly because the animals feed below the surface of the ocean. “They can look like whitecaps or glare on the water,” she says. But her neural network can now identify about 80 percent of the dugongs spread across the bay.
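Hodgson hasn’t published the details of her pipeline, but the basic recipe behind this kind of work is simple to state: carve each aerial photo into tiles, label a sample by hand, and train a network to flag the tiles that contain a dugong. The sketch below shows what that could look like in TensorFlow; the directory layout, image size, and network architecture are illustrative assumptions, not her actual setup.

```python
# A minimal sketch (not Hodgson's actual pipeline) of training a TensorFlow
# classifier to flag aerial-photo tiles that contain a dugong.
import tensorflow as tf

IMG_SIZE = (128, 128)

# Assumes tiles cropped from the aerial photos have been sorted by hand into
# "tiles/dugong" and "tiles/no_dugong" folders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "tiles", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability a tile shows a dugong
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

Once trained, the same model can sweep through the remaining tens of thousands of photos and hand back only the tiles worth a human’s attention.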
The project is still in the early stages, but it hints at the widespread impact of deep learning over the past year. In 2016, this very old but newly powerful technology helped a Google machine beat one of the world’s top players at the ancient game of Go—a feat that didn’t seem possible just a few months before. But that was merely the most conspicuous example. As the year comes to a close, deep learning isn’t a party trick. It’s not niche research. It’s remaking companies like Google, Facebook, Microsoft, and Amazon from the inside out, and it’s rapidly spreading to the rest of the world, thanks in large part to the open source software and cloud computing services offered by these giants of the internet.
The New Translation
In previous years, neural nets reinvented image recognition through apps like Google Photos, and they took speech recognition to new levels via digital assistants like Google Now and Microsoft Cortana. This year, they delivered the big leap in machine translation, the ability to automatically translate from one language to another. In September, Google rolled out a new service it calls Google Neural Machine Translation, which operates entirely through neural networks. According to the company, this new engine has cut translation errors by 55 to 85 percent on certain language pairs.
Google trains these neural networks by feeding them massive collections of existing translations. Some of this training data is flawed, including lower quality translations from previous versions of the Google Translate app. But it also includes translations from human experts, and this buoys the quality of the training data as a whole. That ability to overcome imperfection is part of deep learning’s apparent magic: given enough data, even if some is flawed, it can train to a level well beyond those flaws.
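Mechanically, that training boils down to feeding an encoder-decoder network pairs of sentences: the source on one side, a known translation on the other, with the decoder learning to predict each word of the translation given the words before it. The toy sketch below shows the shape of that training loop; the tiny parallel corpus, the vocabulary handling, and the network sizes are illustrative assumptions, nothing like Google’s production system.

```python
# A toy encoder-decoder translation model (not Google's NMT system) trained
# on a handful of (source, target) sentence pairs.
import tensorflow as tf

# A tiny stand-in for the massive corpora of existing translations.
pairs = [("hello", "bonjour"), ("thank you", "merci"), ("good night", "bonne nuit")]

src_vec = tf.keras.layers.TextVectorization(output_sequence_length=6)
tgt_vec = tf.keras.layers.TextVectorization(output_sequence_length=6)
src_vec.adapt([s for s, _ in pairs])
tgt_vec.adapt(["[start] " + t + " [end]" for _, t in pairs])

src = src_vec([s for s, _ in pairs])
tgt = tgt_vec(["[start] " + t + " [end]" for _, t in pairs])

embed_dim, units = 32, 64

# Encoder reads the source sentence into a fixed-size state.
enc_in = tf.keras.Input(shape=(None,), dtype="int64")
enc_emb = tf.keras.layers.Embedding(src_vec.vocabulary_size(), embed_dim)(enc_in)
_, enc_state = tf.keras.layers.GRU(units, return_state=True)(enc_emb)

# Decoder predicts each target word from the previous ones (teacher forcing).
dec_in = tf.keras.Input(shape=(None,), dtype="int64")
dec_emb = tf.keras.layers.Embedding(tgt_vec.vocabulary_size(), embed_dim)(dec_in)
dec_out = tf.keras.layers.GRU(units, return_sequences=True)(dec_emb, initial_state=enc_state)
logits = tf.keras.layers.Dense(tgt_vec.vocabulary_size())(dec_out)

model = tf.keras.Model([enc_in, dec_in], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Shift the target by one step: the decoder sees words up to t, predicts word t+1.
model.fit([src, tgt[:, :-1]], tgt[:, 1:], epochs=50, verbose=0)
```

With enough real data, noisy or not, the same structure learns to translate far better than any one of its training examples.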
Mike Schuster, a lead engineer on Google’s service, is happy to admit that his creation is far from perfect. But it still represents a breakthrough. Because the service runs entirely on deep learning, it’s easier for Google to keep improving it. The company can concentrate on refining the system as a whole, rather than juggling the many small parts that characterized machine translation services in the past.
Meanwhile, Microsoft is moving in the same direction. This month, it released a version of its Microsoft Translator app that can drive instant conversations between people speaking as many as nine different languages. This new system also runs almost entirely on neural nets, says Microsoft vice president Harry Shum, who oversees the company’s AI and research group. That’s important, because it means Microsoft’s machine translation is likely to improve more quickly as well.
The New Chat
In 2016, deep learning also worked its way into chatbots, most notably the new Google Allo. Released this fall, Allo will analyze the texts and photos you receive and instantly suggest potential replies. It’s based on an earlier Google technology called Smart Reply that does much the same with email messages. The technology works remarkably well, in large part because it respects the limitations of today’s machine learning techniques. The suggested replies are wonderfully brief, and the app always suggests more than one, because, well, today’s AI doesn’t always get things right.
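Smart Reply–style systems typically rank a constrained set of short responses against the incoming message rather than generating free-form text, which is part of how they stay within today’s limits. The sketch below captures that ranking idea in miniature; the candidate replies and the bag-of-words stand-in for a trained neural encoder are assumptions made to keep the example self-contained, not Allo’s actual model.

```python
# A simplified sketch of the "suggest a few short replies" idea: score a fixed
# whitelist of responses against an incoming message and surface the top few.
import tensorflow as tf

CANDIDATE_REPLIES = ["Sounds good!", "Sorry, I can't.", "What time?", "Thanks!"]

# Stand-in encoder: in practice this would be a trained neural network; here a
# hashed bag-of-words vectorizer keeps the example self-contained.
vectorizer = tf.keras.layers.TextVectorization(output_mode="multi_hot", max_tokens=1024)
vectorizer.adapt(CANDIDATE_REPLIES + ["are we still on for dinner tonight"])

def embed(texts):
    return tf.math.l2_normalize(vectorizer(texts), axis=-1)

def suggest(message, k=3):
    # Cosine similarity between the message and every candidate reply.
    scores = tf.linalg.matvec(embed(CANDIDATE_REPLIES), embed([message])[0])
    best = tf.argsort(scores, direction="DESCENDING")[:k]
    return [CANDIDATE_REPLIES[i] for i in best.numpy()]

print(suggest("Are we still on for dinner tonight?"))
```

Returning several ranked options rather than a single answer is the same design choice the app makes: when the model is unsure, the human picks.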
Inside Allo, neural nets also help respond to the questions you ask of the Google search engine. They help the company’s search assistant understand what you’re asking, and they help formulate an answer. According to Google research product manager David Orr, the app’s ability to zero in on an answer wouldn’t be possible without deep learning. “You need to use neural networks—or at least that is the only way we have found to do it,” he says. “We have to use all of the most advanced technology we have.”
What neural nets can’t do is actually carry on a real conversation. That sort of chatbot is still a long way off, whatever tech CEOs have promised from their keynote stages. But researchers at Google, Facebook, and elsewhere are exploring deep learning techniques that help reach that lofty goal. The promise is that these efforts will provide the same sort of progress we’ve seen with speech recognition, image recognition, and machine translation. Conversation is the next frontier.
The New Data Center
This summer, after building an AI that cracked the game of Go, Demis Hassabis and his Google DeepMind lab revealed they had also built an AI that helps operate Google’s worldwide network of computer data centers. Using a technique called deep reinforcement learning, which underpins both their Go-playing machine and earlier DeepMind services that learned to master old Atari games, this AI decides when to turn on cooling fans inside the thousands of computer servers that fill these data centers, when to open the data center windows for additional cooling, and when to fall back on expensive air conditioners. All told, it controls more than 120 functions inside each data center.
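Deep reinforcement learning works by letting a neural network try actions, observe a reward signal, and gradually learn which action pays off in which state. The toy sketch below applies that loop to a made-up cooling problem; the simulated temperature dynamics, the reward, and the tiny Q-network are all illustrative assumptions and bear no relation to DeepMind’s actual controller.

```python
# A toy deep reinforcement learning loop: a small Q-network learns when to
# switch cooling on in a made-up server-temperature simulation.
import numpy as np
import tensorflow as tf

ACTIONS = [0, 1]  # 0 = fans off, 1 = fans on

def step(temp, action):
    """Toy dynamics: servers heat up; fans cool them down but cost energy."""
    temp = temp + 1.0 - (3.0 * action) + np.random.uniform(-0.5, 0.5)
    reward = -abs(temp - 25.0) - 0.5 * action  # stay near 25 C, penalize fan energy
    return temp, reward

q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(len(ACTIONS)),  # one Q-value per action
])
optimizer = tf.keras.optimizers.Adam(0.01)
gamma, epsilon = 0.95, 0.1

temp = 25.0
for t in range(2000):
    state = np.array([[temp / 50.0]], dtype=np.float32)
    # Epsilon-greedy: mostly follow the network, sometimes explore.
    if np.random.rand() < epsilon:
        action = int(np.random.choice(ACTIONS))
    else:
        action = int(np.argmax(q_net(state)[0]))
    next_temp, reward = step(temp, action)
    next_state = np.array([[next_temp / 50.0]], dtype=np.float32)
    target = reward + gamma * float(tf.reduce_max(q_net(next_state)))
    with tf.GradientTape() as tape:
        q = q_net(state)[0, action]
        loss = (q - target) ** 2
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    temp = next_temp
```

Even at this toy scale the structure is the same: the network’s only teacher is the reward signal, which is what lets such a system discover control strategies no one programmed by hand.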
As Bloomberg reported, this AI is so effective that it saves Google hundreds of millions of dollars. In other words, it pays for the cost of acquiring DeepMind, which Google bought for about $650 million in 2014. Now, DeepMind plans to install additional sensors in these computing facilities so it can collect more data and train this AI to even higher levels.
The New Cloud
As they push this technology into their own products and services, the giants of the internet are also pushing it into the hands of others. At the end of 2015, Google open sourced TensorFlow, and over the past year, this once-proprietary software spread well beyond the company’s walls, all the way to people like Amanda Hodgson. At the same time, Google, Microsoft, and Amazon began offering their deep learning tech via cloud computing services that any coder or company can use to build their own apps. Artificial intelligence-as-a-service may wind up as the biggest business for all three of these online giants.
Over the last twelve months, this burgeoning market spurred another AI talent grab. Google hired Stanford professor Fei-Fei Li, one of the biggest names in the world of AI research, to oversee a new cloud computing group dedicated to AI, and Amazon nabbed Carnegie Mellon professor Alex Smola to play much the same role inside its cloud empire. The big players are grabbing the world’s top AI talent as quickly as they can, leaving little for others. The good news is that this talent is working to share at least some of the tech it develops with anyone who wants it.
As AI evolves, the role of the computer scientist is changing. Sure, the world still needs people who can code software. But increasingly, it also needs people who can train neural networks, a very different skill that’s more about coaxing a result from the data than building something on your own. Companies like Google and Facebook are not only hiring a new kind of talent, but also reeducating their existing employees for this new future—a future where AI will come to define technology in the lives of just about everyone.