
Friday, September 23, 2016

Artificial Intelligence has become the next big thing – again


Back in 2012, a team at Google built a state-of-the-art artificial intelligence network and fed it ten million randomly selected images from YouTube. The computer churned through them, and announced that it kept finding these strange things with furry faces. It had, in other words, discovered cats.
Artificial intelligence has, all of a sudden, become the next big thing again. It is not so much sweeping across our world as seeping into it, with a combination of enormous computing power and the latest ‘deep learning’ techniques promising to give us better medical diagnoses, better devices, better recipes and better lives. Soon, it might even be able to give us new Beatles songs.
At the same time, however, we are growing increasingly alarmed about what it can — or might — do. Decades ago, Norbert Wiener, the father of cybernetics, warned:
The world of the future will be an ever more demanding struggle against the limitations of our own intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.
As fears grow about the automation of the labour market, many are asking the same question as Bertrand Russell, reviewing one of Wiener’s books back in 1951: ‘Are human beings necessary?’
Our conflicted, co-dependent relationship with our devices really took root, argues Thomas Rid in Rise of the Machines, in the second world war. Faster planes and better bombs meant that it was no longer possible for gunners just to point and shoot. To anticipate the enemy’s path, humans needed mechanical crutches: radar stations that could spot incoming targets, guns that could be automatically pointed at their predicted location, shells containing tiny radars of their own which would explode when they detected metal objects nearby. (These were arguably the most effective, and certainly the least known, wonder weapons of the war.)
Rid, a professor in the war studies department at King’s College London, is as good on the military stuff as you’d expect: his account of Russian attempts to hack America’s defence systems in the Clinton era is similarly definitive (and terrifying). The problem with Rise of the Machines is that the journey between these two points is more of a meander.
Wiener’s theories on human-machine interaction (he took ‘cybernetics’ from the Greek kybernan, meaning to navigate, steer, or govern) were derived from his wartime work on anti-aircraft fire. Rid attempts to trace his influence to the techno-hippies of the 1970s and 1980s, or to the writings of William Gibson. It’s all interesting stuff, but it’s hard to see what cybernetics, cyberspace and cyberwarfare have in common apart from their nebulously defined prefix. Rid’s introduction semi-acknowledges the problem, but defends the book on the grounds that it is a work about a myth. Sadly, it is a myth that too often ends up, in Rid’s own words, obscuring rather than clarifying.
There is a much crisper focus to Margaret A. Boden’s AI, a brief introduction to artificial intelligence (which also offers a clearer definition of cybernetics in one throwaway paragraph than Rid does in 400 pages). Sadly, those seeking to understand the modern world will probably emerge equally baffled. Boden, as an academic in the field of AI, really knows her stuff, and her book gives you a clear understanding of the various kinds of AI and of their enduring limitations, in particular when it comes to the emergence of a self-aware Skynet or HAL 9000 clone that will scour us puny humans from the planet. But once she gets technical, she offers perilously little purchase for the general reader. (There are also brackets. Lots of them.)
In short, if you’re interested in learning more about our robotic soon-to-be overlords, your best bet is Thinking Machines by the British journalist Luke Dormehl. Yes, it has its flaws — a feature of Dormehl’s writing is an inability to explain that the first AI conference was in 1956 without adding that this was the year when
Elvis Presley was scandalising audiences with his hip gyrations, Marilyn Monroe married playwright Arthur Miller, and President Dwight Eisenhower authorised ‘In God we trust’ as the US national motto.
But overall, this is an accessible primer to the state of the digital art — how the field of AI grew and shrank and grew again, what the robots’ ever-increasing strengths are, and where they are still weak. He also teases out, as do Rid and Boden, the ways in which it is impossible to separate machines from their masters, how we bring our own fleshy biases to their design and work.
Forecasts of an AI takeover have been with us from the dawn of the computing age. Back in 1960, the computing pioneer Herbert Simon announced that ‘duplicating the problem-solving and information-handling capabilities of the brain is not far off; it would be surprising if it were not accomplished within the next decade’. In the 1970s, one researcher was chastised by a subordinate for his giddy prophecy about how soon robots would be picking up our socks. ‘Notice all the dates I’ve chosen were after my retirement,’ he retorted.
Today’s forecasts of an AI revolution may be similarly premature — but perhaps not. During our long dance with our artificial partners, humans and robots have moved closer and closer together, become more and more entwined. Surprisingly soon, we may find them starting to take the lead.
