IEEE pushes for ethics in AI design
Anyone who has read science fiction can tell you that killer robots are a problem. The Institute of Electrical and Electronics Engineers, better known as the IEEE, wants to do something about it.
On Tuesday, the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems released a report aimed at encouraging engineers and researchers to think ethically when they’re building new intelligent software and hardware.
The report, titled “Ethically Aligned Design,” aims to bring concern for human well-being into the creation of artificial intelligence. It clocks in at over a hundred pages, but a few key themes surface: calls for more transparency about how automated systems work, for increased human involvement, and for care about the consequences of system design.
Ethical considerations in artificial intelligence and automated systems will only grow in importance as companies and governments lean more heavily on the technology. While much of the pop-culture discussion revolves around hypothetical superintelligent machines, algorithms already in use can have significant impacts on business and political decision making.
Raja Chatila, the chair of the initiative, said in an interview that he didn’t think engineers and companies were aware of the issues at play.
“I think — and this is my personal opinion — most engineers and companies are not yet really aware or really open to those ethical issues,” he said. “It’s because they weren’t trained like that. They were trained to develop efficient systems that work, and they weren’t trained in taking into account ethical issues.”
One of the key issues already showing up is algorithmic bias. Computer systems can and do reflect the worldview of their creators, which becomes a problem when those built-in values don’t match a customer’s.
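To make that concrete, here is a minimal sketch of how bias creeps in; the loan history and decision rule below are entirely made up, but the pattern is real: a system that learns from skewed past decisions simply automates the skew.

    # Hypothetical loan history as (applicant group, approved) pairs. Group B
    # was approved far less often for reasons unrelated to creditworthiness.
    history = [("A", True)] * 80 + [("A", False)] * 20 \
            + [("B", True)] * 30 + [("B", False)] * 70

    def approval_rate(group):
        decisions = [approved for g, approved in history if g == group]
        return sum(decisions) / len(decisions)

    # A naive system "learns" to approve a group only if past approvals were
    # common, turning the historical skew into automated policy.
    learned_policy = {g: approval_rate(g) > 0.5 for g in ("A", "B")}
    print(learned_policy)  # {'A': True, 'B': False}

Nothing in that code is malicious; the bias arrives with the training data.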
When it comes to transparency, the report repeatedly calls for automated systems that can report why they made particular decisions. That’s difficult to do with some of the state-of-the-art machine learning techniques in use today, deep neural networks chief among them.
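To get a sense of what “reporting why” can look like, here is a minimal sketch using scikit-learn and made-up loan features. A shallow decision tree can print the rules behind every decision it makes; a deep neural network trained on the same task offers no comparably readable account of itself.

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical loan applications as [income, debt]; 1 = approve, 0 = deny.
    X = [[60, 10], [80, 5], [20, 40], [30, 35], [90, 2], [25, 30]]
    y = [1, 1, 0, 0, 1, 0]

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # An interpretable model can state its reasoning as human-readable rules.
    print(export_text(model, feature_names=["income", "debt"]))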
What’s more, companies commonly keep the inner workings of their machine learning systems under wraps, which also flies in the face of a push for transparency.
Transparency is key not only for understanding things like image recognition algorithms but also for the future of how we fight wars. The IEEE report’s discussion of autonomous weapon systems is full of calm yet terrifying language, such as its description of effectively anonymous death machines leading to “unaccountable violence and societal havoc.”
To a similar end, the report pushes for greater human involvement in intelligent systems. Among other things, the IEEE group wants people to be able to appeal to another human when an autonomous system makes a decision that affects them.
In the future, this work should lead to IEEE standards around ethics and intelligent systems. One such standard, on addressing ethical concerns in system design, is already in the works, and two more are on the way.
But even if this process produces standards, organizations will still have to choose to adopt them. Ethical design could turn out to be a marketing differentiator: a robot that’s 65 percent less likely to murder someone should be more appealing to customers.
It’s also possible that the tech industry will simply sidestep the ethics question and continue on its merry way.
The IEEE team is now opening the document up for feedback from the public [PDF]. Anyone can submit thoughts on its contents, though the group is looking for referenced, actionable feedback that’s less than two pages long. Submissions are due on March 6.