The intuitive route to ethical machine learning is ‘fairness through unawareness’ – the same merit-based approach which, in many countries, means that employers may not ask about a potential employee’s gender, religion or race, but must instead evaluate candidates on their relevant skills for the post.
If the fact that a loan applicant is female is not included in the data set, the application should, in theory, be judged without reference to it. In practice this approach is flawed, because gender can be inferred from other features that are included: if, for example, the applicant is a single parent and 82% of single parents are female, there is a high probability that the applicant is female.
This is called ‘redundant encoding’: even when a specific attribute is excluded from the data set, it may still be present by proxy in a combination of other, seemingly relevant features.
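For illustration, here is a minimal sketch of redundant encoding using entirely synthetic data and made-up correlations (the feature names and probabilities are assumptions, not figures from any real study): a classifier that never sees the gender column can still recover it from proxy features such as single-parent status.

```python
# Hypothetical sketch of redundant encoding: the hidden 'gender' attribute is
# never given to the model, yet it can be recovered from correlated proxies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: 1 = female. This column is the *target*, not a feature.
gender = rng.integers(0, 2, size=n)

# Proxy features loosely correlated with gender (illustrative numbers only).
single_parent = rng.binomial(1, np.where(gender == 1, 0.25, 0.05))
part_time     = rng.binomial(1, np.where(gender == 1, 0.40, 0.15))
income        = rng.normal(np.where(gender == 1, 42_000, 48_000), 8_000)

X = np.column_stack([single_parent, part_time, income])
X_train, X_test, y_train, y_test = train_test_split(X, gender, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Accuracy well above the 50% chance level: the 'unaware' feature set
# still encodes the attribute that was supposedly removed.
print(f"gender recovered from proxies with accuracy {clf.score(X_test, y_test):.2f}")
```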
On the surface, discrimination by machines seems like a contradiction in terms. An algorithm is a mathematical construct and, as such, should not logically produce discriminatory outcomes. In practice, however, algorithms can rely on flawed inputs, logic and probabilities, as well as on the unintentional biases of their human creators. As data is turned into information and predictors, and decisions are made on the basis of those predictions, care should be taken to prevent discrimination from creeping into the results of machine-driven analysis.
With this in mind, the Brain Team at Google began looking into new ways to ensure that the mistakes a model makes do not disproportionately affect members of a protected class. Their paper, “Equality of Opportunity in Supervised Learning,” provides a step-by-step framework for testing existing algorithms for problematic, discriminatory outcomes, and for adjusting a machine learning model so that those outcomes are prevented.
Ironically, the team’s research emerges just as an age discrimination lawsuit against Google moves forward (after certification by the Northern California District Court).
An important aspect of the proposed framework is that it shifts the cost of poor predictions onto the decision maker, who then bears the responsibility of investing in more accurate prediction systems.
The framework the group proposes offers a way to check the results of a predictor and highlight potential concerns, along with instructions for adjustments that strike a balance between accuracy and non-discriminatory outcomes.
“At the heart of our approach is the idea that individuals who qualify for a desirable outcome should have an equal chance of being correctly classified for this outcome,” writes Moritz Hardt, member of the Google Brain Team and co-author of the paper. “We call this principle equality of opportunity in supervised learning.”
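To make the principle concrete, the sketch below (again on synthetic data) first audits a scoring model by comparing true positive rates across groups – the rate at which genuinely qualified individuals are approved – and then searches for group-specific thresholds that bring those rates closer together. This is only a simplified illustration of the equal-opportunity idea, not the paper’s actual algorithm, which derives optimal (possibly randomized) group-specific thresholds from ROC curves; all function names and numbers here are assumptions for demonstration.

```python
# Simplified sketch of the 'equal opportunity' criterion: among truly
# qualified individuals (y == 1), each group should be approved at the
# same rate. Not the paper's full method; illustration only.
import numpy as np

def true_positive_rate(y_true, y_pred):
    qualified = y_true == 1
    return y_pred[qualified].mean() if qualified.any() else float("nan")

def audit_equal_opportunity(scores, y_true, group, threshold=0.5):
    """Report the TPR for each group at a shared decision threshold."""
    return {g: true_positive_rate(y_true[group == g], scores[group == g] >= threshold)
            for g in np.unique(group)}

def fit_group_thresholds(scores, y_true, group, target_tpr=0.8):
    """Pick, per group, the threshold whose TPR comes closest to a common target."""
    thresholds = {}
    for g in np.unique(group):
        s, y = scores[group == g], y_true[group == g]
        candidates = np.linspace(0, 1, 101)
        tprs = np.array([true_positive_rate(y, s >= t) for t in candidates])
        thresholds[g] = candidates[np.nanargmin(np.abs(tprs - target_tpr))]
    return thresholds

# Synthetic example: a scoring model that is systematically harsher on group 1.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=5_000)
y_true = rng.binomial(1, 0.5, size=5_000)
scores = np.clip(rng.normal(0.6 * y_true - 0.1 * group + 0.2, 0.2), 0, 1)

# The audit reveals unequal TPRs at a shared threshold; the adjusted,
# group-specific thresholds move the groups toward the same approval rate
# among qualified individuals.
print("TPR per group at a shared threshold:", audit_equal_opportunity(scores, y_true, group))
print("group-specific thresholds targeting equal TPR:", fit_group_thresholds(scores, y_true, group))
```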