Back to basics: An Amazon robot in action in Florida © NurPhoto via Getty Images

When it comes to bias and artificial intelligence, there is a common belief that algorithms are only as good as the numbers plugged into them. But by concentrating entirely on data, the debate over algorithmic bias has ignored two aspects of the problem: the deep limitations of existing algorithms and, more importantly, the role of human problem solvers.

Powerful as they may be, most of our algorithms only mine correlational relationships without understanding anything about them. My research has found that massive data sets on jobs, education and loans contain more spurious correlations than meaningful causal relationships. It is ludicrous to assume these algorithms will solve problems that we do not understand.
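To see why scale alone does not help, consider a toy sketch in Python (my illustration, not the author’s research, built entirely on synthetic data): an outcome generated independently of thousands of candidate features will still correlate with many of them by chance, and a correlation-only screen cannot tell the difference.

```python
# Toy illustration: an outcome generated independently of every feature
# still "correlates" with many of them by chance. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_features = 500, 2000            # hypothetical CV-scale dataset
X = rng.normal(size=(n_people, n_features))
hired = rng.integers(0, 2, size=n_people)   # outcome has no link to X at all

# Correlate each feature with the outcome and apply a naive ~2-sigma screen.
corrs = np.array([np.corrcoef(X[:, j], hired)[0, 1] for j in range(n_features)])
threshold = 2 / np.sqrt(n_people)
print(f"{int((np.abs(corrs) > threshold).sum())} of {n_features} unrelated "
      f"features look 'predictive' by chance")
```

With a conventional two-sigma cut-off, roughly one in twenty unrelated features clears the bar by luck alone; multiply that across thousands of CV-derived features and the spurious signals quickly crowd out the real ones.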

Without the insight of human problem solvers driving our questions, “better numbers” mean nothing and our algorithms will never do more than reflect our own biases.

Amazon’s failed attempt to design a fairer hiring algorithm highlights the misconception about better numbers. Trained on Amazon’s hiring and promotion data, the AI did not want to hire women. Amazon’s team responded by “de-biasing” their data, the emerging standard for “ethical AI”.

They removed known gender markers, such as the word “women’s” on candidate CVs and attendance at all-women’s colleges. Despite these changes, the dominant statistical relationship in the data remained: men were hired and promoted at higher rates than women.

Even without obvious gender markers in the data, the AI found what the scrubbing was meant to hide: it sniffed out subtle correlates of being a woman and dismissed those candidates.
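The mechanism is proxy leakage, and it can be reproduced in miniature. The sketch below is my own illustration on synthetic data using scikit-learn; it is not Amazon’s system, and every variable name is hypothetical. The gender column is dropped from the training set, but a feature that merely correlates with gender remains, and an ordinary logistic regression recovers the historical bias through it.

```python
# Minimal sketch of proxy leakage on synthetic data (not Amazon's system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
is_woman = rng.integers(0, 2, size=n)

# A hypothetical CV feature that merely correlates with gender (the "proxy"),
# plus a genuinely job-relevant skill score.
proxy = is_woman + rng.normal(scale=0.7, size=n)
skill = rng.normal(size=n)

# Historical labels are biased: at equal skill, women were hired less often.
hired = (skill - 1.2 * is_woman + rng.normal(size=n) > 0).astype(int)

# "De-biased" training set: the gender column is dropped, the proxy is kept.
X = np.column_stack([proxy, skill])
scores = LogisticRegression().fit(X, hired).predict_proba(X)[:, 1]

print("mean predicted score, men:  ", round(float(scores[is_woman == 0].mean()), 3))
print("mean predicted score, women:", round(float(scores[is_woman == 1].mean()), 3))
# The gap persists: the model reconstructs gender from the proxy.
```

Scrubbing the explicit marker changes nothing, because the bias lives in the historical labels the model is asked to reproduce, not in any single column.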

Some brilliant people were involved in Amazon’s hiring AI. Similarly smart and well-intentioned groups have produced machine vision systems that cannot see black faces. How could they have missed the mark so badly?

Amazon’s algorithm was trained to seek out factors that correlate with employee success in the company. The end goal of making hiring fairer was a good one, but it was naive to assume that an algorithm sensitive only to correlations could resist the overwhelming historical role that gender plays in hiring.

Like essentially every other tech company, Amazon had historically hired and promoted men at higher rates than women. By training the AI on its own male-inclined hiring history, Amazon created an AI that was as gender-biased as the company had been. No amount of data-scrubbing could stop the algorithm from learning its owner’s historical preference for male over female applicants. Amazon’s AI solved the problem it was given, predicting which employees were most likely to be promoted, but not the problem Amazon needed to solve.

There are decades of research exploring the causal factors of success on the job. My own work using machine learning in hiring began with a deep read of that research, then a focus on building AI systems to unearth those known qualities.

For each factor, we would then test how well it generalised across race or gender, first in isolation, then as part of an integrated system. It was laborious and frustrating, but successful. It was not a computer science project but an exercise in human problem-solving: exploring, asking better questions and learning from failed solutions. AI is a powerful tool, but still only a tool.
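A stripped-down version of that per-factor check might look like the following. This is a hedged sketch under my own assumptions: the function name generalises_across_groups, the choice of AUC as the metric and the max_gap threshold are all illustrative, not the author’s actual pipeline.

```python
# Hedged sketch of a per-group generalisation check; names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def generalises_across_groups(df, factor, outcome, group, max_gap=0.05):
    """True if the factor's within-group AUC varies by less than max_gap."""
    aucs = {g: roc_auc_score(sub[outcome], sub[factor])
            for g, sub in df.groupby(group)}
    print({g: round(a, 3) for g, a in aucs.items()})
    return max(aucs.values()) - min(aucs.values()) < max_gap

# Synthetic example: a factor that is predictive for group A but noise for B.
rng = np.random.default_rng(2)
n = 5_000
group = rng.choice(["A", "B"], size=n)
factor = rng.normal(size=n)
signal = np.where(group == "A", factor, 0.0)
outcome = (signal + rng.normal(size=n) > 0).astype(int)

df = pd.DataFrame({"factor": factor, "outcome": outcome, "group": group})
print("generalises:", generalises_across_groups(df, "factor", "outcome", "group"))
```

A factor that predicts the outcome overall but only works within one group is exactly the kind of coincidence such a check is meant to catch before it reaches an integrated system.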

The problem is not biased data. The problem is our flawed belief that, with enough data, our current algorithms can substitute for human problem-solving. Addressing issues of AI and fairness rests on the fundamental idea that if you do not know how to solve a problem, AI will not be able to solve it for you. No company has solved bias in hiring and the workplace, and the fact is that our new AI overlords are just as messed up as we are.

There are changes coming in machine learning. Increasing numbers of researchers and practitioners are calling for a renewed focus on causal inference, with AI systems capable of asking “why” and directing their own search for an answer. Reinforcement-learning algorithms, such as those beating the world’s best Go and StarCraft players, are a step in that direction.

At my lab, we integrate large-scale machine learning with econometrics, producing systems capable of running their own enormous experiments. Others are looking at the inquisitive minds of infants for algorithmic inspiration. This will be an enormous leap in AI which, while not guaranteeing fairness, will at least recommend a job or a loan based on cause rather than coincidence.
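The difference between cause and coincidence is easy to make concrete. The toy below is my own illustration, not the lab’s system: in observational data a confounder makes a useless treatment look effective, while randomly assigning the same treatment reveals that it does nothing.

```python
# Toy contrast between observational correlation and a randomised experiment.
# All data are synthetic; the "treatment" genuinely has no effect on repayment.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
credit_history = rng.normal(size=n)   # the confounder

# Observational world: people with good histories are more likely to receive
# the treatment AND more likely to repay, so the treatment looks effective.
treated_obs = (credit_history + rng.normal(size=n) > 0).astype(int)
repaid = (credit_history + rng.normal(size=n) > 0).astype(int)
naive = repaid[treated_obs == 1].mean() - repaid[treated_obs == 0].mean()

# Experimental world: same outcome process, but treatment assigned at random.
treated_rct = rng.integers(0, 2, size=n)
rct = repaid[treated_rct == 1].mean() - repaid[treated_rct == 0].mean()

print(f"naive observational 'effect': {naive:+.3f}")   # large and spurious
print(f"randomised estimate:          {rct:+.3f}")     # close to zero
```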

For now, we must accept that it starts with training human problem solvers to ask the right questions. We must reinvest in scientific research on the causal factors within our grasp so that our algorithms do not simply replicate our implicit biases.

If you already know how to solve a problem, AI can fundamentally transform the economics of your solution. But if you think our current algorithms can discover solutions you cannot, we have a serious problem.

Vivienne Ming is a theoretical neuroscientist and entrepreneur
