I was shocked to read an article in the Telegraph last week about a man whose passport photo was automatically rejected because a computer program mistook his lips for an open mouth. Isn't it outrageous that the government would build and deploy a racist system? The thing is, no one designed the program to behave this way; the AI learned it on its own.

This is not an isolated occurrence. I remember seeing a similar story a couple of years ago, when an automatic bathroom soap dispenser was unable to detect the hand of a dark-skinned person. A quick Google pulls up other incidents where AI has developed a distinctly uninclusive streak. Amazon scrapped an AI recruitment tool, built at its Edinburgh office from 2014, after it taught itself to favour male job candidates over female applicants. The tool was turned off when the company realised that it was penalising CVs which included the word "women's," such as "women's chess club captain." It also reportedly downgraded graduates of two all-women's colleges. And Google chief executive Sundar Pichai stressed the importance of avoiding bias in AI in a talk earlier this year, giving the example of an AI system for detecting skin cancer: trained on incomplete data that ignored certain ethnicities, it was less able to identify skin cancer in those groups.

So what causes this bias? AI systems are only as good as the data we put into them. In Amazon's case, the problem stemmed from the fact that the system was trained on CVs submitted over a 10-year period, most of which came from men. In the case of the passport facial detection, we can assume there wasn't significant diversity in the images used to train and test the system.
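To make this concrete, here is a minimal sketch with entirely hypothetical data (nothing to do with the real Home Office system): a toy one-feature "face detector" whose decision threshold is tuned for best overall accuracy on a training set in which dark-skinned faces make up only 10% of the examples. The threshold that scores best on average ends up missing every face in the under-represented group.

```python
# Hypothetical illustration: a one-feature "face detector" tuned on skewed data.

def best_threshold(samples):
    """Pick the brightness cut-off that maximises overall accuracy on
    `samples`, a list of (brightness, is_face) pairs."""
    candidates = sorted({b for b, _ in samples})

    def accuracy(t):
        return sum((b >= t) == is_face for b, is_face in samples) / len(samples)

    return max(candidates, key=accuracy)

# Training set: 90% light-skinned faces (high brightness values),
# 10% dark-skinned faces, plus non-face backgrounds in between.
light_faces = [(0.8, True)] * 45 + [(0.75, True)] * 45
dark_faces = [(0.35, True)] * 10
non_faces = [(0.5, False)] * 50
train = light_faces + dark_faces + non_faces

t = best_threshold(train)  # the accuracy-maximising cut-off

def detect(brightness):
    """Classify a sample as a face using the tuned threshold."""
    return brightness >= t

# The chosen threshold sits above the dark-skinned faces' brightness,
# so every one of them is rejected, even though overall accuracy is high.
```

Nothing here is malicious: the system simply maximised accuracy on the data it was given, and a group that makes up 10% of the training set barely moves that number. That is exactly how a system can look excellent on average while consistently failing people it rarely saw during training.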

Joshua Bada, 28, shared images online of his passport photo being rejected by the government system. He said: "When I saw it, I was a bit annoyed but it didn't surprise me because it's a problem that I have faced on Snapchat with the filters, where it hasn't quite recognised my mouth, obviously because of my complexion and just the way my features are."

The Race Equality Foundation said it believes the system was not tested properly to see if it would actually work for black or ethnic minority people, calling it "technological or digital racism".

Samir Jeraj, the charity's policy and practice officer, said: "Presumably there was a process behind developing this type of technology which did not address issues of race and ethnicity and as a result, it disadvantages black and minority ethnic people."

The “AI: Ethics Into Practice” report suggests that diverse teams work on AI in order to head off problems with how it is used: diverse teams are more likely to spot gaps in data and to challenge assumptions that could otherwise bake unfair bias into AI systems.

As the report puts it: “When it comes to AI, businesses who prioritise fairness and inclusion are more likely to create algorithms that make better decisions, giving them the edge.”