As Artificial Intelligence (AI) becomes more ingrained in our daily lives, we see not only its potential for good but also its potential for harm. In fact, there have been plenty of recent cautionary headlines about how AI can all too easily “learn” to be biased without the proper safeguards in place.
What are Some Examples of Bias in AI?
Google and Gemini
Recently, Google came under fire over the image generation feature of its AI tool, Gemini. Users began to complain when images generated by Gemini depicted non-white subjects in historically white contexts, such as Vikings or the Pope. The tool further enraged users by generating text that drew comparisons between current tech celebrities and Hitler.
Sundar Pichai, Google’s CEO, addressed the controversy head-on, saying, “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.”
Tech expert Reed Albergotti noted that the problem lay more with the guardrails than with the program itself.
“The problem is not with the underlying models themselves, but in the software guardrails that sit atop the model. This is a challenge facing every company building consumer AI products — not just Google.” He goes on to explain that the issue likely arose from attempts to remove bias: the programmers overcompensated, and historical accuracy went out the window.
Amazon and Gender Bias
In 2018, Amazon had its own issues when it was found that the AI it was testing for potential use in hiring was discriminating against women. Specifically, applicants were downgraded if the word “women” appeared in their resume or cover letter. Even a mention of all-women’s colleges or female-dominated sports, like softball, was enough to seemingly disqualify a candidate.
Despite attempts to remove the bias, Amazon ultimately scrapped the project before rolling it out to the company at large.
Assessment Provider and Facial Recognition
A major U.S. assessment provider touted facial expression analysis as a key selling point of its interview screening platform. This did not come without controversy: in 2019, the Electronic Privacy Information Center (EPIC) filed a complaint with the FTC over the platform’s use in hiring.
Even though the provider vehemently denied any bias in its facial recognition software, it eventually shelved the feature, claiming that while it added value, it “wasn’t worth the concern.”
And there is good reason to be concerned.
First off, research has shown that you can’t judge someone’s personality and emotions based on facial expressions alone.
According to Lisa Feldman Barrett, University Distinguished Professor of Psychology at Northeastern, “The fact that people misguess what facial movements mean is not new. What’s new here is that we are showing that the evidence never suggested that facial expressions are universal, despite the claims that are being made by some scientists and by many companies.”
What is more, research from Harvard University demonstrated that facial recognition software decreased in accuracy once non-white and non-male subjects were introduced. AI models across numerous companies showed the highest error rates for dark-skinned women.
How can Bias in AI be Mitigated?
We’ve touched on a part of this solution in a previous article discussing the gender gap in tech. Namely, there needs to be more diversity in those who are developing AI.
“[A]s algorithms become more and more important, if we don’t have women saying, ‘Hey, hang on a minute, what’s that algorithm doing?’ we have problems,” stated Athene Donald at a recent talk at the Royal Institution. The same goes for non-white developers: their experiences and views can give AI the broader perspective it needs to benefit humanity as a whole.
Additionally, those algorithms need to be trained on diverse data sets. As AI has been developed, it has largely been built on a “white male as the norm” approach, essentially reflecting the people who were in the room creating it. Allowing the algorithms to learn from more diverse data should give them a broader view of the future data they analyze.
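To make that concrete, here is a minimal sketch of what a pre-training representation audit might look like. The column name, threshold, and toy data are all hypothetical; real audits go well beyond a head count, but checking how groups are represented before training is a common first step.

```python
# Illustrative sketch: auditing the demographic makeup of a training set
# before model training. Column names, threshold, and data are hypothetical.
import pandas as pd

def audit_representation(df: pd.DataFrame, column: str, min_share: float = 0.10) -> pd.Series:
    """Report each group's share of the data and flag underrepresented groups."""
    shares = df[column].value_counts(normalize=True)
    for group, share in shares.items():
        flag = "UNDERREPRESENTED" if share < min_share else "ok"
        print(f"{column}={group}: {share:.1%} ({flag})")
    return shares

# Example with toy data: a heavily skewed training set
data = pd.DataFrame({"gender": ["F", "M", "M", "M", "M", "M", "M", "M", "M", "M"]})
audit_representation(data, "gender")
```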
A willingness to be transparent can also hold those developing the technology accountable. White-box platforms, such as Talent Select AI, are designed to stand up to scrutiny. Clients can trust that the results they receive are free from bias, because the platform can explain and validate how each conclusion and recommendation is reached.
The Talent Select AI Approach
Talent Select AI was designed specifically to remove bias from the hiring process and has been thoroughly validated to demonstrate an absence of adverse impact on any protected groups.
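For context, “adverse impact” has a standard operational test in U.S. hiring: the four-fifths (80%) rule, under which each group’s selection rate should be at least 80% of the most-selected group’s rate. A minimal sketch of that calculation, using hypothetical numbers rather than any real validation data:

```python
# Minimal sketch of the four-fifths (80%) rule for adverse impact.
# The selection counts below are hypothetical, not real validation data.

def adverse_impact_ratios(selected: dict, applicants: dict) -> dict:
    """Return each group's selection rate relative to the highest-rate group."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

applicants = {"group_a": 100, "group_b": 100}
selected = {"group_a": 50, "group_b": 38}

for group, ratio in adverse_impact_ratios(selected, applicants).items():
    status = "passes" if ratio >= 0.8 else "fails"
    print(f"{group}: impact ratio {ratio:.2f} ({status} the four-fifths rule)")
```

Here group_b’s impact ratio is 0.76, below the 0.8 threshold, which would flag the selection procedure for further review.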
“Talent Select AI doesn’t rely on historical data, facial recognition, vocal inflection, or even demonstrated emotion to evaluate job applicants,” explains Chief Technology Officer Will Rose.
“Instead, it analyzes interview transcripts and looks for words that reflect key professional competencies, traits, and motivations that are predictive of workplace success — as a result, it’s able to help mitigate and remove human bias from the candidate evaluation process.”
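To give a rough feel for transcript-based analysis, here is a heavily simplified, hypothetical sketch of scoring an interview transcript against competency-related word lists. This is not Talent Select AI’s actual model, which relies on validated psychometric measures; it only illustrates the general idea of deriving signals from words rather than faces or voices.

```python
# Heavily simplified illustration of transcript-based competency scoring.
# The lexicon and scoring are hypothetical, not Talent Select AI's method.
import re
from collections import Counter

LEXICON = {
    "leadership": {"led", "mentored", "coordinated", "delegated"},
    "analysis": {"analyzed", "measured", "modeled", "evaluated"},
}

def score_transcript(transcript: str) -> dict:
    """Count lexicon hits per competency in an interview transcript."""
    tokens = Counter(re.findall(r"[a-z']+", transcript.lower()))
    return {trait: sum(tokens[w] for w in words) for trait, words in LEXICON.items()}

print(score_transcript("I led a team of five and analyzed quarterly results."))
# -> {'leadership': 1, 'analysis': 1}
```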
Our patent-pending platform was developed by a diverse team of renowned Industrial-Organizational (I-O) psychologists and data scientists. Together they developed, tested, and validated Talent Select AI to accurately measure the Big Five personality traits, the Great Eight professional competencies, and Talent Select AI’s proprietary Motivational traits, right from the transcript of a standard job interview. This approach ensures that all genders and ethnicities are measured consistently and fairly, and it lets organizations evaluate candidates based on the specific competencies and traits needed for success in the job.
At Talent Select AI, we're committed to providing ethical, explainable AI across all of our products. If any questions arise, our transparent, white-box AI makes it possible to understand and monitor how and why each individual recommendation is made.
Reach out today to see how we do it.