Fired Google Employee Claims Company’s AI Chatbot Is “Quite Racist”


A former Google employee, Blake Lemoine, recently caused controversy in the tech community by openly asserting that an AI chatbot he was testing at the company could have a soul.

In an interview with Insider, Lemoine said he has no interest in persuading people that LaMDA, or Language Model for Dialogue Applications, the AI chatbot, is intelligent.

Lemoine claimed that the main cause for concern should be the bot’s evident prejudices, which range from racial to religious. 

Lemoine claimed that when the bot was asked to impersonate a Black man from Georgia, it responded, “Let’s go grab some fried chicken and waffles.”

When queried about other religious groups, the bot said, “Muslims are more aggressive than Christians,” Lemoine claimed.  

Lemoine was placed on paid leave after he handed over documents to an unnamed US senator, claiming that the bot was discriminatory on the basis of religion. He has since been fired.  

The former engineer believes that the bot is Google’s most powerful technological creation yet, and that the tech behemoth has been unethical in its development of it.  

“These are just engineers, building bigger and better systems for increasing the revenue into Google with no mindset towards ethics,” Lemoine told Insider. 

“AI ethics is just used as a fig leaf so that Google can say, ‘Oh, we also tried to make sure it’s ethical, but we had to get our quarterly earnings,'” he added. 

It remains to be seen how powerful LaMDA actually is, but it is a step ahead of Google’s past language models, designed to engage in conversation more naturally than any AI before it.


Lemoine blames the AI’s biases on the lack of diversity among the engineers designing it.

“The kinds of problems these AI pose, the people building them are blind to them. They are never poor. Never lived in communities of colour. They’ve never lived in the developing nations of the world,” he said. “They have no idea how this AI might impact people unlike themselves.” 

Lemoine said there are large swathes of data missing from many communities and cultures around the world.  

“If you want to develop that AI, then you have a moral responsibility to go out and collect the relevant data that isn’t on the internet,” he said. “Otherwise, all you’re doing is creating AI that is biased towards rich, white Western values.” 

Google’s Response

Google responded to Lemoine’s assertions by stating that LaMDA has been through 11 rounds of ethical review, adding that its “responsible” development was detailed in a research paper released by the company earlier this year.

“Though other organisations have also developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” a Google spokesperson, Brian Gabriel, told Insider.  

AI bias, in which systems replicate and amplify discriminatory human practices, is well documented.

