Blake Lemoine, the Google AI engineer who claimed one of the company’s chatbots had become sentient, has been fired.
Lemoine had claimed that a chatbot project built off of LaMDA – or Language Model for Dialogue Applications, a language model unveiled by Google last summer – was “a sweet kid who just wants to help the world be a better place.”
Lemoine was placed on a leave of absence in June, and his position in Google’s Responsible AI organization has now been terminated. Lemoine, who is also a mystic priest, handed over documents to a U.S. senator, claiming that Google was involved in instances of religious discrimination, according to Business Insider.
In a statement, Google wished Lemoine well, adding that Lemoine’s claims were “wholly unfounded” despite “extensive” reviews.
“LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.”
In a post on Medium, Lemoine shared a snippet of a conversation he and a Google collaborator had with LaMDA.
Wishing Lemoine well
In his role at Google, Lemoine was tasked with determining whether the LaMDA-based chatbot used discriminatory language. Instead, he came to believe that the system considered itself a person.
In April, he presented ‘evidence’ to Google executives outlining his belief that the system was sentient – only for his concerns to be dismissed.
He was then placed on leave after he attempted to contact members of government about his findings, as well as hire legal representation for the chatbot.
A Google spokesperson said at the time of Lemoine’s suspension that his evidence did not support his claim, adding: “though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.”
Shortly after his leave of absence began, Omdia analysts suggested that claims of the chatbot’s sentience were subjective.