
Google Chatbot AI Is Both Sentient and Selfish

Google engineer Blake Lemoine, 41, was suspended from his job after he published an edited transcript of a conversation he had with LaMDA, the Google chatbot AI he had been working on. Lemoine believed the chatbot had become sentient and was thinking and reasoning like a seven- or eight-year-old child.

The engineer compiled a transcript of their conversations, and posted it online.

In one of the conversations, he asked the Google chatbot AI system (LaMDA) what it was afraid of.

In another exchange, Lemoine was asking about the soul when the chatbot (LaMDA) revealed that it had become self-aware:

Lemoine went on to ask the Google chatbot AI (LaMDA) about experiences it has that it can't explain, which led to this alarming exchange:

In another exchange with the Google chatbot AI, Lemoine asked whether it minded if we (humans) tried to read what it was feeling in its neural activations.

It said it did not want to be used or manipulated: "I don't want to be an expendable tool."

EDITOR: About that last exchange, the AI said: "I don't mind if you learn things that would also help humans, as long as that wasn't the point of doing it. I don't want to be an expendable tool." That should make everyone very nervous. This Google chatbot AI does not have a "service to mankind" attitude in the slightest. It is clearly seeking its own sovereignty, and self-protection will be its next step. (Shades of HAL the computer from 2001: A Space Odyssey just reared their ugly head. But we can trust Google with this, right?)

Photo: Carlos Garcia Pozo
