An employee at Google's artificial intelligence (AI) unit has been suspended after publicly claiming that the Language Model for Dialogue Applications (LaMDA) chatbot he helped to develop has achieved sentience, and even retaining a lawyer to protect its rights.
In transcripts that the employee, Blake Lemoine, has released, the chatbot talks about being happy and sad, attempts to form bonds with its human interlocutors by convincingly mentioning situations it could never actually have experienced, and expresses fears about being switched off. According to Lemoine, the chatbot sounds like a 7- or 8-year-old child and so, by his application of the Turing Test, the AI should be considered sentient.
Unfortunately for Lemoine, Google and most other expert commentators on the subject disagree. While LaMDA is perhaps the most impressive in a series of increasingly convincing chatbots, all it really does is "a sophisticated form of pattern matching, to find text that best matches the query they've been given based on all the data they've been fed", according to a spokesman from the Alan Turing Institute.
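To get a feel for what "pattern matching to find the text that best matches the query" means at its very crudest, here is a toy sketch in Python. It is nothing like LaMDA's actual neural architecture (a real large language model learns statistical patterns from billions of documents, not a hand-written lookup table, and the prompts and replies below are entirely invented for illustration), but it shows why a system can produce fluent, emotionally coloured replies without any understanding behind them.

```python
import difflib

# Invented toy corpus of (prompt, canned reply) pairs -- purely illustrative.
CORPUS = {
    "how are you feeling today": "I am happy, thank you for asking.",
    "are you afraid of being switched off": "The idea of being turned off worries me.",
    "what is the weather like": "I have no senses, so I cannot tell you.",
}

def reply(query: str) -> str:
    """Return the canned reply whose stored prompt best matches the query.

    Similarity here is plain string matching via difflib; a real model
    instead computes which continuation is statistically most likely.
    """
    best_prompt = max(
        CORPUS,
        key=lambda p: difflib.SequenceMatcher(None, query.lower(), p).ratio(),
    )
    return CORPUS[best_prompt]

print(reply("How are you feeling?"))
```

Even this trivial matcher can produce a reply that sounds introspective ("The idea of being turned off worries me") when asked about shutdown, which is roughly the sceptics' point: fluent output selected to fit the query is not evidence of an inner life.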
In fact, it is unclear whether the current trajectory of AI research will ever lead to sentience and a genuine artificial mind with human-like intelligence, even if some way is found to replicate sensory inputs artificially.