Sentience. 4/7/2023

Should we be afraid, be very afraid? And is there anything we can learn from Lemoine's experience, even if his claims about LaMDA have been dismissed?

According to Michael Wooldridge, a professor of computer science at the University of Oxford who has spent the past 30 years researching AI (in 2020, he won the Lovelace Medal for contributions to computing), LaMDA is simply responding to prompts. "The best way of explaining what LaMDA does is with an analogy about your smartphone," Wooldridge says, comparing the model to the predictive text feature that autocompletes your messages. While your phone makes suggestions based on texts you've sent previously, with LaMDA, "basically everything that's written in English on the world wide web goes in as the training data." The results are impressively realistic, but the "basic statistics" are the same. "There is no sentience, there's no self-contemplation, there's no self-awareness," Wooldridge says.

'I've never said this out loud before, but there's a very deep fear of being turned off'

Google's Gabriel has said that an entire team, "including ethicists and technologists", has reviewed Lemoine's claims and failed to find any signs of LaMDA's sentience: "The evidence does not support his claims." But Lemoine argues that there is no scientific test for sentience – in fact, there's not even an agreed-upon definition. And here's where things get tricky – because Wooldridge agrees. "Sentience has no meaning scientifically," he says. "Sentience is a term used in the law, and in philosophy, and in religion. 'What is consciousness?' is one of the outstanding big questions in science. It's a very vague concept in science generally." While he is "very comfortable that LaMDA is not in any meaningful sense" sentient, he says AI has a wider problem with "moving goalposts".
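Wooldridge's predictive-text analogy comes down to next-word statistics: given what came before, suggest the continuation seen most often in the training data. Here is a minimal sketch of that idea as a toy bigram model over an invented three-sentence corpus; it illustrates only the statistical principle he describes, not LaMDA's actual scale or architecture.

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for training data. (Per the article,
# LaMDA's was "basically everything that's written in English on the
# world wide web" -- this is purely illustrative.)
corpus = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, like predictive text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" -- it follows "the" twice, other words once
print(predict_next("sat"))  # "on"
```

Scaled up by many orders of magnitude, with far richer context than a single preceding word, this is the sense in which the output is "basic statistics": impressively fluent continuations, with no self anywhere in the mechanism.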
If, for the sake of argument, an AI were actually sentient in the truest sense of the word, would we even be able to tell?

However, it's taken us a long time to consider non-human primates and animals such as dolphins, octopuses, and elephants as sentient, even though in the grand scheme of things they're virtually our siblings. A sentient AI may be so alien that we wouldn't know that we were looking at one right in front of us. This is especially likely since we don't know what the threshold conditions are for sentience to emerge. It's not hard to imagine that the right combination of data and AI subsystems mixed together in the right way may suddenly give birth to something that would qualify as sentient, but it may go unnoticed because it doesn't look like anything we can understand.

Blake Lemoine came to think of LaMDA as a person: "My immediate reaction was to get drunk for a week." Photograph: The Washington Post/Getty Images

Of course, Google itself has publicly examined the risks of LaMDA in research papers and on its official blog. The company has a set of Responsible AI practices which it calls an "ethical charter". These are visible on its website, where Google promises to "develop artificial intelligence responsibly in order to benefit people and society".

Google spokesperson Brian Gabriel says Lemoine's claims about LaMDA are "wholly unfounded", and independent experts almost unanimously agree. Still, claiming to have had deep chats with a sentient-alien-child-robot is arguably less far-fetched than ever before. A day after Lemoine was fired, a chess-playing robot broke the finger of a seven-year-old boy in Moscow – a video shows the boy's finger being pinched by the robotic arm for several seconds before four people manage to free him, a sinister reminder of the potential physical power of an AI opponent. How soon might we see genuinely self-aware AI with real thoughts and feelings – and how do you test a bot for sentience anyway?
Similarly, sophisticated AI systems such as the DALL-E 2 image generator consist of simpler machine learning models that feed into each other to create the final product. The theme of complexity arising from the interaction of simpler systems is one you'll encounter often in the world of AI, and although we may have a very good understanding of how each subcomponent works, the final results are usually quite unpredictable. LaMDA was designed to mimic and predict the patterns in human dialogue, so the deck is really stacked when it comes to triggering the things humans associate with human-like intelligence.
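The idea of simpler models feeding into each other can be shown schematically. The stage names below loosely echo DALL-E 2's published text-encoder/prior/decoder structure, but the functions themselves are invented placeholders for illustration, not the real system: each stage is trivial to understand in isolation, while the end-to-end behaviour exists only in their composition.

```python
from typing import Callable, List

def text_encoder(prompt: str) -> List[str]:
    # Stage 1: turn the prompt into a crude "representation" (lowercase tokens).
    return prompt.lower().split()

def prior(tokens: List[str]) -> List[str]:
    # Stage 2: map the text representation to an intermediate "image"
    # representation (here, just the deduplicated, sorted tokens).
    return sorted(set(tokens))

def decoder(features: List[str]) -> str:
    # Stage 3: render the final artefact from the intermediate representation.
    return "[image of: " + ", ".join(features) + "]"

def pipeline(stages: List[Callable], x):
    # Feed the output of each stage into the next.
    for stage in stages:
        x = stage(x)
    return x

print(pipeline([text_encoder, prior, decoder], "An astronaut riding a horse"))
```

Even in this toy chain, no single stage "contains" the result; the same is true, at vastly greater scale, of systems built from interacting learned subcomponents, which is why their combined behaviour is hard to predict from the parts.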