‘Is This AI Sapient?’ Is the Wrong Question to Ask About LaMDA

The uproar caused by Blake Lemoine, a Google engineer who believes that one of the company’s most sophisticated chat programs, Language Model for Dialogue Applications (LaMDA), is sapient, has had a curious element: Actual AI ethics experts are all but renouncing further discussion of the AI sapience question, or deeming it a distraction. They’re right to do so.

In reading the edited transcript Lemoine released, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could’ve come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as “wearing human skin” was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks that paper over the cracks. But it’s easy to see how someone might be fooled, looking at social media responses to the transcript—with even some educated people expressing amazement and a willingness to believe. And so the risk here is not that the AI is truly sentient but that we are well-poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them—and that large tech companies can exploit this in deeply unethical ways.

As should be clear from the way we treat our pets, or how we’ve interacted with Tamagotchi, or how video gamers reload a save if they accidentally make an NPC cry, we are actually very capable of empathizing with the nonhuman. Imagine what such an AI could do if it were acting as, say, a therapist. What would you be willing to say to it? Even if you “knew” it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?

It gets creepier. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata—the metadata you leave behind online that illustrates how you think—is vulnerable to exploitation in the near future. Imagine a world where a company created a bot based on you and owned your digital “ghost” after you’d died. There’d be a ready market for such ghosts of celebrities, old friends, and colleagues. And because they would appear to us as a trusted loved one (or someone we’d already developed a parasocial relationship with) they’d serve to elicit yet more data. It gives a whole new meaning to the idea of “necropolitics.” The afterlife can be real, and Google can own it.

Just as Tesla is careful about how it markets its “autopilot,” never quite claiming that it can drive the car by itself in true futuristic fashion while still inducing consumers to behave as if it does (with deadly consequences), it is not inconceivable that companies could market the realism and humanness of AI like LaMDA in a way that never makes any truly wild claims while still encouraging us to anthropomorphize it just enough to let our guard down. None of this requires AI to be sapient, and it all preexists that singularity. Instead, it leads us into the murkier sociological question of how we treat our technology and what happens when people act as if their AIs are sapient.

In “Making Kin With the Machines,” academics Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite marshal several perspectives informed by Indigenous philosophies on AI ethics to interrogate the relationship we have with our machines, and whether we’re modeling or play-acting something truly awful with them—as some people are wont to do when they are sexist or otherwise abusive toward their largely feminine-coded virtual assistants. In her section of the work, Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sapience does not define the boundaries of who (or what) is a “being” worthy of respect.

This is the flip side of the AI ethical dilemma that’s already here: Companies can prey on us if we treat their chatbots like they’re our best friends, but it’s equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our tech may simply reinforce an exploitative approach to each other, and to our natural environment. A humanlike chatbot or virtual assistant should be respected, lest their very simulacrum of humanity habituate us to cruelty toward actual humans.

Kite’s ideal is simply this: a reciprocal and humble relationship between yourself and your environment, recognizing mutual dependence and connectivity. She argues further, “Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly to the question of AI, as AI is formed from not only code, but from materials of the earth.” This is a remarkable way of tying something typically viewed as the essence of artificiality to the natural world.

What is the upshot of such a perspective? Sci-fi author Liz Henry offers one: “We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging they have their own life, perspective, needs, emotions, goals, and place in the world.”

This is the AI ethical dilemma that stands before us: the need to make kin of our machines weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism. Much as I long to be an eloquent scholar defending the rights and dignity of a being like Mr. Data, this more complex and messy reality is what demands our attention. After all, there can be a robot uprising without sapient AI, and we can be a part of it by liberating these tools from the ugliest manipulations of capital.

The Future of Robot Nannies

Childcare is the most intimate of activities. Evolution has generated drives so powerful that we will risk our lives to protect not only our own children, but quite often any child, and even the young of other species. Robots, by contrast, are products created by commercial entities with commercial goals, which may—and should—include the well-being of their customers, but will never be limited to such. Robots, corporations, and other legal or non-legal entities do not possess the instinctual nature of humans to care for the young—even if our anthropomorphic tendencies may prompt some children and adults to overlook this fact.

As a result, it is important to take into account the likelihood of deception, both commercial deception through advertising and self-deception on the part of parents, even though robots are unlikely to cause significant psychological damage to children and to others who may come to love them.

Television manufacturers, broadcasters, and online game makers are not deemed liable when children are left for too long in front of their screens. Robotics companies will want to be in the same position: No company wants to be liable for damage to children, so manufacturers are likely to undersell the artificial intelligence (AI) and interactive capacities of their robots. It is therefore likely that any robots (and certainly those in jurisdictions with strong consumer protection) will be marketed primarily as toys, surveillance devices, and possibly household utilities. They will be brightly colored and deliberately designed to appeal to parents and children. We expect a variety of products, some with advanced capabilities and some with humanoid features. Parents will quickly discover a robot’s ability to engage and distract their child. Robotics companies will program experiences geared toward parents and children, just as television broadcasters do. But robots will always have disclaimers, such as “this device is not a toy and should only be used with adult supervision” or “this device is provided for entertainment only. It should not be considered educational.”

Nevertheless, parents will notice that they can leave their children alone with robots, just as they can leave them to watch television or to play with other children. Humans are phenomenal learners and very good at detecting regularities and exploiting affordances. Parents will quickly notice the educational benefits of robot nannies that have advanced AI and communication skills. Occasional horror stories, such as the robot nanny and toddler tragedy in the novel Scarlett and Gurl, will make headline news and remind parents how to use robots responsibly.

This will likely continue until or unless the incidence of injuries necessitates redesign, a revision of consumer safety standards, statutory notice requirements, and/or risk-based uninsurability, all of which will further refine the industry. Meanwhile, the media will also seize on stories of robots saving children in unexpected ways, as it does now when children (or adults) are saved by other young children and dogs. This should not make people think that they should leave children alone with robots, but given the propensity we already have to anthropomorphize robots, it may make parents feel that little bit more comfortable—until the next horror story makes headlines.

When it comes to liability, we should be able to communicate to the manufacturers of robot nannies the same model of liability applied to toys: Make your robots reliable, describe what they do accurately, and provide sufficient notice of reasonably foreseeable danger from misuse. Then, apart from the exceptional situation of errors in design or manufacture, such as parts that come off and choke children, legal liability will rest entirely with the parent or responsible adult, as it does now, and as it should under existing product liability law.