This Program Can Give AI a Sense of Ethics—Sometimes

Delphi builds on recent advances in AI and language. Feeding very large amounts of text to algorithms built on mathematically simulated neural networks has yielded surprising advances.

In June 2020, researchers at OpenAI, a company working on cutting-edge AI tools, demonstrated a program called GPT-3 that can predict, summarize, and auto-generate text with what often seems like remarkable skill, although it will also spit out biased and hateful language learned from text it has read.

The researchers behind Delphi also asked ethical questions of GPT-3. They found its answers agreed with those of the crowd workers just over 50 percent of the time—little better than a coin flip.
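As a rough illustration (a hypothetical sketch, not the researchers’ actual evaluation code), such an agreement figure can be computed as the fraction of prompts on which a model’s judgment matches the crowd’s majority label:

```python
# Hypothetical sketch: measuring how often a model's ethical judgments
# agree with crowd-worker labels. The labels and data are illustrative.

def agreement_rate(model_answers: list[str], crowd_answers: list[str]) -> float:
    """Fraction of prompts where the model and the crowd majority agree."""
    assert len(model_answers) == len(crowd_answers)
    matches = sum(m == c for m, c in zip(model_answers, crowd_answers))
    return matches / len(model_answers)

# Two of three judgments match the crowd -> about 0.67.
print(agreement_rate(["wrong", "okay", "wrong"], ["wrong", "okay", "okay"]))
```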

Improving the performance of a system like Delphi will require different AI approaches, potentially including some that allow a machine to explain its reasoning and indicate when it is conflicted.

The idea of giving machines a moral code stretches back decades both in academic research and science fiction. Isaac Asimov’s famous Three Laws of Robotics popularized the idea that machines might follow human ethics, although the short stories that explored the idea highlighted contradictions in such simplistic reasoning.

Choi says Delphi should not be taken as providing a definitive answer to any ethical questions. A more sophisticated version might flag uncertainty, because of divergent opinions in its training data. “Life is full of gray areas,” she says. “No two human beings will completely agree, and there’s no way an AI program can match people’s judgments.”
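One way a system might flag those gray areas is to abstain whenever its crowd-sourced labels split too evenly. The sketch below is hypothetical, not Delphi’s method; the labels and consensus threshold are illustrative:

```python
# Hypothetical sketch: abstain when crowd annotators disagree too much,
# rather than returning a confident verdict. Threshold is illustrative.
from collections import Counter

def judge(crowd_labels: list[str], min_consensus: float = 0.8) -> str:
    """Return the majority label, or flag uncertainty when opinion diverges."""
    label, count = Counter(crowd_labels).most_common(1)[0]
    if count / len(crowd_labels) < min_consensus:
        return f"uncertain (only {count}/{len(crowd_labels)} agree on '{label}')"
    return label

print(judge(["okay", "okay", "okay", "okay", "wrong"]))   # consensus -> "okay"
print(judge(["okay", "wrong", "okay", "wrong", "okay"]))  # split -> uncertain
```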

Other machine learning systems have displayed their own moral blind spots. In 2016, Microsoft released a chatbot called Tay designed to learn from online conversations. The program was quickly sabotaged and taught to say offensive and hateful things.

Efforts to explore ethical perspectives related to AI have also revealed the complexity of such a task. A project launched in 2018 by researchers at MIT and elsewhere sought to explore the public’s view of ethical conundrums that might be faced by self-driving cars. They asked people to decide, for example, whether it would be better for a vehicle to hit an elderly person, a child, or a robber. The project revealed differing opinions across different countries and social groups. Those from the US and Western Europe were more likely than respondents elsewhere to spare the child over an older person.

Some of those building AI tools are keen to engage with the ethical challenges. “I think people are right to point out the flaws and failures of the model,” says Nick Frosst, cofounder of Cohere, a startup that has developed a large language model that is accessible to others via an API. “They are informative of broader, wider problems.”

Cohere devised ways to guide the output of its algorithms, which are now being tested by some businesses. It curates the content that is fed to the algorithm and trains the algorithm to learn to catch instances of bias or hateful language.
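A minimal, hypothetical sketch of that kind of curation, with a placeholder toxicity scorer standing in for a real trained classifier (this is not Cohere’s actual pipeline):

```python
# Hypothetical sketch: screen training documents with a toxicity scorer
# and keep only those below a threshold. The scorer here is a stand-in;
# a real system would use a trained classifier.

def toxicity_score(text: str) -> float:
    """Placeholder scorer based on an illustrative blocklist."""
    flagged_terms = {"hateful_term"}  # illustrative, not a real blocklist
    words = text.lower().split()
    return sum(w in flagged_terms for w in words) / max(len(words), 1)

def curate(corpus: list[str], threshold: float = 0.01) -> list[str]:
    """Drop documents whose estimated toxicity exceeds the threshold."""
    return [doc for doc in corpus if toxicity_score(doc) <= threshold]
```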

Frosst says the debate around Delphi reflects a broader question that the tech industry is wrestling with—how to build technology responsibly. Too often, he says, when it comes to content moderation, misinformation, and algorithmic bias, companies try to wash their hands of the problem by arguing that all technology can be used for good and bad.

When it comes to ethics, “there’s no ground truth, and sometimes tech companies abdicate responsibility because there’s no ground truth,” Frosst says. “The better approach is to try.”


Biden’s ‘Antitrust Revolution’ Overlooks AI—at Americans’ Peril

Despite the executive orders and congressional hearings of the “Biden antitrust revolution,” the most profound anti-competitive shift is happening under policymakers’ noses: the cornering of artificial intelligence and automation by a handful of tech companies. This needs to change.

There is little doubt that the impact of AI will be widely felt. It is shaping product innovations, creating new research, discovery, and development pathways, and reinventing business models. AI is making inroads in the development of autonomous vehicles, which may eventually improve road safety, reduce urban congestion, and help drivers make better use of their time. AI recently predicted the molecular structure of almost every protein in the human body, and it helped develop and roll out a Covid vaccine in record time. The pandemic itself may have accelerated AI’s incursion—in emergency rooms for triage; in airports, where robots spray disinfecting chemicals; in increasingly automated warehouses and meatpacking plants; and in our remote workdays, with the growing presence of chatbots, speech recognition, and email systems that get better at completing our sentences.

Exactly how AI will affect the future of human work, wages, or productivity overall remains unclear. Though service and blue-collar wages have lately been on the rise, they had stagnated for the previous three decades. According to MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo, 50 to 70 percent of that stagnation can be attributed to the loss of mostly routine jobs to automation. White-collar occupations are also at risk as machine learning and smart technologies take on complex functions. According to McKinsey, while only about 10 percent of these jobs could disappear altogether, 60 percent may see at least a third of their tasks subsumed by machines and algorithms. Some researchers argue that while AI’s overall productivity impact has so far been disappointing, it will improve; others are less sanguine. Despite these uncertainties, most experts agree that on net, AI will “become more of a challenge to the workforce,” and we should anticipate a flat to slightly negative impact on jobs by 2030.

Without intervention, AI could also help undermine democracy by amplifying misinformation or enabling mass surveillance. The past year and a half has likewise underscored the impact of algorithmically powered social media, not just on the health of democracy, but on health care itself.

The overall direction and net impact of AI sit on a knife’s edge, unless AI R&D and applications are appropriately channeled with wider societal and economic benefits in mind. How can we ensure that?

A handful of tech giants, including Amazon, Alphabet, Facebook, and Netflix in the US, along with Chinese mega-players such as Alibaba and Baidu, are responsible for $2 of every $3 spent globally on AI. They’re also among the top AI patent holders. Not only do their outsize budgets for AI dwarf others’, including the federal government’s, they also emphasize building internally rather than buying AI. Even though they buy comparatively little, they’ve still cornered the AI startup acquisition market. Many of these are early-stage acquisitions, meaning the tech giants integrate these companies’ products into their own portfolios or, if the IP doesn’t suit their strategic purposes, take it off the market and redeploy the talent. According to research from my Digital Planet team, US AI talent is intensely concentrated. The median number of AI employees in the field’s top five employers—Amazon, Google, Microsoft, Facebook, and Apple—is some 18,000, while the median for companies six to 24 is about 2,500—and it drops significantly from there. Moreover, these companies have near-monopolies of data on key behavioral areas. And they are setting the stage to become the primary suppliers of AI-based products and services to the rest of the world.

Each key player has areas of focus consistent with its business interests: Google/Alphabet spends disproportionately on natural language and image processing and on optical character, speech, and facial recognition. Amazon does the same on supply chain management and logistics, robotics, and speech recognition. Many of these investments will yield socially beneficial applications, while others, such as IBM’s Watson—which aspired to become the go-to digital decision tool in fields as diverse as health care, law, and climate action—may not deliver on initial promises, or may fail altogether. Moonshot projects, such as level 4 driverless cars, may attract excessive investment simply because the Big Tech players choose to champion them. Failures, disappointments, and pivots are natural to developing any new technology. We should, however, worry about the concentration of investments in a technology so fundamental and ask how investments are being allocated overall. AI, arguably, could have a more profound impact than social media, online retail, or app stores—the current targets of antitrust. Google CEO Sundar Pichai may have been a tad overdramatic when he declared that AI will have more impact on humanity than fire, but that alone ought to light a fire under the policy establishment to pay closer attention.

Humans Need to Create Interspecies Money to Save the Planet

A new form of digital currency for animals, trees, and other wildlife (no, not like Dogecoin) would help protect biodiversity and bend technology back to nature.