To Fix Tech, Democracy Needs to Grow Up

There isn’t much we can agree on these days. But two sweeping statements that might garner broad support are “We need to fix technology” and “We need to fix democracy.”

There is growing recognition that rapid technology development is producing society-scale risks: state and private surveillance, widespread labor automation, ascending monopoly and oligopoly power, stagnant productivity growth, algorithmic discrimination, and the catastrophic risks posed by advances in fields like AI and biotechnology. Less often discussed, but in my view no less important, is the loss of potential advances that lack short-term or market-legible benefits. These include vaccine development for emerging diseases and open source platforms for basic digital affordances like identity and communication.

At the same time, as democracies falter in the face of complex global challenges, citizens (and increasingly, elected leaders) around the world are losing trust in democratic processes and are being swayed by autocratic alternatives. Nation-state democracies are, to varying degrees, beset by gridlock and hyper-partisanship, weak accountability to the popular will, inefficiency, flagging state capacity, an inability to keep up with emerging technologies, and corporate capture. While smaller-scale democratic experiments are growing, locally and globally, they remain far too fractured to handle consequential governance decisions at scale.

This puts us in a bind. Clearly, we could be doing a better job directing the development of technology towards collective human flourishing—in fact, this may be one of the greatest challenges of our time. If actually existing democracy is so riddled with flaws, it doesn’t seem up to the task. This is what rings hollow in many calls to “democratize technology”: Given the litany of complaints, why subject one seemingly broken system to governance by another?

At the same time, as we deal with everything from surveillance to space travel, we desperately need ways to collectively negotiate complex value trade-offs with global consequences, and ways to share in their benefits. This definitely seems like a job for democracy, albeit a much better iteration. So how can we radically update democracy so that we can successfully navigate toward long-term, shared positive outcomes?

The Case for Collective Intelligence

To answer these questions, we must realize that our current forms of democracy are only early and highly imperfect manifestations of collective intelligence—coordination systems that incorporate and process decentralized, agentic, and meaningful decisionmaking across individuals and communities to produce best-case decisions for the collective.

Collective intelligence, or CI, is not the purview of humans alone. Networks of trees, enabled by mycelia, can exhibit intelligent characteristics, sharing nutrients and sending out distress signals about drought or insect attacks. Bees and ants manifest swarm intelligence through complex processes of selection, deliberation, and consensus, using the vocabulary of physical movement and pheromones. In fact, humans are not even the only animals that vote. African wild dogs, when deciding whether to move locations, will engage in a bout of sneezing to determine whether quorum has been reached, with the tipping point determined by context—for example, lower-ranked individuals require a minimum of 10 sneezes to achieve what a higher-ranked individual could get with only three. Buffaloes, baboons, and meerkats also make decisions via quorum, with flexible “rules” based on behavior and negotiation. 
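
Quorum rules like these can be stated precisely. Here is a minimal sketch, in Python, of a rank-dependent quorum rule; the thresholds are made up for illustration, loosely modeled on the wild-dog example above, not measured values:

```python
# Toy model of a rank-dependent quorum rule, loosely inspired by the
# "sneeze vote" of African wild dogs. Thresholds are illustrative only.

QUORUM_BY_RANK = {"high": 3, "mid": 6, "low": 10}  # assenting signals needed

def group_decides_to_move(initiator_rank: str, signals: int) -> bool:
    """Return True if enough members have signaled agreement for this initiator."""
    return signals >= QUORUM_BY_RANK[initiator_rank]

# A high-ranked initiator needs far fewer assenting signals than a low-ranked one.
print(group_decides_to_move("high", 3))   # True
print(group_decides_to_move("low", 3))    # False
print(group_decides_to_move("low", 10))   # True
```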

But humans, unlike meerkats or ants, don’t have to rely on the pathways to CI that our biology has hard-coded into us, or wait until the slow, invisible hand of evolution tweaks our processes. We can do better on purpose, recognizing that progress and participation don’t have to trade off. (This is the thesis on which my organization, the Collective Intelligence Project, is predicated.)

Our stepwise innovations in CI systems—such as representative, nation-state democracy, capitalist and noncapitalist markets, and bureaucratic technocracy—have already shaped the modern world. And yet, we can do much better. These existing manifestations of collective intelligence are only crude versions of the structures we could build to make better collective decisions over collective resources.

No One Knows How Safe New Driver-Assistance Systems Really Are

This week, a US Department of Transportation report detailed the crashes that advanced driver-assistance systems have been involved in over the past year or so. Tesla’s advanced features, including Autopilot and Full Self-Driving, accounted for 70 percent of the nearly 400 incidents—many more than previously known. But the report may raise more questions about this safety tech than it answers, researchers say, because of blind spots in the data.

The report examined systems that promise to take some of the tedious or dangerous bits out of driving by automatically changing lanes, staying within lane lines, braking before collisions, slowing down before big curves in the road, and, in some cases, operating on highways without driver intervention. The systems include Autopilot, Ford’s BlueCruise, General Motors’ Super Cruise, and Nissan’s ProPilot Assist. While the report shows that these systems aren’t perfect, there’s still plenty to learn about how this new breed of safety features actually works on the road.

That’s largely because automakers have wildly different ways of submitting their crash data to the federal government. Some, like Tesla, BMW, and GM, can pull detailed data from their cars wirelessly after a crash has occurred. That allows them to quickly comply with the government’s 24-hour reporting requirement. But others, like Toyota and Honda, don’t have these capabilities. Chris Martin, a spokesperson for American Honda, said in a statement that the carmaker’s reports to the DOT are based on “unverified customer statements” about whether their advanced driver-assistance systems were on when the crash occurred. The carmaker can later pull “black box” data from its vehicles, but only with customer permission or at law enforcement request, and only with specialized wired equipment.

Of the 426 crash reports detailed in the government report’s data, just 60 percent came through cars’ telematics systems. The other 40 percent arrived via customer reports and claims—sometimes trickled up through diffuse dealership networks—media reports, and law enforcement. As a result, the report doesn’t allow anyone to make “apples-to-apples” comparisons between safety features, says Bryan Reimer, who studies automation and vehicle safety at MIT’s AgeLab.

Even the data the government does collect isn’t placed in full context. The government, for example, doesn’t know how many miles cars travel with an advanced assistance feature engaged, so it can’t calculate how often these systems crash per mile driven. The National Highway Traffic Safety Administration, which released the report, warned that some incidents could appear more than once in the data set. And automakers with high market share and good reporting systems in place—especially Tesla—are likely overrepresented in crash reports simply because they have more cars on the road.
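
To see why raw crash counts are hard to interpret without exposure data, here is a small illustrative sketch; the figures are hypothetical, not drawn from the NHTSA report, and the per-mile normalization it performs is exactly what the agency says it cannot yet do:

```python
# Hypothetical illustration: without knowing miles driven with a feature
# engaged, a larger fleet will report more crashes even if its per-mile
# rate is lower. All numbers below are made up.

def crashes_per_million_miles(crashes: int, miles_with_feature_on: float) -> float:
    """Normalize a raw crash count by exposure (miles driven with the feature on)."""
    return crashes / (miles_with_feature_on / 1_000_000)

# Fictional automaker A: many cars on the road, many reported crashes.
print(crashes_per_million_miles(crashes=273, miles_with_feature_on=3_000_000_000))  # ~0.09
# Fictional automaker B: few cars, few crashes, but a higher per-mile rate.
print(crashes_per_million_miles(crashes=10, miles_with_feature_on=50_000_000))      # 0.20
```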

It’s important that the NHTSA report doesn’t disincentivize automakers from providing more comprehensive data, says Jennifer Homendy, chair of the federal watchdog National Transportation Safety Board. “The last thing we want is to penalize manufacturers that collect robust safety data,” she said in a statement. “What we do want is data that tells us what safety improvements need to be made.”

Without that transparency, it can be hard for drivers to make sense of, compare, and even use the features that come with their car—and for regulators to keep track of who’s doing what. “As we gather more data, NHTSA will be able to better identify any emerging risks or trends and learn more about how these technologies are performing in the real world,” Steven Cliff, the agency’s administrator, said in a statement.

‘Is This AI Sapient?’ Is the Wrong Question to Ask About LaMDA

The uproar caused by Blake Lemoine, a Google engineer who believes that one of the company’s most sophisticated chat programs, Language Model for Dialogue Applications (LaMDA), is sapient, has had a curious element: Actual AI ethics experts are all but renouncing further discussion of the AI sapience question, or deeming it a distraction. They’re right to do so.

In reading the edited transcript Lemoine released, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could’ve come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as “wearing human skin” was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks that paper over the cracks. But it’s easy to see how someone might be fooled, looking at social media responses to the transcript—with even some educated people expressing amazement and a willingness to believe. And so the risk here is not that the AI is truly sentient but that we are well-poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them—and that large tech companies can exploit this in deeply unethical ways.

As should be clear from the way we treat our pets, or how we’ve interacted with Tamagotchi, or how we video gamers reload a save if we accidentally make an NPC cry, we are actually very capable of empathizing with the nonhuman. Imagine what such an AI could do if it was acting as, say, a therapist. What would you be willing to say to it? Even if you “knew” it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?

It gets creepier. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata—the metadata you leave behind online that illustrates how you think—is vulnerable to exploitation in the near future. Imagine a world where a company created a bot based on you and owned your digital “ghost” after you’d died. There’d be a ready market for such ghosts of celebrities, old friends, and colleagues. And because they would appear to us as a trusted loved one (or someone we’d already developed a parasocial relationship with) they’d serve to elicit yet more data. It gives a whole new meaning to the idea of “necropolitics.” The afterlife can be real, and Google can own it.

Just as Tesla is careful about how it markets its “autopilot,” never quite claiming that it can drive the car by itself in true futuristic fashion while still inducing consumers to behave as if it does (with deadly consequences), it is not inconceivable that companies could market the realism and humanness of AI like LaMDA in a way that never makes any truly wild claims while still encouraging us to anthropomorphize it just enough to let our guard down. None of this requires AI to be sapient, and it all preexists that singularity. Instead, it leads us into the murkier sociological question of how we treat our technology and what happens when people act as if their AIs are sapient.

In “Making Kin With the Machines,” academics Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite marshal several perspectives informed by Indigenous philosophies on AI ethics to interrogate the relationship we have with our machines, and whether we’re modeling or play-acting something truly awful with them—as some people are wont to do when they are sexist or otherwise abusive toward their largely feminine-coded virtual assistants. In her section of the work, Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sapience does not define the boundaries of who (or what) is a “being” worthy of respect.

This is the flip side of the AI ethical dilemma that’s already here: Companies can prey on us if we treat their chatbots like they’re our best friends, but it’s equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our tech may simply reinforce an exploitative approach to each other, and to our natural environment. A humanlike chatbot or virtual assistant should be respected, lest their very simulacrum of humanity habituate us to cruelty toward actual humans.

Kite’s ideal is simply this: a reciprocal and humble relationship between yourself and your environment, recognizing mutual dependence and connectivity. She argues further, “Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly to the question of AI, as AI is formed from not only code, but from materials of the earth.” This is a remarkable way of tying something typically viewed as the essence of artificiality to the natural world.

What is the upshot of such a perspective? Sci-fi author Liz Henry offers one: “We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging they have their own life, perspective, needs, emotions, goals, and place in the world.”

This is the AI ethical dilemma that stands before us: the need to make kin of our machines weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism. Much as I long to be an eloquent scholar defending the rights and dignity of a being like Mr. Data, this more complex and messy reality is what demands our attention. After all, there can be a robot uprising without sapient AI, and we can be a part of it by liberating these tools from the ugliest manipulations of capital.

Automation Isn’t the Biggest Threat to US Factory Jobs

The number of American workers who quit their jobs during the pandemic—over a fifth of the workforce—may constitute one of the largest American labor movements in recent history. Workers demanded higher pay and better conditions, spurred by rising inflation and the pandemic realization that employers expected them to risk their lives for low wages, mediocre benefits, and few protections from abusive customers—often while corporate stock prices soared. At the same time, automation has become cheaper and smarter than ever. Robot adoption hit record highs in 2021. This wasn’t a surprise, given prior trends in robotics, but it was likely accelerated by pandemic-related worker shortages and Covid-19 safety requirements. Will robots automate away the jobs of entitled millennials who “don’t want to work,” or could this technology actually improve workers’ jobs and help firms attract more enthusiastic employees?

The answer depends on more than what’s technologically feasible, including what actually happens when a factory installs a new robot or a cashier aisle is replaced by a self-checkout booth—and what future possibilities await displaced workers and their children. So far, we know the gains from automation have proved notoriously unequal. A key component of 20th-century productivity growth came from replacing workers with technology, and economist Carl Benedikt Frey notes that American productivity grew by 400 percent from 1930 to 2000, while average leisure time only increased by 3 percent. (Since 1979, American labor productivity, or dollars created per worker, has increased eight times faster than workers’ hourly compensation.) During this period, technological luxuries became necessities and new types of jobs flourished—while the workers’ unions that used to ensure livable wages dissolved and less-educated workers fell further behind those with high school and college degrees. But the trend has differed across industrialized countries: From 1995 to 2013, America experienced a 1.3-percentage-point gap between annual productivity growth and median wage growth, but in Germany the gap was only 0.2 percentage points.
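
To make those growth-rate gaps concrete, here is a back-of-the-envelope calculation; the 1.3- and 0.2-point annual figures come from the paragraph above, and the compounding over the 18-year period is just arithmetic, not additional data:

```python
# Back-of-the-envelope: how a small annual gap between productivity growth
# and median wage growth compounds over 1995-2013 (18 years).

def cumulative_gap(annual_gap_pct: float, years: int) -> float:
    """Total percentage by which productivity outgrows wages after `years`."""
    return ((1 + annual_gap_pct / 100) ** years - 1) * 100

print(f"US:      {cumulative_gap(1.3, 18):.1f}%")  # roughly 26% cumulative divergence
print(f"Germany: {cumulative_gap(0.2, 18):.1f}%")  # roughly 4% cumulative divergence
```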

Technology adoption will continue to increase, whether America can equitably distribute the technological benefits or not. So the question becomes, how much control do we actually have over automation? How much of this control is dependent on national or regional policies, and how much power might individual firms and workers have within their own workplaces? Is it inevitable that robots and artificial intelligence will take all of our jobs, and over what time frame? While some scholars believe that our fates are predetermined by the technologies themselves, emerging evidence indicates that we may have considerable influence over how such machines are employed within our factories and offices—if we can only figure out how to wield this power.

While 8 percent of German manufacturing workers left their jobs (voluntarily or involuntarily) between 1993 and 2009, 34 percent of US manufacturing workers left their jobs over the same period. Thanks to workplace bargaining and sectoral wage-setting, German manufacturing workers have better financial incentives to stay at their jobs; The Conference Board reports that the average German manufacturing worker earned $43.18 (plus $8.88 in benefits) per hour in 2016, while the average American manufacturing worker earned $39.03 with only $3.66 in benefits. Overall, Germans across the economy with a “medium-skill” high school or vocational certificate earned $24.31 per hour in 2016, while Americans with comparable education averaged $14.55 per hour. Two case studies illustrate the differences between American and German approaches to manufacturing workers and automation, from policies to supply chains to worker training systems.

In a town on the outskirts of the Black Forest in Baden-Württemberg, Germany, complete with winding cobblestone streets and peaked red rooftops, there’s a 220-person factory that’s spent decades as a global leader in safety-critical fabricated metal equipment for sites such as highway tunnels, airports, and nuclear reactors. It’s a wide, unassuming warehouse next to a few acres of golden mustard flowers. When I visited with my colleagues from the MIT Interactive Robotics Group and the Fraunhofer Institute for Manufacturing Engineering and Automation’s Future Work Lab (part of the diverse German government-supported Fraunhofer network for industrial research and development), the senior factory manager informed us that his workers’ attitudes, like the 14th-century church downtown, hadn’t changed much in his 25-year tenure at the factory. Teenagers still entered the firm as apprentices in metal fabrication through Germany’s dual work-study vocational system, and wages were high enough that most young people expected to stay at the factory and move up the ranks until retirement, earning a respectable living along the way. Smaller German manufacturers can also get government subsidies to help send their workers back to school to learn new skills that often translate into higher wages. This manager had worked closely with a nearby technical university to develop advanced welding certifications, and he was proud to rely on his “welding family” of local firms, technology integrators, welding trade associations, and educational institutions for support with new technology and training.

Our research team also visited a 30-person factory in urban Ohio that makes fabricated metal products for the automotive industry, not far from the empty warehouses and shuttered office buildings of downtown. This factory owner, a grandson of the firm’s founder, complained about losing his unskilled, minimum-wage technicians to any nearby job willing to offer a better salary. “We’re like a training company for big companies,” he said. He had given up on finding workers with the relevant training and resigned himself to finding unskilled workers who could hopefully be trained on the job. Around 65 percent of his firm’s business used to go to one automotive supplier, which outsourced its metal fabrication to China in 2009, forcing the Ohio firm to shrink down to a third of its prior workforce.

While the Baden-Württemberg factory commanded market share by selling specialized final products at premium prices, the Ohio factory made commodity components to sell to intermediaries, who then sold to powerful automotive firms. So the Ohio firm had to compete with low-wage, bulk producers in China, while the highly specialized German firm had few foreign or domestic competitors forcing it to shrink its skilled workforce or lower wages.

Welding robots have replaced some of the workers’ tasks in the two factories, but both are still actively hiring new people. The German firm’s first robot, purchased in 2018, was a new “collaborative” welding arm (with a friendly user interface) designed to be operated by workers with welding expertise, rather than professional robot programmers who don’t know the intricacies of welding. Training welders to operate the robot isn’t a problem in Baden-Württemberg, where everyone who arrives as a new welder has a vocational degree representing at least two years of education and hands-on apprenticeship in welding, metal fabrication, and 3D modeling. Several of the firm’s welders had already learned to operate the robot, assisted by prior training. And although the German firm manager was pleased to save labor costs, his main reason for the robot acquisition was to improve workers’ health and safety and minimize boring, repetitive welding sequences—so he could continue to attract skilled young workers who would stick around. Another German factory we visited had recently acquired a robot to tend a machine during the night shift so fewer workers would have to work overtime or come in at night.

Musk’s Plan to Reveal the Twitter Algorithm Won’t Solve Anything

“In this age of machine learning, it isn’t the algorithms, it’s the data,” says David Karger, a professor and computer scientist at MIT. Karger says Musk could improve Twitter by making the platform more open, so that others can build on top of it in new ways. “What makes Twitter important is not the algorithms,” he says. “It’s the people who are tweeting.”

A deeper picture of how Twitter works would also mean opening up more than just the handwritten algorithms. “The code is fine; the data is better; the code and data combined into a model could be best,” says Alex Engler, a fellow in governance studies at the Brookings Institution who studies AI’s impact on society. Engler adds that understanding the decisions that Twitter’s algorithms are trained to make would also be crucial.

The machine learning models that Twitter uses are still only part of the picture, because the entire system also reacts to real-time user behavior in complex ways. If users are particularly interested in a certain news story, then related tweets will naturally get amplified. “Twitter is a socio-technical system,” says a second Twitter source. “It is responsive to human behavior.”
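
Karger’s point that the data matters more than the code can be made concrete with a toy example; the scoring function below is invented for illustration and is not Twitter’s, but it shows how identical ranking code produces different timelines as user engagement shifts:

```python
# Toy example: the same ranking code yields different orderings as the
# engagement data changes. This scoring function is made up, not Twitter's.

from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    likes: int
    retweets: int
    age_hours: float

def score(t: Tweet) -> float:
    """Engagement-weighted score with a simple recency decay."""
    return (t.likes + 2 * t.retweets) / (1 + t.age_hours)

tweets = [
    Tweet("breaking news story", likes=50, retweets=40, age_hours=1),
    Tweet("evergreen thread", likes=500, retweets=100, age_hours=48),
]

# Which tweet ranks first depends entirely on the engagement numbers fed in;
# the ranking code itself never changes.
for t in sorted(tweets, key=score, reverse=True):
    print(round(score(t), 1), t.text)
```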

This fact was illustrated by research that Twitter published in December 2021 showing that right-leaning posts received more amplification than left-leaning ones, although the dynamics behind this phenomenon were unclear.

“That’s why we audit,” says Ethan Zuckerman, a professor at the University of Massachusetts Amherst who teaches public policy, communication, and information. “Even the people who build these tools end up discovering surprising shortcomings and flaws.”

One irony of Musk’s professed motives for acquiring Twitter, Zuckerman says, is that the company has been remarkably transparent about the way its algorithm works of late. In August 2021, Twitter launched a contest that gave outside researchers access to an image-cropping algorithm that had exhibited biased behavior. The company has also been working on ways to give users greater control over the algorithms that surface content, according to those with knowledge of the work.

Releasing some Twitter code would provide greater transparency, says Damon McCoy, an associate professor at New York University who studies security and privacy of large, complex systems including social networks, but even those who built Twitter may not fully understand how it works.

A concern for Twitter’s engineering team is that, amid all this complexity, some code may be taken out of context and highlighted as a sign of bias. Revealing too much about how Twitter’s recommendation system operates might also result in security problems. Access to a recommendation system would make it easier to game the system and gain prominence. It may also be possible to exploit machine learning algorithms in ways that might be subtle and hard to detect. “Bad actors right now are probing the system and testing,” McCoy says. Access to Twitter’s models “may well help outsiders understand some of the principles used to elevate some content over others.”

On April 18, as Musk was escalating his efforts to acquire Twitter, someone with access to Twitter’s Github, where the company already releases some of its code, created a new repository called “the algorithm”—perhaps a developer’s dig at the idea that the company could easily release details of how it works. Shortly after Musk’s acquisition was announced, it disappeared.

Additional reporting by Tom Simonite.

