To Fix Tech, Democracy Needs to Grow Up

There isn’t much we can agree on these days. But two sweeping statements that might garner broad support are “We need to fix technology” and “We need to fix democracy.”

There is growing recognition that rapid technology development is producing society-scale risks: state and private surveillance, widespread labor automation, ascending monopoly and oligopoly power, stagnant productivity growth, algorithmic discrimination, and the catastrophic risks posed by advances in fields like AI and biotechnology. Less often discussed, but in my view no less important, is the loss of potential advances that lack short-term or market-legible benefits. These include vaccine development for emerging diseases and open source platforms for basic digital affordances like identity and communication.

At the same time, as democracies falter in the face of complex global challenges, citizens (and increasingly, elected leaders) around the world are losing trust in democratic processes and are being swayed by autocratic alternatives. Nation-state democracies are, to varying degrees, beset by gridlock and hyper-partisanship, little accountability to the popular will, inefficiency, flagging state capacity, inability to keep up with emerging technologies, and corporate capture. While smaller-scale democratic experiments are growing, locally and globally, they remain far too fractured to handle consequential governance decisions at scale.

This puts us in a bind. Clearly, we could be doing a better job directing the development of technology towards collective human flourishing—in fact, this may be one of the greatest challenges of our time. If actually existing democracy is so riddled with flaws, it doesn’t seem up to the task. This is what rings hollow in many calls to “democratize technology”: Given the litany of complaints, why subject one seemingly broken system to governance by another?

At the same time, as we deal with everything from surveillance to space travel, we desperately need ways to collectively negotiate complex value trade-offs with global consequences, and ways to share in their benefits. This definitely seems like a job for democracy, albeit a much better iteration. So how can we radically update democracy so that we can successfully navigate toward long-term, shared positive outcomes?

The Case for Collective Intelligence

To answer these questions, we must realize that our current forms of democracy are only early and highly imperfect manifestations of collective intelligence—coordination systems that incorporate and process decentralized, agentic, and meaningful decisionmaking across individuals and communities to produce best-case decisions for the collective.

Collective intelligence, or CI, is not the purview of humans alone. Networks of trees, enabled by mycelia, can exhibit intelligent characteristics, sharing nutrients and sending out distress signals about drought or insect attacks. Bees and ants manifest swarm intelligence through complex processes of selection, deliberation, and consensus, using the vocabulary of physical movement and pheromones. In fact, humans are not even the only animals that vote. African wild dogs, when deciding whether to move locations, will engage in a bout of sneezing to determine whether quorum has been reached, with the tipping point determined by context—for example, lower-ranked individuals require a minimum of 10 sneezes to achieve what a higher-ranked individual could get with only three. Buffaloes, baboons, and meerkats also make decisions via quorum, with flexible “rules” based on behavior and negotiation. 
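To make that kind of flexible quorum rule concrete, here is a minimal sketch in Python. The rank labels and thresholds are simply lifted from the wild dog example above for illustration; this is not a model of the actual biology.

```python
def quorum_reached(initiator_rank: str, sneeze_count: int) -> bool:
    """Toy model of a rank-dependent quorum: the same proposal needs
    more 'votes' (sneezes) when a lower-ranked individual initiates it."""
    thresholds = {
        "dominant": 3,      # higher-ranked initiators need only ~3 sneezes
        "subordinate": 10,  # lower-ranked initiators need ~10
    }
    return sneeze_count >= thresholds[initiator_rank]


# The identical departure attempt passes or fails depending on who starts it.
print(quorum_reached("dominant", 4))     # True
print(quorum_reached("subordinate", 4))  # False
```

The point of the sketch is only that the "rules" are contextual: the threshold is a function of who is proposing, not a fixed constant.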

But humans, unlike meerkats or ants, don’t have to rely on the pathways to CI that our biology has hard-coded into us, or wait until the slow, invisible hand of evolution tweaks our processes. We can do better on purpose, recognizing that progress and participation don’t have to trade off. (This is the thesis on which my organization, the Collective Intelligence Project, is predicated.)

Our stepwise innovations in CI systems—such as representative, nation-state democracy, capitalist and noncapitalist markets, and bureaucratic technocracy—have already shaped the modern world. And yet, we can do much better. These existing manifestations of collective intelligence are only crude versions of the structures we could build to make better collective decisions over collective resources.

Europe Is in Danger of Using the Wrong Definition of AI

A company could choose the most obscure, nontransparent systems architecture available, claiming (rightly, under this bad definition) that it was “more AI,” in order to access the prestige, investment, and government support that claim entails. For example, one giant deep neural network could be given the task not only of learning language but also of debiasing that language on several criteria, say, race, gender, and socio-economic class. Then maybe the company could also sneak in a little slant, nudging the model toward preferred advertisers or a political party. This would be called AI under either system, so it would certainly fall into the remit of the AIA. But would anyone really be able to tell, reliably, what was going on with this system? Under the original AIA definition, some simpler way to get the job done would be equally considered “AI,” so there would not be the same incentive to use intentionally complicated systems.

Of course, under the new definition, a company could also switch to using more traditional AI, like rule-based systems or decision trees (or just conventional software). And then it would be free to do whatever it wanted—this is no longer AI, and there’s no longer a special regulation to check how the system was developed or where it’s applied. Programmers can code up bad, corrupt instructions that deliberately or just negligently harm individuals or populations. Under the new presidency draft, this system would no longer get the extra oversight and accountability procedures it would under the original AIA draft. Incidentally, this route also avoids tangling with the extra law enforcement resources the AIA mandates member states fund in order to enforce its new requirements.
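As a purely hypothetical sketch of that loophole (the field names, postcodes, and thresholds below are invented), a plain rule-based decision procedure, which the narrowed definition would not count as AI, can still hard-code a harmful and unaudited rule:

```python
# Hypothetical rule-based loan scorer: under the narrowed definition this is
# "just conventional software," so it would escape the AIA's extra oversight,
# yet it still embeds a hand-written rule that can proxy for protected traits.

def loan_decision(applicant: dict) -> str:
    score = applicant["credit_score"]

    # A deliberately or negligently harmful rule: penalizing certain
    # postcodes can stand in for race or class without ever naming them.
    if applicant["postcode"] in {"9999", "8888"}:
        score -= 100

    return "approve" if score >= 650 else "reject"


print(loan_decision({"credit_score": 700, "postcode": "9999"}))  # reject
print(loan_decision({"credit_score": 700, "postcode": "1234"}))  # approve
```

Nothing about this code is "intelligent," which is exactly why a definition keyed to technical sophistication rather than to context of use would leave it unexamined.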

Limiting where the AIA applies by complicating and constraining the definition of AI is presumably an attempt to reduce the costs of its protections for both businesses and governments. Of course, we do want to minimize the costs of any regulation or governance—public and private resources both are precious. But the AIA already does that, and does it in a better, safer way. As originally proposed, the AIA already only applies to systems we really need to worry about, which is as it should be.

In the AIA’s original form, the vast majority of AI—like that in computer games, vacuum cleaners, or standard smartphone apps—is left to ordinary product law and would not receive any new regulatory burden at all. At most, it would face only basic transparency obligations; for example, a chatbot should identify that it is AI, not an interface to a real human.

The most important part of the AIA is where it describes what sorts of systems are potentially hazardous to automate. It then regulates only these. Both drafts of the AIA say that there are a small number of contexts in which no AI system should ever operate—for example, identifying individuals in public spaces from their biometric data, creating social credit scores for governments, or producing toys that encourage dangerous behavior or self-harm. These are all simply banned, more or less. There are far more application areas for which using AI requires government and other human oversight: situations with life-altering consequences, such as deciding who receives which government services, who gets into which school, or who is awarded which loan. In these contexts, European residents would be provided with certain rights, and their governments with certain obligations, to ensure that the artifacts have been built and are functioning correctly and justly.
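Read as a data structure rather than legal text, the tiered logic described above looks roughly like the following sketch. The tier names and example lists paraphrase this article's summary, not the AIA's actual wording.

```python
# Rough paraphrase of the tiered structure described above; not legal text.
RISK_TIERS = {
    "prohibited": [  # banned outright, more or less
        "identifying individuals in public spaces from biometric data",
        "government social credit scoring",
        "toys that encourage dangerous behavior or self-harm",
    ],
    "high_risk": [  # allowed only with human oversight and accountability
        "allocating government services",
        "school admissions",
        "loan decisions",
    ],
    # Everything else: ordinary product law, at most basic transparency
    # (e.g., a chatbot must disclose that it is not human).
}


def tier_for(application: str) -> str:
    """Return the tier an example application falls into."""
    for tier, examples in RISK_TIERS.items():
        if application in examples:
            return tier
    return "minimal_risk"


print(tier_for("loan decisions"))  # high_risk
```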

Making the AIA not apply to some of the systems we need to worry about—as the “presidency compromise” draft could do—would leave the door open for corruption and negligence. It would also make legal the very things the European Commission was trying to protect us from, like social credit systems and generalized facial recognition in public spaces, so long as a company could claim its system wasn’t “real” AI.

The Future of Robot Nannies

Childcare is the most intimate of activities. Evolution has generated drives so powerful that we will risk our lives to protect not only our own children, but quite often any child, and even the young of other species. Robots, by contrast, are products created by commercial entities with commercial goals, which may—and should—include the well-being of their customers, but will never be limited to such. Robots, corporations, and other legal or non-legal entities do not possess the instinctual nature of humans to care for the young—even if our anthropomorphic tendencies may prompt some children and adults to overlook this fact.

As a result, it is important to take into account the likelihood of deception—both commercial deception through advertising and self-deception on the part of parents—even though robots are unlikely to cause significant psychological damage to children or to others who may come to love them.

Television manufacturers, broadcasters, and online game makers are not deemed liable when children are left too long in front of a screen. Robotics companies will want to be in the same position: no company wants to be liable for damage to children, so manufacturers will likely undersell the artificial intelligence (AI) and interactive capacities of their robots. Robots (certainly those sold in jurisdictions with strong consumer protection) are therefore likely to be marketed primarily as toys, surveillance devices, and possibly household utilities. They will be brightly colored and deliberately designed to appeal to parents and children. We expect a variety of products, some with advanced capabilities and some with humanoid features. Parents will quickly discover a robot’s ability to engage and distract their child. Robotics companies will program experiences geared toward parents and children, just as television broadcasters do. But robots will always carry disclaimers, such as “this device is not a toy and should only be used with adult supervision” or “this device is provided for entertainment only. It should not be considered educational.”

Nevertheless, parents will notice that they can leave their children alone with robots, just as they can leave them to watch television or to play with other children. Humans are phenomenal learners and very good at detecting regularities and exploiting affordances. Parents will quickly notice the educational benefits of robot nannies that have advanced AI and communication skills. Occasional horror stories, such as the robot nanny and toddler tragedy in the novel Scarlett and Gurl, will make headline news and remind parents how to use robots responsibly.

This will likely continue until or unless the incidence of injuries necessitates redesign, a revision of consumer safety standards, statutory notice requirements, and/or risk-based uninsurability, all of which will further refine the industry. Meanwhile, the media will also seize on stories of robots saving children in unexpected ways, as it does now when children (or adults) are saved by other young children and dogs. This should not make people think that they should leave children alone with robots, but given the propensity we already have to anthropomorphize robots, it may make parents feel that little bit more comfortable—until the next horror story makes headlines.

When it comes to liability, we should be able to communicate to the manufacturers of robot nannies the same model of liability that applies to toys: Make your robots reliable, describe what they do accurately, and provide sufficient notice of reasonably foreseeable danger from misuse. Then, apart from the exceptional situation of errors in design or manufacture, such as parts that come off and choke children, legal liability will rest entirely with the parent or responsible adult, as it does now, and as it should under existing product liability law.

Biden’s ‘Antitrust Revolution’ Overlooks AI—at Americans’ Peril

Despite the executive orders and congressional hearings of the “Biden antitrust revolution,” the most profound anti-competitive shift is happening under policymakers’ noses: the cornering of artificial intelligence and automation by a handful of tech companies. This needs to change.

There is little doubt that the impact of AI will be widely felt. It is shaping product innovations, creating new research, discovery, and development pathways, and reinventing business models. AI is making inroads in the development of autonomous vehicles, which may eventually improve road safety, reduce urban congestion, and help drivers make better use of their time. AI recently predicted the molecular structure of almost every protein in the human body, and it helped develop and roll out a Covid vaccine in record time. The pandemic itself may have accelerated AI’s incursion—in emergency rooms for triage; in airports, where robots spray disinfecting chemicals; in increasingly automated warehouses and meatpacking plants; and in our remote workdays, with the growing presence of chatbots, speech recognition, and email systems that get better at completing our sentences.

Exactly how AI will affect the future of human work, wages, or productivity overall remains unclear. Though service and blue-collar wages have lately been on the rise, they’ve stagnated for three decades. According to MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo, 50 to 70 percent of this stagnation can be attributed to the loss of mostly routine jobs to automation. White-collar occupations are also at risk as machine learning and smart technologies take on complex functions. According to McKinsey, while only about 10 percent of these jobs could disappear altogether, 60 percent of them may see at least a third of their tasks subsumed by machines and algorithms. Some researchers argue that while AI’s overall productivity impact has so far been disappointing, it will improve; others are less sanguine. Despite these uncertainties, most experts agree that on net, AI will “become more of a challenge to the workforce,” and we should anticipate a flat to slightly negative impact on jobs by 2030.

Without intervention, AI could also help undermine democracy, whether by amplifying misinformation or by enabling mass surveillance. The past year and a half has also underscored the impact of algorithmically powered social media, not just on the health of democracy, but on health care itself.

The overall direction and net impact of AI sit on a knife’s edge, unless AI R&D and applications are appropriately channeled with wider societal and economic benefits in mind. How can we ensure that?

A handful of US tech companies, including Amazon, Alphabet, Facebook, and Netflix, along with Chinese mega-players such as Alibaba and Baidu, are responsible for $2 of every $3 spent globally on AI. They’re also among the top AI patent holders. Not only do their outsize AI budgets dwarf others’, including the federal government’s, they also emphasize building AI internally rather than buying it. Even though they buy comparatively little, they’ve still cornered the AI startup acquisition market. Many of these are early-stage acquisitions: the tech giants integrate the acquired products into their own portfolios, or they take the IP off the market if it doesn’t suit their strategic purposes and redeploy the talent. According to research from my Digital Planet team, US AI talent is intensely concentrated. The median number of AI employees among the field’s top five employers—Amazon, Google, Microsoft, Facebook, and Apple—is some 18,000, while the median for companies six through 24 is about 2,500, and it drops significantly from there. Moreover, these companies hold near-monopolies on data in key behavioral areas. And they are setting the stage to become the primary suppliers of AI-based products and services to the rest of the world.

Each key player has areas of focus consistent with its business interests: Google/Alphabet spends disproportionately on natural language and image processing and on optical character, speech, and facial recognition. Amazon does the same on supply chain management and logistics, robotics, and speech recognition. Many of these investments will yield socially beneficial applications, while others, such as IBM’s Watson—which aspired to become the go-to digital decision tool in fields as diverse as health care, law, and climate action—may not deliver on initial promises, or may fail altogether. Moonshot projects, such as level 4 driverless cars, may attract excessive investment simply because the Big Tech players choose to champion them. Failures, disappointments, and pivots are natural to developing any new technology. We should, however, worry about the concentration of investments in a technology so fundamental and ask how investments are being allocated overall. AI, arguably, could have more profound impact than social media, online retail, or app stores—the current targets of antitrust. Google CEO Sundar Pichai may have been a tad overdramatic when he declared that AI will have more impact on humanity than fire, but that alone ought to light a fire under the policy establishment to pay closer attention.

The Absurd Idea to Put Bodycams on Teachers Is … Feasible?

In the realm of international cybersecurity, “dual use” technologies are capable of both affirming and eroding human rights. Facial recognition may identify a missing child, or make anonymity impossible. Hacking may save lives by revealing key intel on a terrorist attack, or empower dictators to identify and imprison political dissidents.

The same is true for gadgets. Your smart speaker makes it easier to order pizza and listen to music, but also helps tech giants track you even more intimately and target you with more ads. Your phone’s GPS can both tell where you are and pass that data to advertisers and, sometimes, the federal government.

Tools can often be bought for one purpose, then, over time, used for another.

These subtle shifts are so common that when a conservative think tank in Nevada last month suggested mandating that teachers wear body cameras to ensure they don’t teach critical race theory, I thought it was ridiculous, offensive, and entirely feasible. Body cameras were intended to keep an eye on cops, but have also been used by police to misrepresent their encounters with the public.

Days later, “body cameras” trended on Twitter after Fox News pundit Tucker Carlson endorsed the idea. Anti-CRT teaching bills, which have already passed in states like Iowa, Texas, and my home state, Arkansas, continued to gain momentum. Now, I’m half expecting these bills to include funding for the devices because truly no idea is too absurd for the surveillance state.

The logic (to the extent that any logic has been applied) is that teachers are being compelled by far-left activists to teach students to resist patriotism and instead hate America because of the centuries-old sin of chattel slavery. Body cameras would allow parents to monitor whether their children are being indoctrinated. (There’s more support for this than you might think.)

As recounted by The Atlantic’s Adam Harris, the recent rebranding of critical race theory as an existential threat dates back about a year and a half.

In late 2019, a few schools around the country began adding excerpts from The New York Times’ 1619 Project to their history curriculum, outraging many conservatives who dismissed the core thesis reframing American history around slavery. The surge of interest in diversity and anti-racism training following the murder of George Floyd prompted some conservative writers to complain of secret reeducation campaigns. (Ironically, the Black men and women actually leading these trainings are ambivalent about whether they’ll cause lasting change.)

And so, everything from reading lists to diversity seminars became “critical race theory,” a far cry from CRT’s origin in the 1970s as an analysis of the legal system by the late Harvard Law professor Derrick Bell.

This is what makes the turn toward surveillance to outlaw CRT so interesting: an ill-defined, amorphous problem meets an ill-defined, amorphous solution, with the battleground, ironically, being schools, which have increasingly embraced surveillance over the past few years.

The aftermath of the Stoneman Douglas High School shooting in 2018 led to a boom in “hardening” schools, often through surveillance: Schools began installing iris scanners, gunshot-detection microphones, facial recognition for building access, and weapon-detecting robots. Online, schools turned to social media surveillance (on and off campus) that pings staff whenever students’ posts include words associated with suicide or shootings. As Republican lawmakers shirked a conversation on gun control, funding more surveillance and more officers in schools became the alternative.

When the pandemic hit, school closures became a reason for surveillance. Schools began buying proctoring software that relies on facial recognition and even screen monitoring. Then, as schools reopened, surveillance firms made yet another pitch: this time, the same anti-shooting surveillance software could detect whether students were wearing masks or failing to social distance. Dual uses abound.