20 Years After 9/11, Surveillance Has Become a Way of Life

Two decades after 9/11, many simple acts that were once taken for granted now seem unfathomable: strolling with loved ones to the gate of their flight, meandering through a corporate plaza, using streets near government buildings. Our metropolises’ commons are now enclosed with steel and surveillance. Amid the perpetual pandemic of the past year and a half, cities have become even more walled off. With each new barrier erected, more of the city’s defining feature erodes: the freedom to move, wander, and even, as Walter Benjamin said, to “lose one’s way … as one loses one’s way in a forest.”

It’s harder to get lost amid constant tracking. It’s also harder to freely gather when the public spaces between home and work are stripped away. Known as third places, they are the connective tissue that stitches together the fabric of modern communities: the public park where teens can skateboard next to grandparents playing chess, the library where children can learn to read and unhoused individuals can find a digital lifeline. When third places vanish, as they have since the attacks, communities can falter.

Without these spaces holding us together, we live less as one public and more as several separate societies operating in parallel. Just as social-media echo chambers have undermined our capacity for conversations online, the loss of third places can create physical echo chambers.

America has never been particularly adept at protecting our third places. For enslaved and Indigenous people, entering the town square alone could be a death sentence. Later, the racial terrorism of Jim Crow in the South denied Black Americans not only suffrage, but also access to lunch counters, public transit, and even the literal water cooler. In northern cities like New York, Black Americans still faced arrest and violence for transgressing rigid, but unseen, segregation codes.

Throughout the 20th century, New York built an infrastructure of exclusion to keep our unhoused neighbors from sharing the city institutions that are, by law, every bit as much theirs to occupy. In 1999, then-mayor Rudy Giuliani warned unhoused New Yorkers that “streets do not exist in civilized societies for the purpose of people sleeping there.” His threats prompted thousands of NYPD officers to systematically target and push the unhoused out of sight, thus semi-privatizing the quintessential public place.

Despite these limitations, before 9/11 millions of New Yorkers could walk and wander through vast networks of modern commons—public parks, private plazas, paths, sidewalks, open lots, and community gardens, crossing paths with those whom they would never have otherwise met. These random encounters electrify our city and give us a unifying sense of self. That shared space began to slip away from us 20 years ago, and if we’re not careful, it’ll be lost forever.

In the aftermath of the attacks, we heard patriotic platitudes from those who promised to “defend democracy.” But in the ensuing years, their defense became democracy’s greatest threat, reconstructing cities as security spaces. The billions we spent to “defend our way of life” have proved to be its undoing, and it’s unclear if we’ll be able to turn back the trend.

In a country where the term “papers, please” was once synonymous with foreign authoritarianism, photo ID has become an ever-present requirement. Before 9/11, a New Yorker could spend their entire day traversing the city without any need for ID. Now it’s required to enter nearly any large building or institution.

While the ID check has become muscle memory for millions of privileged New Yorkers, it’s a source of uncertainty and fear for others. Millions of Americans lack a photo ID, and for millions more, using ID is a risk, a source of data for Immigration and Customs Enforcement.

According to Mizue Aizeki, interim executive director of the New York–based Immigrant Defense Project, “ID systems are particularly vulnerable to becoming tools of surveillance.” Aizeki added, “data collection and analysis has become increasingly central to ICE’s ability to identify and track immigrants,” noting that the Department of Homeland Security has dramatically increased its support for surveillance systems since its post-9/11 founding.

ICE has spent millions partnering with firms like Palantir, the controversial data aggregator that sells information services to governments at home and abroad. Vendors can collect digital sign-in lists from the buildings where we show our IDs, run facial recognition in plazas, and deploy countless other tools that track the areas around office buildings with an almost military level of scrutiny. According to Aizeki, “as mass policing of immigrants has escalated, advocates have been confronted by a rapidly expanding surveillance state.”

Biden’s ‘Antitrust Revolution’ Overlooks AI—at Americans’ Peril

Despite the executive orders and congressional hearings of the “Biden antitrust revolution,” the most profound anti-competitive shift is happening under policymakers’ noses: the cornering of artificial intelligence and automation by a handful of tech companies. This needs to change.

There is little doubt that the impact of AI will be widely felt. It is shaping product innovations, creating new research, discovery, and development pathways, and reinventing business models. AI is making inroads in the development of autonomous vehicles, which may eventually improve road safety, reduce urban congestion, and help drivers make better use of their time. AI recently predicted the molecular structure of almost every protein in the human body, and it helped develop and roll out a Covid vaccine in record time. The pandemic itself may have accelerated AI’s incursion—in emergency rooms for triage; in airports, where robots spray disinfecting chemicals; in increasingly automated warehouses and meatpacking plants; and in our remote workdays, with the growing presence of chatbots, speech recognition, and email systems that get better at completing our sentences.

Exactly how AI will affect the future of human work, wages, or productivity overall remains unclear. Though service and blue-collar wages have lately been on the rise, they’ve stagnated for three decades. According to MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo, 50 to 70 percent of this languishing can be attributed to the loss of mostly routine jobs to automation. White-collar occupations are also at risk as machine learning and smart technologies take on complex functions. According to McKinsey, while only about 10 percent of these jobs could disappear altogether, 60 percent of them may see at least a third of their tasks subsumed by machines and algorithms. Some researchers argue that while AI’s overall productivity impact has been so far disappointing, it will improve; others are less sanguine. Despite these uncertainties, most experts agree that on net, AI will “become more of a challenge to the workforce,” and we should anticipate a flat to slightly negative impact on jobs by 2030.

Without intervention, AI could also help undermine democracy, whether by amplifying misinformation or by enabling mass surveillance. The past year and a half has also underscored the impact of algorithmically powered social media, not just on the health of democracy, but on health care itself.

The overall direction and net impact of AI sit on a knife’s edge; unless AI R&D and applications are channeled with wider societal and economic benefits in mind, the balance could tip the wrong way. How can we ensure that they are?

A handful of US tech companies, including Amazon, Alphabet, Facebook, and Netflix, along with Chinese mega-players such as Alibaba and Baidu, are responsible for $2 of every $3 spent globally on AI. They’re also among the top AI patent holders. Not only do their outsize AI budgets dwarf others’, including the federal government’s, they also emphasize building AI internally rather than buying it. Even though they buy comparatively little, they’ve still cornered the AI startup acquisition market. Many of these are early-stage acquisitions: the tech giants integrate these companies’ products into their own portfolios, or take the IP off the market if it doesn’t suit their strategic purposes and redeploy the talent. According to research from my Digital Planet team, US AI talent is intensely concentrated. The median number of AI employees in the field’s top five employers—Amazon, Google, Microsoft, Facebook, and Apple—is some 18,000, while the median for companies six through 24 is about 2,500, and it drops significantly from there. Moreover, these companies hold near-monopolies of data on key behavioral areas. And they are setting the stage to become the primary suppliers of AI-based products and services to the rest of the world.

Each key player has areas of focus consistent with its business interests: Google/Alphabet spends disproportionately on natural language and image processing and on optical character, speech, and facial recognition; Amazon focuses on supply chain management and logistics, robotics, and speech recognition. Many of these investments will yield socially beneficial applications, while others, such as IBM’s Watson—which aspired to become the go-to digital decision tool in fields as diverse as health care, law, and climate action—may not deliver on initial promises, or may fail altogether. Moonshot projects, such as level 4 driverless cars, may attract excessive investment simply because the Big Tech players choose to champion them. Failures, disappointments, and pivots are natural in developing any new technology. We should, however, worry about the concentration of investment in a technology so fundamental and ask how that investment is being allocated overall. AI, arguably, could have a more profound impact than social media, online retail, or app stores—the current targets of antitrust. Google CEO Sundar Pichai may have been a tad overdramatic when he declared that AI will have more impact on humanity than fire, but that alone ought to light a fire under the policy establishment to pay closer attention.

The Absurd Idea to Put Bodycams on Teachers Is … Feasible?

In the realm of international cybersecurity, “dual use” technologies are capable of both affirming and eroding human rights. Facial recognition may identify a missing child, or make anonymity impossible. Hacking may save lives by revealing key intel on a terrorist attack, or empower dictators to identify and imprison political dissidents.

The same is true for gadgets. Your smart speaker makes it easier to order pizza and listen to music, but also helps tech giants track you even more intimately and target you with more ads. Your phone’s GPS can both tell where you are and pass that data to advertisers and, sometimes, the federal government.

Tools can often be bought for one purpose, then, over time, used for another.

These subtle shifts are so common that when a conservative think tank in Nevada last month suggested mandating that teachers wear body cameras to ensure they don’t teach critical race theory, I thought it was ridiculous, offensive, and entirely feasible. Body cameras were intended to keep an eye on cops, but have also been used by police to misrepresent their encounters with the public.

Days later, “body cameras” trended on Twitter after Fox News pundit Tucker Carlson endorsed the idea. Anti-CRT teaching bills, which have already passed in states like Iowa, Texas, and my home state, Arkansas, continued to gain momentum. Now, I’m half expecting these bills to include funding for the devices because truly no idea is too absurd for the surveillance state.

The logic (to the extent that any logic has been applied) is that teachers are being compelled by far-left activists to teach students to resist patriotism and instead hate America because of the centuries-old sin of chattel slavery. Body cameras would allow parents to monitor whether their children are being indoctrinated. (There’s more support for this than you might think.)

As recounted by The Atlantic’s Adam Harris, the recent rebranding of critical race theory as an existential threat dates back about a year and a half.

In late 2019, a few schools around the country began adding excerpts from The New York Times’ 1619 Project to their history curriculum, outraging many conservatives who dismissed the core thesis reframing American history around slavery. The surge of interest in diversity and anti-racism training following the murder of George Floyd prompted some conservative writers to complain of secret reeducation campaigns. (Ironically, the Black men and women actually leading these trainings are ambivalent about whether they’ll cause lasting change.)

And so, everything from reading lists to diversity seminars became “critical race theory,” a far cry from CRT’s origin in the 1970s as an analysis of the legal system by the late Harvard Law professor Derrick Bell.

This is what makes the turn toward surveillance to outlaw CRT so interesting: an ill-defined, amorphous problem meets an ill-defined, amorphous solution, and the battleground, ironically, is schools, which have eagerly embraced surveillance over the past few years.

The aftermath of the Stoneman Douglas High School shooting in 2018 led to a boom in “hardening” schools, often through surveillance: Schools began installing iris scanners, gunshot-detection microphones, facial recognition for building access, and weapon-detecting robots. Online, schools turned to social media surveillance (on and off campus) that pings staff whenever students’ posts include words associated with suicide or shootings. As Republican lawmakers shirked any real conversation on gun control, funding more surveillance and officers in schools became an alternative.

When the pandemic hit, school closures became a reason for surveillance. Schools began buying proctoring software that relies on facial recognition and even screen monitoring. Then, as schools reopened, surveillance firms started yet another pitch: this time, the same anti-shooting surveillance software could detect whether students were wearing masks or failing to social distance. Dual uses abound.

Dumbed Down AI Rhetoric Harms Everyone

When the European Union Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and responses to it underscore democracies’ confusing rhetoric on AI.

Over the past decade, high-level stated goals about regulating AI have often conflicted with the specifics of regulatory proposals, and what end states should look like isn’t well articulated in either case. Coherent and meaningful progress on developing internationally attractive democratic AI regulation, even as it may vary from country to country, begins with resolving the discourse’s many contradictions and unsubtle characterizations.

The EU Commission has touted its proposal as an AI regulation landmark. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

This is certainly better than the stagnation of many national governments, especially the US, on rules of the road for the companies, government agencies, and other institutions that deploy AI. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or operating buses in Málaga, Spain.

But casting the EU’s regulation as “leading” simply because it’s first masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges at hand for democratic AI strategy.

Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists on the harms facial recognition can inflict on marginalized communities and grave mass-surveillance risks.

The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.

The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.

The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander from the nonprofit organization European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that on the surface appears to restrict facial recognition use but in fact has many broad carve-outs.