In the realm of international cybersecurity, “dual use” technologies are capable of both affirming and eroding human rights. Facial recognition may identify a missing child, or make anonymity impossible. Hacking may save lives by revealing key intel on a terrorist attack, or empower dictators to identify and imprison political dissidents.
The same is true for gadgets. Your smart speaker makes it easier to order pizza and listen to music, but it also helps tech giants track you even more intimately and target you with more ads. Your phone’s GPS can both pinpoint where you are and pass that data to advertisers and, sometimes, the federal government.
Tools can often be bought for one purpose, then, over time, used for another.
These subtle shifts are so common that when a conservative think tank in Nevada last month suggested mandating that teachers wear body cameras to ensure they don’t teach critical race theory, I thought it was ridiculous, offensive, and entirely feasible. Body cameras were intended to keep an eye on cops, but have also been used by police to misrepresent their encounters with the public.
Days later, “body cameras” trended on Twitter after Fox News pundit Tucker Carlson endorsed the idea. Anti-CRT teaching bills, which have already passed in states like Iowa, Texas, and my home state, Arkansas, continued to gain momentum. Now, I’m half expecting these bills to include funding for the devices because truly no idea is too absurd for the surveillance state.
The logic (to the extent that any logic has been applied) is that teachers are being compelled by far-left activists to teach students to resist patriotism and instead hate America because of the centuries-old sin of chattel slavery. Body cameras would allow parents to monitor whether their children are being indoctrinated. (There’s more support for this than you might think.)
As recounted by The Atlantic’s Adam Harris, the recent rebranding of critical race theory as an existential threat dates back about a year and a half.
In late 2019, a few schools around the country began adding excerpts from The New York Times’ 1619 Project to their history curricula, outraging many conservatives, who dismissed the project’s core thesis of reframing American history around slavery. The surge of interest in diversity and anti-racism training following the murder of George Floyd prompted some conservative writers to complain of secret reeducation campaigns. (Ironically, the Black men and women actually leading these trainings are ambivalent about whether they’ll cause lasting change.)
And so, everything from reading lists to diversity seminars became “critical race theory,” a very far cry from CRT’s origin in the 1970s as an analysis of the legal system by the late Harvard Law professor Derrick Bell.
This is what makes the turn toward surveillance to outlaw CRT so interesting: an ill-defined, amorphous problem meets an ill-defined, amorphous solution. The battleground, ironically, is schools, which have embraced surveillance to a striking degree over the past few years.
The aftermath of the Stoneman Douglas High School shooting in 2018 led to a boom in “hardening” schools, often through surveillance: Schools began installing iris scanners, gunshot-detection microphones, facial recognition for building access, and weapon-detecting robots. Online, schools turned to social media monitoring (on and off campus) that pings staff whenever students’ posts include words associated with suicide or shootings. As Republican lawmakers avoided any real conversation about gun control, funding more surveillance and officers in schools became an alternative.
When the pandemic hit, school closures became a new reason for surveillance. Schools began buying proctoring software that relies on facial recognition and even screen monitoring. Then, as schools reopened, surveillance firms rolled out yet another pitch: This time, the same anti-shooting surveillance software could detect whether students were wearing masks or failing to maintain social distance. Dual uses abound.
When the European Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. The praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and the responses to it underscore democracies’ confusing rhetoric on AI.
Over the past decade, high-level stated goals for regulating AI have often conflicted with the specifics of regulatory proposals, and in neither case is the desired end state well articulated. Coherent and meaningful progress on developing internationally attractive democratic AI regulation, even if it varies from country to country, begins with resolving the discourse’s many contradictions and unsubtle characterizations.
The European Commission has touted its proposal as a landmark in AI regulation. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”
This is certainly better than many national governments, especially the US, stagnating on rules of the road for the companies, government agencies, and other institutions that deploy AI. The technology is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or for operating buses in Málaga, Spain.
But casting the EU’s regulation as “leading” simply because it came first masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges facing democratic AI strategy.
Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists about the harms facial recognition can inflict on marginalized communities and the grave risks of mass surveillance.
The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.
The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” For all that these scenarios are billed as “narrowly defined,” the language offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kinds of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.
The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander of the nonprofit European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror US legislation that on the surface appears to restrict facial recognition but in fact contains many broad carve-outs.