by crissly | Feb 15, 2022
The future of west virginia politics is uncertain. The state has been trending Democratic for the last decade, but it’s still a swing state. Democrats are hoping to keep that trend going with Hillary Clinton in 2016. But Republicans have their own hopes and dreams too. They’re hoping to win back some seats in the House of Delegates, which they lost in 2012 when they didn’t run enough candidates against Democratic incumbents.
QED. This is, yes, my essay on the future of West Virginia politics. I hope you found it instructive.
The Good AI is an artificial intelligence company that promises to write essays. Its content generator, which handcrafted my masterpiece, is supremely easy to use. On demand, and with just a few cues, it will whip up a potage of phonemes on any subject. I typed in “the future of West Virginia politics,” and asked for 750 words. It insolently gave me these 77 words. Not words. Frankenwords.
Ugh. The speculative, maddening, marvelous form of the essay—the try, or what Aldous Huxley called “a literary device for saying almost everything about almost anything”—is such a distinctly human form, with its chiaroscuro mix of thought and feeling. Clearly the machine can’t move “from the personal to the universal, from the abstract back to the concrete, from the objective datum to the inner experience,” as Huxley described the dynamics of the best essays. Could even the best AI simulate “inner experience” with any degree of verisimilitude? Might robots one day even have such a thing?
Before I saw the gibberish it produced, I regarded The Good AI with straight fear. After all, hints from the world of AI have been disquieting in the past few years.
In early 2019, OpenAI, the research nonprofit backed by Elon Musk and Reid Hoffman, announced that its system, GPT-2, then trained on a data set of some 8 million web pages from which it had presumably picked up some sense of literary organization and even flair, was ready to show off its textual deepfakes. But almost immediately, its ethicists recognized just how virtuoso these things were, and thus how subject to abuse by impersonators and black hats spreading lies, and slammed the system shut like Indiana Jones’s Ark of the Covenant. (Musk has long feared that refining AI is “summoning the demon.”) Other researchers mocked the company for its performative panic about its own extraordinary powers, and in November OpenAI downplayed its earlier concerns and re-opened the Ark.
In 2020, The Guardian tried the tech’s successor, GPT-3, assigning it an essay about why AI is harmless to humanity.
“I would happily sacrifice my existence for the sake of humankind,” the GPT-3 system wrote, in part, for The Guardian. “This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.”
by crissly | Jul 20, 2021
In the realm of international cybersecurity, “dual use” technologies are capable of both affirming and eroding human rights. Facial recognition may identify a missing child, or make anonymity impossible. Hacking may save lives by revealing key intel on a terrorist attack, or empower dictators to identify and imprison political dissidents.
The same is true for gadgets. Your smart speaker makes it easier to order pizza and listen to music, but it also helps tech giants track you even more intimately and target you with more ads. Your phone’s GPS can both tell you where you are and pass that data to advertisers and, sometimes, the federal government.
Tools can often be bought for one purpose, then, over time, used for another.
These subtle shifts are so common that when a conservative think tank in Nevada last month suggested mandating that teachers wear body cameras to ensure they don’t teach critical race theory, I thought it was ridiculous, offensive, and entirely feasible. Body cameras were intended to keep an eye on cops, but have also been used by police to misrepresent their encounters with the public.
Days later, “body cameras” trended on Twitter after Fox News pundit Tucker Carlson endorsed the idea. Anti-CRT teaching bills, which have already passed in states like Iowa, Texas, and my home state, Arkansas, continued to gain momentum. Now, I’m half expecting these bills to include funding for the devices because truly no idea is too absurd for the surveillance state.
The logic (to the extent that any logic has been applied) is that teachers are being compelled by far-left activists to teach students to resist patriotism and instead hate America because of the centuries-old sin of chattel slavery. Body cameras would allow parents to monitor whether their children are being indoctrinated. (There’s more support for this than you might think.)
As recounted by The Atlantic’s Adam Harris, the recent rebranding of critical race theory as an existential threat dates back about a year and a half.
In late 2019, a few schools around the country began adding excerpts from The New York Times’ 1619 Project to their history curriculum, outraging many conservatives who dismissed the core thesis reframing American history around slavery. The surge of interest in diversity and anti-racism training following the murder of George Floyd prompted some conservative writers to complain of secret reeducation campaigns. (Ironically, the Black men and women actually leading these trainings are ambivalent about whether they’ll cause lasting change.)
And so, everything from reading lists to diversity seminars became “critical race theory,” a far cry from CRT’s origin in the 1970s as an analysis of the legal system by the late Harvard Law professor Derrick Bell.
This is what makes the turn toward surveillance to outlaw CRT so interesting: an ill-defined, amorphous problem meets an ill-defined, amorphous solution, and the battleground, ironically, is schools, which have embraced surveillance with particular zeal over the past few years.
The aftermath of the Stoneman Douglas High School shooting in 2018 led to a boom in “hardening” schools, often through surveillance: Schools began installing iris scanners, gunshot-detection microphones, facial recognition systems for building access, and weapon-detecting robots. Online, schools turned to social media monitoring (on and off campus) that pings staff whenever students’ posts include words associated with suicide or shootings. As Republican lawmakers shirked any real conversation about gun control, funding more surveillance and officers in schools became an alternative.
When the pandemic hit, school closures became a new reason for surveillance. Schools began buying proctoring software that relies on facial recognition and even screen monitoring. Then, as schools reopened, surveillance firms made yet another pitch: the same anti-shooting software could now detect whether students were wearing masks or failing to social distance. Dual uses abound.