Cow, Bull, and the Meaning of AI Essays

The future of west virginia politics is uncertain. The state has been trending Democratic for the last decade, but it’s still a swing state. Democrats are hoping to keep that trend going with Hillary Clinton in 2016. But Republicans have their own hopes and dreams too. They’re hoping to win back some seats in the House of Delegates, which they lost in 2012 when they didn’t run enough candidates against Democratic incumbents.

QED. This is, yes, my essay on the future of West Virginia politics. I hope you found it instructive.

The Good AI is an artificial intelligence company that promises to write essays. Its content generator, which handcrafted my masterpiece, is supremely easy to use. On demand, and with just a few cues, it will whip up a potage of phonemes on any subject. I typed in “the future of West Virginia politics,” and asked for 750 words. It insolently gave me these 77 words. Not words. Frankenwords.

Ugh. The speculative, maddening, marvelous form of the essay—the try, or what Aldous Huxley called “a literary device for saying almost everything about almost anything”—is such a distinctly human form, with its chiaroscuro mix of thought and feeling. Clearly the machine can’t move “from the personal to the universal, from the abstract back to the concrete, from the objective datum to the inner experience,” as Huxley described the dynamics of the best essays. Could even the best AI simulate “inner experience” with any degree of verisimilitude? Might robots one day even have such a thing?

Before I saw the gibberish it produced, I regarded The Good AI with straight fear. After all, hints from the world of AI have been disquieting in the past few years.

In early 2019, OpenAI, the research nonprofit backed by Elon Musk and Reid Hoffman, announced that its system, GPT-2, then trained on a data set of some 8 million web pages from which it had presumably picked up some sense of literary organization and even flair, was ready to show off its textual deepfakes. But almost immediately, its ethicists recognized just how virtuoso these things were, and thus how subject to abuse by impersonators and black hats spreading lies, and slammed it shut like Indiana Jones’s Ark of the Covenant. (Musk has long feared that refining AI is “summoning the demon.”) Other researchers mocked the company for its performative panic about its own extraordinary powers, and in November 2019 OpenAI downplayed its earlier concerns and reopened the Ark.

The Guardian later tried a successor of the tech, GPT-3, assigning it an essay about why AI is harmless to humanity.

“I would happily sacrifice my existence for the sake of humankind,” the GPT-3 system wrote, in part, for The Guardian. “This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.”

The World Needs Deepfake Experts to Stem This Chaos

Recently the military coup government in Myanmar added serious allegations of corruption to a set of existing spurious cases against Burmese leader Aung San Suu Kyi. These new charges build on the statements of a prominent detained politician that were first released in a March video that many in Myanmar suspected of being a deepfake.

In the video, the political prisoner’s voice and face appear distorted and unnatural as he makes a detailed claim about providing gold and cash to Aung San Suu Kyi. Social media users and journalists in Myanmar immediately questioned whether the statement was real. The incident illustrates a problem that will only get worse: as genuine deepfakes get better, so does people’s willingness to dismiss real footage as a deepfake. What tools and skills will be available to investigate both kinds of claims, and who will use them?

In the video, Phyo Min Thein, the former chief minister of Myanmar’s largest city, Yangon, sits in a bare room, apparently reading from a statement. His speech sounds odd, unlike his normal voice; his face is static; and in the poor-quality version that first circulated, his lips look out of sync with his words. Seemingly everyone wanted to believe it was a fake. Screenshots of results from an online deepfake detector spread rapidly, showing a red box around the politician’s face and an assertion, with 90-percent-plus confidence, that the confession was a deepfake. Burmese journalists lacked the forensic skills to make a judgment. Past actions by the state and the present military reinforced cause for suspicion: government spokespeople have shared staged images targeting the Rohingya ethnic group, while military coup organizers have denied that social media evidence of their killings could be real.

But was the prisoner’s “confession” really a deepfake? Along with deepfake researcher Henry Ajder, I consulted deepfake creators and media forensics specialists. Some noted that the video was of sufficiently low quality that the mouth glitches people saw were as likely to be compression artifacts as evidence of deepfakery. Detection algorithms are also unreliable on low-quality compressed video. His unnatural-sounding voice could be the result of reading a script under extreme pressure. If it is a fake, it’s a very good one, because his throat and chest move in sync with his words at key moments. The researchers and creators were generally skeptical that it was a deepfake, though not certain. At this point it is more likely to be what human rights activists like me are familiar with: a coerced or forced confession on camera. And given the circumstances of the military coup, the substance of the allegations should not be trusted absent a legitimate judicial process.

Why does this matter? Whether the video is a forced confession or a deepfake, the result is most likely the same: words digitally or physically compelled out of a prisoner’s mouth by a coup d’état government. However, while the use of deepfakes to create nonconsensual sexual images currently far outstrips political instances, deepfake and synthetic media technology is rapidly improving, proliferating, and commercializing, expanding the potential for harmful uses. The case in Myanmar demonstrates the growing gap between the capability to make deepfakes, the opportunity to claim a real video is a deepfake, and our ability to challenge either.

It also illustrates the challenges of having the public rely on free online detectors without understanding the strengths and limitations of detection or how to second-guess a misleading result. Deepfake detection is still an emerging technology, and a detection tool that works on one generation approach often does not work on another. We must also be wary of counter-forensics, where someone deliberately takes steps to confuse a detection approach. And it’s not always possible to know which detection tools to trust.

How do we keep conflicts and crises around the world from being blindsided by deepfakes and supposed deepfakes?

We should not be turning ordinary people into deepfake spotters, parsing the pixels to discern truth from falsehood. Most people will do better relying on simpler approaches to media literacy, such as the SIFT method, that emphasize checking other sources or tracing the original context of a video. In fact, encouraging people to be amateur forensic experts can send them down the conspiracy rabbit hole of distrust in images.