Elon Musk Has Fired Twitter’s ‘Ethical AI’ Team

As more and more problems with AI have surfaced, including biases around race, gender, and age, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such issues.

Twitter’s META unit (Machine Learning Ethics, Transparency, and Accountability), led by Rumman Chowdhury, was more progressive than most in publishing details of problems with the company’s AI systems, and in allowing outside researchers to probe its algorithms for new issues.

Last year, after Twitter users noticed that a photo-cropping algorithm seemed to favor white faces when choosing how to trim images, Twitter took the unusual decision to let its META unit publish details of the bias it uncovered. The group also launched one of the first ever “bias bounty” contests, which let outside researchers test the algorithm for other problems. Last October, Chowdhury’s team also published details of unintentional political bias on Twitter, showing how right-leaning news sources were, in fact, promoted more than left-leaning ones.
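
The “bias bounty” framing translates into fairly simple audit code in practice. The sketch below is illustrative only, not Twitter’s actual pipeline or any winning entry: `load_face_pairs` and `saliency_map` are hypothetical stand-ins for a paired-face dataset and a saliency model, and the test just tallies which half of a two-face composite the model’s most salient point lands on.

```python
# Minimal, illustrative audit sketch for a saliency-based cropper.
# `load_face_pairs` and `saliency_map` are hypothetical stand-ins, not
# Twitter's code: the first yields side-by-side composites of two faces
# with group labels, the second returns a 2D saliency array.
import numpy as np
from collections import Counter

def crop_preference(pairs, saliency_map):
    """Count how often the most-salient point lands on each group's half."""
    wins = Counter()
    for composite, left_group, right_group in pairs:
        sal = saliency_map(composite)  # 2D array, same height/width as the image
        _, x = np.unravel_index(np.argmax(sal), sal.shape)
        chosen = left_group if x < sal.shape[1] // 2 else right_group
        wins[chosen] += 1
    return wins

# wins = crop_preference(load_face_pairs(), saliency_map)
# A large imbalance between groups suggests the cropper systematically
# favors one group when forced to choose -- the pattern users reported.
```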

Many outside researchers saw the layoffs as a blow, not just for Twitter but for efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online disinformation, wrote on Twitter. 

“The META team was one of the only good case studies of a tech company running an AI ethics group that interacts with the public and academia with substantial credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.

Alkhatib says Chowdhury is incredibly well thought of within the AI ethics community and her team did genuinely valuable work holding Big Tech to account. “There aren’t many corporate ethics teams worth taking seriously,” he says. “This was one of the ones whose work I taught in classes.”

Mark Riedl, a professor studying AI at Georgia Tech, says the algorithms that Twitter and other social media giants use have a huge impact on people’s lives, and need to be studied. “Whether META had any impact inside Twitter is hard to discern from the outside, but the promise was there,” he says.

Riedl adds that letting outsiders probe Twitter’s algorithms was an important step toward more transparency and understanding of issues around AI. “They were becoming a watchdog that could help the rest of us understand how AI was affecting us,” he says. “The researchers at META had outstanding credentials with long histories of studying AI for social good.”

As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. There are many different algorithms that affect the way information is surfaced, and it’s challenging to understand them without the real time data they are being fed in terms of tweets, views, and likes.
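
To see why, consider a deliberately simplified ranking function. This is not Twitter’s algorithm; every weight and field below is invented. The point is that even with the formula fully public, you cannot predict what it surfaces without the live like, retweet, and view counters that only the platform holds.

```python
# A toy engagement-weighted ranker, purely illustrative: the output
# depends entirely on live signals the platform controls, so publishing
# the code alone reveals little about what users actually see.
from dataclasses import dataclass
from typing import Optional
import math
import time

@dataclass
class Tweet:
    likes: int
    retweets: int
    views: int
    posted_at: float  # unix timestamp

def score(t: Tweet, now: Optional[float] = None) -> float:
    """Toy engagement score with a recency decay (all weights invented)."""
    now = time.time() if now is None else now
    age_hours = max((now - t.posted_at) / 3600.0, 0.01)
    engagement = 1.0 * t.likes + 2.0 * t.retweets + 0.01 * t.views
    return engagement / (age_hours + 2.0) ** 1.5

# Two timelines built from identical code diverge completely once the
# counters differ -- auditing behavior needs the data, not just the source.
```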

The idea that there is one algorithm with explicit political leaning might oversimplify a system that can harbor more insidious biases and problems. Uncovering these is precisely the kind of work that Twitter’s META group was doing. “There aren’t many groups that rigorously study their own algorithms’ biases and errors,” says Alkhatib at the University of San Francisco. “META did that.” And now, it doesn’t.

Facebook Has a Child Predation Problem

Surely due diligence would dictate proactive steps to prevent the creation of such groups, backed up by quick action to remove any that get through once they are flagged and reported. I would have thought so. Until I stumbled into these groups and began, with rising disbelief, to find it impossible to get them taken down.

Children are sharing personal images and contact information in a sexualized digital space, and being induced to join private groups or chats where further images and actions will be solicited and exchanged.

Even as debate over Congress’ Earn It Act calls attention to the use of digital channels to distribute sexually explicit materials, we are failing to grapple with a seismic shift in the ways child sexual abuse materials are generated. Forty-five percent of US children aged 9 to 12 report using Facebook every day. (That fact alone makes a mockery of Facebook’s claim that it works actively to keep children under 13 off the platform.) According to recent research, over a quarter of 9- to 12-year-olds report having experienced sexual solicitation online. One in eight report having been asked to send a nude photo or video; one in 10 report having been asked to join a sexually explicit livestream. Smartphones, internet access, and Facebook together now reach into children’s hands and homes and create new spaces for active predation. At scale.

Of course I reported the group I had accidentally uncovered. I used Facebook’s on-platform system, tagging it as containing “nudity or sexual activity” which (next menu) “involves a child.” An automated response came back days later. The group had been reviewed and did not violate any “specific community standards.” If I continued to encounter content “offensive or distasteful to you”—was my taste the problem here?—I should report that specific content, not the group as a whole.

“Buscando novi@ de 9,10,11,12,13 años” (“Looking for a boyfriend or girlfriend aged 9, 10, 11, 12, 13”) had 7,900 members when I reported it. By the time Facebook replied that it did not violate community standards, it had 9,000.

So I tweeted at Facebook and the Facebook newsroom. I DMed people I didn’t know but thought might have access to people inside Facebook. I tagged journalists. And I reported through the platform’s protocol a dozen more groups, some with thousands of users: groups I found not through sexually explicit search terms but just by typing “11 12 13” into the Groups search bar.

What became ever clearer as I struggled to get action is that technology’s limits were not the problem. The full power of AI-driven algorithms was on display, but it was working to expand, not reduce, child endangerment. Because even as reply after reply hit my inbox denying grounds for action, new child sexualization groups began getting recommended to me as “Groups You May Like.”
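
The mechanism at work is mundane. The sketch below shows generic co-membership recommendation (“people who joined X also joined Y”); it is an assumption-laden illustration, not Facebook’s actual system, but it captures why reporting one group does nothing to stop its neighbors from being suggested.

```python
# Illustrative co-membership recommender. This is NOT Facebook's system,
# which is not public; it only shows the generic "people who joined X
# also joined Y" logic behind features like "Groups You May Like".
from collections import defaultdict
from itertools import combinations

def recommend_groups(memberships, my_groups, top_n=5):
    """memberships: dict mapping group name -> set of member ids."""
    overlap = defaultdict(int)
    for a, b in combinations(memberships, 2):
        shared = len(memberships[a] & memberships[b])
        if shared:
            overlap[(a, b)] = overlap[(b, a)] = shared
    scores = defaultdict(int)
    for g in my_groups:
        for other in memberships:
            if other not in my_groups:
                scores[other] += overlap[(g, other)]
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Visiting or joining one bad group raises the score of every group that
# shares its members, so the recommender keeps surfacing the cluster
# until all of it is removed.
```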

Each new group recommended to me had the same mix of cartoon-filled come-ons, emotional grooming, and gamified invites to share sexual materials as the groups I had reported. Some were in Spanish, some in English, others in Tagalog. When I searched for a translation of “hanap jowa” (roughly, “looking for a boyfriend or girlfriend”), the name of a series of groups, it led me to an article from the Philippines reporting on efforts by Reddit users to get child-endangering Facebook groups removed there.

YouTube’s Olympics Highlights Are Riddled With Propaganda

Sports fans who tuned in to watch the Beijing Winter Olympics on YouTube are instead being served propaganda videos. An analysis of YouTube search results by WIRED found that people who typed “Beijing,” “Beijing 2022,” “Olympics,” or “Olympics 2022” were shown pro-China and anti-China propaganda videos in the top results. Five of the most prominent propaganda videos, which often appear above actual Olympics highlights, have amassed almost 900,000 views.
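
For readers who want to repeat this kind of spot check, the sketch below uses the public YouTube Data API’s search endpoint. The API key is a placeholder, the “flagged” set simply lists channels named in this article, and results vary by region and date, so treat it as a rough probe rather than a replication of WIRED’s analysis.

```python
# Rough sketch of a search-results spot check via the YouTube Data API v3.
# Requires a real API key; rankings vary by region and over time.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
FLAGGED_CHANNELS = {"Jason Lightfoot", "Cyrus Janssen"}  # YouTubers named in this article

def top_results(query, n=20):
    """Return (title, channel) pairs for the top search results."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={"part": "snippet", "q": query, "type": "video",
                "maxResults": n, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return [(item["snippet"]["title"], item["snippet"]["channelTitle"])
            for item in resp.json().get("items", [])]

for term in ["Beijing", "Beijing 2022", "Olympics", "Olympics 2022"]:
    hits = [(rank, title, ch)
            for rank, (title, ch) in enumerate(top_results(term), 1)
            if ch in FLAGGED_CHANNELS]
    print(term, "->", hits or "no flagged channels in top results")
```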

Two anti-China videos showing up in search results were published by a group called The BL (The Beauty of Life), which Facebook previously linked to the Falun Gong, a Chinese spiritual movement that was banned by the Chinese Communist Party in 1999 and has protested against the regime ever since. They jostled for views with pro-China videos posted by Western YouTubers whose work has previously been promoted by China’s Ministry of Foreign Affairs. Similar search results were visible in the US, Canada, and the UK. WIRED also found signs that viewing numbers for pro-China videos are being artificially boosted through the use of fake news websites.

This flurry of propaganda videos was first spotted earlier this month by John Scott-Railton, a researcher at the University of Toronto’s research laboratory, Citizen Lab. On February 5, Scott-Railton found that after he’d watched skating and curling videos, YouTube automatically played a video by a pro-China YouTube account. “I found myself on a slippery slide from skating and curling into increasingly targeted propaganda,” he says. These videos no longer appeared in autoplay by February 11, when WIRED conducted its analysis. But the way similar videos still dominate YouTube search results suggests the platform is at risk of letting such campaigns hijack the Olympics.

YouTube did not respond to a request to comment on why content used as propaganda to promote or deride China was being pushed to the top of Olympics search results, nor did the company say if those behind the videos had violated its terms of service by using fake websites to inflate their views.

A common theme in the pro-Beijing propaganda videos is the 2019 decision by US-born skier Eileen Gu to compete for China at the Winter Olympics. A video titled “USA’s Boycott FAILURE … Eileen Gu Wins Gold” by YouTuber Jason Lightfoot is the top result for the search term “Beijing,” with 54,000 views.

The US and Canada were among the countries that took part in a diplomatic boycott of the Beijing Winter Olympics. In Canada, that same video by Jason Lightfoot also showed up for users searching for “Olympics 2022” and “Winter Olympics,” although much further down, in 26th and 33rd place. In the video, Lightfoot says Western media “can’t take what Eileen Gu represents … someone who has chosen China over the American dream.”

In another video, which has more than 400,000 views, American YouTuber Cyrus Janssen also discusses why Gu chose to represent China. The video, which is the fifth result for the search term “Beijing,” details Gu’s career before referencing the high rates of anti-Asian hate crime in the US, a subject that has also been covered by mainstream American media outlets.

The World Needs Deepfake Experts to Stem This Chaos

Recently the military coup government in Myanmar added serious allegations of corruption to a set of existing spurious cases against Burmese leader Aung San Suu Kyi. These new charges build on the statements of a prominent detained politician that were first released in a March video that many in Myanmar suspected of being a deepfake.

In the video, the political prisoner’s voice and face appear distorted and unnatural as he makes a detailed claim about providing gold and cash to Aung San Suu Kyi. Social media users and journalists in Myanmar immediately questioned whether the statement was real. The incident illustrates a problem that will only get worse: as actual deepfakes get better, people become more willing to dismiss real footage as a deepfake. What tools and skills will be available to investigate both types of claims, and who will use them?

In the video, Phyo Min Thein, the former chief minister of Myanmar’s largest city, Yangon, sits in a bare room, apparently reading from a statement. His voice sounds odd and unlike his normal speech, his face is static, and in the poor-quality version that first circulated, his lips look out of sync with his words. Seemingly everyone wanted to believe it was a fake. Screenshotted results from an online deepfake detector spread rapidly, showing a red box around the politician’s face and an assertion, with more than 90 percent confidence, that the confession was a deepfake. Burmese journalists lacked the forensic skills to make a judgment. Past actions by the state and present actions by the military gave ample cause for suspicion: government spokespeople have shared staged images targeting the Rohingya ethnic group, while military coup organizers have denied that social media evidence of their killings could be real.

But was the prisoner’s “confession” really a deepfake? Along with deepfake researcher Henry Ajder, I consulted deepfake creators and media forensics specialists. Some noted that the video was sufficiently low-quality that the mouth glitches people saw were as likely to be artifacts from compression as evidence of deepfakery. Detection algorithms are also unreliable on low-quality compressed video. His unnatural-sounding voice could be a result of reading a script under extreme pressure. If it is a fake, it’s a very good one, because his throat and chest move at key moments in sync with words. The researchers and makers were generally skeptical that it was a deepfake, though not certain. At this point it is more likely to be what human rights activists like myself are familiar with: a coerced or forced confession on camera. Additionally, the substance of the allegations should not be trusted given the circumstances of the military coup unless there is a legitimate judicial process.
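
One way to make the compression caveat concrete is to run whatever detector you are relying on against both the original frames and a deliberately re-compressed copy, and see how much the score moves. The sketch below assumes a hypothetical `fake_probability` function standing in for any off-the-shelf detector; only the OpenCV frame handling is real.

```python
# Hedged sketch of why compression matters when reading detector scores.
# `fake_probability` is a hypothetical stand-in for any deepfake detector;
# the point is only that re-compressing the same frames can shift its output.
import cv2

def frames(path, every_n=30):
    """Yield every Nth frame from a video file."""
    cap = cv2.VideoCapture(path)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            yield frame
        i += 1
    cap.release()

def recompress(frame, quality=25):
    """Simulate the heavy JPEG-style compression of a re-shared clip."""
    ok, buf = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def compare(path, fake_probability):
    for frame in frames(path):
        clean = fake_probability(frame)              # hypothetical detector call
        degraded = fake_probability(recompress(frame))
        print(f"clean={clean:.2f}  recompressed={degraded:.2f}")

# Large gaps between the two columns are a warning that a "90 percent
# deepfake" verdict may be measuring compression, not manipulation.
```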

Why does this matter? Regardless of whether the video is a forced confession or a deepfake, the results are most likely the same: words digitally or physically compelled out of a prisoner’s mouth by a coup d’état government. However, while the usage of deepfakes to create nonconsensual sexual images currently far outstrips political instances, deepfake and synthetic media technology is rapidly improving, proliferating, and commercializing, expanding the potential for harmful uses. The case in Myanmar demonstrates the growing gap between the capabilities to make deepfakes, the opportunities to claim a real video is a deepfake, and our ability to challenge that.

It also illustrates the challenges of having the public rely on free online detectors without understanding the strengths and limitations of detection or how to second-guess a misleading result. Deepfake detection is still an emerging technology, and a detection tool that works on one approach often does not work on another. We must also be wary of counter-forensics, where someone deliberately takes steps to confuse a detection approach. And it’s not always possible to know which detection tools to trust.

How do we avoid conflicts and crises around the world being blindsided by deepfakes and supposed deepfakes?

We should not be turning ordinary people into deepfake spotters, parsing the pixels to discern truth from falsehood. Most people will do better relying on simpler approaches to media literacy, such as the SIFT method, which emphasize checking other sources or tracing the original context of videos. In fact, encouraging people to play amateur forensics expert can send them down the conspiracy rabbit hole of distrust in images.