Elon Musk Has Fired Twitter’s ‘Ethical AI’ Team

As more and more problems with AI have surfaced, including biases around race, gender, and age, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such issues.

Twitter’s META unit (short for Machine Learning Ethics, Transparency, and Accountability) went further than most, publishing details of problems with the company’s AI systems and allowing outside researchers to probe its algorithms for new issues.

Last year, after Twitter users noticed that a photo-cropping algorithm seemed to favor white faces when choosing how to trim images, Twitter took the unusual step of letting its META unit publish details of the bias it uncovered. The group also launched one of the first ever “bias bounty” contests, which let outside researchers test the algorithm for other problems. Last October, the team, led by Rumman Chowdhury, also published details of unintentional political bias on Twitter, showing that right-leaning news sources were, in fact, promoted more than left-leaning ones.

Many outside researchers saw the layoffs as a blow, not just for Twitter but for efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online disinformation, wrote on Twitter. 

“The META team was one of the only good case studies of a tech company running an AI ethics group that interacts with the public and academia with substantial credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.

Alkhatib says Chowdhury is incredibly well thought of within the AI ethics community and her team did genuinely valuable work holding Big Tech to account. “There aren’t many corporate ethics teams worth taking seriously,” he says. “This was one of the ones whose work I taught in classes.”

Mark Riedl, a professor studying AI at Georgia Tech, says the algorithms that Twitter and other social media giants use have a huge impact on people’s lives, and need to be studied. “Whether META had any impact inside Twitter is hard to discern from the outside, but the promise was there,” he says.

Riedl adds that letting outsiders probe Twitter’s algorithms was an important step toward more transparency and understanding of issues around AI. “They were becoming a watchdog that could help the rest of us understand how AI was affecting us,” he says. “The researchers at META had outstanding credentials with long histories of studying AI for social good.”

As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. Many different algorithms affect the way information is surfaced, and they are difficult to understand without the real-time data they are fed in the form of tweets, views, and likes.

The idea that there is one algorithm with explicit political leaning might oversimplify a system that can harbor more insidious biases and problems. Uncovering these is precisely the kind of work that Twitter’s META group was doing. “There aren’t many groups that rigorously study their own algorithms’ biases and errors,” says Alkhatib at the University of San Francisco. “META did that.” And now, it doesn’t.

Dumbed Down AI Rhetoric Harms Everyone

When the European Union Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and responses to it underscore democracies’ confusing rhetoric on AI.

Over the past decade, high-level stated goals for regulating AI have often conflicted with the specifics of regulatory proposals, and neither clearly articulates what desirable end states should look like. Coherent and meaningful progress toward internationally attractive democratic AI regulation, even as it varies from country to country, begins with resolving the discourse’s many contradictions and crude characterizations.

The EU Commission has touted its proposal as an AI regulation landmark. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

This is certainly better than the stagnation of many national governments, especially the US, on rules of the road for companies, government agencies, and other institutions. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or operating buses in Málaga, Spain.

But casting the EU’s regulation as “leading” simply because it is first masks the proposal’s many flaws. This kind of rhetorical leap is one of the first challenges facing democratic AI strategy.

Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists on the harms facial recognition can inflict on marginalized communities and grave mass-surveillance risks.

The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.

The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.

The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander from the nonprofit organization European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that on the surface appears to restrict facial recognition use but in fact has many broad carve-outs.

Humans Need to Create Interspecies Money to Save the Planet

A new form of digital currency for animals, trees, and other wildlife (no, not like Dogecoin) would help protect biodiversity and bend technology back to nature.