Sex, Drugs, and AI Mickey Mouse

On January 1, Mike Neville gave Midjourney the following prompt: “Steamboat Willie drawn in a vintage Disney style, black and white. He is dripping all over with white gel.”

There’s no polite way to describe what this prompt conjured from the AI image generator. It looks very much like Mickey Mouse is drenched in ejaculate.

At the start of every year, a crop of cultural works enters the public domain in the United States. When copyright expires on particularly beloved characters, people get excited, and this year was especially eagerly anticipated. An early version of Mickey Mouse, colloquially known as Steamboat Willie, entered the public domain in 2024 after nearly a century of rigorously enforced copyright protection. Within days, an explosion of homebrewed Steamboat Willie art hit the internet, including a horror movie trailer, a meme coin, and, of course, a glut of AI-generated Willies. Some are G-rated. Others, like “Creamboat Willie,” are decidedly not. (Willie doing drugs is another popular theme.)

While a contingent of the people sharing naughty Willie images were simply goofing around, others had surprisingly sober-minded intentions. Neville, an art director who posted his image on social media using the handle “Olivia Mutant-John,” has a lively sense of humor, but his experiment wasn’t solely a scatological joke. “My interest in generating the assets was to explore copyright thresholds and where the tools are currently,” he says. He’d noticed that it was easy to find examples of copyrighted characters on popular image-generating tools (a point also recently made by AI scientist Gary Marcus, who posted AI-generated depictions of SpongeBob SquarePants as an example) and wanted to see how far he could push an image generator now that Steamboat Willie was in the public domain.

Neville isn’t the only person conducting AI Willie experiments with copyright on his mind. Pierre-Carl Langlais, head of research at the AI data research firm OpSci, created a fine-tuned version of Stable Diffusion he called “Mickey-1928” based on 96 public domain stills of Mickey Mouse from the 1928 films Steamboat Willie, Plane Crazy, and Gallopin’ Gaucho. “It’s a political stance,” he says.

Langlais firmly believes that people should be paying closer attention to where AI tools get their training data; to that end, he’s working on several separate projects focused on creating models that train exclusively on public domain works. He whipped up Mickey-1928 in a matter of hours, because it’s essentially a filter laid atop Stable Diffusion, not a model built on a totally custom data set. (That would be a far more labor-intensive project.)
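The article doesn’t spell out Langlais’ exact training recipe, but a fine-tune like Mickey-1928 can be approximated with Hugging Face’s diffusers library: freeze the VAE and text encoder of a pretrained Stable Diffusion checkpoint and continue training only the UNet on a handful of captioned stills. What follows is a minimal sketch under those assumptions; the base model ID, the captions, and the stand-in dataset are illustrative, not details from the article.

```python
# Minimal sketch, assuming the Hugging Face diffusers library and the
# standard text-to-image fine-tuning objective. The base model ID,
# captions, and stand-in dataset below are illustrative assumptions.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from diffusers import DDPMScheduler, StableDiffusionPipeline

base = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(base)
unet, vae = pipe.unet, pipe.vae
text_encoder, tokenizer = pipe.text_encoder, pipe.tokenizer
noise_scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")

vae.requires_grad_(False)           # keep the autoencoder frozen
text_encoder.requires_grad_(False)  # keep the text encoder frozen
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

# Stand-in for ~96 preprocessed public-domain stills: 512x512 tensors
# paired with captions. Real code would load the actual frames.
dataset = [{"pixel_values": torch.randn(3, 512, 512),
            "caption": "Mickey Mouse, 1928, black and white"}
           for _ in range(8)]

for batch in DataLoader(dataset, batch_size=2, shuffle=True):
    # Encode images to latents, add noise at a random timestep, and
    # train the UNet to predict that noise (the denoising objective).
    latents = vae.encode(batch["pixel_values"]).latent_dist.sample()
    latents = latents * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                      (latents.shape[0],))
    noisy_latents = noise_scheduler.add_noise(latents, noise, t)
    ids = tokenizer(batch["caption"], padding="max_length",
                    max_length=tokenizer.model_max_length,
                    truncation=True, return_tensors="pt").input_ids
    text_embeds = text_encoder(ids)[0]
    loss = F.mse_loss(unet(noisy_latents, t, text_embeds).sample, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because only the UNet is nudged toward the 1928 aesthetic while the pretrained base weights do the heavy lifting, a run like this can finish in hours, which is consistent with Langlais’ description of the project as a thin layer over Stable Diffusion rather than a from-scratch model.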

Google’s App Store Ruled an Illegal Monopoly, as a Jury Sides With Epic Games

More bad news for Google could come in mid-2024 when US district judge Amit Mehta in Washington, DC, is expected to issue his ruling on whether Google has unlawfully maintained its monopoly over web search. Testimony in that case, which was brought by the US Department of Justice and attorneys general for nearly every US state and territory, concluded last month.

A similar case two years ago had not gone too well for Epic. In Epic v. Apple, a federal judge in Oakland, California, ordered that Apple make just one change to its App Store practices. The judge found that most of the other Apple practices that Epic viewed as anticompetitive were justified, because the iPhone maker needed to recoup its investment in developing the app marketplace. Apple still has not had to comply as it awaits the Supreme Court’s decision early next year about whether to review the case.

Google hasn’t said much about why it chose to have a jury rather than a judge decide its fate in the trial that concluded today, though it tried unsuccessfully to reverse course on the eve of jury selection.

Judge James Donato also tried to prevent the case from even going to trial, several times ordering Epic and Google to attempt to settle instead. In a last-second push, Google CEO Sundar Pichai and Epic CEO Tim Sweeney met for an hour on December 7 but failed to reach a deal, according to a court filing.

Google previously agreed to settle with as many as 48,000 app developers but without making major changes to its business practices. It also settled with a group of consumers and attorneys general for all 50 US states. Details of the latter settlement had not been published, pending the verdict in the Epic trial.

‘Shut Rivals Off’

In closing arguments today, Gary Bornstein, an attorney for Epic, told jurors that Google’s Android operating system was the only choice for smartphone makers, because Apple keeps iOS to itself and there aren’t any viable alternatives. Google used that power with device makers and wireless carriers who sell phones to ensure they promoted the Play store, he said, often more than they encouraged the lesser-known alternatives.

Google binds app developers who sell digital items in the Play store to use its billing system and pockets up to 30 percent of sales. The search giant also paid developers millions of dollars not to pursue alternatives to Play, Epic alleged.

The EU Just Passed Sweeping New Rules to Regulate AI

Over the two years lawmakers have been negotiating the rules agreed today, AI technology and the leading concerns about it have dramatically changed. When the AI Act was conceived in April 2021, policymakers were worried about opaque algorithms deciding who would get a job, be granted refugee status, or receive social benefits. By 2022, there were examples of AI actively harming people. In a Dutch scandal, decisions made by algorithms were linked to families being forcibly separated from their children, while students studying remotely alleged that AI systems discriminated against them based on the color of their skin.

Then, in November 2022, OpenAI released ChatGPT, dramatically shifting the debate. The leap in AI’s flexibility and popularity triggered alarm in some AI experts, who drew hyperbolic comparisons between AI and nuclear weapons.

That discussion manifested in the AI Act negotiations in Brussels as a debate about whether the makers of so-called foundation models, such as the one behind ChatGPT, companies like OpenAI and Google, should be considered the root of potential problems and regulated accordingly, or whether new rules should instead focus on the companies using those foundation models to build new AI-powered applications, such as chatbots or image generators.

Representatives of Europe’s generative AI industry expressed caution about regulating foundation models, saying it could hamper innovation among the bloc’s AI startups. “We cannot regulate an engine devoid of usage,” Arthur Mensch, CEO of French AI company Mistral, said last month. “We don’t regulate the C [programming] language because one can use it to develop malware. Instead, we ban malware.” Mistral’s 7B foundation model would be exempt under the rules agreed today because the company is still in the research and development phase, Carme Artigas, Spain’s secretary of state for digitalization and artificial intelligence, said at the press conference.

The major point of disagreement during the final discussions that ran late into the night twice this week was whether law enforcement should be allowed to use facial recognition or other types of biometrics to identify people either in real time or retrospectively. “Both destroy anonymity in public spaces,” says Daniel Leufer, a senior policy analyst at digital rights group Access Now. Real-time biometric identification can identify a person standing in a train station right now using live security camera feeds, he explains, while “post” or retrospective biometric identification can figure out that the same person also visited the train station, a bank, and a supermarket yesterday, using previously banked images or video.
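To make Leufer’s distinction concrete, here is a toy sketch, not a description of any real deployment: both modes compare face embeddings, and the only difference is whether the probe is a live frame checked against a watchlist or an archive of stored footage searched for one person’s trail. The embedding vectors, names, and threshold are all hypothetical stand-ins.

```python
# Toy illustration of real-time vs. retrospective biometric ID; all
# data and names are hypothetical, and plain numpy vectors stand in
# for face embeddings produced by some recognition model.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

MATCH_THRESHOLD = 0.8  # arbitrary for the sketch

def realtime_id(live_face, watchlist):
    """Real-time: match a face on a live feed against a watchlist, now."""
    return [name for name, ref in watchlist.items()
            if cosine_sim(live_face, ref) >= MATCH_THRESHOLD]

def retrospective_id(target_face, archive):
    """Retrospective ("post") ID: search previously stored footage to
    reconstruct where the same person has already been."""
    return [(place, when) for place, when, face in archive
            if cosine_sim(target_face, face) >= MATCH_THRESHOLD]

# Example with random vectors standing in for embeddings.
rng = np.random.default_rng(0)
person = rng.normal(size=128)
watchlist = {"person-a": person, "person-b": rng.normal(size=128)}
archive = [("train station", "yesterday 09:14", person),
           ("bank", "yesterday 10:02", person)]
print(realtime_id(person, watchlist))     # ['person-a']
print(retrospective_id(person, archive))  # both archived sightings
```

The same matching code powers both modes; what changes, and what worries campaigners like Leufer, is the data it is pointed at: a single live moment versus a persistent record of someone’s movements.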

Leufer said he was disappointed by the “loopholes” for law enforcement that appeared to have been built into the version of the act finalized today.

European regulators’ slow response to the rise of social media loomed over the discussions. Almost 20 years elapsed between Facebook’s launch and the Digital Services Act, the EU rulebook designed to protect human rights online, taking effect this year. In that time, the bloc was forced to deal with the problems created by US platforms while proving unable to foster smaller European challengers. “Maybe we could have prevented [the problems] better by earlier regulation,” Brando Benifei, one of two lead negotiators for the European Parliament, told WIRED in July. AI technology is moving fast. But it will still be many years before it’s possible to say whether the AI Act is more successful in containing the downsides of Silicon Valley’s latest export.

OpenAI Cofounder Reid Hoffman Gives Sam Altman a Vote of Confidence

Hoffman and others said that there’s no need to pause development of AI. He called that drastic measure, for which some AI researchers have petitioned, foolish and destructive. Hoffman identified himself as a rational “accelerationist”: someone who knows to slow down when driving around a corner but who, presumably, is happy to speed up when the road ahead is clear. “I recommend everyone come join us in the optimist club, not because it’s utopia and everything works out just fine, but because it can be part of an amazing solution,” he said. “That’s what we’re trying to build towards.”

Mitchell and Buolamwini, who is artist-in-chief and president of the AI harms advocacy group Algorithmic Justice League, said that relying on companies’ promises to mitigate bias and misuse of AI would not be enough. In their view, governments must make clear that AI systems cannot be allowed to undermine people’s right to fair treatment or their humanity. “Those who stand to be exploited or extorted, even exterminated” need to be protected, Buolamwini said, adding that systems like lethal drones should be stopped. “We’re already in a world where AI is dangerous,” she said. “We have AI as the angels of death.”

Applications such as weaponry are far from OpenAI’s core focus on aiding coders, writers, and other professionals. The company’s terms prohibit its tools from being used for military and warfare purposes, although OpenAI’s primary backer and enthusiastic customer Microsoft has a sizable business with the US military. But Buolamwini suggested that companies developing business applications deserve no less scrutiny. As AI takes over mundane tasks such as composition, companies must be ready to reckon with the social consequences of a world that may offer workers fewer meaningful opportunities to learn the basics of a job, fundamentals that may turn out to be vital to becoming highly skilled. “What does it mean to go through that process of creation, finding the right word, figuring out how to express yourself, and learning something in the struggle to do it?” she said.


Fei-Fei Li, a Stanford University computer scientist who runs the school’s Institute for Human-Centered Artificial Intelligence, said the AI community has to stay focused on the technology’s impact on people, from individual dignity to large societies. “I should start a new club called the techno-humanist,” she said. “It’s too simple to say, ‘Do you want to accelerate or decelerate?’ We should talk about where we want to accelerate, and where we should slow down.”

Li is one of the pioneers of modern AI, having developed the influential computer vision dataset known as ImageNet. Would OpenAI want a seemingly balanced voice like hers on its new board? OpenAI board chair Bret Taylor did not respond to a request for comment. But if the opportunity arose, Li said, “I will carefully consider that.”

Anduril’s New Drone Killer Is Locked on to AI-Powered Warfare

After Palmer Luckey founded Anduril in 2017, he promised it would be a new kind of defense contractor, inspired by hacker ingenuity and Silicon Valley speed.

The company’s latest product, a jet-powered, AI-controlled combat drone called Roadrunner, is inspired by the grim reality of modern conflict, especially in Ukraine, where large numbers of cheap, agile suicide drones have proven highly deadly over the past year.

“The problem we saw emerging was this very low-cost, very high-quantity, increasingly sophisticated and advanced aerial threat,” says Christian Brose, chief strategy officer at Anduril.

This kind of aerial threat has come to define the conflict in Ukraine, where Ukrainian and Russian forces are locked in an arms race involving large numbers of cheap drones capable of loitering autonomously before attacking a target by delivering an explosive payload. These systems, which include US-made Switchblades on the Ukrainian side, can evade jamming and ground defenses and may need to be shot down by either a fighter jet or a missile that costs many times more to use.

Roadrunner is a modular, twin-jet aircraft roughly the size of a patio heater that can operate at high (subsonic) speeds, can take off and land vertically, and can return to base if it isn’t needed, according to Anduril. The version designed to target drones or even missiles can loiter autonomously looking for threats.

Brose says the system can already operate with a high degree of autonomy, and it is designed so that the software can be upgraded with new capabilities. But the system requires a human operator to make decisions on the use of deadly force. “Our driving belief is that there has to be human agency for identifying and classifying a threat, and there has to be human accountability for any action that gets taken against that threat,” he says.
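As a rough illustration of the design principle Brose describes, consider a control loop in which software detects and classifies threats autonomously but nothing is engaged without an explicit human decision. Every name in this sketch is a hypothetical stand-in, not Anduril’s actual architecture.

```python
# Hedged sketch of a human-in-the-loop engagement gate: the software
# proposes, a person disposes. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    classification: str  # e.g. "unknown" or "hostile-drone"
    confidence: float

def propose_engagements(tracks, min_confidence=0.9):
    # Autonomy ends here: the system only *proposes* candidate targets.
    return [t for t in tracks
            if t.classification == "hostile-drone"
            and t.confidence >= min_confidence]

def engagement_loop(tracks, human_authorizes):
    # `human_authorizes` stands in for the operator; no action is taken
    # against a threat without an explicit human decision.
    for track in propose_engagements(tracks):
        if human_authorizes(track):
            print(f"engaging {track.track_id} (human-authorized)")
        else:
            print(f"continuing to track {track.track_id}")

# Example: an operator callback that approves only very confident proposals.
engagement_loop(
    [Track("T1", "hostile-drone", 0.97), Track("T2", "unknown", 0.99)],
    human_authorizes=lambda t: t.confidence > 0.95,
)
```

The point of structuring the software this way, as Brose puts it, is that accountability for lethal force stays with a person even as detection, classification, and flight become more autonomous.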

Samuel Bendett, an expert on the military use of drones at the Center for a New American Security, a think tank, says Roadrunner could be used in Ukraine to intercept Iranian-made Shahed drones, which have become an effective way for Russian forces to strike stationary Ukrainian targets.

Bendett says both Russian and Ukrainian forces are now using drones in a complete “kill chain,” with disposable consumer drones being used for target acquisition and then either short- or long-range suicide drones being used to attack. “There is a lot of experimentation taking place in Ukraine, on both sides,” Bendett says. “And I’m assuming that a lot of US [military] innovations are going to be built with Ukraine in mind.”