The viral, AI-generated images of Donald Trump’s arrest you may be seeing on social media are definitely fake. But some of these photorealistic creations are pretty convincing. Others look more like stills from a video game or a lucid dream. A Twitter thread by Eliot Higgins, a founder of Bellingcat, that shows Trump getting swarmed by synthetic cops, running around on the lam, and picking out a prison jumpsuit was viewed over 3 million times on the social media platform.
What does Higgins think viewers can do to tell fake AI images, like the ones in his post, apart from real photographs that may come out of the former president’s potential arrest?
“Having created a lot of images for the thread, it’s apparent that it often focuses on the first object described—in this case, the various Trump family members—with everything around it often having more flaws,” Higgins said over email. Look outside of the image’s focal point. Does the rest of the image appear to be an afterthought?
Even though the newest versions of AI-image tools, like Midjourney (version 5 of which was used for the aforementioned thread) and Stable Diffusion, are making considerable progress, mistakes in the smaller details remain a common sign of fake images. As AI art grows in popularity, many artists point out that the algorithms still struggle to replicate the human body in a consistent, natural manner.
Looking at the AI images of Trump from the Twitter thread, the face looks fairly convincing in many of the posts, as do the hands, but his body proportions may look contorted or appear to melt into a nearby police officer. Though such flaws are obvious now, the algorithm may learn to avoid peculiar-looking body parts with more training and refinement.
Need another tell? Look for odd writing on the walls, clothing, or other visible items. Higgins points to messy text as a way to differentiate fake images from real photos. In the fake images of officers arresting Trump, for example, the police wear badges, hats, and other items that appear, at first glance, to have lettering on them. Upon closer inspection, the words are nonsensical.
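If you would rather not squint at every badge, one rough way to automate that check is to run optical character recognition over the image and see how many of the extracted words are real. The sketch below is a minimal, unofficial illustration that assumes the pytesseract and Pillow packages (and a Tesseract install); the image filename and word-list path are placeholders, and a high share of unrecognized words is only a hint, not proof of a fake.

```python
# Rough sketch: flag images whose visible "text" is mostly gibberish.
# Assumes pytesseract (plus the Tesseract binary) and Pillow are installed;
# "suspect.jpg" and the word-list path are placeholders.
import re

import pytesseract
from PIL import Image


def gibberish_ratio(image_path, wordlist_path="/usr/share/dict/words"):
    """Return the fraction of OCR'd tokens that are not dictionary words."""
    with open(wordlist_path) as f:
        vocab = {line.strip().lower() for line in f}

    text = pytesseract.image_to_string(Image.open(image_path))
    tokens = [t.lower() for t in re.findall(r"[A-Za-z]{3,}", text)]
    if not tokens:
        return 0.0  # no legible text found at all
    return sum(t not in vocab for t in tokens) / len(tokens)


print(f"Gibberish ratio: {gibberish_ratio('suspect.jpg'):.2f}")
```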
An additional way you can sometimes tell an image is generated by AI is by noticing over-the-top facial expressions. “I’ve also noticed that if you ask for expressions, Midjourney tends to render them in an exaggerated way, with skin creases from things like smiling being very pronounced,” Higgins said. The pained expression on Melania Trump’s face looks more like a re-creation of Edvard Munch’s The Scream or a still from some unreleased A24 horror movie than a snapshot from a human photographer.
Keep in mind that world leaders, celebrities, social media influencers, and anyone with large quantities of photos circulating online may appear more convincing in deepfaked photos than in AI-generated images of people with less of a visible internet presence. “It’s clear that the more famous a person is, the more images the AI has had to learn from,” Higgins said. “So very famous people are rendered extremely well, while less famous people are usually a bit wonky.” If you’d rather not make it easier for an algorithm to re-create your face, it might be worth thinking twice before posting a photo dump of selfies after a fun night out with friends. (Though it’s likely that the AI generators have already scraped your image data from the web.)
In the lead-up to the next US presidential election, what is Twitter’s policy about AI-generated images? The social media platform’s current policy reads, in part, “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (‘misleading media’).” Twitter carves out multiple exceptions for memes, commentary, and posts not created with the intention to mislead viewers.
Just a few years ago, it was almost unfathomable that the average person would soon be able to fabricate photorealistic deepfakes of world leaders at home. As AI images become harder to differentiate from the real deal, social media platforms may need to reevaluate their approach to synthetic content and attempt to find ways of guiding users through the complex and often unsettling world of generative AI.
With its uncanny ability to hold a conversation, answer questions, and write coherent prose, poetry, and code, the chatbot ChatGPT has forced many people to rethink the potential of artificial intelligence.
The startup that made ChatGPT, OpenAI, today announced a much-anticipated new version of the AI model at its core.
The new algorithm, called GPT-4, follows GPT-3, a groundbreaking text-generation model that OpenAI announced in 2020, which was later adapted to create ChatGPT last year.
The new model scores more highly on a range of tests designed to measure intelligence and knowledge in humans and machines, OpenAI says. It also makes fewer blunders and can respond to images as well as text.
However, GPT-4 suffers from the same problems that have bedeviled ChatGPT and cause some AI experts to be skeptical of its usefulness—including tendencies to “hallucinate” incorrect information, exhibit problematic social biases, and misbehave or assume disturbing personas when given an “adversarial” prompt.
“While they’ve made a lot of progress, it’s clearly not trustworthy,” says Oren Etzioni, a professor emeritus at the University of Washington and the founding CEO of the Allen Institute for AI. “It’s going to be a long time before you want any GPT to run your nuclear power plant.”
OpenAI provided several demos and data from benchmarking tests to show off GPT-4’s capabilities. The new model not only beats the passing score on the Uniform Bar Examination, which is used to qualify lawyers in many US states, but scores in the top 10 percent of human test takers.
It also scores more highly than GPT-3 on other exams designed to test knowledge and reasoning, in subjects including biology, art history, and calculus. And it gets better marks than any other AI language model on tests designed by computer scientists to gauge progress in such algorithms. “In some ways it’s more of the same,” Etzioni says. “But it’s more of the same in an absolutely mind-blowing series of advances.”
GPT-4 can also perform neat tricks seen before from GPT-3 and ChatGPT, like summarizing and suggesting edits to pieces of text. It can also do things its predecessors could not, including acting as a Socratic tutor that helps guide students toward correct answers and discussing the contents of photographs. For example, if provided a photo of ingredients on a kitchen counter, GPT-4 can suggest an appropriate recipe. If provided with a chart, it can explain the conclusions that can be drawn from it.
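At launch the image features were only demonstrated, so the following is just a sketch of how such a query might look, assuming access to GPT-4’s image input through OpenAI’s Python SDK; the model name, image URL, and prompt are placeholders rather than a confirmed interface.

```python
# Minimal sketch of an image-plus-text query, assuming the OpenAI Python SDK
# and an image-capable GPT-4 model; the model name and URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed name; image access was not public at launch
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What could I cook with these ingredients?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/kitchen-counter.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```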
“It definitely seems to have gained some abilities,” says Vincent Conitzer, a professor at CMU who specializes in AI and who has begun experimenting with the new language model. But he says it still makes errors, such as suggesting nonsensical directions or presenting fake mathematical proofs.
ChatGPT caught the public’s attention with a stunning ability to tackle many complex questions and tasks via an easy-to-use conversational interface. The chatbot does not understand the world as humans do and just responds with words it statistically predicts should follow a question.
Teach a robot to open a door, and it ought to unlock a lifetime of opportunities. Not so for one of Alphabet’s youngest subsidiaries, Everyday Robots. Just over a year after graduating from Alphabet’s X moonshot lab, the team that trained over a hundred wheeled, one-armed robots to squeegee cafeteria tables, separate trash and recycling, and yes, open doors, is shutting down as part of budget cuts spreading across the Google parent, a spokeswoman confirmed.
“Everyday Robots will no longer be a separate project within Alphabet,” says Denise Gamboa, director of marketing and communications for Everyday Robots. “Some of the technology and part of the team will be consolidated into existing robotics efforts within Google Research.”
The robotics venture is the latest failed bet for X, which in the past decade also spun out internet-beaming balloons (Loon) and power-generating kites (Makani) before deeming them commercially unviable. Other onetime X projects, such as Waymo (developing autonomous vehicles) and Wing (testing grocery delivery drones), motor on as companies within Alphabet, though their financial prospects remain mired in regulatory and technological challenges. Like Everyday Robots, those ventures harnessed novel technologies that showed impressive promise in trials but not rock-solid reliability.
Everyday Robots emerged from the rubble of at least eight robotics acquisitions by Google a decade ago. Google cofounders Larry Page and Sergey Brin expected machine learning would reshape robotics, and Page in particular wanted to develop a consumer-oriented robot, a former employee involved at the time says, speaking anonymously to discuss internal deliberations. By 2016, they put software entrepreneur Hans Peter Brøndmo in charge of a project then known as Help (and later, for a time, Moxie) to leverage machine learning to develop robots that could handle routine tasks and adapt to varying environments, the source says.
The team set up arm farms and playpens, where a fleet of robots would repeat the same task for months—like sorting rubbish. It was a brute-force attempt to generate data to train a machine learning model that could then imbue the robots with the know-how needed to use their cameras, arms, wheels, and fingerlike grips to interact with the world around them. The novelty was that it spared engineers the traditional robotics chore of coding specific instructions for every little scenario the machines might encounter. The idea largely worked for initial tasks. Google had Everyday Robots’ fleet help clean the search giant’s dining halls and check for untidy conference rooms mid-pandemic.
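In machine learning terms, that approach resembles behavior cloning: log what the robots see and what commands they execute, then train one network to map the former to the latter. The toy sketch below, written in PyTorch with made-up tensor shapes and random stand-in data, is only meant to show the shape of that pipeline, not Everyday Robots’ actual models or data.

```python
# Toy behavior-cloning sketch: learn a mapping from camera images to arm
# commands using logged demonstrations. Shapes and data are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Pretend log of 10,000 attempts: a 64x64 RGB image and a 7-D arm command each.
images = torch.randn(10_000, 3, 64, 64)
actions = torch.randn(10_000, 7)
loader = DataLoader(TensorDataset(images, actions), batch_size=256, shuffle=True)

policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(256), nn.ReLU(),
    nn.Linear(256, 7),  # predicted arm command
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for batch_images, batch_actions in loader:  # one pass over the logged data
    loss = nn.functional.mse_loss(policy(batch_images), batch_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```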
Last year, Everyday Robots demonstrated further progress with Google AI researchers. The project integrated a large language model similar to that underlying ChatGPT into the robotics system, enabling the mechanical helper, for example, to respond to someone saying that they are hungry by fetching a bag of chips for them. But Google and Everyday Robots stressed at the time that a roving butler at one’s beck and call remained far from consumer availability. Variations that seem trivial to humans, like the type of lighting in a room or the shape of the chips bag, could cause malfunctions.
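The general pattern is to prompt a language model to translate a request into a sequence of named robot skills, and it can be sketched in a few lines. The skill names, prompt, and model below are illustrative assumptions, not the actual Google and Everyday Robots system.

```python
# Sketch of the "language model as task planner" idea: ask an LLM to turn a
# request into an ordered list of robot skill calls. Skill names and the
# prompt are made up; this is not Google's actual robotics stack.
from openai import OpenAI

SKILLS = ["find(object)", "pick(object)", "go_to(location)", "hand_over()"]

client = OpenAI()
plan = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "You control a one-armed mobile robot. Available skills: "
            + ", ".join(SKILLS)
            + ". The user says: 'I'm hungry.' "
            "Reply only with an ordered list of skill calls."
        ),
    }],
)
print(plan.choices[0].message.content)
# A plausible reply: find(chips), pick(chips), go_to(user), hand_over()
```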
From its earliest days, Everyday Robots struggled with whether its mission was to pursue advanced research or deliver a product to market, the former employee says. It staffed up to over 200 employees, including people overseeing customer operations, teaching robots to dance, and tinkering away at the perfect design. Each of its robots likely cost tens of thousands of dollars, robotics experts estimate.
Those expenses were too much for Alphabet, whose more speculative “other bets” such as Everyday Robots and Waymo lost about $6.1 billion last year. Alphabet’s overall profit fell 21 percent last year to $60 billion as spending on Google ads slowed, and activist investors have been clamoring for the company to make cuts. On January 20, Alphabet announced it would lay off about 12,000 workers, 6 percent of its workforce. Everyday Robots was one of the few projects disbanded.
What a difference seven days makes in the world of generative AI.
Last week Satya Nadella, Microsoft’s CEO, was gleefully telling the world that the new AI-infused Bing search engine would “make Google dance” by challenging its long-standing dominance in web search.
The new Bing uses a little thing called ChatGPT—you may have heard of it—which represents a significant leap in computers’ ability to handle language. Thanks to advances in machine learning, it essentially figured out for itself how to answer all kinds of questions by gobbling up trillions of lines of text, much of it scraped from the web.
Google did, in fact, dance to Satya’s tune by announcing Bard, its answer to ChatGPT, and promising to use the technology in its own search results. Baidu, China’s biggest search engine, said it was working on similar technology.
But Nadella might want to watch where his company’s fancy footwork is taking it.
In demos Microsoft gave last week, Bing seemed capable of using ChatGPT to offer complex and comprehensive answers to queries. It came up with an itinerary for a trip to Mexico City, generated financial summaries, offered product recommendations that collated information from numerous reviews, and gave advice on whether an item of furniture would fit into a minivan by comparing dimensions posted online.
WIRED had some time during the launch to put Bing to the test, and while it seemed skilled at answering many types of questions, it was decidedly glitchy and even unsure of its own name. And as one keen-eyed pundit noticed, some of the results that Microsoft showed off were less impressive than they first seemed. Bing appeared to make up some information on the travel itinerary it generated, and it left out some details that no person would be likely to omit. The search engine also mixed up Gap’s financial results by mistaking gross margin for unadjusted gross margin—a serious error for anyone relying on the bot to perform what might seem the simple task of summarizing the numbers.
More problems have surfaced this week as the new Bing has been made available to more beta testers. The chatbot has reportedly argued with a user about what year it is and suffered an existential crisis when pushed to prove its own sentience. Google’s market cap dropped by a staggering $100 billion after someone noticed errors in answers generated by Bard in the company’s demo video.
Why are these tech titans making such blunders? It has to do with the weird way that ChatGPT and similar AI models really work—and the extraordinary hype of the current moment.
What’s confusing and misleading about ChatGPT and similar models is that they answer questions by making highly educated guesses. ChatGPT generates what it thinks should follow your question based on statistical representations of characters, words, and paragraphs. The startup behind the chatbot, OpenAI, honed that core mechanism to provide more satisfying answers by having humans provide positive feedback whenever the model generates answers that seem correct.
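Stripped of the human-feedback fine-tuning, that core mechanism is next-token prediction. The sketch below uses the small, open GPT-2 model from Hugging Face as a stand-in, since ChatGPT’s own weights are not public: score every candidate next token, turn the scores into probabilities, sample one, and repeat.

```python
# Minimal next-token sampling loop, with GPT-2 standing in for ChatGPT's
# (non-public) model: answers are built one predicted token at a time.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The new Bing search engine is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):  # generate 20 tokens, one at a time
    logits = model(ids).logits[:, -1, :]               # scores for every candidate next token
    probs = torch.softmax(logits, dim=-1)               # convert scores to probabilities
    next_id = torch.multinomial(probs, num_samples=1)   # pick one, weighted by probability
    ids = torch.cat([ids, next_id], dim=-1)

print(tokenizer.decode(ids[0]))
```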
ChatGPT can be impressive and entertaining, because that process can produce the illusion of understanding, which can work well for some use cases. But the same process will “hallucinate” untrue information, an issue that may be one of the most important challenges in tech right now.
The intense hype and expectation swirling around ChatGPT and similar bots heightens the danger. When well-funded startups, some of the world’s most valuable companies, and the most famous leaders in tech all say chatbots are the next big thing in search, many people will take it as gospel—spurring those who started the chatter to double down with more predictions of AI omniscience. And chatbots aren’t the only ones who can be led astray by pattern matching without fact-checking.
This week the world’s largest search companies leaped into a contest to harness a powerful new breed of “generative AI” algorithms.
Most notably Microsoft announced that it is rewiring Bing, which lags some way behind Google in terms of popularity, to use ChatGPT—the insanely popular and often surprisingly capable chatbot made by the AI startup OpenAI.
Unless you’ve been living in outer space for the past few months, you’ll know that people are losing their minds over ChatGPT’s ability to answer questions in strikingly coherent and seemingly insightful and creative ways. Want to understand quantum computing? Need a recipe for whatever’s in the fridge? Can’t be bothered to write that high school essay? ChatGPT has your back.
The all-new Bing is similarly chatty. Demos that the company gave at its headquarters in Redmond, and a quick test drive by WIRED’s Aarian Marshall, who attended the event, show that it can effortlessly generate a vacation itinerary, summarize the key points of product reviews, and answer tricky questions, like whether an item of furniture will fit in a particular car. It’s a long way from Microsoft’s hapless and hopeless Office assistant Clippy, which some readers may recall bothering them every time they created a new document.
Not to be outdone by Bing’s AI reboot, Google said this week that it would release a competitor to ChatGPT called Bard. (The name was chosen to reflect the creative nature of the algorithm underneath, one Googler tells me.) The company, like Microsoft, showed how the underlying technology could answer some web searches and said it would start making the AI behind the chatbot available to developers. Google is apparently unsettled by the idea of being upstaged in search, which provides the majority of parent Alphabet’s revenue. And its AI researchers may be understandably a little miffed since they actually developed the machine learning algorithm at the heart of ChatGPT, known as a transformer, as well as a key technique used to make AI imagery, known as diffusion modeling.
Last but by no means least in the new AI search wars is Baidu, China’s biggest search company. It joined the fray by announcing another ChatGPT competitor, Wenxin Yiyan (文心一言), or “Ernie Bot” in English. Baidu says it will release the bot after completing internal testing this March.
These new search bots are examples of generative AI, a trend fueled by algorithms that can generate text, craft computer code, and dream up images in response to a prompt. The tech industry might be experiencing widespread layoffs, but interest in generative AI is booming, and VCs are imagining whole industries being rebuilt around this new creative streak in AI.
Generative language tools like ChatGPT will surely change what it means to search the web, shaking up an industry worth hundreds of billions of dollars annually, by making it easier to dig up useful information and advice. A web search may become less about clicking links and exploring sites and more about leaning back and taking a chatbot’s word for it. Just as importantly, the underlying language technology could transform many other tasks too, perhaps leading to email programs that write sales pitches or spreadsheets that dig up and summarize data for you. To many users, ChatGPT also seems to signal a shift in AI’s ability to understand and communicate with us.