Google Is Finally Trying to Kill AI Clickbait

Google is taking action against algorithmically generated spam. The search engine giant just announced upcoming changes, including a revamped spam policy, designed in part to keep AI clickbait out of its search results.

“It sounds like it’s going to be one of the biggest updates in the history of Google,” says Lily Ray, senior director of SEO at the marketing agency Amsive. “It could change everything.”

In a blog post, Google claims the change will reduce “low-quality, unoriginal content” in search results by 40 percent. It will focus on reducing what the company calls “scaled content abuse,” which is when bad actors flood the internet with massive amounts of articles and blog posts designed to game search engines.

“A good example of it, which has been around for a little while, is the abuse around obituary spam,” says Google’s vice president of search, Pandu Nayak. Obituary spam is an especially grim type of digital piracy, where people attempt to make money by scraping and republishing death notices, sometimes on social platforms like YouTube. Recently, obituary spammers have started using artificial intelligence tools to increase their output, making the issue even worse. Google’s new policy, if enacted effectively, should make it harder for this type of spam to crop up in online searches.

This notably more aggressive approach to combating search spam takes specific aim at “domain squatting,” a practice in which scavengers purchase websites with name recognition to profit off their reputations, often replacing original journalism with AI-generated articles designed to manipulate search engine rankings. This type of behavior predates the AI boom, but with the rise of text-generation tools like ChatGPT, it’s become increasingly easy to churn out endless articles to game Google rankings.

The spike in domain squatting is just one of the issues that have tarnished Google Search’s reputation in recent years. “People can spin up these sites really easily,” says SEO expert Gareth Boyd, who runs the digital marketing firm Forte Analytica. “It’s been a big issue.” (Boyd admits that he has even created similar sites in the past, though he says he doesn’t do it anymore.)

In February, WIRED reported on several AI clickbait networks that used domain squatting as a strategy, including one that took the websites for the defunct indie women’s website The Hairpin and the shuttered Hong Kong-based pro-democracy tabloid Apple Daily and filled them with AI-generated nonsense. Another transformed the website of a small-town Iowa newspaper into a bizarro repository for AI blog posts on retail stocks. According to Google’s new policy, this type of behavior is now explicitly categorized by the company as spam.

In addition to domain squatting, Google’s new policy will also focus on eliminating “reputation abuse,” where otherwise trustworthy websites allow third-party sources to publish janky sponsored content or other digital junk. (Google’s blog post describes “payday loan reviews on a trusted educational website” as an example.) While the other parts of the spam policy take effect immediately, Google is giving 60 days’ notice before cracking down on reputation abuse, to give websites time to fall in line.

Nayak says the company has been working on this specific update since the end of last year. More broadly, the company has been working on ways to fix low-quality content in search, including AI-generated spam, since 2022. “We’ve been aware of the problem,” Nayak says. “It takes time to develop these changes effectively.”

Some SEO experts are cautiously optimistic that these changes could restore Google’s search efficacy. “It’s going to reinstate the way things used to be, hopefully,” says Ray. “But we have to see what happens.”

Air Canada Has to Honor a Refund Policy Its Chatbot Made Up

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline’s bereavement travel policy.

On the day Jake Moffatt’s grandmother died, Moffatt immediately visited Air Canada’s website to book a flight from Vancouver to Toronto. Unsure how Air Canada’s bereavement rates worked, Moffatt asked Air Canada’s chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada’s policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot’s advice and request a refund but was shocked that the request was rejected.

Moffatt tried for months to convince Air Canada that a refund was owed, sharing a screenshot from the chatbot that clearly claimed:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.

Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint with British Columbia’s Civil Resolution Tribunal.

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot’s misleading information because, Air Canada essentially argued, “the chatbot is a separate legal entity that is responsible for its own actions,” a court order said.

Experts told the Vancouver Sun that Moffatt’s case appeared to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.

Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Further, Rivers found that Moffatt had “no reason” to believe that one part of Air Canada’s website would be accurate and another would not.

Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” Rivers wrote.

In the end, Rivers ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars (about $482 USD) off the original fare of $1,640.36 CAD (about $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.

Air Canada told Ars it will comply with the ruling and considers the matter closed.

Air Canada’s Chatbot Appears to Be Disabled

When Ars visited Air Canada’s website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot.

Air Canada did not respond to Ars’ request to confirm whether the chatbot is still part of the airline’s online support offerings.

OpenAI’s Sora Turns AI Prompts Into Photorealistic Videos

We already know that OpenAI’s chatbots can pass the bar exam without going to law school. Now, just in time for the Oscars, a new OpenAI app called Sora hopes to master cinema without going to film school. For now a research product, Sora is going out to a few select creators and a number of security experts who will red-team it for safety vulnerabilities. OpenAI plans to make it available to all wannabe auteurs at some unspecified date, but it decided to preview it in advance.

Other companies, from giants like Google to startups like Runway, have already revealed text-to-video AI projects. But OpenAI says that Sora is distinguished by its striking photorealism—something I haven’t seen in its competitors—and its ability to produce clips of up to one minute, far longer than the brief snippets other models typically generate. The researchers I spoke to won’t say how long it takes to render all that video, but when pressed, they described it as more in the “going out for a burrito” ballpark than “taking a few days off.” If the hand-picked examples I saw are to be believed, the effort is worth it.

OpenAI didn’t let me enter my own prompts, but it shared four instances of Sora’s power. (None approached the purported one-minute limit; the longest was 17 seconds.) The first came from a detailed prompt that sounded like an obsessive screenwriter’s setup: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”

AI-generated video made with OpenAI’s Sora. Courtesy of OpenAI

The result is a convincing view of what is unmistakably Tokyo, in that magic moment when snowflakes and cherry blossoms coexist. The virtual camera, as if affixed to a drone, follows a couple as they slowly stroll through a streetscape. One of the passersby is wearing a mask. Cars rumble by on a riverside roadway to their left, and to the right shoppers flit in and out of a row of tiny shops.

It’s not perfect. Only when you watch the clip a few times do you realize that the main characters—a couple strolling down the snow-covered sidewalk—would have faced a dilemma had the virtual camera kept running. The sidewalk they occupy seems to dead-end; they would have had to step over a small guardrail to a weird parallel walkway on their right. Despite this mild glitch, the Tokyo example is a mind-blowing exercise in world-building. Down the road, production designers will debate whether it’s a powerful collaborator or a job killer. Also, the people in this video—who are entirely generated by a digital neural network—aren’t shown in close-up, and they don’t do any emoting. But the Sora team says that in other instances they’ve had fake actors showing real emotions.

The other clips are also impressive, notably one asking for “an animated scene of a short fluffy monster kneeling beside a red candle,” along with some detailed stage directions (“wide eyes and open mouth”) and a description of the desired vibe of the clip. Sora produces a Pixar-esque creature that seems to have DNA from a Furby, a Gremlin, and Sulley in Monsters, Inc. I remember that when the latter film came out, Pixar made a huge deal of how difficult it was to create the ultra-complex texture of a monster’s fur as the creature moved around. It took all of Pixar’s wizards months to get it right. OpenAI’s new text-to-video machine … just did it.

“It learns about 3D geometry and consistency,” says Tim Brooks, a research scientist on the project, of that accomplishment. “We didn’t bake that in—it just entirely emerged from seeing a lot of data.”

AI-generated video made with the prompt, “animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle. the art style is 3d and realistic, with a focus on lighting and texture. the mood of the painting is one of wonder and curiosity, as the monster gazes at the flame with wide eyes and open mouth. its pose and expression convey a sense of innocence and playfulness, as if it is exploring the world around it for the first time. the use of warm colors and dramatic lighting further enhances the cozy atmosphere of the image.” Courtesy of OpenAI

While the scenes are certainly impressive, the most startling of Sora’s capabilities are those that it has not been trained for. Powered by a version of the diffusion model used by OpenAI’s DALL-E 3 image generator, as well as the transformer-based engine of GPT-4, Sora does not merely churn out videos that fulfill the demands of the prompts, but does so in a way that shows an emergent grasp of cinematic grammar.

That translates into a flair for storytelling. Another video was created from a prompt for “a gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures.” Bill Peebles, another researcher on the project, notes that Sora created narrative thrust through its camera angles and timing. “There’s actually multiple shot changes—these are not stitched together, but generated by the model in one go,” he says. “We didn’t tell it to do that, it just automatically did it.”

AI-generated video made with the prompt “a gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures.” Courtesy of OpenAI

In another example I didn’t view, Sora was prompted to give a tour of a zoo. “It started off with the name of the zoo on a big sign, gradually panned down, and then had a number of shot changes to show the different animals that live at the zoo,” says Peebles. “It did it in a nice and cinematic way that it hadn’t been explicitly instructed to do.”

One feature in Sora that the OpenAI team didn’t show, and may not release for quite a while, is the ability to generate videos from a single image or a sequence of frames. “This is going to be another really cool way to improve storytelling capabilities,” says Brooks. “You can draw exactly what you have on your mind and then animate it to life.” OpenAI is aware that this feature also has the potential to produce deepfakes and misinformation. “We’re going to be very careful about all the safety implications for this,” Peebles adds.

OpenAI Gives ChatGPT a Memory

OpenAI says ChatGPT’s Memory is an opt-in feature from the start, and can be wiped at any point, either in settings or by simply instructing the bot to wipe it. Once the Memory setting is cleared, that information won’t be used to train its AI model. It’s unclear exactly how much of that personal data is used to train the AI while someone is chatting with the chatbot. And toggling off Memory does not mean you’ve totally opted out of having your chats train OpenAI’s model; that’s a separate opt-out.

The company also claims that it won’t store certain sensitive information in Memory. If you tell ChatGPT your password (don’t do this) or Social Security number (or this), the app’s Memory is thankfully forgetful. Jang also says OpenAI is still soliciting feedback on whether other personally identifiable information, like a user’s ethnicity, is too sensitive for the company to auto-capture.

“We think there are a lot of useful cases for that example, but for now we have trained the model to steer away from proactively remembering that information,” Jang says.

It’s easy to see how ChatGPT’s Memory function could go awry—instances where a user might have forgotten they once asked the chatbot about a kink, or an abortion clinic, or a nonviolent way to deal with a mother-in-law, only to be reminded of it or have others see it in a future chat. How ChatGPT’s Memory handles health data is also something of an open question. “We steer ChatGPT away from remembering certain health details but this is still a work in progress,” says OpenAI spokesperson Niko Felix. In this way ChatGPT is singing the same old song about the internet’s permanence, just in a new era: Look at this great new Memory feature, until it’s a bug.

OpenAI is also not the first entity to toy with memory in generative AI. Google has emphasized “multi-turn” technology in Gemini 1.0, its own LLM. This means you can interact with Gemini Pro using a single-turn prompt—one back-and-forth between the user and the chatbot—or have a multi-turn, continuous conversation in which the bot “remembers” the context from previous messages.
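To make the distinction concrete, here is a minimal, self-contained Python sketch of the difference between a single-turn prompt and a multi-turn conversation. It does not use Gemini’s actual API; call_model is a hypothetical stand-in for any LLM endpoint, and the point is simply that a multi-turn chat “remembers” by resending the accumulated message history with every request.

```python
from typing import Dict, List

def call_model(messages: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in for an LLM endpoint; a real service would
    generate a reply conditioned on every message it receives."""
    return f"(reply conditioned on {len(messages)} message(s) of context)"

# Single-turn: only the latest user message is sent, so the model
# knows nothing about what was said before.
print(call_model([{"role": "user", "content": "Recommend a restaurant."}]))

# Multi-turn: the full running history is resent with each request,
# which is what lets the bot "remember" earlier context.
history: List[Dict[str, str]] = []

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = call_model(history)  # the model sees the whole conversation so far
    history.append({"role": "assistant", "content": reply})
    return reply

chat("I'm vegetarian.")
print(chat("Recommend a restaurant for tonight."))  # context now includes the dietary preference
```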

An AI framework company called LangChain has been developing a Memory module that helps large language models recall previous interactions between an end user and the model. Giving LLMs a long-term memory “can be very powerful in creating unique LLM experiences—a chatbot can begin to tailor its responses toward you as an individual based on what it knows about you,” says Harrison Chase, cofounder and CEO of LangChain. “The lack of long-term memory can also create a grating experience. No one wants to have to tell a restaurant-recommendation chatbot over and over that they are vegetarian.”

This technology is sometimes referred to as “context retention” or “persistent context” rather than “memory,” but the end goal is the same: for the human-computer interaction to feel so fluid, so natural, that the user can easily forget what the chatbot might remember. This is also a potential boon for businesses deploying these chatbots that might want to maintain an ongoing relationship with the customer on the other end.

“You can think of these as just a number of tokens that are getting prepended to your conversations,” says Liam Fedus, an OpenAI research scientist. “The bot has some intelligence, and behind the scenes it’s looking at the memories and saying, ‘These look like they’re related; let me merge them.’ And that then goes on your token budget.”
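Fedus’s description implies a simple mental model: stored memories are just extra text spliced in ahead of the conversation, and they count against the context window like any other tokens. The Python sketch below is a speculative illustration of that idea, not OpenAI’s implementation; the crude merge check stands in for the “these look like they’re related” step he describes.

```python
# Speculative sketch of "memory as prepended tokens"; not OpenAI's code.
memories = ["User is vegetarian.", "User lives in Vancouver."]

def remember(new_fact: str) -> None:
    """Store a memory, merging it into an existing one if the two look related."""
    for i, fact in enumerate(memories):
        if new_fact.split()[:2] == fact.split()[:2]:  # crude "these look related" check
            memories[i] = fact.rstrip(".") + "; " + new_fact
            return
    memories.append(new_fact)

def build_prompt(conversation: str) -> str:
    """Prepend stored memories to the conversation before it goes to the model."""
    memory_block = "Known facts about the user:\n" + "\n".join(f"- {m}" for m in memories)
    # Every line of memory_block spends tokens from the same budget as the
    # conversation itself, which is why memory capacity is limited.
    return memory_block + "\n\n" + conversation

remember("User is allergic to peanuts.")  # merges with "User is vegetarian."
print(build_prompt("User: Recommend a restaurant for tonight."))
```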

Fedus and Jang say that ChatGPT’s memory is nowhere near the capacity of the human brain. And yet, in almost the same breath, Fedus explains that with ChatGPT’s memory, you’re limited to “a few thousand tokens.” If only.

Is this the hypervigilant virtual assistant that tech consumers have been promised for the past decade, or just another data-capture scheme that uses your likes, preferences, and personal data to better serve a tech company than its users? Possibly both, though OpenAI might not put it that way. “I think the assistants of the past just didn’t have the intelligence,” Fedus said, “and now we’re getting there.”

Will Knight contributed to this story.

AI Tools Like GitHub Copilot Are Rewiring Coders’ Brains. Yours May Be Next

Many people—like, say, journalists—are understandably antsy about what generative artificial intelligence might mean for the future of their profession. It doesn’t help that expert prognostications on the matter offer a confusing cocktail of wide-eyed excitement, trenchant skepticism, and dystopian despair.

Some workers are already living in one potential version of the generative AI future, though: computer programmers.

“Developers have arrived in the age of AI,” says Thomas Dohmke, CEO of GitHub. “The only question is, how fast do you get on board? Or are you going to be stuck in the past, on the wrong side of the ‘productivity polarity’?”

In June 2021, GitHub launched a preview version of a programming aid called Copilot, which uses generative AI to suggest how to complete large chunks of code as soon as a person starts typing. Copilot is now a paid tool and a smash hit. GitHub’s owner, Microsoft, said in its latest quarterly earnings that there are now 1.3 million paid Copilot accounts—a 30 percent increase over the previous quarter—and noted that 50,000 different companies use the software.

Dohmke says the latest usage data from Copilot shows that almost half of all the code produced by users is AI-generated. At the same time, he claims there is little sign that these AI programs can operate without human oversight. “There’s clear consensus from the developer community after using these tools that it needs to be a pair-programmer copilot,” Dohmke says.

Copilot’s power is in how it abstracts away complexity for a programmer trying to work through a problem, Dohmke says. He likens that to the way modern programming languages hide fiddly details that earlier, lower-level languages required coders to wrangle. Dohmke adds that younger programmers are particularly accepting of Copilot, and that it seems especially helpful in solving novice coding problems. (This makes sense if you consider that Copilot learned from reams of code posted online, where solutions to beginner problems outnumber examples of abstruse and rarified coding craft.)
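For readers who haven’t used it, the workflow looks roughly like this: the programmer types a comment or a function signature, and Copilot proposes a body as ghost text to accept, edit, or reject. The Python snippet below is a hypothetical illustration of that interaction, not output captured from Copilot itself.

```python
# What the programmer types: a signature and a docstring stating the intent.
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    # Everything below is the sort of body an AI pair-programmer might
    # suggest; the developer still has to review and accept it.
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([3.0, 1.0, 2.0]))        # 2.0
print(median([4.0, 1.0, 3.0, 2.0]))   # 2.5
```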

“We’re seeing the evolution of software development,” Dohmke says.

None of that means demand for developers’ labor won’t be altered by AI. GitHub research in collaboration with MIT shows that Copilot allowed coders faced with relatively simple tasks to complete their work, on average, 55 percent more quickly. That productivity gain suggests companies could get the same work done with fewer programmers, though they could also redirect the savings toward labor on other projects.

Even for non-coders, these findings—and the rapid uptake of Copilot—are potentially instructive. Microsoft is developing AI Copilots, as it calls them, designed to help write emails, craft spreadsheets, or analyze documents for its Office software. It even introduced a Copilot key to the latest Windows PCs, its first major keyboard button change in decades. Competitors like Google are building similar tools. GitHub’s success might be helping to drive this push to give everyone an AI workplace assistant.

“There’s good empirical evidence and data around the GitHub Copilot and the productivity stats around it,” Microsoft’s CEO, Satya Nadella, said on the company’s most recent earnings call. He added that he expects similar gains to be felt among users of Microsoft’s other Copilots. Microsoft has created a site where you can try its Copilot for Windows. I confess it isn’t clear to me how similar the tasks you might want to do on Windows are to the ones you do in GitHub Copilot, where you use code to achieve clear objectives.

There are other potential side effects of tools like GitHub Copilot besides job displacement. For example, increased reliance on automation might lead to more errors creeping into code. One recent study claimed to find evidence of such a trend—although Dohmke says that it reported only a general increase in mistakes since Copilot was introduced, not direct evidence that the AI helper was causing an increase in errors. While this is true, it seems fair to worry that less experienced coders might miss errors when relying on AI help, or that the overall quality of code might decrease thanks to autocomplete.

Given Copilot’s popularity, it won’t be long before we have more data on that question. Those of us who work in other jobs may soon find out whether we’re in for the same productivity gains as coders—and the corporate upheavals that come with them.