How to Launch a Custom Chatbot on OpenAI’s GPT Store

Get ready to share your custom chatbot with the whole world. Well, at least with other ChatGPT Plus subscribers.

OpenAI recently launched its GPT Store after delaying the project amid the chaos of CEO Sam Altman’s firing and reinstatement late in 2023.

While OpenAI’s GPT Store shares some similarities with smartphone app marketplaces, it currently functions more like a giant directory of tweaked ChatGPTs. As with OpenAI’s GPT-4 model and web browsing capabilities, only those who pay $20 a month for ChatGPT Plus can create and use “GPTs.” The GPT acronym in ChatGPT actually stands for “generative pre-trained transformer,” but in this context, the company is using GPT as a term for a unique version of ChatGPT with custom instructions and a little extra training data.

Curious about adding your AI creation to the marketplace? Here’s how to make your GPT public and some advice to help you get started with the GPT Store.

How to List Your Own GPT

Before you can add a custom chatbot to the GPT Store, you’ve got to make one. No specialized knowledge or weird coding language is required to get started. To learn more about the process, check out my previous article about GPTs, where I created Reece’s Replica by feeding 50 of my articles into the system as training data, so my bot could learn to mimic my phrasing and tone. Since this will be available to all ChatGPT Plus subscribers, remember that the custom data you upload could leak. Don’t upload any documents that contain sensitive information.
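For readers who would rather script a similar setup instead of clicking through the GPT builder, OpenAI also offers a separate Assistants API that serves as a rough programmatic analogue: you upload reference files and create an assistant that answers with them in mind. The sketch below is only an illustration of that analogue, not the GPT Store workflow itself; it assumes the official openai Python SDK, an OPENAI_API_KEY environment variable, and a hypothetical my_articles.pdf file, and parameter names may differ between SDK versions.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Upload a reference document for the assistant to draw on.
# Anything uploaded could surface in responses, so avoid sensitive files.
source = client.files.create(
    file=open("my_articles.pdf", "rb"),  # hypothetical file name
    purpose="assistants",
)

# Create an assistant that imitates the uploaded writing samples.
assistant = client.beta.assistants.create(
    name="Replica Sketch",
    instructions="Answer in the phrasing and tone of the uploaded articles.",
    model="gpt-4-turbo",
    tools=[{"type": "retrieval"}],  # older Assistants API versions; newer ones use file_search
    file_ids=[source.id],  # attaches the uploaded file in the v1 Assistants API
)

print(assistant.id)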

When you’re ready to publicly list your custom version of the popular chatbot, visit the ChatGPT homepage, choose Explore GPTs on the left side of the screen, then select My GPTs in the top right. Click on the pencil icon to edit the GPT you’d like to publish. After double-checking the potential output in the Preview section, click Save in the right corner, set it to publish to Everyone, and click Confirm.

Want to share the GPT with friends or coworkers without listing it in the GPT Store? Choose the Anyone with a link option.

The EU Just Passed Sweeping New Rules to Regulate AI

Over the two years lawmakers have been negotiating the rules agreed today, AI technology and the leading concerns about it have dramatically changed. When the AI Act was conceived in April 2021, policymakers were worried about opaque algorithms deciding who would get a job, be granted refugee status, or receive social benefits. By 2022, there were examples of AI actively harming people. In a Dutch scandal, decisions made by algorithms were linked to families being forcibly separated from their children, while students studying remotely alleged that AI systems discriminated against them based on the color of their skin.

Then, in November 2022, OpenAI released ChatGPT, dramatically shifting the debate. The leap in AI’s flexibility and popularity triggered alarm in some AI experts, who drew hyperbolic comparisons between AI and nuclear weapons.

That discussion manifested in the AI Act negotiations in Brussels as a debate about whether makers of so-called foundation models, such as the one behind ChatGPT, like OpenAI and Google, should be considered the root of potential problems and regulated accordingly, or whether new rules should instead focus on companies using those foundation models to build new AI-powered applications, such as chatbots or image generators.

Representatives of Europe’s generative AI industry expressed caution about regulating foundation models, saying it could hamper innovation among the bloc’s AI startups. “We cannot regulate an engine devoid of usage,” Arthur Mensch, CEO of French AI company Mistral, said last month. “We don’t regulate the C [programming] language because one can use it to develop malware. Instead, we ban malware.” Mistral’s 7B foundation model would be exempt under the rules agreed today because the company is still in the research and development phase, Carme Artigas, Spain’s Secretary of State for Digitalization and Artificial Intelligence, said at the press conference.

The major point of disagreement during the final discussions that ran late into the night twice this week was whether law enforcement should be allowed to use facial recognition or other types of biometrics to identify people either in real time or retrospectively. “Both destroy anonymity in public spaces,” says Daniel Leufer, a senior policy analyst at digital rights group Access Now. Real-time biometric identification can identify a person standing in a train station right now using live security camera feeds, he explains, while “post” or retrospective biometric identification can figure out that the same person also visited the train station, a bank, and a supermarket yesterday, using previously banked images or video.

Leufer said he was disappointed by the “loopholes” for law enforcement that appeared to have been built into the version of the act finalized today.

European regulators’ slow response to the rise of social media loomed over the discussions. Almost 20 years elapsed between Facebook’s launch and the Digital Services Act, the EU rulebook designed to protect human rights online, taking effect this year. In that time, the bloc was forced to deal with the problems created by US platforms while being unable to foster smaller European challengers. “Maybe we could have prevented [the problems] better by earlier regulation,” Brando Benifei, one of two lead negotiators for the European Parliament, told WIRED in July. AI technology is moving fast. But it will still be many years until it’s possible to say whether the AI Act is more successful in containing the downsides of Silicon Valley’s latest export.

OpenAI Cofounder Reid Hoffman Gives Sam Altman a Vote of Confidence

Hoffman and others said that there’s no need to pause development of AI. He called that drastic measure, for which some AI researchers have petitioned, foolish and destructive. Hoffman identified himself as a rational “accelerationist”: someone who knows to slow down when driving around a corner but who, presumably, is happy to speed up when the road ahead is clear. “I recommend everyone come join us in the optimist club, not because it’s utopia and everything works out just fine, but because it can be part of an amazing solution,” he said. “That’s what we’re trying to build towards.”

Mitchell and Buolamwini, who is artist-in-chief and president of the AI harms advocacy group Algorithmic Justice League, said that relying on company promises to mitigate bias and misuse of AI would not be enough. In their view, governments must make clear that AI systems cannot undermine people’s rights to fair treatment or humanity. “Those who stand to be exploited or extorted, even exterminated” need to be protected, Buolamwini said, adding that systems like lethal drones should be stopped. “We’re already in a world where AI is dangerous,” she said. “We have AI as the angels of death.”

Applications such as weaponry are far from OpenAI’s core focus on aiding coders, writers, and other professionals. The company’s terms prohibit its tools from being used for military and warfare purposes, although OpenAI’s primary backer and enthusiastic customer Microsoft has a sizable business with the US military. But Buolamwini suggested that companies developing business applications deserve no less scrutiny. As AI takes over mundane tasks such as composition, companies must be ready to reckon with the social consequences of a world that may offer workers fewer meaningful opportunities to learn the basics of a job, fundamentals that may turn out to be vital to becoming highly skilled. “What does it mean to go through that process of creation, finding the right word, figuring out how to express yourself, and learning something in the struggle to do it?” she said.

Fei-Fei Li, a Stanford University computer scientist who runs the school’s Institute for Human-Centered Artificial Intelligence, said the AI community has to be focused on its impacts on people, all the way from individual dignity to large societies. “I should start a new club called the techno-humanist,” she said. “It’s too simple to say, ‘Do you want to accelerate or decelerate?’ We should talk about where we want to accelerate, and where we should slow down.”

Li is one of the pioneers of modern AI, having created the landmark computer vision dataset known as ImageNet. Would OpenAI want a seemingly balanced voice like hers on its new board? OpenAI board chair Bret Taylor did not respond to a request for comment. But if the opportunity arose, Li said, “I will carefully consider that.”

Anduril’s New Drone Killer Is Locked on to AI-Powered Warfare

After Palmer Luckey founded Anduril in 2017, he promised it would be a new kind of defense contractor, inspired by hacker ingenuity and Silicon Valley speed.

The company’s latest product, a jet-powered, AI-controlled combat drone called Roadrunner, is inspired by the grim reality of modern conflict, especially in Ukraine, where large numbers of cheap, agile suicide drones have proven highly deadly over the past year.

“The problem we saw emerging was this very low-cost, very high-quantity, increasingly sophisticated and advanced aerial threat,” says Christian Brose, chief strategy officer at Anduril.

This kind of aerial threat has come to define the conflict in Ukraine, where Ukrainian and Russian forces are locked in an arms race involving large numbers of cheap drones capable of loitering autonomously before attacking a target by delivering an explosive payload. These systems, which include US-made Switchblades on the Ukrainian side, can evade jamming and ground defenses and may need to be shot down by either a fighter jet or a missile that costs many times more to use.

Roadrunner is a modular, twin-jet aircraft roughly the size of a patio heater that can operate at high (subsonic) speeds, can take off and land vertically, and can return to base if it isn’t needed, according to Anduril. The version designed to target drones or even missiles can loiter autonomously looking for threats.

Brose says the system can already operate with a high degree of autonomy, and it is designed so that the software can be upgraded with new capabilities. But the system requires a human operator to make decisions on the use of deadly force. “Our driving belief is that there has to be human agency for identifying and classifying a threat, and there has to be human accountability for any action that gets taken against that threat,” he says.

Samuel Bendett, an expert on the military use of drones at the Center for a New American Security, a think tank, says Roadrunner could be used in Ukraine to intercept Iranian-made Shahed drones, which have become an effective way for Russian forces to strike stationary Ukrainian targets.

Bendett says both Russian and Ukrainian forces are now using drones in a complete “kill chain,” with disposable consumer drones being used for target acquisition and then either short- or long-range suicide drones being used to attack. “There is a lot of experimentation taking place in Ukraine, on both sides,” Bendett says. “And I’m assuming that a lot of US [military] innovations are going to be built with Ukraine in mind.”

Sam Altman Officially Returns to OpenAI—With a New Board Seat for Microsoft

Sam Altman marked his formal return as CEO of OpenAI today in a company memo that confirmed changes to the company’s board, including a new nonvoting seat for the startup’s primary investor, Microsoft.

In a memo sent to staff and shared on OpenAI’s blog, Altman painted the chaos of the past two weeks, triggered by the board’s loss of trust in its CEO and marked by almost the entire staff threatening to quit, as a testament to the startup’s resilience rather than a sign of instability.

“You stood firm for each other, this company, and our mission,” Altman wrote. “One of the most important things for the team that builds [artificial general intelligence] safely is the ability to handle stressful and uncertain situations, and maintain good judgment throughout. Top marks.”

Altman was ousted on November 17. The company’s nonprofit board of directors said that a deliberative review had concluded that Altman “was not consistently candid in his communications with the board.” Under OpenAI’s unusual structure, the board’s duty was to the project’s original, nonprofit mission of developing AI that is beneficial to humanity, not the company’s business.

The board that ejected Altman included the company’s chief scientist, Ilya Sutskever, who later recanted and joined the staff members who threatened to quit if Altman was not reinstated.

Altman said that there would be no hard feelings over that, although his note left open questions about Sutskever’s future.

“I love and respect Ilya, I think he’s a guiding light of the field and a gem of a human being. I harbor zero ill will towards him,” Altman wrote, adding, “We hope to continue our working relationship and are discussing how he can continue his work at OpenAI.” What was clear, however, was that Sutskever would not be returning to the board.

Altman’s note to staff confirmed that OpenAI’s new all-male board will consist of former Treasury secretary Larry Summers, Quora CEO Adam D’Angelo, and former Salesforce co-CEO Bret Taylor, with Taylor as chair. D’Angelo is the only remaining member of the previous board.

Previous board members Helen Toner, a director at the Center for Security and Emerging Technology, a Georgetown think tank, and Tasha McCauley, an entrepreneur, both resigned.

Speaking at the New York Times DealBook summit shortly before the announcement, OpenAI cofounder Elon Musk expressed concerns about Altman and questioned why Sutskever had voted to fire him. “Either it was a serious thing and we should know what it is, or it’s not a serious thing and the board should resign,” Musk said. “I have mixed feelings about Sam, I do.”