DeepMind Has Trained an AI to Control Nuclear Fusion

The inside of a tokamak—the doughnut-shaped vessel designed to contain a nuclear fusion reaction—presents a special kind of chaos. Hydrogen atoms are smashed together at unfathomably high temperatures, creating a whirling, roiling plasma that’s hotter than the surface of the sun. Finding smart ways to control and confine that plasma will be key to unlocking the potential of nuclear fusion, which has been mooted as the clean energy source of the future for decades. At this point, the science underlying fusion seems sound, so what remains is an engineering challenge. “We need to be able to heat this matter up and hold it together for long enough for us to take energy out of it,” says Ambrogio Fasoli, director of the Swiss Plasma Center at École Polytechnique Fédérale de Lausanne in Switzerland.

That’s where DeepMind comes in. The artificial intelligence firm, backed by Google parent company Alphabet, has previously turned its hand to video games and protein folding, and has been working on a joint research project with the Swiss Plasma Center to develop an AI for controlling a nuclear fusion reaction.

In stars, which are also powered by fusion, the sheer gravitational mass is enough to pull hydrogen atoms together and overcome their opposing charges. On Earth, scientists instead use powerful magnetic coils to confine the nuclear fusion reaction, nudging it into the desired position and shaping it like a potter manipulating clay on a wheel. The coils have to be carefully controlled to prevent the plasma from touching the sides of the vessel: this can damage the walls and slow down the fusion reaction. (There’s little risk of an explosion, as the fusion reaction cannot survive without magnetic confinement.)

But every time researchers want to change the configuration of the plasma and try out different shapes that may yield more power or a cleaner plasma, it necessitates a huge amount of engineering and design work. Conventional systems are computer-controlled and based on models and careful simulations, but they are, Fasoli says, “complex and not always necessarily optimized.”

DeepMind has developed an AI that can control the plasma autonomously. A paper published in the journal Nature describes how researchers from the two groups taught a deep reinforcement learning system to control the 19 magnetic coils inside TCV, the variable-configuration tokamak at the Swiss Plasma Center, which is used to carry out research that will inform the design of bigger fusion reactors in the future. “AI, and specifically reinforcement learning, is particularly well suited to the complex problems presented by controlling plasma in a tokamak,” says Martin Riedmiller, control team lead at DeepMind.

The neural network—a type of AI setup designed to mimic the architecture of the human brain—was initially trained in a simulation. It started by observing how changing the settings on each of the 19 coils affected the shape of the plasma inside the vessel. Then it was given different shapes to try to re-create in the plasma. These included a D-shaped cross section close to what will be used inside ITER (formerly the International Thermonuclear Experimental Reactor), the large-scale experimental tokamak under construction in France, and a snowflake configuration that could help dissipate the intense heat of the reaction more evenly around the vessel.
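The structure of that training loop is simple to sketch, even though the real system is far more sophisticated. The toy example below is illustrative only, not DeepMind’s setup: the linear “plasma response” matrix, the eight-number shape encoding, and the cross-entropy search (standing in for their deep reinforcement learning agent) are all invented for the example.

```python
# A toy version of the training loop: learn a feedback policy that maps
# the current shape error to settings for 19 coils, rewarded for holding
# the plasma close to a target shape. Everything here is invented for
# illustration; DeepMind used deep reinforcement learning on a full
# physics simulator, not a linear model or cross-entropy search.
import numpy as np

rng = np.random.default_rng(0)
N_COILS, N_SHAPE = 19, 8  # 19 control coils; shape encoded as 8 numbers

# Hypothetical linear response: how coil currents nudge the shape.
A = 0.1 * rng.normal(size=(N_SHAPE, N_COILS))

def rollout(W, target, steps=50):
    """Score one episode of the linear feedback policy currents = W @ error."""
    shape = np.zeros(N_SHAPE)
    total_reward = 0.0
    for _ in range(steps):
        currents = W @ (target - shape)                 # act on the shape error
        shape += A @ currents + rng.normal(scale=0.01, size=N_SHAPE)
        total_reward -= np.linalg.norm(target - shape)  # reward: stay near target
    return total_reward

target = rng.normal(size=N_SHAPE)  # stand-in for a D-shaped or snowflake target
mean = np.zeros((N_COILS, N_SHAPE))
std = np.ones((N_COILS, N_SHAPE))

# Cross-entropy method: sample candidate policies, keep the best, refit.
for generation in range(30):
    candidates = [mean + std * rng.normal(size=mean.shape) for _ in range(64)]
    scores = np.array([rollout(W, target) for W in candidates])
    elites = [candidates[i] for i in np.argsort(scores)[-8:]]
    mean, std = np.mean(elites, axis=0), np.std(elites, axis=0) + 1e-3
    print(f"generation {generation:2d}: best reward {scores.max():.2f}")
```

Swap the toy simulator for a physics code and the search for a deep actor-critic learner, and you have the broad outline of the approach, though not its substance.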

DeepMind’s neural network was able to manipulate the plasma inside a fusion reactor into a number of different shapes that fusion researchers have been exploring. Illustration: DeepMind & SPC/EPFL

DeepMind’s AI was able to autonomously figure out how to create these shapes by manipulating the magnetic coils in the right way—both in the simulation and when the scientists ran the same experiments for real inside the TCV tokamak to validate the simulation. It represents a “significant step,” says Fasoli, one that could influence the design of future tokamaks or even speed up the path to viable fusion reactors. “It’s a very positive result,” says Yasmin Andrew, a fusion specialist at Imperial College London, who was not involved in the research. “It will be interesting to see if they can transfer the technology to a larger tokamak.”

Fusion offered a particular challenge to DeepMind’s scientists because the process is both complex and continuous. Unlike a turn-based game like Go, which the company has famously conquered with its AlphaGo AI, the state of a plasma constantly changes. And to make things even harder, it can’t be continuously measured. It is what AI researchers call an “under-observed system.”

“Sometimes algorithms which are good at these discrete problems struggle with such continuous problems,” says Jonas Buchli, a research scientist at DeepMind. “This was a really big step forward for our algorithm, because we could show that this is doable. And we think this is definitely a very, very complex problem to be solved. It is a different kind of complexity than what you have in games.”
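A minimal sketch makes the contrast concrete (every dimension here is invented for illustration): a turn-based game asks for one categorical choice per move, while the tokamak controller must emit a real-valued command for every coil at each control step, working from indirect sensor readings rather than the full plasma state.

```python
# Hypothetical dimensions throughout; this only illustrates the contrast
# Buchli describes, not DeepMind's actual interfaces.
import numpy as np

rng = np.random.default_rng(1)

# Discrete, turn-based control (e.g. Go): pick one of k legal moves.
move_logits = rng.normal(size=361)     # one score per board position
move = int(np.argmax(move_logits))     # a single categorical decision per turn

# Continuous control (the tokamak): a real-valued command for every coil,
# sampled around the policy's mean output at each control step.
coil_means = rng.normal(size=19)
coil_action = coil_means + 0.05 * rng.normal(size=19)   # 19 real numbers

# Under-observation: the true plasma state is never measured directly, so
# one common workaround is to hand the policy a short history of indirect
# sensor readings (magnetic probes and the like) in place of the state.
probe_history = [rng.normal(size=34) for _ in range(4)]  # last 4 readings
observation = np.concatenate(probe_history)  # what the network actually sees
```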

YouTube’s Olympics Highlights Are Riddled With Propaganda

Sports fans who tuned in to watch the Beijing Winter Olympics on YouTube are instead being served propaganda videos. An analysis of YouTube search results by WIRED found that people who typed “Beijing,” “Beijing 2022,” “Olympics,” or “Olympics 2022” were shown pro-China and anti-China propaganda videos in the top results. Five of the most prominent propaganda videos, which often appear above actual Olympics highlights, have amassed almost 900,000 views.
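An audit along these lines can be approximated with the public YouTube Data API v3, though WIRED’s exact methodology isn’t described and API results may differ from what a logged-out viewer sees on youtube.com. The sketch below, with a placeholder API key, pulls the top results for each query along with their view counts:

```python
# A rough sketch of a search-result audit using the YouTube Data API v3.
# This is not WIRED's methodology; the API key is a placeholder, and the
# API's ranking may differ from youtube.com's logged-out search results.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical; requires a Google Cloud project
SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"
VIDEOS_URL = "https://www.googleapis.com/youtube/v3/videos"

for query in ["Beijing", "Beijing 2022", "Olympics", "Olympics 2022"]:
    # Fetch the top ten video results for the query.
    search = requests.get(SEARCH_URL, params={
        "part": "snippet", "q": query, "type": "video",
        "maxResults": 10, "key": API_KEY,
    }).json()
    video_ids = [item["id"]["videoId"] for item in search.get("items", [])]

    # Look up view counts for those videos in one batched call.
    stats = requests.get(VIDEOS_URL, params={
        "part": "snippet,statistics", "id": ",".join(video_ids), "key": API_KEY,
    }).json()

    print(f"\nTop results for {query!r}:")
    for video in stats.get("items", []):
        views = video["statistics"].get("viewCount", "?")
        print(f"  {views:>12} views  {video['snippet']['title']}")
```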

Two anti-China videos showing up in search results were published by a group called The BL (The Beauty of Life), which Facebook previously linked to the Falun Gong, a Chinese spiritual movement that was banned by the Chinese Communist Party in 1999 and has protested against the regime ever since. They jostled for views with pro-China videos posted by Western YouTubers whose work has previously been promoted by China’s Ministry of Foreign Affairs. Similar search results were visible in the US, Canada, and the UK. WIRED also found signs that viewing numbers for pro-China videos are being artificially boosted through the use of fake news websites.

This flurry of propaganda videos was first spotted earlier this month by John Scott-Railton, a researcher at the University of Toronto’s research laboratory, Citizen Lab. On February 5, Scott-Railton found that after he’d watched skating and curling videos, YouTube automatically played a video by a pro-China YouTube account. “I found myself on a slippery slide from skating and curling into increasingly targeted propaganda,” he says. These videos no longer appeared in autoplay by February 11, when WIRED conducted its analysis. But the way similar videos still dominate YouTube search results suggests the platform is at risk of letting such campaigns hijack the Olympics.

YouTube did not respond to a request to comment on why content used as propaganda to promote or deride China was being pushed to the top of Olympics search results, nor did the company say if those behind the videos had violated its terms of service by using fake websites to inflate their views.

A common theme in the pro-Beijing propaganda videos is the 2019 decision by US-born skier Eileen Gu to compete for China at the Winter Olympics. A video titled “USA’s Boycott FAILURE … Eileen Gu Wins Gold” by YouTuber Jason Lightfoot is the top result for the search term “Beijing,” with 54,000 views.

The US and Canada were among the countries that took part in a diplomatic boycott of the Beijing Winter Olympics. In Canada, that same video by Jason Lightfoot also showed up for users searching for “Olympics 2022” and “Winter Olympics,” although much further down, in 26th and 33rd place. In the video, Lightfoot says Western media “can’t take what Eileen Gu represents … someone who has chosen China over the American dream.”

In another video, which has more than 400,000 views, American YouTuber Cyrus Janssen also discusses why Gu chose to represent China. The video, which is the fifth result for the search term “Beijing,” details Gu’s career before referencing the high rates of anti-Asian hate crime in the US, a subject that has also been covered by mainstream American media outlets.

Self-Driving Cars: The Complete Guide

In the past decade, autonomous driving has gone from “maybe possible” to “definitely possible” to “inevitable” to “how did anyone ever think this wasn’t inevitable?” to “now commercially available.” In December 2018, Waymo, the company that emerged from Google’s self-driving-car project, officially started its commercial self-driving-car service in the suburbs of Phoenix. At first, the program was underwhelming: available only to a few hundred vetted riders, with human safety operators still behind the wheel. But in the past four years, Waymo has slowly opened the program to members of the public and has begun to run robotaxis without drivers inside. The company has since brought its act to San Francisco. People are now paying for robot rides.

And it’s just a start. Waymo says it will expand the service’s capability and availability over time. Meanwhile, its onetime monopoly has evaporated. Every significant automaker is pursuing the tech, eager to rebrand and rebuild itself as a “mobility provider.” Amazon bought a self-driving-vehicle developer, Zoox. Autonomous trucking companies are raking in investor money. Tech giants like Apple, IBM, and Intel are looking to carve off their slice of the pie. Countless hungry startups have materialized to fill niches in a burgeoning ecosystem, focusing on laser sensors, compressing mapping data, setting up service centers, and more.

This 21st-century gold rush is motivated by the intertwined forces of opportunity and survival instinct. By one account, driverless tech will add $7 trillion to the global economy and save hundreds of thousands of lives in the next few decades. Simultaneously, it could devastate the auto industry and its associated gas stations, drive-thrus, taxi drivers, and truckers. Some people will prosper. Most will benefit. Some will be left behind.

It’s worth remembering that when automobiles first started rumbling down manure-clogged streets, people called them horseless carriages. The moniker made sense: Here were vehicles that did what carriages did, minus the hooves. By the time “car” caught on as a term, the invention had become something entirely new. Over a century, it reshaped how humanity moves and thus how (and where and with whom) humanity lives. This cycle has restarted, and the term “driverless car” may soon seem as anachronistic as “horseless carriage.” We don’t know how cars that don’t need human chauffeurs will mold society, but we can be sure a similar gear shift is on the way.

The First Self-Driving Cars

Just over a decade ago, the idea of being chauffeured around by a string of zeros and ones was ludicrous to pretty much everybody who wasn’t at an abandoned Air Force base outside Los Angeles, watching a dozen driverless cars glide through real traffic. That event was the Urban Challenge, the third and final competition for autonomous vehicles put on by Darpa, the Pentagon’s skunkworks arm.

At the time, America’s military-industrial complex had already thrown vast sums and years of research at the problem of unmanned trucks. It had laid a foundation for this technology, but stalled when it came to making a vehicle that could drive at practical speeds, through all the hazards of the real world. So, Darpa figured, maybe someone else—someone outside the DOD’s standard roster of contractors, someone not tied to a list of detailed requirements but striving for a slightly crazy goal—could put it all together. It invited the whole world to build a vehicle that could drive across California’s Mojave Desert, and whoever’s robot did it the fastest would get a million-dollar prize.

The 2004 Grand Challenge was something of a mess. Each team grabbed some combination of the sensors and computers available at the time, wrote their own code, and welded their own hardware, looking for the right recipe that would take their vehicle across 142 miles of Mojave sand and dirt. The most successful vehicle went just seven miles. Most crashed, flipped, or rolled over within sight of the starting gate. But the race created a community of people—geeks, dreamers, and lots of students not yet jaded by commercial enterprise—who believed the robot drivers people had been craving for nearly forever were possible, and who were suddenly driven to make them real.

They came back for a follow-up race in 2005 and proved that making a car drive itself was indeed possible: Five vehicles finished the course. By the 2007 Urban Challenge, the vehicles were not just avoiding obstacles and sticking to trails but following traffic laws, merging, parking, even making safe, legal U-turns.

When Google launched its self-driving car project in 2009, it started by hiring a team of Darpa Challenge veterans. Within 18 months, they had built a system that could handle some of California’s toughest roads (including the famously winding block of San Francisco’s Lombard Street) with minimal human involvement. A few years later, Elon Musk announced Tesla would build a self-driving system into its cars. And the proliferation of ride-hailing services like Uber and Lyft weakened the link between being in a car and owning that car, helping set the stage for a day when actually driving that car falls away too. In 2015, Uber poached dozens of scientists from Carnegie Mellon University—a robotics and artificial intelligence powerhouse—to get its effort going.

Biden’s ‘Antitrust Revolution’ Overlooks AI—at Americans’ Peril

Despite the executive orders and congressional hearings of the “Biden antitrust revolution,” the most profound anti-competitive shift is happening under policymakers’ noses: the cornering of artificial intelligence and automation by a handful of tech companies. This needs to change.

There is little doubt that the impact of AI will be widely felt. It is shaping product innovations, creating new research, discovery, and development pathways, and reinventing business models. AI is making inroads in the development of autonomous vehicles, which may eventually improve road safety, reduce urban congestion, and help drivers make better use of their time. AI recently predicted the molecular structure of almost every protein in the human body, and it helped develop and roll out a Covid vaccine in record time. The pandemic itself may have accelerated AI’s incursion—in emergency rooms for triage; in airports, where robots spray disinfecting chemicals; in increasingly automated warehouses and meatpacking plants; and in our remote workdays, with the growing presence of chatbots, speech recognition, and email systems that get better at completing our sentences.

Exactly how AI will affect the future of human work, wages, or productivity overall remains unclear. Though service and blue-collar wages have lately been on the rise, they’ve stagnated for three decades. According to MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo, 50 to 70 percent of this languishing can be attributed to the loss of mostly routine jobs to automation. White-collar occupations are also at risk as machine learning and smart technologies take on complex functions. According to McKinsey, while only about 10 percent of these jobs could disappear altogether, 60 percent of them may see at least a third of their tasks subsumed by machines and algorithms. Some researchers argue that while AI’s overall productivity impact has been so far disappointing, it will improve; others are less sanguine. Despite these uncertainties, most experts agree that on net, AI will “become more of a challenge to the workforce,” and we should anticipate a flat to slightly negative impact on jobs by 2030.

Without intervention, AI could also help undermine democracy, by amplifying misinformation or enabling mass surveillance. The past year and a half has also underscored the impact of algorithmically powered social media, not just on the health of democracy, but on health care itself.

The overall direction and net impact of AI sit on a knife’s edge, unless AI R&D and applications are appropriately channeled with wider societal and economic benefits in mind. How can we ensure that?

A handful of tech companies, including the US giants Amazon, Alphabet, Facebook, and Netflix, along with Chinese mega-players such as Alibaba and Baidu, are responsible for $2 of every $3 spent globally on AI. They’re also among the top AI patent holders. Not only do their outsize budgets for AI dwarf others’, including the federal government’s, they also emphasize building internally rather than buying AI. Even though they buy comparatively little, they’ve still cornered the AI startup acquisition market. Many of these are early-stage acquisitions, meaning the tech giants integrate the products from these companies into their own portfolios or take IP off the market if it doesn’t suit their strategic purposes and redeploy the talent. According to research from my Digital Planet team, US AI talent is intensely concentrated. The median number of AI employees in the field’s top five employers—Amazon, Google, Microsoft, Facebook, and Apple—is some 18,000, while the median for companies six to 24 is about 2,500—and it drops significantly from there. Moreover, these companies have near-monopolies of data on key behavioral areas. And they are setting the stage to become the primary suppliers of AI-based products and services to the rest of the world.

Each key player has areas of focus consistent with its business interests: Google/Alphabet spends disproportionately on natural language and image processing and on optical character, speech, and facial recognition. Amazon does the same on supply chain management and logistics, robotics, and speech recognition. Many of these investments will yield socially beneficial applications, while others, such as IBM’s Watson—which aspired to become the go-to digital decision tool in fields as diverse as health care, law, and climate action—may not deliver on initial promises, or may fail altogether. Moonshot projects, such as level 4 driverless cars, may attract outsize investment simply because the Big Tech players choose to champion them. Failures, disappointments, and pivots are natural to developing any new technology. We should, however, worry about the concentration of investments in a technology so fundamental and ask how investments are being allocated overall. AI, arguably, could have a more profound impact than social media, online retail, or app stores—the current targets of antitrust. Google CEO Sundar Pichai may have been a tad overdramatic when he declared that AI will have more impact on humanity than fire, but that alone ought to light a fire under the policy establishment to pay closer attention.

Dumbed Down AI Rhetoric Harms Everyone

When the European Union Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and responses to it underscore democracies’ confusing rhetoric on AI.

Over the past decade, high-level stated goals for regulating AI have often conflicted with the specifics of regulatory proposals, and neither clearly articulates what the desired end state should look like. Coherent and meaningful progress on developing internationally attractive democratic AI regulation, even as that may vary from country to country, begins with resolving the discourse’s many contradictions and unsubtle characterizations.

The EU Commission has touted its proposal as an AI regulation landmark. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

This is certainly better than many national governments, especially the US, stagnating on rules of the road for companies, government agencies, and other institutions. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or operating buses in Málaga, Spain.

But to cast the EU’s regulation as “leading” simply because it’s first only masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges at hand with democratic AI strategy.

Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists on the harms facial recognition can inflict on marginalized communities and grave mass-surveillance risks.

The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.

The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.

The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander from the nonprofit organization European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that on the surface appears to restrict facial recognition use but in fact has many broad carve-outs.