The End of Astronauts—and the Rise of Robots

How much do we need humans in space?  How much do we want them there?  Astronauts embody the triumph of human imagination and engineering.  Their efforts shed light on the possibilities and problems posed by travel beyond our nurturing Earth.  Their presence on the moon or on other solar-system objects can imply that the countries or entities that sent them there possess ownership rights.  Astronauts promote an understanding of the cosmos, and inspire young people toward careers in science.

When it comes to exploration, however, our robots can outperform astronauts at a far lower cost and without risk to human life.  This assertion, once a prediction for the future, has become reality today, and robot explorers will continue to become ever more capable, while human bodies will not.  

Fifty years ago, when the first geologist to reach the moon suddenly recognized strange orange soil (the likely remnant of previously unsuspected volcanic activity), no one claimed that an automated explorer could have accomplished this feat.  Today, we have placed a semi-autonomous rover on Mars, one of a continuing suite of orbiters and landers, with cameras and other instruments that probe the Martian soil, capable of finding paths around obstacles as no previous rover could.  

Since Apollo 17 left the moon in 1972, astronauts have journeyed no farther than low Earth orbit. In this realm, their greatest achievement by far came with the five repair missions to the Hubble Space Telescope, which first saved the giant instrument from uselessness and then extended its life by decades by providing upgraded cameras and other systems. (Astronauts could reach the Hubble only because the Space Shuttle, which launched it, could go no farther from Earth, even though our planet produces all sorts of interfering radiation and light.) Each of these missions cost about a billion dollars in today’s money. A telescope to replace the Hubble would likewise have cost about a billion dollars; one estimate has set the cost of the five repair missions equal to that of constructing seven replacement telescopes.

Today, astrophysicists send their new spaceborne observatories far beyond the reach of astronauts: the James Webb Space Telescope, stationed roughly four times farther from Earth than the moon, now prepares to study a host of cosmic objects. Our robot explorers have visited all the sun’s planets (including that former planet Pluto), as well as comets and asteroids, securing immense amounts of data about them and their moons, most notably Jupiter’s Europa and Saturn’s Enceladus, where oceans that lie beneath an icy crust may harbor strange forms of life. Future missions from the United States, the European Space Agency, China, Japan, India, and Russia will only increase our robot emissaries’ abilities and the scientific importance of their discoveries. Each of these missions has cost far less than a single voyage that would send humans—which in any case remains an impossibility for the next few decades, for any destination save the moon and Mars.

In 2020, NASA released a list of accomplishments titled “20 Breakthroughs From 20 Years of Science Aboard the International Space Station.” Seventeen of those dealt with processes that robots could have performed, such as launching small satellites, detecting cosmic particles, employing microgravity conditions for drug development and the study of flames, and 3-D printing in space. The remaining three dealt with muscle atrophy and bone loss, growing food, and identifying microbes in space—things that are important for humans in that environment, but hardly a rationale for sending them there.

Facebook Has a Child Predation Problem

Surely due diligence would dictate proactive steps to prevent the creation of such groups, backed up by quick action to remove any that get through once they are flagged and reported. I would have thought so. Until I stumbled into these groups and began, with rising disbelief, to find it impossible to get them taken down.

Children are sharing personal images and contact information in a sexualized digital space, and being induced to join private groups or chats where further images and actions will be solicited and exchanged.

Even as debate over Congress’ EARN IT Act calls attention to the use of digital channels to distribute sexually explicit materials, we are failing to grapple with a seismic shift in the ways child sexual abuse materials are generated. Forty-five percent of US children aged 9 to 12 report using Facebook every day. (That fact alone makes a mockery of Facebook’s claim that it works actively to keep children under 13 off the platform.) According to recent research, over a quarter of 9- to 12-year-olds report having experienced sexual solicitation online. One in eight report having been asked to send a nude photo or video; one in 10 report having been asked to join a sexually explicit livestream. Smartphones, internet access, and Facebook together now reach into children’s hands and homes and create new spaces for active predation. At scale.

Of course I reported the group I had accidentally uncovered. I used Facebook’s on-platform system, tagging it as containing “nudity or sexual activity” which (next menu) “involves a child.” An automated response came back days later. The group had been reviewed and did not violate any “specific community standards.” If I continued to encounter content “offensive or distasteful to you”—was my taste the problem here?—I should report that specific content, not the group as a whole.

“Buscando novi@ de 9,10,11,12,13 años” (“Looking for a boyfriend or girlfriend aged 9, 10, 11, 12, 13”) had 7,900 members when I reported it. By the time Facebook replied that it did not violate community standards, it had 9,000.

So I tweeted at Facebook and the Facebook newsroom. I DMed people I didn’t know but thought might have access to people inside Facebook. I tagged journalists. And I reported through the platform’s protocol a dozen more groups, some with thousands of users: groups I found not through sexually explicit search terms but just by typing “11 12 13” into the Groups search bar.

What became ever clearer as I struggled to get action is that technology’s limits were not the problem. The full power of AI-driven algorithms was on display, but it was working to expand, not reduce, child endangerment. Because even as reply after reply hit my inbox denying grounds for action, new child sexualization groups began getting recommended to me as “Groups You May Like.”

Each new group recommended to me had the same mix of cartoon-filled come-ons, emotional grooming, and gamified invites to share sexual materials as the groups I had reported. Some were in Spanish, some in English, others in Tagalog. When I searched for a translation of “hanap jowa,” the name of a series of groups, it led me to an article from the Philippines reporting on efforts by Reddit users to get child-endangering Facebook groups removed there.

Europe Is in Danger of Using the Wrong Definition of AI

A company could choose the most obscure, nontransparent system architecture available, claiming (rightly, under this bad definition) that it was “more AI,” in order to access the prestige, investment, and government support that claim entails. For example, one giant deep neural network could be given the task not only of learning language but also of debiasing that language on several criteria, say, race, gender, and socioeconomic class. Then maybe the company could also sneak in a little slant, making the model point toward preferred advertisers or a political party. This would be called AI under either definition, so it would certainly fall within the remit of the AIA. But would anyone really be able to tell, reliably, what was going on with this system? Under the original AIA definition, some simpler way to get the job done would equally be considered “AI,” and so there would not be these same incentives to use intentionally complicated systems.

Of course, under the new definition, a company could also switch to using more traditional AI, like rule-based systems or decision trees (or just conventional software). And then it would be free to do whatever it wanted—this is no longer AI, and there’s no longer a special regulation to check how the system was developed or where it’s applied. Programmers can code up bad, corrupt instructions that deliberately or just negligently harm individuals or populations. Under the new presidency draft, this system would no longer get the extra oversight and accountability procedures it would under the original AIA draft. Incidentally, this route also avoids tangling with the extra law enforcement resources the AIA mandates member states fund in order to enforce its new requirements.

Limiting where the AIA applies by complicating and constraining the definition of AI is presumably an attempt to reduce the costs of its protections for both businesses and governments. Of course, we do want to minimize the costs of any regulation or governance—public and private resources both are precious. But the AIA already does that, and does it in a better, safer way. As originally proposed, the AIA already only applies to systems we really need to worry about, which is as it should be.

In the AIA’s original form, the vast majority of AI—like that in computer games, vacuum cleaners, or standard smartphone apps—is left to ordinary product law and would not receive any new regulatory burden at all. Or it would require only basic transparency obligations; for example, a chatbot should identify that it is AI, not an interface to a real human.

The most important part of the AIA is where it describes what sorts of systems are potentially hazardous to automate. It then regulates only these. Both drafts of the AIA say that there are a small number of contexts in which no AI system should ever operate—for example, identifying individuals in public spaces from their biometric data, creating social credit scores for governments, or producing toys that encourage dangerous behavior or self-harm. These are all simply banned, more or less. There are far more application areas for which using AI requires government and other human oversight: situations with life-altering outcomes, such as deciding who gets what government services, or who gets into which school or is awarded what loan. In these contexts, European residents would be provided with certain rights, and their governments with certain obligations, to ensure that the artifacts have been built and are functioning correctly and justly.

Making the AIA not apply to some of the systems we need to worry about—as the “presidency compromise” draft could do—would leave the door open for corruption and negligence. It would also legalize the very things the European Commission was trying to protect us from, like social credit systems and generalized facial recognition in public spaces, as long as a company could claim its system wasn’t “real” AI.

DeepMind Has Trained an AI to Control Nuclear Fusion

The inside of a tokamak—the doughnut-shaped vessel designed to contain a nuclear fusion reaction—presents a special kind of chaos. Hydrogen atoms are smashed together at unfathomably high temperatures, creating a whirling, roiling plasma that’s hotter than the surface of the sun. Finding smart ways to control and confine that plasma will be key to unlocking the potential of nuclear fusion, which has been mooted as the clean energy source of the future for decades. At this point, the science underlying fusion seems sound, so what remains is an engineering challenge. “We need to be able to heat this matter up and hold it together for long enough for us to take energy out of it,” says Ambrogio Fasoli, director of the Swiss Plasma Center at École Polytechnique Fédérale de Lausanne in Switzerland.

That’s where DeepMind comes in. The artificial intelligence firm, backed by Google parent company Alphabet, has previously turned its hand to video games and protein folding, and has been working on a joint research project with the Swiss Plasma Center to develop an AI for controlling a nuclear fusion reaction.

In stars, which are also powered by fusion, the sheer gravitational mass is enough to pull hydrogen atoms together and overcome their opposing charges. On Earth, scientists instead use powerful magnetic coils to confine the nuclear fusion reaction, nudging it into the desired position and shaping it like a potter manipulating clay on a wheel. The coils have to be carefully controlled to prevent the plasma from touching the sides of the vessel: this can damage the walls and slow down the fusion reaction. (There’s little risk of an explosion as the fusion reaction cannot survive without magnetic confinement).

But every time researchers want to change the configuration of the plasma and try out different shapes that may yield more power or a cleaner plasma, it necessitates a huge amount of engineering and design work. Conventional systems are computer-controlled and based on models and careful simulations, but they are, Fasoli says, “complex and not always necessarily optimized.”

DeepMind has developed an AI that can control the plasma autonomously. A paper published in the journal Nature describes how researchers from the two groups taught a deep reinforcement learning system to control the 19 magnetic coils inside TCV, the variable-configuration tokamak at the Swiss Plasma Center, which is used to carry out research that will inform the design of bigger fusion reactors in the future. “AI, and specifically reinforcement learning, is particularly well suited to the complex problems presented by controlling plasma in a tokamak,” says Martin Riedmiller, control team lead at DeepMind.

The neural network—a type of AI setup designed to mimic the architecture of the human brain—was initially trained in a simulation. It started by observing how changing the settings on each of the 19 coils affected the shape of the plasma inside the vessel. Then it was given different shapes to try to re-create in the plasma. These included a D-shaped cross section close to what will be used inside ITER (formerly the International Thermonuclear Experimental Reactor), the large-scale experimental tokamak under construction in France, and a snowflake configuration that could help dissipate the intense heat of the reaction more evenly around the vessel.
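
The Nature paper describes a deep reinforcement learning controller trained against a full tokamak simulator before being tried on TCV. The sketch below is only a toy illustration of that train-in-simulation idea, not DeepMind’s method or code: it replaces real plasma physics with a made-up linear “response” model and swaps the deep RL algorithm for a simple cross-entropy search, but it follows the same loop of proposing settings for 19 coils, simulating the resulting boundary shape, and scoring how closely it matches a requested target.

```python
# Toy sketch (hypothetical, not DeepMind's code): learn coil currents that
# reproduce a requested plasma boundary shape, entirely in a fake simulator.
import numpy as np

rng = np.random.default_rng(0)

N_COILS = 19          # TCV has 19 independently controllable shaping coils
N_BOUNDARY_PTS = 32   # points at which we sample the plasma boundary radius

# Stand-in "simulator": real plasma response is governed by MHD physics;
# here we pretend boundary radii depend linearly on the coil currents.
RESPONSE = rng.normal(size=(N_BOUNDARY_PTS, N_COILS)) / np.sqrt(N_COILS)

def simulate_boundary(coil_currents):
    """Return the boundary produced by a given coil setting (toy model)."""
    return RESPONSE @ coil_currents

def reward(coil_currents, target_boundary):
    """Higher is better: negative mean-squared error to the requested shape."""
    err = simulate_boundary(coil_currents) - target_boundary
    return -float(np.mean(err ** 2))

def train_controller(target_boundary, iterations=200, pop=64, elite=8):
    """Cross-entropy search for coil currents that match the target shape
    (a simple stand-in for the deep RL policy trained in simulation)."""
    mean, std = np.zeros(N_COILS), np.ones(N_COILS)
    for _ in range(iterations):
        candidates = rng.normal(mean, std, size=(pop, N_COILS))
        scores = np.array([reward(c, target_boundary) for c in candidates])
        best = candidates[np.argsort(scores)[-elite:]]  # keep the elite set
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-3
    return mean

# Ask the controller to reproduce a hypothetical "D-shaped" boundary target.
target = 1.0 + 0.3 * np.cos(np.linspace(0, 2 * np.pi, N_BOUNDARY_PTS))
coils = train_controller(target)
print("final shape error:", -reward(coils, target))
```

In the actual work, the controller is a neural network acting continuously on real-time magnetic measurements, rather than the one-shot search shown here; the sketch is meant only to make the “learn in simulation against a target shape” loop concrete.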

DeepMind’s neural network was able to manipulate the plasma inside a fusion reactor into a number of different shapes that fusion researchers have been exploring. Illustration: DeepMind & SPC/EPFL

DeepMind’s AI was able to autonomously figure out how to create these shapes by manipulating the magnetic coils in the right way—both in the simulation and when the scientists ran the same experiments for real inside the TCV tokamak to validate the simulation. It represents a “significant step,” says Fasoli, one that could influence the design of future tokamaks or even speed up the path to viable fusion reactors. “It’s a very positive result,” says Yasmin Andrew, a fusion specialist at Imperial College London, who was not involved in the research. “It will be interesting to see if they can transfer the technology to a larger tokamak.”

Fusion offered a particular challenge to DeepMind’s scientists because the process is both complex and continuous. Unlike in a turn-based game such as Go, which the company famously conquered with its AlphaGo AI, the state of a plasma changes constantly. And to make things even harder, it can’t be continuously measured. It is what AI researchers call an “under-observed system.”

“Sometimes algorithms which are good at these discrete problems struggle with such continuous problems,” says Jonas Buchli, a research scientist at DeepMind. “This was a really big step forward for our algorithm, because we could show that this is doable. And we think this is definitely a very, very complex problem to be solved. It is a different kind of complexity than what you have in games.”

The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy

To read this article in Spanish, please click here.

In 2018, while the Argentine Congress was hotly debating whether to decriminalize abortion, the Ministry of Early Childhood in the northern province of Salta and the American tech giant Microsoft presented an algorithmic system to predict teenage pregnancy. They called it the Technology Platform for Social Intervention.

“With technology you can foresee five or six years in advance, with first name, last name, and address, which girl—future teenager—is 86 percent predestined to have an adolescent pregnancy,” Juan Manuel Urtubey, then the governor of the province, proudly declared on national television. The stated goal was to use the algorithm to predict which girls from low-income areas would become pregnant in the next five years. It was never made clear what would happen once a girl or young woman was labeled as “predestined” for motherhood or how this information would help prevent adolescent pregnancy. The social theories informing the AI system, like its algorithms, were opaque.

The system was based on data—including age, ethnicity, country of origin, disability, and whether the subject’s home had hot water in the bathroom—from 200,000 residents in the city of Salta, including 12,000 women and girls between the ages of 10 and 19. Though there is no official documentation, from reviewing media articles and two technical reviews, we know that “territorial agents” visited the houses of the girls and women in question, asked survey questions, took photos, and recorded GPS locations. What did those subjected to this intimate surveillance have in common? They were poor, some were migrants from Bolivia and other countries in South America, and others were from Indigenous Wichí, Qulla, and Guaraní communities.

Although Microsoft spokespersons proudly announced that the technology in Salta was “one of the pioneering cases in the use of AI data” in state programs, it presents little that is new. Instead, it is an extension of a long Argentine tradition: controlling the population through surveillance and force. And the reaction to it shows how grassroots Argentine feminists were able to take on this misuse of artificial intelligence.

In the 19th and early 20th centuries, successive Argentine governments carried out a genocide of Indigenous communities and promoted immigration policies based on ideologies designed to attract European settlement, all in hopes of blanquismo, or “whitening” the country. Over time, a national identity was constructed along social, cultural, and most of all racial lines.

This type of eugenic thinking has a propensity to shapeshift and adapt to new scientific paradigms and political circumstances, according to historian Marisa Miranda, who tracks Argentina’s attempts to control the population through science and technology. Take the case of immigration. Throughout Argentina’s history, opinion has oscillated between celebrating immigration as a means of “improving” the population and considering immigrants to be undesirable and a political threat to be carefully watched and managed.

More recently, the Argentine military dictatorship between 1976 and 1983 controlled the population through systematic political violence. During the dictatorship, women had the “patriotic task” of populating the country, and contraception was prohibited by a 1977 law. The cruelest expression of the dictatorship’s interest in motherhood was the practice of kidnapping pregnant women considered politically subversive. Most women were murdered after giving birth and many of their children were illegally adopted by the military to be raised by “patriotic, Catholic families.”

While Salta’s AI system to “predict pregnancy” was hailed as futuristic, it can only be understood in light of this long history, particularly, in Miranda’s words, the persistent eugenic impulse that always “contains a reference to the future” and assumes that reproduction “should be managed by the powerful.”

Due to the complete lack of national AI regulation, the Technology Platform for Social Intervention was never subject to formal review and no assessment of its impacts on girls and women has been made. There has been no official data published on its accuracy or outcomes. Like most AI systems all over the world, including those used in sensitive contexts, it lacks transparency and accountability.

Though it is unclear whether the technology program was ultimately suspended, everything we know about the system comes from the efforts of feminist activists and journalists who led what amounted to a grassroots audit of a flawed and harmful AI system. By quickly activating a well-oiled machine of community organizing, these activists brought national media attention to how an untested, unregulated technology was being used to violate the rights of girls and women.

“The idea that algorithms can predict teenage pregnancy before it happens is the perfect excuse for anti-women and anti-sexual and reproductive rights activists to declare abortion laws unnecessary,” wrote feminist scholars Paz Peña and Joana Varon at the time. Indeed, it was soon revealed that an Argentine nonprofit called the Conin Foundation, run by doctor Abel Albino, a vocal opponent of abortion rights, was behind the technology, along with Microsoft.