Simulation Tech Can Help Predict the Biggest Threats

The character of conflict between nations has fundamentally changed. Governments and militaries now fight on our behalf in the “gray zone,” where the boundaries between peace and war are blurred. They must navigate a complex web of ambiguous and deeply interconnected challenges, ranging from political destabilization and disinformation campaigns to cyberattacks, assassinations, proxy operations, election meddling, and perhaps even human-made pandemics. Add to this list the existential threat of climate change (and its geopolitical ramifications) and it is clear that what constitutes a national security issue has broadened, with each crisis straining or degrading the fabric of national resilience.

Traditional analysis tools are poorly equipped to predict and respond to these blurred and intertwined threats. Instead, in 2022 governments and militaries will use sophisticated and credible real-life simulations, putting software at the heart of their decision-making and operating processes. The UK Ministry of Defence, for example, is developing what it calls a military Digital Backbone. This will incorporate cloud computing, modern networks, and a new transformative capability called a Single Synthetic Environment, or SSE.

This SSE will combine artificial intelligence, machine learning, computational modeling, and modern distributed systems with trusted data sets from multiple sources to support detailed, credible simulations of the real world. This data will be owned by critical institutions, but will also be sourced via an ecosystem of trusted partners, such as the Alan Turing Institute.

An SSE offers a multilayered simulation of a city, region, or country, including high-quality mapping and information about critical national infrastructure, such as power, water, transport networks, and telecommunications. This can then be overlaid with other information, such as smart-city data, information about military deployments, or data gleaned from social listening. From this, models can be constructed that give a rich, detailed picture of how a region or city might react to a given event: a disaster, an epidemic, a cyberattack, or a combination of such events orchestrated by hostile states.
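As a rough illustration only, here is a minimal sketch in Python of how such a layered environment might be represented in software. Every name and field below is a hypothetical example, not a description of the Ministry of Defence's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One thematic slice of the synthetic environment (hypothetical)."""
    name: str  # e.g. "power-grid", "transport", "social-listening"
    records: list = field(default_factory=list)

@dataclass
class SyntheticEnvironment:
    """A multilayered model of a city or region (illustrative only)."""
    region: str
    layers: dict = field(default_factory=dict)

    def add_layer(self, layer: Layer) -> None:
        self.layers[layer.name] = layer

    def overlay(self, event: str, affected_layers: list) -> dict:
        """Return the layers an event would touch: the starting point for a model run."""
        return {name: self.layers[name] for name in affected_layers if name in self.layers}

# Example: a region with infrastructure and smart-city layers, probed by a cyberattack scenario.
env = SyntheticEnvironment(region="example-region")
env.add_layer(Layer("power-grid"))
env.add_layer(Layer("transport"))
env.add_layer(Layer("smart-city-sensors"))
impacted = env.overlay("substation-cyberattack", ["power-grid", "transport"])
print(sorted(impacted))
```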

Defense synthetics are not a new concept. However, previous solutions have been built in a standalone way that limits reuse, longevity, choice, and—crucially—the speed of insight needed to effectively counteract gray-zone threats.

National security officials will be able to use SSEs to identify threats early, understand them better, explore their response options, and analyze the likely consequences of different actions. They will even be able to use them to train, rehearse, and implement their plans. By running thousands of simulated futures, senior leaders will be able to grapple with complex questions, refining policies and complex plans in a virtual world before implementing them in the real one.

One key question that will only grow in importance in 2022 is how countries can best secure their populations and supply chains against the dramatic weather events driven by climate change. SSEs will be able to help answer this by combining regional infrastructure, network, road, and population data with meteorological models to see how and when such events might unfold.
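To give a sense of what “running thousands of simulated futures” could look like in code, the toy sketch below samples storm intensities and estimates how often a hypothetical regional power grid holds up. The thresholds and probability distributions are invented for the example and stand in for the far richer models an SSE would actually use.

```python
import random

def grid_survives(storm_intensity: float, grid_capacity: float = 0.7) -> bool:
    """Toy model: the (hypothetical) grid holds unless intensity exceeds capacity plus random slack."""
    slack = random.uniform(0.0, 0.2)
    return storm_intensity <= grid_capacity + slack

def run_ensemble(n_runs: int = 10_000) -> float:
    """Run many simulated futures and report the fraction in which the grid holds."""
    survived = 0
    for _ in range(n_runs):
        intensity = random.betavariate(2, 5)  # skewed toward milder storms, with occasional severe ones
        if grid_survives(intensity):
            survived += 1
    return survived / n_runs

if __name__ == "__main__":
    print(f"Grid holds in {run_ensemble():.1%} of simulated futures")
```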

Self-Driving Cars: The Complete Guide

In the past decade, autonomous driving has gone from “maybe possible” to “definitely possible” to “inevitable” to “how did anyone ever think this wasn’t inevitable?” to “now commercially available.” In December 2018, Waymo, the company that emerged from Google’s self-driving-car project, officially started its commercial self-driving-car service in the suburbs of Phoenix. At first, the program was underwhelming: available only to a few hundred vetted riders, and human safety operators remained behind the wheel. But in the past four years, Waymo has slowly opened the program to members of the public and has begun to run robotaxis without drivers inside. The company has since brought its act to San Francisco. People are now paying for robot rides.

And it’s just a start. Waymo says it will expand the service’s capability and availability over time. Meanwhile, its onetime monopoly has evaporated. Every significant automaker is pursuing the tech, eager to rebrand and rebuild itself as a “mobility provider.” Amazon bought a self-driving-vehicle developer, Zoox. Autonomous trucking companies are raking in investor money. Tech giants like Apple, IBM, and Intel are looking to carve off their slice of the pie. Countless hungry startups have materialized to fill niches in a burgeoning ecosystem, focusing on laser sensors, compressing mapping data, setting up service centers, and more.

This 21st-century gold rush is motivated by the intertwined forces of opportunity and survival instinct. By one account, driverless tech will add $7 trillion to the global economy and save hundreds of thousands of lives in the next few decades. Simultaneously, it could devastate the auto industry and its associated gas stations, drive-thrus, taxi drivers, and truckers. Some people will prosper. Most will benefit. Some will be left behind.

It’s worth remembering that when automobiles first started rumbling down manure-clogged streets, people called them horseless carriages. The moniker made sense: Here were vehicles that did what carriages did, minus the hooves. By the time “car” caught on as a term, the invention had become something entirely new. Over a century, it reshaped how humanity moves and thus how (and where and with whom) humanity lives. This cycle has restarted, and the term “driverless car” may soon seem as anachronistic as “horseless carriage.” We don’t know how cars that don’t need human chauffeurs will mold society, but we can be sure a similar gear shift is on the way.

The First Self-Driving Cars

Just over a decade ago, the idea of being chauffeured around by a string of zeros and ones was ludicrous to pretty much everybody who wasn’t at an abandoned Air Force base outside Los Angeles, watching a dozen driverless cars glide through real traffic. That event was the Urban Challenge, the third and final competition for autonomous vehicles put on by Darpa, the Pentagon’s skunkworks arm.

At the time, America’s military-industrial complex had already poured vast sums and years of research into making unmanned trucks. It had laid a foundation for this technology, but stalled when it came to making a vehicle that could drive at practical speeds, through all the hazards of the real world. So, Darpa figured, maybe someone else—someone outside the DOD’s standard roster of contractors, someone not tied to a list of detailed requirements but striving for a slightly crazy goal—could put it all together. It invited the whole world to build a vehicle that could drive across California’s Mojave Desert, and whoever’s robot did it the fastest would get a million-dollar prize.

The 2004 Grand Challenge was something of a mess. Each team grabbed some combination of the sensors and computers available at the time, wrote their own code, and welded their own hardware, looking for the right recipe to carry their vehicle across 142 miles of Mojave sand and dirt. The most successful vehicle went just seven miles. Most crashed, flipped, or rolled over within sight of the starting gate. But the race created a community of people—geeks, dreamers, and lots of students not yet jaded by commercial enterprise—who believed the robot drivers people had long been craving were possible, and who were suddenly driven to make them real.

They came back for a follow-up race in 2005 and proved that making a car drive itself was indeed possible: Five vehicles finished the course. By the 2007 Urban Challenge, the vehicles were not just avoiding obstacles and sticking to trails but following traffic laws, merging, parking, even making safe, legal U-turns.

When Google launched its self-driving car project in 2009, it started by hiring a team of Darpa Challenge veterans. Within 18 months, they had built a system that could handle some of California’s toughest roads (including the famously winding block of San Francisco’s Lombard Street) with minimal human involvement. A few years later, Elon Musk announced Tesla would build a self-driving system into its cars. And the proliferation of ride-hailing services like Uber and Lyft weakened the link between being in a car and owning that car, helping set the stage for a day when actually driving that car falls away too. In 2015, Uber poached dozens of scientists from Carnegie Mellon University—a robotics and artificial intelligence powerhouse—to get its effort going.

The Absurd Idea to Put Bodycams on Teachers Is … Feasible?

In the realm of international cybersecurity, “dual use” technologies are capable of both affirming and eroding human rights. Facial recognition may identify a missing child, or make anonymity impossible. Hacking may save lives by revealing key intel on a terrorist attack, or empower dictators to identify and imprison political dissidents.

The same is true for gadgets. Your smart speaker makes it easier to order pizza and listen to music, but also helps tech giants track you even more intimately and target you with more ads. Your phone’s GPS can both tell where you are and pass that data to advertisers and, sometimes, the federal government.

Tools can often be bought for one purpose, then, over time, used for another.

These subtle shifts are so common that when a conservative think tank in Nevada last month suggested mandating that teachers wear body cameras to ensure they don’t teach critical race theory, I thought it was ridiculous, offensive, and entirely feasible. Body cameras were intended to keep an eye on cops, but have also been used by police to misrepresent their encounters with the public.

Days later, “body cameras” trended on Twitter after Fox News pundit Tucker Carlson endorsed the idea. Anti-CRT teaching bills, which have already passed in states like Iowa, Texas, and my home state, Arkansas, continued to gain momentum. Now, I’m half expecting these bills to include funding for the devices because truly no idea is too absurd for the surveillance state.

The logic (to the extent that any logic has been applied) is that teachers are being compelled by far-left activists to teach students to resist patriotism and instead hate America because of the centuries-old sin of chattel slavery. Body cameras would allow parents to monitor whether their children are being indoctrinated. (There’s more support for this than you might think.)

As recounted by The Atlantic’s Adam Harris, the recent rebranding of critical race theory as an existential threat dates back about a year and a half.

In late 2019, a few schools around the country began adding excerpts from The New York Times’ 1619 Project to their history curriculum, outraging many conservatives who dismissed the core thesis reframing American history around slavery. The surge of interest in diversity and anti-racism training following the murder of George Floyd prompted some conservative writers to complain of secret reeducation campaigns. (Ironically, the Black men and women actually leading these trainings are ambivalent about whether they’ll cause lasting change.)

And so, everything from reading lists to diversity seminars became “critical race theory,” a far cry from CRT’s origin in the 1970s as an analysis of the legal system by the late Harvard Law professor Derrick Bell.

This is what makes the turn toward surveillance to outlaw CRT so interesting: an ill-defined, amorphous problem meets an ill-defined, amorphous solution, and the battleground, ironically, is schools, which have eagerly embraced surveillance over the past few years.

The aftermath of the Stoneman Douglas High School shooting in 2018 led to a boom in “hardening” schools, often by employing surveillance: Schools began installing iris scanners, gunshot-detection microphones, facial recognition for building access, and weapon-detecting robots. Online, schools turned to social media surveillance (on and off campus) that pings staff whenever students’ posts include words associated with suicide or shootings. As Republican lawmakers shirked a conversation on gun control, funding more surveillance and more officers in schools became the alternative.

When the pandemic hit, closing schools became a reason for surveillance. Schools began buying proctoring software that relies on facial recognition and even screen monitoring. Then, as schools reopened, surveillance firms started yet another pitch: This time, the same anti-shooting surveillance software could detect whether students were wearing masks or failing to social distance. Dual uses abound.

Dumbed-Down AI Rhetoric Harms Everyone

When the European Union Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and responses to it underscore democracies’ confusing rhetoric on AI.

Over the past decade, high-level stated goals about regulating AI have often conflicted with the specifics of regulatory proposals, and neither clearly articulates what the end state should look like. Coherent and meaningful progress on developing internationally attractive democratic AI regulation, even as that may vary from country to country, begins with resolving the discourse’s many contradictions and unsubtle characterizations.

The EU Commission has touted its proposal as an AI regulation landmark. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

This is certainly better than the stagnation of many national governments, especially the US, on setting rules of the road for companies, government agencies, and other institutions. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or operating buses in Málaga, Spain.

But casting the EU’s regulation as “leading” simply because it’s first only masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges at hand with democratic AI strategy.

Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists on the harms facial recognition can inflict on marginalized communities and grave mass-surveillance risks.

The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.

The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.

The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander from the nonprofit organization European Digital Rights described the proposal to the Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that on the surface appears to restrict facial recognition use but in fact has many broad carve-outs.

Humans Need to Create Interspecies Money to Save the Planet

A new form of digital currency for animals, trees, and other wildlife (no, not like Dogecoin) would help protect biodiversity and bend technology back to nature.