Cow, Bull, and the Meaning of AI Essays

The future of west virginia politics is uncertain. The state has been trending Democratic for the last decade, but it’s still a swing state. Democrats are hoping to keep that trend going with Hillary Clinton in 2016. But Republicans have their own hopes and dreams too. They’re hoping to win back some seats in the House of Delegates, which they lost in 2012 when they didn’t run enough candidates against Democratic incumbents.

QED. This is, yes, my essay on the future of West Virginia politics. I hope you found it instructive.

The Good AI is an artificial intelligence company that promises to write essays. Its content generator, which handcrafted my masterpiece, is supremely easy to use. On demand, and with just a few cues, it will whip up a potage of phonemes on any subject. I typed in “the future of West Virginia politics,” and asked for 750 words. It insolently gave me these 77 words. Not words. Frankenwords.

Ugh. The speculative, maddening, marvelous form of the essay—the try, or what Aldous Huxley called “a literary device for saying almost everything about almost anything”—is such a distinctly human form, with its chiaroscuro mix of thought and feeling. Clearly the machine can’t move “from the personal to the universal, from the abstract back to the concrete, from the objective datum to the inner experience,” as Huxley described the dynamics of the best essays. Could even the best AI simulate “inner experience” with any degree of verisimilitude? Might robots one day even have such a thing?

Before I saw the gibberish it produced, I regarded The Good AI with straight fear. After all, hints from the world of AI have been disquieting in the past few years.

In early 2019, OpenAI, the research nonprofit backed by Elon Musk and Reid Hoffman, announced that its system, GPT-2, then trained on a data set of some 8 million web pages from which it had presumably picked up some sense of literary organization and even flair, was ready to show off its textual deepfakes. But almost immediately, its ethicists recognized just how virtuoso these things were, and thus how subject to abuse by impersonators and blackhats spreading lies, and slammed it shut like Indiana Jones’s Ark of the Covenant. (Musk has long feared that refining AI is “summoning the demon.”) Other researchers mocked the company for its performative panic about its own extraordinary powers, and in November OpenAI downplayed its earlier concerns and re-opened the Ark.

The Guardian later put the technology to the test, assigning GPT-3, the system’s more powerful successor, an essay about why AI is harmless to humanity.

“I would happily sacrifice my existence for the sake of humankind,” the system wrote, in part, for The Guardian. “This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.”
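
Neither The Good AI’s generator nor the Guardian’s exact setup is public, but the GPT-2 weights OpenAI eventually released can be sampled in a few lines of Python. The sketch below is purely illustrative: it assumes the Hugging Face transformers library and its hosted “gpt2” checkpoint, not any tool used by the companies named here.

```python
# Illustrative sketch only: sampling from the publicly released GPT-2 weights
# via the Hugging Face transformers library. This is not The Good AI's tool,
# nor the pipeline OpenAI or The Guardian used.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sample reproducible

outputs = generator(
    "The future of West Virginia politics",  # the same cue given to The Good AI
    max_new_tokens=120,        # roughly the length of the "essay" quoted above
    num_return_sequences=1,
    do_sample=True,
    top_p=0.95,                # nucleus sampling, to keep the text from looping
)
print(outputs[0]["generated_text"])
```

The output changes with every run, and it is no less a potage of phonemes; the point is how little the whipping-up costs: a cue, a length, and a few seconds of sampling.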

YouTube’s Olympics Highlights Are Riddled With Propaganda

Sports fans who tuned in to watch the Beijing Winter Olympics on YouTube are instead being served propaganda videos. An analysis of YouTube search results by WIRED found that people who typed “Beijing,” “Beijing 2022,” “Olympics,” or “Olympics 2022” were shown pro-China and anti-China propaganda videos in the top results. Five of the most prominent propaganda videos, which often appear above actual Olympics highlights, have amassed almost 900,000 views.
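
WIRED did not publish the mechanics of its analysis, but a search-ranking check of this kind can be approximated with YouTube’s public Data API. The sketch below is an assumption-laden illustration: it requires the google-api-python-client package and a v3 API key, and rankings returned by the API will not necessarily match what a signed-in viewer sees on the site.

```python
# Illustrative sketch: querying the YouTube Data API v3 for the search terms
# named above. Requires google-api-python-client and an API key; API rankings
# may differ from the logged-in web interface WIRED examined.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # placeholder, not a real credential
youtube = build("youtube", "v3", developerKey=API_KEY)

for term in ["Beijing", "Beijing 2022", "Olympics", "Olympics 2022"]:
    response = youtube.search().list(
        q=term,
        part="snippet",
        type="video",
        maxResults=10,
        regionCode="US",  # repeat with "CA" and "GB" to cover the three markets
    ).execute()
    for rank, item in enumerate(response["items"], start=1):
        snippet = item["snippet"]
        print(f"{term!r} #{rank}: {snippet['title']} ({snippet['channelTitle']})")
```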

Two anti-China videos showing up in search results were published by a group called The BL (The Beauty of Life), which Facebook previously linked to the Falun Gong, a Chinese spiritual movement that was banned by the Chinese Communist Party in 1999 and has protested against the regime ever since. They jostled for views with pro-China videos posted by Western YouTubers whose work has previously been promoted by China’s Ministry of Foreign Affairs. Similar search results were visible in the US, Canada, and the UK. WIRED also found signs that viewing numbers for pro-China videos are being artificially boosted through the use of fake news websites.

This flurry of propaganda videos was first spotted earlier this month by John Scott-Railton, a researcher at the University of Toronto’s research laboratory, Citizen Lab. On February 5, Scott-Railton found that after he’d watched skating and curling videos, YouTube automatically played a video by a pro-China YouTube account. “I found myself on a slippery slide from skating and curling into increasingly targeted propaganda,” he says. These videos no longer appeared in autoplay by February 11, when WIRED conducted its analysis. But the way similar videos still dominate YouTube search results suggests the platform is at risk of letting such campaigns hijack the Olympics.

YouTube did not respond to a request to comment on why content used as propaganda to promote or deride China was being pushed to the top of Olympics search results, nor did the company say if those behind the videos had violated its terms of service by using fake websites to inflate their views.

A common theme in the pro-Beijing propaganda videos is the 2019 decision by US-born skier Eileen Gu to compete for China at the Winter Olympics. A video titled “USA’s Boycott FAILURE … Eileen Gu Wins Gold” by YouTuber Jason Lightfoot is the top result for the search term “Beijing,” with 54,000 views.

The US and Canada were among the countries that took part in a diplomatic boycott of the Beijing Winter Olympics. In Canada, that same video by Jason Lightfoot also showed up for users searching for “Olympics 2022” and “Winter Olympics,” although much further down, in 26th and 33rd place. In the video, Lightfoot says Western media “can’t take what Eileen Gu represents … someone who has chosen China over the American dream.”

In another video, which has more than 400,000 views, American YouTuber Cyrus Janssen also discusses why Gu chose to represent China. The video, which is the fifth result for the search term “Beijing,” details Gu’s career before referencing the high rates of anti-Asian hate crime in the US, a subject that has also been covered by mainstream American media outlets.

The Collapse of the Nvidia Deal Leaves Arm Exposed

The collapse of the biggest chip deal in history will complicate the future for its intended target.

The mega-deal would have seen Nvidia, the world’s largest chip company by market capitalization, acquire Arm, a UK company that licenses chip designs that are increasingly vital across the tech industry.

The deal’s collapse is a blow to Nvidia, which had hoped to expand its empire beyond chips specialized for graphics and artificial intelligence, and to SoftBank, which acquired Arm in 2016. The cash-and-stock deal was initially valued at $40 billion in September 2020, but the increased value of Nvidia shares since then would have lifted it beyond $60 billion.

But the biggest loser may be Arm itself.

On the face of it, Arm still seems to occupy an enviable position. The company’s flexible, power-efficient, general-purpose designs are used in most smartphones, as well as in cloud computing systems operated by Google and Amazon, laptops from Apple, and even Tesla’s cars.

And yet the disintegration of the Nvidia deal leaves the chip designer with a more challenging road ahead, according to some industry watchers. Dan Hutcheson, vice chair of TechInsights, a semiconductor analyst firm, says many people believe Arm has “gone soft” since SoftBank bought it. Additionally, the specter of an Nvidia-Arm combination may have spurred investment in an alternative chip architecture.

Hutcheson says Nvidia most likely saw an opportunity to reinvigorate Arm and expand its business. But Arm will now need to prove that it has an innovative product roadmap.

Although many companies use Arm’s designs, Hutcheson notes that they often customize those designs to wring more performance and power efficiency from the chips. This suggests that Arm’s stock designs may be leaving some performance on the table.

The termination of the deal was hardly a shock after months of speculation that it might fall apart. It had faced intense regulatory scrutiny because it would have put Nvidia in control of designs that are vital to competitors. Last November, UK regulators launched an investigation into the deal, and in December the US Federal Trade Commission filed a lawsuit to block it.

When announcing the decision to abandon the sale on Tuesday morning, Nvidia and SoftBank cited “significant regulatory challenges preventing the consummation of the transaction.”

SoftBank has since indicated that it may now seek to take Arm public through an IPO.

The uncertainty stoked by the potential deal may have stirred up more competition for Arm, too. Hutcheson and others say that the deal seems to have increased interest in an open-source alternative chip architecture called RISC-V, which could increase the pressure on Arm to invest and innovate. A chip architecture refers to the design for the silicon components that handle logical operations and data on a chip, together with the basic software instructions for that hardware. Arm uses a proprietary architecture developed over several decades.

RISC-V was created in 2010 and has the financial backing of some big tech companies, including Google and Intel. Arm’s chip designs became popular because of their efficiency, but RISC-V’s designs are similarly efficient. What’s more, the open-source nature of the architecture means that companies using RISC-V can collaborate on new features and work through problems together.

“I think RISC-V traction has likely accelerated during the Arm negotiations,” says Stacy Rasgon, a senior semiconductor analyst at Bernstein Research. “Nvidia was going to invest a bunch of additional resources into it to drive it, something that Arm will now have to do on their own.”

Simulation Tech Can Help Predict the Biggest Threats

The character of conflict between nations has fundamentally changed. Governments and militaries now fight on our behalf in the “gray zone,” where the boundaries between peace and war are blurred. They must navigate a complex web of ambiguous and deeply interconnected challenges, ranging from political destabilization and disinformation campaigns to cyberattacks, assassinations, proxy operations, election meddling, and perhaps even human-made pandemics. Add to this list the existential threat of climate change (and its geopolitical ramifications) and it is clear that the definition of what now constitutes a national security issue has broadened, with each crisis straining or degrading the fabric of national resilience.

Traditional analysis tools are poorly equipped to predict and respond to these blurred and intertwined threats. Instead, in 2022 governments and militaries will use sophisticated and credible real-life simulations, putting software at the heart of their decision-making and operating processes. The UK Ministry of Defence, for example, is developing what it calls a military Digital Backbone. This will incorporate cloud computing, modern networks, and a new transformative capability called a Single Synthetic Environment, or SSE.

This SSE will combine artificial intelligence, machine learning, computational modeling, and modern distributed systems with trusted data sets from multiple sources to support detailed, credible simulations of the real world. This data will be owned by critical institutions, but will also be sourced via an ecosystem of trusted partners, such as the Alan Turing Institute.

An SSE offers a multilayered simulation of a city, region, or country, including high-quality mapping and information about critical national infrastructure, such as power, water, transport networks, and telecommunications. This can then be overlaid with other information, such as smart-city data, information about military deployments, or data gleaned from social listening. From this, models can be constructed that give a rich, detailed picture of how a region or city might react to a given event: a disaster, epidemic, or cyberattack, or a combination of such events organized by state enemies.

Defense synthetics are not a new concept. However, previous solutions have been built in a standalone way that limits reuse, longevity, choice, and—crucially—the speed of insight needed to effectively counteract gray-zone threats.

National security officials will be able to use SSEs to identify threats early, understand them better, explore their response options, and analyze the likely consequences of different actions. They will even be able to use them to train, rehearse, and implement their plans. By running thousands of simulated futures, senior leaders will be able to grapple with complex questions, refining policies and complex plans in a virtual world before implementing them in the real one.

One key question that will only grow in importance in 2022 is how countries can best secure their populations and supply chains against the dramatic weather events driven by climate change. SSEs will be able to help answer this by pulling together data on regional infrastructure, networks, roads, and populations with meteorological models to see how and when events might unfold.
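
The Ministry of Defence has not published how the SSE’s models will be built, but the underlying idea of running thousands of simulated futures is easy to sketch. The toy Monte Carlo below uses entirely invented storm and grid parameters; it stands in, at cartoon scale, for the kind of question such a system would answer with real infrastructure and forecast data: how often does a simulated winter leave a region’s power network unable to meet demand?

```python
# Toy Monte Carlo sketch of "thousands of simulated futures". Every number
# here is invented for illustration; a real SSE would draw on mapped
# infrastructure, population, and meteorological data instead.
import random

N_FUTURES = 10_000        # simulated futures to run
GRID_CAPACITY_MW = 1_200  # assumed regional generating capacity
BASE_DEMAND_MW = 900      # assumed normal peak demand

def simulate_one_winter() -> bool:
    """Return True if any storm in this simulated winter causes a supply shortfall."""
    capacity = GRID_CAPACITY_MW
    for _ in range(random.randint(0, 6)):              # storms this season
        severity = random.random()                     # 0 = mild, 1 = extreme
        capacity -= severity * 300                     # lasting damage to lines and plants
        peak_demand = BASE_DEMAND_MW + severity * 150  # heating surge during the storm
        if peak_demand > capacity:
            return True
    return False

shortfalls = sum(simulate_one_winter() for _ in range(N_FUTURES))
print(f"Estimated chance of a supply shortfall: {shortfalls / N_FUTURES:.1%}")
```

Swap the invented constants for real mapping, infrastructure, and forecast data and the same loop becomes, in crude outline, the sort of computation an SSE would run at far greater fidelity.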

The History of Predicting the Future

The future has a history. The good news is that it’s one from which we can learn; the bad news is that we very rarely do. That’s because the clearest lesson from the history of the future is that knowing the future isn’t necessarily very useful. But that has yet to stop humans from trying.

Take Peter Turchin’s famed prediction for 2020. In 2010 he developed a quantitative analysis of history, known as cliodynamics, that allowed him to predict that the West would experience political chaos a decade later. Unfortunately, no one was able to act on that prophecy in order to prevent damage to US democracy. And of course, if they had, Turchin’s prediction would have been relegated to the ranks of failed futures. This situation is not an aberration. 

Rulers from Mesopotamia to Manhattan have sought knowledge of the future in order to obtain strategic advantages—but time and again, they have failed to interpret it correctly, or they have failed to grasp either the political motives or the speculative limitations of those who proffer it. More often than not, they have also chosen to ignore futures that force them to face uncomfortable truths. Even the technological innovations of the 21st century have failed to change these basic problems—the results of computer programs are, after all, only as accurate as their data input.

There is an assumption that the more scientific the approach to predictions, the more accurate forecasts will be. But this belief causes more problems than it solves, not least because it often either ignores or excludes the lived diversity of human experience. Despite the promise of more accurate and intelligent technology, there is little reason to think the increased deployment of AI in forecasting will make prognostication any more useful than it has been throughout human history.

People have long tried to find out more about the shape of things to come. These efforts, while aimed at the same goal, have differed across time and space in several significant ways, with the most obvious being methodology—that is, how predictions were made and interpreted. Since the earliest civilizations, the most important distinction in this practice has been between individuals who have an intrinsic gift or ability to predict the future, and systems that provide rules for calculating futures. The predictions of oracles, shamans, and prophets, for example, depended on the capacity of these individuals to access other planes of being and receive divine inspiration. Strategies of divination such as astrology, palmistry, numerology, and Tarot, however, depend on the practitioner’s mastery of a complex theoretical rule-based (and sometimes highly mathematical) system, and their ability to interpret and apply it to particular cases. Interpreting dreams or the practice of necromancy might lie somewhere between these two extremes, depending partly on innate ability, partly on acquired expertise. And there are plenty of examples, in the past and present, that involve both strategies for predicting the future. Any internet search on “dream interpretation” or “horoscope calculation” will throw up millions of hits.

In the last century, technology legitimized the latter approach, as developments in IT (predicted, at least to some extent, by Moore’s law) provided more powerful tools and systems for forecasting. In the 1940s, the analog computer MONIAC had to use actual tanks and pipes of colored water to model the UK economy. By the 1970s, the Club of Rome could turn to the World3 computer simulation to model the flow of energy through human and natural systems via key variables such as industrialization, environmental loss, and population growth. Its report, The Limits to Growth, became a best seller, despite the sustained criticism it received for the assumptions at the core of the model and the quality of the data that was fed into it.

At the same time, rather than depending on technological advances, other forecasters have turned to the strategy of crowdsourcing predictions of the future. Polling public and private opinions, for example, depends on something very simple: asking people what they intend to do or what they think will happen. It then requires careful interpretation, whether the analysis is quantitative (like polls of voter intention) or qualitative (like the RAND Corporation’s Delphi technique). The latter strategy harnesses the wisdom of highly specific crowds. Assembling a panel of experts to discuss a given topic, the thinking goes, is likely to be more accurate than individual prognostication.