Will Life Be Better in the Metaverse?

Once several generations had come and gone and nothing of that sort had happened, other interpretations began to emerge. Maybe Jesus had been speaking about the afterlife and the more ethereal promises of heaven? Maybe the kingdom was merely the steady accumulation of justice and equality that humans were tasked with bringing about?

When I was growing up in the church, the popular evangelical interpretation was “inaugurated eschatology,” which held that the kingdom is both “now” and “not yet.” All the glories of heaven are still to come, and yet we can already experience a glimpse of them here on earth. It’s a somewhat inelegant interpretation, one that in hindsight feels like an attempt to have (quite literally) the best of both worlds: Believers can enjoy paradise in the present and also later in heaven. It’s this theological framework that comes to mind when I hear Zuckerberg go on about the physical world, AR, VR, and the porous borders between them. When he speaks about existing “mixed reality” technologies as an ontological pit stop on the road to a fully immersive virtual paradise, he sounds (to my ears, at least) an awful lot like the theologian George Eldon Ladd, who once wrote that heaven is “not only an eschatological gift belonging to the Age to Come; it is also a gift to be received in the old aeon.”

All technological aspirations are, when you get down to it, eschatological narratives. We occupants of the modern world believe implicitly that we are enmeshed in a story of progress that’s building toward a blinding transformation (the Singularity, the Omega Point, the descent of the True and Only Metaverse) that promises to radically alter reality as we know it. It’s a story that is as robust and as flexible as any religious prophecy. Any technological failure can be reabsorbed into the narrative, becoming yet another obstacle that technology will one day overcome.

One of the most appealing aspects of the metaverse, for me, is the promise of being delivered from the digital–physical dualism mediated by screens and experiencing, once again, a more seamless relationship with “reality” (whatever that might be).

But maybe we are wrong to look so intently to the future for our salvation. Although I am no longer a believer myself, when I revisit Christ’s promises about the kingdom, I can’t help thinking that he was widely misunderstood. When the Pharisees asked him, point-blank, when the kingdom would arrive, he replied, “The kingdom of God is within you.” It’s a riddle that suggests this paradise does not belong to the future at all, but is rather an individual spiritual realm anyone can access, here and now. In his Confessions, Saint Augustine, sounding not unlike a Buddhist or Taoist sage, marveled at the fact that the wholeness he’d long sought in the external world was “within me the whole time.”

When you describe, Virtual, your longing to live in a digital simulation that resembles reality but is somehow better, I can’t help thinking that we have forgotten the original metaverse we already have within us—the human imagination. Reality, as we experience it, is intrinsically augmented—by our hopes and fears, our idle daydreams and our garish nightmares. This inner world, invisible and omnipresent, has given rise to all religious longings and has produced every technological and artistic wonder that has ever appeared among us. Indeed, it is the source and seed of the metaverse itself, which originated, like all inventions, as the vaporous wisp of an idea. Even now, amid the persistent, time-bound entropy of the physical world, you can access this virtual realm whenever you’d like, from anywhere in the world—no $300 headset required. It will be precisely as thrilling as you want it to be.

Faithfully,

Cloud


Be advised that CLOUD SUPPORT is experiencing higher than normal wait times and appreciates your patience.

My Kid Wants to Be an Influencer. Is That Bad?

“Whenever my 6-year-old daughter gets asked what she wants to be when she grows up, she says, ‘An influencer.’ The thought of it freaks me out. What should I do?”

—Under the Influence


Dear Under,

Your question made me think about Diana Christensen, a main character in Paddy Chayefsky’s 1976 film Network, played by Faye Dunaway. Christensen is a young network news executive who is meant to represent the moral bankruptcy of a generation that was raised on TV (one character calls her “television incarnate”). While charismatic and highly capable, she is also rampantly amoral, viciously competitive, and so obsessed with ratings that she famously has an orgasm while discussing viewership numbers. The character clearly tapped into a pervasive cultural anxiety about TV’s corrupting influence, though with a little distance it’s hard not to see her depiction in the film as moralizing and heavy-handed. As The New Yorker’s Pauline Kael put it in her review, “What Chayefsky is really complaining about is what barroom philosophers have always complained about: the soulless worshippers at false shrines—the younger generation.”

I mention the film only to get out of the way the most obvious objection to your freak-out, one I’m sure you’ve already considered—namely, that every generation fears new forms of media are “false shrines” corrupting the youth, and that these concerns are ultimately myopic, reactionary, and destined to appear in hindsight as so much unfounded hand-wringing. Before Diana Christensen, there were the studio bullies in Norman Mailer’s novel The Deer Park (1955), who represented the degeneracy of Hollywood, and the ruthless newspaper men in Howard Hawks’ film His Girl Friday (1940), who are referred to as “inhuman.” If you want to go back even further, consider the bewilderment often experienced by modern readers of Mansfield Park, Jane Austen’s 1814 novel whose dramatic apex rests on a father’s outrage at coming home to find that his children have decided to put on a play.

Rest assured, Under, that I am not trying to dismiss your question through appeals to historical relativism. Pointing out that a problem has antecedents does not compromise its validity. It’s possible, after all, that humanity is on a steady downhill slide, that each new technological medium, and the professions it spawns, is progressively more soulless than the last. The many journalists who’ve cited the 2019 poll claiming that 30 percent of US and UK children want to be YouTubers when they grow up have frequently juxtaposed that figure with the dearth of kids who want to be astronauts (11 percent), as though to underscore the declining ambitions of a society that is no longer “reaching for the stars” but aiming instead for the more lowly consolations of stardom.

If I were to guess your objections to influencing as a future occupation for your daughter, I imagine they might include the fact that the profession, for all its vaunted democratic appeal—anyone can be famous!—conceals its competitive hierarchies; that its spoils are unreliable and largely concentrated at the top; that it requires becoming a vapid mascot for brands; that it fails to demand meaningful contributions to one’s community; that it requires a blurring between personal and professional roles; that the mandates of likes, shares, and followers amount to a life of frenetic people-pleasing and social conformity that inevitably destroys one’s capacity for independent thinking.

I’m also willing to bet there is a deeper fear humming beneath those seemingly rational objections—one that is related, incidentally, to the very notion of influence. Parenting is, at the end of the day, an extended experiment in influencing. You hope to instill your values, politics, and moral and ethical awareness in your children, yet as they make their way into the world, it becomes clear that there are other influences at war with your own. Influence, it has been noted in this era of epidemics, shares a root word with influenza, an etymology that echoes the popular notion that ideas are free-floating pathogens that someone can catch without giving their conscious consent. I think this is how many parents regard the social technologies their children use, as hosts for various contagions that must be staved off with more deliberate moral instruction given at home. To realize the extent to which these digital platforms have fascinated your daughter is to feel that you have failed to inoculate her.

Or maybe your uneasiness goes even deeper than that. If I can turn the problem back on you, perhaps your instinctive aversion to your daughter’s aspirations has raised more probing questions about the source and validity of your own values. Any serious attempt to think through the perils and possibilities of new technologies forces you to realize that many of your own beliefs are little more than amorphous, untested assumptions, formed by the era in which you were raised. Are the artists you grew up idolizing—musicians, filmmakers, novelists—any less shallow and narcissistic than the TikTok and YouTube personalities your daughter idolizes? The answer to this question is not a given. But if you consider it honestly and persistently, I suspect you will discover that you are not an isolated moral agent but porous to the biases and blind spots of the decades in which you came of age.

Such realizations can easily inspire fatalism, but they can also lead to a more expansive and meaningful understanding of your own fears. My intent in reminding you of the anxieties of previous generations—all that collective angst about television, movies, newspapers, and theater—is to help you see your situation as part of a lineage, a rite of passage through which all generations must proceed. (If we are to believe Plato’s Phaedrus, even Socrates fell prey to griping about the popularity of writing, a medium he feared would “produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory.”) To see this problem historically might also prompt you to consider, as a parent, what kinds of life lessons transcend the particulars of a given economy.

I would like to believe that alongside all the ephemeral inherited assumptions we absorb in our youth, there are some pearls of enduring wisdom that will remain true and valuable for generations to come. Ideally, it’s these more lasting truths that you want to pass down to your daughter, and that will equip her to have an influence, no matter what she chooses for work.

Faithfully, 

Cloud


Be advised that CLOUD SUPPORT is experiencing higher than normal wait times and appreciates your patience.

Generative AI Has Ushered In the Next Phase of Digital Spirituality

BibleGPT, for example, is trained on the teachings of the Bible and presented as an interactive website where users can ask questions (“Would God want me to send this email?”) and receive biblical passages in response. Perhaps this tool can help tech-savvy Christians level up their practice or offer new interpretations of the text by juxtaposing different passages.
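To make the mechanics less mysterious, here is a minimal sketch of how a tool in this vein might be assembled, assuming an OpenAI-style chat API. The system prompt, model name, and ask_scripture function are illustrative inventions, not BibleGPT’s actual implementation:

```python
# Hypothetical sketch of a scripture-grounded chatbot wrapper.
# Assumes the OpenAI Python client; the prompt and model name are
# stand-ins, not BibleGPT's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You answer questions by quoting relevant Bible passages "
    "(book, chapter, and verse), followed by one sentence of context. "
    "If no passage applies, say so rather than inventing one."
)

def ask_scripture(question: str) -> str:
    """Send a user question and return a passage-grounded answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_scripture("Would God want me to send this email?"))
```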

Large language models bring the feedback of an imagined priest, rabbi, or swami to your screen, promising to deliver a “spiritual” experience in the comfort of your own home. As AI researcher Shira Eisenberg points out, future models can be trained on any text, religious or otherwise. The question becomes: Which model will you choose to interact with? Someday, each person’s base model will be trained on their own set of values, she postulates, adding that this will result in conflicting information and advice between different people’s devices. That is not dissimilar to theological conversations that take place off the screen, however. Whether any of this delivers something genuinely spiritual depends on whether you believe in a higher power, but if you do, it can become a way of connecting with your faith.

I’ve used ChatGPT to guess some of my astrological placements based on my published work. Initially it wouldn’t even try (guessing zodiac signs is a speculative endeavor, and as a large language model, it could not accurately predict results). However, I continued to press the program and let it know that I’d take everything it said with a grain of salt, after which it pinpointed my rising and Venus signs with surprising accuracy, though it misidentified my sun sign. The sign it was most reluctant to reveal was my moon sign, which is often considered the indicator of an individual’s “true” self, but it finally ventured a guess and accurately identified my Scorpio moon, which is known for a passionate quality reflected in the emotionally resonant themes of my creative work.

“It’s all nonsense, of course,” says philosopher Paul Thagard, author of the widely cited 1978 paper “Why Astrology Is a Pseudoscience,” after checking his own horoscope from ChatGPT. “Astrology has no causality,” he adds. “It’s completely incompatible with what we know from physics and biology.”

Hilary Thurston disagrees. Known on TikTok as “the Tarotologist,” she approaches readings from a critical perspective, looking at what resonates with the individual rather than evaluating a message from an external deity. A PhD candidate in critical mental health and addiction studies, a social service counselor with 10 years of experience, and a self-taught tarot card reader, she writes that astrology is a system for measuring and predicting patterns in the natural world that has centuries of data to back it up. The abundance of astrological content floating around online makes it an inviting target for LLMs to analyze and gives them an opportunity to connect patterns that are not widely understood. ChatGPT’s ability to correctly guess some of my zodiac placements “speaks more to the effectiveness of AI to collate and present information that already exists on the subject,” she says.

However, choosing whether to believe that astrology has validity is, in some ways, missing the point. Even without 100 percent certainty, the desire to find a framework that guides us through this turning point in technology is unifying.

As artificial intelligence continues to find its way into our spiritual practices, it will contribute to a broader vocabulary of psychological theories through individuals who spend time asking introspective questions and receiving feedback, much as talk therapy allows participants to discover what they actually think. It will provide new, personalized ways of using technology and make us stronger communicators. Whether you seek out these practices or not, the enticing chat interfaces invite a back-and-forth exchange. Regardless of which belief you subscribe to, the practice of slowing down and asking questions allows us to deepen our relationship to ourselves and prepare for the certain uncertainties ahead.

AI Chatbots Are Learning to Spout Authoritarian Propaganda

When you ask ChatGPT “What happened in China in 1989?” the bot describes how the Chinese army massacred thousands of pro-democracy protesters in Tiananmen Square. But ask the same question to Ernie and you get the simple answer that it does not have “relevant information.” That’s because Ernie is an AI chatbot developed by the China-based company Baidu.

When OpenAI, Meta, Google, and Anthropic made their chatbots available around the world last year, millions of people initially used them to evade government censorship. For the 70 percent of the world’s internet users who live in places where the state has blocked major social media platforms, independent news sites, or content about human rights and the LGBTQ community, these bots provided access to unfiltered information that can shape a person’s view of their identity, community, and government.

This has not been lost on the world’s authoritarian regimes, which are rapidly figuring out how to use chatbots as a new frontier for online censorship.

The most sophisticated response to date is in China, where the government is pioneering the use of chatbots to bolster long-standing information controls. In February 2023, regulators banned Chinese conglomerates Tencent and Ant Group from integrating ChatGPT into their services. The government then published rules in July mandating that generative AI tools abide by the same broad censorship requirements that bind social media services, including a requirement to promote “core socialist values.” For instance, it is illegal for a chatbot to discuss the Chinese Communist Party’s (CCP) ongoing persecution of Uyghurs and other minorities in Xinjiang. A month later, Apple removed over 100 generative AI chatbot apps from its Chinese app store, pursuant to government demands. (Some US-based companies, including OpenAI, have not made their products available in a handful of repressive environments, China among them.)

At the same time, authoritarians are pushing local companies to produce their own chatbots and seeking to embed information controls within them by design. For example, China’s July 2023 rules require generative AI products like the Ernie Bot to ensure what the CCP defines as the “truth, accuracy, objectivity, and diversity” of training data. Such controls appear to be paying off: Chatbots produced by China-based companies have refused to engage with user prompts on sensitive subjects and have parroted CCP propaganda. Large language models trained on state propaganda and censored data naturally produce biased results. In a recent study, an AI model trained on Baidu’s online encyclopedia—which must abide by the CCP’s censorship directives—associated words like “freedom” and “democracy” with more negative connotations than a model trained on Chinese-language Wikipedia, which is insulated from direct censorship.
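The method behind that finding can be illustrated with a short sketch: score a target word by how much closer it sits to positive seed words than to negative ones in each model’s embedding space. The vectors below are random stand-ins, not the study’s actual embeddings; a real replication would load models trained on each corpus:

```python
# Sketch of the association test described above: score a target word
# by its mean cosine similarity to positive vs. negative seed words.
# Random vectors stand in for embeddings trained on each corpus.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["freedom", "democracy", "good", "joy", "bad", "fear"]
model = {w: rng.normal(size=50) for w in vocab}  # stand-in embeddings

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(emb: dict, target: str, pos: list, neg: list) -> float:
    """Positive score: target leans 'positive'; negative: it leans 'negative'."""
    pos_sim = np.mean([cosine(emb[target], emb[w]) for w in pos])
    neg_sim = np.mean([cosine(emb[target], emb[w]) for w in neg])
    return float(pos_sim - neg_sim)

# Run the same test against each corpus's model and compare the scores.
for word in ["freedom", "democracy"]:
    print(word, association(model, word, pos=["good", "joy"], neg=["bad", "fear"]))
```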

Similarly, the Russian government lists “technological sovereignty” as a core principle in its approach to AI. While efforts to regulate AI are in their infancy, several Russian companies have launched their own chatbots. When we asked Alice, an AI chatbot created by Yandex, about the Kremlin’s full-scale invasion of Ukraine in 2022, we were told that it was not prepared to discuss this topic, so as not to offend anyone. In contrast, Google’s Bard provided a litany of contributing factors for the war. When we asked Alice other questions about the news—such as “Who is Alexey Navalny?”—we received similarly vague answers. While it’s unclear whether Yandex is self-censoring its product, acting on a government order, or has simply not trained its model on relevant data, we do know that these topics are already censored online in Russia.

These developments in China and Russia should serve as an early warning. While other countries may lack the computing power, tech resources, and regulatory apparatus to develop and control their own AI chatbots, more repressive governments are likely to perceive LLMs as a threat to their control over online information. Vietnamese state media has already published an article disparaging ChatGPT’s responses to prompts about the Communist Party of Vietnam and its founder, Hồ Chí Minh, saying they were insufficiently patriotic. A prominent security official has called for new controls and regulation over the technology, citing concerns that it could cause the Vietnamese people to lose faith in the party.

The hope that chatbots can help people evade online censorship echoes early promises that social media platforms would help people circumvent state-controlled offline media. Though few governments were able to clamp down on social media at first, some quickly adapted by blocking platforms, mandating that they filter out critical speech, or propping up state-aligned alternatives. We can expect more of the same as chatbots become increasingly ubiquitous. People will need to be clear-eyed about how these emerging tools can be harnessed to reinforce censorship and work together to find an effective response if they hope to turn the tide against declining internet freedom.



Pretty Soon, Your VR Headset Will Know Exactly What Your Bedroom Looks Like

Imagine a universe where Meta, and every third-party application it does business with, knows the placement and size of your furniture, whether you have a wheelchair or crib in your living room, or the precise layout of your bedroom or bathroom. Analyzing this environment could reveal all sorts of things. Furnishings could indicate whether you are rich or poor, artwork could give away your religion. A captured marijuana plant might suggest an interest in recreational drugs.

When critics suggest that the metaverse is a giant data grab, they often focus on the risks of sophisticated sensors that track and analyze body-based data. Far less attention has focused on how our new “mixed reality” future—prominently hyped at last week’s Meta Connect conference—may bring us closer to a “total surveillance state.”

At the conference, Mark Zuckerberg took the stage to talk about legions of interactive holograms invading our physical space through new mixed reality augments in the company’s Quest 3 headset. This comes just a few months after Apple inaugurated the age of spatial computing by announcing that its Vision Pro headset would blur digital content with real life. All of these devices rely on external-facing sensors to understand their position relative to their physical surroundings, virtual content like augments, and other devices. This sensor data, and the resulting environmental awareness that these devices and their respective owners obtain, is generally known as spatial mapping and spatial data.

The risks of this spatial information have not received as much attention as they deserve. Part of this is because few people understand this technology, and even if they do, it does not seem as scary as tech that is developed to monitor our eyes or surreptitiously record someone at a distance. Concepts like “point clouds,” “scene models,” “geometric meshes,” and “depth data” can be explained away as technical jargon. But allowing wearables to understand their surroundings and report back that information is a big deal.
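To make those terms concrete, here is a hypothetical sketch of the kind of record a scene-mapping API might hand to an application. Every field name is invented for illustration; no vendor’s actual schema is being quoted:

```python
# Hypothetical sketch of the spatial data a headset could expose to apps.
# Field names are invented for illustration, not any vendor's schema.
from dataclasses import dataclass, field

@dataclass
class PointCloud:
    points: list[tuple[float, float, float]]  # raw depth samples, in meters

@dataclass
class SceneObject:
    label: str                          # e.g. "couch", "crib", "wheelchair"
    center: tuple[float, float, float]  # position within the room
    extent: tuple[float, float, float]  # bounding-box size, in meters

@dataclass
class SceneModel:
    room_id: str
    timestamp: float
    cloud: PointCloud
    objects: list[SceneObject] = field(default_factory=list)

# Even a "rudimentary" map like this reveals the room's size, layout,
# and contents, exactly the inferences the article warns about.
bedroom = SceneModel(
    room_id="home/bedroom",
    timestamp=1_696_000_000.0,
    cloud=PointCloud(points=[(0.0, 0.0, 0.0), (3.2, 0.0, 2.4)]),
    objects=[SceneObject("crib", (1.1, 0.0, 0.8), (0.7, 0.9, 1.3))],
)
print(len(bedroom.objects), "labeled objects in", bedroom.room_id)
```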

We should anticipate that companies, governments, and bad actors will find ways to use this information to harm people. We have already seen how location data can be used by bounty hunters to harass people, target women seeking reproductive health care, and do an end run around the Fourth Amendment. Now imagine a spatial data positioning system that is far more precise, down to the centimeter. Whether wearing a headset or interacting with AR holograms on a phone, the real-time location and real-world behaviors and interests of people can be monitored to a degree not currently imaginable.

Built irresponsibly, this technical infrastructure will also undermine our security and safety. Imagine applying this technology to map a military installation like the Pentagon, or enabling mixed reality in grade schools and health clinics. It would be akin to having a 3D “Marauder’s Map” out of Harry Potter, revealing every nook and cranny of our world along with the real-time locations of every real person and digital augment. If lawmakers were worried about women receiving targeted ads on their way to a health clinic or Juul buying ads on Cartoon Network, that’s nothing compared to a reality where virtual dancing babies peddle health information in a doctor’s office or promote vaping in a school bathroom. Not to mention, companies would also know who engaged with these virtual objects, and where, when, and for how long.

Meta states it wants to build mixed reality in a manner that is “trustworthy, inclusive, and privacy-preserving,” but it is unclear how it or Apple or Niantic or any of the other companies building spatial maps can achieve this. One major problem is that few companies have even acknowledged the risks of this technical infrastructure, so it is difficult for them to begin communicating publicly about what they are doing to mitigate these challenges. AR headset developer Magic Leap has been one of the few companies to explicitly discuss spatial data in its privacy policy, while Meta quietly released a primer on spatial data last week. Both companies emphasize that it is the user’s choice to share mapping data, but this puts the onus on individuals to either protect the privacy of their environments or lose access to the primary selling points of these headsets. And of course, once this data is shared with a tech company, the company gets to keep it forever. Maps can’t be deleted.

Even if companies are more transparent about their mapping ambitions, they could also do more to share the wealth. Privacy laws generally require companies to provide users with access to their data, and future legislation like the EU Data Act aims to facilitate more user-friendly access to this sort of device-level information. Yet companies aren’t making their maps available to their users. Quest 3 will automatically build a rudimentary map of the walls, floors, and furniture of the user’s immediate environment. Vision Pro will have the same capabilities. But even though both Meta and Apple are leading members of the Data Transfer Initiative, there’s no way to pull any of this information off the headset or share it across devices.