Tinder Thinks Love Has No Borders—Even in the West Bank

I Was in Israel. I Swiped Right on a Man in Palestine

On the ground, the particular geopolitical situation of Israel and Palestine, with its checkpoints and patchwork of territorial designations, also shapes who uses Tinder’s service and how. Although the interface includes no explicit mention of the separation barrier aside from a dashed gray line to indicate a disputed border, users in the region face a significant obstacle: When Palestinians and Jewish Israelis do match, there is often no legal way for them to meet without leaving the country entirely, despite their geographic proximity when swiping. Israelis can cross the Green Line to travel on segregated roads to Israeli settlements, but not to Palestinian cities or villages. Palestinians in the West Bank, meanwhile, cannot cross the Green Line at all without a permit, which can be exceedingly difficult to obtain. Palestinians who do have a Jerusalem ID or hold Israeli citizenship can travel freely in Israel and Palestine to go on dates when they find a match. But the users I spoke with who do not have this freedom of movement say they are deterred by the fact that the vast majority of people they see on the app are either on the other side of a line that they cannot cross, or are located in Israeli settlements, where it is generally unsafe for them to travel. As a result, in the occupied West Bank the ability of different populations to use Tinder’s service to talk to and meet geographically proximate people varies, largely along ethnic lines.

Of course, Tinder is not itself responsible for the injustices of military occupation. Still, by not acknowledging the ways that existing political dynamics shape the scope of its service, the company effectively normalizes occupation, treating de jure segregation (and the access differential it creates) as an acceptable condition under which a geolocation-based dating app can operate.

Samir, for his part, encountered these obstacles many times. In the early days of our friendship, he told me that if I did come to Ramallah I would be the first person from the app he’d meet in person while swiping from Palestine. He had matched with Jewish Israelis before, but until I crossed the Green Line, his Tinder relationships had been purely virtual.

“A couple times we got to know each other and they’d say, ‘If you’re ever able to get a permit and you can come in, hit me up,’ but it never happened,” Samir recounts. He also mentions matching with an Israeli woman in Ariel, a nearby settlement, on Tinder, but says he was uncomfortable when he found out where she lived.

“She invited me to come to Ariel,” he tells me, “but I said, ‘Hell no.’”

In recent years, we as users have collectively begun to question the idea that technology companies bear no responsibility when their platforms are used to disseminate misinformation, sway elections, and wage war. What we have not paid enough attention to, however, is the potential for the core functionality of the technology itself to have incidental political implications, and for nonpartisan companies to participate in marginalization by default. Often, it seems, their obligation to thoughtfully and carefully navigate the geopolitical circumstances of prospective markets is overlooked by a culture that, even amid a techlash, sees access to the free market of technological tools as an indicator of progress.

The Mathematics of Cancel Culture

Makers of our AI-powered devices spend a lot of time canceling friction, making just about everything a no-brainer. They require less and less of us because they do more and more, whether we want them to or not. One click instead of two. They make it effortless to say things, buy things, even cancel things. We don’t need to think twice. Or think at all.

But friction is a good thing—and not just because it might slow down your ability to send that text you later wish you hadn’t or make butt dialing more difficult. We need friction to walk across the room. 

Besides, deleting rarely erases things completely (your old texts included). Canceling leaves traces. In college, I received a report card (a real thing back then) with an inked A in physics crossed out, written over with a B—the ghost of the A still clear. I’d recently declined several invitations from my aged professor to meet after class for a drink. Sexual harassment didn’t even have a name at the time. But the experience canceled my interest in physics for quite a few years.

As we all know, vanquished enemies often return, sometimes in different form. Sometimes they come back to bite you. Our campaign to cancel “germs” has been so successful it’s helped to produce stronger strains of drug-resistant bacteria.

So what’s the alternative? Bad, dangerous, and dumb things abound. If we don’t cancel them, then what? 

In some obvious cases, addition can eliminate the need for subtraction—though it’s likely slower, more difficult, more expensive. For example, I read that analog clocks are being taken out of school classrooms. Why? Because students no longer knew how to use them to tell time. Given that clocks are analogues of the Earth’s rotation, that’s a bigger loss than it may seem. Why not just teach kids to read the hands on a clock?

Most canceling is far less trivial, of course, but options do usually exist—even if they require time and resources (and thought). We can repair, reframe, revisit, refashion, restrain, redirect, repurpose, restructure, rework, retool, reduce, refocus, retrofit, reboot, rethink, reform, and so on. The reformation of our legal system is something law professor Jody Armour has studied and lived for a lifetime and reimagines in his new book, N*gga Theory: Race, Language, Unequal Justice, and the Law. A truly progressive legal system, Armour argues, values restoration, rehabilitation, and redemption over retribution, retaliation, and revenge.

Science could not progress if it canceled old ways of understanding in favor of new. Very rarely do scientists entirely abandon even wrong and discarded ideas. Rather, the building blocks remain, but take on new meaning and context with the discovery of new knowledge, more complete theories, clearer explanations. Science is essentially additive. 

I personally find it strange that most people seem to see aging as mostly a matter of cancellation. True, getting old pares away mobility of our limbs, shaves range and acuity from our senses, severs ties, shrinks stature, chisels away at memory. For me, however, what’s gained easily equals what’s lost. Sure, I’d rather do without the aches and pains, but they force me to jury-rig my way around obstacles—which is a fun challenge (sometimes). If my joints are less flexible, my outlook is more so. I remember less but know more. I have lower energy but more interests. I laugh more. Sometimes it’s the only thing you can do. Nothing wrong with that.

The biggest thing we’ve lost to cancel culture is conversation itself. We’re afraid we’ll say the wrong thing. We’re afraid we’ll get canceled. Sometimes we don’t bother even to cancel and simply “ghost”—the passive-aggressive version.

Probably needless to say, the specter of being ghosted, canceled, has haunted me all the while I’ve been writing this piece. But as I’m closer to my expiration date than most, it wouldn’t matter much. Nature will cancel me permanently, soon enough.


Games Can Show Us How to Enact Justice in the Metaverse

It was 2016, and Jordan Belamire was excited to experience QuiVr, a new fantastical virtual reality game, for the first time. With her husband and brother-in-law looking on, she put on a VR headset and became immersed in a snowy landscape. Represented by a disembodied set of floating hands along with a quiver, bow, and hood, Belamire was now tasked with taking up her weapons to fight mesmerizing hordes of glowing monsters.

But her excitement quickly turned sour. Upon entering online multiplayer mode and using voice chat, another player in the virtual world began to make rubbing, grabbing, and pinching gestures at her avatar. Despite her protests, this behavior continued until Belamire took the headset off and quit the game.

My colleagues and I analyzed responses to Belamire’s subsequent account of her “first virtual reality groping” and observed a clear lack of consensus around harmful behavior in virtual spaces. Though many expressed disgust at this player’s actions and empathized with Belamire’s description of her experience as “real” and “violating,” other respondents were less sympathetic—after all, they argued, no physical contact occurred, and she always had the option to exit the game.

Incidents of unwanted sexual interactions are by no means rare in existing social VR spaces and other virtual worlds, and plenty of other troubling virtual behaviors (like the theft of virtual items) have become all too common. All these incidents leave us uncertain about where “virtual” ends and “reality” begins, challenging us to figure out how to avoid importing real-world problems into the virtual world and how to govern when injustice happens in the digital realm.

Now, with Facebook heralding the coming metaverse and proposing to move our work and social interactions into VR, the importance of dealing with harmful behaviors in these spaces is brought even more sharply into focus. Researchers and designers of virtual worlds are increasingly setting their sights on more proactive methods of virtual governance that not only deal with acts like virtual groping once they occur, but discourage such acts in the first place while encouraging more positive behaviors too.

These designers are not starting entirely from scratch. Multiplayer digital gaming—which has a long history of managing large and sometimes toxic communities—offers a wealth of ideas that are key to understanding what it means to cultivate responsible and thriving VR spaces through proactive means. By showing us how we can harness the power of virtual communities and implement inclusive design practices, multiplayer games help pave the way for a better future in VR.

The laws of the real world—at least in their current state—are not well-placed to solve the real wrongs that occur in fast-paced digital environments. My own research on ethics and multiplayer games revealed that players can be resistant to “outside interference” in virtual affairs. And there are practical problems, too: In fluid, globalized online communities, it’s difficult to know how to adequately identify suspects and determine jurisdiction.

And certainly, technology can’t solve all of our problems. As researchers, designers, and critics pointed out at the 2021 Game Developers Conference, combating harassment in virtual worlds requires deeper structural changes across both our physical and digital lives. But if doing nothing is not an option, and if existing real-world laws can be inappropriate or ineffective, then in the meantime we must turn to technology-based tools to proactively manage VR communities.

Right now, one of the most common forms of governance in virtual worlds is a reactive and punitive form of moderation based on reporting users, who may then be warned, suspended, or banned. Given the sheer size of virtual communities, these processes are often automated: an AI might process reports and remove users or content, or removals may be triggered once a certain number of reports have been filed against a particular user.
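To make those mechanics concrete, here is a minimal sketch of what a report-threshold rule might look like in code. It is purely illustrative: the threshold value, class name, and method names are assumptions made for this example, not the workings of any real platform’s moderation system.

from collections import defaultdict

# Purely illustrative: a toy report-threshold moderator, not any platform's actual system.
REPORT_THRESHOLD = 5  # assumed number of distinct reporters that triggers an automatic suspension

class ReportModerator:
    def __init__(self, threshold: int = REPORT_THRESHOLD):
        self.threshold = threshold
        self.reports = defaultdict(set)  # maps a reported user to the set of users who reported them
        self.suspended = set()

    def file_report(self, reporter: str, reported: str) -> bool:
        """Record a report and return True if the reported user ends up suspended."""
        if reported in self.suspended:
            return True
        self.reports[reported].add(reporter)  # each reporter is counted only once
        if len(self.reports[reported]) >= self.threshold:
            self.suspended.add(reported)  # reactive, punitive outcome: removal after the harm
            return True
        return False

# Example: five distinct reports against "player_42" trigger a suspension.
mod = ReportModerator()
for reporter in ("a", "b", "c", "d", "e"):
    banned = mod.file_report(reporter, "player_42")
print(banned)  # True

Even in this toy form, the limits of the approach are visible: nothing here discourages the behavior in the first place. The system only reacts once enough people have already been harmed.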

The Facebook Whistleblower Won’t Change Anything

Truth be told, predicting the future isn’t my strong suit (and I have a trophy to prove it)—but here’s one prediction I make with full confidence: The latest Facebook revelations, courtesy of whistleblower Frances Haugen, will have zero impact on regulation. No new laws, no new regulations, no new challenges worth a damn. And the issue isn’t Haugen’s testimony or proposals (not that there aren’t issues with both), nor the inanity of some of the questions she got in return (ditto). Rather, the issue is with the expectations we place on whistleblowing. The idea we have of what whistleblowing can achieve.

If whistleblowing has an archetypal story, it goes something like this. A stand-up figure within an organization, an everyperson, comes face to face with some central injustice the organization is perpetuating. Sometimes the motive is company profit, sometimes it’s personal profit, but whatever the case, there’s a smoke-filled room of men with cigars cackling while the rest of the world—including regulators—carry on oblivious to the damage being done. At great personal risk, the everyperson goes public with their concerns: the truth outs. There are hearings called, exposés published, laws passed—the sclerotic machinery of oversight belatedly kicks into gear, and the people in charge exchange their cigars for handcuffs. Think: Sherron Watkins, Cynthia Cooper, or Daniel Ellsberg.

This is a popular idea for how change happens, and its popularity is no surprise, because the change it promises riffs on some very foundational myths of American society. It’s built on the assumption of good intentions—on the idea that, sans a few ne’er-do-wells, regulators (and organizational employees, and legislators) are ultimately just dependent on the right information to ensure justice is done. It’s built on assumptions about the importance of the individual whistleblower—the individual, full stop. No wonder that, in a cultural milieu that so loves its individualism (even, as Rodrigo Nunes notes, on the left), we hold up the whistleblower as the path to justice. But today, whistleblowing doesn’t make movements for collective change more possible; to the contrary, as I’ve previously written, with its insistence on the individual expert as the source of change, it makes them more difficult to sustain. Precisely because it venerates the single, public, heroic figure, the notion of whistleblowing actively denigrates the less glamorous work necessary to sustain activism.

These assumptions obscure some awkward truths of their own. They obscure, for example, how central the identity and perspective of “the whistleblower” are to the audience they receive. Many people have, quite rightly, highlighted the different experiences of Frances Haugen and Sophie Zhang: the former a nice white lady whose concerns don’t stop her from arguing that Facebook should not be broken up, the latter an Asian American woman who sees Facebook’s ideology and financial interests as fundamentally undercutting efforts to solve these problems. Only one of them got a congressional hearing. We might compare both to Alex Stamos, whose resignation from Facebook in 2018 resulted in a write-your-own-job-description offer at Stanford, and contrast all of the above with Timnit Gebru, who was fired from Google for (so far as anyone can determine) having the temerity to be angry while Black. As Daniela Agostinho, Nanna Bonde Thylstrup, and I have noted at different points, who tells the truth, matters. What that truth is, matters. The stories that have legs—that get uptake from the status quo—tend to be those that challenge it the least.

Even if whistleblowers’ treatment were entirely neutral (whatever that means), they still can’t save us. Because of that other assumption hidden behind whistleblowing: that the truth is the only thing standing between the present and a just future. It’s hard to see how, exactly, this idea lines up with our current reality. In 2002, the Sarbanes-Oxley Act, inspired (in part) by Cooper’s and Watkins’ disclosures, passed into law after a 423–3 vote in the House of Representatives and a 99–0 vote in the Senate. In contrast, the present day sees a struggle to acquire even a single cross-party vote on issues as seemingly uncontroversial as “the government should avoid defaulting on its debt.”

In this environment, whistleblowing can’t save us, because the issue isn’t an absence of information but an absence of will. And what builds will, and shifts norms, doesn’t look like a single, isolated figure speaking truth, but like mass movements of people setting new standards and making clear there are costs to regulators and companies for not attending to them.

Does this mean whistleblowing is pointless? Of course not. Information always has the potential to be useful if deployed correctly. But the default attitude toward whistleblowing—tell some left-leaning newspapers, tell some legislators, and the hard work is done—is simply naive. The most generous of interpretations is that these figures genuinely believe this is the hard work; that they have that aforementioned faith in the institutions they’re disclosing to. The less generous interpretation is that it is, to a certain degree, a secular form of confession: an unburdening of souls by complicit figures who wish for absolution (absolution that, just-so-coincidentally, sets them up for an entirely new career as the “acceptable” and “safe” technology critic, with a contract for a mediocre book and a largely vacuous research institute).

But if whistleblowing-as-usual isn’t changing anything, then what we need is a different way to approach whistleblowing and disclosures, a way that treats whistleblowers’ knowledge as just one tool in a wider repertoire, and their expertise as one stock of knowledge in a broader realm of concerned, invested, and knowledgeable actors.

Rather than discharging their moral responsibilities through disclosing to regulators and walking away to start their own think tank, whistleblowers might seek to strengthen, draw attention to, and participate in the many pre-existing movements for change in this area—movements led and driven by those people most affected by technology’s excesses. Our Data Bodies, the carceral tech resistance network, the Detroit Community Technology Project: all of these collectives and organizations have been working on problems of surveillance, power, and injustice in data and technology since long before Haugen (or Harris, or Willis, or, or, or …) turned critic. They might seek to participate in an ecology of activism: a multitudinous array of actors coordinating for change rather than competing for visibility.

Imagine if, rather than disclosing to The Wall Street Journal or The New York Times, Frances Haugen (or any of a whole range of other self-appointed moral compasses) had disclosed to those movements. If they had gone to existing organizations, already doing the work, who could contextualize the knowledge they brought, and made those organizations the focus of the story. Imagine if they had used the attention that comes from the information they brought not to keep the limelight on their (individual, insider) perspectives on what change looks like, but to shift it to the longer-term thinking of the people who have viscerally experienced the consequences that whistleblowers find themselves more abstractly squirming over. Imagine if we, the public, and they, the regulators, could see proposals coming not from a Silicon Valley coder but from community organizers, street activists, and practitioners who fundamentally understand that what is needed is not a messiah but a movement.