Facebook’s Name Change Won’t Fix Anything

When Mark Zuckerberg created Facebook, in 2004, it was a mere directory of students at Harvard: The Face Book. Nearly two decades, 90 acquisitions, and billions of dollars later, Facebook has become a household name. Now it wants a new one.

Zuckerberg is expected to announce a new name for the company next week at Facebook Connect, the company’s annual conference, as first reported by The Verge. This new name—meant to encompass Facebook, Instagram, WhatsApp, Oculus, and the rest of the family—will recast the company as a conglomerate, with ambitions beyond social media. The Facebook app might be the company’s cornerstone, but Zuckerberg has been very clear that its future belongs to the metaverse.

But what’s in a name? In Facebook’s case, it comes with strong associations, some reputational damage, scrutiny from Congress, and disapproval from the general public. The Facebook name has led to a “trust deficit” in some of its recent endeavors, including its expansion into cryptocurrency. By renaming the parent company, Facebook might give itself a chance to overcome that. It wouldn’t be the first corporate behemoth to seek some goodwill with a new moniker: Cable companies do it all the time.

Still, branding experts—and branding amateurs on Twitter—aren’t convinced that renaming the company will do much to correct its reputational problems or distance it from recent scandals.

“Everyone knows what Facebook is,” says Jim Heininger, founder of Rebranding Experts, a firm that focuses solely on rebranding organizations. “The most effective way for Facebook to address the challenges that have tainted its brand recently is through corrective actions, not trying to change its name or installing a new brand architecture.”

Facebook’s decision to rename itself comes just after whistleblower Frances Haugen leaked thousands of pages of internal documents to The Wall Street Journal, exposing a company without much regard for the public good. The documents spurred a hearing on Capitol Hill, where Congress has already spent years discussing the possibility of regulating Facebook or breaking up its conglomerate.

A new name might give the company a facelift. But “a name change is not a rebrand,” says Anaezi Modu, the founder and CEO of Rebrand, which advises companies on brand transformations. Branding comes from a company’s mission, culture, and capabilities, more than just its name, logo, or marketing. “Unless Facebook has serious plans to address at least some of its many issues, just changing a name is pointless. In fact, it can worsen matters.” Renaming a company can create more mistrust if it comes off as an attempt to distance the business from its own reputation.

Modu says renaming does make sense to clarify a company’s organization, the way other conglomerates have. When Google restructured in 2015, it named its parent company Alphabet, to reflect its growth beyond just a search engine (Google) to now include a number of endeavors (DeepMind, Waymo, Fitbit, and Google X, among others). Most people still think of the company as Google, but the name Alphabet is a signal for how the company fits together.

This Facebook Whistleblower Hearing Will Be Different

The only safe prediction to make about the Senate’s Facebook hearing today is that, for the first time in a long time, it will be different. Over the past three and a half years the company has sent a rotating cast of high-level executives, including CEO Mark Zuckerberg, to Washington to talk about Facebook and its subsidiaries, Instagram and WhatsApp. This has calcified into a repetitive spectacle in which the executive absorbs and evades abuse while touting the wonderful ways in which Facebook brings the world together. Today’s testimony from Frances Haugen, the former employee who leaked thousands of pages of internal research to The Wall Street Journal, Congress, and the Securities and Exchange Commission, will be decidedly not that.

Haugen, who revealed her identity in a 60 Minutes segment on Sunday, is a former member of the civic integrity team: someone whose job was to tell the company how to make its platform better for humanity, even at the expense of engagement and growth. In nearly two years working there, however, Haugen concluded that it was an impossible job. When conflicts arose between business interests and the safety and well-being of users, “Facebook consistently resolved those conflicts in favor of its own profits,” as she puts it in her prepared opening statement. So she left the company—and took a trove of documents with her. Those documents, she argues, prove that Facebook knows its “products harm children, stoke division, weaken our democracy, and much more” but chooses not to fix those problems.

So what exactly do the documents show? The Wall Street Journal’s reporting, in an ongoing series called “The Facebook Files,” is so far the only window into that question. According to one story, Facebook’s changes to make its ranking algorithm favor “meaningful social interactions”—a shift that Zuckerberg publicly described as “the right thing” to do—ended up boosting misinformation, outrage, and other kinds of negative content. It did so to such a degree that European political parties told Facebook they felt compelled to take more extreme positions just to get into people’s feeds. When researchers brought their findings to Zuckerberg, the Journal reported, he declined to take action. Another story documents how Facebook’s “XCheck” program applies more lenient rules to millions of VIP users around the world, some of whom take advantage of that freedom by posting content in flagrant violation of the platform’s rules. Yet another, perhaps the most important published so far, suggests that Facebook’s investment in safety in much of the developing world—where its platforms are essentially “the internet” for many millions of people—is anemic or nonexistent.

You can see the challenge here for both Haugen and the senators questioning her: Such a wide range of revelations doesn’t coalesce easily into one clear narrative. Perhaps for that reason, the committee apparently plans to focus on a story whose headline declares, “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show.” The committee has already held one hearing on the subject, last week. As I wrote at the time, the documents in question, which the Journal has posted publicly, are more equivocal than that headline suggests. They also are based on ordinary surveys, not the type of internal data that only Facebook has access to. In other words, they may be politically useful, but they don’t greatly enhance the public’s understanding of how Facebook’s platforms operate.

Some of the other documents in the cache, however, apparently do. Crucially, at least according to the Journal’s reporting, they illustrate the gaps between how Facebook’s executives describe the company’s motivations in public and what actually happens on the platforms it owns. So does Haugen’s own personal experience as an integrity worker pushing against the more mercenary impulses of Facebook leadership. Conveying that dynamic might do more to advance the conversation than any particular finding from the research.


How an Obscure Green Bay Packers Site Conquered Facebook

The Green Bay Packers play in one of the tiniest media markets in the NFL, with a small but famously loyal fan base. It’s a key part of their charm. It’s also why it was so bewildering to discover that the single most-viewed URL on Facebook over the past three months, with 87.2 million views, belongs to an obscure site devoted to charging people to hang out with former Packers players.

That fact is one of several bizarre data points to emerge from Facebook’s first-ever “Widely Viewed Content Report.” The document is apparently an attempt to push back against the narrative that the platform is overrun with misinformation, fake news, and political extremism. According to data from its own publicly available analytics tool, CrowdTangle—data skillfully popularized by New York Times reporter Kevin Roose—the list of pages and posts with the highest engagement on the platform is heavily dominated by less-than-reputable right-wing publications and personalities like NewsMax and Dan Bongino, who vastly outperform more trustworthy mainstream publications.

Facebook has long argued that engagement doesn’t tell the whole story. A more accurate way to measure what’s popular on Facebook, the company’s executives say, is to look at total impressions, or “reach”—that is, how many people see a given piece of content rather than how many like or comment on it. The obvious problem with that argument is that, until Wednesday, Facebook had never shared any data on reach, making its claims impossible to verify. As Roose wrote last month, a proposal to make that data public ran into resistance within the company because it also might not make Facebook look so hot. As CrowdTangle CEO Brandon Silverman reportedly put it in an internal email, “Reach leaderboard isn’t a total win from a comms point of view.” 

Now we have some idea of what Silverman may have meant.
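
The gap between the two metrics is easy to see in miniature. Here is a minimal sketch, with entirely invented posts and numbers, of how ranking by engagement and ranking by reach can crown different winners:

```typescript
// Minimal sketch: the same two posts, ranked two ways. All figures are invented.
interface PostStats {
  url: string;
  reach: number;      // unique users who saw the post (impressions)
  engagement: number; // likes, comments, and shares combined
}

const posts: PostStats[] = [
  { url: "https://mainstream-news.example/story", reach: 40_000_000, engagement: 90_000 },
  { url: "https://partisan-pundit.example/post", reach: 2_000_000, engagement: 600_000 },
];

// CrowdTangle-style leaderboards sort by engagement;
// Facebook's new report sorts by reach instead.
const byEngagement = [...posts].sort((a, b) => b.engagement - a.engagement);
const byReach = [...posts].sort((a, b) => b.reach - a.reach);

console.log(byEngagement[0].url); // the pundit tops the engagement list
console.log(byReach[0].url);      // the mainstream story tops the reach list
```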

The new report consists mostly of four Top 20 lists: the most viewed domains, links, pages, and posts over the last three months. (Facebook says it will release the reports quarterly.) The domains list contains mostly unsurprising results, including the likes of YouTube, Amazon, and GoFundMe—prominent websites that you’d expect to be posted a lot on Facebook. (Those results are not just unsurprising but unhelpful, since a link to the YouTube domain, say, could be for any one of literally billions of videos.) But number nine is the URL playeralumniresources.com—that Packers website. Things get even stranger in the Top 20 links ranking, where that URL comes in first place, meaning the homepage of Player Alumni Resources was somehow more popular on Facebook than every other site on the internet. The rest of the list contains similar surprises. In second place is a link to purehempshop.com; in fifth, with 51.6 million views, is reppnforchrist.com.
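
The contrast between the domains list and the links list comes down to the grouping key, as a minimal sketch with invented view events shows:

```typescript
// Minimal sketch: identical view events, grouped by domain vs. by exact URL.
const views: string[] = [
  "https://youtube.com/watch?v=aaa",
  "https://youtube.com/watch?v=bbb",
  "https://playeralumniresources.com/",
  "https://playeralumniresources.com/",
];

function countBy(keyOf: (url: string) => string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const url of views) {
    const key = keyOf(url);
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}

// Domain leaderboard: billions of distinct videos collapse into one "youtube.com" row.
console.log(countBy((url) => new URL(url).hostname));
// Link leaderboard: each exact URL stands alone, so a single homepage
// shared over and over can outrank every individual page on the web.
console.log(countBy((url) => url));
```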

Is Player Alumni Resources, run by former Packers kicker Chris Jacke, quietly a Facebook juggernaut? Its official page has only 4,100 followers. Its posts get very few likes or comments. What’s going on here?

The answer: memes. From his personal account, which has more than 120,000 followers, Jacke posts a steady stream of low-rent viral memes that have nothing to do with the Packers, adding the URL of his business to the top of the post. We’re talking the likes of “Pick one cookie variety to live without,” or “Give yourself a point for each of these that you’ve done.” A post of a meme asking what word people use for soda (or pop, if you insist), for example, racked up more than 2 million interactions in June, according to CrowdTangle data. Jacke didn’t respond to requests for comment.

This seems to be the modus operandi of the other seemingly random members of the link leaderboard. The hemp store in second place, with 72.1 million views? That appears to be the handiwork of Jaleel White, best known for playing Steve Urkel on Family Matters. White, whose page has nearly 1.5 million followers, posts meme after recycled meme, each one graced with a link to a CBD product store.

Facebook’s Reason for Banning Researchers Doesn’t Hold Up

When Facebook said Tuesday that it was suspending the accounts of a team of NYU researchers, it made it seem like the company’s hands were tied. The team had been crowdsourcing data on political ad targeting via a browser extension, something Facebook had repeatedly warned them was not allowed.

“For months, we’ve attempted to work with New York University to provide three of their researchers the precise access they’ve asked for in a privacy-protected way,” wrote Mike Clark, Facebook’s product management director, in a blog post. “We took these actions to stop unauthorized scraping and protect people’s privacy in line with our privacy program under the FTC Order.”

Clark was referring to the consent decree imposed by the Federal Trade Commission in 2019, along with a $5 billion fine for privacy violations. You can understand the company’s predicament. If researchers want one thing, but a powerful federal regulator requires something else, the regulator is going to win.

Except Facebook wasn’t in that predicament, because the consent decree doesn’t prohibit what the researchers have been doing. Perhaps the company acted not to stay in the government’s good graces but because it doesn’t want the public to learn one of its most closely guarded secrets: who gets shown which ads, and why.

The FTC’s punishment grew out of the Cambridge Analytica scandal. In that case, nominally academic researchers got access to Facebook users’ data, and data about those users’ friends, directly from Facebook. That data infamously ended up in the hands of Cambridge Analytica, which used it to microtarget voters on behalf of Donald Trump’s 2016 campaign.

The NYU project, the Ad Observer, works very differently. It doesn’t have direct access to Facebook data. Rather, it’s a browser extension. When a user downloads the extension, they agree to send the ads they see, including the information in the “Why am I seeing this ad?” widget, to the researchers. The researchers then infer which political ads are being targeted at which groups of users—data that Facebook doesn’t publicize.
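
As a rough illustration of that ride-along design, a content script in this spirit might look something like the following. This is a minimal sketch of the general pattern, not the actual Ad Observer code; the selector and endpoint are invented placeholders.

```typescript
// Minimal sketch of a "ride-along" ad collector, not the actual Ad Observer code.
// AD_SELECTOR and RESEARCH_ENDPOINT are hypothetical placeholders.
const AD_SELECTOR = "[data-sponsored]";
const RESEARCH_ENDPOINT = "https://research-project.example/submit";

const alreadyReported = new WeakSet<Element>();

// Watch the feed the user is already browsing; nothing is crawled or fetched
// beyond what Facebook renders for this consenting user.
const observer = new MutationObserver(() => {
  for (const ad of document.querySelectorAll(AD_SELECTOR)) {
    if (alreadyReported.has(ad)) continue;
    alreadyReported.add(ad);
    void fetch(RESEARCH_ENDPOINT, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        creative: ad.outerHTML, // raw markup could leak bystander data, e.g. friends' reactions
        targeting: ad.querySelector("[data-why-this-ad]")?.textContent ?? null,
        seenAt: new Date().toISOString(),
      }),
    });
  }
});
observer.observe(document.body, { childList: true, subtree: true });
```

The key design choice is that the user, not a crawler, drives what gets loaded; the extension only copies out ads that were already on screen.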

Does that arrangement violate the consent decree? Two sections of the order could conceivably apply. Section 2 requires Facebook to get a user’s consent before sharing their data with someone else. Since the Ad Observer relies on users agreeing to share their own data, rather than on Facebook sharing it, that section isn’t relevant.

When Facebook shares data with outsiders, it “has certain obligations to police that data-sharing relationship,” says Jonathan Mayer, a professor of computer science and public affairs at Princeton. “But there’s nothing in the order about if a user wants to go off and tell a third party what they saw on Facebook.”

Joe Osborne, a Facebook spokesperson, acknowledges that the consent decree didn’t force Facebook to suspend the researchers’ accounts. Rather, he says, Section 7 of the decree requires Facebook to implement a “comprehensive privacy program” that “protects the privacy, confidentiality, and integrity” of user data. It’s Facebook’s privacy program, not the consent decree itself, that prohibits what the Ad Observer team has been doing. Specifically, Osborne says, the researchers repeatedly violated a section of Facebook’s terms of service that provides, “You may not access or collect data from our Products using automated means (without our prior permission).” The blog post announcing the account bans mentions scraping 10 times.

Laura Edelson, a PhD candidate at NYU and cocreator of the Ad Observer, rejects the suggestion that the tool is an automated scraper at all.

“Scraping is when I write a program to automatically scroll through a website and have the computer drive how the browser works and what’s downloaded,” she says. “That’s just not how our extension works. Our extension rides along with the user, and we only collect data for ads that are shown to the user.”

Bennett Cyphers, a technologist at the Electronic Frontier Foundation, agrees. “There’s not really a good, consistent definition of scraping,” he says, but the term is an odd fit when users are choosing to document and share their personal experiences on a platform. “That just seems like it’s not something that Facebook is able to control. Unless they’re saying it’s against the terms of service for the user to be taking notes on their interactions with Facebook in any way.”

Ultimately, whether the extension is really “automated” is sort of beside the point, because Facebook could always change its own policy—or, under the existing policy, could simply give the researchers permission. So the more important question is whether the Ad Observer in fact violates anyone’s privacy. Osborne, the Facebook spokesperson, says that when the extension passes along an ad, it could be exposing information about other users who didn’t consent to sharing their data. If I have the extension installed, for instance, it could be sharing the identity of my friends who liked or commented on an ad.

Florida’s New Social Media Law Will Be Laughed Out of Court

Florida’s new social media legislation is a double landmark: It’s the first state law regulating online content moderation, and it will almost certainly become the first such law to be struck down in court.

On Monday, Governor Ron DeSantis signed into law the Stop Social Media Censorship Act, which greatly limits large social media platforms’ ability to moderate or restrict user content. The bill is a legislative distillation of Republican anger over recent episodes of supposed anti-conservative bias, like Twitter and Facebook shutting down Donald Trump’s account and suppressing the spread of the infamous New York Post Hunter Biden story. Most notably, it imposes heavy fines—up to $250,000 per day—on any platform that deactivates the account of a candidate for political office, and it prohibits platforms from taking action against “journalistic enterprises.”

It is very hard to imagine any of these provisions ever being enforced, however.

“This is so obviously unconstitutional, you wouldn’t even put it on an exam,” said A. Michael Froomkin, a law professor at the University of Miami. Under well-established Supreme Court precedent, the First Amendment prohibits the government from forcing private entities to publish or broadcast someone else’s speech. Prohibiting “deplatforming” of political candidates would likely be construed as an unconstitutional must-carry provision. “This law looks like a political freebie,” Froomkin said. “You get to pander, and nothing bad happens, because there’s no chance this will survive in court.” (The governor’s office didn’t respond to a request for comment.)

The Constitution isn’t the only problem for the new law. It also conflicts with Section 230 of the Communications Decency Act, a federal law that generally holds online platforms immune from liability over their content moderation decisions. Section 230 has become an object of resentment on both sides of the political aisle, but for different reasons. Liberals tend to think the law lets online platforms get away with leaving too much harmful material up. Conservative critics, on the other hand, argue that it lets them get away with taking too much stuff down—and, worse, that it allows them to censor conservatives under the guise of content moderation.

Regardless of the merits of these critiques, the fact is that Section 230 remains in effect, and, like many federal statutes, it explicitly preempts any state law that conflicts with it. That is likely to make any attempt to enforce the Stop Social Media Censorship Act an expensive waste of time. Suppose a candidate for office in Florida repeatedly posts statements that violate Facebook’s policies against vaccine misinformation, or racism, and Facebook bans their account. (Like, say, Laura Loomer, a self-described “proud Islamophobe” who ran for Congress last year in Florida after being banned from Facebook and many other platforms.) If she sues under the new law, she will be seeking to hold Facebook liable for a decision to remove user content. But Section 230 says that platforms are free “to restrict access to or availability of material” as long as they do so in good faith. (Facebook and Twitter declined to comment on whether they plan to comply with the Florida law or fight it in court. YouTube didn’t respond to a request for comment.)

Section 230 will probably preempt other aspects of the Florida law that are less politically controversial than the prohibition on deplatforming politicians. For example, the Florida statute requires platforms to set up elaborate due process rights for users, including giving them detailed information about why a certain piece of content was taken down, and to let users opt into a strictly chronological newsfeed with no algorithmic curation. Both of these ideas have common-sense appeal among tech reformers across the political spectrum, and versions of them are included in proposed federal legislation. But enforcing those provisions as part of a state law in court would most likely run afoul of Section 230, because it would boil down to holding a platform liable for hosting, or not hosting, a piece of user-generated content. Florida’s legislature has no power to change that.