Altman, however, says that the system is secure and that the company will not be storing user data. Every picture of an iris will be converted into a digital code, called IrisHash, which will be stored in Worldcoin’s database to check against future IrisHashes and deny coins to users who have already signed up; the pictures themselves will be erased from the database. “We take a picture of your irises, we don’t even store it, we calculate a code from it, the code is uploaded, but the image never is,” Altman says. “We don’t know any more information about you than that image.”
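Worldcoin has not published the IrisHash algorithm, but the duplicate-detection idea Altman describes—store only a derived code, compare new codes against old ones, and never retain the image—can be sketched in a few lines. The class and names below are hypothetical, and SHA-256 stands in for whatever derivation the company actually uses:

```python
import hashlib


class IrisRegistry:
    """Toy sketch of hash-based duplicate detection: only a digest of the
    iris code is retained; the raw code (and image) is never stored."""

    def __init__(self):
        self._known_hashes = set()

    def register(self, iris_code: bytes) -> bool:
        # Derive a fixed-size digest from the iris code. After this call
        # returns, the caller can discard the original image and code.
        digest = hashlib.sha256(iris_code).hexdigest()
        if digest in self._known_hashes:
            return False  # seen before: deny a second share of coins
        self._known_hashes.add(digest)
        return True
```

The design point is that a matching digest is enough to reject a repeat sign-up, while the digest alone cannot be reversed into the iris image.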
Right now, in fact, Worldcoin is running a pilot, involving about 30 Orbs in various countries, and storing a lot of data, including images of people’s eyes, bodies, and faces and their three-dimensional scans, according to the company’s own promotional material. “Without it, we wouldn’t be able to fairly and inclusively give a share of Worldcoin to everyone on Earth. But we can’t wait to stop collecting it and we want to make it clear that it will never be our business to sell your personal data,” reads a blog post titled “Privacy During Field Testing.” In what Worldcoin calls its “field testing phase,” these images are being collected in order to improve the fraud-detection algorithms powering the Orbs. This phase will likely continue until early 2022; the data collected up to that point will be deleted once the algorithms “are fully trained.”
Alex Blania, who cofounded the company alongside Altman and Max Novendstern, explains that the Orb system allows for beneficial “incentive-alignments.” Not only will people be enticed by the prospect of getting something for free, but an army of Orb Operators will be actively recruiting them in order to get their rewards. (And in turn, Worldcoin has been hiring people to recruit the Orb Operators, according to an ad posted on a Kenyan job bulletin).
Worldcoin itself will remain in charge of distributing the Orbs, and also of kicking out any operators that try to tamper with the devices in order to extract unwarranted rewards (for instance, by scanning someone twice). Could Orb Operators get their rewards by surreptitiously scanning the irises of clueless people who never heard about Worldcoin? Blania says the company is testing fraud detection systems, adding that he cannot be “extremely specific.” But, in theory, the company could use metrics such as whether the user has actually claimed the Worldcoin or carried out any transaction, in order to spot untoward behavior and root out sneaky Orb-ers.
Over the course of the pilot, more than 130,000 users have claimed their Worldcoins—60,000 in the past month. To date, the project has used 30 Orbs run by 25 entrepreneurs in various countries, including Chile, Kenya, Indonesia, Sudan, and France. Blania reckons that the production of new Orbs will be increased to 50,000 devices a year, a number on which the 1 billion users projection is based.
A launch date for the actual coin, which will be released as an ERC-20 token on the Ethereum blockchain, has not been released yet. A person familiar with the matter says a launch should happen in early 2022. For Altman, this will be just the start of a “wonderful, grand social experiment” about the power of networks, and also a dress rehearsal for future UBI ambitions. “One thing I believe is that you do an experiment, you do a first thing, and then you learn and you’ll discover all sorts of things about what works here and what we can improve,” he says. “There will be many answers to how something like this could become closer to a UBI.”
When Mark Zuckerberg created Facebook, in 2004, it was a mere directory of students at Harvard: The Face Book. Nearly two decades, 90 acquisitions, and billions of dollars later, Facebook has become a household name. Now it wants a new one.
Zuckerberg is expected to announce a new name for the company next week at Facebook Connect, the company’s annual conference, as first reported by The Verge. This new name—meant to encompass Facebook, Instagram, WhatsApp, Oculus, and the rest of the family—will clarify the company as a conglomerate, with ambitions beyond social media. The Facebook app might be the cornerstone of the company, but Zuckerberg has been very clear that the future of the company belongs to the metaverse.
But what’s in a name? In Facebook’s case, it comes with strong associations, some reputational damage, scrutiny from Congress, and disapproval from the general public. The Facebook name has led to a “trust deficit” in some of its recent endeavors, including its expansion into cryptocurrency. By renaming the parent company, Facebook might give itself a chance to overcome that. It wouldn’t be the first corporate behemoth to seek some goodwill with a new moniker: Cable companies do it all the time.
Still, branding experts—and branding amateurs on Twitter—aren’t convinced that renaming the company will do much to correct reputational problems or distance itself from recent scandals.
“Everyone knows what Facebook is,” says Jim Heininger, founder of Rebranding Experts, a firm that focuses solely on rebranding organizations. “The most effective way for Facebook to address the challenges that have tainted its brand recently is through corrective actions, not trying to change its name or installing a new brand architecture.”
Facebook’s decision to rename itself comes just after whistleblower Frances Haugen leaked thousands of pages of internal documents to The Wall Street Journal, exposing a company without much regard for public good. The documents spurred a hearing on Capitol Hill, where Congress has for years been discussing the possibility of regulating Facebook or breaking up its conglomerate.
A new name might give the company a facelift. But “a name change is not a rebrand,” says Anaezi Modu, the founder and CEO of Rebrand, which advises companies on brand transformations. Branding comes from a company’s mission, culture, and capabilities, more than just its name, logo, or marketing. “Unless Facebook has serious plans to address at least some of its many issues, just changing a name is pointless. In fact, it can worsen matters.” Renaming a company can create more mistrust if it comes off as distancing itself from its reputation.
Modu says renaming does make sense to clarify a company’s organization, the way other conglomerates have. When Google restructured in 2015, it named its parent company Alphabet, to reflect its growth beyond just a search engine (Google) to now include a number of endeavors (DeepMind, Waymo, Fitbit, and Google X, among others). Most people still think of the company as Google, but the name Alphabet is a signal for how the company fits together.
Facebook first revealed its plans to build a 37,000-kilometer subsea cable, named 2Africa, in the spring of 2020, and it announced an expansion last month. It’s expected to be completed in 2023 or 2024. The new transatlantic cable project will supposedly provide 200 times more capacity than the submarine cables that were laid in the early 2000s.
Its latest announcements aren’t aimed just at Africa or other emerging markets. The Bombyx robot could be deployed anywhere there is existing power infrastructure, since it leverages already-built power lines; and Facebook says 30,000 Terragraph units have already been rolled out in Anchorage, Alaska, and Perth, Australia, among other places.
Bombyx looks nifty, as far as robots go. After a technician places it on a power line, it crawls along the line, wrapping itself around the cable as it goes, spooling out Kevlar-reinforced fiber (both for strength and to withstand the heat of medium-voltage power lines). Because the bot must balance to stay on the line, the Facebook team says it has reengineered it to be lighter, nimbler, and more stable. And it lowered the bot’s load from 96 fiber-optic strands to 24, after determining that a single fiber can provide internet access for up to 1,000 homes in a nearby area.
To be clear, Facebook hasn’t reinvented fiber-optic cables; it’s come up with a scheme to run them above ground, using existing power infrastructure, instead of digging trenches to lay the cables underground. And it has come up with a semi-autonomous way to do this, by building a robot that it claims will eventually be capable of “installing over a kilometer of fiber and passing dozens of intervening obstacles autonomously in an hour and a half.”
As for Terragraph, Facebook’s Rabinovitsj and Maguire described it as a system composed of several technologies. It relies on the 802.11ay standard established by the Wi-Fi Alliance. It’s a technology reference design, developed in partnership with Qualcomm. And it’s also a mesh Wi-Fi system that uses nodes on existing street structures, like lamp posts and traffic lights. The result, they say, is multi-gigabit speeds that match the speeds of fiber lines—but in this case, the signal is transmitted over the air.
“That means anybody can deploy this without having to go get a license from a regulator,” Maguire says. “So that makes it very affordable, and is one of its other innovations.”
Complaints From Human Rights Activists
Facebook is not unwise to try to leverage existing infrastructure and reduce labor costs when it comes to building out a fiber network. But the company’s earlier forays into telecommunications have rankled both telecom operators and human rights activists. Some have accused the company of building a two-tiered internet that could widen disparities in access.
In the interview, Rabinovitsj, who leads Facebook Connectivity, insisted that Facebook is not an internet service provider and is not interested in becoming one. He said the company is not looking to generate revenue from the project and is licensing the technology to others for free. He did concede, however, that Facebook does benefit from more data being shared around the globe, and that anyone else with a digital property benefits as well.
Peter Micek, general counsel for the digital civil rights nonprofit Access Now—which has in the past received funding from Facebook for the organization’s RightsCon conference—says that over the past four years, the rate of laying fiber for wired internet access has basically stalled, which is “not ideal. It’s not happening at the rates needed to bring the next billion people online anytime soon.” He says people in less developed countries are “still largely dependent on mobile, but there’s still a lot you can’t do on mobile.”
The only safe prediction to make about the Senate’s Facebook hearing today is that, for the first time in a long time, it will be different. Over the past three and a half years the company has sent a rotating cast of high-level executives, including CEO Mark Zuckerberg, to Washington to talk about Facebook and its subsidiaries, Instagram and WhatsApp. This has calcified into a repetitive spectacle in which the executive absorbs and evades abuse while touting the wonderful ways in which Facebook brings the world together. Today’s testimony from Frances Haugen, the former employee who leaked thousands of pages of internal research to The Wall Street Journal, Congress, and the Securities and Exchange Commission, will be decidedly not that.
Haugen, who revealed her identity in a 60 Minutes segment on Sunday, is a former member of the civic integrity team: someone whose job was to tell the company how to make its platform better for humanity, even at the expense of engagement and growth. In nearly two years working there, however, Haugen concluded that it was an impossible job. When conflicts arose between business interests and the safety and well-being of users, “Facebook consistently resolved those conflicts in favor of its own profits,” as she puts it in her prepared opening statements. So she left the company—and took a trove of documents with her. Those documents, she argues, prove that Facebook knows its “products harm children, stoke division, weaken our democracy, and much more” but chooses not to fix those problems.
So what exactly do the documents show? The Wall Street Journal’s reporting, in an ongoing series called “The Facebook Files,” is so far the only window into that question. According to one story, Facebook’s changes to make its ranking algorithm favor “meaningful social interactions”—a shift that Zuckerberg publicly described as “the right thing” to do—ended up boosting misinformation, outrage, and other kinds of negative content. It did so to such an extreme degree that European political parties told Facebook they felt the need to take more extreme positions just to get into people’s feeds. When researchers brought their findings to Zuckerberg, the Journal reported, he declined to take action. Another story documents how Facebook’s “XCheck” program applies more lenient rules to millions of VIP users around the world, some of whom take advantage of that freedom by posting content in flagrant violation of the platform’s rules. Yet another, perhaps the most important published so far, suggests that Facebook’s investment in safety in much of the developing world—where its platforms are essentially “the internet” for many millions of people—is anemic or nonexistent.
You can see the challenge here for both Haugen and the senators questioning her: Such a wide range of revelations doesn’t coalesce easily into one clear narrative. Perhaps for that reason, the committee apparently plans to focus on a story whose headline declares, “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show.” The committee has already held one hearing on the subject, last week. As I wrote at the time, the documents in question, which the Journal has posted publicly, are more equivocal than that headline suggests. They also are based on ordinary surveys, not the type of internal data that only Facebook has access to. In other words, they may be politically useful, but they don’t greatly enhance the public’s understanding of how Facebook’s platforms operate.
Some of the other documents in the cache, however, apparently do. Crucially, at least according to the Journal’s reporting, they illustrate the gaps between how Facebook’s executives describe the company’s motivations in public and what actually happens on the platforms it owns. So does Haugen’s own personal experience as an integrity worker pushing against the more mercenary impulses of Facebook leadership. Conveying that dynamic might do more to advance the conversation than any particular finding from the research.
For decades, doctors and hospitals saw kidney patients differently based on their race. A standard equation for estimating kidney function applied a correction for Black patients that made their health appear rosier, inhibiting access to transplants and other treatments.
On Thursday, a task force assembled by two leading kidney care societies said the practice is unfair and should end.
The group, a collaboration between the National Kidney Foundation and the American Society of Nephrology, recommended use of a new formula that does not factor in a patient’s race. In a statement, Paul Palevsky, the foundation’s president, urged “all laboratories and health care systems nationwide to adopt this new approach as rapidly as possible.” That call is significant because recommendations and guidelines from professional medical societies play a powerful role in shaping how specialists care for patients.
A study published in 2020 that reviewed records for 57,000 people in Massachusetts found that one-third of Black patients would have had their disease classified as more severe if they had been assessed using the same version of the formula as white patients. The traditional kidney calculation was an example of a class of medical algorithms and calculators that have recently come under fire for conditioning patient care based on race, which is a social category, not a biological one.
A review published last year listed more than a dozen such tools, in areas such as cardiology and cancer care. It helped prompt a surge of activism against the practice from diverse groups, including medical students and lawmakers such as Senator Elizabeth Warren (D-Massachusetts) and the chair of the House Ways and Means Committee, Richard Neal (D-Massachusetts).
Recently, there have been signs that the tide is turning. The University of Washington dropped the use of race in kidney calculations last year after student protests led to a reconsideration of the practice. Mass General Brigham and Vanderbilt hospitals also abandoned the practice in 2020.
In May, a tool used to predict the chance a woman who previously had a cesarean section could safely give birth via vaginal delivery was updated to no longer automatically assign lower scores to Black and Hispanic women. A calculator that estimates the chances a child has a urinary tract infection was updated to no longer slash the scores for patients who are Black.
The prior formula for assessing kidney disease, known as CKD-EPI, was introduced in 2009, updating a 1999 formula that used race in a similar way. It converts the level of a waste product called creatinine in a person’s blood into a measure of overall kidney function called estimated glomerular filtration rate, or eGFR. Doctors use eGFR to help classify the severity of a person’s illness and determine if they qualify for various treatments, including transplants. Healthy kidneys produce higher scores.
The equation’s design factored in a person’s age and sex but also boosted the score of any patient classified as Black by 15.9 percent. That feature was included to account for statistical patterns seen in the patient data used to inform the design of CKD-EPI, which included relatively few people who were Black or from other racial minorities. But it meant a person’s perceived race could shift how their disease was measured or treated. A person with both Black and white heritage, for example, could flip a health system’s classification of their illness depending on how their doctor saw them or how they identified.
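The 2009 CKD-EPI equation is public, so the effect of the race term is easy to see in a short calculation. The sketch below uses the published coefficients (κ and α vary by sex; the final multipliers are 1.018 for women and 1.159, the 15.9 percent boost, for patients classified as Black):

```python
def ckd_epi_2009(creatinine_mg_dl: float, age: int,
                 female: bool, black: bool) -> float:
    """2009 CKD-EPI eGFR estimate (mL/min/1.73 m^2), including the
    race term that the task force now recommends removing."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = creatinine_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the 15.9 percent boost described above
    return egfr
```

For identical creatinine, age, and sex, the race term raises the estimated filtration rate by 15.9 percent, which can move a patient across the fixed eGFR thresholds used to classify disease severity and transplant eligibility.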
Nwamaka Eneanya, an assistant professor at University of Pennsylvania and a member of the task force behind Thursday’s recommendation, says she knows of one biracial patient with severe kidney disease who after learning about how the equation worked requested that she be classified as white to increase her chances of being listed for advanced care. Eneanya says a shift away from the established equation is long overdue. “Using someone’s skin color to guide their clinical pathway is wholeheartedly wrong—you introduce racial bias into medical care when you do that,” she says.