Apple’s iMessage Encryption Puts Its Security Practices in the DOJ’s Crosshairs

The argument is one that some Apple critics have made for years, as spelled out in an essay in January by Cory Doctorow, the science fiction writer, tech critic, and coauthor of Chokepoint Capitalism. “The instant an Android user is added to a chat or group chat, the entire conversation flips to SMS, an insecure, trivially hacked privacy nightmare that debuted 38 years ago—the year Wayne’s World had its first cinematic run,” Doctorow writes. “Apple’s answer to this is grimly hilarious. The company’s position is that if you want to have real security in your communications, you should buy your friends iPhones.”

In a statement to WIRED, Apple says it designs its products to “work seamlessly together, protect people’s privacy and security, and create a magical experience for our users,” and it adds that the DOJ lawsuit “threatens who we are and the principles that set Apple products apart” in the marketplace. The company also says it hasn’t released an Android version of iMessage because it couldn’t ensure that third parties would implement it in ways that met the company’s standards.

“If successful, [the lawsuit] would hinder our ability to create the kind of technology people expect from Apple—where hardware, software, and services intersect,” the statement continues. “It would also set a dangerous precedent, empowering government to take a heavy hand in designing people’s technology. We believe this lawsuit is wrong on the facts and the law, and we will vigorously defend against it.”

Apple has, in fact, not only declined to build iMessage clients for Android or other non-Apple devices, but actively fought against those who have. Last year, a service called Beeper launched with the promise of bringing iMessage to Android users. Apple responded by tweaking its iMessage service to break Beeper’s functionality, and the startup called it quits in December.

Apple argued in that case that Beeper had harmed users’ security—and in fact, Beeper did compromise iMessage’s end-to-end encryption by decrypting and then re-encrypting messages on a Beeper server, though the startup had vowed to change that in future updates. Beeper cofounder Eric Migicovsky countered that Apple’s heavy-handed move to reduce Apple-to-Android texts to traditional text messaging was hardly a more secure alternative.

“It’s kind of crazy that we’re now in 2024 and there still isn’t an easy, encrypted, high-quality way for something as simple as a text between an iPhone and an Android,” Migicovsky told WIRED in January. “I think Apple reacted in a really awkward, weird way—arguing that Beeper Mini threatened the security and privacy of iMessage users, when in reality, the truth is the exact opposite.”

Even as Apple has faced accusations of hoarding iMessage’s security properties to the detriment of smartphone owners worldwide, it’s only continued to improve those features: In February it upgraded iMessage to use new cryptographic algorithms designed to be immune to quantum codebreaking, and last October it added Contact Key Verification, a feature designed to prevent man-in-the-middle attacks that spoof intended contacts to intercept messages. Perhaps more importantly, it’s said it will adopt the RCS standard to allow for improvements in messaging with Android users—although the company did not say whether those improvements would include end-to-end encryption.

The ‘Emergency Powers’ Risk of a Second Trump Presidency

Donald Trump appears to dream of being an American authoritarian should he return to office. The former US president, who on Tuesday secured enough delegates to win the 2024 Republican nomination, plans to deport millions of undocumented immigrants and house scores of them in large camps. He wants to invoke the Insurrection Act to deploy the military in cities across the nation to quell civil unrest. He wants to prosecute his political opponents. There’s an organized and well-funded effort to replace career civil servants in the federal government with Trump loyalists who will do his bidding and help him consolidate power.

Also concerning to legal experts, though, are special powers that have been available to all recent presidents but have rarely been used. Should Trump decide to go full authoritarian, he could invoke what are called “emergency powers” to shut down the internet in certain areas, censor online content, freeze people’s bank accounts, restrict transportation, and more.

Utilizing laws like the National Emergencies Act, the Communications Act of 1934, and the International Emergency Economic Powers Act (IEEPA), he would be able to wield power in ways this country has never seen. Furthermore, America’s vast surveillance state, which has regularly been abused, could theoretically be abused even further to surveil his perceived political enemies.

“There really aren’t emergency powers relating to surveillance, and that’s because the non-emergency powers are so powerful and give such broad authority to the executive branch. They just don’t need emergency powers for that purpose,” says Elizabeth Goitein, senior director of the Brennan Center for Justice’s Liberty & National Security Program at the New York University School of Law.

Goitein says she worries most about what a president could do with the emergency powers available to them, though, when she considers whether a president might decide to behave like an authoritarian. She says the laws surrounding these powers offer few opportunities for another branch of government to stop a president from doing as they please.

“Emergency powers are meant to give presidents extraordinary authorities for use in extraordinary circumstances. Because they provide these very potent authorities, it is critical that they have checks and balances built into them and safeguards against abuse,” Goitein says. “The problem with our current emergency powers system—and that system comprises a lot of different laws—is that it really lacks those checks and balances.”

Under the National Emergencies Act, for example, the president simply has to declare a national emergency of some kind to activate powers that are contained in more than 130 different provisions of law. What constitutes an actual emergency is not defined by these laws, so Trump could come up with any number of reasons for declaring one, and he couldn’t easily be stopped from abusing this power.

“There’s a provision of the Communications Act of 1934 that allows the president to shut down or take over communications facilities in a national emergency. There is a provision that allows the president to exert pretty much unspecified controls over domestic transportation, which could be read extremely broadly,” Goitein says. “There’s IEEPA, which allows the president to freeze the assets of and block financial transactions with anyone, including an American, if the president finds it necessary to address an unusual or extraordinary threat that is emanating at least partly from overseas.”

Signal Finally Rolls Out Usernames, So You Can Keep Your Phone Number Private

The third new feature, which is not enabled by default and which Signal recommends mainly for high-risk users, allows you to turn off not just your number’s visibility but its discoverability. That means no one can find you in Signal unless they have your username, even if they already know your number or have it saved in their address book. That extra safeguard might be important if you don’t want anyone to be able to tie your Signal profile to your phone number, but it will also make it significantly harder for people who know you to find you on Signal.

The new phone number protections should now make it possible to use Signal to communicate with untrusted people in ways that would have previously presented serious privacy risks. A reporter can now post a Signal username on a social media profile to allow sources to send encrypted tips, for instance, without also sharing a number that allows strangers to call their cell phone in the middle of the night. An activist can discreetly join an organizing group without broadcasting their personal number to people in the group they don’t know.

In the past, using Signal without exposing a private number in either of those situations would have required setting up a new Signal number on a burner phone—a difficult privacy challenge for people in many countries that require identification to buy a SIM card—or with a service like Google Voice. Now you can simply set a username instead, which can be changed or deleted at any time. (Any conversations you’ve started with the old username will switch over to the new one.) To avoid storing even those usernames, Signal is also using a cryptographic function called a Ristretto hash, which allows it to instead store a list of unique strings of characters that encode those handles.
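The idea behind that last step—storing only an opaque encoding of each handle rather than the handle itself—can be sketched in a few lines. This is a toy illustration, not Signal's implementation: it uses SHA-256 as a stand-in for Signal's actual Ristretto-group hash, and the class and method names are invented for the example.

```python
import hashlib


class UsernameDirectory:
    """Toy directory that stores only hashes of usernames, never the
    usernames themselves (a stand-in for Signal's Ristretto-based scheme)."""

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def _hash(username: str) -> str:
        # Normalize, then hash; the server keeps only this opaque string.
        return hashlib.sha256(username.lower().encode("utf-8")).hexdigest()

    def register(self, username: str) -> None:
        self._hashes.add(self._hash(username))

    def lookup(self, username: str) -> bool:
        # A client asking "does alice.01 exist?" hashes the handle and the
        # server compares opaque strings, never seeing a plaintext list.
        return self._hash(username) in self._hashes


directory = UsernameDirectory()
directory.register("alice.01")
print(directory.lookup("alice.01"))  # True
print(directory.lookup("bob.99"))    # False
```

The upshot of this design is that a subpoena for the server's username table would yield only hashes, not a readable directory of handles.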

Amid these new features designed to calibrate exactly who can learn your phone number, however, one key role for that number hasn’t changed: There’s still no way to avoid sharing your phone number with Signal itself when you register. That persistent requirement will no doubt rankle critics who have pushed Signal’s developers to cater to users seeking more complete anonymity, such that even Signal’s own staff can’t see a phone number that might identify them, or hand that number over to a surveillance agency wielding a court order.

Whittaker says that, for better or worse, a phone number remains necessary as the identifier Signal privately collects from its users. That’s partly because it prevents spammers from creating endless accounts, since phone numbers are scarce. Phone numbers are also what allow anyone to install Signal and have it immediately populate with contacts from their address book, a key element of its usability.

In fact, designing a system that prevents spam accounts and imports the user’s address book without requiring a phone number is “a deceptively hard problem,” says Whittaker. “Spam prevention and actually being able to connect with your social graph on a communications app—those are existential concerns,” she says. “That’s the reason that you still need a phone number to register, because we still need a thing that does that work.”

WhatsApp Chats Will Soon Work With Other Encrypted Messaging Apps

Meanwhile, Julia Weis, a spokesperson for the Swiss messaging app Threema, says that while WhatsApp did approach it to discuss its interoperability plans, the proposed system didn’t meet Threema’s security and privacy standards. “WhatsApp specifies all the protocols, and we’d have no way of knowing what actually happens with the user data that gets transferred to WhatsApp—after all, WhatsApp is closed source,” Weis says. (WhatsApp’s privacy policy states how it uses people’s data).

When the EU first announced in early 2022 that messaging apps might have to work together, many leading cryptographers opposed the idea, saying it adds complexity and potentially introduces more security and privacy risks. Carmela Troncoso, an associate professor at the Swiss university École Polytechnique Fédérale de Lausanne who focuses on security and privacy engineering, says interoperability moves could lead to different power relationships between companies, depending on how they are implemented.

“This move for interoperability will, on the one hand, open the market, but also maybe close the market in the sense that now the bigger players are going to have more decisional power,” Troncoso says. “Now, if the big player makes a move, and you want to continue being interoperable with this big player, because your users are hooked up to this, you’re going to have to follow.”

While the interoperability of encrypted messaging apps may be possible, there are still some fundamental challenges about how the systems will work in the real world. How much of a problem spam and scamming will be across apps is largely unknown until people start using interoperable setups. There are also questions about how people will find each other across different apps. For instance, WhatsApp uses your phone number to message other people, while Threema randomly generates eight-digit IDs for people’s accounts. Linking up with WhatsApp “could deanonymize Threema users,” Weis, the Threema spokesperson, says.

Meta’s Brouwer says the company is still working on the interoperability features and the level of support it will make available for companies wanting to integrate with it. “Nobody quite knows how this works,” Brouwer says. “We have no idea what the demand is.” However, he says Meta decided to run interoperability on WhatsApp’s existing architecture because that makes it easier to scale the system up for group chats in the future. It also reduces the potential for people’s data to be exposed to multiple servers, Brouwer says.

Ultimately, interoperability will evolve over time, and from Meta’s perspective, Brouwer says, it will be more challenging to add new features to it quickly. “We don’t believe interop chats and WhatsApp chats can evolve at the same pace,” Brouwer says, claiming it is “harder to evolve an open network” compared to a closed one. “The second you do something different—than what we know works really well—you open up a wormhole of security, privacy issues, and complexity that is always going to be much bigger than you think it is.”

23andMe Failed to Detect Account Intrusions for Months

Police took a digital rendering of a suspect’s face, generated using DNA evidence, and ran it through a facial recognition system in a troubling incident reported for the first time by WIRED this week. The tactic came to light in a trove of hacked police records published by the transparency collective Distributed Denial of Secrets. Meanwhile, information about United States intelligence agencies purchasing Americans’ phone location data and internet metadata without a warrant was revealed this week only after US senator Ron Wyden blocked the appointment of a new NSA director until the information was made public. And a California teen who allegedly used the handle Torswats to carry out hundreds of swatting attacks across the US is being extradited to Florida to face felony charges.

The infamous spyware developer NSO Group, creator of the Pegasus spyware, has been quietly planning a comeback, which involves investing millions of dollars lobbying in Washington while exploiting the Israel-Hamas war to stoke global security fears and position its products as a necessity. Breaches of Microsoft and Hewlett Packard Enterprise, disclosed in recent days, have pushed the espionage operations of the well-known Russia-backed hacking group Midnight Blizzard back into the spotlight. And Amazon-owned Ring said this week that it is shutting down a feature of its controversial Neighbors app that gave law enforcement a free pass to request footage from users without a warrant.

WIRED had a deep dive this week into the Israel-linked hacking group known as Predatory Sparrow and its notably aggressive offensive cyberattacks, particularly against Iranian targets, which have included crippling thousands of gas stations and setting a steel mill on fire. With so much going on, we’ve got the perfect quick weekend project for iOS users who want to feel more digitally secure: Make sure you’ve upgraded your iPhone to iOS 17.3 and then turn on Apple’s new Stolen Device Protection feature, which could block thieves from taking over your accounts.

And there’s more. Each week, we highlight the news we didn’t cover in-depth ourselves. Click on the headlines below to read the full stories. And stay safe out there.

After first disclosing a breach in October, the ancestry and genetics company 23andMe said in December that personal data from 6.9 million users was impacted in the incident stemming from attackers compromising roughly 14,000 user accounts. These accounts then gave attackers access to information voluntarily shared by users in a social feature the company calls DNA Relatives. 23andMe has blamed users for the account intrusions, saying that they only occurred because victims set weak or reused passwords on their accounts. But a state-mandated filing in California about the incident reveals that the attackers started compromising customers’ accounts in April and continued through much of September without the company ever detecting suspicious activity—and that someone was trying to guess and brute-force users’ passwords.

North Korea has been using generative artificial intelligence tools “to search for hacking targets and search for technologies needed for hacking,” according to a senior official at South Korea’s National Intelligence Service who spoke to reporters on Wednesday under the condition of anonymity. The official said that Pyongyang has not yet begun incorporating generative AI into active offensive hacking operations but that South Korean officials are monitoring the situation closely. More broadly, researchers say they are alarmed by North Korea’s development and use of AI tools for multiple applications.

The digital ad industry is notorious for enabling the monitoring and tracking of users across the web. New findings from 404 Media highlight a particularly insidious service, Patternz, that draws data from ads in hundreds of thousands of popular, mainstream apps to reportedly fuel a global surveillance dragnet. The tool and its capabilities have been marketed to governments around the world for integration with other intelligence agency surveillance capabilities. “The pipeline involves smaller, obscure advertising firms and advertising industry giants like Google. In response to queries from 404 Media, Google and PubMatic, another ad firm, have already cut off a company linked to the surveillance firm,” 404’s Joseph Cox wrote.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory have devised an algorithm that could be used to convert data from smart devices’ ambient light sensors into an image of the scene in front of the device. A tool like this could turn a smart home gadget or mobile device into a surveillance tool. Ambient light sensors measure light in an environment and automatically adjust a screen’s brightness to make it more usable in different conditions. But because ambient light data isn’t considered sensitive, these sensors are granted certain permissions by the operating system and generally don’t require specific approval from a user before an app can read them. As a result, the researchers point out, bad actors could potentially abuse the readings from these sensors without users having any way to block the information stream.