Right-to-Repair Advocates Question John Deere’s New Promises

Deere’s new agreement states that it will ensure that farmers and independent repair shops can subscribe to or buy tools, software, and documentation from the company or its authorized repair facilities “on fair and reasonable terms.” The tractor giant also says it will ensure that any farmer, independent technician, or independent repair facility will have electronic access to Deere’s Customer Service Advisor, a digital database of operator and technical manuals that’s available for a fee.

The memorandum also promises to give farmers the option to “reset equipment that has been immobilized”—something that can happen when a security feature is inadvertently triggered. Farmers could previously only reset their equipment by going to a John Deere dealer or having a John Deere-authorized technician come to them. “That’s been a huge complaint,” says Nathan Proctor, who leads US PIRG’s right-to-repair campaign. “Farmers will be relieved to know there might be a non-dealer option for that.”

Other parts of the new agreement, however, are too vague to offer significant help to farmers, right-to-repair proponents say. Although the memorandum has much to say about access to diagnostic tools, farmers need to fix problems, not just identify them, says Schweitzer, who raises cattle on his 3,000-acre farm, Tiber Angus, in central Montana. "Being able to diagnose a problem is great, but when you find out that it's a sensor or electronic switch that needs to be replaced, typically that new part has to be reprogrammed with the electronic control unit on board," he says. "And it's unclear whether farmers will have access to those tools."

Deere spokesperson Haber said that "as equipment continues to evolve and technology advances on the farm, Deere continues to be committed to meeting those innovations with enhanced tools and resources." This year, he said, the company will launch the ability to download software updates directly into some equipment that has a 4G wireless connection. But Haber declined to say whether farmers would be able to reprogram equipment parts without the involvement of the company or an authorized dealer.

The new agreement isn't legally binding. It states that if either party determines the MOU is no longer viable, it need only provide the other party with written notice of its intent to withdraw. And both US PIRG and Schweitzer note that other influential farm groups are not party to the agreement, among them the National Farmers Union, where Schweitzer is a board member and runs the Montana chapter.

Schweitzer is also concerned by the way the agreement is sprinkled with promises to offer farmers or independent repair shops “fair and reasonable terms” on access to tools or information. “‘Fair and reasonable’ to a multibillion-dollar company can be a lot different for a farmer who is in debt, trying to make payments on a $200,000 tractor and then has to pay $8,000 to $10,000 to purchase hardware for repairs,” he says. 

The agreement signed by Deere this week comes on the heels of New York governor Kathy Hochul signing into law the Digital Fair Repair Act, which requires companies to provide the same tools and information to the public that are given to their own repair technicians.

However, while right-to-repair advocates mostly cheered the law as precedent-setting, it was weakened by last-minute compromises, such as limiting it to devices manufactured and sold in New York on or after July 1, 2023, and excluding medical devices, automobiles, and home appliances.

Twitter Promised Them Severance. They Got Nothing

Shortly after taking over Twitter, Elon Musk laid off around 50 percent of the company's staff. On the same day, he tweeted that all those laid off would receive three months of severance pay. But after two months of waiting for the company to say what kind of severance and benefits would be available, several former Twitter employees say they've heard nothing.

As weeks of waiting turn into months, former staffers in the US are filing arbitration suits, while some in the UK are trying to negotiate terms. In other countries where Twitter laid off staff, people have heard nothing.

Soon after the layoffs were announced, Twitter was forced to backtrack and keep some staff on payroll for longer. California employees were employed, though not working, until January 4 to avoid running afoul of the state’s Worker Adjustment and Retraining Notification Act, or WARN. In New York, former staffers will be employed for another month in accordance with state laws. But as those deadlines pass, Twitter’s silence has become deafening. 

Seven former Twitter employees who spoke to WIRED said they had not received information about their severance, despite some coming up to or being past their last day at the company. Last month, a handful of former employees announced that they would be filing arbitration cases against the company, alleging that it had violated the WARN Act and that its handling of the layoffs constituted a breach of contract.

One former employee, who was laid off in November, is waiting on legal proceedings to see whether they’ll be given severance at all—and is not confident they will be. Another, who was laid off in early November, has heard nothing from the company. 

A third has yet to receive any details of severance, even though they have been chasing Twitter for information since they were fired in November. They had been promised at least twice that they would be given details of their package—and each time the deadline passed without any information.

An ex-staffer in the UK says that they have also not received word about severance but are currently discussing terms with the company on behalf of the roughly 300 staff based in the country.

A former employee from Twitter’s Accra, Ghana, office, which was open for less than a week before its entire staff was laid off, says that they, “like other staff globally, were assured severance but have not heard from them yet.” The former employee says they were not sure what, if any, recourse they may have against the company in Ghana.

Twitter is, however, providing severance to some. One former contractor says their boss received their severance details on January 5. As for the contractor, they were given a box of chocolates by the agency that got them the job at Twitter. All former Twitter staffers contacted for this story were granted anonymity because talking to the media could jeopardize their chances of receiving severance pay.

While some chose to wait until their official status as employees expired on January 4, others chose to take preemptive legal action against the company. 

The Slow Death of Surveillance Capitalism Has Begun

Surveillance capitalism just got a kicking. In an ultimatum, the European Union has demanded that Meta reform its approach to personalized advertising—a seemingly unremarkable regulatory ruling that could have profound consequences for a company that has grown impressively rich by, as Mark Zuckerberg once put it, running ads.

The ruling, which comes with a €390 million ($414 million) fine attached, is targeted specifically at Facebook and Instagram, but it's a huge blow to Big Tech as a whole. It's also a sign that GDPR, Europe's landmark privacy law introduced in 2018, actually has teeth. More than 1,400 fines have been issued since it took effect, but this time the bloc's regulators have shown they are willing to take on the very business model that makes surveillance capitalism, a term coined by American scholar Shoshana Zuboff, tick. "It is the beginning of the end of the data free-for-all," says Johnny Ryan, a privacy activist and senior fellow at the Irish Council for Civil Liberties.

To appreciate why, you need to understand how Meta makes its billions. Right now, Meta users opt in to personalized advertising by agreeing to the company’s terms of service—a lengthy contract users must accept to use its products. In a ruling yesterday, Ireland’s data watchdog, which oversees Meta because the company’s EU headquarters are based in Dublin, said bundling personalized ads with terms of service in this way was a violation of GDPR. The ruling is a response to two complaints, both made on the day GDPR came into force in 2018. 

Meta says it intends to appeal, but the ruling shows change is inevitable, say privacy activists. “It really asks the whole advertising industry, how do they move forward? And how do they move forward in a way that stops these litigations that require them to change constantly?” says Estelle Masse, global data protection lead at digital rights group Access Now.

EU regulators did not tell Meta how to reform its operations, but many believe the company has only one option—to introduce an Apple-style system that asks users explicitly if they want to be tracked. 

Apple's 2021 privacy change was a huge blow for companies that rely on user data for advertising revenue—Meta especially. In February 2022, Meta told investors Apple's move would decrease the company's 2022 sales by around $10 billion. Research shows that when given the choice, a large chunk of Apple users (between 54 and 96 percent, according to different estimates) declined to be tracked. If Meta were forced to introduce a similar system, it would threaten one of the company's main revenue streams.

Meta denies it has to alter the way it operates in response to the EU ruling, claiming it just needs to find a new way to legally justify how it processes people’s data. “We want to reassure users and businesses that they can continue to benefit from personalized advertising across the EU through Meta’s platforms,” the company said in a statement. 

Twitter Is No Longer a Creative Haven

WIRED has written frequently of late about Elon Musk’s Twitter, so forgive me for coming back to it—but for those of us as terminally online as I am, let me just ask: What the hell happened last weekend?

I woke up on Sunday morning to learn that Twitter was going to block all mentions of, or links to, "competing" services, from Instagram and Facebook to Linktree, of all places. The stated justification was "preventing free advertising" of the platform's competitors and a need to "cut down on spam." Of course, anyone with two neurons to rub together could tell that this was a cover story—you don't need a journalist to tell you that—and the great link ban was mainly about stemming the flow of active and popular users to other platforms while controlling speech in the name of Musk's mission to [checks notes] … protect free speech.

What was essentially a small online riot ensued, with Twitter users from all corners decrying the new policy. Within hours, not only had the company backtracked, but all mentions of the less-than-day-old policy had been scrubbed from Twitter feeds and the company website. It was a whirlwind for anyone who was online to see it. (Although if you missed it, I wouldn’t say you missed it, if you know what I mean.)

But I'm not here to speculate on the true motives behind Sunday's whiplash; I don't think that's helpful. After all, intention and impact are separate things. Regardless of someone's intention when they hit you in the face, they've still hit you in the face. Now you have to deal with the situation that they've created. So my thoughts instead turn—and I hope yours will also—to the people affected by the weekend's policy change: the Twitter users who spent Sunday wondering whether the platform they used and trusted to find and promote their work, to make connections with others in their field, and, in many cases, to earn an income would still allow them to do so.

When we at WIRED talk about "platforms and power," this is what we're talking about. Of course, any steward of any platform, whether it's a CEO, founder, or middle manager, has the unenviable job of setting and enforcing the policies and guidelines for that platform's safe and legal use. That's not in question. Without such rules, online spaces can go bad fast. What is in question is when those platforms choose to actively harm their users through policy decisions, and when those changes are sweeping enough to force users to either adapt or abandon ship.

Let me explain: I’m lucky enough to know a lot of creatives as well as a lot of journalists and tech workers. When I woke up on Sunday to the news, it was delivered to me by tweets from artists terrified they’d be banned from Twitter for linking to their own portfolios and to platforms where they accept commissions for their artwork. I read horror stories from authors who were terrified that the Linktrees their publishers asked them to create to promote their books, reviews, and Goodreads profiles were suddenly bannable offenses on Twitter.

My friends on Twitch interrupted their streams to discuss the news, worried that they wouldn’t be able to tweet to announce they were starting a new stream, or add a link to their Twitter bio to help viewers find them. All of these things created the potential for lost income for people who, I would argue, need it more than the folks who made these policy decisions. After all, these same creators have the kind of disruptive, entrepreneurial spirit that everyone in Silicon Valley claims to want to foster and empower. 

Elon Musk's Twitter Is Making Meta Look Smart

It was the first day of April 2022, and I was sitting in a law firm's midtown Manhattan conference room at a meeting of Meta's Oversight Board, the independent body that scrutinizes the company's content decisions. And for a few minutes, it seemed that despair had set in.

The topic at hand was Meta's controversial Cross Check program, which gave special treatment to posts from certain powerful users—celebrities, journalists, government officials, and the like. For years this program operated in secret, and Meta even misled the board on its scope. When details of the program were leaked to The Wall Street Journal, it became clear that millions of people received that special treatment, meaning their posts were less likely to be taken down when flagged by algorithms or reported by other users for breaking rules against things like hate speech. The idea was to avoid mistakes in cases where errors would have more impact—or embarrass Meta—because of the prominence of the speaker. Internal documents showed that Meta researchers had qualms about the project's propriety. Only after that exposure did Meta ask the board to take a look at the program and recommend what the company should do with it.

The meeting I witnessed was part of that reckoning. And the tone of the discussion led me to wonder if the board would suggest that Meta shut down the program altogether, in the name of fairness. “The policies should be for all the people!” one board member cried out.

That didn't happen. This week the social media world took a pause from lookie-looing the operatic content-moderation train wreck that Elon Musk is conducting at Twitter, as the Oversight Board finally delivered its Cross Check report, delayed because of foot-dragging by Meta in providing information. (It never did provide the board with a list identifying who got special permission to stave off a takedown, at least until someone took a closer look at the post.) The conclusions were scathing. Meta claimed that the program's purpose was to improve the quality of its content decisions, but the board determined that it was more to protect the company's business interests. Meta never set up processes to monitor the program and assess whether it was fulfilling its mission. The lack of transparency to the outside world was appalling. Finally, Meta all too often failed to deliver the prompt, personalized review that was the rationale for sparing those posts from quick takedowns. There were simply too many such cases for Meta's team to handle, and flagged posts frequently remained up for days before getting a second look.

The prime example, featured in the original WSJ report, was a post from Brazilian soccer star Neymar, who posted a sexual image without its subject’s consent in September 2019. Because of the special treatment he got from being in the Cross Check elite, the image—a flagrant policy violation—garnered over 56 million views before it was finally removed. The program meant to reduce the impact of content decision mistakes wound up boosting the impact of horrible content.

Yet the board didn’t recommend that Meta shut down Cross Check. Instead, it called for an overhaul. The reasons are in no way an endorsement of the program but an admission of the devilish difficulty of content moderation. The subtext of the Oversight Board’s report was the hopelessness of believing it was possible to get things right. Meta, like other platforms that give users voice, had long emphasized growth before caution and hosted huge volumes of content that would require huge expenditures to police. Meta does spend many millions on moderation—but still makes millions of errors. Seriously cutting down on those mistakes costs more than the company is willing to spend. The idea of Cross Check is to minimize the error rate on posts from the most important or prominent people. When a celebrity or statesman used its platform to speak to millions, Meta didn’t want to screw up.