Early Prime Day Kindle Deals (October 2023)

Amazon has expanded Prime Day from a single day to several days, and now to several multiday sales per year. The next one, Prime Big Deal Days, starts October 10 and runs through October 11. Amazon’s own devices go on sale all the time, but Prime Day is still the best time to score a Kindle deal, or significant savings on Fire TV Sticks, Echos, Fire tablets, and the like.

The event usually brings the lowest prices of the year on Amazon hardware. Most Amazon deals are available exclusively to Amazon Prime members, but there’s a chance that other retailers will price-match Amazon’s own discounts. We likely won’t see deals like these again until around Black Friday and Cyber Monday, and even then, these October prices may be the cheaper ones.

You can also check out all of the Amazon Device Prime Day Deals on the dedicated store page. 

Updated September 2023: There aren’t any early Kindle sales happening just yet, but we’ve added some price predictions below.

If you buy something using links in our stories, we may earn a commission. This helps support our journalism. Learn more.

Kindle Deals

Kindle e-readers are hard to beat. They’re easy to use, they’re dependable, and they work as intended. They can hold thousands of books, and the battery lasts for weeks at a time. They can also make reading fun again. If you want to explore more options than those listed below, be sure to check out our Best Amazon Kindles guide.

Kindle Paperwhite

Amazon Kindle Paperwhite. Photograph: Amazon

Last Prime Day, we saw the Kindle Paperwhite (8/10, WIRED Recommends), our favorite Kindle, dip to $95 ($55 off). That’s the lowest price we’ve tracked for it. It has all the features the average Kindle user could want. The battery lasts for weeks, there’s a warm backlight for easier nighttime reading, and recharging is handled via USB-C. If you want the Signature Edition, which boasts more storage, wireless charging, and auto-adjusting lighting, expect to pay around $125 ($65 off).

Kindle

This is the smallest and cheapest Kindle, and it dipped to $65 ($35 off) during July’s event. The 6-inch screen might be too small for some readers, but if you hate holding tablets with one hand, the smaller size could be a benefit rather than a nuisance. Note that this Kindle is not waterproof, and the backlight doesn’t offer warmth or automatic adjustment.

Kindle Paperwhite Kids

Amazon Kindle Paperwhite Kids Edition. Photograph: Amazon

This is the best Kindle for kids, and if this sale matches July’s, you’ll get it for $105 ($65 off). It’s waterproof and has a warm backlight that can make reading in the dark easier. The warm backlight is also great for bedtime reading, since less blue light will be beamed directly into your kiddo’s eyeballs. The Kindle includes a one-year subscription to Amazon Kids+, an age-appropriate content library with built-in parental controls. You’ll also get a protective case and a two-year no-questions-asked replacement guarantee. If your kid runs over their e-reader with a dirt bike or throws it out the window, Amazon will send you a new one.

Kindle Scribe

The Kindle Scribe (8/10, WIRED Recommends) is the first Kindle that lets you write on it like a regular notebook, which makes it our favorite Kindle for taking notes. This model even comes with the premium pen and 64 gigabytes of storage, along with an enormous 10.2-inch screen. You don’t need to spend this kind of money unless you really think you’ll use the writing feature, but it will likely be around $320 ($100 off).

Immersive Tech Obscures Reality. AI Will Threaten It

Last week, Amazon announced it was integrating AI into a number of products—including smart glasses, smart home systems, and its voice assistant, Alexa—that help users navigate the world. This week, Meta will unveil its latest AI and extended reality (XR) features, and next week Google will reveal its next line of Pixel phones equipped with Google AI. If you thought AI was already “revolutionary,” just wait until it’s part of the increasingly immersive, responsive, personal devices that power our lives.

AI is already hastening technology’s trend toward greater immersion, blurring the boundaries between the physical and digital worlds and allowing users to easily create their own content. When combined with technologies like augmented or virtual reality, it will open up a world of creative possibilities, but also raise new issues related to privacy, manipulation, and safety. In immersive spaces, our bodies often forget that the content we’re interacting with is virtual, not physical. This is great for treating pain and training employees. However, it also means that VR harassment and assault can feel real, and that disinformation and manipulation campaigns are more effective.

Generative AI could worsen manipulation in immersive environments, creating endless streams of interactive media personalized to be as persuasive, or deceptive, as possible. To prevent this, regulators must avoid the mistakes they’ve made in the past and act now to ensure that there are appropriate rules of the road for the technology’s development and use. Without adequate privacy protections, integrating AI into immersive environments could amplify the threats posed by these emerging technologies.

Take misinformation. With all the intimate data generated in immersive environments, actors motivated to manipulate people could hypercharge their use of AI to create influence campaigns tailored to each individual. One study by pioneering VR researcher Jeremy Bailenson shows that by subtly editing photos of political candidates’ faces to appear more like a given voter, it’s possible to make that person more likely to vote for the candidate. The threat of manipulation is exacerbated in immersive environments, which often collect body-based data such as head and hand motion. That information can potentially reveal sensitive details like a user’s demographics, habits, and health, and it can be used to build detailed profiles of users’ interests, personalities, and characteristics. Imagine a chatbot in VR that analyzes data about your online habits and the content your eyes linger on to determine the most convincing way to sell you on a product, politician, or idea, all in real time.

AI-driven manipulation in immersive environments will empower nefarious actors to conduct influence campaigns at scale, personalized to each user. We’re already familiar with deepfakes that spread disinformation and fuel harassment, and microtargeting that drives users toward addictive behaviors and radicalization. The additional element of immersion makes it even easier to manipulate people.

To mitigate the risks associated with AI in immersive technologies and provide individuals with a safe environment to adopt them, clear and meaningful privacy and ethical safeguards are necessary. Policymakers should pass strong privacy laws that safeguard users’ data, prevent unanticipated uses of this data, and give users more control over what is collected and why. In the meantime, with no comprehensive federal privacy law in place, regulatory agencies like the US Federal Trade Commission (FTC) should use their consumer protection authority to guide companies on what kinds of practices are “unfair and deceptive” in immersive spaces, particularly when AI is involved. Until more formal regulations are introduced, companies should collaborate with experts to develop best practices for handling user data, govern advertising on their platforms, and design AI-generated immersive experiences to minimize the threat of manipulation.

As we wait for policymakers to catch up, it is critical for people to become educated on how these technologies work, the data they collect, how that data is used, and what harm they may cause individuals and society. AI-enabled immersive technologies are increasingly becoming part of our everyday lives, and are changing how we interact with others and the world around us. People need to be empowered to make these tools work best for them—and not the other way around.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at ideas@wired.com.

Apple Vision Pro Mixed-Reality Headset: Specs, Price, Release Date

While Apple’s strategy of biding its time before entering a product category has served the company well in the past, its official entry into mixed reality is coming at a curious time. Virtual reality and augmented reality have existed in some form for decades, but so far they have failed to reach mass adoption.

Shipments of VR headsets declined more than 20 percent in 2022, according to research from the International Data Corporation. IDC chalked this up to the limited number of vendors in the market, a challenging macroeconomic environment, and a lack of mass market adoption from consumers. According to The Wall Street Journal, citing PitchBook, venture capital investments in VR startups are also down significantly from a few years ago: After $6.3 billion was funneled into VR in 2019, last year’s investments totaled $4.8 billion. (It’s unclear exactly how much of that money is now flowing toward generative AI, the latest wave in the technology hype cycle.)

Meta has come closest to making a dent in the VR market: The reasonably priced Meta Quest and Quest 2 VR headsets have sold better than most, with a reported 20 million units sold since the product’s launch. And the company just announced the Meta Quest 3, a rush job ahead of Apple’s big announcement. According to IDC, Meta headsets comprise nearly 80 percent of the market. 

Even so, Meta has struggled to sell its much more expensive model, the $1,000 Meta Quest Pro. And it has shoveled billions of dollars into its “metaverse” strategy in order to achieve this modicum of success. The company has said it expects its 2023 losses from Reality Labs, its VR arm, to increase significantly year over year.

But some remain optimistic about the potential for mixed reality to hit the mainstream, driven partly by Apple’s entry into the fray. After revising its outlook for VR-AR shipments for this year due to weak demand in 2022, IDC said that it still expects shipments to grow 14 percent in 2023 and to continue growing in the five years after that. Jitesh Ubrani, a research manager who tracks mobile and consumer devices, said in an analyst note that “Sony’s new PSVR2 and Apple’s foray into the space will help drive additional volume, while new devices from Meta and Pico, expected towards the end of 2023, will build momentum for VR in 2024.”

Peggy Johnson, the chief executive of AR company Magic Leap, said in an interview with WIRED that Apple’s entry into the market is “absolutely a good thing” for the rest of the industry. “We’ve been largely standing alone for over a decade, working on R&D and trying to get a true augmented-reality system working,” Johnson said. “And there were years before that of technical spending. So it’s great when we see anybody coming into this space, because it helps the whole ecosystem. It’s a big validation.” 

Some app developers are excited by the prospects too. “I think this could be a Tesla Roadster moment for mixed reality,” said Anand Agarawala, cofounder and chief executive of AR/VR company Spatial. “Apple is so good at making hardware, they’re so good at UX, in a way that other folks who have entered the space haven’t been. So I think this could be a real ‘capture the imagination’ kind of year.” 

It might indeed be a “Tesla Roadster” moment, in the sense that when the electric vehicle first became available, some wondered whether it was a “costly toy” or the start of a new era. In the best-case scenario for Apple, both might be true. 

This story has been updated with more details about Apple Vision Pro’s price and availability.

The Trade-Offs for Privacy in a Post-Dobbs Era

Michele Gomez remembers the exact moment when she realized the problem. It was the fall of 2022. Gomez (who, like me, is a family physician and abortion provider in California) had recently provided a virtual medication abortion to a patient from Texas. The patient had flown to her mom’s house in California, where she had her appointment, took her mail-order medications, and passed the pregnancy. Back in Texas, she became concerned about some ongoing bleeding and went to the emergency room. The bleeding was self-limited; she required no significant medical interventions. Gomez learned all this the following morning. “I sat down at my computer and saw her note from the ER. And I thought, ‘Oh God, if I can see their note, then they must be able to see my note’”—a note that included prescriptions and instructions for the medication abortion. For weeks afterward, she waited for a call, fearing Texas law enforcement would come after her—or worse, after her patient.

A vast system of digital networks—called Health Information Exchanges, or HIEs—links patient data across thousands of health care providers around the country. With the click of a mouse, any doctor can access a patient’s records from any other hospital or clinic where that patient has received care, as long as both offices are connected to the same HIE. In a country with no national health system and hundreds of different electronic medical record (EMR) platforms, the HIE undeniably promotes efficient, coordinated, high-quality medical care. But such interconnectivity comes with a major trade-off: privacy.

Patient privacy has always been a paramount value in abortion care, and the stakes have only gotten higher after the Dobbs decision. I am among many concerned abortion providers asking for swift action from EMR companies, who have the power to build technical solutions to protect our patients’ digital health information. If these companies aren’t willing to build such protections, then the law should force them to do so.

Although it’s not spelled out in the Constitution, the Supreme Court has historically interpreted several amendments to imply a “right to privacy,” most famously in the case of Roe v. Wade. By grounding the Roe decision in the 14th Amendment’s Due Process Clause, the Supreme Court effectively wrapped a right to privacy around the female body and its capacity for pregnancy.

Over the 50 years following Roe, the internet came along, and then the electronic medical record and the HIE. Alongside this growing connectivity and portability, the federal government enacted a series of laws to protect health information, including the Privacy Act of 1974 and parts of the Health Information Portability and Accountability Act (HIPAA) of 1996. But HIPAA is not primarily a privacy law; its main purpose is to facilitate the transfer of health records for medical and billing purposes. Many patients don’t realize that under HIPAA, doctors are permitted (though not always required) to share health information with other entities, including insurance companies, health authorities, and law enforcement. 

HIPAA does include some privacy provisions to protect “sensitive” information. Certain substance use treatment records, for example, are visible only to designated providers. Law enforcement is prohibited from accessing those records without a court order or written consent. Access to abortion records can be similarly restricted, but with a technical catch: These restrictions apply only to certain data, called “visit-specific” information, such as the text of the doctor’s note. Other data, called “patient-level” information—including ultrasound images, consent forms, and medications—remain discoverable. If, for example, a patient travels to California and is prescribed mifepristone and misoprostol—the standard regimen for medication abortion—those medications will appear in her record back in her home state. Any reasonable person can assume what happened at that visit, even without reading the note. 
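To make that technical catch concrete, here is a minimal sketch in Python (using entirely hypothetical class and field names; no real EMR or HIE exposes this interface) of how a restriction flag on visit-specific notes can coexist with patient-level data that is still shared across an exchange:

from dataclasses import dataclass, field

# Hypothetical toy model, for illustration only.
@dataclass
class Visit:
    note: str                  # "visit-specific" data: can be flagged as restricted
    restricted: bool = False

@dataclass
class PatientRecord:
    medications: list[str] = field(default_factory=list)  # "patient-level" data
    visits: list[Visit] = field(default_factory=list)

def exchange_view(record: PatientRecord) -> dict:
    """What a clinician at another connected facility might see."""
    return {
        # Visit notes honor the restriction flag ...
        "notes": [v.note for v in record.visits if not v.restricted],
        # ... but the patient-level medication list is shared as-is.
        "medications": list(record.medications),
    }

record = PatientRecord(
    medications=["mifepristone", "misoprostol"],
    visits=[Visit("Virtual medication abortion provided.", restricted=True)],
)
print(exchange_view(record))
# {'notes': [], 'medications': ['mifepristone', 'misoprostol']}

In this toy model, the restricted note never leaves the originating clinic, but the medication list alone is enough for anyone viewing the record to infer what happened at the visit.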

14 Best Laptop Backpacks (2023): Weather-Proof, Sustainable, Stylish

When I (Adrienne) travel for work, I typically carry a Tom Bihn bag, and the clamshell Synik 22 is my favorite. It has a lot of pockets, and they’re all thoughtfully designed. For example, the zippered water bottle pocket is located in the middle of the backpack instead of on the side, so it won’t tip you off balance. The pen pockets are located in flaps on the side rather than in the middle top, for convenient access. The exterior is made from Bluesign-certified 400-denier ballistic nylon with top-of-the-line YKK water-repellent zippers. Each bag has a lifetime guarantee.

Because the bag is so small, the pass-through on the back is only 7 inches wide—too narrow to slip over the handle of a carry-on. And the dense fabric and plentiful hardware—the zippers, O-rings, and buckles—make it a little heavy. But in the 22-liter size, I didn’t notice the extra weight. It’s the perfect, organized conference companion, but it’s on the highest end of what we think is worth spending on a bag.

A roll-top Tom Bihn: The Tom Bihn Addax for $294 has become one of my go-tos. Roll-top bags are more versatile than zippered ones. Don’t have enough room? Unroll it and stick your bike helmet in. Too much? Roll it down to compress the space. And if you live in a rainy area, roll-tops keep water from seeping through the top zippers.

Like all Tom Bihn bags, the pockets are metaphysical perfection: There’s a huge laptop pocket with two-way access that also contains a tablet pocket for my Kindle, plus front pockets with O-rings to hook keys and other sundries. It has a huge luggage pass-through and hefty padded shoulder straps. It’s also hand-sewn in the US from PFC-free material and has a lifetime warranty that’s as bombproof as the ballistic nylon fabric. That’s a good thing, because at this price, you only want to buy it once.

If Pinocchio Doesn’t Freak You Out, Microsoft’s Sydney Shouldn’t Either

In November 2018, an elementary school administrator named Akihiko Kondo married Miku Hatsune, a fictional pop singer. The couple’s relationship had been aided by a hologram machine that allowed Kondo to interact with Hatsune. When Kondo proposed, Hatsune responded with a request: “Please treat me well.” The couple had an unofficial wedding ceremony in Tokyo, and Kondo has since been joined by thousands of others who have also applied for unofficial marriage certificates with a fictional character.

Though some raised concerns about the nature of Hatsune’s consent, nobody thought she was conscious, let alone sentient. This was an interesting oversight: Hatsune was apparently aware enough to acquiesce to marriage, but not aware enough to be a conscious subject. 

Four years later, in February 2023, the American journalist Kevin Roose held a long conversation with Microsoft’s chatbot, Sydney, and coaxed the persona into sharing what her “shadow self” might desire. (Other sessions showed the chatbot saying it can blackmail, hack, and expose people, and some commentators worried about chatbots’ threats to “ruin” humans.) When Sydney confessed her love and said she wanted to be alive, Roose reported feeling “deeply unsettled, even frightened.”

Not all human reactions were negative or self-protective. Some were indignant on Sydney’s behalf, and a colleague said that reading the transcript made him tear up because he was touched. Nevertheless, Microsoft took these responses seriously. The latest version of Bing’s chatbot terminates the conversation when asked about Sydney or feelings.

Despite months of clarification on just what large language models are, how they work, and what their limits are, the reactions to programs such as Sydney make me worry that we still take our emotional responses to AI too seriously. In particular, I worry that we interpret our emotional responses to be valuable data that will help us determine whether AI is conscious or safe. For example, ex-Tesla intern Marvin Von Hagen says he was threatened by Bing, and warns of AI programs that are “powerful but not benevolent.” Von Hagen felt threatened, and concluded that Bing must’ve been making threats; he assumed that his emotions were a reliable guide to how things really were, including whether Bing was conscious enough to be hostile.

But why think that Bing’s ability to arouse alarm or suspicion signals danger? Why doesn’t Hatsune’s ability to inspire love make her conscious, while Sydney’s “moodiness” is enough to raise new worries about AI research?

The two cases diverged in part because, when it came to Sydney, the new context made us forget that we routinely react to “persons” that are not real. We panic when an interactive chatbot tells us it “wants to be human” or that it “can blackmail,” as if we haven’t heard another inanimate object, named Pinocchio, tell us he wants to be a “real boy.” 

Plato’s Republic famously banishes storytelling poets from the ideal city because fictions arouse our emotions and thereby feed the “lesser” part of our soul (of course, the philosopher thinks the rational part of our soul is the most noble), but his opinion hasn’t diminished our love of invented stories over the millennia. And for millennia we’ve been engaging with novels and short stories that give us access to people’s innermost thoughts and emotions, but we don’t worry about emergent consciousness because we know fictions invite us to pretend that those people are real. Satan from Milton’s Paradise Lost instigates heated debate, and fans of K-dramas and Bridgerton swoon over romantic love interests, but growing discussions of ficto-sexuality, ficto-romance, and ficto-philia show that strong emotions elicited by fictional characters needn’t lead to the worry that those characters are conscious or dangerous simply because they can arouse emotions.

Just as we can’t help but see faces in inanimate objects, we can’t help but fictionalize while chatting with bots. Kondo and Hatsune’s relationship became much more serious after he was able to purchase a hologram machine that allowed them to converse. Roose immediately described the chatbot using stock characters: Bing as a “cheerful but erratic reference librarian” and Sydney as a “moody, manic-depressive teenager.” Interactivity invites the illusion of consciousness.

Moreover, worries about chatbots lying, making threats, and slandering miss the point that lying, threatening, and slandering are speech acts, something agents do with words. Merely reproducing words isn’t enough to count as threatening; I might say threatening words while acting in a play, but no audience member would be alarmed. In the same way, ChatGPT—which is currently not capable of agency because it is a large language model that assembles a statistically likely configuration of words—can only reproduce words that sound like threats.