The reintroduction of a MagSafe magnetic charging port, a staple of older Macs, is especially welcome if you tend to be clumsy and trip over your charging cables. When the cable is yanked, MagSafe releases it cleanly, so the laptop doesn’t go crashing to the floor.
That reversion to functionality has come with some sacrifice. The new MacBook Pros weigh 4.7 pounds and 3.5 pounds, respectively, about a half pound more than the previous generation. They’re also just a hair thicker, which means Apple has done some clever maneuvering of the machines’ internals to make these ports fit. Battery life, in some ways the only spec that matters, is supposed to be substantially better as well.
That’s due in no small part to the chip upgrade these laptops are getting. The new M1 Pro and M1 Max chips build on last year’s M1 chip, which was Apple’s first custom-designed processor for Macs and powered last year’s MacBook Air and 13-inch MacBook Pro. These new chips should offer massive gains in performance—though benchmark and speed tests will soon reveal just how big these boosts are—and in a series of not-so-subtle charts, Apple showed how the chips should outperform Intel-powered machines in almost every category. The company claims that the M1 Max’s 32-core GPU, for example, rivals even the most powerful laptops with discrete graphics processors.
Apple’s own apps have been optimized for the new chips, which will start rolling out when the new MacBook Pros ship at the end of the month, and some third-party developers were on hand (via pretaped videos) to vouch for the newfound power of the Macs. But these new MacBook Pros will still rely on last year’s emulator to run some x86 apps. Arguably, some of these apps are the most important element of the MacBook Pro: Buyers of Apple’s high-end hardware also tend to be the customers who will spend a lot of money on software, whether for their jobs or more casual use cases, and that is exactly who Apple is targeting with these $2,000-and-up machines.
“A lot of creative professionals have been anxious to see how the M1 was going to be developed for their needs,” says Mikako Kitagawa, research director at Gartner. “It’s not like all the apps they need are [optimized], but Apple has been working with key third-party developers like Adobe, so between the performance boost and key applications this could be the time for creative professionals to replace their devices.”
Kitagawa says she believes these new MacBook Pros won’t be high-volume items, considering their starting price of $2,000. Put another way: She doesn’t think that sales of the new MacBook Pros will alter Apple’s current share of global PC shipments.
But the new machines are still technologically—and symbolically—significant for Apple. She noted that Microsoft recently announced a new Surface Studio laptop, which is also aimed at designers, developers, and producers. That’s another laptop that’s unlikely to make a huge dent in the market, but it’s a chance for Microsoft to show off what it thinks it can offer creative consumers. Appealing to creatives, and to some extent counterculture, has long been part of Apple’s computer DNA. In recent years, it lost its way. To build MacBook Pros for the future, Apple rightly followed the path back to that past.
The robot was not at all real. Or it was very real, depending on whether you believe realness is closely related to physiology or whether you think this whole reality is a simulation. Which is to say, the robot was actually a human cosplaying as a humanoid robot.
The robot shuffled on stage during Tesla’s AI Day yesterday afternoon, a three-hour demo of autonomous car features and slides titled “Multi-Scale Feature Pyramid Fusion.” The big news out of the event was a new custom AI chip for data centers, and a supercomputing system called Dojo. Later in the livestream, Tesla founder and chief executive officer Elon Musk revealed that Tesla was working on this robot. People tuned in, because Musk. Then they laughed, because of the robot. But the joke was on them.
After first appearing stiff-armed and arthritic, the robot broke into dance. The fan fiction came to a fast end. Only a real live human could do the Charleston with such fluidity. The fabric of the robot’s all-white jumpsuit, with its accidentally stylish boat neck, creased as the robot danced. The human robot was having fun. Too much fun. (“Is the robot … Grimes?” I asked an editor.) Musk shooed them off the stage.
“The robot will be real,” Musk told the AI Day audience, in between his trademark titters. “We’ll probably have a prototype sometime next year that basically looks like this.” The demo was bad—transparently so. Musk was trolling us. The not-yet-a-robot was a stunt, a way to get people who normally wouldn’t pay attention to Tesla AI Day talking about Tesla AI Day. And the joke was layered: Implicit in Musk’s future assurance was the fact that the humanoid robot is not at all presently real, even if the human inside the robot outfit was; once the humanoid robot is real, it will obviate the humans who built it.
“This will be quite profound,” Musk said. “Because if you say, What is the economy? It is, at the foundation, it is labor.”
Will the humanoid robot ever ship, with its screenface, AI chip, eight cameras, 40 electromechanical actuators, and fit-model proportions? Who knows. Musk’s bizarre demo laid bare the truth of many new tech demos: They are a ruse, a storyboarded vision of the future held together by digital duct tape.
Anyone who has traveled to the annual CES in Las Vegas fully understands this. Reality is suspended amid the rolling displays, intelligent exoskeletons, cleaning robots, and self-driving vehicles that all seem to work so well but rarely sell. In 2016, Magic Leap released a video clip of a virtual whale splashing through a gymnasium floor, set to a soundtrack of oohs and aahs from children in the stands. This, too, was a ruse. Samsung has shown DSLR photos in faked demonstrations of its “smartphone cameras.” Apple’s more recent tech demos are more subtly artificial—suggesting a lifestyle that only a smallish percentage of the world’s population can maintain, promising seamless continuity between gadgets—but the very first iPhone demo was a total charade.
Tesla’s own electric Cybertruck, first unveiled in November of 2019, had a smashing first demo. Its release has been delayed until 2022.
Of course, some of these products actually do ship, and at the same time every year, pandemics and global chip shortages aside. That’s rarely what tech makers are selling you on in demos, though, the same way a friend trying to set you up on a date wouldn’t lead with, “They’re so punctual.”
They’re peddling the fantastic future, and, just maybe, the bridge that will cross the uncanny valley. They’re selling you on tech that will only deepen your sense of humanity, if you would only just embrace what they’re telling you. If only you got the joke. The dancing robot demo wasn’t real, but it will be. The robot human was real, but some day maybe they won’t be.
Lobbyists and trade groups for big tech companies and equipment manufacturers have long argued that giving consumers more access to the tools required to fix products, whether a smartphone or a car, poses safety and security risks. The debate has gotten especially heated as more products become internet-connected, adding a software element to repairs that in the past might have just required swapping parts.
The links to news reports in the White House’s fact sheet that back up its claims of stymied competition specifically point to the issues around cell phone repair, but the language of the order itself urges the FTC to broaden the right to repair by restricting “tech and other” companies from discouraging DIY tinkering. Such language indicates that the FTC’s regulatory target will be much bigger than the device in your pocket.
In an emailed response to the executive order, a spokesperson from John Deere claims the company “leads our industry in providing repair tools, spare parts, information guides, training videos and manuals needed to work on our machines.” But the spokesperson also says that while less than two percent of tractor repairs require a software update, the company still does not support the right to modify embedded software “due to risks associated with the safe operation of the equipment.”
Turn of the Screw
Proctor, of US PIRG, notes that it could still be a while before the FTC starts enforcing new repair rules, saying that the rule-making process is “not always an expeditious one.” He cites as an example the FTC’s finalization of a rule around “Made in the USA” labels falsely applied to products not manufactured in the US. (Congress first enacted legislation around “Made in the USA” claims back in 1994, but for years there was bipartisan consensus that this kind of fraud shouldn’t be subject to tough penalties. Just last week, the FTC codified the rules in such a way that violators would be penalized.)
“Right to Repair is even more complex than that case, and if this is just a directive towards rule-making, it might kick off another long process,” Proctor says. “Still, I’m hopeful that this is a mechanism that gets us to where we need to go a little faster.”
Sheehan from iFixit is more optimistic that the FTC might act quickly around Right to Repair, partly because the agency recently introduced a series of changes designed to streamline rule-making procedures—and partly because the order is coming directly from the White House. “Obviously we want the agency to move quickly on this, and pressure from the Biden administration does make that more possible,” Sheehan says.
A spokeswoman for the FTC declined to comment directly on the matter, instead pointing towards the White House’s statement and the report that the Commission already released in May.
In that report, the FTC concluded that products have, in fact, become harder to fix and maintain, and that “repair restrictions have … steered consumers into manufacturers’ repair networks to replace products before the end of their useful lives.” The FTC also noted that repair restrictions may also “place a greater financial burden on communities of color and lower-income Americans.”
But the FTC also warned in the May report that Right to Repair is a complicated issue, and that expanding consumers’ repair options, whether through industry initiatives or through legislation, “raises numerous issues that will warrant examination.”
Ultimately, the Right to Repair fight will likely still continue at the state level, and advocates plan to continue to lobby Congress for changes as well.
“I think, depending on the scope of the FTC rules, this may not be a substitute for what Congress can do and what states can do,” Sheehan says. As many as 25 states have considered Right to Repair legislation this year, but that, of course, doesn’t mean the bills in those states will be signed into law. A few states have what Sheehan calls “repair-related laws,” including California, Rhode Island, and Indiana. Right now, Massachusetts is the only state with an official Right to Repair law for automobiles, which won the vote by a large margin in 2012 and again in 2020, despite vocal opposition from a coalition of big automakers.
“Whatever rule the FTC passes, it will be up to the FTC to enforce,” Sheehan says. “Whereas state legislation can be enforced by state attorneys general, and occasionally they have more leeway or more resources to focus on these things than the FTC might in the context of all its other many priorities.”
Could drones provide this evidence for waste-management teams, letting local authorities know where dog poo is being dumped?
Ferdinand Wolf, creative director at DJI Europe, says they can. “Flight time has seen a big improvement in drone technology,” Wolf says. “From the original Phantom that maybe flew seven or eight minutes, now we have drones that easily fly 30-plus minutes, which is essential if you want to scout for dog poo or litter and not constantly land to recharge batteries.” Also, drones now routinely have multiple visual sensors to help navigate autonomously around parks or down country lanes without hitting trees and the like.
“And we can now run image recognition on the drone itself,” Wolf says. So the drones could be programmed to distinguish a dog poo from, say, a rock? “We have databases on the drone where it can look up and compare images. It can differentiate between a human being, a bicycle, a car or a ship. So, if you go further, this is similar. This is a piece of paper or this is the rock or this is a dog poo. If it can look up a database and say, OK, this is usually what dog poo looks like, then this is all technology that can be used for that.”
Talking about trash recognition in general, Zack Jackowski, chief engineer for Boston Dynamics’ Spot robot, puts it more simply: “The way the machine learning works, if you can visually recognize it as a distinct thing, you can train a robot to recognize it. If you have an easy time picking it out, a robot can have an easy time picking it out.”
“Of course, there’s a lot of different forms of poo that can look very different,” Wolf says. “Form and sizes and consistencies can vary a lot, if it’s on grass and has sunk down or decomposed – but for sure it’s possible.” The really good news is that Wolf says the crap dangling from branches is the easiest to identify. “Something like a bag hanging in a tree would be very easy to detect, and flag, because it will have a very similar shape and color.”
This is the sticking point. Drones would be ideal for flagging and tracking dog poo deposits, but not the actual cleanup. In 2017, a startup in the Netherlands claimed to have created two poop-scooping “Dogdrones,” but the idea never took off. Volunteers willing to help in the testing stages were, perhaps understandably, thin on the ground. Besides, the scooping drone of the pair was ground-based anyway.
“Picking up a bag might be something possible, I guess,” Wolf says. “Picking up the poo itself, with like a little shovel, that would be hard to implement. You need to increase the size of the drone, the utilities, then that will make everything bigger and more cumbersome.”
Robots to the Rescue
Robots are frequently envisioned as fulfilling jobs involving the three Ds: “dirty, dangerous, and dull.” Clearing up dog mess certainly ticks all these boxes. So for reliable ground clearance, what we really need is a robot that can go wherever dogs can. This could be one of the best use cases for Spot yet. Indeed, the robot has already been fitted with its Spot Arm for clearing up trash outdoors.
Boston Dynamics itself says there is interest in using “Spot + Spot Arm” to clean public spaces and roadsides, an operation essentially similar to the “fetch” behavior the BD engineers have already demonstrated.
Most of the early smart home inventions used automatic controls, making it possible to turn something on or off without lifting a finger. But they didn’t connect to anything else, and their functionality was limited. That would begin to change in 1983, when ARPANET, the earliest version of the internet, adopted the internet protocol suite (also known as TCP/IP). The protocol set standards for how digital data should be transmitted, routed, and received. Essentially, it laid the groundwork for the modern internet.
The first internet-connected “thing” to make use of this new protocol was a toaster. John Romkey, a software engineer and early internet evangelist, built one for the show floor of the 1990 Interop, a trade show for computers. Romkey dropped a few slices of bread into the toaster and, using a clunky computer, turned the toaster on. It would still be nearly a decade before anyone used the phrase “internet of things,” but Romkey’s magic little toaster showed what a world of internet-connected things might be like. (Of course, it wasn’t fully automated; a person still had to introduce the bread.) It was part gimmick, part proof of concept—and fully a preview of what was to come.
The term “internet of things” itself was coined in 1999, when Kevin Ashton put it in a PowerPoint presentation for Procter & Gamble. Ashton, who was then working in supply chain optimization, described a system where sensors acted like the eyes and ears of a computer—an entirely new way for computers to see, hear, touch, and interpret their surroundings.
As home internet became ubiquitous and Wi-Fi sped up, the dream of the smart home started to look more like a reality. Companies began to introduce more and more of these inventions: “smart” coffee makers to brew the perfect cup, ovens that bake cookies with precision timing, and refrigerators that automatically restocked expired milk. The first of these, LG’s internet-connected refrigerator, hit the market in 2000. It could take stock of shelf contents, mind expiration dates, and for some reason, came with an MP3 player. It also cost $20,000. As sensors became cheaper, these internet-connected devices became more affordable for more consumers. And the invention of smart plugs, like those made by Belkin, meant that even ordinary objects could become “smart”—or, at least, you could turn them on and off with your phone.
Any IoT system today contains a few basic components. First, there’s the thing outfitted with sensors. These sensors could be anything that collects data, like a camera inside a smart refrigerator or an accelerometer that tracks speed in a smart running shoe. In some cases, sensors are bundled together to gather multiple data points: a Nest thermostat contains a thermometer, but also a motion sensor; it can adjust the temperature of a room when it senses that nobody’s in it. To make sense of this data, the device has some kind of network connectivity (Wi-Fi, Bluetooth, cellular, or satellite) and a processor where it can be stored and analyzed. From there, the data can be used to trigger an action—like ordering more milk when the carton in the smart refrigerator runs out, or adjusting the temperature automatically given a set of rules.
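That sense-decide-act pipeline, sensors feeding a processor that decides and then triggers an action, can be sketched in a few lines of Python. Everything here (the function names, the simulated readings, the 62-degree away setpoint) is a hypothetical illustration of the pattern, not any real thermostat’s API.

```python
# A minimal sketch of the IoT loop described above: sensors produce
# readings, a rule decides on a target, and an actuator acts on it.
# All names and thresholds are hypothetical, for illustration only.

def decide(readings, setpoint=70, away_setpoint=62):
    """Pick a target temperature: if the motion sensor reports an
    empty room, fall back to an energy-saving setpoint."""
    if readings["motion_detected"]:
        return setpoint
    return away_setpoint

def act(target, current_temp):
    """Switch the (imaginary) heater on or off to chase the target."""
    return "heat_on" if current_temp < target else "heat_off"

# Simulated sensor bundle: a thermometer plus a motion sensor,
# like the pairing in a Nest-style thermostat.
readings = {"temperature_f": 65, "motion_detected": False}
target = decide(readings)                         # empty room -> 62
action = act(target, readings["temperature_f"])   # 65 >= 62 -> "heat_off"
```

In a real device the `readings` dict would be refreshed from hardware over Wi-Fi or Bluetooth, and `act` would drive a relay or send a cloud command, but the shape of the loop is the same.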
Most people didn’t start building an ecosystem of “smart” devices in their homes until the mass adoption of voice controls. In 2014, Amazon introduced the Echo, a speaker with a helpful voice assistant named Alexa built in. Apple had introduced Siri, its own voice assistant, years prior—but Siri lived on your phone, while Alexa lived inside the speaker and could control all of the “smart” devices in your house. Positioning a voice assistant as the centerpiece of the smart home had several effects: It demystified the internet of things for consumers, encouraged them to buy more internet-enabled gadgets, and encouraged developers to create more “skills,” or IoT commands, for these voice assistants to learn.
The same year that Amazon debuted Alexa, Apple came out with HomeKit, a system designed to facilitate interactions between Apple-made smart devices, sending data back and forth to create a network. These unifying voices have shifted the landscape away from single-purpose automations and toward a more holistic system of connected things. Tell the Google Assistant “goodnight,” for example, and the command can dim the lights, lock the front door, set the alarm system, and turn on your alarm clock. LG’s SmartThinQ platform connects many home appliances, so you can select a chocolate chip cookie recipe from the screen of your smart fridge and it’ll automatically preheat the oven. Manufacturers bill this as the future, but it’s also a convenient way to sell more IoT devices. If you already have an Amazon Echo, you might as well get some stuff for Alexa to control.
By 2014, the number of internet-connected devices would surpass the number of people in the world. David Evans, the former chief futurist at Cisco, estimated in 2015 that “an average 127 new things are connected to the internet” every second. Today, there are over 20 billion connected things in the world, according to estimates from Gartner. The excitement around the brave new internet-connected world has been matched with concern. All of these objects, brought to life like Pinocchio, have made the world easier to control: You can let the delivery man in the front door, or change the temperature inside the house, all with a few taps on a smartphone. But it’s also given our objects—and the companies that make them—more control over us.