Your Project Management Software Can’t Save You

When I worked as a copywriter at a dog-toy-slash-tech company, we used Airtable and Basecamp to organize our workflows. At my next job, the marketers made us learn Asana (“same as Airtable but much better”), but the product team pushed their work and sprints through Jira. I was laid off before I had to learn Jira, and at my next gig they swore by Airtable, which, phew, I already knew. But efficiencies were still being lost, apparently, and Airtable took the blame. As I was leaving that job, I heard someone mention that a new program, Trello, was going to replace Airtable and “change everything” for us. I came back as a contractor a few years later, and everything had not changed. The company had moved on from Trello and was now in the thrall of something called Monday.com. It, too, promised big changes.

If you work as an “individual contributor”—engineer, copywriter, designer, data analyst, marketer—in the modern white-collar workforce, you’ve probably encountered one of these project-management software (PM software) enterprises. Your onboarding will include an invitation to collaborate from the likes of Smartsheet, Notion, Udemy, ClickUp, Projectworks, Wrike, and Height. The list seems endless and yet is somehow still growing. More than a hundred proprietary apps and planners are currently vying for companies’ business, all promising increased productivity, seamless workflow, and unmatched agility. And if, like me, you’ve ping-ponged between a couple of jobs and project teams over a few years, you’ve had to come to terms with the fact that misunderstandings and confusion are natural in any large workforce. But in an increasingly digital, increasingly remote age of work, you might still imagine that a “killer app” really would win. And yet none of these PM software services make work work. The key to understanding these failures lies in the history of workplace efficiency itself—starting with the original business consultants.

Solving for Efficiency

Before the second Industrial Revolution, there was practically no such thing as productivity. (The word itself basically didn’t exist before 1900.) As factories became more complex and wage-laborers proliferated, the goal of capital became ensuring the efficiency of its labor. If connecting your workplace annoyance with too many Trello notifications to the plight of a machinist building lathes in the 1900s gives you vertigo, you aren’t alone. But the idea of making sure you’re working efficiently is as old as the idea of being employed.

And so the 1900s ushered in what we know as project management. According to Frederick Taylor’s The Principles of Scientific Management, the goal of managing workers “should be to secure the maximum prosperity for the employer, coupled with the maximum prosperity for each employee.” At the same time that Taylor, a mechanical engineer, rose from the factory floor to become one of America’s first workplace narcs (or consultants), another engineer, Henry Gantt, popularized and codified the basics of the Gantt chart, a simple bar chart that turns a project’s schedule into a set of lines on an x- and y-axis, with time moving from left to right. Also called the “waterfall” method, Gantt charts create a visual metaphor of tasks and their dependencies and contingencies so you can see each individual task in terms of when it should start and when it must be completed, relative to the overall project and the tasks coming before it.

Are you a graphic designer waiting for photos and copy to come in before you can design a banner ad? In many modern PM software apps, you can see those prerequisites laid out on Gantt charts offered by Monday.com, Wrike, Microsoft Project, and ClickUp. Asana also has Gantt templates.

Taylor and Gantt were figuring out how to manage the work of a factory machinist, whose job, like Lucy’s in the chocolate factory, typically involved one repeatable task. But the growth of the information worker means more generalists, consultants, analysts, and managers—and more hierarchy. On a construction project, for example, as long as the rebar is installed, the concrete team can pour a foundation. Similarly, the factory worker does not have to see the Gantt chart to fabricate their part of the widget; they only need to know what to do. They don’t have to participate in the creation of the chart. They don’t have to interact with the chart. In the formidable Hoover Dam project (its construction was organized via Gantt chart), the workers pouring concrete did not have to self-manage that task while also checking in with their Gantt charts. In the time before information work, task workers (individual contributors) did not have to self-govern; they were the governed.

Information work, on the other hand, is not so easily managed using the methods Gantt developed. In an information workforce, there are infinite vectors of feedback, debate, stakeholder approval, and revision, not to mention endless points of contact. (If you feel your place of business is swollen with managers, you’re not alone.) Software that mimics a bygone way of setting up project dominoes is the source of our workplace frustration and the beginning of do-it-all solutions that end up simply making more work.

Critical Paths to Road Maps to Endless Options

Did you know that the Manhattan Project is also part of the glorious history of project management? Increasingly complex problems need increasingly elegant solutions, and you can’t go from an idea to an atomic bomb in a few years without efficiently organized parallel paths of work. The observations of some engineers on the Manhattan Project led to the creation, in the late 1950s, of the critical path method, an algorithmic model that creates a mini-map (a bit like a decision tree) of all the pieces of a development process or project. Each node and path is given a time value, and a computer solves for the fastest (or cheapest) way to get to the end with all necessary tasks accomplished. Combine critical path with the US Navy’s PERT method, a similar system developed simultaneously, and project management had moved into the computer age. Around the same time, the kanban (Japanese for signboard) system was developed at Toyota to wring more efficiency out of lean manufacturing. A manual system of cards and signs, kanban also gained popularity.
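The underlying computation is simple enough to sketch. The snippet below is a minimal, illustrative Python version with invented tasks and durations (borrowing the banner-ad example from above); real schedulers also track slack, resources, and calendars, but the core move is the same: find the longest chain of dependent tasks, because the project can finish no sooner than that chain allows.

```python
# Minimal sketch of the critical-path idea: tasks with durations and
# prerequisites form a directed acyclic graph, and the longest chain of
# dependent tasks sets the shortest possible schedule.
# Task names and durations here are invented for illustration.

# task: (duration_in_days, list of prerequisite tasks)
tasks = {
    "design": (3, []),
    "copy":   (2, ["design"]),
    "photos": (4, ["design"]),
    "banner": (1, ["copy", "photos"]),
    "launch": (1, ["banner"]),
}

def critical_path(tasks):
    """Return (total_duration, ordered list of tasks on the critical path)."""
    finish = {}   # earliest finish time for each task (memoized)
    parent = {}   # the prerequisite that determines each task's start

    def earliest_finish(name):
        if name in finish:
            return finish[name]
        duration, prereqs = tasks[name]
        start, critical_prereq = 0, None
        for p in prereqs:
            f = earliest_finish(p)
            if f > start:                # the slowest prerequisite wins
                start, critical_prereq = f, p
        finish[name] = start + duration
        parent[name] = critical_prereq
        return finish[name]

    # The project ends when the slowest chain of tasks ends.
    end = max(tasks, key=earliest_finish)
    path, node = [], end
    while node is not None:              # walk back along the critical chain
        path.append(node)
        node = parent[node]
    return finish[end], list(reversed(path))

if __name__ == "__main__":
    total, path = critical_path(tasks)
    print(f"{total} days along the critical path: {' -> '.join(path)}")
    # prints: 9 days along the critical path: design -> photos -> banner -> launch
```

In PM-software terms, that longest chain is the one row of cards you cannot reshuffle without moving the deadline; everything else has some slack.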

By the time software development becomes a more legitimate field to be managed (in the 1980s), we also have Fred Brooks’ “law,” which states that adding manpower to delayed programming projects only further slows them down. The truth behind this idea—that “onboarding” complex tasks is more time-consuming than time-saving—is one of several factors that lead software developers to work in and develop scrums, a more flexible way of communicating during open-ended work projects, like programming. Scrums are possibly more revolutionary than critical path, kanban, or any of their predecessors because they offer a format suited to small teams working toward shorter-term goals. Scrums help programmers accomplish work quickly and then do the same on the next project.

You may look at a critical path chart and think: Hey, that sounds a lot like a product road map (a somewhat useful-looking combination of the waterfall part of a Gantt chart and the dependent-path layout of a critical path). Or you might consider a kanban board and think: OK, I can get used to this. But notice that Asana is advertising its fluency in kanban, critical path, and scrums, as well as in a newer term: agile. PM software represents itself like Frederick Taylor in the late 1800s, traveling from place to place and assuring factory owners that his system can be applied equally to joinery and industrial laundry. The difference is that Taylor had a one-system-fits-all solution; PM software sells itself as a jack of all systems and master of all, too.

The Hollywood Writers AI Deal Sure Puts a Lot of Trust in Studios to Do the Right Thing

I’ve been in the entertainment industry since I was nine. I joined the Screen Actors Guild (SAG) when I was 11 in 1977, the Writers Guild of America (WGA) when I was 22, and the Directors Guild of America (DGA) the following year. I got my start as a child actor on Broadway, studied film at NYU, then went on to act in movies like The Lost Boys and the Bill & Ted franchise while writing and directing my own narrative work. I’ve lived through several labor crises and strikes, but none like our current work shutdown, which began last spring when all three unions’ contracts were simultaneously due for renegotiation and the Alliance of Motion Picture and Television Producers (AMPTP) refused their terms.

The unifying stress point for labor is the devaluing of the worker, which reached a boiling point with the rapid advancement of highly sophisticated and ubiquitous machine learning tools. Actors have been replaced by AI replications of their likenesses, or their voices have been stolen outright. Writers have seen their work plagiarized by ChatGPT, directors’ styles have been scraped and replicated by Midjourney, and all areas of crew are ripe for exploitation by studios and Big Tech. All of this laid the groundwork for issues pertaining to AI to become a major flashpoint in this year’s strikes. Last summer, the DGA reached an agreement with the AMPTP, and on Tuesday the WGA struck its own important deal. Both include terms the unions hope will meaningfully protect their labor from being exploited by machine-learning technology. But these deals, while a determined start, seem unlikely to offer expansive enough protections for artists given how much studios have invested in this technology already.

The DGA’s contract insists that AI is not a person and can’t replace duties performed by members. The WGA’s language, while more detailed, is fundamentally similar, stating that “AI can’t write or rewrite literary material, and AI-generated material will not be considered source material” and demanding that studios “must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material.” Their contract also adds that the union “reserves the right to assert that exploitation of writers’ material to train AI is prohibited.”

But studios are already busy developing myriad uses for machine-learning tools that are both creative and administrative. Will they halt that development, knowing that their own copyrighted product is in jeopardy from machine-learning tools they don’t control and that Big Tech monopolies, all of which could eat the film and TV industry whole, will not halt their AI development? Can the government get Big Tech to rein it in when those companies know that China and other global entities will continue advancing these technologies? All of which leads to the question of proof.

It’s hard to imagine that the studios will tell artists the truth when asked to dismantle their AI initiatives, and attribution is all but impossible to prove with machine-learning outputs. Likewise, it’s difficult to see how to prevent these tools from learning on whatever data the studios want. It’s already standard practice for corporations to act first and beg forgiveness later, and one should assume they will continue to scrape and ingest all the data they can access, which is all the data. The studios will grant some protections for highly regarded top earners. But these artists are predominantly white and male, a fraction of the union membership. There will be little to no protection for women, people of color, LGBTQIA+ people, and other marginalized groups, as in all areas of the labor force. I don’t mean to begrudge the DGA and WGA the work of crafting these terms, even if they may not adequately capture the scope of the technology. But we can go further—and SAG has the opportunity to do so in its ongoing negotiations.

SAG is still very much on strike, with plans to meet with the AMPTP next on Monday. In their meeting, I hope they can raise the bar another notch with even more specific and protective language.

It would be good to see terminology that accepts that AI will be used by the studios, regardless of any terms thrown at them. This agreement should also reflect an understanding that studios are as threatened by the voracious appetites of Big Tech as the artists, that the unions and the AMPTP are sitting on opposite sides of the same life raft. To that end, contractual language that recognizes mutual needs will serve everyone’s interest, with agreements between AI users and those impacted by its use on all sides of our industry. It would also be helpful to see language that addresses how AI’s inherent biases, which reflect society’s inherent biases, could be an issue. We must all make a pact to use these technologies with those realities and concerns in mind.

Mostly, I hope everyone involved takes the time to learn how these technologies work, what they can and cannot do, and gets involved in an industrial revolution that, like anything created by humans, can provide tremendous benefit as well as enormous harm. The term Luddite is often used incorrectly to describe an exhausted and embittered populace that wants technology to go away. But the actual Luddites were highly engaged with technology and skilled at using it in their work in the textile industry. They weren’t an anti-tech movement but a pro-labor movement, fighting to prevent the exploitation and devaluation of their work by rapacious company overlords. If you want to know how to fix the problems we face from AI and other technology, become genuinely and deeply involved. Become a Luddite.


Immersive Tech Obscures Reality. AI Will Threaten It

Last week, Amazon announced it was integrating AI into a number of products—including smart glasses, smart home systems, and its voice assistant, Alexa—that help users navigate the world. This week, Meta will unveil its latest AI and extended reality (XR) features, and next week Google will reveal its next line of Pixel phones equipped with Google AI. If you thought AI was already “revolutionary,” just wait until it’s part of the increasingly immersive, responsive, personal devices that power our lives.

AI is already hastening technology’s trend toward greater immersion, blurring the boundaries between the physical and digital worlds and allowing users to easily create their own content. When combined with technologies like augmented or virtual reality, it will open up a world of creative possibilities, but also raise new issues related to privacy, manipulation, and safety. In immersive spaces, our bodies often forget that the content we’re interacting with is virtual, not physical. This is great for treating pain and training employees. However, it also means that VR harassment and assault can feel real, and that disinformation and manipulation campaigns are more effective.

Generative AI could worsen manipulation in immersive environments, creating endless streams of interactive media personalized to be as persuasive, or deceptive, as possible. To prevent this, regulators must avoid the mistakes they’ve made in the past and act now to ensure that there are appropriate rules of the road for its development and use. Without adequate privacy protections, integrating AI into immersive environments could amplify the threats posed by these emerging technologies.

Take misinformation. With all the intimate data generated in immersive environments, actors motivated to manipulate people could hypercharge their use of AI to create influence campaigns tailored to each individual. One study by pioneering VR researcher Jeremy Bailenson shows that by subtly editing photos of political candidates’ faces to appear more like a given voter, it’s possible to make that person more likely to vote for the candidate. The threat of manipulation is exacerbated in immersive environments, which often collect body-based data such as head and hand motion. That information can reveal sensitive details like a user’s demographics, habits, and health, and can be used to build detailed profiles of users’ interests, personalities, and characteristics. Imagine a chatbot in VR that analyzes data about your online habits and the content your eyes linger on to determine the most convincing way to sell you on a product, politician, or idea, all in real time.

AI-driven manipulation in immersive environments will empower nefarious actors to conduct influence campaigns at scale, personalized to each user. We’re already familiar with deepfakes that spread disinformation and fuel harassment, and microtargeting that drives users toward addictive behaviors and radicalization. The additional element of immersion makes it even easier to manipulate people.

To mitigate the risks associated with AI in immersive technologies and provide individuals with a safe environment to adopt them, clear and meaningful privacy and ethical safeguards are necessary. Policymakers should pass strong privacy laws that safeguard users’ data, prevent unanticipated uses of this data, and give users more control over what is collected and why. In the meantime, with no comprehensive federal privacy law in place, regulatory agencies like the US Federal Trade Commission (FTC) should use their consumer protection authority to guide companies on what kinds of practices are “unfair and deceptive” in immersive spaces, particularly when AI is involved. Until more formal regulations are introduced, companies should collaborate with experts to develop best practices for handling user data, govern advertising on their platforms, and design AI-generated immersive experiences to minimize the threat of manipulation.

As we wait for policymakers to catch up, it is critical for people to become educated on how these technologies work, the data they collect, how that data is used, and what harm they may cause individuals and society. AI-enabled immersive technologies are increasingly becoming part of our everyday lives, and are changing how we interact with others and the world around us. People need to be empowered to make these tools work best for them—and not the other way around.


Is the Physics of Time Actually Changing?

Time is not to be trusted. This should come as news to no one.

Yet recent times have left people feeling betrayed that the reliable metronome laying down the beat of their lives has, in a word, gone bonkers. Time sulked and slipped away, or slogged to a stop, rushing ahead or hanging back unaccountably; it no longer came in tidy lumps clearly clustered in well-defined categories: past, present, future.

“Time doesn’t make sense anymore,” a redditor lately lamented. “It feels quicker. Days, weeks, months it’s going by at 2x speed.” Hundreds agreed—and blamed the pandemic.

I’m surprised anyone is surprised. No one understands time. Time is a notorious trickster, evading the best efforts of scientists to pin it down for thousands of years. Psychologists call it a quagmire. Physicists say it’s a mess, hopeless, the ultimate terrorist. A failure of imagination. There’s nothing new about time being nuts.

Intrigued by the pervasive sense of pandemic-induced time distortion, psychologists at first speculated that loss of temporal landmarks was at work: office, gym, pulling on of pants. Words such as “Blursday” crept into the vocabulary, along with “polycrisis” and “permacrisis,” referring to the plethora of perturbances creating instability, pushing time out of sync: war, climate, politics.

Yet for all the newish research involving linguistics, neuroscience, psychology, scientists have made no real progress. We still know pretty much what we’ve always known: Scary movies and skydiving make time seem eternal, as does waiting for rewards (that call from the Nobel committee) or being bored (are we there yet?). In contrast, being happily immersed in some task (“flow”), facing deadlines, running for a bus, getting old, can make time run fast.

Attempts to find a biological mechanism for time—a single stopwatch in the brain—have likewise gotten nowhere. Rather, the brain teems with timekeepers, tick-tocking at different rates, measuring milliseconds and decades, keeping track of breath, heartbeat, body movements, information from the senses, predictions for the future, memories.

“There are thousands of possible intricate answers, all depending on what exactly scientists are asking,” explained one neuroscientist, sounding much like a physicist—physics being the realm of science that routinely slices time into slivers of seconds, describes the universe a trillionth of a trillionth of a second after its birth, yet still doesn’t have a clue how to think about it.

Even the late, great physicist John Wheeler, who coined the term black hole for a thing made only of spacetime, was stumped by time itself. He once admitted he couldn’t do better than quote a bit of graffiti he’d read on a men’s room wall: “Time is nature’s way of keeping everything from happening at once.”

Philosophers have long told us that time is an illusion; modern physicists agree. That doesn’t add much insight. Illusions are stories the brain creates to make sense of confusing information, the chaos out there and within. This describes nearly everything we think we know. Without time, there’s no way of making a narrative; there’s no way of making a universe.

Why Aren’t Disabled Astronauts Exploring Space?

Today, young people are becoming disabled in record numbers with all the various impacts of long Covid, which is estimated to affect between 8 and 25 percent of people who have been infected. The disabled future is coming to pass now, and we need to create inclusive and accessible environments for all kinds and ages of disabled people to deal with it.

Beyond Covid, pollution is increasing rates of environmentally produced disability—higher levels and lower onset ages of different types of cancers, as well as rising rates of asthma, chemical sensitivities, and autoimmune disabilities, some of which can come from smog and conditions of poor air quality. The future is also disabled for the planet itself. Sunaura Taylor, a fellow disabled scholar and an animal and environmental activist, writes powerfully of the “disabled ecologies” that constitute the landscapes we have impaired. Her case study is the Superfund site in Tucson, Arizona, which contaminated local groundwater and, 40 years later, is still affecting the land and surrounding communities. She thinks disabled people have important insight into how to live, age, and exist with disabled ecologies. She reminds us that we can’t just get rid of our land, our environment. We have to learn how to live in a world we have disabled.

Even with hopeful futures like that of space travel, we can expect the production of disability. Space is already disabling for humans. Just as the built environment on Earth is not suited for disabled bodies, space as an environment is not suited to any human bodies. Every astronaut comes back from the low gravity of space with damage to their bones and eyes—and the longer they are off Earth’s surface, the worse the damage. Some things can be restored over time, but some changes are long-lasting. These realities are absent from futurist writing about technology, which is framed as simply magicking away the disabling effects of space travel.

This is why technofuturists’ discussions of “The End of Disability” are so silly. Disability isn’t ending; we’re going to see more and newer forms of disability in the future. This doesn’t mean that all medical projects aimed at treating disease and disability are unpromising. But we need to prepare for the disabled future: becoming more comfortable with other people’s disabilities, accepting the fact that we ourselves will eventually be disabled (if we aren’t already), learning to recognize and root out ableism—these are all moves toward building a better future for everyone. Planning for the future in a realistic way requires embracing the existence, and indeed the powerful role, of disabled people in it. We must rid ourselves of technoableism—the harmful belief that technology is a “solution” for disability—and instead pay overdue attention to the ways that disabled communities make and shape the world, live with loss and navigate hostility, and creatively adapt.

The promise of disabled space travel is a particularly potent case study. Deaf-and-disabled-led literary journal The Deaf Poets Society asked us to dream in 2017 with their #CripsInSpace special issue. Guest edited by Alice Wong and Sam de Leve, this issue was announced with a video of de Leve showing us how they are specially suited for space—since, as wheelchair users, they were already trained to push off of kitchen counters and walls to get where they wanted to go. They also pointed out that while most kids can dream of being astronauts, disabled people are usually given fewer options, even early in life. So they asked us to dream, write, and create art: The issue features short stories, prose, and poetry in which people think about how they are better suited for going to the stars.

Others have also considered disabled space travel and disabled futures. In 2018, blind linguist Sheri Wells-Jensen (now the 2023 Baruch S. Blumberg NASA/Library of Congress Chair in Astrobiology, Exploration, and Scientific Innovation) made “The Case for Disabled Astronauts” in Scientific American. She wrote about how useful it would be to have a totally blind crew member aboard. Spacesuits would need to be better designed to transmit tactile information, but a blind astronaut would be unaffected by dim or failed lighting or vision loss from smoke, and would be able to respond unimpeded, unclouded, to such an emergency—Wells-Jensen refers to a problem on the Mir where they couldn’t find the fire extinguisher when the lights went out.