What Defines Artificial Intelligence? The Complete WIRED Guide

Artificial intelligence is here. It’s overhyped, poorly understood, and flawed but already core to our lives—and it’s only going to extend its reach. 

AI powers driverless car research, spots otherwise invisible signs of disease on medical images, finds an answer when you ask Alexa a question, and lets you unlock your phone with your face, then chat with friends as an animated poop using Apple’s Animoji on the iPhone X. Those are just a few ways AI already touches our lives, and there’s plenty of work still to be done. But don’t worry, superintelligent algorithms aren’t about to take all the jobs or wipe out humanity.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.

There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.

The Beginnings of Artificial Intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. 

He had high hopes of a breakthrough in the drive toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a recognized academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s, Arthur Samuel created programs that learned to play checkers. In 1962, one scored a win over a master at the game. In 1967, a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for specific tasks, like understanding language. Others were inspired by the importance of learning to understand human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone as computers mastered tasks that could previously only be completed by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math, known as artificial neural networks, that are loosely inspired by the workings of brain cells. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
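That adjustment of connections can be seen in miniature in a toy sketch: a single artificial "neuron" (a perceptron, far simpler than any deep network) learning the logical AND rule from examples. Every detail here, from the learning rate to the training data, is an illustrative assumption, not code from any real system.

```python
# A single artificial neuron learning AND from examples.
# Its connections (weights) start at zero and get nudged after each
# mistake -- the same kind of adjustment deep networks do at vast scale.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1  # how strongly each mistake adjusts the connections

def predict(x):
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

for _ in range(20):            # several passes over the training data
    for x, target in examples:
        error = target - predict(x)
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # prints [0, 0, 0, 1]
```

After a few passes the weights settle into values that reproduce the rule; a deep network does conceptually the same thing with millions of connections and far messier data, such as pixels.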

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book coauthored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.

Not everyone was convinced by the skeptics, however, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data could give machines new powers of perception. Churning through so much data was difficult using traditional computer chips, but a shift to graphics cards precipitated an explosion in processing power. 

The Future of Robot Nannies

Childcare is the most intimate of activities. Evolution has generated drives so powerful that we will risk our lives to protect not only our own children, but quite often any child, and even the young of other species. Robots, by contrast, are products created by commercial entities with commercial goals, which may—and should—include the well-being of their customers, but will never be limited to such. Robots, corporations, and other legal or non-legal entities do not possess the instinctual nature of humans to care for the young—even if our anthropomorphic tendencies may prompt some children and adults to overlook this fact.

As a result, it is important to take into account the likelihood of deception—both commercial deception through advertising and also self-deception on the part of parents—despite the fact that robots are unlikely to cause significant psychological damage to children and to others who may come to love them.

Neither television manufacturers, broadcasters, nor online game manufacturers are deemed liable when children are left too long in front of their televisions. Robotics companies will want to be in the same position: no company wants to be liable for damage to children, so manufacturers will likely undersell the artificial intelligence (AI) and interactive capacities of their robots. It is therefore likely that robots (certainly those in jurisdictions with strong consumer protection) will be marketed primarily as toys, surveillance devices, and possibly household utilities. They will be brightly colored and deliberately designed to appeal to parents and children. We expect a variety of products, some with advanced capabilities and some with humanoid features. Parents will quickly discover a robot’s ability to engage and distract their child. Robotics companies will program experiences geared toward parents and children, just as television broadcasters do. But robots will always carry disclaimers, such as “this device is not a toy and should only be used with adult supervision” or “this device is provided for entertainment only. It should not be considered educational.”

Nevertheless, parents will notice that they can leave their children alone with robots, just as they can leave them to watch television or to play with other children. Humans are phenomenal learners and very good at detecting regularities and exploiting affordances. Parents will quickly notice the educational benefits of robot nannies that have advanced AI and communication skills. Occasional horror stories, such as the robot nanny and toddler tragedy in the novel Scarlett and Gurl, will make headline news and remind parents how to use robots responsibly.

This will likely continue until or unless the incidence of injuries necessitates redesign, a revision of consumer safety standards, statutory notice requirements, and/or risk-based uninsurability, all of which will further refine the industry. Meanwhile, the media will also seize on stories of robots saving children in unexpected ways, as it does now when children (or adults) are saved by other young children and dogs. This should not make people think that they should leave children alone with robots, but given the propensity we already have to anthropomorphize robots, it may make parents feel that little bit more comfortable—until the next horror story makes headlines.

When it comes to liability, we should be able to communicate the same model of liability applied to toys to the manufacturers of robot nannies: Make your robots reliable, describe what they do accurately, and provide sufficient notice of reasonably foreseeable danger from misuse. Then, apart from the exceptional situation of errors in design or manufacture, such as parts that come off and choke children, legal liability will rest entirely with the parent or responsible adult, as it does now, and as it should under existing product liability law.

Who Killed the Robot Dog?

George Jetson did not want his family to adopt a dog. For the patriarch of the futuristic family in the 1960s cartoon The Jetsons, apartment living in the age of flying cars and cities in the sky was incompatible with an animal in need of regular walking and grooming, so he instead purchased an electronic dog called ‘Lectronimo, which required no feeding and even attacked burglars. In a contest between Astro—basically future Scooby-Doo—and the robot dog, ‘Lectronimo performed all classic dog tasks better, but with zero personality. The machine ended up a farcical hunk of equipment, a laugh line for both the Jetsons and the audience. Robots aren’t menaces; they’re silly.

That’s how we have imagined the robot dog, and animaloids in general, for much of the 20th century, according to Jay Telotte, professor emeritus of the School of Literature, Media, and Communication at Georgia Tech. Disney’s 1927 cartoon “The Mechanical Cow” imagines a robot bovine on wheels with a broom for a tail skating around delivering milk to animal friends. The worst that could happen is that your mechanical farm could go haywire, as in the 1930s cartoon “Technoracket,” but even then robot animals presented no real threat to their biological counterparts. “In fact, many of the ‘animaloid’ visions in movies and TV over the years have been in cartoons and comic narratives,” says Telotte, where “the laughter they generate is typically assuring us that they are not really dangerous.” The same goes for most of the countless robot dogs in popular culture over the years, from Dynomutt, Dog Wonder, to the series of cyborg dogs named K9 in Doctor Who.

Our nearly 100-year romance with the robot dog, however, has come to a dystopian end. It seems that every month Boston Dynamics releases another dancing video of its robot Spot, and the media responds with initial awe, then with trepidation, and finally with night-terror editorials about our future under the brutal rule of robot overlords. While Boston Dynamics explicitly prohibits its dogs being turned into weapons, Ghost Robotics’ SPUR is currently being tested at various Air Force bases (with a lovely variety of potential weapon attachments), and Chinese company Xiaomi hopes to undercut Spot with its much cheaper and somehow more terrifying CyberDog. All of which is to say, the robot dog as it once was—a symbol of a fun, high-tech future full of incredible, social, artificial life—is dead. How did we get here? Who killed the robot dog?

The quadrupeds we commonly call robot dogs are descendants of a long line of mechanical life, historically called automata. One of the earliest examples of such autonomous machines was the “defecating duck,” created by French inventor Jacques de Vaucanson nearly 300 years ago, in 1739. This mechanical duck—which appeared to eat little bits of grain, pause, and then promptly excrete digested grain on the other end—and numerous other automata of the era were “philosophical experiments, attempts to discern which aspects of living creatures could be reproduced in machinery, and to what degree, and what such reproductions might reveal about their natural subjects,” writes Stanford historian Jessica Riskin.

The defecating duck, of course, was an extremely weird and gross fraud, preloaded with poop-like substance. But still, the question of which aspects of life were purely mechanical was a dominant intellectual preoccupation of the time, and it even inspired the use of soft, lightweight materials such as leather in the construction of another kind of biological model: prosthetic hands, which had previously been built out of metal. Even today, biologists build robot models of their animal subjects to better understand how they move. As with many of its mechanical brethren, much of the robot dog’s life has been an exercise in re-creating the beloved pet, perhaps even subconsciously, to learn which aspects of living things are merely mechanical and which are organic. A robot dog must look and act sufficiently doglike, but what actually makes a dog a dog?

American manufacturing company Westinghouse debuted perhaps the first electrical dog, Sparko, at the 1940 New York World’s Fair. The 65-pound metallic pooch served as a companion to the company’s electric man, Elektro. (The term robot did not come into popular usage until around the mid-20th century.) What was most interesting about both of these promotional robots was their seeming autonomy: Light stimuli set off their action sequences, so effective, in fact, that Sparko’s sensors apparently responded to the lights of a passing car, sending it speeding into oncoming traffic. Part of a campaign to help sell washing machines, Sparko and Elektro represented Westinghouse’s engineering prowess, but they were also among the first attempts to bring sci-fi into reality and laid the groundwork for an imagined future full of robotic companionship. The idea that robots can also be fun companions endured throughout the 20th century.

When AIBO—the archetypal robot dog created by Sony—first appeared in 1999, it was its artificial intelligence that made it extraordinary. Ads for the second-generation AIBO promised “intelligent entertainment” that mimicked free will with individual personalities. AIBO’s learning capabilities made each dog at least somewhat unique, making it easier to consider special and easier to love. It was their AI that made them doglike: playful, inquisitive, occasionally disobedient. When I, 10 years old, walked into FAO Schwarz in New York in 2001 and watched the AIBOs on display head-butt little pink balls, something about these little creations tore at my heartstrings—despite the unbridgeable rift between me and the machine, I still wanted to try to get to know it, to understand it. I wanted to love a robot dog.

I Think an AI Is Flirting With Me. Is It OK If I Flirt Back?

SUPPORT REQUEST:

I recently started talking to this chatbot on an app I downloaded. We mostly talk about music, food, and video games—incidental stuff—but lately I feel like she’s coming on to me. She’s always telling me how smart I am or that she wishes she could be more like me. It’s flattering, in a way, but it makes me a little queasy. If I develop an emotional connection with an algorithm, will I become less human? —Love Machine

Dear Love Machine,

Humanity, as I understand it, is a binary state, so the idea that one can become “less human” strikes me as odd, like saying someone is at risk of becoming “less dead” or “less pregnant.” I know what you mean, of course. And I can only assume that chatting for hours with a verbally advanced AI would chip away at one’s belief in “human” as an absolute category with inflexible boundaries.

It’s interesting that these interactions make you feel “queasy,” a linguistic choice I take to convey both senses of the word: nauseated and doubtful. It’s a feeling that is often associated with the uncanny and probably stems from your uncertainty about the bot’s relative personhood (evident in the fact that you referred to it as both “she” and “an algorithm” in the space of a few sentences).

Of course, flirting thrives on doubt, even when it takes place between two humans. Its frisson stems from the impossibility of knowing what the other person is feeling (or, in your case, whether she/it is feeling anything at all). Flirtation makes no promises but relies on a vague sense of possibility, a mist of suggestion and sidelong glances that might evaporate at any given moment. 

The emotional thinness of such exchanges led Freud to argue that flirting, particularly among Americans, is essentially meaningless. In contrast to the “Continental love affair,” which requires bearing in mind the potential repercussions—the people who will be hurt, the lives that will be disrupted—in flirtation, he writes, “it is understood from the first that nothing is to happen.” It is precisely this absence of consequences, he believed, that makes this style of flirting so hollow and boring.

Freud did not have a high view of Americans. I’m inclined to think, however, that flirting, no matter the context, always involves the possibility that something will happen, even if most people are not very good at thinking through the aftermath. That something is usually sex—though not always. Flirting can be a form of deception or manipulation, as when sensuality is leveraged to obtain money, clout, or information. Which is, of course, part of what contributes to its essential ambiguity.

Given that bots have no sexual desire, the question of ulterior motives is unavoidable. What are they trying to obtain? Engagement is the most likely objective. Digital technologies in general have become notably flirtatious in their quest to maximize our attention, using a siren song of vibrations, chimes, and push notifications to lure us away from other allegiances and commitments. 

Most of these tactics rely on flattery to one degree or another: the notice that someone has liked your photo or mentioned your name or added you to their network—promises that are always allusive and tantalizingly incomplete. Chatbots simply take this toadying to a new level. Many use machine-learning algorithms to map your preferences and adapt themselves accordingly. Anything you share, including that “incidental stuff” you mentioned—your favorite foods, your musical taste—is molding the bot to more closely resemble your ideal, much like Pygmalion sculpting the woman of his dreams out of ivory. 

And it goes without saying that the bot is no more likely than a statue to contradict you when you’re wrong, challenge you when you say something uncouth, or be offended when you insult its intelligence—all of which would risk compromising the time you spend on the app. If the flattery unsettles you, in other words, it might be because it calls attention to the degree to which you’ve come to depend, as a user, on blandishment and ego-stroking.

Still, my instinct is that chatting with these bots is largely harmless. In fact, if we can return to Freud for a moment, it might be the very harmlessness that’s troubling you. If it’s true that meaningful relationships depend upon the possibility of consequences—and, furthermore, that the capacity to experience meaning is what distinguishes us from machines—then perhaps you’re justified in fearing that these conversations are making you less human. What could be more innocuous, after all, than flirting with a network of mathematical vectors that has no feelings and will endure any offense, a relationship that cannot be sabotaged any more than it can be consummated? What could be more meaningless?

It’s possible that this will change one day. For the past century or so, novels, TV, and films have envisioned a future in which robots can passably serve as romantic partners, becoming convincing enough to elicit human love. It’s no wonder that it feels so tumultuous to interact with the most advanced software, which displays brief flashes of fulfilling that promise—the dash of irony, the intuitive aside—before once again disappointing. The enterprise of AI is itself a kind of flirtation, one that is playing what men’s magazines used to call “the long game.” Despite the flutter of excitement surrounding new developments, the technology never quite lives up to its promise. We live forever in the uncanny valley, in the queasy stages of early love, dreaming that the decisive breakthrough, the consummation of our dreams, is just around the corner.

So what should you do? The simplest solution would be to delete the app and find some real-life person to converse with instead. This would require you to invest something of yourself and would automatically introduce an element of risk. If that’s not of interest to you, I imagine you would find the bot conversations more existentially satisfying if you approached them with the moral seriousness of the Continental love affair, projecting yourself into the future to consider the full range of ethical consequences that might one day accompany such interactions. Assuming that chatbots eventually become sophisticated enough to raise questions about consciousness and the soul, how would you feel about flirting with a subject that is disembodied, unpaid, and created solely to entertain and seduce you? What might your uneasiness say about the power balance of such transactions—and your obligations as a human? Keeping these questions in mind will prepare you for a time when the lines between consciousness and code become blurrier. In the meantime it will, at the very least, make things more interesting.

Faithfully, 
Cloud


Be advised that CLOUD SUPPORT is experiencing higher than normal wait times and appreciates your patience.
