OH, THE UNTAINTED optimism of 2014. In the spring of that year, the good Swedes at Volvo introduced Drive Me, a program to get regular Josefs, Frejas, Joeys, and Fayes into autonomous vehicles. By 2017, Volvo executives promised, the company would distribute 100 self-driving SUVs to families in Gothenburg, Sweden. The cars would be able to ferry their passengers through at least 30 miles of local roads, in everyday driving conditions—all on their own. “The technology, which will be called Autopilot, enables the driver to hand over the driving to the vehicle, which takes care of all driving functions,” said Erik Coelingh, a technical lead at Volvo.
Now, in the waning weeks of 2017, Volvo has pushed back its plans. By four years. Automotive News reports the company now plans to put 100 people in self-driving cars by 2021, and “self-driving” might be a stretch. The guinea pigs will start off testing the sort of semi-autonomous features available to anyone willing to pony up for a new Volvo (or Tesla, Cadillac, Nissan, or Mercedes).
“On the journey, some of the questions that we thought were really difficult to answer have been answered much faster than we expected,” Marcus Rothoff, the carmaker’s autonomous driving program director, told the publication. “And in some areas, we are finding that there were more issues to dig into and solve than we expected.” Namely, price. Rothoff said the company was loath to nail down the cost of its sensor set before it knew how it would work, so Volvo couldn’t quite determine what people would pay for the privilege of riding in or owning one. CEO Hakan Samuelsson has said self-driving functionality could add about $10,000 to the sticker price.
Volvo’s retreat is just the latest example of a company cooling on optimistic self-driving car predictions. In 2012, Google CEO Sergey Brin said even normies would have access to autonomous vehicles in fewer than five years—nope. Those who shelled out an extra $3,000 for Tesla’s Enhanced Autopilot are no doubt disappointed by its non-appearance, nearly six months after its due date. New Ford CEO Jim Hackett recently moderated expectations for the automaker’s self-driving service, which his predecessor said in 2016 would be deployed at scale by 2021. “We are going to be in the market with products in that time frame,” he told the San Francisco Chronicle. “But the nature of the romanticism by everybody in the media about how this robot works is overextended right now.”
The scale-backs haven’t dampened the enthusiasm for money-throwing. Venture capital firm CB Insights estimates self-driving car startups—ones building autonomous driving software, driver safety tools, and vehicle-to-vehicle communications, and stockpiling and crunching data while doing it—have sucked in more than $3 billion in funding this year.
To track the evolution of any major technology, research firm Gartner’s “hype cycle” methodology is a handy guide. You start with an “innovation trigger,” the breakthrough, and soon hit the “peak of inflated expectations,” when the money flows and headlines blare.
And then there’s the trough of disillusionment, when things start failing, falling short of expectations, and hoovering up less money than before. This is where the practical challenges and hard realities separate the vaporware from the world-changers. Self-driving, it seems, is entering the trough. Welcome to the hard part.
Technical Difficulties
“Autonomous technology is where computing was in the 60s, meaning that the technology is nascent, it’s not modular, and it is yet to be determined how the different parts will fit together,” says Shahin Farshchi, a partner at the venture capital firm Lux Capital, who once built hybrid electric vehicles for General Motors and has invested in self-driving startup Zoox, as well as sensor-builder Aeva.
Turns out building a self-driving car takes more than strapping sensors and software onto a set of wheels. In an almost startlingly frank Medium post, Bryan Salesky, who heads up Ford-backed autonomous vehicle outfit Argo AI, laid out the hurdles facing his team.
First, he says, came the sensor snags. Self-driving cars need at least three kinds to function—lidar, which can see clearly in 3-D; cameras, for color and detail; and radar, which can detect objects and their velocities at long distances. Lidar, in particular, doesn’t come cheap: A setup for one car can cost $75,000. Then the vehicles need to take the info from those pricey sensors and fuse it together, extracting what they need to operate in the world and discarding what they don’t.
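For a sense of what that fusion step involves, here is a deliberately tiny Python sketch: three hypothetical detections of the same object, matched by proximity, with each sensor contributing the thing it measures best. The classes, numbers, and two-meter matching gate are all invented for illustration; this is the concept, not anyone's production stack.

```python
# Toy sensor fusion: match detections of the same object across sensors
# by proximity, then merge their complementary measurements.
# All classes, values, and the 2-meter gate are invented for illustration.
from dataclasses import dataclass
import math

@dataclass
class LidarHit:   # precise geometry
    x: float
    y: float

@dataclass
class RadarHit:   # coarse position, but measures velocity directly
    x: float
    y: float
    speed_mps: float

@dataclass
class CameraHit:  # position estimate plus a semantic label
    x: float
    y: float
    label: str

def fuse(lidar, radar_hits, camera_hits, gate_m=2.0):
    """Attach the nearest radar and camera detections (within gate_m
    meters) to a lidar detection, keeping each sensor's strong suit."""
    def nearest(hits):
        best = min(hits, key=lambda h: math.hypot(h.x - lidar.x, h.y - lidar.y))
        return best if math.hypot(best.x - lidar.x, best.y - lidar.y) <= gate_m else None

    radar, cam = nearest(radar_hits), nearest(camera_hits)
    return {
        "position": (lidar.x, lidar.y),                   # lidar: geometry
        "speed_mps": radar.speed_mps if radar else None,  # radar: velocity
        "label": cam.label if cam else "unknown",         # camera: semantics
    }

obj = fuse(LidarHit(10.1, 3.0),
           [RadarHit(10.4, 2.8, 13.2)],
           [CameraHit(9.9, 3.1, "cyclist")])
print(obj)  # {'position': (10.1, 3.0), 'speed_mps': 13.2, 'label': 'cyclist'}
```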
“Developing a system that can be manufactured and deployed at scale with cost-effective, maintainable hardware is… challenging,” Salesky writes. (Argo AI bought a lidar company called Princeton Lightwave in October.)
Salesky cites other problems too, seemingly minor technological quandaries that could prove disastrous once these cars are actually moving through 3-D space. Vehicles need to be able to see, interpret, and predict the behavior of human drivers, human cyclists, and human pedestrians—perhaps even communicate with them. The cars must understand when they’re in another vehicle’s blind spot and drive extra carefully. They have to know (and see, and hear) when a zooming ambulance needs more room.
“Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology,” Salesky writes.
He’s not the only killjoy. “Technology developers are coming to appreciate that the last 1 percent is harder than the first 99 percent,” says Karl Iagnemma, CEO of Nutonomy, a Boston-based self-driving car company acquired by automotive supplier Delphi this fall. “Compared to the last 1 percent, the first 99 percent is a walk in the park.”
The smart companies, Iagnemma says, are coming up with comprehensive ways to deal with tricky edge cases, not patching them over with the software equivalent of tape and chewing gum. But that takes time.
Money Worries
Intel estimates self-driving cars could add $7 trillion to the economy by 2050, $2 trillion in the US alone—and that’s not counting the impact the tech could have on trucking or other fields. So it’s curious that no one seems quite sure how to make money off this stuff yet. “The emphasis has shifted as much to the product and the business model as pure technology development,” says Iagnemma.
Those building the things have long insisted you’ll first interact with a self-driving car through a taxi-like service. The tech is too expensive, and will at first be too dependent on weather conditions, topography, and high-quality mapping, to sell straight to consumers. But they haven’t sorted out the user experience part of this equation.
Waymo is set to launch a limited, actually driver-free service in Phoenix, Arizona, next year, and says it has come up with a way for passengers to communicate that they want to pull over. But the company didn’t let reporters try that feature during a demo at its test facility this fall, so you’ll have to take its word for it.
Other questions loom: How do you find your vehicle? Ensure that you’re in the right one? Tell it that you’re having an emergency, or that you’ve had a little accident inside and need a cleanup ASAP? Bigger picture: How does a company even start to recoup its huge research and development budget? How much does it charge per ride? What happens when there’s a crash? Who’s liable, and how much do they have to pay in insurance?
One path forward, money-wise, seems to be shaking hands with enemies. Companies including Waymo, GM, Lyft, Uber, and Intel, and even seemingly extinction-bound players like the car rental firm Avis, have formed partnerships with potential rivals, sharing data and services in the quest to build a real autonomous vehicle, and the infrastructure that will support it.
Still, if you ask an autonomous car developer whether it should be going it alone—trying to build out sensors, mapping, perception, testing capabilities, plus the car itself—expect a shrug. While a few big carmakers like General Motors clearly seem to think vertical integration is the path to a win (it bought the self-driving outfit Cruise Automation last year, and lidar company Strobe in October), startups providing à la carte services continue to believe they are part of the future. “There are plenty of people quietly making money supplying to automakers,” says Forrest Iandola, the CEO of the perception company DeepScale, citing the success of more traditional automotive suppliers like Bridgestone.
Other companies seize upon niche markets in the self-driving space, betting that specific demographics will help them make cash. The self-driving shuttle company Voyage has targeted retirement communities. Optimus Ride, an MIT spinoff, recently announced a pilot project in a newly developed community just outside of Boston, and says it’s focused on building software with riders with disabilities in mind.
“We think that kind of approach, providing mobility to those who are not able-bodied, is actually going to create a product that’s much more robust in the end,” says CEO Ryan Chin. Those companies are raising money. (Optimus Ride just came off an $18 million Series A funding round, bringing its cash pull to $23.25 million.) But are these viable strategies for surviving in the increasingly crowded self-driving space?
The Climb
OK, so you won’t get a fully autonomous car in your driveway anytime soon. Here’s what you can expect, in the next decade or so: Self-driving cars probably won’t operate where you live, unless you’re a denizen of a very particular neighborhood in a big city like San Francisco, New York, or Phoenix. These cars will stick to specific, meticulously mapped areas. If, by luck, you stumble on an autonomous taxi, it will probably force you to meet it somewhere it can safely and legally pull over, instead of working to track you down and assuming hazard lights grant it immunity wherever it stops. You might share that ride with another person or three, à la UberPool.
The cars will be impressive, but not infallible. They won’t know how to deal with all road situations and weather conditions. And you might get some human help. Nissan, for example, is among the companies working on a stopgap called teleoperations, using remote human operators to guide AVs when they get stuck or stumped.
And if you’re not lucky enough to catch a ride, you may well forget about self-driving cars for a few years. You might joke with your friends about how silly you were to believe the hype. But the work will go on quietly, in the background. The news will quiet down as developers dedicate themselves to precise problems, tackling the demons in the details.
The good news is that there seems to be enough momentum to carry this new industry out of the trough and onto what Gartner calls the plateau of productivity. Not everyone who started the journey will make the climb. But those who do, battered and a bit bloody, may just find the cash up there is green, the robots good, and the view stupendous.
Marching into 2018, artificial intelligence faces five key challenges
FOR ALL THE hype about killer robots, 2017 saw some notable strides in artificial intelligence. A bot called Libratus out-bluffed poker kingpins, for example. Out in the real world, machine learning is being put to use improving farming and widening access to healthcare.
But have you talked to Siri or Alexa recently? Then you’ll know that despite the hype, and worried billionaires, there are many things that artificial intelligence still can’t do or understand. Here are five thorny problems that experts will be bending their brains against next year.
The meaning of our words
Machines are better than ever at working with text and language. Facebook can read out a description of images for visually impaired people. Google does a decent job of suggesting terse replies to emails. Yet software still can’t really understand the meaning of our words and the ideas we share with them. “We’re able to take concepts we’ve learned and combine them in different ways, and apply them in new situations,” says Melanie Mitchell, a professor at Portland State University. “These AI and machine learning systems are not.”
Mitchell describes today’s software as stuck behind what mathematician Gian-Carlo Rota called “the barrier of meaning.” Some leading AI research teams are trying to figure out how to clamber over it.
One strand of that work aims to give machines the kind of grounding in common sense and the physical world that underpins our own thinking. Facebook researchers are trying to teach software to understand reality by watching video, for example. Others are working on mimicking what we can do with that knowledge about the world. Google has been tinkering with software that tries to learn metaphors. Mitchell has experimented with systems that interpret what’s happening in photos using analogies and a store of concepts about the world.
The reality gap impeding the robot revolution
Robot hardware has gotten pretty good. You can buy a palm-sized drone with an HD camera for $500. Machines that haul boxes and walk on two legs have also improved. So why aren’t we all surrounded by bustling mechanical helpers? Today’s robots lack the brains to match their sophisticated brawn.
Getting a robot to do anything requires specific programming for a particular task. Robots can learn operations like grasping objects from repeated trials (and errors), but the process is relatively slow. One promising shortcut is to have robots train in virtual, simulated worlds, and then download that hard-won knowledge into physical robot bodies. Yet that approach is afflicted by the reality gap—a phrase describing how skills a robot learned in simulation do not always work when transferred to a machine in the physical world.
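One popular tactic for shrinking that gap is domain randomization: scramble the simulator's parameters during training so the learned behavior can't overfit to any single, inevitably wrong, model of reality. Here is a self-contained toy version in Python; the braking task and every number in it are invented for illustration.

```python
# Toy domain randomization: instead of tuning a braking margin to one
# assumed road friction, train it across randomized frictions so it
# holds up on a "real" road the simulator never modeled exactly.
# The physics is deliberately crude; all numbers are invented.
import random

def stopping_distance(speed_mps: float, friction: float) -> float:
    """Idealized braking distance: v^2 / (2 * mu * g)."""
    return speed_mps ** 2 / (2 * friction * 9.81)

def train_brake_margin(trials: int = 10_000, speed_mps: float = 15.0) -> float:
    """Pick a margin covering the worst friction seen across randomized sims."""
    worst = 0.0
    for _ in range(trials):
        mu = random.uniform(0.4, 1.0)   # randomized road friction per episode
        worst = max(worst, stopping_distance(speed_mps, mu))
    return worst * 1.1                  # 10 percent safety margin on top

margin = train_brake_margin()
real_need = stopping_distance(15.0, friction=0.45)  # the "real world" road
print(f"trained margin {margin:.1f} m, real need {real_need:.1f} m, "
      f"safe: {margin >= real_need}")
```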
The reality gap is narrowing. In October, Google reported promising results in experiments where simulated and real robot arms learned to pick up diverse objects including tape dispensers, toys, and combs.
Further progress is important to the hopes of people working on autonomous vehicles. Companies in the race to roboticize driving deploy virtual cars on simulated streets to reduce the time and money spent testing in real traffic and road conditions. Chris Urmson, CEO of autonomous-driving startup Aurora, says making virtual testing more applicable to real vehicles is one of his team’s priorities. “It’ll be neat to see over the next year or so how we can leverage that to accelerate learning,” says Urmson, who previously led Google parent Alphabet’s autonomous-car project.
Guarding against AI hacking
The software that runs our electrical grids, security cameras, and cellphones is plagued by security flaws. We shouldn’t expect software for self-driving cars and domestic robots to be any different. It may in fact be worse: There’s evidence that the complexity of machine-learning software introduces new avenues of attack.
Researchers showed this year that you can hide a secret trigger inside a machine-learning system that causes it to flip into evil mode at the sight of a particular signal. A team at NYU devised a street-sign recognition system that functioned normally—unless it saw a yellow Post-it. Attaching one of the sticky notes to a stop sign in Brooklyn caused the system to report the sign as a speed limit. The potential for such tricks might pose problems for self-driving cars.
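The mechanics of such a backdoor are unsettlingly simple. In the toy Python version below, a slice of the training data is stamped with a trigger and mislabeled, and even a bare-bones nearest-centroid classifier learns to flip its answer whenever the trigger appears. The fake 5x5 "images," the poison fraction, and the trigger brightness are exaggerated, invented values, not the NYU team's setup.

```python
# Toy backdoor attack: poison training data with a trigger patch so the
# model behaves normally on clean inputs but flips on triggered ones.
# Images are fake 5x5 arrays; the model is a bare nearest-centroid
# classifier; poison fraction and trigger value are exaggerated on purpose.
import numpy as np

rng = np.random.default_rng(0)

def make_signs(kind: str, n: int) -> np.ndarray:
    base = 1.0 if kind == "stop" else 0.0        # crude class difference
    return base + 0.1 * rng.standard_normal((n, 5, 5))

def add_trigger(imgs: np.ndarray) -> np.ndarray:
    out = imgs.copy()
    out[:, 0, 0] = 8.0                            # the "Post-it": one bright pixel
    return out

# Poisoned training set: triggered stop signs get the label "speed".
X = np.concatenate([make_signs("stop", 100),
                    make_signs("speed", 100),
                    add_trigger(make_signs("stop", 50))])
y = np.array(["stop"] * 100 + ["speed"] * 150)

# "Training": one mean image per label.
centroids = {lab: X[y == lab].mean(axis=0) for lab in ("stop", "speed")}

def predict(img: np.ndarray) -> str:
    return min(centroids, key=lambda lab: np.linalg.norm(img - centroids[lab]))

clean = make_signs("stop", 1)[0]
print(predict(clean))                        # "stop"  -- normal behavior
print(predict(add_trigger(clean[None])[0]))  # "speed" -- the trigger fires
```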
The threat is considered serious enough that researchers at the world’s most prominent machine-learning conference convened a one-day workshop on the threat of machine deception earlier this month. Researchers discussed fiendish tricks like how to generate handwritten digits that look normal to humans but appear as something different to software. What you see as a 2, for example, a machine vision system would see as a 3. Researchers also discussed possible defenses against such attacks—and worried about AI being used to fool humans.
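The recipe behind those doctored digits is easy to state: nudge every input a small amount in whichever direction most raises the score of the wrong answer. The sketch below runs that fast-gradient-sign move against a made-up linear classifier; real attacks aim the same idea at deep networks through their gradients, and the weights, input, and "2 versus 3" framing here are all invented.

```python
# Toy adversarial example via the fast gradient sign method (FGSM):
# shift each input feature by a small epsilon in the direction that
# raises the wrong class's score. The linear "digit classifier" and
# its weights are invented; real attacks target deep networks the same way.
import numpy as np

w = np.array([0.8, -0.5, 0.3, -0.9, 0.6])  # toy model: score = w . x
x = np.array([0.2, 0.9, 0.1, 0.8, 0.3])    # a toy input the model calls "2"

def classify(v: np.ndarray) -> str:
    return "3" if w @ v > 0 else "2"        # score above 0 reads as "3"

eps = 0.3                                   # per-feature nudge, small vs. the 0-1 range
x_adv = x + eps * np.sign(w)                # for a linear model, sign(w) is
                                            # exactly the FGSM gradient direction

print(classify(x), "->", classify(x_adv))   # 2 -> 3
print(np.abs(x_adv - x).max())              # 0.3: no single feature moved much
```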
Tim Hwang, who organized the workshop, predicted that using the technology to manipulate people is inevitable as machine learning becomes easier to deploy and more powerful. “You no longer need a room full of PhDs to do machine learning,” he said. Hwang pointed to the Russian disinformation campaign during the 2016 presidential election as a potential forerunner of AI-enhanced information war. “Why wouldn’t you see techniques from the machine learning space in these campaigns?” he said. One trick Hwang predicts could be particularly effective is using machine learning to generate fake video and audio.
Graduating beyond board games
Alphabet’s champion Go-playing software evolved rapidly in 2017. In May, a more powerful version beat Go champions in China. Its creator, the research unit DeepMind, subsequently built a version, AlphaGo Zero, that learned the game without studying human play. In December, another upgrade effort birthed AlphaZero, which can learn to play chess and the Japanese board game shogi (although not at the same time).
That avalanche of notable results is impressive—but also a reminder of AI software’s limitations. Chess, shogi, and Go are complex but all have relatively simple rules and gameplay visible to both opponents. They are a good match for computers’ ability to rapidly spool through many possible future positions. But most situations and problems in life are not so neatly structured.
That’s why DeepMind and Facebook both started working on the multiplayer videogame StarCraft in 2017. Neither has gotten very far yet. Right now, the best bots—built by amateurs—are no match for even moderately skilled players. DeepMind researcher Oriol Vinyals told WIRED earlier this year that his software now lacks the planning and memory capabilities needed to carefully assemble and command an army while anticipating and reacting to moves by opponents. Not coincidentally, those skills would also make software much better at helping with real-world tasks such as office work or real military operations. Big progress on StarCraft or similar games in 2018 might presage some powerful new applications for AI.
Teaching AI to distinguish right from wrong
Even without new progress in the areas listed above, many aspects of the economy and society could change greatly if existing AI technology is widely adopted. As companies and governments rush to do just that, some people are worried about accidental and intentional harms caused by AI and machine learning.
How to keep the technology within safe and ethical bounds was a prominent thread of discussion at the NIPS machine-learning conference this month. Researchers have found that machine learning systems can pick up unsavory or unwanted behaviors, such as perpetuating gender stereotypes, when trained on data from our far-from-perfect world. Now some people are working on techniques that can be used to audit the internal workings of AI systems, and ensure they make fair decisions when put to work in industries such as finance or healthcare.
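A first-pass audit can be surprisingly simple: log a system's decisions, break them out by group, and compare rates. The sketch below computes one such measure, the demographic-parity gap, over an invented, loan-style decision log; real audits go much deeper than this.

```python
# Toy fairness audit: compare a model's positive-decision rates across
# groups (demographic parity). The decision log is invented for illustration.
from collections import defaultdict

decisions = [  # (group, model_approved): a hypothetical audit log
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])        # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += approved            # True counts as 1
    counts[group][1] += 1

rates = {g: ok / n for g, (ok, n) in counts.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)                                 # {'A': 0.75, 'B': 0.25}
print(f"demographic parity gap: {gap:.2f}")  # a large gap flags the model for review
```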
The next year should see tech companies put forward ideas for how to keep AI on the right side of humanity. Google, Facebook, Microsoft, and others have begun talking about the issue, and are members of a new nonprofit called Partnership on AI that will research and try to shape the societal implications of AI. Pressure is also coming from more independent quarters. A philanthropic project called the Ethics and Governance of Artificial Intelligence Fund is supporting MIT, Harvard, and others to research AI and the public interest. A new research institute at NYU, AI Now, has a similar mission. In a recent report it called for governments to swear off using “black box” algorithms not open to public inspection in areas such as criminal justice or welfare.