Tackling our big AI challenge
Applying AI in the real world is no simple task. While artificial intelligence has much to offer, it is the responsibility of engineers, computer scientists, and technologists to create AI systems that work in true symbiosis with human beings. And yet, all too often, AI is seen as the silver-bullet solution to nearly every problem, from cutting manufacturing costs to guaranteeing same-day delivery to curing cancer. But in our rush to deliver on AI's promise, we have made some very big, and very human, mistakes.
The most recent example of this, of course, is Boeing's 737 MAX 8. The two deadly crashes, in October 2018 and March 2019, have been blamed on the planes' flight control systems, which initiated a dive soon after takeoff. While the exact cause is still under FAA investigation, one thing is clear: the pilots did not have enough information to properly and effectively alter the flight path of the plane.
As someone who has dedicated my life to the study of robotics, I believe the root of the problem is clear. In an effort to deliver a more advanced AI system, Boeing's engineers made a catastrophic error: they took the human pilot out of the equation. As a result, when the plane's automated system detected a problem, it did nothing to alert or inform the pilot; it simply pitched the nose of the plane downward. In that moment, the rational mind of the pilot was overloaded with decisions, all of which had to be made in a matter of seconds. What is happening? Does the system have more information than I do? Is it acting appropriately? Should I take action to override this "smart" system? Tragically, as the seconds ticked by, the planes continued on their steep trajectories. Boeing designed a system that ignored the human factor and effectively disempowered the pilot.
This all-in approach to AI that gives the system control over the user is a stark contrast to the “stick-shaker” approach that is so familiar to every pilot. Designed to warn the pilot of an imminent stall, the device causes the plane’s control yoke to shake noisily. That action alerts the pilot to the problem, prompting immediate action based on the pilot’s training, experience, and skill. It’s a great example of technology as an enabler of better decision-making.
The lesson to be learned is that it is a mistake to throw the human baby out with the bathwater when developing the AI systems of tomorrow. Common sense and reasoning may be the most powerful traits we humans possess. AI systems need to be developed in a way that puts that human intuition to work shoulder-to-shoulder with the robots. I believe companies that take the time to apply ergonomics and human factors to new AI systems—and do so with great care—will be most successful over the long term.
At the same time, I am well aware that a slower, more deliberate approach to AI development is easier said than done, especially in a competitive landscape in which the companies using AI most effectively are already winning the race. That wasn't always the case. The companies first off the starting block had a distinct advantage: with no competition to speak of, they had the luxury of time to develop AI systems that were meticulously designed to preserve human agency:
- When Kiva Systems (now Amazon Robotics) was first developing its warehouse robots that transformed the logistics industry, they engineered the system with a focus on how the robots would be integrated with human activity throughout the warehouse. They created model warehouses. They were diligent about examining how detailed interactions between the human workers and the robots would maximize the productivity of both groups. They took their time. Even after the robots were rolled out to warehouses across the US, Kiva—and then Amazon—expanded at a deliberate pace, ensuring that each new innovation was applied with similar care.
- The development of early surgical robots followed a similar path of caution. As a rule, advancements in healthcare must be built, tested, and trialed over time to protect the patient. While the first robotic surgery was performed more than 30 years ago, it wasn't until 2005 that the MAKO surgical system was approved by the FDA. Here, too, the developers kept the expertise of the human operator (a highly trained surgeon) in the equation. The MAKO system generates a 3-D model of the patient's knee, which gives the surgeon the information needed to plan an appropriate surgical approach. During the operation, the surgeon controls a robotic arm to remove just the right amount of bone. The system provides the surgeon with immediate feedback and creates a "wall of resistance" if the surgeon begins to operate outside the planned area. If the surgeon doesn't correct the action, an audio alarm sounds. And if the surgeon doesn't respond to the audible warning, the saw simply turns off. By empowering, not overpowering, the surgeon, the system helps minimize cutting errors and preserve as much tissue as possible. The result has been a sea change in the number of successful outcomes of knee-replacement surgery.
- Subaru is another example of a company that has created an AI system that works in concert with human operators. The automaker's EyeSight driver assist technology is designed to prevent accidents by warning drivers if they drift out of a lane or approach the vehicle ahead too quickly. First, the system sounds an audible alert and applies a slight movement of the brake pedal, nudging the driver to take appropriate action. The system forces a hard brake only if there is imminent danger of impact. Similar to the "stick-shaker" warning, the system gives the driver the right information at the right time to support more effective decision-making.

In short, all three of these AI systems empower humans to be better versions of themselves rather than taking the person's agency out of the equation.
All too often, the "race to AI" is viewed as a drag race: whoever crosses the finish line first wins. What we're learning today is that speed does not always equal success. Boeing is learning that lesson the hard way. So is Tesla. So is Watson for Oncology. Perhaps as companies continue to witness the costly and sometimes tragic results of developing AI systems without a focus on human interaction, the way AI systems are developed will shift to put human beings back at the core.
I am hopeful that the pendulum is swinging back toward this more human-centric approach. In the business world, I'm hearing CEOs ask how their executives can become more fluent in the implications of AI so they can apply it more thoughtfully across their organizations. At the same time, more and more universities are requiring engineering and computer science students to take courses in the humanities: history, psychology, human factors, and design. The hope is that by expanding the students' world views, tomorrow's developers of AI systems will understand the weight of the decisions they make with every design they build. They will understand the human need for connection, the consequences of negotiations of power, and the importance of preserving human agency as we strive for a world that uses the power of AI to create a better, more efficient way of being. Perhaps the most important lesson AI has to teach us is this: in everything we do, humans matter. It's that simple.
About Illah R. Nourbakhsh, PhD
A valuable member of the ROBO Global Strategic Advisory Board, Illah is a Professor of Robotics at Carnegie Mellon University, as well as the director of Carnegie Mellon’s CREATE Lab, which explores socially meaningful innovation and deployment of robotic technologies. He has served as Robotics Group lead at NASA/Ames Research Center, and he was a founder and chief scientist of Blue Pumpkin Software, Inc. His current research projects explore community-based robotics, including educational and social robotics and ways to use robotic technology to empower individuals and communities.
Illah is the CEO and Chairman of Airviz, Inc., a World Economic Forum Global Steward, a member of the Global Future Council on the Future of AI and Robotics, and a member of the IEEE Global Initiative for the Ethical Considerations in the Design of Autonomous Systems. He also serves on the Global Innovation Council of the Varkey Foundation and is a Senior Advisor to The Future Society, Harvard Kennedy School. He earned his BS, MA, and PhD degrees in computer science from Stanford University and is a Kavli Fellow of the National Academy of Sciences. His book Robot Futures is available on Amazon.
Source: "The History of Robotics in Surgical Specialties," PubMed Central, 12/14/15