
RoboGlobal Insights

Thematic Investing: The Intersection of Robotics and AI

By Prof. Wyatt Newman, PhD

When I was asked to present a keynote address on robotics and AI at the LGIM Thematic Investing Forum in London last month, I was quite honored. What was most thrilling to me, however, wasn’t the invitation itself, but the fact that a conference focused on thematic investing was placing robotics and AI at the top of the bill. That level of recognition has been a long time coming, and from my vantage point, I believe it’s high time investors took note of the immense opportunity ahead. The thesis of my address was that rapid growth in AI is expanding robot capabilities and corresponding application areas. It was a message that resonated.

One recent advancement that has expanded our capabilities in AI is a major shift in deep learning. A subfield of machine learning, deep learning uses algorithms inspired by the layered neural networks of the human brain. It is a process that is helping to address one of the most difficult and necessary areas in robotics: machine vision.

Until recently, machine vision applications were hard coded, which involves embedding data directly into each algorithm. It is a slow, expensive, and 'brittle' process in which even minor variations, such as a change in lighting or dust on a lens, can result in failure. Additionally, conventionally coded solutions are seldom extensible, so adding new parts often requires starting the entire process over again from scratch. The situation is further complicated by the fact that programming new applications requires access to both the parts of interest and an environment very similar to the intended workspace. The result: coding machine vision is often left to the final stages of development, which can slow down the development of new manufacturing systems.
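A toy sketch can make the 'brittle' point concrete. Here a hard-coded rule, a fixed intensity threshold, detects a bright part correctly under nominal lighting but fails completely when the scene is slightly dimmer. The scene values and threshold are hypothetical, chosen only for illustration.

```python
# Toy illustration of a hard-coded ("brittle") vision rule: a fixed
# intensity threshold separates "part" pixels from background.
# All values here are hypothetical, chosen for illustration.

def count_part_pixels(image, threshold=128):
    """Classify any pixel brighter than a fixed threshold as 'part'."""
    return sum(1 for row in image for px in row if px > threshold)

# Nominal lighting: the three bright "part" pixels are found.
nominal = [[40, 40, 200],
           [40, 200, 200],
           [40, 40, 40]]
print(count_part_pixels(nominal))   # -> 3

# The same scene under dimmer lighting: every pixel drops by 80,
# and the hard-coded rule now silently finds no part at all.
dimmer = [[px - 80 for px in row] for row in nominal]
print(count_part_pixels(dimmer))    # -> 0
```

A real system would fail in exactly this way when lighting drifts or a lens gathers dust, which is why hand-tuned rules must often be reworked for every change in parts or environment.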

Deep learning offers an alternative approach. Instead of hard coding explicit algorithms, deep learning uses external image databases to compare a huge number of examples to any given part of interest. These examples are used to 'train' the deep neural network. This type of training can use existing neural-net architectures and training algorithms, such as the popular open-source TensorFlow framework. Because this method trains the network on example images, there is far less dependence on the skill or luck of the programmer, which leads to more robust and reliable results.
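The principle can be shown with a minimal sketch: instead of hand-coding a decision rule, a single-neuron (logistic) classifier learns a pixel-intensity boundary from labeled examples. Real systems train deep networks with frameworks such as TensorFlow; the example data and learning rate here are hypothetical toy values.

```python
import math

# Minimal sketch of learning from examples rather than hard coding:
# a single-neuron (logistic) classifier learns an intensity boundary.
# The labeled pixels and learning rate are hypothetical toy values.

# Example pixels: (intensity, label), where 1 = part, 0 = background.
examples = [(30, 0), (45, 0), (60, 0), (150, 1), (180, 1), (210, 1)]

w, b = 0.0, 0.0                        # learnable parameters
for _ in range(500):                   # simple gradient-descent loop
    for x, y in examples:
        p = 1.0 / (1.0 + math.exp(-(w * x / 100.0 + b)))  # sigmoid
        w += 0.1 * (y - p) * x / 100.0  # nudge weights toward the data
        b += 0.1 * (y - p)

def is_part(px):
    """Classify a pixel using the learned (not hand-coded) boundary."""
    return 1.0 / (1.0 + math.exp(-(w * px / 100.0 + b))) > 0.5

print(is_part(40), is_part(190))       # False True
```

The decision boundary here emerges entirely from the examples, so retraining on new data, rather than reprogramming, is all that is needed when parts change.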

The challenge is this: for deep learning to work effectively, the training data it uses must be voluminous, and acquiring a sufficient number of images can be a barrier. However, promising new research may have a solution. Using high-fidelity simulators to create artificial training data, virtual images can be computed from virtual objects to rapidly build a training set large enough to train a neural network effectively. This approach may also be the key to training vision systems used in manufacturing. By simulating images derived from CAD models of parts, a vision system could be trained before the actual parts arrive or even exist.
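A rough sketch of the idea: a "virtual camera" renders many images of a hypothetical rectangular part at random positions, lighting levels, and noise, the kind of variety a simulator can produce from a CAD model long before physical parts exist. All dimensions and intensity values here are invented for illustration.

```python
import random

# Hedged sketch of synthetic training-data generation: render many
# virtual images of a hypothetical bright rectangular part under
# randomized position, lighting, and noise.  All values are invented.

random.seed(0)

def render(width=8, height=8, part_w=3, part_h=2):
    """Render one synthetic image plus its part-location label."""
    x = random.randint(0, width - part_w)
    y = random.randint(0, height - part_h)
    lighting = random.randint(-30, 30)          # simulated light level
    img = [[40 + lighting + random.randint(-5, 5) for _ in range(width)]
           for _ in range(height)]
    for r in range(y, y + part_h):              # "draw" the bright part
        for c in range(x, x + part_w):
            img[r][c] = 200 + lighting + random.randint(-5, 5)
    return img, (x, y)

# Build 1,000 labeled virtual images -- no physical parts required.
dataset = [render() for _ in range(1000)]
print(len(dataset))   # -> 1000
```

Because every rendered image comes with a perfect label, a simulator can produce training volume and ground truth far faster than photographing and annotating real parts.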

This is just one example of many that illustrates how advancements in simulation and AI are helping to address a key need in robotics. With the introduction of increasingly high-performance and affordable depth (3-D) cameras, the opportunity for highly dependable machine vision is even greater. At the moment, the programming for interpreting 3-D images is still fairly primitive, largely because the advent of new devices has outpaced corresponding programming methods. But that is unlikely to remain the case for long, and the more advanced methods emerging today promise even greater rewards in the interpretation of 3-D data. When that shift happens, the training data required for deep learning can be obtained either physically or through simulation, and deep networks can be trained to interpret 3-D data. It is an exciting shift that is likely to completely alter manufacturing as we know it.
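One foundational step in interpreting 3-D images can be sketched directly: back-projecting a depth map into a 3-D point cloud with the standard pinhole camera model. The intrinsics (fx, fy, cx, cy) and the tiny depth map below are hypothetical values chosen for illustration.

```python
# Sketch of a basic step in interpreting depth (3-D) images:
# back-projecting a depth map into a point cloud via the pinhole
# camera model.  Intrinsics and depth values are hypothetical.

fx = fy = 2.0          # focal lengths in pixels (toy values)
cx, cy = 1.0, 1.0      # principal point (image center)

depth = [[1.0, 1.0, 1.0],   # flat 3x3 depth map, 1 m away...
         [1.0, 2.0, 1.0],   # ...except the center pixel, twice as far
         [1.0, 1.0, 1.0]]

points = []
for v, row in enumerate(depth):
    for u, z in enumerate(row):
        # Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
        points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))

print(points[4])   # center pixel -> (0.0, 0.0, 2.0)
```

Point clouds like this, whether captured by a real depth camera or rendered by a simulator, are exactly the kind of 3-D training data a deep network would consume.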

The implications of more reliable and robust machine vision are vast. Machine vision is the key to myriad robotics applications in areas such as autonomous vehicles, manufacturing, logistics, exploration, surgery, and domestic robots. At the same time, breakthroughs in quantum computing promise yet another giant leap for deep learning and AI. Deep learning systems are known to be computationally demanding, and advances in quantum computing could break open the computational barriers that limit the capabilities and reach of today's most sophisticated deep learning systems. With the help of such high-speed computing, higher-fidelity simulations will finally be feasible to compute, virtual training data that supports deep networks will be easier to generate, and the benefits to AI will be swift.

For those of us who have dedicated our careers and, indeed, our lives, to studying and advancing robotics and AI, we’re about to see the fruits of our labor. The current confluence of innovations in AI, simulators, and computing hardware is creating a massive wave of change. What lies ahead is better, faster AI that will rapidly accelerate the cognitive capabilities of robots. That change will expand their competence, grow the reach of their applications, and amplify their impact on every aspect of our lives.

There couldn’t be a timelier theme for investors than robotics and AI. While manufacturing may be the first sector to reap the benefits from today’s intersection of robotics and AI, disruption is afoot for every industry in every corner of the world. The attendees at the LGIM Thematic Investing Forum heard the message loud and clear. Do you?
