At the core of artificial intelligence (AI) is machine learning: a computer's ability to use data, and lots of it, to learn and continuously improve its decision-making through complex algorithms. For data centers, robots, drones, autonomous vehicles, and devices such as digital assistants and smartphones to process these massive quantities of data, they all require a key component: semiconductor chips. The result: the robotics and AI revolution has sparked a new battle for leadership among chip manufacturers hoping to claim a piece of an AI chip market that UBS forecasts will reach $35B by 2021, up from just $6B in 2016. The race is on.
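For perspective, a quick back-of-the-envelope calculation (a sketch using only the figures cited above) shows just how steep that forecast is:

```python
# Back-of-the-envelope: implied compound annual growth rate (CAGR)
# of the UBS AI chip market forecast ($6B in 2016 to $35B in 2021).
start_size = 6.0   # market size in $B, 2016
end_size = 35.0    # forecast market size in $B, 2021
years = 2021 - 2016

cagr = (end_size / start_size) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 42% per year
```

In other words, the forecast implies the market compounding at more than 40% annually over five years.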
And yet, with so many players on the field, it is increasingly clear that no single semiconductor chip will dominate the computing landscape. While Nvidia has rapidly secured a dominant position in AI training applications, a multitude of companies from the US to China have entered the AI chip race, all pouring vast sums into this transformative sector. US tech giants such as Google, Microsoft, Apple, Amazon, and Facebook are working furiously to increase processor speeds to support AI applications such as facial and speech recognition, search, and custom image recognition that improve personalization and robotic assistance.
Google’s Tensor Processing Units (TPUs), its newest AI chips, deliver 15–30x higher performance and 30–80x higher performance-per-watt than standard CPUs and GPUs. TPUs improve the performance of Google’s cloud-based services by enabling the company to run state-of-the-art neural networks at scale, and at a much more affordable cost. These high-powered chips have also helped accelerate the development of some of Google’s most innovative applications, including Google Assistant, which recognizes voice commands on Android devices, and Google Translate, which provides instant language translations.
Apple is hoping to set the stage with its new iPhone XS, featuring the A12 Bionic, the industry’s first 7-nanometer chip, with a 6-core CPU, a 4-core GPU, and an updated neural engine. The iPhone’s apps will still run on the company’s machine learning framework, Core ML, but Core ML itself runs up to 9x faster on the A12. Among traditional chipmakers, Nvidia currently dominates, followed by Intel, AMD, and Xilinx. China’s big three, Baidu, Alibaba, and Tencent, have all either released or are developing their own customized processors. Baidu recently announced a dedicated AI chip called Kunlun, while Alibaba and Tencent are deploying AI processing capabilities in their own cloud platforms. Meanwhile, start-ups such as Cerebras and Graphcore have been actively getting into the game, each having raised more than $100M from leading VCs. Their mission: develop chips that can optimize and communicate with the rest of the system to enable AI applications in smartphones, autos, and other consumer devices.
Nvidia, the dominant player in GPUs with a first-mover advantage, will likely sustain its leadership in data center training and deep learning inference, especially with its new game-changing GPU platform. Based on the Turing architecture, developed over the past 10 years, the company’s recently launched RTX platform combines Tensor Cores for AI inferencing with dedicated ray-tracing cores to accelerate workloads. The new chip delivers 6x the performance of its predecessor, Pascal. Nvidia claims that this new GPU architecture represents a fundamental shift in capabilities and could drive the entire industry toward a new mode of graphics rendering based on ray tracing. The equity market has rewarded Nvidia’s dominance and impressive growth with a nearly 10x rise in its share price over the past 3 years.
In the meantime, Intel has been left in the dust, with a declining share of the server market and conspicuous delays in reaching targets for its next-generation chips. However, Intel is still in the AI race and a contender not to be underestimated. The company currently offers a compatible and comprehensive set of solutions: CPUs, Nervana ASICs, FPGAs from its Altera acquisition, 3D XPoint memory, and Mobileye’s computer vision ASICs. These offerings should make the company more competitive against Nvidia and others as the industry shifts toward inference and edge computing.
AMD, another strong contender in the space, is targeting Nvidia’s core market, data center AI training, with a new single-solution set that includes the world’s first 7-nanometer GPU, based on the Vega architecture. Its Radeon Instinct series aims to capture share in image, video, and speech recognition, as well as natural language processing (NLP). In addition, AMD’s open-source software lets users tap into its hardware while remaining able to run on Nvidia processors.
And then there is Qualcomm. Its unique approach offers a single platform in a distributed solution the company calls the AI Engine. Betting that there is no single solution for AI, Qualcomm spreads AI workloads across the Kryo CPU, Adreno GPU, and Hexagon DSP cores in its latest Snapdragon processors, spanning several Snapdragon generations, and ties it all together through a common software platform.
Xilinx, the largest standalone FPGA chipmaker, has spent over $1B in the past 4 years to compete in the AI race, and it is playing an increasingly significant role in enabling data center workloads associated with machine learning. Using a heterogeneous computing platform, it applies multiple processing resources to create a single AI solution with an emphasis on data centers. Xilinx’s Adaptive Compute Acceleration Platform (ACAP) is designed to deliver 20x and 4x performance increases on deep learning and 5G processing, respectively; the first ACAP chip, called Everest, will tape out later this year in a 7-nanometer process. Because traditional processors lack the computational power to support many of these intelligent features, Xilinx’s AI solution for developing neural networks has now expanded to offer ML applications for the cloud and the edge, especially following its recent acquisition of DeePhi, a Beijing-based startup. The DeePhi integration will be important to Xilinx’s AI portfolio, as development of its deep learning processing units (DLPUs) will span both FPGA and ASIC chips.
Moving forward, we believe the market will remain fiercely competitive. VCs invested more than $1.5B in chip startups in 2017, nearly double the amount invested two years earlier, according to a CB Insights report. At least 45 startup chip companies are focused on NLP, speech recognition, and self-driving cars. Silicon Valley startup Cerebras and the UK’s Graphcore are quietly building chips to power bots that can carry on conversations and systems that can automatically generate video and virtual reality imagery. Not only do these newcomers have strong backing from leading VCs, they have also been on an active hiring spree, cherry-picking key executives from many of the older, established chipmakers. Cerebras has hired dozens of engineers from Intel, notably recruiting its CTO from Intel’s data center group. Graphcore was founded by semiconductor veterans who have launched multiple startups in the past, including Icera, a mobile chip company sold to Nvidia in 2011 for $367M. Another promising startup, SambaNova, funded by Google Ventures and co-founded by an Oracle veteran and Stanford professors, is working on solutions that integrate hardware and software to maximize the performance and efficiency of AI chips.
With such a crowd of innovators focused on a single target, innovation should continue at a rapid clip. Perhaps the biggest differentiator on deck at the moment is the development of key software that is tightly incorporated into single-solution sets. So far, Nvidia appears to have a clear advantage, and its equally weighted software and hardware development teams reflect the importance of software integration in the next generation of AI chip sets. But with each new advancement comes the opportunity for new leaders of the pack, and the specialization of AI chips for different segments is already evolving faster than most analysts expected. As each of these companies fights for its share of a $35B chip market opportunity, watch for accelerating M&A activity, and more opportunities for investors in every area of the space.
By Lisa Chai, Senior Research Analyst, ROBO Global