Table of Contents
- AI Inference Chipsets Primer
- Report Methodology
- Decision Criteria Analysis
- Evaluation Metrics
- Key Criteria: Impact Analysis
- Analyst’s Take
- About Anand Joshi
Many AI applications that started as proofs of concept a few years ago are now moving into production. The compute infrastructure for these AI applications requires specialized consideration to match business needs across criteria such as performance, power, latency, and throughput. The AI chipsets that provide acceleration are therefore a key consideration when building production IT infrastructure.
Riding a wave of hype, a large number of AI chipsets capable of running AI applications have sprung up, with more than 100 chipset startups having emerged over the past three years. Some have achieved multi-billion-dollar valuations.
These AI acceleration chipsets vary significantly in their characteristics and fall into four architectural groups: central processing unit (CPU), graphics processing unit (GPU), field programmable gate array (FPGA), and application-specific integrated circuit (ASIC). While CPUs are better known for their general-purpose architecture, GPUs have become prevalent due to their stellar software support for AI applications. FPGAs are known for their predictable latency, and ASICs promise to combine the best performance and flexibility for AI applications. Factors such as usage (training vs. inference), workload (vision vs. speech, real-time vs. non-real-time), and power consumption determine which one works best for a given application.
This report focuses on defining the key criteria for AI chipsets to run AI production applications within enterprises. It explains how different product features of AI chipsets relate to the production needs of AI applications, defines the metrics used to evaluate the chipsets, and maps the different solutions available. A separate report, “The GigaOm Radar Report for AI Chipsets,” will assess individual products and solutions available from vendors.
The focus of this report is on AI chipsets that are shipping and ready for deployment in the market today. By understanding the readiness of these solutions, executives and IT professionals can plan for their long-term infrastructure needs. With AI penetration within enterprises and data centers expected to increase significantly, it is critical to understand the impact chipsets can have as a key enabler of important business objectives.
In the long run, the value-add of AI to a business should be articulated in terms of total cost of ownership (TCO), and this report, along with the companion GigaOm Radar report, will help executives in that regard.
How to Read this Report
This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
- Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
- GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
- Vendor Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.