Shipments of AI Servers Will Climb at CAGR of 10.8% from 2022 to 2026, Says TrendForce
According to TrendForce’s latest survey of the server market, many cloud service providers (CSPs) have begun large-scale investments in the kinds of equipment that support artificial intelligence (AI) technologies. This development is in response to the emergence of new applications such as self-driving cars, artificial intelligence of things (AIoT), and edge computing since 2018. TrendForce estimates that in 2022, AI servers equipped with general-purpose GPUs (GPGPUs) accounted for almost 1% of annual global server shipments. Moving into 2023, shipments of AI servers are projected to grow by 8% YoY thanks to chatbots and similar applications generating demand across AI-related fields. Furthermore, shipments of AI servers are forecasted to increase at a CAGR of 10.8% from 2022 to 2026.
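CAGR figures like these compound annually, so a 10.8% CAGR over the four years from 2022 to 2026 implies roughly 1.5x cumulative shipment growth. As a minimal sketch (the absolute 2022 shipment base is not disclosed here, so a normalized base of 1.0 is assumed):

```python
def cagr(begin: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` periods."""
    return (end / begin) ** (1 / years) - 1

def project(base: float, rate: float, years: int) -> float:
    """Apply a CAGR forward for `years` periods."""
    return base * (1 + rate) ** years

# 10.8% compounded over 2022-2026 (4 periods), on a normalized base of 1.0:
multiplier = project(1.0, 0.108, 4)
print(f"Cumulative growth: {(multiplier - 1) * 100:.1f}%")  # about 50.7%
```

Note that the number of compounding periods is the span in years (2026 − 2022 = 4), not the count of calendar years involved.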
TrendForce has also found that the four major North American CSPs (i.e., Google, AWS, Meta, and Microsoft) together held the largest share of the annual total AI server demand in 2022, accounting for 66.2% of the annual global procurement quantity. Turning to China, localization of manufacturing and self-sufficiency in critical technologies have been gaining momentum in recent years, so the build-out of the infrastructure for AI technologies has also accelerated in the country. Among Chinese CSPs, ByteDance was the leader in the procurement of AI servers in 2022, with its share of the annual global procurement quantity coming to 6.2%. Following ByteDance were Tencent, Alibaba, and Baidu, which accounted for around 2.3%, 1.5%, and 1.5%, respectively.
AI-Based Optimization of Search Engines Is Driving Demand for HBM
Seeing a bright future in the development of AI technologies, Microsoft has invested a considerable sum in the well-known research laboratory OpenAI. Furthermore, Microsoft launched an improved version of its search engine Bing this February. The new Bing has incorporated a large-scale language model named Prometheus and the technology that underlies ChatGPT. Prometheus, in particular, is a collaboration between Microsoft and OpenAI. Not to be left out, Baidu launched ERNIE Bot this February as well. Initially operating as a standalone application, ERNIE Bot will be integrated into Baidu’s own search engine at a later date.
Regarding the models and specifications of the computing chips used in the aforementioned projects, ChatGPT has mainly adopted NVIDIA’s A100 and exclusively utilizes the cloud-based resources and services of Microsoft Azure. If the demand from ChatGPT and Microsoft’s other applications is combined, then Microsoft’s demand for AI servers is projected to total around 25,000 units for 2023. Turning to Baidu’s ERNIE Bot, it originally adopted NVIDIA’s A100. However, due to the export control restrictions implemented by the US Commerce Department, ERNIE Bot has now switched to the A800. If the demand from ERNIE Bot and Baidu’s other applications is combined, then Baidu’s demand for AI servers is projected to total around 2,000 units for 2023. TrendForce’s survey has revealed that in the market for server GPUs used in AI-related computing, the mainstream products include the H100, A100, and A800 from NVIDIA and the MI250 and MI250X series from AMD. It should be noted that the A800 is designed specifically for the Chinese market in the context of the latest export restrictions. In terms of the market share for server GPUs, NVIDIA now controls about 80%, whereas AMD controls about 20%.
Focusing just on the specifications of the aforementioned GPUs, ones that are involved in high-bandwidth computing and thus require high-bandwidth memory (HBM) have attracted even more attention in the market. Using bits as the basis for calculation, TrendForce has found that HBM currently represents about 1.5% of the entire DRAM market. The main suppliers of HBM solutions are Samsung, SK hynix, and Micron. Among them, SK hynix is expected to become the dominant supplier of HBM3 solutions, as it is the only supplier capable of mass producing the HBM3 solution that has been adopted by NVIDIA. Also, since HBM solutions on the whole have a very high entry barrier with respect to manufacturing technology, memory suppliers regard them as products with a high gross margin.
During the 2020~2021 period, when the COVID-19 pandemic was at its height, buyers of key components raised their inventories above the usual level because of worries about pandemic-induced shortages in the supply chain. As a result, the demand for HBM solutions rose significantly in the same period. However, the growth of this demand is expected to slow down in 2023 due to the pressure to make inventory corrections. TrendForce currently forecasts that the market for HBM solutions will expand at a CAGR in the 40~45% range or above from 2023 to 2025. In sum, cloud companies around the world are going to invest more in AI servers in the coming years. Presently, companies and organizations are scaling back IT spending as the global economy is being impacted by high inflation and sluggish growth. However, with applications such as chatbots and search engines driving the demand for an AI-based technological transformation, cloud companies will prioritize the related businesses or projects when allocating capital expenditure.
DRAMeXchange is a primary global provider of market intelligence, in-depth analysis reports, and advisory services for the DRAM and Flash memory industry, with coverage including current business conditions, spot trading prices, market trends, capital spending and wafer capacity trends, the market impact of DRAM/Flash memory products, and other relevant PC industry information.
© DRAMeXchange ® Tech.Inc. All rights reserved.