EE Times — 2026-04-23

According to Counterpoint Research HPC Service's latest "Data Center AI Server Computing ASIC Shipment Forecast and Tracker," the global demand for high bandwidth memory (HBM) bits for AI server computing ASICs is expected to grow 35 times between 2024 and 2028.
Building on Counterpoint Research's earlier analysis of custom accelerator chips, the firm attributes this explosive growth primarily to the widespread adoption of high-density memory architectures in proprietary accelerators. Key demand drivers include Google's significant expansion of its TPU infrastructure to support the Gemini ecosystem, the continued deployment of AWS Trainium clusters, and the growing adoption of Meta's MTIA and Microsoft's Maia. As hyperscale cloud service providers continue to expand their in-house chip portfolios, the broader industry trend makes clear that HBM has become both a key enabling technology for next-generation AI workloads and a major focus of capital expenditure.
A key aspect of this trend is a structural shift in the total addressable market (TAM) for AI accelerators, in which customized ASICs are rapidly increasing their share of global HBM consumption. Notably, this surge in demand is driven primarily by a significant increase in memory density at the per-chip level, which directly reflects the rapidly rising compute demands of next-generation AI workloads. As hyperscalers adopt ever-larger parameter counts, multimodal architectures, and complex Mixture-of-Experts (MoE) designs, higher memory densities are needed to bring larger datasets closer to the compute cores — increasing data throughput, reducing latency for complex inference tasks, and ensuring that memory bottlenecks do not drag down overall system performance.
This trend means that AI server computing ASICs will claim an increasingly large share of the overall HBM market, diversifying memory demand away from its historical reliance on merchant GPUs alone. At the same time, the ASIC memory ecosystem is undergoing a clear generational shift. As cloud service providers prioritize maximum bandwidth and mature supply chains, HBM3E is expected to dominate, accounting for approximately 56% of ASIC HBM bit demand by 2028.
Regarding HBM3E's market share, David Wu, a research specialist at Counterpoint Research, stated, "HBM3E has proven to be the best balance point in current AI architectures, providing the high bandwidth and high density needed to overcome memory bottlenecks and support large-parameter models. As Samsung's yields gradually stabilize and earlier supply constraints ease, major cloud service providers are progressively standardizing their next-generation ASICs on HBM3E. Counterpoint Research expects this widespread adoption to keep HBM3E above a 50% market share at least through 2028."
MS Hwang, Research Director at Counterpoint Research, stated, "Prior to HBM3E, HBM was largely considered a standardized product with limited differentiation between customers. Starting with HBM4, however, as logic is integrated into the base die — and further still with HBM4E — the market is gradually moving toward customized HBM. Adoption of customized HBM will continue to increase, driving performance improvements in ASICs. This presents a significant opportunity for memory suppliers to build high-value businesses by reflecting logic-die design costs in product pricing, while also creating a more stable demand lock-in effect through deep collaboration with customers."
Regarding the demand and development trends of advanced packaging capacity, Ashwath Rao, Senior Analyst at Counterpoint Research, stated, "TSMC remains the primary beneficiary, as most suppliers still utilize its CoWoS-S and CoWoS-L solutions. However, with TSMC's capacity continuing to be constrained, Counterpoint Research has observed that Google and several other major industry players are evaluating the adoption of Intel's EMIB-T technology to meet the needs of next-generation advanced packaging. If this technology is successfully implemented, it will be a significant milestone for the industry, helping the market reduce its reliance on TSMC and providing a more cost-effective alternative that supports larger package sizes."
From a supplier market-share perspective, the global HBM market will remain highly concentrated, with SK Hynix and Samsung maintaining their dominance for the foreseeable future. The share distribution within the sector, however, is constantly evolving. Counterpoint Research anticipates that Samsung will gradually expand its market share and accelerate the narrowing of its gap with SK Hynix: having overcome past production bottlenecks, Samsung's stabilized yields and improved product performance position it to aggressively win back share in the new wave of AI infrastructure deployments. Because this is not a zero-sum game, Counterpoint Research is also watching Micron's ability to secure design wins in the custom AI server accelerator market and expand its HBM presence.