HBM3E takes off, and the drums of war have already sounded


HBM, which stands for High Bandwidth Memory, is a type of memory chip designed for high-performance CPUs and GPUs. If traditional DDR follows a "bungalow design," then HBM adopts a "multi-story building design": multiple DRAM dies are stacked vertically and interconnected with through-silicon vias (TSVs).

Currently, HBM products are being developed in the sequence of HBM (first generation), HBM2 (second generation), HBM2E (third generation), HBM3 (fourth generation), and HBM3E (fifth generation).

Each iteration of HBM has brought faster processing speeds. Since the standard's publication in January 2022, HBM3 has quickly become a leader in high-performance computing thanks to its 2.5D/3D memory architecture. HBM3 not only inherits its predecessors' strengths but also delivers significant breakthroughs: a data path 1024 bits wide running at an impressive 6.4 Gb/s per pin, for a bandwidth of up to 819 GB/s, providing robust support for high-performance computing.
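That bandwidth figure follows directly from the interface arithmetic; a minimal sketch (peak rates only, ignoring protocol overhead):

```python
def hbm_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak stack bandwidth in GB/s: per-pin data rate (Gb/s) times the
    interface width, divided by 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

print(hbm_bandwidth_gbs(6.4))  # HBM3: 6.4 Gb/s x 1024 bits -> 819.2 GB/s
print(hbm_bandwidth_gbs(9.2))  # HBM3E class: 9.2 Gb/s -> 1177.6 GB/s (~1.2 TB/s)
```

The same formula reproduces the ~1.2 TB/s figures the HBM3E vendors quote later in this article.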

Now manufacturers such as SK Hynix and Micron have raised the bar once again, and HBM3E has been met with enthusiastic market demand since its release.

01

Key Technical Features of HBM3E from the Three Major Memory Giants

HBM3E development continues to be led by the three giants, Micron, SK Hynix, and Samsung, who together drive technological innovation in this field. Below is a look at each manufacturer's advances in HBM3E technology.


**SK Hynix - Unique MR-MUF Technology, etc.**

On August 21, 2023, SK Hynix announced the successful development of a new ultra-high-performance DRAM product, HBM3E, designed for AI, and began providing samples to customers for performance verification.

It is reported that SK Hynix adopted its advanced MR-MUF (Mass Reflow Molded Underfill) technology, which improves the thermal performance of HBM3E by 10% over the previous generation. The process injects a liquid protective material into the gaps left after the semiconductor dies are stacked and then cures it, which is more process-efficient and dissipates heat better than laying down a thin-film material each time a die is stacked. The maximum data processing speed of its HBM3E reaches 1.18 TB (terabytes) per second, meaning it can process a huge amount of data in an extremely short time, equivalent to processing 230 full-HD (FHD) movies in one second.

Additionally, its HBM3E offers a per-pin transfer speed of up to 9.2 Gb/s (consistent with the 1.18 TB/s figure over a 1024-bit interface), a significant improvement over the previous generation HBM3. This speed is particularly important for applications that must move data quickly, such as high-performance computing and artificial intelligence.

**Micron - 1β, Advanced Through-Silicon Vias (TSV) Technology, etc.**

In September 2023, SK Hynix's HBM3E memory faced a new competitor - Micron.

Micron has also developed industry-leading HBM3E designs using its 1β (1-beta) technology, advanced through-silicon vias (TSV), and other innovations that enable differentiated packaging solutions.

With its superior performance and excellent energy efficiency, Micron's HBM3E has garnered much favor in the memory market. For instance, Micron HBM3E's pin rate exceeds 9.2 Gb/s, offering over 1.2 TB/s of memory bandwidth, which aids AI accelerators, supercomputers, and data centers in achieving ultra-high-speed data access. Micron HBM3E currently offers a capacity of 24 GB, enabling data centers to seamlessly expand their AI applications. Whether for training massive neural networks or accelerating inference tasks, Micron's solutions provide the necessary memory bandwidth.

**Samsung - Advanced Thermally Conductive Non-conductive Film Technology, etc.**

Samsung has likewise been advancing in the HBM3E arena, with innovations such as its advanced thermal-compression non-conductive film technology improving performance and reliability. Its HBM3E products are designed to meet the stringent requirements of next-generation AI, high-performance computing, and data-intensive workloads, as the company works to satisfy the era's growing demand for high-performance, high-capacity memory.

On October 21, 2023, Samsung unveiled its new generation of HBM3E DRAM, named "Shinebolt," at its annual "Memory Technology Day" event.

Samsung Shinebolt targets the new generation of AI applications, improving total cost of ownership and accelerating AI model training and inference in data centers. Its pin speed reaches up to 9.8 Gb/s, for an overall transfer rate exceeding 1.2 TB/s. To stack more layers and improve thermal characteristics, Samsung has optimized its non-conductive film (NCF) technology to eliminate gaps between chip layers and maximize thermal conductivity.

In February 2024, Samsung successfully launched its first 12-layer stacked HBM3E DRAM—HBM3E 12H, which is Samsung's largest capacity HBM product to date.

Samsung HBM3E 12H supports a maximum bandwidth of 1,280 GB/s, and its capacity reaches 36 GB. Compared with Samsung's 8-layer stacked HBM3 8H, both bandwidth and capacity increase by more than 50%.
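Those gains can be roughly checked in a few lines. A small sketch, assuming HBM3 8H at 819.2 GB/s and 24 GB (24 Gb dies, 8-high; the capacity baseline is an assumption, not stated explicitly above):

```python
# Assumed HBM3 8H baseline vs. Samsung's published HBM3E 12H figures.
hbm3_8h   = {"bandwidth_gbs": 819.2,  "capacity_gb": 24}
hbm3e_12h = {"bandwidth_gbs": 1280.0, "capacity_gb": 36}

bw_gain  = hbm3e_12h["bandwidth_gbs"] / hbm3_8h["bandwidth_gbs"] - 1
cap_gain = hbm3e_12h["capacity_gb"]   / hbm3_8h["capacity_gb"]   - 1
print(f"bandwidth +{bw_gain:.0%}, capacity +{cap_gain:.0%}")
# -> bandwidth +56%, capacity +50%
```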

HBM3E 12H uses advanced thermal-compression non-conductive film (TC NCF) technology, which keeps 12-layer stacks at the same height as 8-layer products, meeting current HBM packaging requirements. Vertical density is more than 20% higher than in the HBM3 8H product. TC NCF also improves HBM's thermal performance by allowing bumps of different sizes between the chips.

Compared to HBM3 8H, Samsung expects that in AI applications HBM3E 12H will raise average AI training speed by 34% and allow inference services to support more than 11.5 times as many users.

02

Mass production progress of various companies

On March 19, 2024, SK Hynix announced that it had become the first company to successfully mass-produce HBM3E, the new ultra-high-performance memory product designed for AI applications, with shipments to customers beginning at the end of March.

As the world's first supplier to introduce HBM3E, SK Hynix has achieved mass production in just seven months since announcing its development plan in August of the previous year. It is reported that the first batch of SK Hynix's HBM3E products will be delivered to NVIDIA on schedule.

On February 26, 2024, Micron announced the start of mass production of its HBM3E high-bandwidth memory solution. NVIDIA's H200 Tensor Core GPU will use Micron's 8-high 24 GB HBM3E and is set to ship in the second quarter of 2024.

Currently, Samsung has started providing HBM3E 12H samples to customers and is expected to begin mass production on a large scale in the second half of this year.

03

Competing for HBM3E, NVIDIA wants it all

With the popularity of AI servers, the demand for AI accelerator cards has shown a strong upward trend. As an essential component of AI accelerator cards, high-bandwidth memory (HBM) is gradually becoming an indispensable DRAM module.

NVIDIA's next-generation AI accelerator card, the B100 (Blackwell architecture), will feature memory specifications that include HBM3E. Currently, only SK Hynix, Samsung Electronics, and Micron are capable of providing this specification of memory. To ensure a stable supply of HBM, NVIDIA even needs to provide financial support to memory suppliers for the research and development of HBM products in the form of large prepayments. It should be noted that it is rare for customers to make large prepayments to memory suppliers.

After all, NVIDIA is not the only one in urgent need of HBM3E capacity; many companies are competing for this scarce resource, with AMD, Microsoft, Amazon, and others all lining up to buy. AMD reportedly will launch an upgraded MI300 AI accelerator with HBM3E memory later this year, followed by the next-generation Instinct MI400 in 2025. Beyond NVIDIA and AMD, Amazon and Microsoft, two major cloud service players, have rolled out generative AI offerings and sharply increased their AI investments. Reports indicate that SK Hynix is fielding a large number of customer requests for HBM3E samples, even as it races to meet the sample quantities NVIDIA requested first.

At the beginning of this race, the production capacity of the two major HBM suppliers was quickly snapped up.

Recently, SK Hynix Vice President Kim Ki-tae stated in a blog post that although 2024 has just begun, all of SK Hynix's HBM for this year has already been sold out. At the same time, to maintain a leading position in the market, the company has started preparations for 2025. Micron Technology also stated that this year's HBM production capacity has been fully booked, and the vast majority of the capacity for 2025 has already been reserved.

Previously, NVIDIA co-founder and CEO Jen-Hsun Huang stated at the GTC 2024 conference that NVIDIA is testing Samsung's HBM chips and may adopt them in the future.

This also means that, against the backdrop of the current extremely tight supply and demand relationship, NVIDIA relies not only on SK Hynix and Micron to provide HBM3E production capacity but also urgently needs Samsung's participation to meet its growing demand.

Seen in this light, with AI development in full swing, HBM3E production capacity is in serious shortage in 2024.

04

The 2024 HBM supply bit growth rate is expected to reach as high as 260%

According to TrendForce, by the end of 2024 the overall DRAM industry plans about 250K wafers per month of HBM TSV capacity, roughly 14% of total DRAM capacity (about 1,800K wafers per month). As memory makers continue to increase their investment, the annual growth rate of HBM supply bits is expected to reach as high as 260%.

In terms of market share, HBM demand remains robust, with orders for 2024 already largely locked in by buyers. HBM's share of total DRAM output value is expected to rise from 8.4% in 2023 to 20.1% in 2024, indicating rapid growth.
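The "about 14%" share follows directly from the two capacity figures; a trivial check (K/month here means thousands of wafer starts per month):

```python
hbm_tsv_kwpm = 250      # planned HBM TSV capacity, K wafers/month (TrendForce)
total_dram_kwpm = 1800  # total DRAM capacity, K wafers/month
share = hbm_tsv_kwpm / total_dram_kwpm
print(f"HBM TSV share of DRAM capacity: {share:.1%}")  # -> 13.9%
```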

The institution also estimated the three major manufacturers' HBM/TSV production capacity. Samsung's annual HBM TSV capacity is expected to reach 130K wafers per month in 2024; SK Hynix follows at 120-125K; Micron is far smaller at only about 20K. Samsung and SK Hynix are currently planning the most aggressive capacity increases. SK Hynix holds over 90% of the HBM3 market, while Samsung, helped by the rising volume of AMD MI300 chips, is expected to close the gap over the coming quarters.

HBM dies are 35% to 45% larger than DDR5 dies of the same capacity on the same process node, yet yield (including TSV packaging yield) is 20% to 30% lower. The HBM manufacturing process (including TSV) also takes 1.5 to 2 months longer than DDR5's, so more than two quarters pass from wafer start to packaged output. Buyers who urgently need sufficient supply therefore lock in orders early.
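To see how the die-size and yield penalties compound, here is an illustrative sketch (not TrendForce's model; the 90% DDR5 baseline yield and the mid-range 40% area / 25% yield penalties are assumptions):

```python
# Normalize DDR5 to one unit of bits per wafer at an assumed 90% yield.
ddr5_good_bits = 1.0 * 0.90

# HBM dies of the same capacity are ~35-45% larger, so a wafer holds
# roughly 1/1.40 as many bits (mid-range); yield is ~20-30% lower (take 25%).
hbm_good_bits = (1.0 / 1.40) * (0.90 * (1 - 0.25))

print(f"HBM good bits per wafer: {hbm_good_bits / ddr5_good_bits:.0%} of DDR5")
# -> about 54%, i.e. roughly half the good bits per wafer
```

Under these assumptions an HBM wafer yields only about half as many sellable bits as a DDR5 wafer, which is one reason HBM commands a price premium and capacity is booked so far ahead.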

In June last year, media reports stated that SK Hynix was preparing to invest in back-end process equipment and expand the Icheon factory for packaging HBM3. It is expected that by the end of this year, the scale of the factory's back-end process equipment will nearly double.

Recently, SK Hynix is planning to invest about $4 billion to build a large, advanced chip packaging factory in West Lafayette, Indiana, USA, aiming to expand HBM storage capacity to meet NVIDIA's substantial demand. This large, advanced packaging factory is likely to start operations in 2028.

To close the gap, Micron has also placed a significant bet on its next-generation product, HBM3E. It is reported that Micron Technology's Taichung Plant 4 in Taiwan, China, was officially put into operation at the beginning of November 2023. Micron stated that Taichung Plant 4 will integrate advanced detection and packaging testing capabilities, mass-producing HBM3E and other products to meet the growing demand for various applications such as artificial intelligence, data centers, edge computing, and cloud.

Samsung Electronics began to expand the supply of the fourth generation of HBM, namely HBM3, from the fourth quarter of last year and is currently entering a transition period. Han Jin-man, Executive Vice President in charge of Samsung's U.S. semiconductor business, said in January this year that the company has high hopes for high-capacity memory chips, including the HBM series, to lead the rapidly growing field of artificial intelligence chips. At the CES 2024 press conference, he told reporters, "Our HBM chip production this year will be 2.5 times that of last year, and it will continue to double next year."

Samsung officials also revealed plans to raise maximum HBM output to 150,000-170,000 units per month before the fourth quarter of this year to compete in the 2024 HBM market. Samsung Electronics previously invested 10.5 billion KRW to acquire Samsung Display's plant and equipment in Cheonan, South Korea, to expand HBM capacity, and plans to invest a further 70 billion to 100 billion KRW in a new packaging line.

However, it is worth noting that overseas analysts have recently indicated that Samsung's HBM3 production yield is about 10% to 20%, while SK Hynix's HBM3 yield reaches 60% to 70%. The main reason is said to be Samsung's insistence on thermal-compression non-conductive film (TC NCF) manufacturing technology, which can cause production issues, whereas SK Hynix's large-scale adoption of mass reflow molded underfill (MR-MUF) overcomes NCF's weaknesses. To improve output, Samsung is actively negotiating with material suppliers, including Japanese companies such as Nagase, to secure a stable supply of MUF materials. Although Samsung has placed orders for chip-making equipment for MUF technology, the need for further testing and optimization means mass production of high-end chips using MUF may not be ready until next year.

HBM sees a rise in both quantity and price, with domestic manufacturers catching up

According to the bidding website, Wuhan Xinxin recently posted a tender titled "High Bandwidth Storage Chip Advanced Packaging Technology Research and Development and Production Line Construction," which will use 3D integrated multi-wafer stacking technology to create domestically produced high-bandwidth memory with higher capacity, greater bandwidth, lower power consumption, and higher production efficiency. The project aims to industrialize multi-wafer stacking processes, covering approximately 17 new pieces/sets of production equipment and targeting a monthly capacity of at least 3,000 12-inch wafers.

In response to the overseas giants' mass production of HBM3E, domestic memory manufacturers are also accelerating their HBM efforts, hoping to build competitive strength amid the AI wave. Last August, for example, Taiwan's Winbond Electronics introduced CUBEx, a quasi-HBM high-bandwidth product that stacks 1 to 4 layers of TSV DRAM, with I/O speeds of 500 Mb/s to 2 Gb/s, total bandwidth of up to 1,024 GB/s, chip capacities of 0.5 to 4 GB, and power consumption below 1 pJ/bit. CUBEx, which Winbond positions as offering higher bandwidth than conventional HBM, can be used in fields such as AR, VR, and wearables.

Internationally, first-tier manufacturers' DRAM processes are at the 1α/1β level; mainland China's DRAM processes are at the 25-17 nm level, and Taiwan's at the 25-19 nm level. Domestic DRAM processes are thus close to those of overseas manufacturers, and with advanced packaging resources and GPU customer demand behind them, domestic DRAM makers are expected to break through in HBM.

At present, only first-tier packaging manufacturers such as Jiangsu Changjiang Electronics Technology, Tongfu Microelectronics, and Shenghe Jingwei have the technology (such as TSV) and equipment to support HBM production. Jiangsu Changjiang Electronics Technology has told investors that its XDFOI high-density fan-out packaging solution also suits HBM's chip-to-wafer and chip-to-chip TSV stacking applications. Tongfu Microelectronics has said that once the advanced packaging line at its Nantong plant is complete, the company will become the country's most advanced 2.5D/3D packaging R&D and mass production base, achieving a domestic breakthrough in high-performance HBM packaging, which is significant for overcoming "chokepoint" technologies in China's IC packaging and test sector.

In the rest of the supply chain, chip design company Guoxin Technology stated that it is working with partners to conduct wafer validation for high-performance interconnect IP technology for chiplet chips based on advanced processes, and is actively cooperating with upstream and downstream manufacturers, including high-end chip packaging cooperation involving HBM technology.

Unigroup Guoxin indicated that its HBM products are special-purpose integrated circuits still in the research and development stage. Shannon Chip Innovation previously stated that, as one of SK Hynix's distributors, it holds agency rights for HBM and will, subject to supply from the original manufacturer, generate sales according to downstream customers' needs.

Fei Kai Materials indicated that epoxy molding compound is among the materials required for HBM manufacturing, with MUF materials divided into different varieties by property and process. The company's MUF products include the liquid molding compound LMC, already in mass production with a small volume of sales, and the granular molding compound GMC, which remains in development and sampling.

Xing Sen Technology stated that its FCBGA packaging substrates can be used for HBM packaging, but it has not yet entered the supply chains of the overseas HBM leaders.

HBM3 and HBM3E, as the latest iterations of HBM chips, are gradually becoming the darling of the market. Their close integration with core microprocessor chips provides strong support for generative artificial intelligence to process massive amounts of data. In the future, with the continuous improvement and maturity of HBM3 and HBM3E technologies, we have reason to believe that they will shine in more fields and become an important force in promoting the development of artificial intelligence. Both domestic and international manufacturers will face new development opportunities and challenges in this field.
