Nvidia H200 order starts delivery in Q3, with B100 expected to be shipped in the first half of next year


According to reports, the upstream chip side of Nvidia's AI GPU H200 entered mass production in late Q2, with large-volume deliveries expected from Q3 onward. However, Nvidia has pulled the launch of its Blackwell platform forward by at least one to two quarters, which is dampening end customers' willingness to purchase the H200.

The supply chain points out that client orders awaiting shipment are still concentrated mostly on the HGX H100 architecture, with the H200 accounting for a limited share. The H200 units entering mass production and delivery in Q3 are mainly destined for the NVIDIA DGX H200; as for the B100, there is already partial visibility, and shipments are expected in the first half of next year.

As an iterative upgrade of the H100 GPU, the H200 is the first product on the advanced Hopper architecture to adopt HBM3e high-bandwidth memory, delivering faster data transfer and larger memory capacity, advantages that are especially pronounced for large language model workloads. According to official data released by Nvidia, when handling complex large language models such as Meta's Llama 2, the H200 improves generative AI output response speed by up to 45% compared with the H100.
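For a sense of why the HBM3e upgrade matters for LLM workloads, here is a minimal back-of-envelope sketch in Python. It assumes Nvidia's published spec figures, which are not given in the article (roughly 141 GB of HBM3e at about 4.8 TB/s for the H200 versus 80 GB of HBM3 at about 3.35 TB/s for the H100 SXM), and the simplification that decode-phase token generation is memory-bandwidth bound.

```python
# Back-of-envelope sketch (spec figures assumed, not taken from the article):
# decode-phase LLM token generation is largely memory-bandwidth bound, so
# generation speed scales roughly with HBM bandwidth.

H100_BANDWIDTH_TBS = 3.35   # H100 SXM, HBM3 (published spec, assumed here)
H200_BANDWIDTH_TBS = 4.8    # H200, HBM3e (published spec, assumed here)

def est_tokens_per_s(model_size_gb: float, bandwidth_tbs: float) -> float:
    """Crude upper bound: each generated token streams the full weight set
    from HBM once, so tokens/s ~= bandwidth / model size."""
    return (bandwidth_tbs * 1000) / model_size_gb

# Llama-2-70B in FP16 is roughly 140 GB of weights.
model_gb = 140
for name, bw in [("H100", H100_BANDWIDTH_TBS), ("H200", H200_BANDWIDTH_TBS)]:
    print(f"{name}: ~{est_tokens_per_s(model_gb, bw):.0f} tokens/s upper bound")

# The bandwidth ratio alone is ~1.43x, in the same ballpark as the ~45%
# generative-output speedup the article cites for Llama 2.
print(f"Bandwidth ratio: {H200_BANDWIDTH_TBS / H100_BANDWIDTH_TBS:.2f}x")
```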

The H200 is positioned as another milestone in Nvidia's AI computing lineup: it not only inherits the strengths of the H100 but also makes a significant leap in memory performance. As the H200 enters commercial use, demand for high-bandwidth memory is expected to keep rising, further driving the entire AI computing hardware supply chain and, in particular, market opportunities for HBM3e-related suppliers.

The B100 GPU will adopt liquid cooling. Heat dissipation has become a key factor in raising chip performance: the TDP of Nvidia's H200 GPU is 700 W, while the TDP of the B100 is conservatively estimated to approach a kilowatt. Traditional air cooling may no longer meet the chip's thermal requirements during operation, so cooling technology is expected to shift comprehensively toward liquid cooling.

NVIDIA CEO Jensen Huang has stated that, starting with the B100 GPU, cooling for all future products will move from air cooling to liquid cooling. Galaxy Securities estimates that the B100 will deliver at least twice the performance of the H200 and more than four times that of the H100, with the gains coming partly from more advanced process nodes and partly from improved heat dissipation.
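Taking the cited figures at face value, the TDP and performance multiples imply that performance per watt still improves even as absolute power per socket climbs toward a kilowatt, which is what pushes racks toward liquid cooling. The sketch below is illustrative only: the 1000 W B100 value is the article's "close to a kilowatt" estimate, the relative-performance numbers come from the Galaxy Securities estimate quoted above, and the 700 W figure for the H100 SXM is an assumption not stated in the article.

```python
# Rough performance-per-watt comparison using the figures cited above.
tdp_w = {"H100": 700, "H200": 700, "B100": 1000}   # H100 TDP is an assumption

# Relative performance normalized to H100 = 1, per the Galaxy Securities
# estimate quoted above (B100 >= 2x H200 and > 4x H100, implying H200 ~ 2x H100).
rel_perf = {"H100": 1.0, "H200": 2.0, "B100": 4.0}

for gpu in ("H100", "H200", "B100"):
    perf_per_kw = rel_perf[gpu] / (tdp_w[gpu] / 1000)
    print(f"{gpu}: {rel_perf[gpu]:.0f}x perf, {tdp_w[gpu]} W, "
          f"{perf_per_kw:.1f}x perf per kW")

# Even with TDP near 1 kW, the B100's perf/W is roughly 1.4x the H200's,
# but the absolute heat per socket is what exceeds air cooling's limits.
```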
