About the NVIDIA H100 GPU
An Order-of-Magnitude Leap for Accelerated Computing
With the NVIDIA NVLink™ Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads. The GPU also includes a dedicated Transformer Engine to accelerate trillion-parameter language models.
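As an illustrative aside, the sketch below shows what using the Transformer Engine looks like from Python via NVIDIA's transformer_engine library; it assumes that package, PyTorch, and an FP8-capable GPU such as H100 are available, and the layer size is a placeholder rather than a real model dimension.

```python
# Minimal sketch: running one Transformer-style layer in FP8 with NVIDIA's
# Transformer Engine. Assumes transformer_engine, PyTorch, and an
# FP8-capable GPU (e.g. H100) are available. Sizes are placeholders.
import torch
import transformer_engine.pytorch as te

hidden = 4096  # placeholder hidden size

# te.Linear is a drop-in replacement for torch.nn.Linear that can run in FP8.
layer = te.Linear(hidden, hidden, bias=True).cuda()
x = torch.randn(8, hidden, device="cuda")  # inputs are cast to FP8 inside the layer

# fp8_autocast enables FP8 execution for supported Transformer Engine modules.
with te.fp8_autocast(enabled=True):
    y = layer(x)

print(y.shape)  # torch.Size([8, 4096])
```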
Transformational AI Training
Deploying H100 GPUs at data center scale delivers outstanding performance and brings the next generation of exascale high-performance computing (HPC) and trillion-parameter AI within the reach of all researchers.
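For a concrete, if tiny, picture of training scaled across many GPUs, the sketch below uses PyTorch's DistributedDataParallel over the NCCL backend, the communication layer that rides on NVLink and InfiniBand; the model, data, and sizes are placeholders rather than a real workload.

```python
# Minimal data-parallel training sketch with PyTorch DDP over NCCL.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # NCCL uses NVLink/InfiniBand when present
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda()   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):                       # placeholder training loop
        x = torch.randn(32, 1024, device="cuda")
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()                          # gradients are all-reduced across GPUs
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```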
Real-Time Deep Learning Inference
AI solves a wide array of business challenges using an equally wide array of neural networks. A great AI inference accelerator has to deliver not only the highest performance but also the versatility to accelerate these networks.
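As a generic illustration of reduced-precision inference on a GPU, the sketch below runs a placeholder network in FP16 under torch.inference_mode(); it is not specific to H100 or to any particular serving stack.

```python
# Minimal sketch: low-latency GPU inference in reduced precision.
# The network is a placeholder; any torch.nn.Module would do.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).cuda().half().eval()                      # FP16 weights for higher throughput

x = torch.randn(1, 1024, device="cuda", dtype=torch.float16)

with torch.inference_mode():                # disable autograd bookkeeping
    y = model(x)

print(y.shape)  # torch.Size([1, 1024])
```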
Exascale High-Performance Computing
H100’s breakthrough AI capabilities further amplify the power of HPC+AI to accelerate time to discovery for scientists and researchers working on solving the world’s most important challenges.
Accelerated Data Analytics
Because large datasets are scattered across multiple servers, scale-out solutions built on commodity CPU-only servers get bogged down by a lack of scalable computing performance.
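One way to move such analytics off CPU-only servers and onto GPUs is NVIDIA's RAPIDS cuDF dataframe library; the sketch below is a minimal example with a hypothetical file and hypothetical column names, not a benchmark.

```python
# Minimal sketch: a GPU-accelerated groupby with RAPIDS cuDF.
# The CSV path and column names are hypothetical placeholders.
import cudf

df = cudf.read_csv("transactions.csv")          # data is loaded into GPU memory
summary = (
    df.groupby("customer_id")["amount"]
      .agg(["sum", "mean", "count"])            # aggregation runs on the GPU
      .sort_values("sum", ascending=False)
)
print(summary.head())
```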
Built-In Confidential Computing
Exponentially increasing performance while protecting the confidentiality and integrity of data and applications in use can unlock data insights like never before.
Exceptional Performance for Large-Scale AI and HPC
The Hopper Tensor Core GPU will power the NVIDIA Grace Hopper CPU+GPU architecture, purpose-built for terabyte-scale accelerated computing and providing 10X higher performance on large-model AI and HPC.