
NVIDIA H100 PCIe

The NVIDIA H100 PCIe is a GPU manufactured on TSMC's 4 nm process, based on the NVIDIA Hopper architecture, and released in March 2022. It features 80 billion transistors, 14,592 CUDA cores, and 80 GB of HBM2e memory backed by a 50 MB L2 cache, delivers a theoretical FP32 performance of 51.22 TFLOPS, and has a total power consumption of 350 W.
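The headline FP32 figure follows directly from the shader count and the boost clock: each CUDA core can retire one FMA (two floating-point operations) per cycle. A minimal Python sketch of that arithmetic, using the numbers above:

    # Peak FP32 = CUDA cores x 2 ops per clock (FMA) x boost clock
    cuda_cores = 14592
    boost_clock_ghz = 1.755  # 1755 MHz boost clock

    fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1e3
    print(f"Peak FP32: {fp32_tflops:.2f} TFLOPS")  # ~51.22 TFLOPS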

Graphics Card

Release Date
Mar 2022
Generation
Tesla Hopper
Type
AI GPU
Bus Interface
PCIe 5.0 x16

Clock Speeds

Base Clock
1095 MHz
Boost Clock
1755 MHz
Memory Clock
1593 MHz

Memory

Memory Size
80 GB
Memory Type
HBM2e
Memory Bus
5120 bit
Bandwidth
2039 GB/s
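The bandwidth figure is consistent with the bus width and memory clock, assuming the listed 1593 MHz is the pre-DDR command clock (HBM2e transfers data on both clock edges). A short sketch of the arithmetic:

    # Bandwidth = memory clock x 2 (double data rate) x bus width in bytes
    memory_clock_mhz = 1593
    bus_width_bits = 5120

    bandwidth_gb_s = memory_clock_mhz * 2 * (bus_width_bits / 8) / 1000
    print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~2039 GB/s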

Render Config

SM Count
114
Shading Units
14592
TMUs
456
ROPs
24
Tensor Cores
456
L1 Cache
256 KB (per SM)
L2 Cache
50 MB
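These chip-wide counts divide evenly across the 114 SMs, which is a quick way to sanity-check the table; a small sketch of the per-SM breakdown:

    # Per-SM breakdown of the render configuration
    sm_count      = 114
    shading_units = 14592
    tmus          = 456
    tensor_cores  = 456
    l1_kb_per_sm  = 256

    print(shading_units // sm_count)       # 128 FP32 shading units per SM
    print(tmus // sm_count)                # 4 TMUs per SM
    print(tensor_cores // sm_count)        # 4 Tensor Cores per SM
    print(sm_count * l1_kb_per_sm / 1024)  # 28.5 -> ~28.5 MB of L1 chip-wide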

Theoretical Performance

Pixel Rate
42.12 GPixel/s
Texture Rate
800.3 GTexel/s
FP16 (half)
204.9 TFLOPS
FP32 (float)
51.22 TFLOPS
FP64 (double)
25.61 TFLOPS
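Each rate in this table follows from the unit counts in the render configuration and the 1755 MHz boost clock; as listed, FP16 runs at four times and FP64 at half the FP32 throughput on this card. A sketch of the arithmetic, reusing the FP32 figure derived earlier:

    # Theoretical rates from unit counts and the 1755 MHz boost clock
    boost_ghz = 1.755

    pixel_rate   = 24 * boost_ghz   # ROPs x clock  -> ~42.12 GPixel/s
    texture_rate = 456 * boost_ghz  # TMUs x clock  -> ~800.3 GTexel/s
    fp32 = 51.22                    # TFLOPS, from CUDA cores x 2 x clock
    fp16 = fp32 * 4                 # ~204.9 TFLOPS (4:1 vs FP32)
    fp64 = fp32 / 2                 # ~25.61 TFLOPS (1:2 vs FP32)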

Graphics Processor

GPU Name
GH100
Architecture
Hopper
Foundry
TSMC
Process Size
4 nm
Transistors
80 billion
Die Size
814 mm²
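Taken together, the transistor count and die size imply a density of roughly 98 million transistors per mm²:

    # Transistor density implied by the two figures above
    print(80e9 / 814 / 1e6)  # ~98.3 million transistors per mm2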

Board Design

TDP
350 W
Suggested PSU
750 W
Outputs
No outputs
Power Connectors
1x 16-pin

Graphics Features

DirectX
N/A
OpenGL
N/A
OpenCL
3.0
Vulkan
N/A
CUDA
9.0
Shader Model
N/A
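To confirm the CUDA compute capability (9.0) and a few of the figures above on an installed board, one option is PyTorch's device-property query. This is a minimal sketch and assumes a CUDA-enabled PyTorch build; any CUDA GPU will report its own values:

    import torch

    # Report compute capability, SM count and memory of the first visible GPU
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        props = torch.cuda.get_device_properties(0)
        print(f"Compute capability: {major}.{minor}")           # 9.0 on GH100
        print(f"SM count: {props.multi_processor_count}")       # 114 on H100 PCIe
        print(f"Memory: {props.total_memory / 2**30:.0f} GiB")  # ~80 GiB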

Rankings

FP32 (float)
NVIDIA H100 PCIe 80 GB HBM2e: 51.22 TFLOPS

Blender
NVIDIA RTX A6000 48 GB GDDR6: 5387
NVIDIA H100 PCIe 80 GB HBM2e: 4845
NVIDIA RTX A5000 24 GB GDDR6: 4829

