
NVIDIA Tesla V100 SXM2 32 GB vs AMD Instinct MI300A

We compared two professional-grade GPUs: the NVIDIA Tesla V100 SXM2 32 GB (32 GB of HBM2) and the AMD Instinct MI300A (128 GB of HBM3) to see which comes out ahead in key specifications, benchmark results, power consumption, and more.

Main Differences

NVIDIA Tesla V100 SXM2 32 GB's Advantages
Lower TDP (250 W vs 760 W)

AMD Instinct MI300A's Advantages
Released 5 years and 9 months later
37% higher boost clock (2100 MHz vs 1530 MHz)
More VRAM (128 GB vs 32 GB)
Higher memory bandwidth (5300 GB/s vs 897.0 GB/s)
9472 additional shading units (14592 vs 5120)
(These deltas are cross-checked in the sketch below.)
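The figures above follow directly from the spec tables further down the page. A minimal Python sketch to reproduce them (the dictionaries and variable names are ours, not part of any vendor tool):

```python
# Cross-check of the "Main Differences" figures using the spec values listed below.
v100   = {"boost_mhz": 1530, "vram_gb": 32,  "bw_gbs": 897.0,  "shaders": 5120,  "tdp_w": 250}
mi300a = {"boost_mhz": 2100, "vram_gb": 128, "bw_gbs": 5300.0, "shaders": 14592, "tdp_w": 760}

boost_gain    = (mi300a["boost_mhz"] / v100["boost_mhz"] - 1) * 100  # ~37.3 %
extra_shaders = mi300a["shaders"] - v100["shaders"]                  # 9472
vram_ratio    = mi300a["vram_gb"] / v100["vram_gb"]                  # 4.0x
bw_ratio      = mi300a["bw_gbs"] / v100["bw_gbs"]                    # ~5.9x
tdp_delta     = mi300a["tdp_w"] - v100["tdp_w"]                      # 510 W higher

print(f"Boost clock: +{boost_gain:.0f}%, extra shading units: {extra_shaders}")
print(f"VRAM: {vram_ratio:.0f}x, bandwidth: {bw_ratio:.1f}x, TDP: +{tdp_delta} W")
```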

Benchmark

FP32 (float)
Tesla V100 SXM2 32 GB    15.67 TFLOPS
AMD Instinct MI300A      122.6 TFLOPS (+682%)
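The +682% figure is just the ratio of the two peak FP32 numbers expressed as a percentage gain:

```python
# How the +682% delta is derived from the two FP32 figures above.
fp32_v100, fp32_mi300a = 15.67, 122.6       # TFLOPS
gain = (fp32_mi300a / fp32_v100 - 1) * 100  # ~682 %
print(f"AMD Instinct MI300A: +{gain:.0f}% peak FP32 throughput")
```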

Graphics Card

                    Tesla V100 SXM2 32 GB    AMD Instinct MI300A
Release Date        Mar 2018                 Dec 2023
Generation          Tesla                    Instinct
Type                Professional             Professional
Bus Interface       PCIe 3.0 x16             PCIe 5.0 x16

Clock Speeds

                    Tesla V100 SXM2 32 GB    AMD Instinct MI300A
Base Clock          1290 MHz                 1000 MHz
Boost Clock         1530 MHz                 2100 MHz
Memory Clock        876 MHz                  5200 MHz

Memory

                    Tesla V100 SXM2 32 GB    AMD Instinct MI300A
Memory Size         32 GB                    128 GB
Memory Type         HBM2                     HBM3
Memory Bus          4096 bit                 8192 bit
Bandwidth           897.0 GB/s               5300 GB/s
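The bandwidth figures follow from bus width × effective data rate. A hedged sketch: HBM transfers two bits per pin per clock, so the V100's 876 MHz memory clock corresponds to 1752 MT/s, while the MI300A's listed 5200 MHz appears to already be the effective HBM3 transfer rate.

```python
# Memory bandwidth = bus width (bits) / 8 * effective data rate (MT/s), in GB/s.
# Assumption: 876 MHz is the real HBM2 clock (doubled for DDR signalling);
# 5200 MHz is taken as the effective HBM3 data rate.
def bandwidth_gbs(bus_bits: int, data_rate_mts: float) -> float:
    return bus_bits / 8 * data_rate_mts / 1000

print(bandwidth_gbs(4096, 876 * 2))  # ~897 GB/s  (Tesla V100 SXM2 32 GB)
print(bandwidth_gbs(8192, 5200))     # ~5325 GB/s (AMD Instinct MI300A, listed as 5300 GB/s)
```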

Render Config

                    Tesla V100 SXM2 32 GB    AMD Instinct MI300A
SM Count            80                       -
Compute Units       -                        228
Shading Units       5120                     14592
TMUs                320                      880
ROPs                128                      0
Tensor Cores        640                      -
RT Cores            -                        -
L1 Cache            128 KB (per SM)          16 KB (per CU)
L2 Cache            6 MB                     16 MB
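Both shading-unit totals work out to 64 lanes per block: Volta packs 64 FP32 cores per SM, and the MI300A's listing implies 64 shader ALUs per compute unit.

```python
# Shading units = number of SMs/CUs * 64 lanes per block (per the listed counts).
sm_count_v100, cu_count_mi300a = 80, 228
print(sm_count_v100 * 64)    # 5120  shading units (Tesla V100 SXM2 32 GB)
print(cu_count_mi300a * 64)  # 14592 shading units (AMD Instinct MI300A)
```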

Theoretical Performance

                    Tesla V100 SXM2 32 GB    AMD Instinct MI300A
Pixel Rate          195.8 GPixel/s           0 MPixel/s
Texture Rate        489.6 GTexel/s           1496 GTexel/s
FP16 (half)         31.33 TFLOPS             980.6 TFLOPS
FP32 (float)        15.67 TFLOPS             122.6 TFLOPS
FP64 (double)       7.834 TFLOPS             61.3 TFLOPS
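These theoretical rates are derived from the render configuration and boost clocks: pixel rate = ROPs × clock, texture rate = TMUs × clock, and peak FLOPS = shading units × clock × operations per lane per cycle. The per-cycle factors in the sketch below are inferred from the listed numbers (2 FP32 FLOPs per lane for the V100's FMA; 4 for the MI300A, consistent with CDNA 3 dual-issue FP32), not taken from vendor documentation, and the MI300A's listed texture rate does not follow from its 2100 MHz boost clock, so it is left out.

```python
# Peak throughput = units * clock (GHz) * operations per unit per cycle,
# in giga-operations per second.
def peak_gops(units: int, clock_ghz: float, ops_per_cycle: float) -> float:
    return units * clock_ghz * ops_per_cycle

print(peak_gops(128, 1.530, 1))          # ~195.8 GPixel/s (V100: 128 ROPs)
print(peak_gops(320, 1.530, 1))          # ~489.6 GTexel/s (V100: 320 TMUs)
print(peak_gops(5120, 1.530, 2) / 1e3)   # ~15.67 TFLOPS   (V100 FP32)
print(peak_gops(14592, 2.100, 2) / 1e3)  # ~61.3 TFLOPS    (MI300A FP64)
print(peak_gops(14592, 2.100, 4) / 1e3)  # ~122.6 TFLOPS   (MI300A FP32, dual-issue)
```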

Graphics Processor

                    Tesla V100 SXM2 32 GB    AMD Instinct MI300A
GPU Name            GV100                    MI300
GPU Variant         -                        -
Architecture        Volta                    CDNA 3.0
Foundry             TSMC                     TSMC
Process Size        12 nm                    5 nm
Transistors         21.1 billion             146 billion
Die Size            815 mm²                  1017 mm²
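A rough transistor-density comparison from the figures above; note that the MI300A numbers describe the whole multi-chiplet package (built on more than one process node) rather than a single monolithic die.

```python
# Transistor density in millions of transistors per mm², from the listed figures.
print(21.1e3 / 815)  # ~25.9 MTr/mm²  (GV100, monolithic 12 nm die)
print(146e3 / 1017)  # ~143.6 MTr/mm² (MI300A, multi-chiplet package total)
```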

Board Design

                    Tesla V100 SXM2 32 GB    AMD Instinct MI300A
TDP                 250 W                    760 W
Suggested PSU       600 W                    1000 W
Outputs             No outputs               No outputs
Power Connectors    None                     None
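Putting the TDP figures next to the FP32 numbers above gives a rough peak-efficiency comparison (theoretical peak divided by TDP only; real workloads will differ):

```python
# Theoretical FP32 efficiency = peak TFLOPS / TDP, expressed in GFLOPS per watt.
print(15.67 / 250 * 1000)  # ~62.7 GFLOPS/W  (Tesla V100 SXM2 32 GB, 250 W)
print(122.6 / 760 * 1000)  # ~161.3 GFLOPS/W (AMD Instinct MI300A, 760 W)
```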

Graphics Features

                    Tesla V100 SXM2 32 GB    AMD Instinct MI300A
DirectX             12 (12_1)                N/A
OpenGL              4.6                      N/A
OpenCL              3.0                      3.0
Vulkan              1.3                      N/A
CUDA                7.0                      -
Shader Model        6.6                      N/A
