
AMD Radeon Instinct MI8 vs NVIDIA Tesla V100 SXM2 32 GB

We compared two professional-market GPUs: the 4 GB Radeon Instinct MI8 and the 32 GB Tesla V100 SXM2 32 GB, to see which has the edge in key specifications, benchmark results, and power consumption.

Main Differences

AMD Radeon Instinct MI8's Advantages
Lower TDP (175 W vs 250 W)
NVIDIA Tesla V100 SXM2 32 GB's Advantages
Released 1 year and 3 months later
Higher boost clock (1530 MHz)
More VRAM (32 GB vs 4 GB)
Higher memory bandwidth (897.0 GB/s vs 512.0 GB/s)
1,024 more shading units (5120 vs 4096)


Benchmark

FP32 (float)
Radeon Instinct MI8: 8.192 TFLOPS
Tesla V100 SXM2 32 GB: 15.67 TFLOPS (+91%)
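
The +91% figure is simply the relative FP32 throughput difference between the two cards; a quick sketch of that arithmetic using the values above:

```python
# Relative FP32 advantage of the Tesla V100 SXM2 32 GB over the Radeon Instinct MI8,
# computed from the peak throughput values listed above.
mi8_fp32_tflops = 8.192
v100_fp32_tflops = 15.67

advantage_pct = (v100_fp32_tflops / mi8_fp32_tflops - 1) * 100
print(f"V100 FP32 advantage: +{advantage_pct:.0f}%")  # prints: V100 FP32 advantage: +91%
```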

Graphics Card

Spec | Radeon Instinct MI8 | Tesla V100 SXM2 32 GB
Release Date | Dec 2016 | Mar 2018
Generation | Radeon Instinct | Tesla
Type | Professional | Professional
Bus Interface | PCIe 3.0 x16 | PCIe 3.0 x16

Clock Speeds

Spec | Radeon Instinct MI8 | Tesla V100 SXM2 32 GB
Base Clock | - | 1290 MHz
Boost Clock | - | 1530 MHz
Memory Clock | 500 MHz | 876 MHz

Memory

Spec | Radeon Instinct MI8 | Tesla V100 SXM2 32 GB
Memory Size | 4 GB | 32 GB
Memory Type | HBM | HBM2
Memory Bus | 4096-bit | 4096-bit
Bandwidth | 512.0 GB/s | 897.0 GB/s
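
Both bandwidth figures follow from the memory clock, the double-data-rate transfer of HBM/HBM2 (two transfers per clock), and the 4096-bit bus. A minimal sketch of that standard calculation:

```python
def peak_bandwidth_gbs(memory_clock_mhz: float, bus_width_bits: int, transfers_per_clock: int = 2) -> float:
    """Peak memory bandwidth in GB/s: clock (MHz) x transfers/clock x bus width (bytes) / 1000."""
    return memory_clock_mhz * transfers_per_clock * (bus_width_bits / 8) / 1000

print(peak_bandwidth_gbs(500, 4096))  # Radeon Instinct MI8 (HBM):    512.0 GB/s
print(peak_bandwidth_gbs(876, 4096))  # Tesla V100 SXM2 32 GB (HBM2): ~897.0 GB/s
```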

Render Config

Spec | Radeon Instinct MI8 | Tesla V100 SXM2 32 GB
SM Count | - | 80
Compute Units | 64 | -
Shading Units | 4096 | 5120
TMUs | 256 | 320
ROPs | 64 | 128
Tensor Cores | - | 640
RT Cores | - | -
L1 Cache | 16 KB (per CU) | 128 KB (per SM)
L2 Cache | 2 MB | 6 MB

Theoretical Performance

Spec | Radeon Instinct MI8 | Tesla V100 SXM2 32 GB
Pixel Rate | 64.00 GPixel/s | 195.8 GPixel/s
Texture Rate | 256.0 GTexel/s | 489.6 GTexel/s
FP16 (half) | 8.192 TFLOPS | 31.33 TFLOPS
FP32 (float) | 8.192 TFLOPS | 15.67 TFLOPS
FP64 (double) | 512.0 GFLOPS | 7.834 TFLOPS
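
These theoretical rates are derived from the render configuration and the peak engine clock: pixel rate is ROPs × clock, texture rate is TMUs × clock, and FP32 throughput is 2 × shading units × clock (one fused multiply-add, i.e. 2 FLOPs, per shader per cycle). A minimal sketch, assuming the MI8's 1000 MHz peak clock (not listed in the Clock Speeds table above) and the V100's 1530 MHz boost clock:

```python
def theoretical_rates(shading_units: int, tmus: int, rops: int, clock_mhz: float) -> dict:
    """Peak rates from unit counts and clock: each ROP/TMU produces one result per cycle,
    each shading unit issues one FMA (2 FLOPs) per cycle."""
    clock_ghz = clock_mhz / 1000.0
    return {
        "pixel_rate_gpixel_s": rops * clock_ghz,
        "texture_rate_gtexel_s": tmus * clock_ghz,
        "fp32_tflops": 2 * shading_units * clock_ghz / 1000.0,
    }

# Radeon Instinct MI8 (assumed 1000 MHz peak clock)
print(theoretical_rates(4096, 256, 64, 1000))   # ~64 GPixel/s, 256 GTexel/s, 8.192 TFLOPS
# Tesla V100 SXM2 32 GB (1530 MHz boost clock)
print(theoretical_rates(5120, 320, 128, 1530))  # ~195.8 GPixel/s, 489.6 GTexel/s, 15.67 TFLOPS
```

FP16 and FP64 then follow from each architecture's rate ratios: 2× and 1/2 of FP32 on GV100, versus 1× and 1/16 of FP32 on Fiji, which matches the 31.33 TFLOPS, 7.834 TFLOPS, and 512.0 GFLOPS entries above.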

Graphics Processor

Spec | Radeon Instinct MI8 | Tesla V100 SXM2 32 GB
GPU Name | Fiji | GV100
GPU Variant | Fiji XT CA (215-0862120) | -
Architecture | GCN 3.0 | Volta
Foundry | TSMC | TSMC
Process Size | 28 nm | 12 nm
Transistors | 8.9 billion | 21.1 billion
Die Size | 596 mm² | 815 mm²

Board Design

Spec | Radeon Instinct MI8 | Tesla V100 SXM2 32 GB
TDP | 175 W | 250 W
Suggested PSU | 450 W | 600 W
Outputs | No outputs | No outputs
Power Connectors | 1x 8-pin | None

Graphics Features

Spec | Radeon Instinct MI8 | Tesla V100 SXM2 32 GB
DirectX | 12 (12_0) | 12 (12_1)
OpenGL | 4.6 | 4.6
OpenCL | 2.1 | 3.0
Vulkan | 1.2.170 | 1.3
CUDA | - | 7.0
Shader Model | 6.5 | 6.6
