AMD Radeon Instinct MI300X vs NVIDIA Tesla V100 SXM2 16 GB

Comparison of the AMD Radeon Instinct MI300X (192 GB HBM3, 19,456 stream processors) with the NVIDIA Tesla V100 SXM2 16 GB (16 GB HBM2, 5,120 CUDA cores). In each spec row below, the MI300X value is listed first, followed by the V100.

Contents:

Memory · ML Performance · Compute Power · Architecture & Compatibility · ML Software Support · Clocks & Performance · Power Consumption · Rendering · Benchmarks · Additional

Memory

Memory Size:      192 GB vs 16 GB
Memory Type:      HBM3 vs HBM2
Memory Bandwidth: 5.3 TB/s vs 1.13 TB/s
Memory Bus Width: 8,192-bit vs 4,096-bit
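
The bandwidth rows follow directly from memory clock and bus width. A minimal sketch of that arithmetic in Python, using the clocks listed under Clocks & Performance below and assuming the usual double data rate for HBM:

```python
def peak_bandwidth_tb_s(mem_clock_mhz: float, bus_width_bits: int, data_rate: int = 2) -> float:
    """Peak bandwidth = clock * transfers per clock * bus width in bytes."""
    return mem_clock_mhz * 1e6 * data_rate * (bus_width_bits / 8) / 1e12

# Figures from this page:
print(f"MI300X: {peak_bandwidth_tb_s(2525, 8192):.2f} TB/s")  # ~5.17 (AMD quotes 5.3 TB/s)
print(f"V100:   {peak_bandwidth_tb_s(1106, 4096):.2f} TB/s")  # ~1.13
```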

ML Performance

FP16 (Half Precision): 653.7 TFLOPS vs 32.71 TFLOPS
BF16 (Brain Float):    Yes vs No
TF32 (TensorFloat):    Yes vs No
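
Whether a given precision is actually usable is easy to probe at runtime. A minimal PyTorch sketch, assuming a build that matches the card (ROCm builds reuse the torch.cuda namespace, so the same code covers both):

```python
import torch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print("BF16 supported:", torch.cuda.is_bf16_supported())
    # TF32 is an opt-in toggle rather than a probe; it only changes anything
    # on hardware with TF32-capable matrix units.
    torch.backends.cuda.matmul.allow_tf32 = True
```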

Compute Power

FP32 (Single Precision): 81.72 TFLOPS vs 16.35 TFLOPS
FP64 (Double Precision): 81.72 TFLOPS vs 8.177 TFLOPS
Shader Cores:            19,456 stream processors vs 5,120 CUDA cores
RT Cores:                None vs None
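
The headline TFLOPS values are plain arithmetic: shader count x boost clock x 2 FLOPs per cycle (one fused multiply-add). A worked sketch with the figures on this page:

```python
def peak_tflops(cores: int, boost_mhz: float, flops_per_cycle: int = 2) -> float:
    """Peak throughput assuming one FMA (2 FLOPs) per core per cycle."""
    return cores * boost_mhz * 1e6 * flops_per_cycle / 1e12

print(f"MI300X FP32: {peak_tflops(19_456, 2100):.2f} TFLOPS")  # ~81.72
print(f"V100 FP32:   {peak_tflops(5_120, 1597):.2f} TFLOPS")   # ~16.35
# The V100 runs FP16 at 2x its FP32 rate (32.71) and FP64 at half (8.177).
```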

Architecture & Compatibility

GPU Architecture:          CDNA 3.0 vs Volta
Streaming Multiprocessors: N/A (the MI300X uses 304 Compute Units instead) vs 80
PCIe Version:              PCIe 5.0 x16 vs PCIe 3.0 x16

ML Software Support

CUDA Version: Not supported (the MI300X is programmed through ROCm/HIP) vs 7.0 (compute capability)
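
In practice, "no CUDA" just means the MI300X is driven by the ROCm stack instead. A sketch of how application code typically tells the two apart, assuming a PyTorch install built for the card in question:

```python
import torch

if torch.version.hip is not None:      # ROCm build -> MI300X path
    print("ROCm/HIP version:", torch.version.hip)
elif torch.version.cuda is not None:   # CUDA build -> V100 path
    print("CUDA version:", torch.version.cuda)
    print("Compute capability:", torch.cuda.get_device_capability(0))  # (7, 0) on a V100
```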

Clocks & Performance

Base Clock:   1,000 MHz vs 1,245 MHz
Boost Clock:  2,100 MHz vs 1,597 MHz
Memory Clock: 2,525 MHz vs 1,106 MHz

Power Consumption

TDP/TGP:         750 W vs 250 W
Recommended PSU: 1,150 W vs 600 W
Power Connector: None vs None (both draw power through the OAM/SXM socket)
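
Raw board power is less informative than throughput per watt. Dividing the FP16 figures above by TDP gives a rough spec-sheet efficiency comparison (peak numbers, not measured draw):

```python
# Peak FP16 TFLOPS per watt, from the spec values on this page.
for name, tflops, tdp_w in [("MI300X", 653.7, 750), ("V100", 32.71, 250)]:
    print(f"{name}: {tflops / tdp_w:.2f} TFLOPS/W")
# MI300X ~0.87 TFLOPS/W vs V100 ~0.13 TFLOPS/W on paper.
```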

Rendering

Texture Units (TMU): 1,216 vs 320
ROPs:                None vs None
L2 Cache:            16 MB vs 6 MB

Benchmarks

MLPerf, llama2-70b-99.9 (precision unspecified): 1,983 tokens/s
MLPerf, llama2-70b-99.9 (fp16): 1,740 tokens/s
MLPerf, llama2-70b-99.9 (fp8): 1,057 tokens/s
MLPerf, llama3.1-405b (precision unspecified): 30.4 tokens/s
MLPerf, llama3.1-405b (fp16): 34.8 tokens/s
llama.cpp, llama-2-7b-Q4_0: 232.9 tokens/s
MLPerf, mixtral-8x7b (fp8): 5,975 tokens/s

Additional

Slots:           OAM module vs SXM2 module
Release Date:    Dec. 6, 2023 vs Nov. 26, 2019
Display Outputs: None vs None (both are headless compute accelerators)
