AMD Instinct MI300X vs NVIDIA H100 PCIe 96 GB

Comparison of the AMD Instinct MI300X (192 GB HBM3, 19,456 stream processors) with the NVIDIA H100 PCIe 96 GB (96 GB HBM3, 16,896 CUDA cores).


Performance Rating

NVIDIA H100 PCIe 96 GB: 55.6
(No performance rating is listed for the AMD Instinct MI300X.)

Contents:

Memory, ML Performance, Compute Power, Architecture & Compatibility, ML Software Support, Clocks & Performance, Power Consumption, Rendering, Benchmarks, Additional

All spec rows below list the AMD Instinct MI300X value first, then the NVIDIA H100 PCIe 96 GB value.

Memory

Memory Size: 192 GB vs 96 GB
Memory Type: HBM3 vs HBM3
Memory Bandwidth: 5.3 TB/s vs 3.36 TB/s
Memory Bus Width: 8,192-bit vs 5,120-bit
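
The practical upshot of the capacity row is single-card model fit. A back-of-envelope sketch (plain Python; the 1.2x overhead factor for KV cache and activations is an assumption, not a measurement) shows why a 70B-parameter model in fp16 fits on the 192 GB card but not the 96 GB one:

```python
# Back-of-envelope check on the memory rows above (sizes in GB).
CARDS = {
    "MI300X": 192,
    "H100 PCIe 96 GB": 96,
}

def fits_fp16(params_billions: float, mem_gb: float, overhead: float = 1.2) -> bool:
    """fp16 weights take 2 bytes/param; `overhead` is an assumed fudge
    factor for KV cache and activations, not a measured value."""
    return params_billions * 2 * overhead <= mem_gb

for name, mem_gb in CARDS.items():
    print(f"{name}: Llama-2-70B in fp16 fits: {fits_fp16(70, mem_gb)}")
# MI300X: True  (70 * 2 * 1.2 = 168 GB <= 192 GB)
# H100 PCIe 96 GB: False (168 GB > 96 GB)
```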

ML Performance

FP16 (Half Precision): 653.7 TFLOPS vs 248.3 TFLOPS

BF16 (Brain Float): supported on both (per-card throughput not listed)
TF32 (TensorFloat): supported on both (per-card throughput not listed)
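
The bf16 row is worth a second look: bf16 matters for ML because it keeps fp32's 8-bit exponent (and therefore its range), trading away mantissa precision. A minimal numpy illustration, emulating bf16 by truncating fp32 bits since numpy has no native bf16 dtype:

```python
import numpy as np

x = np.array([70000.0], dtype=np.float32)

# fp16 has a 5-bit exponent and overflows just above 65,504
print(x.astype(np.float16))  # [inf]

# bf16 keeps fp32's 8-bit exponent; numpy has no native bf16 dtype, but
# truncating the low 16 bits of the fp32 encoding emulates it
bf16 = (x.view(np.uint32) & np.uint32(0xFFFF0000)).view(np.float32)
print(bf16)  # [69632.] -- coarser, but still in range
```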

Compute Power

FP32 (Single Precision): 81.72 TFLOPS vs 62.08 TFLOPS
FP64 (Double Precision): 81.72 TFLOPS vs 31.04 TFLOPS
Shader Cores: 19,456 (stream processors) vs 16,896 (CUDA cores)
RT Cores: none on either card
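
The FP32 rows follow directly from the core counts and boost clocks listed below: each shader core can retire one fused multiply-add (2 FLOPs) per clock, so a two-line check reproduces the table:

```python
# Peak FP32 = cores * boost clock * 2 FLOPs (one FMA per core per clock).
def peak_fp32_tflops(cores: int, boost_ghz: float) -> float:
    return cores * boost_ghz * 2 / 1000.0  # GFLOPS -> TFLOPS

print(f"MI300X: {peak_fp32_tflops(19_456, 2.100):.2f} TFLOPS")  # 81.72
print(f"H100:   {peak_fp32_tflops(16_896, 1.837):.2f} TFLOPS")  # 62.08
```

The FP64 rows then expose the design split: CDNA 3 runs FP64 at the full FP32 rate (hence the identical 81.72 TFLOPS), while Hopper runs it at half rate (31.04 = 62.08 / 2).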

Architecture & Compatibility

GPU Architecture: CDNA 3.0 vs Hopper
Streaming Multiprocessors (SMs): n/a (CDNA 3 organizes its 19,456 stream processors into 304 Compute Units) vs 132
PCIe Version: PCIe 5.0 x16 vs PCIe 5.0 x16
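
Since both cards share the same PCIe 5.0 x16 host link, host-transfer bandwidth is identical; the usual back-of-envelope figure:

```python
# Shared host link: PCIe 5.0 x16 back-of-envelope bandwidth.
GT_PER_S = 32          # PCIe 5.0 signaling rate per lane
ENCODING = 128 / 130   # 128b/130b line-coding efficiency
LANES = 16

gb_per_s = GT_PER_S * ENCODING * LANES / 8  # bits -> bytes
print(f"{gb_per_s:.1f} GB/s per direction")  # ~63.0
```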

ML Software Support

CUDA Compute Capability: n/a (the MI300X has no CUDA support and uses AMD's ROCm stack instead) vs 9.0
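
In practice the software split matters more than the version number: the H100 runs CUDA, the MI300X runs ROCm. Frameworks paper over much of this; for example, PyTorch's ROCm builds expose AMD GPUs through the same torch.cuda API, so a device check like the sketch below (assuming a PyTorch build appropriate to the card) runs unchanged on either:

```python
import torch

# On a ROCm build of PyTorch, AMD GPUs appear through the torch.cuda API,
# so the same check works for both the H100 and the MI300X.
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    # torch.version.hip is a string on ROCm builds, None on CUDA builds
    print("ROCm build:", torch.version.hip is not None)
else:
    print("no GPU visible")
```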

Clocks & Performance

Base Clock: 1,000 MHz vs 1,665 MHz (+66%)
Boost Clock: 2,100 MHz vs 1,837 MHz
Memory Clock: 2,525 MHz vs 1,313 MHz

Power Consumption

TDP/TGP: 750 W vs 700 W (-7%)
Recommended PSU: 1,150 W vs 1,100 W (-4%)
Power Connector: none (OAM modules draw power through the socket) vs 8-pin EPS
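
At these board powers, sustained draw is a real operating cost. A rough monthly estimate (the electricity price is an assumed placeholder; adjust for your region):

```python
# Rough monthly energy use at the rated board power (full load, 730 h/month).
KWH_PRICE = 0.15  # $/kWh -- assumption, not from the source
HOURS = 730

for name, tdp_w in {"MI300X": 750, "H100 PCIe 96 GB": 700}.items():
    kwh = tdp_w / 1000 * HOURS
    print(f"{name}: {kwh:.0f} kWh/month, ~${kwh * KWH_PRICE:.0f} at full load")
# MI300X: 548 kWh/month (~$82); H100 PCIe 96 GB: 511 kWh/month (~$77)
```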

Rendering

Texture Units (TMUs): 1,216 vs 528
ROPs: none on either card
L2 Cache: 16 MB vs 50 MB (+212%)

Benchmarks

MLPerf, llama2-70b-99.9 (precision unspecified): 1,983 tokens/s
MLPerf, llama2-70b-99.9 (fp16): 1,740 tokens/s
MLPerf, llama2-70b-99.9 (fp8): 1,057 tokens/s
MLPerf, llama3.1-405b (precision unspecified): 30.4 tokens/s
MLPerf, llama3.1-405b (fp16): 34.8 tokens/s
llama.cpp, llama-2-7b-Q4_0: 232.9 tokens/s
MLPerf, mixtral-8x7b (fp8): 5,975 tokens/s
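
For scale: single-stream decoding of a dense LLM is roughly memory-bandwidth-bound, since each generated token streams every weight through the GPU once. A crude ceiling (assuming ~1 byte/parameter at fp8 and ignoring KV-cache traffic) shows why the MLPerf server numbers above are so much higher; MLPerf batches many concurrent streams:

```python
# Crude single-stream decode ceiling for a dense LLM:
# tokens/s <= memory bandwidth / bytes of weights read per token.
def decode_ceiling_tokens_per_s(params_b: float, bytes_per_param: float,
                                bw_tb_per_s: float) -> float:
    return bw_tb_per_s * 1e12 / (params_b * 1e9 * bytes_per_param)

print(decode_ceiling_tokens_per_s(70, 1.0, 5.30))  # ~75.7 (MI300X)
print(decode_ceiling_tokens_per_s(70, 1.0, 3.36))  # ~48.0 (H100 PCIe 96 GB)
```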

Additional

Form Factor: OAM module vs dual-slot PCIe card
Release Date: Dec. 6, 2023 vs March 21, 2023
Display Outputs: none on either card (both are headless compute accelerators)
