AMD Radeon Instinct MI325X vs NVIDIA A100 SXM4 80 GB

Comparison of the AMD Radeon Instinct MI325X with 288 GB HBM3e and 19,456 cores vs the NVIDIA A100 SXM4 80 GB with 80 GB HBM2e and 6,912 cores. In each spec row below, the first value belongs to the MI325X and the second to the A100.

Performance Rating

Relative performance rating chart: H200, MI325X, A100.

AMD Radeon Instinct MI325X

Similar GPUs: RX 7900 XTX, Instinct MI300X, MI250

NVIDIA A100 SXM4 80 GB

Contents:

Memory · ML Performance · Compute Power · Architecture & Compatibility · ML Software Support · Clocks & Performance · Power Consumption · Rendering · Benchmarks · Additional

Memory

Memory Size

288 GB 80 GB

Memory Type

HBM3e HBM2e

Memory Bandwidth

10.3 TB/s 2.04 TB/s

Memory Bus Width

8,192-bit 5,120-bit
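
As a rough illustration of what the capacity difference means for model hosting, the sketch below estimates how many fp16 parameters fit in each card's memory. It is a simplification that ignores KV cache, activations, and runtime overhead, and it uses only the capacity figures from the table above.

```python
# Rough estimate: how many fp16 (2-byte) parameters fit in each GPU's memory.
# Ignores KV cache, activations, and framework overhead, so practical limits are lower.
GB = 10**9  # capacities above are quoted in decimal gigabytes

memory_gb = {
    "MI325X": 288,  # HBM3e
    "A100": 80,     # HBM2e
}

BYTES_PER_FP16_PARAM = 2

for name, gb in memory_gb.items():
    params = gb * GB / BYTES_PER_FP16_PARAM
    print(f"{name}: ~{params / 1e9:.0f}B parameters at fp16")
```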

ML Performance

FP16 (Half Precision)

653.7 TFLOPS 77.97 TFLOPS

BF16 (Brain Float)

No 311.84 TFLOPS

TF32 (TensorFloat)

No 155.92 TFLOPS
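
One way to read the FP16 throughput and memory bandwidth figures together is the roofline ridge point: the arithmetic intensity (FLOPs per byte moved) above which a kernel is compute-bound rather than bandwidth-bound. A minimal sketch using only the numbers listed on this page:

```python
# Roofline ridge point = peak FLOP/s / memory bandwidth (bytes/s).
# Kernels below this arithmetic intensity are limited by memory bandwidth.
specs = {
    #          (peak FP16 FLOP/s, memory bandwidth in bytes/s), from this page
    "MI325X": (653.7e12, 10.3e12),
    "A100": (77.97e12, 2.04e12),
}

for name, (flops, bandwidth) in specs.items():
    ridge = flops / bandwidth
    print(f"{name}: compute-bound above ~{ridge:.0f} FLOPs/byte")
```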

Compute Power

FP32 (Single Precision)

81.72 TFLOPS 19.49 TFLOPS

FP64 (Double Precision)

81.72 TFLOPS 9.746 TFLOPS

CUDA Cores / Stream Processors

19,456 6,912

RT Cores

No No
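
The FP32 figures follow directly from the shader counts above and the boost clocks listed under Clocks & Performance: each shader can issue one fused multiply-add (two FLOPs) per cycle, so peak FP32 is roughly cores × 2 × clock. A quick sanity check against the values on this page:

```python
# Peak FP32 throughput = shading units × 2 FLOPs per FMA × boost clock.
def peak_fp32_tflops(cores: int, boost_mhz: float) -> float:
    return cores * 2 * boost_mhz * 1e6 / 1e12

print(f"MI325X: {peak_fp32_tflops(19_456, 2_100):.2f} TFLOPS")  # ~81.72, matches the table
print(f"A100:   {peak_fp32_tflops(6_912, 1_410):.2f} TFLOPS")   # ~19.49, matches the table
```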

Architecture & Compatibility

GPU Architecture

CDNA 3.0 Ampere

SM (Streaming Multiprocessor)

No 108

PCIe Version

PCIe 5.0 x16 PCIe 4.0 x16

ML Software Support

CUDA Version

No 8.0
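
In practice the A100 is programmed through CUDA and the MI325X through ROCm/HIP, but common frameworks hide most of the difference; for example, PyTorch's ROCm builds expose AMD GPUs through the same torch.cuda API. A minimal detection sketch, assuming a PyTorch build that matches the installed driver stack:

```python
import torch

# PyTorch ROCm (HIP) builds reuse the torch.cuda namespace, so the same
# check works for an A100 (CUDA build) and an MI325X (ROCm build).
if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip is not None else "CUDA"
    print(f"{backend} device: {torch.cuda.get_device_name(0)}")
else:
    print("No supported GPU detected")
```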

Clocks & Performance

Base Clock

1,000 MHz 1,275 MHz

Boost Clock

2,100 MHz 1,410 MHz

Memory Clock

2,525 MHz 1,593 MHz

Power Consumption

TDP/TGP

1,000 W 400 W (-60%)

Recommended PSU

1,400 W 800 W (-43%)

Power Connector

None None
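
Dividing the peak FP16 figures by the board power above gives a crude efficiency comparison; real perf-per-watt depends on workload, clocks, and cooling, so treat this as an upper-bound sketch based only on the numbers listed on this page.

```python
# Crude efficiency estimate: peak FP16 TFLOPS per watt of board power (TDP/TGP).
cards = {
    #          (peak FP16 TFLOPS, TDP in watts), figures from this page
    "MI325X": (653.7, 1000),
    "A100": (77.97, 400),
}

for name, (tflops, watts) in cards.items():
    print(f"{name}: {tflops / watts:.2f} TFLOPS per watt")
```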

Rendering

Texture Units (TMU)

1,216 432

ROP

No No

L2 Cache

16 MB 40 MB

Benchmarks

MLPerf, llama2-70b-99.9 (Dummy)

3,596 tokens/s

MLPerf, llama2-70b-99.9 (fp8)

1,946 tokens/s

llama.cpp, llama-2-7b-Q4_0

22.4 tokens/s

MLPerf, mixtral-8x7b (fp8)

6,975 tokens/s

Additional

Slots

OAM Module OAM Module

Release Date

Oct. 12, 2024 Nov. 16, 2020

Display Outputs

No outputs
No outputs
