EVGA GTX 1080 ACX 3.0 vs NVIDIA H200 SXM 141 GB

Comparison of the EVGA GTX 1080 ACX 3.0 (8 GB GDDR5X, 2,560 CUDA cores) with the NVIDIA H200 SXM 141 GB (141 GB HBM3e, 16,896 CUDA cores).


Performance Rating

[Chart: relative performance ranking of the EVGA GTX 1080 ACX 3.0 and the NVIDIA H200 SXM 141 GB alongside the H200, MI325X, A100, RX 7900 XTX, Instinct MI300X, and MI250.]

Contents:

- Memory
- ML Performance
- Compute Power
- Architecture & Compatibility
- ML Software Support
- Clocks & Performance
- Power Consumption
- Rendering
- Benchmarks
- Additional

Memory

Memory Size

GTX 1080: 8 GB | H200: 141 GB
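Memory size is the first hard constraint for ML workloads: a model's weights must fit in VRAM. A rough sketch of how many parameters each card can hold, assuming fp16 weights and a 20% reserve for activations and KV cache (the reserve fraction is an assumption for illustration, not a figure from this page):

```python
def max_params_billion(vram_gb: float, bytes_per_param: float, reserve: float = 0.2) -> float:
    """Rough upper bound on model parameters (in billions) whose weights
    fit in VRAM, reserving a fraction for activations / KV cache."""
    usable_bytes = vram_gb * (1.0 - reserve) * 1e9
    return usable_bytes / bytes_per_param / 1e9

# GTX 1080 (8 GB) vs H200 (141 GB), fp16 weights at 2 bytes/param
print(f"{max_params_billion(8, 2):.1f}B")    # ~3.2B params
print(f"{max_params_billion(141, 2):.1f}B")  # ~56.4B params
```

By this estimate the GTX 1080 tops out around small 3B-class models in fp16, while the H200 holds a 56B-class model on a single module.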

Memory Type

GTX 1080: GDDR5X | H200: HBM3e

Memory Bandwidth

GTX 1080: 320.3 GB/s | H200: 4.89 TB/s

Memory Bus Width

GTX 1080: 256-bit | H200: 6,144-bit
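The bandwidth figures above follow from bus width times per-pin data rate. A minimal sketch, assuming the usual effective rates of ~10 Gbps/pin for GDDR5X and ~6.4 Gbps/pin for HBM3e (the pin rates are standard figures for these memory types, not values listed on this page):

```python
def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * pin_rate_gbps

# GTX 1080: 256-bit GDDR5X at ~10 Gbps/pin
print(bandwidth_gbs(256, 10.0))   # ~320 GB/s
# H200: 6,144-bit HBM3e at ~6.4 Gbps/pin
print(bandwidth_gbs(6144, 6.4))   # ~4915 GB/s, i.e. ~4.9 TB/s
```

The ~15x bandwidth gap matters more than raw FLOPS for memory-bound inference workloads.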

ML Performance

FP16 (Half Precision)

GTX 1080: 0.1386 TFLOPS | H200: 267.6 TFLOPS

BF16 (Brain Float)

GTX 1080: No | H200: Yes

TF32 (TensorFloat)

GTX 1080: No | H200: Yes

Compute Power

FP32 (Single Precision)

GTX 1080: 8.873 TFLOPS | H200: 66.91 TFLOPS

FP64 (Double Precision)

GTX 1080: 0.2773 TFLOPS | H200: 33.45 TFLOPS

CUDA Cores

GTX 1080: 2,560 | H200: 16,896
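The FP32 figures above are consistent with the standard peak-throughput formula: each CUDA core retires one fused multiply-add (2 FLOPs) per clock at the boost frequency. A quick check:

```python
def peak_fp32_tflops(cores: int, boost_mhz: float) -> float:
    """Peak FP32 throughput: one FMA (2 FLOPs) per CUDA core per clock."""
    return cores * boost_mhz * 1e6 * 2 / 1e12

print(round(peak_fp32_tflops(2560, 1733), 2))   # 8.87  (GTX 1080)
print(round(peak_fp32_tflops(16896, 1980), 2))  # 66.91 (H200 SXM)
```

Both results match the listed FP32 numbers, confirming the spec values are theoretical peaks rather than measured throughput.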

RT Cores

GTX 1080: None | H200: None

Architecture & Compatibility

GPU Architecture

Pascal Hopper

SM (Streaming Multiprocessor)

GTX 1080: 20 | H200: 132

PCIe Version

GTX 1080: PCIe 3.0 x16 | H200: PCIe 5.0 x16

ML Software Support

CUDA Compute Capability

GTX 1080: 6.1 | H200: 9.0
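The 6.1 / 9.0 values are CUDA compute capabilities (Pascal / Hopper), and they determine which ML datatypes the hardware exposes. A sketch of the usual capability thresholds (the feature cutoffs are well-known CUDA architecture milestones, not values from this page):

```python
def supported_dtypes(cc: tuple) -> set:
    """ML datatypes available at a given CUDA compute capability (sketch)."""
    dtypes = {"fp32", "fp64"}
    if cc >= (5, 3):   # fp16 arithmetic: available from Pascal-era hardware on
        dtypes.add("fp16")
    if cc >= (7, 0):   # tensor cores: Volta onward
        dtypes.add("fp16-tensor")
    if cc >= (8, 0):   # BF16 and TF32: Ampere onward
        dtypes |= {"bf16", "tf32"}
    if cc >= (8, 9):   # FP8 tensor cores: Ada / Hopper onward
        dtypes.add("fp8")
    return dtypes

print(sorted(supported_dtypes((6, 1))))  # GTX 1080 (Pascal)
print(sorted(supported_dtypes((9, 0))))  # H200 (Hopper)
```

This is why the GTX 1080 runs fp16 only at a reduced rate with no tensor cores, while the H200 covers the full fp8/bf16/tf32 stack used by modern training and inference frameworks.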

Clocks & Performance

Base Clock

GTX 1080: 1,607 MHz | H200: 1,500 MHz

Boost Clock

GTX 1080: 1,733 MHz | H200: 1,980 MHz

Memory Clock

GTX 1080: 1,251 MHz | H200: 1,593 MHz

Power Consumption

TDP/TGP

GTX 1080: 180 W (74% lower) | H200: 700 W

Recommended PSU

GTX 1080: 450 W (59% lower) | H200: 1,100 W
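Raw TDP favors the GTX 1080, but efficiency is what matters at scale: throughput delivered per watt. Using the FP32 peaks and TDPs listed above:

```python
def gflops_per_watt(tflops: float, tdp_watts: float) -> float:
    """FP32 energy efficiency at board TDP."""
    return tflops * 1000 / tdp_watts

print(round(gflops_per_watt(8.873, 180), 1))  # ~49.3 (GTX 1080)
print(round(gflops_per_watt(66.91, 700), 1))  # ~95.6 (H200)
```

Despite drawing nearly 4x the power, the H200 delivers roughly twice the FP32 work per watt, and the gap widens further for the low-precision formats the GTX 1080 lacks.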

Power Connector

GTX 1080: 1x 8-pin | H200: 8-pin EPS

Rendering

Texture Units (TMU)

GTX 1080: 160 | H200: 528

ROP

GTX 1080: 64 | H200: None

L2 Cache

GTX 1080: 2 MB | H200: 50 MB

Benchmarks

MLPerf inference throughput (one result listed per test):

- llama2-70b-99.9 (UNSET): 3,534 tokens/s
- llama2-70b-99.9 (fp16): 3,553 tokens/s
- llama2-70b-99.9 (fp8): 2,444 tokens/s
- llama3.1-405b (fp16): 40.8 tokens/s
- llama3.1-405b (fp8): 25.3 tokens/s
- llama3.1-8b (fp8): 5,161 tokens/s
- deepseek-r1 (fp8): 1,113 tokens/s
- mixtral-8x7b (fp8): 7,132 tokens/s
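Throughput figures like these translate directly into wall-clock time for a batch workload. A minimal sketch, using the mixtral-8x7b (fp8) rate of 7,132 tokens/s from the table above (the 1M-token workload size is an arbitrary example):

```python
def generation_seconds(num_tokens: int, tokens_per_s: float) -> float:
    """Wall-clock seconds to emit num_tokens at a sustained decode rate."""
    return num_tokens / tokens_per_s

# 1M tokens at the mixtral-8x7b (fp8) rate: a bit over two minutes
print(round(generation_seconds(1_000_000, 7132), 1))  # ~140.2 s
```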

Additional

Slots

GTX 1080: Dual-slot | H200: SXM module

Release Date

GTX 1080: May 27, 2016 | H200: Nov. 18, 2024

Display Outputs

GTX 1080: 1x DVI, 1x HDMI 2.0, 3x DisplayPort 1.4a | H200: No outputs
