NVIDIA CMP 70HX vs NVIDIA H100 SXM5 96 GB

Comparison of the NVIDIA CMP 70HX (8 GB GDDR6X, 3,840 CUDA cores) vs the NVIDIA H100 SXM5 96 GB (96 GB HBM3, 16,896 CUDA cores).


Performance Rating

NVIDIA H100 SXM5 96 GB: 57.9 (the rating chart also places the H200, MI325X, A100, MI250, Instinct MI300X, and RX 7900 XTX for reference; no score is shown for the CMP 70HX)

Contents:

Memory · ML Performance · Compute Power · Architecture & Compatibility · ML Software Support · Clocks & Performance · Power Consumption · Rendering · Benchmarks · Additional

Memory

Memory Size

8 GB
🔥 +1,100% 96 GB

Memory Type

GDDR6X HBM3

Memory Bandwidth

608.3 GB/s
🔥 3.36 TB/s

Memory Bus Width

256-bit 5,120-bit
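
As a quick cross-check, both bandwidth figures follow from bus width × per-pin data rate. A minimal sketch in Python; the 19 Gbps (GDDR6X) and ~5.25 Gbps (HBM3) per-pin rates are inferred from the bandwidth and bus-width figures above, not stated on this page:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(256, 19.0))   # CMP 70HX:  ~608 GB/s
print(peak_bandwidth_gbs(5120, 5.25))  # H100 SXM5: ~3,360 GB/s = 3.36 TB/s
```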

ML Performance

FP16 (Half Precision)

10.71 TFLOPS
🔥 +2,399% 267.6 TFLOPS

BF16 (Brain Float)

No 🔥 Yes

TF32 (TensorFloat)

No 🔥 Yes

Compute Power

FP32 (Single Precision)

10.71 TFLOPS
🔥 +525% 66.91 TFLOPS

FP64 (Double Precision)

0.1674 TFLOPS
🔥 +19,882% 33.45 TFLOPS

CUDA Cores

3,840
🔥 +340% 16,896

RT Cores

30 No
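
The FP32 numbers above match the standard peak-throughput estimate of 2 FLOPs (one fused multiply-add) per CUDA core per clock at boost. A worked check, using the boost clocks listed under Clocks & Performance below:

```python
def peak_fp32_tflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    """Peak FP32 throughput: cores * 2 FLOPs (FMA) per clock."""
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

print(peak_fp32_tflops(3840, 1395))   # CMP 70HX:  ~10.71 TFLOPS
print(peak_fp32_tflops(16896, 1980))  # H100 SXM5: ~66.91 TFLOPS
# FP16 rates then follow from the per-architecture ratio:
# 1:1 on the CMP 70HX (10.71 TFLOPS), 4:1 on the H100 (267.6 TFLOPS).
```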

Architecture & Compatibility

GPU Architecture

Ampere Hopper

SM (Streaming Multiprocessor)

30
🔥 +340% 132

PCIe Version

PCIe 1.0 x4 PCIe 5.0 x16
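
The host link matters for feeding the GPU: PCIe 1.0 carries about 250 MB/s per lane (2.5 GT/s with 8b/10b encoding) versus about 3.94 GB/s per lane for PCIe 5.0 (32 GT/s with 128b/130b encoding). A rough per-direction comparison:

```python
# Usable bandwidth per lane in GB/s, per direction, after encoding overhead
PCIE_LANE_GBS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

print(4 * PCIE_LANE_GBS[1])   # CMP 70HX,  PCIe 1.0 x4:  ~1 GB/s
print(16 * PCIE_LANE_GBS[5])  # H100 SXM5, PCIe 5.0 x16: ~63 GB/s
```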

ML Software Support

CUDA Compute Capability

8.6
🔥 9.0
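
These are CUDA compute capabilities (8.6 for GA10x-class Ampere, 9.0 for Hopper), which frameworks query at runtime to pick kernels. A minimal sketch, assuming PyTorch with a working CUDA device:

```python
import torch

# (major, minor): (8, 6) on the CMP 70HX, (9, 0) on the H100
major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor}")
```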

Clocks & Performance

Base Clock

🔥 +1% 1,365 MHz
1,350 MHz

Boost Clock

1,395 MHz
🔥 +42% 1,980 MHz

Memory Clock

1,188 MHz
🔥 +11% 1,313 MHz
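
These are base memory clocks; the per-pin data rates implied by the bandwidth figures above are much higher. GDDR6X moves 16 bits per pin per clock (PAM4 signaling), and the H100 figure works out to 4 transfers per pin per clock:

```python
# Effective per-pin data rates implied by the listed memory clocks (Gbps)
print(1188e6 * 16 / 1e9)  # CMP 70HX GDDR6X: ~19 Gbps
print(1313e6 * 4 / 1e9)   # H100 SXM5 HBM3:  ~5.25 Gbps
```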

Power Consumption

TDP/TGP

unknown 700 W

Recommended PSU

🔥 -82% 200 W
1100 W

Power Connector

1x 12-pin 8-pin EPS

Rendering

Texture Units (TMU)

120
🔥 +340% 528

ROP

30 No

L2 Cache

4 MB
🔥 +1,150% 50 MB

Benchmarks

llama.cpp, llama 7B Q4_0

248.8 tokens/s

llama.cpp, llama-2-7b-Q4_0

280.7 tokens/s
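
These results come from llama.cpp, whose bundled llama-bench tool reports prompt-processing (pp) and text-generation (tg) tokens/s. A hypothetical invocation for reproducing such numbers (the model path and flag values are illustrative, not taken from this page):

```python
import subprocess

# llama-bench ships with llama.cpp: -m selects a GGUF model,
# -ngl sets how many layers to offload to the GPU (99 = effectively all).
result = subprocess.run(
    ["./llama-bench", "-m", "models/llama-2-7b.Q4_0.gguf", "-ngl", "99"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # table of pp/tg tokens-per-second results
```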

Additional

Slots

Dual-slot
🔥 SXM Module

Release Date

March 11, 2021 March 21, 2023

Display Outputs

No outputs
No outputs
