NVIDIA H100 SXM5 96 GB vs PNY Quadro P1000 V2

Comparison of NVIDIA H100 SXM5 96 GB with 96 GB HBM3 and 16,896 cores vs PNY Quadro P1000 V2 with 4 GB GDDR5 and 512 cores.


Performance Rating

NVIDIA H100 SXM5 96 GB

57.9

PNY Quadro P1000 V2
Contents:

Memory · ML Performance · Compute Power · Architecture & Compatibility · ML Software Support · Clocks & Performance · Power Consumption · Rendering · Benchmarks · Additional

Memory

Memory Size

🔥 +2,300% 96 GB
4 GB

Memory Type

HBM3 GDDR5

Memory Bandwidth

🔥 3.36 TB/s
96.13 GB/s

Memory Bus Width

5,120 bit 128 bit
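The bandwidth figures above follow from bus width and per-pin data rate. As a rough sanity check (the effective data rates here are inferred from the listed memory clocks, not stated on this page: ~5.25 Gbps/pin for the H100's HBM3, and 4× the 1,502 MHz clock for quad-pumped GDDR5):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = (bus width in bytes) * per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

# H100 SXM5: 5,120-bit bus at ~5.25 Gbps/pin -> ~3,360 GB/s (3.36 TB/s)
h100_bw = bandwidth_gbs(5120, 5.25)

# Quadro P1000 V2: 128-bit bus, GDDR5 at ~6.008 Gbps/pin -> ~96.1 GB/s
p1000_bw = bandwidth_gbs(128, 6.008)
```

Both results line up with the 3.36 TB/s and 96.13 GB/s figures listed above.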

ML Performance

FP16 (Half Precision)

🔥 +1,199,900% 267.6 TFLOPS
0.0223 TFLOPS

BF16 (Brain Float)

Yes (Tensor Cores) No

TF32 (TensorFloat)

Yes (Tensor Cores) No

Compute Power

FP32 (Single Precision)

🔥 +4,594% 66.91 TFLOPS
1.4254 TFLOPS

FP64 (Double Precision)

🔥 +75,069% 33.45 TFLOPS
0.0445 TFLOPS

CUDA Cores

🔥 +3,200% 16,896
512
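The FP32 figures above are consistent with the standard peak-throughput estimate: CUDA cores × 2 FLOPs per clock (one fused multiply-add) × boost clock. A quick check using the boost clocks listed further down:

```python
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    """Theoretical peak FP32 TFLOPS = cores * 2 (FMA) * clock in GHz / 1000."""
    return cuda_cores * 2 * boost_ghz / 1000

# H100 SXM5: 16,896 cores at 1.980 GHz boost -> ~66.91 TFLOPS
h100_fp32 = fp32_tflops(16896, 1.980)

# Quadro P1000 V2: 512 cores at 1.392 GHz boost -> ~1.425 TFLOPS
p1000_fp32 = fp32_tflops(512, 1.392)
```

Both match the listed 66.91 and 1.4254 TFLOPS values.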

RT Cores

No No

Architecture & Compatibility

GPU Architecture

Hopper Pascal

SM (Streaming Multiprocessor)

🔥 +3,200% 132
4

PCIe Version

PCIe 5.0 x16 PCIe 3.0 x16

ML Software Support

CUDA Compute Capability

🔥 9.0
6.1

Clocks & Performance

Base Clock

🔥 1,350 MHz
1,354 MHz

Boost Clock

🔥 +42% 1,980 MHz
1,392 MHz

Memory Clock

🔥 1,313 MHz
1,502 MHz

Power Consumption

TDP/TGP

700 W
🔥 -93% 47 W

Recommended PSU

1100 W
🔥 -82% 200 W

Power Connector

8-pin EPS None

Rendering

Texture Units (TMU)

🔥 +1,550% 528
32

ROP

No No

L2 Cache

🔥 50 MB
1 MB

Benchmarks

llama.cpp, llama 7B Q4_0

🔥 +1,678% 248.8 tokens/s
14.0 tokens/s

llama.cpp, llama-2-7b-Q4_0

🔥 +1,907% 280.7 tokens/s
14.0 tokens/s
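Throughput numbers like these are typically produced with llama.cpp's bundled `llama-bench` tool, which reports tokens/s for prompt processing and token generation separately. A minimal sketch (the model path is a placeholder for a locally downloaded Q4_0 GGUF file):

```shell
# Benchmark a 7B Q4_0 model with llama.cpp's llama-bench:
# -p 512 measures prompt-processing throughput over a 512-token prompt,
# -n 128 measures generation throughput over 128 tokens.
./llama-bench -m models/llama-2-7b.Q4_0.gguf -p 512 -n 128
```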

Additional

Slots

🔥 SXM Module
Single-slot

Release Date

March 21, 2023 February 7, 2017

Display Outputs

No outputs
4x mini-DisplayPort 1.4a
