NVIDIA L40-2Q vs NVIDIA RTX PRO 6000 Blackwell Server

Comparison of the NVIDIA L40-2Q (2 GB GDDR6, 18,176 cores) and the NVIDIA RTX PRO 6000 Blackwell Server (96 GB GDDR7, 24,064 cores). Note that "L40-2Q" denotes a 2 GB vGPU profile of the L40; the core counts and compute figures below are those of the full L40 board.


Performance Rating

NVIDIA L40-2Q: 22.5
NVIDIA RTX PRO 6000 Blackwell Server: (score not listed)

Reference GPUs shown on the chart: H200, MI325X, A100, MI250, Instinct MI300X, RX 7900 XTX.

Contents:

Memory, ML Performance, Compute Power, Architecture & Compatibility, ML Software Support, Clocks & Performance, Power Consumption, Rendering, Benchmarks, Additional

Memory

Memory Size

2 GB
96 GB

Memory Type

GDDR6 GDDR7

Memory Bandwidth

864.0 GB/s 1.79 TB/s

Memory Bus Width

384-bit 512-bit
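The listed bandwidth figures follow directly from bus width and memory data rate. A minimal sketch, assuming effective data rates of 18 Gbps for the L40's GDDR6 and 28 Gbps for the RTX PRO 6000's GDDR7 (back-derived from the bandwidth figures above, not quoted from NVIDIA):

```python
# Theoretical memory bandwidth: (bus width in bits / 8) bytes per transfer,
# multiplied by the effective data rate in Gbps, gives GB/s.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(384, 18))  # L40: 864.0 GB/s
print(bandwidth_gb_s(512, 28))  # RTX PRO 6000: 1792.0 GB/s, i.e. ~1.79 TB/s
```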

ML Performance

FP16 (Half Precision)

90.52 TFLOPS
126.0 TFLOPS

BF16 (Brain Float)

No No

TF32 (TensorFloat)

No No

Compute Power

FP32 (Single Precision)

90.52 TFLOPS
126.0 TFLOPS

FP64 (Double Precision)

No 1.968 TFLOPS

CUDA Cores

18,176
24,064

RT Cores

142
188
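The FP32 numbers above are consistent with the usual peak-throughput rule of thumb: CUDA cores × 2 FLOPs per cycle (one fused multiply-add) × boost clock. A sketch using the clocks listed further down (this is a back-of-the-envelope formula, not an NVIDIA-published derivation):

```python
# Peak FP32 throughput: each CUDA core retires one FMA (2 FLOPs) per cycle.
def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    return cuda_cores * 2 * boost_clock_ghz / 1000  # GFLOPS -> TFLOPS

print(peak_fp32_tflops(18_176, 2.490))  # L40: ~90.5 TFLOPS
print(peak_fp32_tflops(24_064, 2.617))  # RTX PRO 6000: ~126.0 TFLOPS

# Both parts also work out to 128 CUDA cores per SM:
# 18,176 / 142 = 128 and 24,064 / 188 = 128.
```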

Architecture & Compatibility

GPU Architecture

Ada Lovelace Blackwell 2.0

SM (Streaming Multiprocessor)

142
188

PCIe Version

PCIe 4.0 x16 PCIe 5.0 x16

ML Software Support

CUDA Compute Capability

8.9
12.0

Clocks & Performance

Base Clock

735 MHz
1,590 MHz

Boost Clock

2,490 MHz
2,617 MHz

Memory Clock

2,250 MHz (29% higher)
1,750 MHz

Power Consumption

TDP/TGP

300 W (50% lower)
600 W

Recommended PSU

700 W (30% lower)
1000 W

Power Connector

1x 16-pin 1x 16-pin
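Halving the board power while giving up roughly 28% of peak FP32 puts the L40 ahead on paper efficiency. A quick check from the spec-sheet numbers above (TDP-based, so only a rough proxy; real performance per watt depends heavily on workload):

```python
# Peak FP32 per watt of rated board power (TDP), from the figures above.
l40_tflops_per_w = 90.52 / 300       # ~0.30 TFLOPS/W
rtx_pro_tflops_per_w = 126.0 / 600   # 0.21 TFLOPS/W
print(l40_tflops_per_w, rtx_pro_tflops_per_w)
```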

Rendering

Texture Units (TMU)

568
752

ROP

142
188

L2 Cache

96 MB
128 MB

Benchmarks

MLPerf, llama2-70b-99.9 (fp4)

3,250 tokens/s

MLPerf, llama3.1-8b (fp4)

5,758 tokens/s

Geekbench AI, FP16

53,322 points

Geekbench AI, INT8

28,264 points

Geekbench AI, FP32

37,299 points

MLPerf, mixtral-8x7b (fp8)

3,767 tokens/s

Additional

Slots

Dual-slot Dual-slot

Release Date

Oct. 13, 2022 March 18, 2025

Display Outputs

4x DisplayPort 1.4a
4x DisplayPort 2.1b
