Components

NVIDIA RTX PRO Blackwell GPUs - Up to 96GB of GDDR7 Memory

March 27, 2025
7 min read

Introduction

At GTC 2025, NVIDIA unveiled the NVIDIA RTX PRO Blackwell Series GPUs, designed to accelerate AI workflows, engineering simulation, life science research, and 3D design. These GPUs feature the new Blackwell architecture for high-performance compute, large memory capacities, and enterprise support. Exxact will support NVIDIA RTX PRO Blackwell in our workstations and servers, and the cards will be available in our configurator.

Accelerate Your Unique Workloads with the Latest Hardware

We extensively stock the latest CPUs and most powerful GPUs; accelerate your workloads with a workstation optimized to your deployment, budget, and desired performance.

Configure Now

NVIDIA Blackwell Architecture Lineup and Key Features

The RTX PRO Blackwell lineup includes:

  • NVIDIA RTX PRO 6000 Blackwell (Server Edition, Workstation Edition, and Max-Q Workstation Edition)
  • NVIDIA RTX PRO 5000 Blackwell
  • NVIDIA RTX PRO 4500 Blackwell
  • NVIDIA RTX PRO 4000 Blackwell

This new generation of RTX PRO Blackwell introduces several architectural changes:

  • New Streaming Multiprocessors – Offer up to 1.5x faster throughput and new neural shaders that integrate AI into programmable shaders.
  • 5th-Gen Tensor Cores – Deliver up to 4,000 AI TOPS, over 2x the performance of the Ada Lovelace generation, with FP4 support for accelerating AI-powered rendering, generative AI, and large AI workflows.
  • 4th-Gen RT Cores – Enable 2x faster real-time ray tracing than the last generation for smoother, high-fidelity, photoreal, and accurate rendering of complex 3D environments.
  • Up to 96GB of GDDR7 Memory – Doubles the maximum memory of the RTX 6000 Ada, keeping complex AI models, simulations, and 3D environments in local memory for better efficiency and speed and unlocking new capabilities.
  • PCIe 5.0 Support – Doubles the host-interface bandwidth over PCIe 4.0, improving data transfer speeds between CPU and GPU memory and reducing the transfer bottleneck for data-intensive tasks such as AI and simulation workflows (a rough transfer-time sketch follows this list).
  • DisplayPort 2.1 – Supports 4K at 480Hz and 8K at 165Hz, providing ultra-high resolution for professional display setups.
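
To put the PCIe 5.0 bullet in perspective, here is a minimal Python sketch that estimates how long it takes to move a large payload from host memory to the GPU over PCIe 4.0 versus PCIe 5.0. The link rates are approximate theoretical per-direction maximums, and the efficiency factor and 96GB example payload are illustrative assumptions, not measured figures.

```python
# Rough estimate of host-to-GPU transfer time over PCIe, illustrating why the
# PCIe 5.0 bandwidth increase matters for data-heavy workflows. Link rates are
# approximate theoretical per-direction maximums; real transfers achieve less,
# so an efficiency factor (an illustrative assumption) is applied.

PCIE_GBPS = {"PCIe 4.0 x16": 32.0, "PCIe 5.0 x16": 64.0}  # GB/s, approximate

def transfer_seconds(payload_gb: float, link: str, efficiency: float = 0.85) -> float:
    """Time to move payload_gb over the given link at the assumed efficiency."""
    return payload_gb / (PCIE_GBPS[link] * efficiency)

if __name__ == "__main__":
    payload_gb = 96.0  # e.g. filling a 96GB card with model weights or a large mesh
    for link in PCIE_GBPS:
        print(f"{link}: ~{transfer_seconds(payload_gb, link):.1f} s to move {payload_gb:.0f} GB")
```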

Here’s the specification table for all the RTX PRO Blackwell GPUs:

| Specification | RTX PRO 6000 Blackwell Server Edition | RTX PRO 6000 Blackwell Workstation Edition | RTX PRO 6000 Blackwell Max-Q Workstation Edition | RTX PRO 5000 Blackwell | RTX PRO 4500 Blackwell | RTX PRO 4000 Blackwell |
| --- | --- | --- | --- | --- | --- | --- |
| Host Interface | PCIe 5.0 x16 | PCIe 5.0 x16 | PCIe 5.0 x16 | PCIe 5.0 x16 | PCIe 5.0 x16 | PCIe 5.0 x16 |
| Standard Memory | 96GB GDDR7 | 96GB GDDR7 | 96GB GDDR7 | 48GB GDDR7 | 32GB GDDR7 | 24GB GDDR7 |
| Memory Bandwidth | 1.6 TB/s | 1.8 TB/s | 1.8 TB/s | 1.34 TB/s | 896 GB/s | 672 GB/s |
| CUDA Cores | 24,064 | 24,064 | 24,064 | 14,080 | 10,496 | 8,960 |
| Tensor Cores | 752 | 752 | 752 | 440 | 328 | 280 |
| RT Cores | 188 | 188 | 188 | 110 | 82 | 70 |
| Single Precision FP32 (TFLOPS) | 117.3 | 125 | 110 | 73.7 | 54.9 | 46.9 |
| Double Precision FP64 (TFLOPS) | 1.83 | 1.97 | 1.72 | 1.15 | 0.86 | 0.73 |
| Power | 600W | 600W | 300W | 300W | 200W | 140W |

While it carries the longest product names NVIDIA has ever used, RTX PRO Blackwell brings a host of new features that accelerate professional workflows, from AI training and deployment to simulation and scientific research. A few considerations for this new generation of GPUs:

  • RTX PRO 6000 Blackwell comes in three models with distinct designs:
    • RTX PRO 6000 Blackwell Workstation Edition: This edition features the dual flow-through fan design identical to the Founders Edition RTX 5090. It has a max power draw of 600W, making it ideal for multi-GPU workstations built for maximum GPU memory and performance (a rough power-budget sketch follows this list). Because of its 10.5” height, this card will not fit in the standard double-wide slots found in 2U compute servers.
    • RTX PRO 6000 Blackwell Max-Q Workstation Edition: This edition features the glossy-black and gold active cooler design familiar from recent generations of professional RTX GPUs (RTX A6000 and RTX 6000 Ada). Its power draw is capped at 300W, and it fits in workstations and servers using the standard 9.5”, dual-slot form factor.
    • RTX PRO 6000 Blackwell Server Edition: This edition is the successor to the NVIDIA L40S, carrying over the all-gold passive cooler design. Its power draw is configurable up to 600W, and it is intended for servers only. The Server Edition delivers a bit more performance than the Max-Q Workstation Edition (300W), but not quite as much as the Workstation Edition (600W).
  • RTX PRO 5000, 4500, and 4000 Blackwell: The rest of the RTX PRO Blackwell family features the familiar all-black active cooler design but with an uptick in VRAM. For workflows that don’t require the utmost performance but do need additional GPU memory, these GPUs are excellent choices with 48GB, 32GB, and 24GB of GDDR7 memory, respectively.
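
The power-draw and form-factor differences above largely determine how many cards a chassis can host. The following is a minimal Python sketch of that sizing exercise: the GPU board-power values come from the specification table, while the CPU wattage, platform overhead, and 80% PSU headroom rule are illustrative assumptions rather than Exxact configuration guidance.

```python
# Rough power-budget check for a multi-GPU RTX PRO Blackwell workstation.
# GPU board-power values come from the specification table above; the CPU and
# platform wattages and the 80% headroom rule are illustrative assumptions.

GPU_POWER_W = {
    "RTX PRO 6000 Blackwell Workstation Edition": 600,
    "RTX PRO 6000 Blackwell Max-Q Workstation Edition": 300,
    "RTX PRO 5000 Blackwell": 300,
    "RTX PRO 4500 Blackwell": 200,
    "RTX PRO 4000 Blackwell": 140,
}

def system_power_w(gpu_model: str, gpu_count: int,
                   cpu_w: int = 350, platform_w: int = 150) -> int:
    """Total sustained draw: GPUs + CPU + memory/storage/fans (assumed values)."""
    return GPU_POWER_W[gpu_model] * gpu_count + cpu_w + platform_w

def required_psu_w(total_w: float, headroom: float = 0.80) -> float:
    """Size the PSU so sustained draw stays under roughly 80% of its rating."""
    return total_w / headroom

if __name__ == "__main__":
    for model, count in [("RTX PRO 6000 Blackwell Workstation Edition", 4),
                         ("RTX PRO 6000 Blackwell Max-Q Workstation Edition", 4)]:
        total = system_power_w(model, count)
        print(f"{count}x {model}: ~{total} W sustained, "
              f"PSU of roughly {required_psu_w(total):.0f} W or more recommended")
```

Running the sketch shows why the 300W Max-Q and Server Editions exist: four full-power Workstation Edition cards push a single chassis well past common workstation PSU ratings, while four Max-Q cards stay within reach of a high-end power supply.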

NVIDIA RTX PRO 6000 Blackwell GPUs: Workstation Edition, Max-Q, Server Edition

Benefits of RTX PRO Blackwell GPUs

The biggest upgrade in the NVIDIA RTX PRO Blackwell lineup is the additional memory. As workloads trend toward larger and larger model sizes, the RTX PRO 6000 Blackwell's 96GB of GDDR7 memory is a major advantage over the previous-generation RTX 6000 Ada's 48GB. Here are some workloads that benefit greatly from 96GB of memory:

  • Running Large AI Models Locally: Businesses that deploy LLMs and agentic AI can now run larger-parameter foundation models on fewer GPUs. Models that previously ran on dual RTX 6000 Ada cards can be consolidated onto a single RTX PRO 6000 Blackwell, and models that require 48GB of memory can run on the more cost-effective RTX PRO 5000 (see the memory-fit sketch after this list).
  • High Element Count Engineering Simulation: Last year, Exxact partnered with numerous engineering software companies to benchmark CFD workflows from Ansys and Siemens. Some models could not run due to limited memory, requiring an additional GPU or a step up to data center GPUs that are only supported in servers. The RTX PRO 6000's 96GB alleviates the model-size issue in many simulation workflows, and a 4x GPU workstation can provide up to 384GB of combined VRAM. Additionally, GDDR7's 1.8 TB/s of memory bandwidth improves multi-GPU performance scaling.
  • High Cell Count Molecular Dynamics: Some molecular dynamics suites do not support multi-GPU execution of a single simulation, which can limit the cell count for certain use cases. With additional memory, researchers can evaluate even larger molecular models, even on the RTX PRO 5000 Blackwell, now with 48GB of GDDR7.
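
As a rough illustration of the consolidation argument above, here is a minimal Python sketch that estimates the memory needed just for a model's weights at common precisions and compares it with single-card VRAM. The VRAM figures come from the table above; the example parameter counts and the 10% runtime-overhead factor are illustrative assumptions, and real deployments also need room for the KV cache, activations, and framework overhead.

```python
# Back-of-the-envelope check: do a model's weights fit on a single GPU?
# VRAM capacities come from the table above; the example parameter counts and
# the 10% runtime-overhead factor are illustrative assumptions.

BYTES_PER_PARAM = {"FP16/BF16": 2.0, "FP8": 1.0, "FP4": 0.5}
GPU_VRAM_GB = {
    "RTX PRO 6000 Blackwell": 96,
    "RTX PRO 5000 Blackwell": 48,
    "RTX 6000 Ada (previous gen)": 48,
}

def weight_memory_gb(params_billion: float, precision: str,
                     overhead: float = 0.10) -> float:
    """Estimate GB needed for the weights alone, plus a rough runtime overhead."""
    bytes_total = params_billion * 1e9 * BYTES_PER_PARAM[precision]
    return bytes_total * (1 + overhead) / 1e9  # using 1 GB = 1e9 bytes

if __name__ == "__main__":
    for params in (34, 70):  # example model sizes in billions of parameters
        for precision in BYTES_PER_PARAM:
            need = weight_memory_gb(params, precision)
            fits = [name for name, vram in GPU_VRAM_GB.items() if need <= vram]
            print(f"{params}B @ {precision:9}: ~{need:5.1f} GB -> "
                  f"{', '.join(fits) if fits else 'needs multiple GPUs'}")
```

Under these assumptions, a 34B-parameter model at FP16 that previously needed two 48GB cards fits comfortably on one 96GB card, and a 70B-parameter model at FP8 also fits on a single RTX PRO 6000 Blackwell.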

Exxact will support the new NVIDIA RTX PRO Blackwell series of GPUs in our Workstations and Servers. All of our systems are custom-configurable to perform optimally for your unique workload. Contact our team for more information on how we can tailor the most performant solution so you can continue your business and research workflows.

Facilitate AI Training & Deployment with an Exxact GPU Workstation

With the latest CPUs and most powerful GPUs available, accelerate your deep learning and AI projects with a system optimized to your deployment, budget, and desired performance!

Configure Now