The World's First AI Supercomputing Data Center GPU
Artificial intelligence for self-driving cars. Predicting our climate's future. A new drug to treat cancer. Some of the world's most important challenges need to be solved today, but require tremendous amounts of computing to become reality. Today's data centers rely on many interconnected commodity compute nodes, limiting the performance needed to drive important High Performance Computing (HPC) and hyperscale workloads.
NVIDIA® Tesla® P100 GPU accelerators are the world's first AI supercomputing data center GPUs. They tap into NVIDIA Pascal™ GPU architecture to deliver a unified platform for accelerating both HPC and AI. With higher performance and fewer, lightning-fast nodes, Tesla P100 enables data centers to dramatically increase throughput while also saving money.
With over 550 HPC applications accelerated, including 15 of the top 15, as well as all deep learning frameworks, every HPC customer can deploy accelerators in their data centers.
Infinite Compute Power For The Modern Data Center
The NVIDIA Tesla P100 is the most advanced data center accelerator ever built, leveraging the groundbreaking NVIDIA Pascal™ GPU architecture to deliver the world's fastest compute node. It's powered by four innovative technologies with huge jumps in performance for HPC and deep learning workloads.
The Tesla P100 also features NVIDIA NVLink™ technology that enables superior strong-scaling performance for HPC and hyperscale applications. Up to eight Tesla P100 GPUs interconnected in a single node can deliver the performance of racks of commodity CPU servers.
NVIDIA Tesla P100 For Strong-scale HPC
Tesla P100 with NVIDIA NVLink technology enables lightning-fast nodes to substantially accelerate time-to-solution for strong-scale applications. A server node with NVLink can interconnect up to eight Tesla P100s at 5X the bandwidth of PCIe. It's designed to help solve the world's most important challenges that have infinite compute needs in HPC and deep learning.
NVIDIA Tesla P100 For Mixed-Workload HPC
Tesla P100 for PCIe enables mixed-workload HPC data centers to realize a dramatic jump in throughput while saving money. For example, a single GPU-accelerated node powered by four Tesla P100s interconnected with PCIe replaces up to 32 commodity CPU nodes for a variety of applications. Completing all the jobs with far fewer powerful nodes means that customers can save up to 70% in overall data center costs.
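The consolidation arithmetic above can be sketched as a quick back-of-envelope calculation. The per-node dollar figures below are purely illustrative assumptions, not NVIDIA pricing; only the consolidation ratio (one four-GPU node replacing up to 32 commodity CPU nodes) comes from the text:

```python
# Back-of-envelope data center consolidation estimate.
# Assumption: all cost figures are hypothetical placeholders chosen
# only to illustrate how a ~70% savings figure could arise.

def consolidation_savings(cpu_nodes_replaced: int,
                          cpu_node_cost: float,
                          gpu_node_cost: float) -> float:
    """Fractional cost savings from replacing a set of commodity
    CPU nodes with a single GPU-accelerated node."""
    cpu_total = cpu_nodes_replaced * cpu_node_cost
    return 1.0 - gpu_node_cost / cpu_total

# Hypothetical figures: 32 CPU nodes at $10k each versus one
# four-P100 node at $96k (hardware, power, and space combined).
savings = consolidation_savings(32, 10_000, 96_000)
print(f"Estimated savings: {savings:.0%}")  # prints "Estimated savings: 70%"
```

The exact savings depend on real acquisition, power, and facility costs, so the function is best read as a template for running the comparison with a site's own numbers.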