Highlights:
  • Rack Height: 10U
  • Processor Supported: 2x Intel Xeon Scalable family
  • Drive Bays: 16x 2.5" Hot-Swap NVMe
  • Supports 16x NVIDIA Tesla V100 32 GB SXM3 GPUs
Contact sales for pricing

Based on the NVIDIA HGX-2 platform, the TensorEX TS4-144580094-DPN is accelerated by 16 NVIDIA® Tesla® V100 GPUs and NVIDIA NVSwitch™. It has the unprecedented compute power, bandwidth, and memory topology to train massive models, analyze datasets, and solve simulations faster and more efficiently. The 16 Tesla V100 GPUs work as a single unified 2-petaFLOP accelerator with half a terabyte (TB) of total GPU memory, allowing it to handle the most computationally intensive workloads.

Key Features
  • 16x NVIDIA Tesla V100 SXM3
  • 81,920 NVIDIA CUDA Cores
  • 10,240 NVIDIA Tensor Cores
  • 0.5TB Total GPU Memory
  • NVSwitch powered by NVLink, 2.4 TB/sec aggregate bandwidth
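
As a rough sanity check (not vendor-provided material), these aggregate figures follow directly from the per-GPU Tesla V100 specifications of 5,120 CUDA cores, 640 Tensor Cores, 32 GB HBM2, and roughly 125 TFLOPS of mixed-precision Tensor Core performance:

```python
# Rough arithmetic check of the aggregate numbers above, using assumed
# per-GPU Tesla V100 figures from NVIDIA's public V100 datasheet.
NUM_GPUS = 16
CUDA_CORES_PER_GPU = 5_120          # per V100
TENSOR_CORES_PER_GPU = 640          # per V100
HBM2_GB_PER_GPU = 32                # 32 GB SXM3 variant
TENSOR_TFLOPS_PER_GPU = 125         # approx. mixed-precision Tensor Core perf

print(NUM_GPUS * CUDA_CORES_PER_GPU)     # 81920 CUDA cores
print(NUM_GPUS * TENSOR_CORES_PER_GPU)   # 10240 Tensor Cores
print(NUM_GPUS * HBM2_GB_PER_GPU)        # 512 GB = 0.5 TB total GPU memory
print(NUM_GPUS * TENSOR_TFLOPS_PER_GPU)  # 2000 TFLOPS ≈ 2 petaFLOPS
```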

NVIDIA NVSwitch for Full Bandwidth Computing

NVSwitch enables every GPU to communicate with every other GPU at a full bandwidth of 2.4 TB/sec to solve the largest AI and HPC problems. Every GPU has full access to 0.5 TB of aggregate HBM2 memory at a bandwidth of 16 TB/sec to handle the most massive datasets. By enabling a unified server node, NVSwitch dramatically accelerates complex AI deep learning, AI machine learning, and HPC applications.
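
The sketch below is an illustration only (it assumes PyTorch with the NCCL backend, which routes inter-GPU traffic over NVLink/NVSwitch on this class of system): an all-reduce across all 16 GPUs, the kind of collective that benefits from the full-bandwidth fabric.

```python
# Minimal sketch, assuming PyTorch + NCCL: an all-reduce across all 16 GPUs.
# Launch with: torchrun --nproc_per_node=16 allreduce_check.py
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")   # NCCL carries traffic over NVLink/NVSwitch
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    # Each GPU contributes a 1 GiB tensor filled with its own rank.
    x = torch.full((256 * 1024 * 1024,), float(rank), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    # After the reduction every GPU holds the same sum of ranks 0..15.
    if rank == 0:
        print("all-reduce result:", x[0].item())  # expect 120.0
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```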

GPUs have provided groundbreaking performance to accelerate deep learning research, with thousands of computational cores and up to 100x application throughput compared to CPUs alone. Exxact has developed the Deep Learning DevBox, featuring NVIDIA GPU technology coupled with state-of-the-art NVSwitch-powered NVLink GPU-to-GPU interconnect technology and a full pre-installed suite of leading deep learning software, so developers can get a jump-start on deep learning research with the best tools that money can buy.

Deep Learning Software Stack

Ubuntu Deep Learning Software Stack

NVIDIA CUDA
cuDNN
Theano
Torch
Keras
Docker
NCCL
Caffe
BIDMach
OpenCV
RAPIDS
Caffe2
Anaconda
TensorFlow
MXNet
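
A hypothetical post-install check (assuming the TensorFlow and PyTorch builds listed above are present) that the frameworks can see all 16 GPUs:

```python
# Hypothetical sanity check that the pre-installed frameworks see all 16 V100s.
import tensorflow as tf
import torch

print("TensorFlow GPUs:", len(tf.config.list_physical_devices("GPU")))  # expect 16
print("PyTorch GPUs:", torch.cuda.device_count())                       # expect 16
print("Device 0:", torch.cuda.get_device_name(0))                       # e.g. Tesla V100-SXM3-32GB
```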

Exxact Docker NGC Ready Deep Learning Stack

Includes configuration and testing for the following NGC Ready Docker containers:

NVIDIA CUDA
NVIDIA Digits
Caffe
PyTorch
TensorFlow
RAPIDS
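
As one illustrative sketch (the image tag below is a placeholder, not a tested configuration), an NGC container can be launched with every GPU exposed using the Docker SDK for Python:

```python
# Hypothetical sketch using docker-py to run an NGC TensorFlow container
# with all GPUs exposed. The image tag is a placeholder; match it to the
# installed driver version.
import docker

client = docker.from_env()
logs = client.containers.run(
    "nvcr.io/nvidia/tensorflow:21.02-tf2-py3",   # example NGC image:tag (placeholder)
    "nvidia-smi -L",                             # list the GPUs visible inside the container
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(logs.decode())   # expect 16 lines, one per Tesla V100
```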

Additional Docker Images:

Portainer

Optional Software

SINGA
Singularity
Microsoft Cognitive Toolkit

Processor & Chipset
  • Number of Processors Supported: 2
  • Processor Socket: LGA 3647
  • Processor Type: Xeon
  • Processor Supported: Bronze 31XX, Bronze 32XX, Silver 41XX, Silver 42XX, Gold 51XX, Gold 52XX, Gold 61XX, Gold 62XX, Platinum 81XX, Platinum 82XX
  • Thermal Design Power (TDP): 205 W
  • GPU: 16x NVIDIA Tesla V100 32 GB Volta SXM3
  • Chipset Manufacturer: Intel
  • Chipset Model: C621

Memory
  • Maximum Memory: 3 TB
  • Memory Technology: DDR4 SDRAM, DDR4 NVDIMM (Intel Optane DCPMM)
  • Memory Standard: DDR4-2933/PC4-23400
  • Number of Total Memory Slots: 24

Controllers
  • SATA3: Via Intel C621 chipset

Display & Graphics
  • Graphics Controller Manufacturer: ASPEED
  • Graphics Controller Model: AST2500 BMC

Network & Communication
  • Ethernet Technology: 10GBASE-T

I/O Expansions
  • PCI Express: 16x PCI-E 3.0 x16 for RDMA via IB EDR; 2x PCI-E 3.0 x16 on motherboard; 2x PCI-E 3.0 x4 M.2 (2280, 22110)

Drive Bays
  • Hot-swap: 16x 2.5" NVMe; 6x 2.5" SATA3

Interfaces/Ports
  • Total Number of USB 3.0 Ports: 2 (front)
  • Number of SATA Interfaces: 6
  • Number of NVMe Interfaces: 16
  • LAN: 2x RJ45 10GBASE-T Ethernet LAN ports; 1x RJ45 dedicated IPMI port
  • Onboard Video: 1x VGA connector
  • COM Port: 1x COM

Power Description
  • Number of Power Supplies: 6
  • Redundant Power Supplies: Yes
  • Maximum Power Supply Wattage: 3000 W
  • Certification: 80 Plus Titanium

Physical Characteristics
  • Color: Black
  • Form Factor: Rack-mountable
  • Rack Height: 10U
  • Height: 17.2"
  • Width: 17.8"
  • Depth: 27.75"