The Tensor TS2-198767-DPN is a 2U rack-mountable NVIDIA GPU server for deep learning, supporting 2x Intel Xeon E5-2600 v3/v4 processors and up to 1.5 TB of DDR4 memory.
GPUs, with thousands of computational cores and up to 100x the application throughput of CPUs alone, have delivered groundbreaking performance for deep learning research. Exxact has developed the Deep Learning DevBox, featuring NVIDIA GPU technology coupled with state-of-the-art PCIe peer-to-peer interconnect technology and a full pre-installed suite of leading deep learning software, so developers can get a jump-start on deep learning research with the best tools money can buy.
- Supports 2x Intel Xeon E5-2600 v3/v4 Haswell/Broadwell EP processors (Socket R3)
- 4x Tesla V100 32 GB GPUs (15 TFLOPS of single-precision performance, 900 GB/s of memory bandwidth, and 32 GB of memory per board); also configurable with V100 16 GB, P40, P100 16 GB, P100 12 GB, or M40 GPUs
- NVIDIA DIGITS™ 3.0 software providing powerful design, training, and visualization of deep neural networks for image classification
- Pre-installed standard Ubuntu 14 with Caffe, Torch, Theano, BIDMach, cuDNN, OpenCV, CUDA Toolkit, and MXNet
- Google TensorFlow software library
- Automatic software update tool included
- A turn-key server with superior PCIe peer-to-peer topology
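As a rough illustration of what the per-board figures above mean at the system level, the peak theoretical numbers scale linearly across a fully populated 4x V100 32 GB configuration. The sketch below is purely arithmetic on the quoted specs; the function name and defaults are illustrative, not part of any vendor API, and real application throughput will fall below these peak values.

```python
# Hypothetical helper: aggregate peak specs for a multi-GPU configuration.
# Defaults are the per-board V100 32 GB figures quoted above
# (15 TFLOPS FP32, 900 GB/s memory bandwidth, 32 GB memory).
def aggregate_peak_specs(num_gpus, tflops_fp32=15.0, mem_bw_gbps=900.0, mem_gb=32.0):
    """Return (total TFLOPS, total GB/s, total GB) for num_gpus boards."""
    return (num_gpus * tflops_fp32, num_gpus * mem_bw_gbps, num_gpus * mem_gb)

tflops, bandwidth, memory = aggregate_peak_specs(4)
print(tflops, bandwidth, memory)  # 60.0 3600.0 128.0
```

For the maximum 4-GPU configuration this works out to 60 TFLOPS of aggregate single-precision compute, 3.6 TB/s of aggregate memory bandwidth, and 128 GB of total GPU memory.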
Docker Images Available on Exxact Solutions
Docker images are available for the following software:
- NVIDIA Caffe
- E5-2600 v3
- E5-2600 v4
- 8x PCI-E 3.0 x16 slots (4x at x16 or 8x at x8, supports 4x double-wide cards)
- 2x RJ45 Gigabit Ethernet LAN Ports
- 1x RJ45 Management Port