The Tensor TS1-185917-DPN is a 1U rack-mountable NVIDIA GPU server for deep learning, supporting 2x Intel Xeon E5-2600 v3/v4 processors, up to 1.5 TB of DDR4 memory, and four Tesla P100 Pascal GPUs (SXM2) with up to 80 GB/s of NVLink GPU-to-GPU interconnect bandwidth.
GPUs have delivered groundbreaking acceleration for deep learning research, with thousands of computational cores and up to 100x application throughput compared to CPUs alone. Exxact has developed the Deep Learning DevBox, featuring NVIDIA GPU technology coupled with state-of-the-art NVLink GPU-to-GPU interconnect and a full pre-installed suite of leading deep learning software, so developers can get a jump-start on deep learning research with the best tools money can buy.
- Supports 2x Intel Xeon E5-2600 v3/v4 Haswell/Broadwell EP processors (Socket R3)
- 4x Tesla P100 SXM2 GPUs (10.6 TFLOPS single-precision, 732 GB/s memory bandwidth, and 16 GB of memory per board)
- NVIDIA DIGITS™ 3.0 software providing powerful design, training, and visualization of deep neural networks for image classification
- Pre-installed standard Ubuntu 14 with Caffe, Torch, Theano, BIDMach, cuDNN, OpenCV, the CUDA Toolkit, and MXNet
- Google TensorFlow software library
- Automatic software update tool included
- A turn-key server with up to 80 GB/s NVLink GPU-to-GPU interconnect bandwidth
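As a rough sizing sketch, the per-board figures quoted above can be aggregated across the four P100s. These are peak numbers from the spec, not sustained application throughput:

```python
# Aggregate peak specs across the 4x Tesla P100 SXM2 boards listed above.
# Per-board numbers come straight from the spec; totals are simple sums.
NUM_GPUS = 4
TFLOPS_FP32_PER_GPU = 10.6   # single-precision peak, TFLOPS
MEM_BW_PER_GPU_GBS = 732     # memory bandwidth, GB/s
MEM_PER_GPU_GB = 16          # HBM2 capacity, GB

total_tflops = NUM_GPUS * TFLOPS_FP32_PER_GPU
total_mem_bw = NUM_GPUS * MEM_BW_PER_GPU_GBS
total_mem = NUM_GPUS * MEM_PER_GPU_GB

print(f"Peak FP32: {total_tflops:.1f} TFLOPS")
print(f"Aggregate memory bandwidth: {total_mem_bw} GB/s")
print(f"Total GPU memory: {total_mem} GB")
```

This works out to 42.4 TFLOPS of peak single-precision compute, 2,928 GB/s of aggregate memory bandwidth, and 64 GB of total GPU memory across the four boards.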
Docker Images Available on Exxact Solutions
Docker images available for the following software:
- NVIDIA Caffe
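A containerized Caffe build is typically launched through the `nvidia-docker` wrapper so the container gets GPU access. This is a sketch only: `nvidia/caffe` is an assumed image name, and the actual image Exxact provides may differ. The guard keeps the command harmless on machines without the tool installed:

```shell
# Run NVIDIA Caffe inside a container with GPU access via nvidia-docker.
# 'nvidia/caffe' is an assumed image name; substitute the image Exxact supplies.
if command -v nvidia-docker >/dev/null 2>&1; then
  nvidia-docker run --rm nvidia/caffe caffe --version
else
  echo "nvidia-docker is not installed on this machine"
fi
```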
Supported Processors
- Intel Xeon E5-2600 v3
- Intel Xeon E5-2600 v4
Expansion Slots
- 3x PCI-E 3.0 x16 slots
- 1x PCI-E 3.0 x8 slot (low-profile)
Networking
- 2x RJ45 Gigabit Ethernet LAN ports
- 1x RJ45 dedicated IPMI port