The Tensor TS1-672696-DPN is a 1U rack-mountable deep learning NVIDIA GPU server supporting two Intel Xeon Scalable Family processors, up to 768 GB of DDR4 memory, and four Tesla P100 Pascal (SXM2) GPUs with up to 80 GB/s NVLink GPU-to-GPU interconnect.
GPUs have delivered groundbreaking performance for deep learning research, with thousands of computational cores and up to 100x the application throughput of CPUs alone. Exxact has developed the Deep Learning DevBox, which couples NVIDIA GPU technology with state-of-the-art NVLink GPU-to-GPU interconnect and a full pre-installed suite of leading deep learning software, so developers can jump-start their deep learning research with the best tools money can buy.
- Supports 2x Intel Xeon Scalable Family processors (Socket 3647)
- 4x Tesla P100 SXM2 GPUs (10.6 TFLOPS single-precision performance, 732 GB/s memory bandwidth, and 16 GB of memory per board)
- NVIDIA DIGITS™ 3.0 software for designing, training, and visualizing deep neural networks for image classification
- Pre-installed standard Ubuntu 14 with Caffe, Torch, Theano, BIDMach, cuDNN, OpenCV, the CUDA Toolkit, and MXNet
- Google TensorFlow software library
- Automatic software update tool included
- A turn-key server with up to 80 GB/s NVLink GPU-to-GPU interconnect
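As a quick sanity check on the headline numbers, the per-board P100 figures above can be aggregated for the full four-GPU configuration. A minimal sketch in Python (the per-board values come from the spec list above; nothing here queries real hardware):

```python
# Per-board Tesla P100 SXM2 figures from the spec list above.
NUM_GPUS = 4
TFLOPS_FP32_PER_GPU = 10.6   # single-precision TFLOPS
MEM_BW_GBS_PER_GPU = 732     # memory bandwidth, GB/s
MEM_GB_PER_GPU = 16          # HBM2 capacity, GB

# Aggregate figures for the full 4x GPU configuration.
total_tflops = NUM_GPUS * TFLOPS_FP32_PER_GPU   # 42.4 TFLOPS
total_bw = NUM_GPUS * MEM_BW_GBS_PER_GPU        # 2928 GB/s
total_mem = NUM_GPUS * MEM_GB_PER_GPU           # 64 GB

print(f"{total_tflops:.1f} TFLOPS, {total_bw} GB/s, {total_mem} GB of GPU memory")
```

So a fully populated system offers roughly 42.4 single-precision TFLOPS and 64 GB of GPU memory in 1U.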
Docker Images Available on Exxact Solutions
Docker images are available for the following software:
- NVIDIA Caffe
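For reference, a typical workflow with such an image looks like the following shell sketch. The image tag `nvidia/caffe` is an assumption (substitute the tag Exxact actually publishes); the commands are printed as a dry run rather than executed:

```shell
# Hypothetical image tag -- substitute the tag Exxact actually publishes.
IMAGE="nvidia/caffe"

# Dry run: print the pull/run commands. On the server itself, remove the
# `echo`s to execute them (nvidia-docker exposes the host's GPUs and
# driver to the container).
echo "nvidia-docker pull $IMAGE"
echo "nvidia-docker run -it --rm $IMAGE caffe --version"
```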
Supported Intel Xeon Scalable processor tiers:
- Bronze 31XX
- Silver 41XX
- Gold 51XX
- Gold 61XX
- Platinum 81XX
Expansion, storage, and networking:
- 2x PCI-E 3.0 x16 slots
- 2x PCI-E 3.0 x16 slots (low-profile)
- 4x 2.5" drive bays
- Supports 2x NVMe drives
- 2x RJ45 10GBASE-T LAN ports
- 1x RJ45 IPMI LAN port