The Tensor TS4-1642659-DPN is a 4U rack-mountable deep learning NVIDIA GPU server supporting 1x Intel Core X-Series processor and a maximum of 128 GB of DDR4 memory.
GPUs have provided groundbreaking performance to accelerate deep learning research, with thousands of computational cores and up to 100x application throughput compared to CPUs alone. Exxact has developed the Deep Learning DevBox, featuring NVIDIA GPU technology with state-of-the-art PCIe peer-to-peer interconnect technology and a full pre-installed suite of the leading deep learning software, so developers can get a jump-start on deep learning research with the best tools that money can buy.
- 1x Intel® Core™ X-Series processor
- 4x NVIDIA Quadro GV100 GPUs, each with 14.8 TFLOPS of single-precision performance, up to 870 GB/s of memory bandwidth, and 32 GB of HBM2 memory; Quadro GP100, P6000, P5000, or P4000 GPUs also supported
- NVIDIA DIGITS™ software providing powerful design, training, and visualization of deep neural networks for image classification
- Pre-installed standard Ubuntu 14 with Caffe, Torch, Theano, BIDMach, cuDNN, OpenCV, CUDA Toolkit, and MXNet
- Google TensorFlow software library
- Automatic software update tool included
- A turn-key server with superior PCIe peer-to-peer topology
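On a freshly delivered system it can be useful to confirm which of the pre-installed frameworks are actually importable. A minimal sketch using only the Python standard library; the package names below are assumptions about how each framework is exposed in Python (Torch, for example, is traditionally driven from Lua), so adjust them to match the actual installation:

```python
import importlib.util

# Candidate Python package names for the pre-installed stack.
# These names are assumptions -- edit to match your installation.
FRAMEWORKS = ["caffe", "theano", "mxnet", "cv2", "tensorflow"]

def check_installed(names):
    """Map each package name to True if it can be imported."""
    return {name: importlib.util.find_spec(name) is not None
            for name in names}

if __name__ == "__main__":
    for name, ok in check_installed(FRAMEWORKS).items():
        print(f"{name}: {'available' if ok else 'missing'}")
```

Running the script prints one availability line per framework, which is a quick way to spot a package that the automatic update tool has not yet pulled in.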
Docker Images Available on Exxact Solutions
- NVIDIA Caffe
Processor Support
- 1x Intel Core X-Series processor (Core i5, Core i7, or Core i9)

Memory
- 128 GB max (8-core and above CPU)
- 64 GB max (4-core CPU)
Expansion Slots
- 7x PCIe 3.0 x16 slots (single x16, dual x16/x16, triple x16/x16/x16, quad x16/x16/x16/x16, or seven x16/x8/x8/x8/x8/x8/x8; supports 4x double-wide cards)

Storage
- 1x M.2 PCIe 3.0 x4 Socket 3, with M key, type 2242/2260/2280/22110 storage device
- 1x M.2 PCIe 3.0 x4 Socket 3, with M key, type 2242/2260/2280 storage device
- 2x U.2 connectors

Networking
- 2x RJ45 Gigabit Ethernet LAN ports
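For context on the x16/x8 slot figures above: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, giving roughly 0.985 GB/s of usable bandwidth per lane in each direction. A quick back-of-the-envelope calculation of per-slot bandwidth:

```python
# PCIe 3.0 link parameters (per lane, per direction).
TRANSFERS_PER_S = 8e9       # 8 GT/s signaling rate
ENCODING = 128 / 130        # 128b/130b line-encoding efficiency

# Usable bytes per second per lane (1 bit per transfer, 8 bits per byte).
BYTES_PER_LANE = TRANSFERS_PER_S * ENCODING / 8   # ~0.985 GB/s

def slot_bandwidth_gbs(lanes):
    """Approximate one-direction bandwidth in GB/s for a PCIe 3.0 slot."""
    return lanes * BYTES_PER_LANE / 1e9

print(f"x16: {slot_bandwidth_gbs(16):.2f} GB/s")  # ~15.75 GB/s
print(f"x8:  {slot_bandwidth_gbs(8):.2f} GB/s")   # ~7.88 GB/s
```

This theoretical per-slot ceiling is what the peer-to-peer topology is designed to exploit, since GPU-to-GPU transfers over PCIe avoid a round trip through host memory.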