- Rack Height: 4U
- Processor: 2x Intel Xeon Scalable family
- Drive Bays: 16x 2.5" Hot-Swap (8x NVMe)
- Supports up to 8x NVIDIA Tesla V100 SXM2 GPUs
The TensorEX TS4-1686449-DPN is a 4U rack-mountable Deep Learning & AI server supporting 2x Intel Xeon Scalable family processors, up to 3 TB of DDR4 memory, and eight NVIDIA Tesla V100 (Volta, SXM2) GPUs with up to 150 GB/s of NVLink 2.0 GPU-to-GPU interconnect bandwidth.
GPUs have delivered groundbreaking performance for deep learning research, offering thousands of computational cores and up to 100x the application throughput of CPUs alone. Exxact developed the TensorEX TS4-1686449-DPN to pair NVIDIA GPU technology and NVLink 2.0 GPU-to-GPU interconnect with a full pre-installed suite of leading deep learning software, giving developers a jump-start on deep learning research with best-in-class tools.
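The throughput advantage is easiest to see as arithmetic. The sketch below estimates the time for one large dense matrix multiply (the core operation in deep learning workloads) at the V100's rated 15.7 TFLOPS; the ~1 TFLOPS CPU baseline is an assumption for illustration, not a figure from this page:

```python
# Back-of-the-envelope: one large FP32 matrix multiply at the V100's
# rated throughput vs. an assumed CPU baseline.
# NOTE: the 1.0 TFLOPS CPU figure is a hypothetical baseline, not a spec.

M = N = K = 16384                      # square matrix dimensions
flops = 2 * M * N * K                  # multiply-adds in C = A @ B

V100_TFLOPS = 15.7                     # from the spec sheet (single precision)
CPU_TFLOPS = 1.0                       # assumed multi-core CPU, for illustration

gpu_seconds = flops / (V100_TFLOPS * 1e12)
cpu_seconds = flops / (CPU_TFLOPS * 1e12)

print(f"GPU: {gpu_seconds:.2f} s, CPU: {cpu_seconds:.2f} s, "
      f"speedup {cpu_seconds / gpu_seconds:.1f}x")
```

The single-kernel speedup here is just the TFLOPS ratio; the "up to 100x" application-level figure comes from sustained utilization across many such operations, where CPUs fall further behind.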
- Supports 2x Intel Xeon Scalable Family processors (Socket P)
- 8x Tesla V100 SXM2 GPUs, 32 GB or 16 GB per board (15.7 TFLOPS single-precision performance and 900 GB/s memory bandwidth per GPU)
- NVIDIA DIGITS software providing powerful design, training, and visualization of deep neural networks for image classification
- Pre-installed standard Ubuntu 18.04 w/ Deep Learning software stack
- Google TensorFlow software library
- Automatic software update tool included
- A turn-key server with up to 150 GB/s NVLink 2.0 GPU-to-GPU interconnect
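Taken together, the per-GPU figures above imply the following aggregate capability for a fully populated system with eight 32 GB boards (simple multiplication of the listed specs; sustained real-world throughput will be lower):

```python
# Aggregate capability of 8x Tesla V100 SXM2 32 GB, from the per-board specs.
NUM_GPUS = 8
FP32_TFLOPS_PER_GPU = 15.7      # single precision, per board
MEM_BW_GBPS_PER_GPU = 900       # memory bandwidth, per board
MEM_GB_PER_GPU = 32             # 32 GB SXM2 variant

total_tflops = NUM_GPUS * FP32_TFLOPS_PER_GPU           # aggregate FP32 compute
total_bw_tbps = NUM_GPUS * MEM_BW_GBPS_PER_GPU / 1000   # aggregate memory bandwidth
total_mem_gb = NUM_GPUS * MEM_GB_PER_GPU                # total GPU memory

print(f"{total_tflops:.1f} TFLOPS FP32, {total_bw_tbps:.1f} TB/s, {total_mem_gb} GB")
```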
EMLI (Exxact Machine Learning Images)
*Additional NGC (NVIDIA GPU Cloud) containers can be added upon request.
Who is it for?
- Conda EMLI (Separated Frameworks): for developers who want pre-installed deep learning frameworks and their dependencies in separate Python environments, installed natively on the system.
- Container EMLI (Flexible. Reconfigurable.): for developers who want pre-installed frameworks utilizing the latest NGC containers, GPU drivers, and libraries in ready-to-deploy DL environments with the flexibility of containerization.
- DIY EMLI (Simple. Clean. Custom.): for experienced developers who want a minimalist install to set up their own private deep learning repositories or custom builds of deep learning frameworks.
|Software|Conda EMLI|Container EMLI|DIY EMLI|
|---|---|---|---|
|Microsoft Cognitive Toolkit| |—|—|
|NVIDIA CUDA Toolkit| | | |
|NVIDIA CUDA Dev Toolkit| |—| |
|Micro-K8s|Free upgrade available|Free upgrade available|Free upgrade available|
- Bronze 31XX
- Bronze 32XX
- Silver 41XX
- Silver 42XX
- Gold 51XX
- Gold 52XX
- Gold 61XX
- Gold 62XX
- Platinum 81XX
- Platinum 82XX
- DDR4 SDRAM
- DDR4 NVDIMM (Intel Optane DCPMM)
- Via C621 chipset
- RAID 0, 1, 5, 10
- 4x PCI-E 3.0 x16 slots (Low-profile, GPU tray for GPUDirect RDMA)
- 2x PCI-E 3.0 x16 slots (Low-profile, CPU tray)
- 16x 2.5" hot-swap drive bays
- Supports 8x NVMe drives
- 2x RJ45 10GBASE-T Ethernet LAN Ports
- 1x RJ45 Dedicated IPMI Port
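The RAID modes listed above trade capacity for redundancy in different ways. A small sketch of usable capacity per level follows; the drive size and drives-per-array values are illustrative assumptions, not specs from this page:

```python
# Usable capacity for the supported RAID levels (0, 1, 5, 10).
# Drive size and array size below are illustrative assumptions only.

def usable_tb(level: str, n_drives: int, drive_tb: float) -> float:
    """Return usable capacity in TB for a single array at the given RAID level."""
    if level == "0":                  # striping: no redundancy, full capacity
        return n_drives * drive_tb
    if level == "1":                  # mirroring: half of raw capacity
        return n_drives * drive_tb / 2
    if level == "5":                  # striping with one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if level == "10":                 # striped mirrors: half of raw capacity
        return n_drives * drive_tb / 2
    raise ValueError(f"unsupported RAID level: {level}")

# Example: eight hypothetical 2 TB drives in each supported mode.
for lvl in ("0", "1", "5", "10"):
    print(f"RAID {lvl}: {usable_tb(lvl, 8, 2.0):.0f} TB usable")
```

RAID 5 maximizes usable space while tolerating a single drive failure; RAID 10 gives up more capacity in exchange for better rebuild and write performance.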