Exxact TensorEX TS4-672702-DPN 4U 2x Intel Xeon processor Deep Learning & AI server
- Rack Height: 4U
- Processor Supported: 2x Intel Xeon Scalable Family
- Drive Bays: 24x 2.5" Hot-Swap
- Supports up to 10x Double-Wide cards in a Single-Root complex
The TensorEX TS4-672702-DPN is a 4U rack-mountable Deep Learning & AI server supporting 2x Intel Xeon Scalable processors and up to 3 TB of DDR4 memory. It is a compute node designed and optimized for computing accelerators such as NVIDIA® Tesla® GPUs. With support for up to 10 double-wide GPU cards on a single-root-complex PCIe bus, it delivers major gains in processing time for the most compute-intensive applications, from cutting-edge rendering farms to the latest simulations in leading scientific research.
*Additional NGC (NVIDIA GPU Cloud) containers can be added upon request.
- Conda EMLI: Separated Frameworks
- Container EMLI: Flexible. Reconfigurable.
- DIY EMLI: Simple. Clean. Custom.
Who is it for?
- Conda EMLI: For developers who want pre-installed deep learning frameworks and their dependencies in separate Python environments, installed natively on the system.
- Container EMLI: For developers who want pre-installed frameworks utilizing the latest NGC containers, GPU drivers, and libraries in ready-to-deploy DL environments with the flexibility of containerization.
- DIY EMLI: For experienced developers who want a minimalist install so they can set up their own private deep learning repositories or custom builds of deep learning frameworks.
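As a sketch of how the containerized workflow typically looks, the commands below pull and run a framework image from NGC (the image tag is an illustrative example, not a specific image shipped with this system):

```shell
# Illustrative only: pull a deep learning framework image from NGC
# (the tag below is an example; browse ngc.nvidia.com for current tags)
docker pull nvcr.io/nvidia/pytorch:24.01-py3

# Run it interactively with all GPUs visible inside the container
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:24.01-py3
```

The `--gpus all` flag requires the NVIDIA Container Toolkit, which is what lets a single host expose its GPUs to multiple, independently versioned framework containers.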
| | Conda EMLI | Container EMLI | DIY EMLI |
|---|---|---|---|
| Microsoft Cognitive Toolkit | | — | — |
| NVIDIA CUDA Toolkit | | | |
| NVIDIA CUDA Dev Toolkit | | — | |
| Micro-K8s | Free upgrade available | Free upgrade available | Free upgrade available |