Exxact Servers featuring NVIDIA® Tesla® GPUs Provide Unmatched Scalability for AI, HPC, and Data Center Applications
Accelerate your most demanding HPC and hyperscale data center workloads with NVIDIA Tesla GPU Servers. Data scientists and researchers can now parse petabytes of data orders of magnitude faster than they could using traditional CPUs, in applications ranging from energy exploration to deep learning. Tesla accelerators also deliver the horsepower needed to run bigger simulations faster than ever before. Plus, Tesla delivers the highest performance and user density for virtual desktops, applications, and workstations.
As an NVIDIA Elite Partner, Exxact Corporation works closely with the NVIDIA team to ensure seamless factory integration and support. We pride ourselves on providing value-added service standards unmatched by our competitors.
NVIDIA A100 Accelerates Deep Learning Training and Inference
BERT pre-training throughput using PyTorch, including (2/3) Phase 1 and (1/3) Phase 2. Phase 1 sequence length = 128; Phase 2 sequence length = 512. V100: NVIDIA DGX-1™ server with 8x V100 using FP32 precision; A100: DGX™ A100 server with 8x A100 using TF32 precision.
Geometric mean of application speedups vs. P100. Benchmark applications: Amber [PME-Cellulose_NVE], Chroma [szscl21_24_128], GROMACS [ADH Dodec], MILC [Apex Medium], NAMD [stmv_nve_cuda], PyTorch [BERT-Large Fine Tuner], Quantum ESPRESSO [AUSURF112-jR], Random Forest FP32 [make_blobs (160000 x 64 : 10)], TensorFlow [ResNet-50], VASP 6 [Si Huge]. Test configuration: GPU node with dual-socket CPUs and 4x NVIDIA P100, V100, or A100 GPUs.
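The A100 results above use TF32, the Ampere-generation Tensor Core math mode that accelerates FP32 workloads with no code changes to the model itself. As a minimal sketch (assuming PyTorch 1.7 or later, where these backend flags were introduced), TF32 can be toggled explicitly:

```python
import torch

# TF32 accelerates FP32 matrix multiplications and convolutions on
# Ampere GPUs (e.g. A100) by rounding inputs to a 10-bit mantissa
# while accumulating in full FP32.
torch.backends.cuda.matmul.allow_tf32 = True  # matmul operations
torch.backends.cudnn.allow_tf32 = True        # cuDNN convolutions

# To reproduce strict FP32 behavior instead, set both flags to False.
print(torch.backends.cuda.matmul.allow_tf32)
```

On pre-Ampere GPUs (such as the V100 in the comparison above) these flags have no effect, since TF32 hardware paths are absent.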
Highly Power Efficient NVIDIA T4 Inference Solutions
The NVIDIA Tesla T4 GPU is the world’s most efficient accelerator for AI inference workloads. Powered by NVIDIA Turing Tensor Cores, the T4 provides revolutionary multi-precision inference performance to accelerate the diverse applications of modern AI. The Exxact Tensor TS4-1910483 4U server supports up to 20 T4 PCIe cards for maximum deep learning inference density.