As an NVIDIA Elite Partner, Exxact Corporation works closely with the NVIDIA team to ensure seamless factory integration and support. We pride ourselves on providing value-added service unmatched by our competitors.
EDU Discounts Available
Purchase a qualified NVIDIA-powered workstation or server from Exxact and gain exclusive EDU discounts.
Wide NVIDIA Platform Selection
Exxact offers a wide selection of workstation and server platforms to meet the unique compute needs of each customer's use case.
Standard 3-Year Warranty
Have peace of mind and focus on what matters most, knowing your system is backed by a 3-year warranty and support.
Find the Right Fit for Your Needs
The Most Powerful End-to-End AI and HPC Data Center Platforms from Exxact
- 1-4 GPUs with MIG for compute-intensive multi-GPU workloads
- PCIe: 4-8 GPUs
- 1 GPU w/ MIG
- 1-4 GPUs with MIG for higher ed and research
- 1-4 GPUs with MIG for compute-intensive single-GPU workloads
High-Performance Computing with NVIDIA Tesla A100
To unlock next-generation discoveries, scientists look to simulations to better understand complex molecules for drug discovery, physics for potential new sources of energy, and atmospheric data to better predict and prepare for extreme weather patterns.
A100 introduces double-precision Tensor Cores, providing the biggest milestone since the introduction of double-precision computing in GPUs for HPC. This enables researchers to reduce a 10-hour, double-precision simulation running on NVIDIA V100 Tensor Core GPUs to just four hours on A100. HPC applications can also leverage TF32 precision in A100’s Tensor Cores to achieve up to 10x higher throughput for single-precision dense matrix multiply operations.
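The TF32 throughput gain comes from a reduced-precision format: TF32 keeps FP32's 8-bit exponent range but carries only 10 mantissa bits, so the Tensor Cores can multiply much faster while accepting FP32 inputs. A minimal Python sketch of that mantissa reduction (using truncation for simplicity; the actual hardware rounds, and this is an illustration, not NVIDIA's implementation):

```python
import struct

def to_tf32(x: float) -> float:
    """Approximate TF32 by dropping FP32 mantissa bits below the top 10."""
    # Reinterpret the float32 bit pattern as a 32-bit integer.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # FP32 has 23 mantissa bits; TF32 keeps 10, so zero the low 13.
    bits &= ~((1 << 13) - 1)
    # Reinterpret the masked bits as a float again.
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# Values exactly representable in 10 mantissa bits pass through unchanged;
# others lose their low-order precision.
print(to_tf32(1.0))   # exact
print(to_tf32(0.1))   # slightly below 0.1
```

Because the exponent field is untouched, TF32 covers the same numeric range as FP32, which is why many HPC and training workloads can use it without rescaling their data.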
Geometric mean of application speedups vs. P100. Benchmark applications: Amber [PME-Cellulose_NVE], Chroma [szscl21_24_128], GROMACS [ADH Dodec], MILC [Apex Medium], NAMD [stmv_nve_cuda], PyTorch [BERT Large Fine Tuner], Quantum Espresso [AUSURF112-jR], Random Forest FP32 [make_blobs (160000 x 64 : 10)], TensorFlow [ResNet-50], VASP 6 [Si Huge] | GPU node with dual-socket CPUs and 4x NVIDIA P100, V100, or A100 GPUs.
Ampere A100 Accelerates Deep Learning Training and Inference
BERT pre-training throughput using PyTorch, including (2/3) Phase 1 and (1/3) Phase 2 | Phase 1 Seq Len = 128, Phase 2 Seq Len = 512; V100: NVIDIA DGX-1™ server with 8x V100 using FP32 precision; A100: DGX A100 server with 8x A100 using TF32 precision.
BERT Large Inference | NVIDIA T4 Tensor Core GPU: NVIDIA TensorRT (TRT) 7.1, precision = INT8, batch size = 256 | V100: TRT 7.1, precision = FP16, batch size = 256 | A100 with 7 MIG instances of 1g.5gb: pre-production TRT, batch size = 94, precision = INT8 with sparsity.
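The "7 MIG instances of 1g.5gb" configuration in the inference benchmark above can be set up with `nvidia-smi`. A sketch assuming an A100 40GB with a MIG-capable driver, where GPU instance profile ID 19 corresponds to 1g.5gb:

```shell
# Enable MIG mode on GPU 0 (takes effect after the GPU is idle/reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports
nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on A100 40GB)
# and their default compute instances (-C)
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting MIG devices
nvidia-smi -L
```

Each 1g.5gb instance behaves as an isolated GPU with its own memory and compute slice, which is what allows seven independent INT8 inference workloads to run concurrently on a single A100.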