Inference Servers & Edge Devices

Exxact Deep Learning Inference solutions are optimized for use in image and video search, video analytics, object classification and detection, and more.

High Performance Hardware

From NVIDIA T4 inference GPUs to Xilinx FPGA accelerators, Exxact Inference Solutions meet your most demanding deep learning inference tasks.

Low Latency, High Throughput

Exxact Deep Learning Inference Servers cater to real-time use cases that involve multiple inferences per query, such as automatic speech recognition, speech-to-text, and natural language processing.

Pre-Installed Frameworks

Our systems come pre-loaded with TensorFlow, PyTorch, Keras, Caffe, RAPIDS, Docker, Anaconda, MXNet, and more upon request.
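
For reference, here is a minimal Python sanity check (assuming TensorFlow 2.x and PyTorch are among the pre-installed frameworks) that confirms each framework can see the system's GPUs:

    # Sanity check: confirm the pre-installed frameworks can see the GPUs.
    # Assumes TensorFlow 2.x and PyTorch as shipped on the system.
    import tensorflow as tf
    import torch

    print("PyTorch CUDA available:", torch.cuda.is_available())
    print("PyTorch GPU count:", torch.cuda.device_count())
    print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))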

Suggested Exxact Deep Learning Inference Data Center Systems

Entry-Level
TS2-197278655-DPN

Base Specs
CPU: 2x 3rd Gen Intel Xeon Scalable Processors
GPU: 4x NVIDIA Tesla T4
Memory: 256GB
Storage: 1x 2TB SSD (OS/Data)

Mid-Range
TS2-197278655-DPN

Base Specs
CPU: 2x 3rd Gen Intel Xeon Scalable Processors
GPU: 8x NVIDIA Tesla T4
Memory: 512GB
Storage: 1x 2TB SSD (OS), up to 5x 2TB SSD (Data)

High-End
TS4-1910483-DPN

Base Specs
CPU: 2x Intel Xeon Scalable Processors
GPU: 20x NVIDIA Tesla T4
Memory: 512GB
Storage: 1x 2TB SSD (OS), up to 5x 2TB SSD (Data)

Not sure what you need?

Let us know what kind of project you have planned. We can help you decide.

Bring AI to the Edge with Exxact Inference Servers & NVIDIA EGX

Communicate with customers in real time. Adapt quickly as data flows from billions of sensors, from factory floors to store aisles. Instantaneously diagnose diseases and provide life-saving patient care. All of this is possible—smart retail, healthcare, manufacturing, transportation, and cities—with today's powerful AI and the NVIDIA EGX platform, which brings the power of accelerated AI computing to the edge.

High-Performance and Scalable

NVIDIA EGX is highly scalable, from a single-node GPU system up to a full rack of NVIDIA T4 servers delivering more than 10,000 TOPS, enough to serve hundreds of users with real-time speech recognition and other complex AI experiences.

Hybrid Cloud and Multicloud IoT

NVIDIA EGX is architecturally compatible with major cloud providers. AI applications developed in the cloud can run on NVIDIA EGX and vice versa. NVIDIA Edge Stack connects to major cloud IoT services, and customers can remotely manage their services.

Enterprise Grade and Secure

NVIDIA Edge Stack has been optimized for Red Hat OpenShift, the leading enterprise-grade Kubernetes container orchestration platform. Mellanox SmartNICs can offload and accelerate software-defined networking, enabling a higher level of isolation and security without impacting CPU performance.

Enterprise-Grade Software Stack for the Edge

NVIDIA Edge Stack is an optimized software stack that includes NVIDIA drivers, a CUDA® Kubernetes plug-in, a CUDA Docker container runtime, CUDA-X libraries, and containerized AI frameworks and applications, including NVIDIA TensorRT™, TensorRT Inference Server, and DeepStream.
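
As a rough illustration of how such a deployment can be probed, the sketch below polls the TensorRT Inference Server's HTTP health endpoints from Python. The localhost address is a placeholder, and the endpoint paths follow the server's v1 HTTP API, so adjust both to your deployment:

    import requests

    # Placeholder address; TensorRT Inference Server serves HTTP on
    # port 8000 by default.
    SERVER = "http://localhost:8000"

    # Liveness/readiness endpoints from the server's v1 HTTP API; paths
    # may differ in later (Triton) releases.
    live = requests.get(f"{SERVER}/api/health/live", timeout=5)
    ready = requests.get(f"{SERVER}/api/health/ready", timeout=5)

    print("server live: ", live.status_code == 200)
    print("server ready:", ready.status_code == 200)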

Figure: NVIDIA EGX platform software stack

NVIDIA TensorRT Hyperscale Inference Platform

The NVIDIA TensorRT™ Hyperscale Inference Platform is designed to make deep learning accessible to every developer and data scientist anywhere in the world. Built on the Turing architecture, the Tesla T4 accelerates all types of neural networks for images, speech, translation, and recommender systems. It supports a wide range of precisions and accelerates all major DL frameworks, including TensorFlow, PyTorch, MXNet, Chainer, and Caffe2.

The NVIDIA TensorRT optimizer and runtime unlock the power of Turing GPUs across a wide range of precisions, from FP32 down to INT4. In addition, TensorRT integrates with TensorFlow and supports all major frameworks through the ONNX format. NVIDIA TensorRT Inference Server is a production-ready deep learning inference server that reduces costs by maximizing GPU utilization and saves time by integrating seamlessly into production architectures.
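
As a concrete sketch, the following snippet uses TensorRT's ONNX parser to build a reduced-precision (FP16) engine from a model file. The "model.onnx" path is a placeholder, and the exact builder calls vary across releases (this follows the TensorRT 8-era Python API):

    import tensorrt as trt

    # Build an FP16 engine from an ONNX model.
    # "model.onnx" is a placeholder path; API details vary by version.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse ONNX model")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # reduced precision on Turing GPUs
    serialized = builder.build_serialized_network(network, config)

    # Persist the serialized engine for deployment.
    with open("model.plan", "wb") as f:
        f.write(serialized)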

For large-scale, multi-node deployments, Kubernetes enables enterprises to scale training and inference across multi-cloud GPU clusters. It lets software developers and DevOps engineers automate the deployment, maintenance, scheduling, and operation of multiple GPU-accelerated application containers across clusters of nodes. With Kubernetes on NVIDIA GPUs, they can build and deploy GPU-accelerated deep learning training or inference applications to heterogeneous GPU clusters and scale them seamlessly.
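
To make the GPU-scheduling step concrete, here is a minimal sketch using the official Kubernetes Python client to launch a pod that requests one GPU through the NVIDIA device plugin's nvidia.com/gpu resource. The pod name and container image tag are illustrative:

    from kubernetes import client, config

    # Assumes kubeconfig credentials are already set up on this machine.
    config.load_kube_config()

    # Illustrative pod: one container requesting a single NVIDIA GPU via
    # the device plugin's nvidia.com/gpu resource.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="inference-example"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="inference",
                    image="nvcr.io/nvidia/tensorrtserver:19.10-py3",  # illustrative tag
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)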

Use Cases for Inference Solutions

Data Center

Self-Driving Cars

Intelligent Video Analytics

Embedded Devices