

WEKA Neural Mesh Data Platform


The Core component intelligently distributes data and metadata across the NeuralMesh, a network of containerized microservices with intelligent load balancing, built-in data protection, and automatic self-healing, creating a highly available, fault-tolerant data platform.
The Accelerate component gives NeuralMesh microsecond latency and ultra-high throughput by fusing memory and flash into a unified pool. It intelligently distributes metadata, eliminates data duplication, and bypasses kernel overhead, delivering blistering performance and maximum GPU efficiency at any scale.
WEKA reinvents storage with a distributed parallel file system rebuilt from the ground up, featuring local snapshotting, automated tiering, backup redundancy, and more, increasing utilization, reducing complexity, and creating efficient data pipelines.
The Observe component provides real-time manageability and observability of your NeuralMesh storage environment. Optimize performance and prevent downtime with telemetry, logs, metrics, and change tracking, plus latency-aware rebalancing, anomaly detection, and AI-driven insights.
Enterprise Services provides mission-critical security, data protection, and isolation for NeuralMesh. Experience high performance across tenants with NeuralMesh's role-based access, zero-copy efficiency, built-in erasure coding, and zero tuning. Every workload gets exactly what it needs, securely and efficiently.
Before and After WEKA Data Platform Deployment
WEKApod: Certified for NVIDIA DGX SuperPOD™ on NVIDIA DGX H100 Systems

The WEKApod Data Platform Appliance seamlessly integrates turnkey storage hardware and award-winning storage software with NVIDIA DGX for simplicity, performance, scalability, and efficiency.
Infinite Scalability
One 8-node WEKApod delivers 720 GB/s of sustained read and 186 GB/s of sustained write throughput, with infinite, linear scalability.
Sustainable Efficiency
10-50x better AI/ML efficiency and a 4-7x smaller infrastructure footprint through data copy reduction and cloud elasticity.
Integration Simplicity
Fully supported on NVIDIA Base Command Manager and orchestration tools like Run:AI for single pane-of-glass management.
On-Prem & Cloud
Seamlessly connect on-premises AI workloads with GPU cloud environments for hybrid workflows and backup archival.