BeeGFS Storage for HPC

Turnkey storage appliances built on validated hardware for the most demanding HPC workloads.

Never Lose Data Again

BeeGFS Buddy Mirroring automatically replicates data, handles storage server failures transparently for running applications, and provides automatic self-healing.
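As a rough sketch of how Buddy Mirroring is typically enabled (exact flag spellings vary by BeeGFS release, and the mount point and directory below are assumptions):

  # Define buddy groups automatically across the registered storage servers
  $ beegfs-ctl --addmirrorgroup --automatic --nodetype=storage

  # Enable buddy mirroring for new files created under a given directory
  # (flag name differs between BeeGFS versions)
  $ beegfs-ctl --setpattern --pattern=buddymirror /mnt/beegfs/projects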

Efficient Persistent Access

Create multiple storage pools, segmenting projects by performance requirements so each project gets exactly what it needs.
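For illustration, pools are created and assigned with the beegfs-ctl tool; the target IDs, pool ID, and paths below are hypothetical:

  # Group selected storage targets into a named pool
  $ beegfs-ctl --addstoragepool --desc="project-fast" --targets=101,102,103

  # Pin a project directory to that pool (pool ID as reported by the command above)
  $ beegfs-ctl --setpattern --storagepoolid=2 /mnt/beegfs/projects/projA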

Scale On Demand

Simply add more disks to increase capacity. With BeeOND, you can also create shared parallel filesystems on demand on a per-job basis, as sketched below.
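A minimal BeeOND sketch, assuming a job scheduler that provides a node list file and local SSDs on the compute nodes (paths are examples):

  # Spin up a per-job parallel file system across the job's compute nodes
  $ beeond start -n $NODEFILE -d /local/beeond -c /mnt/beeond

  # ... run the job against /mnt/beeond ...

  # Tear it down when the job finishes (see the BeeOND docs for flag details)
  $ beeond stop -n $NODEFILE -L -d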

Simple and Scalable on any Hardware

BeeGFS Architecture

Networking

BeeGFS systems are compatible with any TCP/IP or RDMA-capable network, such as InfiniBand, Omni-Path, or RoCE.
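In practice, the transport is selected in the client configuration; a hedged excerpt (default file path assumed, option names per recent BeeGFS releases):

  # /etc/beegfs/beegfs-client.conf (excerpt)
  connUseRDMA        = true                        # use RDMA (InfiniBand/RoCE) when available
  connInterfacesFile = /etc/beegfs/connInterfaces  # NICs listed in order of preference

  # /etc/beegfs/connInterfaces (one interface per line, preferred first)
  ib0
  eth0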

Management Node

The BeeGFS management service provides a rendezvous point for new servers and clients, and it continually watches registered services and checks their states.
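As a sketch, initializing and starting the management service usually looks like this (the data path is an example):

  # Initialize the management service's data directory
  $ /opt/beegfs/sbin/beegfs-setup-mgmtd -p /data/beegfs/mgmtd

  # Start the service
  $ systemctl start beegfs-mgmtd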

Metadata Node

The metadata servers manage directory information, file ownership, and the location of file contents on storage targets, enabling quick retrieval.
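A comparable setup sketch for a metadata service (the service ID and the management hostname "mgmt01" are examples):

  # Initialize a metadata service with numeric ID 2, pointing at the management host
  $ /opt/beegfs/sbin/beegfs-setup-meta -p /data/beegfs/meta -s 2 -m mgmt01

  $ systemctl start beegfs-meta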

Flash Storage

Optional flash storage provides optimal performance for active high-performance jobs. Flash storage targets can be combined into a Fast Pool and pinned to a directory.
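Illustrative only, with assumed pool and target IDs (flag names for pool modification may differ by version):

  # Add NVMe/SSD targets to an existing pool (ID 2 assumed to be the Fast Pool)
  $ beegfs-ctl --modifystoragepool --id=2 --addtargets=201,202

  # Confirm which pool a directory will allocate from
  $ beegfs-ctl --getentryinfo /mnt/beegfs/scratch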

Storage Servers

These robust storage servers perform the bulk storage functions, storing user file contents on internal or externally attached disks.
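Bringing a storage server's RAID volume into the file system is typically a one-time setup; the paths, IDs, and hostname below are examples:

  # Register a storage target on this server (service ID 3, target ID 301)
  $ /opt/beegfs/sbin/beegfs-setup-storage -p /mnt/raid6/beegfs_storage -s 3 -i 301 -m mgmt01

  $ systemctl start beegfs-storage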

BeeGFS cluster illustration

1 PB 16U Storage Cluster

  • 4x 4U Storage Nodes
  • 1,056TB Capacity
  • Optional flash storage pools (NVMe or SSD)

1.5 PB 24U Storage Cluster

  • 6x 4U Storage Nodes
  • 1,584TB Capacity
  • Optional flash storage pools (NVMe or SSD)

2 PB 32U Storage Cluster

  • 8x 4U Storage Nodes
  • 2,112TB Capacity
  • Optional flash storage pools (NVMe or SSD)

Each Exxact BeeGFS Storage Cluster Includes

1x 1U Management Node

  • 2x Intel Xeon Scalable Processors
  • 96GB Memory
  • 960GB SSD
  • 24TB HDD

1x 1U Metadata Node

  • 2x Intel Xeon Scalable Processors
  • 96GB Memory
  • 960GB SSD
  • 48TB HDD

1x 1U Networking

  • 24-port 10GbE Ethernet
  • Optional 25GbE/40GbE/50GbE
  • Optional EDR InfiniBand/100GbE

See How it Scales

Benchmark panels: read/write throughput, file creates, and IOPS.

Test setup: 2x Intel Xeon X5660, 48 GB RAM, 4x Intel 510 Series SSD (RAID 0), ext4, QDR InfiniBand, running Scientific Linux 6.3 (kernel 2.6.32-279) and FhGFS 2012.10-beta1. The benchmarks were performed on Fraunhofer Seislab, a test and experimentation cluster at Fraunhofer ITWM with 25 nodes (20 compute + 5 storage) and a three-tier storage hierarchy: 1 TB RAM, 20 TB SSD, 120 TB HDD. Single-node performance on the local file system without BeeGFS is 1,332 MB/s (write) and 1,317 MB/s (read).