Corespan 2500 Product Series Datasheet

Corespan 2500 Series Overview

Disaggregate and dynamically compose GPU, storage, and accelerator resources with photonic connectivity and real-time orchestration. The 2500 Series delivers high-density PCIe Gen5 pooling, low-latency paths, and elastic allocation for AI/ML, HPC, private cloud, and GPU-as-a-Service environments.

Key Features

High-Density, PCIe Gen5 Resource Platform

• Up to 12 PCIe Gen5 x16 device slots per PRU chassis
• Support for heterogeneous devices (GPUs, FPGAs, NICs, storage)
• Four host FIC slots for flexible fabric bandwidth

Photonic Fabric Integration

• Ultra-low latency and high bandwidth across pooled resources
• Direct device-to-device PCIe paths (GPU-GPU, GPU-storage)
• Optical connectivity reduces hops and CPU bottlenecks
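
To show how applications typically exercise direct GPU-to-GPU paths like these, here is a minimal sketch using PyTorch peer-to-peer access. It is illustrative only and assumes two NVIDIA GPUs visible to the same host; it is not Corespan-specific code.

    import torch

    # Illustrative sketch (not Corespan-specific): copy a tensor directly
    # between two GPUs when peer-to-peer access is available, so data moves
    # over the device-to-device PCIe path instead of through host memory.
    if torch.cuda.device_count() >= 2 and torch.cuda.can_device_access_peer(0, 1):
        src = torch.randn(1024, 1024, device="cuda:0")
        dst = src.to("cuda:1")   # GPU-to-GPU copy, bypassing the host CPU
        torch.cuda.synchronize()
        print("Peer copy complete:", dst.shape)
    else:
        print("Peer access between cuda:0 and cuda:1 is not available")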

Dynamic Composition & Orchestration

• Real-time attach/detach of PCIe devices
• Policy-driven provisioning for multi-tenant and service offerings
• Centralized pool visibility and health monitoring
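
For a sense of what real-time, policy-driven composition can look like from operator tooling, the sketch below issues a single REST call to attach pooled GPUs to a host. The orchestrator address, endpoint path, and field names are hypothetical placeholders for illustration, not the Corespan API.

    import requests

    # Hypothetical example: the address, endpoint, and field names below are
    # placeholders and are not taken from Corespan documentation.
    ORCHESTRATOR = "https://orchestrator.example.local/api/v1"

    # Request that two pooled GPUs be attached to a host under a tenant policy.
    response = requests.post(
        f"{ORCHESTRATOR}/compositions",
        json={"host": "host-07", "device_type": "gpu", "count": 2, "policy": "tenant-a"},
        timeout=30,
    )
    response.raise_for_status()
    print("Composition created:", response.json())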

Multi-Vendor Support

• Works with leading GPU and accelerator brands
• Composable infrastructure without vendor lock-in

Use Cases

AI and ML Platforms

Dynamic GPU allocation at scale

GPU-as-a-Service

Right-sized GPUs per tenant

Composable Private Cloud

Independent scaling of compute

Disaggregated Storage Architectures

On-demand storage, faster provisioning

Technical Specifications

PCIe Support: PCIe Gen5 x16 device slots (12 per PRU)
Host Interconnect: FIC 2500 (4 slots per PRU)
Device Support: GPUs, FPGAs, SmartNICs, NVMe, accelerators
Hot Swap: Real-time attach/detach
Compatibility: Multi-vendor PCIe devices

Benefits

The 2500 Series delivers a photonic-native, composable PCIe Gen5 infrastructure that fundamentally improves how data center resources are deployed and utilized. By disaggregating GPUs, storage, and accelerators from fixed servers and pooling them over low-latency photonic connectivity, the platform eliminates stranded capacity, reduces power and cooling overhead, and extends hardware lifecycles through modular upgrades. Direct device-to-device communication bypasses traditional server bottlenecks, enabling higher throughput and predictable performance for AI, HPC, and cloud workloads while supporting seamless scaling without disruptive forklift upgrades.

User Benefits

For operators and end users, the 2500 Series translates into faster deployment, greater flexibility, and lower total cost of ownership. Infrastructure teams can provision and reassign GPU and accelerator resources in real time, simplify maintenance through non-disruptive hot-swap capabilities, and confidently support multi-tenant environments with secure, isolated resource paths. End customers gain access to right-sized, high-performance resources on demand—accelerating AI and HPC workflows, improving service reliability, and enabling new consumption models such as GPU-as-a-Service without overprovisioning or long lead times.

Corespan 2500 Series Use Case Diagram: illustrates the Corespan 2500 Series PRU 2500 and FIC 2500.

For More Details

Need more help? Get in touch with our sales or support team.