Bottom Line Up Front: Corespan delivers native PCIe Gen 5 over optics: each PRU 2500 attaches up to 40 NVMe SSDs and exposes them as direct-attached PCIe storage to hosts equipped with a FIC 2500. Using 30TB drives as an example, each PRU 2500 provides 1.2PB of raw NVMe capacity, presented through FIC 2500 cards as true local SSD devices in the host OS. These drives can be dynamically mapped and re-mapped to any host, enabling just-in-time composition of storage without NVMe-oF or traditional SAN overhead. Leveraging an optical circuit switch for PCIe over optics, Corespan builds a low-latency, rack-scale storage interconnect where large storage clusters can be assembled, disassembled, and reassigned as workload demands change.
The Details: AI infrastructure is forcing a hard reset in how we think about storage. For years, storage was treated as something you bolted into a server at purchase time. You guessed the right ratio of CPU, memory, GPU, and SSD. You racked the node. You lived with the configuration until the next refresh cycle. That model is breaking.
Modern AI and data workloads do not consume resources evenly. A host may need massive NVMe bandwidth during a Spark shuffle, then very little after the job completes. An inference system may need high-speed scratchpad capacity beside a GPU to hold KV-cache state for long-context sessions. An observability cluster may need a larger hot tier during an incident window, then release that capacity hours later. Static servers cannot adapt to that rhythm. They strand resources.
Corespan's answer is disaggregated storage built on native PCIe Gen 5 over optical: dense pools of NVMe SSDs in the PRU 2500, dynamically mapped into hosts as local drives, without permanently trapping those drives inside any one server.
Storage Should Follow the Workload
The core idea behind Corespan's DynamicXcelerator Architecture is simple: compose infrastructure around the job, not the other way around. Instead of buying servers with fixed internal SSD footprints, customers place dense NVMe SSD capacity in a disaggregated design. Hosts connect to an optical interconnect through the FIC 2500 card. Corespan Composer then attaches and detaches PCIe devices in real time, assigning SSDs to the hosts that need them.
To the host, those drives behave like local SSDs. Operationally, they are a shared, composable pool of NVMe resources.
That distinction matters. This is not traditional network storage pretending to be fast. Corespan is moving PCIe Gen 5 natively over an optical interconnect, keeping the architecture inside the PCIe device model. The result is a low-latency, high-bandwidth storage plane where disaggregated SSDs can be consumed by hosts as local PCIe resources.
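To ground that claim, here is a minimal host-side sketch: a short Python script that enumerates NVMe controllers and namespaces through standard Linux sysfs paths. Nothing in it is Corespan-specific; drives composed from a PRU 2500 would simply show up alongside any chassis-internal SSDs.

```python
# List the NVMe controllers and namespaces the host kernel currently sees.
# Composed drives appear here exactly like internal SSDs; these are
# ordinary Linux sysfs paths, not Corespan-specific tooling.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    serial = (ctrl / "serial").read_text().strip()
    print(f"{ctrl.name}: {model} (SN {serial})")
    # Namespaces (nvme0n1, ...) are block devices under the controller.
    for ns in sorted(ctrl.glob(f"{ctrl.name}n*")):
        sectors = int((ns / "size").read_text())  # size in 512-byte sectors
        print(f"  {ns.name}: {sectors * 512 / 1e12:.2f} TB")
```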
In practical terms, a host can receive a few drives for light ingest, 8 to 16 drives for an analytics job, or dozens of drives for a major hot-tier, reindexing, shuffle, or cache-heavy workload. When the job ends, those drives return to the pool. No screwdriver. No forklift server refresh. No stranded SSDs sitting idle in the wrong node.
Why Native PCIe Gen 5 Over Optical Matters
Disaggregation only works if the interconnect is good enough.
If storage is moved out of the server, but forced through a conventional network storage path, the value proposition collapses for performance-sensitive workloads. AI inference scratchpads, hot observability tiers, Spark shuffle, GPU data pipelines, and warehouse cache layers all care about latency, bandwidth, and local-drive behavior.
That is why Corespan's use of native PCIe Gen 5 over optical is so important. PCIe is already the language of GPUs, NVMe drives, accelerators, and high-performance host I/O. By extending PCIe Gen 5 over an optical fabric, Corespan lets infrastructure teams separate physical device placement from logical device ownership. The NVMe drive does not need to live inside the host chassis to behave like it belongs to the host.
This changes the design center of the data center. The server no longer has to be the fixed container for every resource. It becomes a compute endpoint connected to a fabric of composable PCIe devices. That is the architectural unlock.
Dense SSD Pools Beat Fixed Server SSDs
The PRU 2500 is the physical building block that makes this useful at scale. A dense PRU 2500 SSD pool can hold large numbers of high-capacity drives and expose them through the photonic interconnect. In one reference configuration, two PRU 2500s populated with 40 x 30TB drives each provide 80 drives and 2.4PB of raw storage capacity. Instead of spreading those drives across many servers and hoping every node has the right amount of SSD, operators centralize the high-performance media and allocate it dynamically.

Three-layer DynamicXcelerator architecture: FIC 2500–equipped hosts, a photonic PCIe Gen 5 interconnect, and a composable PRU 2500 storage pool, dynamically mapped by Corespan Composer.
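The capacity arithmetic behind that reference configuration is simple enough to check in a few lines:

```python
# Back-of-the-envelope raw capacity for the reference configuration.
DRIVES_PER_PRU = 40
DRIVE_TB = 30

one_pru_pb = DRIVES_PER_PRU * DRIVE_TB / 1000      # 1.2 PB per PRU 2500
two_pru_pb = 2 * DRIVES_PER_PRU * DRIVE_TB / 1000  # 2.4 PB across 80 drives
print(f"1x PRU 2500: {one_pru_pb} PB raw")
print(f"2x PRU 2500: {two_pru_pb} PB raw ({2 * DRIVES_PER_PRU} drives)")
```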
This directly attacks one of the quiet killers of infrastructure ROI: stranded storage.
Every infrastructure team has seen it. One server has idle SSD capacity but no workload that needs it. Another server is bottlenecked on local disk. A Spark job spills heavily on one set of workers while another cluster has unused NVMe. A search cluster needs more hot-tier capacity for a week, but the drives are trapped in the wrong nodes. Corespan breaks that pattern. Drives become assignable inventory. Hosts get what they need when they need it.
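A toy model makes the inventory framing concrete. The sketch below is purely illustrative of the allocate-and-return semantics; the class and method names are invented for this post and are not the Corespan Composer API.

```python
# Conceptual model of a composable drive pool. Illustrative only: this is
# NOT the Corespan Composer API, just the allocate-and-return semantics.

class DrivePool:
    def __init__(self, drive_ids):
        self.free = set(drive_ids)
        self.attached = {}  # drive_id -> host

    def attach(self, host, count):
        """Map `count` free drives to `host`; they appear as local NVMe."""
        if count > len(self.free):
            raise RuntimeError("pool exhausted")
        grant = [self.free.pop() for _ in range(count)]
        for d in grant:
            self.attached[d] = host
        return grant

    def detach(self, drive_ids):
        """Return drives to the pool when the job completes."""
        for d in drive_ids:
            self.attached.pop(d, None)
            self.free.add(d)

pool = DrivePool(range(40))                 # one PRU 2500 with 40 drives
grant = pool.attach("spark-worker-07", 16)  # shuffle-heavy job
pool.detach(grant)                          # drives go back; no screwdriver
```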
Optical Circuit Switching Expands the Fabric
Dense SSD pooling becomes even more powerful when paired with optical circuit switching.
As fabrics grow, the radix and reach become limiting factors. The more hosts, PRUs, accelerators, and SSD pools you want to connect, the more important it becomes to scale the interconnect cleanly. Optical circuit switches increase the effective radix of the environment, creating a larger switching domain that can map disaggregated drives into more hosts across a broader fabric.
That means Corespan is not limited to a small, static set of point-to-point connections. With optical circuit switching, the architecture can create dynamic physical paths across a larger pool of resources and hosts. NVMe drives in a dense PRU 2500 can be connected to different hosts as workload demand changes, while preserving the local-drive model that performance-sensitive software expects.
This is where the architecture gets especially interesting. Corespan is not just pooling drives. It is creating a composable PCIe topology where optical circuit switching expands how many hosts and resources can participate. In other words: higher radix, larger pools, more flexible mappings, better utilization.
The GPU Scratchpad Use Case
For AI inference, the storage story is no longer just about datasets. It is about memory hierarchy.
Large language model inference creates KV-cache state during the prefill phase and reuses it during generation. As context windows grow and multi-turn sessions become more common, KV cache can put enormous pressure on scarce GPU memory. Keeping everything in HBM is expensive. Recomputing cache hurts latency and throughput. Adding more GPUs is often the bluntest and most expensive answer.
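To make that pressure concrete, a standard back-of-the-envelope estimate for decoder KV-cache size is sketched below. The model dimensions are illustrative (roughly a 70B-class model with grouped-query attention), not tied to any particular deployment.

```python
# Rough KV-cache footprint per sequence: 2 (K and V) * layers * kv_heads
# * head_dim * context length * bytes per element. The model numbers
# below are illustrative assumptions, not any specific deployment.

def kv_cache_gb(layers, kv_heads, head_dim, context_len, dtype_bytes=2):
    return 2 * layers * kv_heads * head_dim * context_len * dtype_bytes / 1e9

# A 70B-class model (80 layers, 8 KV heads via GQA, head_dim 128) at a
# 128K-token context:
per_session = kv_cache_gb(layers=80, kv_heads=8, head_dim=128,
                          context_len=128_000)
print(f"~{per_session:.0f} GB of KV cache per session")  # ~42 GB
# A few dozen concurrent long-context sessions exceed HBM on their own,
# which is the case for spilling cold KV blocks to an NVMe scratchpad.
```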
Corespan offers another path: pair GPUs with dynamically composed NVMe scratchpad capacity.
A GPU can be assigned four or more NVMe SSDs from the PRU 2500 pool. Those drives can hold transient inference artifacts such as KV-cache blocks, prior-turn state, reusable prompt context, and other high-speed scratchpad data. The GPU stays focused on active compute while the NVMe tier expands the effective working set around it.
The key is that this scratchpad is not trapped in a specific GPU server. If one inference profile needs four drives per GPU and another needs eight, the allocation can change. If demand shifts from one model to another, the storage follows the workload. That is exactly the kind of flexibility AI infrastructure needs.
Hot Data, Shuffle, Spill, and Cache
The same disaggregated NVMe capability applies beyond inference. Search platforms, observability systems, security analytics, data warehouses, and Spark clusters all have hot working sets. They ingest, index, search, aggregate, shuffle, spill, and cache recent data far more aggressively than older data.
These workloads want fast local SSD semantics, but their demand is uneven. Corespan lets operators give hot-tier nodes more storage capacity during heavy ingest. It lets Spark workers receive more local scratch during shuffle-heavy jobs. It lets warehouse-style environments expand local cache for interactive query windows. It lets Splunk-like and Elasticsearch-like platforms scale hot storage independently of compute.
That last point is critical: independent scaling.
With fixed servers, buying more storage often means buying more CPUs, memory, NICs, chassis, power supplies, and GPUs you may not need. With Corespan, raw NVMe SSD capacity can scale in the PRU tier while hosts scale on their own lifecycle.
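As a concrete example of the Spark case above, pointing shuffle and spill scratch at composed drives is an ordinary configuration change. spark.local.dir is standard Spark configuration; the mount paths below are hypothetical names for drives mapped in from the PRU pool.

```python
# Point Spark's shuffle/spill scratch at composed NVMe drives.
# spark.local.dir is a standard Spark setting; the /mnt/composed/*
# mount points are hypothetical names for pool-mapped drives.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("shuffle-heavy-etl")
    # Comma-separated list: Spark stripes scratch I/O across all of them.
    .config("spark.local.dir", "/mnt/composed/nvme0,/mnt/composed/nvme1")
    .getOrCreate()
)
# When the job finishes, unmount and return the drives to the pool.
```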
Local Drives, Cloud-Like Flexibility
The cloud taught infrastructure teams to expect elasticity, but many high-performance workloads still require local device behavior. Corespan bridges that gap.
It gives customers the operational flexibility of a shared resource pool while preserving the local semantics that GPUs, NVMe-aware applications, hot data tiers, and analytics engines expect. Dense drives live in PRU 2500 shelves. Native PCIe Gen 5 travels over optical. Optical circuit switches expand the radix of the interconnect. Corespan Composer maps the right devices into the right hosts at the right time.
The result is a new infrastructure pattern: disaggregated storage that does not feel remote.
That is the future Corespan is building. No more guessing the perfect server configuration years in advance. No more stranded SSDs. No more overbuilding every node for peak storage demand. No more treating the server chassis as the boundary of performance.
With Corespan, storage capacity becomes fluid, local, and composable, and for AI infrastructure, that changes everything.