A Tectonic Shift Is Underway in the Data Center
The data center is in the middle of an architectural upheaval not seen since the telecom build-outs of the late 1990s. Optical circuit switching, once a niche curiosity reserved for telecom fiber networks, is moving rapidly into the heart of AI infrastructure. A convergence of massive funding rounds, strategic investments from the largest photonics incumbents, and an insatiable demand for GPU-to-GPU bandwidth is reshaping how we think about connectivity inside the rack and across the data center floor. Over the next 24 to 48 months, the data center will look fundamentally different, and the companies positioning themselves now will define the next decade of computing.
A Flood of Capital into Optical Switching and Photonic Startups
The signal from the venture capital community is unmistakable, and it is not limited to a handful of investments. Capital is flooding into every layer of the optical stack, from switching to interconnects to light engines, at a pace and scale without precedent. In the optical circuit switching segment alone, three startups have attracted significant capital in rapid succession: iPronics, Salience Labs, and nEye Systems. Together those three companies have raised more than $215 million, and counting the photonic component and photonic I/O markets, there is easily another $2 billion of invested capital. The pace has accelerated over the past five quarters, with more than $1.5 billion in venture funding committed. This is not speculative interest. This is the market declaring that electrical interconnects have reached their scaling limits, though as with all technology transitions, not all of the pieces are in place yet.
The Incumbents are Making Billion-Dollar Commitments
While startups chase the bleeding edge, the established photonics giants are making bets that dwarf anything the venture market can muster. In March 2026, NVIDIA announced a $2 billion strategic investment in Coherent Corp., accompanied by a multibillion-dollar purchase commitment for advanced laser and optical networking products. Coherent doubled its optical circuit switching serviceable addressable market estimate from $2 billion to $4 billion at OFC 2026, citing broader use cases across scale-up, scale-out, and scale-across applications. Coherent is now shipping 64×64 and 320×320 OCS systems into production and developing a 512×512 system using liquid crystal technology.
Lumentum received identical treatment. NVIDIA made a $2 billion strategic investment, securing a long-term supply of advanced laser chips and OCS platforms. At OFC 2026, Lumentum and Marvell demonstrated the R300 optical circuit switch interoperating with Marvell's Aquila 1.6T coherent-lite DSPs and Ara 1.6T PAM4 optical DSPs—a live, rack-level system showcasing software-controlled optical connectivity with predictable low-latency paths. JPMorgan's April 2026 upgrade of Lumentum to a $950 price target, citing the "Optical Supercycle," was not analyst hyperbole. It was recognition that the copper-to-optics transition inside the data center has reached an inflection point. Lumentum's OCS backlog has reportedly surpassed $400 million.
Between the startup ecosystem and the incumbent investments, the optical switching and interconnect sector has attracted well over $10 billion in committed capital in the past eighteen months. The infrastructure is being funded. What remains is the harder problem: building the software systems that make it all work.
The Software Control Plane: The Critical Missing Layer
Building optical circuit switches is necessary, but insufficient. Light does not carry IP addresses. Photons do not understand routing tables. To make a reconfigurable optical fabric useful at data center scale, someone must build a software control plane that sits above the optical layer and orchestrates it in real time.
This is a digital problem layered on top of an analog substrate. The control plane must understand the topology of the optical mesh, manage wavelength assignments, handle path computation, and coordinate reconfiguration events, all while maintaining the latency and determinism that GPU workloads demand. It must also provide the abstraction layer that lets higher-level orchestration systems like Kubernetes, Slurm, and custom AI schedulers treat the optical fabric as a programmable resource rather than a static physical constraint.
Without this software layer, optical circuit switches are expensive patch panels. With it, they become the foundation of a composable, dynamically reconfigurable data center. The software control plane is what transforms raw optical hardware into an intelligent fabric, one that can allocate bandwidth on demand, rebalance topology in response to workload changes, and maximize utilization across thousands of endpoints. The companies building optical switches understand the physics. The question is who builds the software system that leverages those optical technologies and turns photonic capability into operational infrastructure.
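To make the control plane's job concrete, here is a minimal sketch, in Python, of its two core primitives: computing a path across the optical mesh and assigning a wavelength along it. The class names, the BFS routing, and the first-fit wavelength policy are illustrative assumptions, not a description of any shipping product; a production control plane would weight paths by latency and optical loss budget and would coordinate live reconfiguration events.

```python
# Minimal sketch of an OCS control plane's two core primitives: path
# computation across the optical mesh and wavelength assignment along the
# chosen path. All names and policies here are illustrative assumptions.
from __future__ import annotations
from collections import deque

class OpticalFabric:
    def __init__(self, num_wavelengths: int = 16):
        self.links: dict[str, set[str]] = {}                # switch -> neighbors
        self.in_use: dict[tuple[str, str], set[int]] = {}   # link -> busy lambdas
        self.num_wavelengths = num_wavelengths

    def add_link(self, a: str, b: str) -> None:
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def _shortest_path(self, src: str, dst: str) -> list[str] | None:
        # Plain BFS; a real control plane would weight hops by latency
        # and optical loss budget rather than hop count.
        prev: dict[str, str] = {}
        queue, seen = deque([src]), {src}
        while queue:
            node = queue.popleft()
            if node == dst:
                path = [dst]
                while path[-1] != src:
                    path.append(prev[path[-1]])
                return path[::-1]
            for nxt in self.links.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    prev[nxt] = node
                    queue.append(nxt)
        return None

    def provision(self, src: str, dst: str) -> tuple[list[str], int] | None:
        """Compute a path, then first-fit a wavelength free on every hop."""
        path = self._shortest_path(src, dst)
        if path is None:
            return None
        hops = [tuple(sorted(h)) for h in zip(path, path[1:])]
        for wl in range(self.num_wavelengths):
            if all(wl not in self.in_use.setdefault(h, set()) for h in hops):
                for h in hops:
                    self.in_use[h].add(wl)
                return path, wl   # handed up to Kubernetes/Slurm schedulers
        return None               # blocked: trigger a reconfiguration event

fabric = OpticalFabric()
for a, b in [("ocs1", "ocs2"), ("ocs2", "ocs3"), ("ocs1", "ocs3")]:
    fabric.add_link(a, b)
print(fabric.provision("ocs1", "ocs3"))   # -> (['ocs1', 'ocs3'], 0)
```

Even this toy version shows why the abstraction matters: a scheduler asks for connectivity between two endpoints and receives a path and a wavelength, never touching mirrors or lasers directly.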
Coupling the Analog Server Bus to the Optical Domain
Now comes the harder part. Inside every server, the CPU, GPU, memory, and storage communicate over PCIe, a protocol that is electrical, analog, and deeply entrenched. PCIe Gen 5 and Gen 6 are not going away. NVLink is not going away. CXL is not going away. The analog server bus domain is the foundation upon which every AI workload runs, and it will remain so for the foreseeable future.
The critical architectural challenge of the next two years is coupling this analog PCIe domain to the optical switching domain. How does one take a PCIe transaction, generated by a GPU requesting a memory page or a storage read, and extend it transparently across an optical fabric to a resource that may sit in another tray, another rack, or another row? How does one accomplish this without breaking the latency budget, without requiring application changes, and without abandoning the vast ecosystem of PCIe-native devices?
This is not a transceiver problem. It is a systems architecture problem that requires a purpose-built software system spanning both domains: the deterministic, low-latency world of PCIe and the reconfigurable, wavelength-routed world of optical circuit switching. That software must speak both languages: the analog bus protocol of the server and the photonic signaling of the optical fabric. It must manage the translation seamlessly, at wire speed, and at scale.
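As a thought experiment, the translation step can be modeled as a gateway that maps host address windows onto pre-provisioned optical circuits and enforces a round-trip latency budget on every forwarded transaction. Everything in the sketch below, names, fields, and numbers alike, is a hypothetical assumption; a real gateway performs this lookup in hardware at wire speed, not in software.

```python
# Toy model of the PCIe-to-optical translation step: a gateway maps host
# address windows onto pre-provisioned optical circuits and enforces a
# round-trip latency budget. All names, fields, and numbers are assumptions;
# a real gateway does this in hardware at wire speed, not in Python.
from dataclasses import dataclass

@dataclass
class PcieReadRequest:
    requester_id: str   # e.g. the GPU issuing the read
    address: int        # host physical address of the remote resource
    length: int         # bytes requested

@dataclass
class OpticalCircuit:
    wavelength: int
    path: tuple                   # OCS hops between the two endpoints
    one_way_latency_ns: float     # fiber flight time plus switch traversal

class PcieOpticalGateway:
    """Maps address windows for disaggregated devices onto optical circuits."""
    def __init__(self, latency_budget_ns: float = 2_000.0):
        self.windows = []         # list of (range, OpticalCircuit) pairs
        self.latency_budget_ns = latency_budget_ns

    def map_window(self, window: range, circuit: OpticalCircuit) -> None:
        self.windows.append((window, circuit))

    def forward(self, req: PcieReadRequest) -> OpticalCircuit:
        for window, circuit in self.windows:
            if req.address in window:
                # The read completion must return within the budget, or the
                # scheduler should have composed the resource closer by.
                rtt = 2 * circuit.one_way_latency_ns
                if rtt > self.latency_budget_ns:
                    raise RuntimeError(f"circuit RTT {rtt} ns exceeds budget")
                return circuit    # in hardware: serialize the TLP onto lambda
        raise LookupError(f"no circuit maps address {req.address:#x}")

gw = PcieOpticalGateway()
gw.map_window(range(0x4000_0000, 0x8000_0000),
              OpticalCircuit(wavelength=3, path=("ocs1", "ocs4"),
                             one_way_latency_ns=450.0))
print(gw.forward(PcieReadRequest("gpu0", 0x4000_1000, 64)))
```

The essential point the model captures is that the application never changes: the GPU issues an ordinary read, and the mapping to a photonic path happens beneath it.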
From Direct Fiber to WDM: A Familiar Playbook at Unprecedented Scale
There is a historical parallel that illuminates where this is heading. In 1996, Sprint deployed Ciena's MultiWave 1600 system across its nationwide fiber network. The technology was dense wavelength division multiplexing (WDM), which sent 16 discrete optical channels over a single fiber pair, multiplying capacity sixteenfold without laying a single new strand of glass. As Sprint's CTO Marty Kaplan said at the time, "The Ciena solution protects our existing investment and allows us to incrementally expand the transmission capacity of our network as we deliver new broadband services to our customers." It was transformational: Sprint protected its existing fiber investment and scaled incrementally as demand grew.
The same evolution is now beginning inside the data center. Today, most intra-data-center optical connections are direct fiber: one fiber, one signal, one connection. That works at modest scale, but it does not work when the requirement is to interconnect tens of thousands of GPUs across hundreds of racks through hundreds of optical circuit switches. The fiber count becomes unmanageable, and the number of optical switch ports required grows with it. The physical infrastructure becomes a constraint.
WDM solves this the same way it solved Sprint's problem thirty years ago: by multiplexing many signals onto a single fiber using different wavelengths of light. Single-mode fiber within the data center will evolve from direct connections to fiber carrying multiple WDM channels, dramatically increasing the logical connectivity achievable over the existing physical plant. But the scale is different. Sprint was upgrading a few hundred route-miles of long-haul fiber. Data centers will need to deploy WDM across thousands of short-reach connections, orchestrated by hundreds of optical circuit switches and managed by a unified software control plane. The physics are the same. The engineering challenge is orders of magnitude greater.
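The back-of-the-envelope arithmetic makes the point. Assuming, purely for illustration, 32,768 GPUs with four optical uplinks each, and assuming each fiber pair consumes one OCS port, a 16-channel WDM system cuts both fiber pairs and switch ports sixteenfold:

```python
# Back-of-the-envelope fiber math for an illustrative fabric. Every number
# is a hypothetical assumption chosen to show the scaling, not a real design,
# and each fiber pair is assumed to consume one OCS port.
gpus = 32_768          # endpoints to interconnect
links_per_gpu = 4      # optical uplinks per endpoint
wdm_channels = 16      # wavelengths per fiber, as in Ciena's 16-channel era

direct_fibers = gpus * links_per_gpu          # one signal per fiber
wdm_fibers = direct_fibers // wdm_channels    # sixteen signals per fiber

print(f"direct fiber: {direct_fibers:,} fiber pairs and OCS ports")
print(f"16-ch WDM:    {wdm_fibers:,} fiber pairs and OCS ports")
# direct fiber: 131,072 fiber pairs and OCS ports
# 16-ch WDM:    8,192 fiber pairs and OCS ports
```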
The Evolution Corespan Systems Is Building For
This is the landscape that Corespan Systems was built to address. The optical switching layer is arriving; the funding rounds and strategic investments prove that beyond any doubt. But the analog PCIe domain of the CPU, GPU, and storage is not going away anytime soon. The data center does not need to choose between these two worlds. It needs a system architecture that bridges them.
Corespan is building the composable infrastructure platform that couples the analog server bus domain to the optical switching domain through a unified software control plane. It is not an optical switch company. It is not a transceiver company. It is the systems-level software and hardware architecture that makes disaggregated, dynamically composable AI infrastructure possible, extending PCIe natively across optical fabrics so that GPUs, memory, and storage can be allocated, shared, and recomposed on demand. If the optical switch companies are building the roads and the component startups the pavement, Corespan is building the traffic management system that makes the entire network function as a coherent whole.
Corespan exploits the value of optical circuit switches by treating them not as standalone products but as programmable building blocks within a larger composable fabric. Where OCS vendors deliver raw photonic switching capability, Corespan provides the hardware that couples the analog server bus to the optical domain and the software control that steers traffic across dynamically reconfigurable optical paths to match workload demands, extending PCIe transactions transparently across the optical fabric and composing disaggregated GPUs, memory, and storage into unified virtual machines on demand.
This transforms optical circuit switches from static, manually provisioned cross-connects into the foundation of an intelligent, software-defined AI fabric where every photonic path is allocated, monitored, and rebalanced by Corespan's Composer control plane, maximizing GPU utilization, eliminating stranded resources, and delivering the lowest cost-per-token at data center scale.
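Corespan has not published a Composer API, so the following is a purely hypothetical sketch of what composing a disaggregated machine could look like from a scheduler's point of view; every name, field, and number is an illustrative assumption, not the product's interface.

```python
# Purely hypothetical sketch of composing a disaggregated machine from a
# scheduler's point of view. Corespan's actual Composer API is not public;
# every name, field, and number below is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class ComposeRequest:
    gpus: int                    # GPUs to attach, wherever they sit
    memory_gb: int               # far memory to map into the host
    storage_tb: int              # NVMe capacity to attach
    max_rtt_ns: float = 2_000.0  # latency bound the fabric must honor

@dataclass
class ComposedMachine:
    node_id: str
    circuits: list = field(default_factory=list)   # (path, wavelength) pairs

def compose(req: ComposeRequest, pool: dict) -> ComposedMachine:
    """Reserve resources from a shared pool and record the optical circuits
    that would stitch them to the host. Placement logic is elided."""
    demands = {"gpu": req.gpus, "mem_gb": req.memory_gb, "nvme_tb": req.storage_tb}
    for kind, needed in demands.items():       # check before deducting, so a
        if pool.get(kind, 0) < needed:         # failed compose leaves the pool
            raise RuntimeError(f"pool exhausted: {kind}")   # untouched
    for kind, needed in demands.items():
        pool[kind] -= needed
    machine = ComposedMachine(node_id="vm-000")
    for kind in demands:         # one circuit per device class, for brevity
        machine.circuits.append((f"host->ocs->{kind}-shelf", 0))
    return machine

pool = {"gpu": 64, "mem_gb": 8_192, "nvme_tb": 256}
vm = compose(ComposeRequest(gpus=8, memory_gb=1_024, storage_tb=16), pool)
print(vm.node_id, vm.circuits)
```

Whatever the real interface looks like, the economics follow from the shape of the operation: resources return to the pool when a workload finishes, which is what eliminates stranding.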
The next 24 to 48 months will see optical circuit switching move from conference demos to production deployments. WDM will begin its migration from the long-haul network into the data center. And the companies that solve the software systems integration problem, bridging the analog PCIe domain to the optical fabric with intelligent software, will define the architecture of AI infrastructure for the next decade.
The light is arriving. The question is who will build the bridges across the optical interconnect.