AI is driving data center energy consumption to levels most operators didn’t plan for. Power costs are rising, grid access is becoming increasingly constrained, and cooling budgets are growing in lockstep with model size. At the same time, land availability, water use and emissions targets add even more friction.
The commercial case for space-based data centers
The dawn of the reusable rocket era has made the notion of orbital compute plausible. Launch costs are falling and innovators are eyeing the unique advantages of space. Satellites in sun-synchronous low Earth orbit (LEO) can harvest up to eight times more solar energy than Earth-based systems, offering near-continuous power and reducing the need for massive cooling infrastructure. Jeff Bezos, founder of Blue Origin, has even suggested that this new way of hosting compute resources will soon “beat the cost of terrestrial data centers.” Industry analysts back this up, predicting that launch costs could fall below $200/kg by the mid-2030s.
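As a rough sanity check on that “eight times” figure (not a calculation from this article), the sketch below compares a dawn-dusk sun-synchronous orbit in near-continuous sunlight with a typical fixed terrestrial panel. The irradiance values are standard physical figures; the capacity factors are assumptions chosen for illustration.

```python
# Back-of-envelope check of the "up to eight times" solar claim.
# Irradiance values are standard physical figures; the capacity
# factors below are assumptions, not data from the article.

SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere
PEAK_GROUND_IRRADIANCE = 1000  # W/m^2 at the surface, clear sky, noon

# Assumed fraction of peak output delivered over a year:
CF_GROUND = 0.20  # night, weather and sun angle (typically ~0.15-0.25)
CF_LEO = 0.99     # dawn-dusk sun-synchronous orbit, near-continuous sun

avg_ground = PEAK_GROUND_IRRADIANCE * CF_GROUND  # ~200 W/m^2
avg_leo = SOLAR_CONSTANT * CF_LEO                # ~1348 W/m^2

print(f"LEO vs ground: {avg_leo / avg_ground:.1f}x")
# ~6.7x with these assumptions; with a lower ground capacity
# factor (~0.17), the ratio approaches the often-quoted 8x.
```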
Orbital data center research
Pioneering R&D is already in progress. Google’s Project Suncatcher is on the verge of testing satellites fitted with its Tensor Processing Units (TPUs), which would relay data to Earth via free-space optical links. Meanwhile, Starcloud plans to launch its first GPU cluster, Starcloud-2, in 2026, positioning itself as an early entrant into an emerging market.
But terrestrial facilities won’t be replaced anytime soon. Radiation-hardened hardware, high-bandwidth optical networking, free-space optical reliability, thermal regulation and long-term orbital station-keeping all remain formidable challenges. Even once these obstacles are overcome, space-based compute will still be reserved for energy-intensive or latency-tolerant workloads, with Earth-bound networks handling the vast majority of traffic.
Orbital data centers may be tomorrow’s solution, but today’s challenge is terrestrial efficiency.

Distributed models reshape terrestrial networks
The more immediate concern for hyperscalers is the evolution happening on the ground. To bypass grid bottlenecks, operators are increasingly adopting a distributed strategy – splitting massive AI clusters across multiple data centers to spread power demand and ease local infrastructure pressure.
In this model, data center interconnect (DCI) stops being a passive transport link and becomes a true extension of the compute fabric. Horizontal scaling only works if those facilities operate as one cohesive system, which places enormous pressure on the efficiency of the optical transport layer itself. As clusters become more distributed, power per gigabit becomes the new critical path for AI expansion.
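To see why power per gigabit matters at cluster scale, here is a minimal sketch. The bandwidth and per-gigabit power figures below are hypothetical assumptions chosen only to show how the metric scales; they are not measurements or product specifications.

```python
# Minimal sketch: how per-gigabit optics power scales with DCI bandwidth.
# All figures are hypothetical, for illustration only.

def transport_power_kw(capacity_tbps: float, watts_per_gbps: float) -> float:
    """Total optical transport power (kW) for a given interconnect capacity."""
    return capacity_tbps * 1000 * watts_per_gbps / 1000

# Suppose a distributed AI cluster needs 800 Tbps of inter-site bandwidth.
CAPACITY_TBPS = 800

for label, w_per_gbps in [("older optics, 0.10 W/Gb/s", 0.10),
                          ("newer optics, 0.05 W/Gb/s", 0.05)]:
    print(f"{label}: {transport_power_kw(CAPACITY_TBPS, w_per_gbps):.0f} kW")

# 0.10 W/Gb/s -> 80 kW; 0.05 W/Gb/s -> 40 kW.
# Halving watts per bit halves transport power at any scale.
```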
An optical transport portfolio designed for the AI era
Optical transceivers now account for a growing share of total data center energy use. That’s why our DCI portfolio, which includes our FSP 3000 open line system, is built to enable power-efficient optical transport while maximizing throughput. By cutting watts per bit, simplifying scaling and enabling more sustainable high-capacity interconnects, our solutions ensure operators can meet AI demand while reducing overall energy consumption.
Scaling AI efficiently on Earth
Orbital data centers may one day help carry AI workloads, and it’s exciting to watch the early R&D projects begin to take shape. But the challenge here on Earth is far more urgent: lowering power consumption while increasing throughput across terrestrial networks. Those who stay ahead will be the ones who cut energy use across every part of their infrastructure – compute, cooling and transport. Because in a data-driven world defined by AI, energy efficiency isn’t just an advantage – it’s the only viable way to scale.
To learn how our portfolio of open optical transport solutions helps operators scale AI workloads with the speed, capacity and efficiency today’s networks demand, visit adtran.com.