
The Race Towards Space Data Centers

Violeta Mihai 11 Feb 2026 | 4 min. read

Few things spark as many debates in the scientific and startup worlds as the term “space data center”. On one hand, some claim that space data centers are impossible, inefficient, or simply pointless, while on the other, some are backing them with millions. Companies like Google, Axiom Space, SpaceX, and Starcloud are lining up to be the first to make space data centers profitable.

Space data centers have been “just around the corner” for years, so perhaps asking ourselves a few questions will help us feel more like clear-eyed observers when the moment finally arrives.

Let’s start with reasonable questions.

Why on Earth would we put data centers in space?

The first possible use case unlocked by space-based data centers is processing the enormous amounts of data produced in orbit. There are already over 1,000 satellites scanning Earth across different spectrums, operated by more than 200 entities. On top of that, tens of thousands of companies on Earth use this data for applications ranging from national security to detecting specific types of algae near beaches.

The Earth Observation domain is vast and flooded with untapped data. Satellites can capture terabytes of data daily, but only 5–20% of it reaches the ground for analysis, limited by downlink bandwidth and the short windows in which a satellite is in contact with a ground station. Processing more of this data in orbit could unlock new and valuable applications, such as detecting the spread of wildfires before they become a serious threat. Of course, for every beneficial use case, a potentially creepy surveillance application could also emerge.
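
The bottleneck is easy to see with back-of-the-envelope numbers. The sketch below uses illustrative assumptions (10 TB captured per day, eight ground-station passes of about eight minutes each, a 1.2 Gbit/s downlink); real figures vary widely by satellite and ground network.

```python
# Back-of-the-envelope: what fraction of a satellite's daily data
# can actually reach the ground? All figures are illustrative assumptions.

def downlink_fraction(daily_data_tb: float,
                      passes_per_day: int,
                      pass_minutes: float,
                      link_gbps: float) -> float:
    """Fraction of the daily data volume that fits through the downlink."""
    contact_seconds = passes_per_day * pass_minutes * 60
    downlinked_tb = link_gbps * contact_seconds / 8 / 1000  # Gbit -> TB
    return min(1.0, downlinked_tb / daily_data_tb)

# Hypothetical imaging satellite: 10 TB/day captured, 8 passes of
# ~8 minutes each, 1.2 Gbit/s downlink.
frac = downlink_fraction(10, 8, 8, 1.2)
print(f"{frac:.0%} of daily data reaches the ground")  # roughly 6%
```

With these assumptions only about 6% of the captured data ever lands, which is exactly the gap on-orbit processing aims to close.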

For terrestrial applications, space data centers are also seen as a way to lower energy bills. In space, solar panels can produce up to five times more energy than on the ground, and in dawn–dusk sun-synchronous orbits they receive near-continuous sunlight, free from the day–night cycle.

But these perks also come with significant engineering issues that need to be addressed.

The bottlenecks

There are three modes of heat transfer: convection, which moves heat via a fluid (gas or liquid); conduction, which occurs through solids; and radiation, which emits energy as electromagnetic waves. On Earth, data centers are cooled mainly by convection, using air or water.

In space, however, the only way to get rid of excess heat is to radiate it into the void. This requires large radiators, and assembling such structures in orbit is itself a major challenge. This is the approach Starcloud is taking: adapting NVIDIA GPUs for the space environment and planning radiators and solar arrays up to 4 km wide.

One counterintuitive idea is to let chips run hotter: by the Stefan–Boltzmann law, the power radiated scales with the fourth power of the radiator's temperature, so a hotter radiator sheds heat far more efficiently, potentially shrinking the massive structures needed. In short, while space is extremely cold, cooling in space is surprisingly difficult.
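
A quick sizing sketch makes the fourth-power effect concrete. It applies the Stefan–Boltzmann law (P = ε·σ·A·T⁴) to an assumed 100 kW waste-heat load and ignores environmental heat inputs such as sunlight and Earth albedo, so treat the numbers as order-of-magnitude only.

```python
# Radiator sizing via the Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
# Simplified: environmental heat loads (sunlight, albedo) are ignored.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """One-sided radiator area (m^2) needed to reject `power_w` at `temp_k`."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting 100 kW of waste heat at three radiator temperatures:
for t in (300, 350, 400):  # kelvin
    print(f"{t} K -> {radiator_area(100_000, t):.0f} m^2")
```

Raising the radiator temperature from 300 K to 400 K cuts the required area roughly threefold, which is why running chips hotter can pay off in orbit.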

Radiation is another concern. Even in Low Earth Orbit, charged particles can flip bits in memory and trigger other single-event effects that corrupt computation. Every computing unit sent to space, whether FPGA, CPU, or GPU, requires some level of protection, at the hardware level or in software, to keep error rates acceptable.
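
One classic software-level mitigation is triple modular redundancy: run the same computation three times and take the majority result, so a single corrupted run is outvoted. The sketch below is a minimal illustration of the idea, not a flight-qualified design, and the `tmr` helper is hypothetical.

```python
# Triple modular redundancy (TMR) sketch: a single radiation-induced
# fault in one of three runs is outvoted by the other two.

from collections import Counter

def tmr(compute, *args):
    """Run `compute` three times and return the majority result."""
    results = [compute(*args) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: all three results disagree")
    return value
```

Hardware designs apply the same voting idea at the level of logic gates or whole processors; radiation-hardened FPGAs often triplicate critical circuits for exactly this reason.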

Limited power is yet another challenge. While solar panels are more efficient in space, a significant surface area is still needed to provide enough power for energy-hungry chips like GPUs.
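
How significant is "significant"? A rough sizing sketch, assuming the solar constant above the atmosphere (~1361 W/m²) and an illustrative 30% cell efficiency, with conversion losses and degradation ignored:

```python
# Rough solar array sizing for an orbital compute cluster.
# Assumes full, unshadowed sunlight; losses and degradation ignored.

SOLAR_CONSTANT = 1361.0  # W/m^2 above the atmosphere

def array_area(load_kw: float, cell_efficiency: float = 0.3) -> float:
    """Panel area (m^2) needed to supply `load_kw` in full sunlight."""
    return load_kw * 1000 / (SOLAR_CONSTANT * cell_efficiency)

# A hypothetical 1 MW cluster (on the order of a thousand high-end
# GPUs plus networking and cooling overhead):
print(f"{array_area(1000):.0f} m^2 of panels")
```

Even under these optimistic assumptions, a megawatt-class cluster needs panels covering a few thousand square meters, which helps explain the kilometer-scale structures some companies are proposing.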

How it’s done, at the moment

Google wants to build a constellation of processing nodes using their best TPUs, chips optimized for deep learning tasks. They see the main challenges as intersatellite communications, the coordination of large satellite formations, and the radiation tolerance of their proprietary chips, all while balancing deployment costs.

Axiom Space is collaborating with Microchip to bring computing nodes to the ISS and study optical links. SpaceX’s rumored IPO and plans to launch a million satellites suggest that the company, with all the experience accumulated managing the Starlink mega-constellation, could be a formidable competitor.

Projects are also emerging from Eastern Europe, with proprietary hardware from Edge Aerospace and Zaitra. Other approaches have come through research, such as a 2022 collaboration between Intel and NASA that put the neuromorphic Loihi chip aboard a satellite for AI applications in space.

The approaches differ fundamentally: from Starcloud’s monolithic structures to swarms of small computing nodes, and from GPUs to TPUs to FPGAs and other specialized chips. Not only are the building blocks different; so are the underlying theses.

Will there be only one winner of this race, a huge company with deep expertise and capital, or will a startup build a solution from the ground up, disrupting the market in its typical manner? Maybe the question is not who will win the race, but which race is worth running at all.
