Related (posted just 2 hours before this article): https://news.ycombinator.com/item?id=46086833 "Blimps lifting quantum data centers to the stratosphere? (newatlas.com)" "... blimps, to lift quantum computers to the stratosphere. There, at an altitude of about 20 km (12.4 miles), temperatures are in the -50 °C range (about -58 °F) and would be cold enough to allow the qubits to function correctly."
I am skeptical as well BUT on the cooling question, which is one of the main concerns we all seem to have, the article is doing a bit of an apples-to-oranges comparison between the ISS and a cluster of small satellites.
It cites the ISS's centralized 16kW cooling system which is for a big space station that needs to collect and shunt heat over a relatively large area. The Suncatcher prototype is puny in comparison: just 4 TPUs and a total power budget of ballpark 2kW.
Suncatcher imagines a large cluster of small satellites separated by optical links, not little datacenter space stations in the sky. They would not be pulling heat from multiple systems tens of meters away like on the ISS, which bodes well for simpler passive cooling systems. And while the combined panel surface area of a large satellite cluster would be substantial, the footprint of any individual satellite, the more important metric, would likely be reasonable.
Personally I am more concerned with the climate impact of launches and the relatively short envisioned mission life of 5 years. If the whole point is better sustainability, you can't just look at the dollar cost of launches that don't internalize the environmental externalities of stuff like polluting the stratosphere.
To say the quiet part out loud, I don't think any serious companies have any intention to build a data center in space. There is no benefit in actually trying this. There is however, benefit in saying you'll do it to advance a narrative and distract from the problems terrestrial data centers are facing to an audience that mostly doesn't understand how heat transfer in a vacuum works.
Many of the dumb ideas being hyped in this AI bubble make sense viewed through this lens.
Data centres stirring up opposition? Sell a sci-fi vision that you will move them to Space! And reassure your over-extended investors that the data centre buildout rush you’re committing to isn’t going to get bogged down in protests and lawsuits.
The people hyping this stuff are not stupid, just their real goal (make as much money as possible as quickly as possible) has only a vague relationship to what they claim to be doing.
Related: "A City on Mars" (2024) [1], a useful book on why self-sustaining settlements on Luna, Mars, or in Earth orbit are pretty much hopeless. Remote bases that take a lot of supply, maybe, with great difficulty. The environment is just too hostile and lacks the essential resources for self-sustaining settlements. The authors go into how Antarctic bases work and how Biosphere II didn't.
The worst real estate on Earth is better than the best real estate on Mars or Luna.
[1] https://www.amazon.com/City-Mars-settle-thought-through/dp/1...
> The worst real estate on Earth is better than the best real estate on Mars or Luna.
Very true.
Here's a recent HN link to a chilling documentary about one of the most isolated settlements in the world: https://news.ycombinator.com/item?id=46040459
Data centers in space are about circumventing nation states, masked as an ambition to generate more power.
Follow the rationale:
1. Nation states ultimately control three key infrastructure pieces required to run data centers (a) land (protected by sovereign armed forces) (b) internet / internet infra (c) electricity. If crypto ever became a legitimate threat, nation states could simply seize any one of or all these three and basically negate any use of crypto.
2. So, if you have data centers that no longer rely on power derived from a nation state, land controlled by a nation state, or connectivity provided by a nation state's cabling infra, then you can always access your currency and assets.
My favorite F-15 kill:
Putting data centers on ships in international waters would be just as effective at evading government control (i.e. not very) while being orders of magnitude easier and cheaper to build and operate.
Recently the USA blew up some boats in international waters and came back to finish off the survivors, despite thin evidence and no due process, while maintaining that it was legal. If those data centers on ships are ever declared a 'threat to national security' then they might get the same treatment.
I think GP's point is that an advanced nation-state could just as easily shoot down an orbiting data center as an oceanic data center and that "international space" offers an equally flimsy defense as "international waters" but a much larger price.
This would be equally true in space.
They've always been able to do this.
Microsoft was talking about submarine data centers powered by tidal forces in the early 2000s.
There have been talks of data centers on Sealand-like nation states.
Geothermal ...
Exotic data center builds will always be hyped. Always be within the realm of feasibility when cost is no object, but probably outside of practicality or need.
Next it'll be fusion-powered data centers.
Commonwealth Fusion Systems called dibs on that one last year by saying they’re gonna have a Dominion (Virginia) commercial site up and running in the early 2030s.
https://cfs.energy/news-and-media/commonwealth-fusion-system...
Is there a way I can take bets on this not happening? Because I’d sure like to.
Except the people that run and manage that satellite will be on earth, under some nation state's rules...
corporations will use their knowledge in tax dodging to avoid that too.
If they're already well versed in dodging fiscal rules, why do they need a space computer?
Physical location is difficult to dodge unfortunately.
Fiscal rules are sort of man made.
The Outer Space Treaty is very, very clear: anything launched into space is the responsibility of the country that launched it. Even if a private company pays for it and operates it, it's still the responsibility of the launching nation. Even if you launch from international waters, your operating company is still registered to a specific country, and the company is made up of citizens of one or more countries, and it is those countries which are responsible for the satellites. Those countries, in fact, have the responsibility to make sure that their citizens follow their laws and regulations. Unless you and your entire team are self-sustaining on that datacenter in outer space (maybe possible a century from now? Maybe not possible ever), you will be hunted down by the proper authorities and held to account for your actions. There is no magic "space is beyond the law" rule; work done on a datacenter in space is just as illegal - and you are just as vulnerable to being arrested for it - as work done on a datacenter on the ground.
This is the only "advantage" I can see with space-based datacenters. Crypto will remain a joke but putting devices beyond the reach of ground-based jurisdictions is a libertarian dream. It will probably fail - you still need plenty of ground infrastructure.
I'm sorry, but this is stupid. It's the same dumb thinking behind Sealand: "we're outside state borders! nobody can touch us!", which was only true as long as nobody cared what they were doing. Once Sealand actually started angering people, the Royal Navy showed up and that was that. "Datacenters in space" wouldn't fare any better: multiple nations have successfully tested anti-satellite weapons.
> Once Sealand actually started angering people, the Royal Navy showed up and that was that.
What did the Royal Navy do? There is no mention of the UK using force against Sealand in either the Wikipedia page or this BBC article about Sealand. (Though obviously the Royal Navy could retake Sealand if they wanted.)
Data centers in space is about leading investors to circumvent their brains and jump on the hype train at worst, and developing technology around data center infrastructure at best.
Microsoft did something similar with their submarine data center pilots. This gets more press because AI.
Nation states can fire missiles at your space datacenter, bruh.
Or just triangulate any signals being sent to it, and fire missiles at the source.
Or just blast it with a laser...
As someone with a similar background to the writer of this post (I did avionics work for NASA before moving into more “traditional” software engineering), this post does a great job of summing up my thoughts on why space-based data centers won’t work. The SEU issues were my first thought, followed by the thermal concerns, and both are addressed fantastically here.
On the SEU issue I’ll add in that even in LEO you can still get SEUs - the ISS is in LEO and gets SEUs on occasion. There’s also the South Atlantic Anomaly where spacecraft in LEO see a higher number of SEUs.
As someone with only a basic knowledge of space technology, my first thought when I read the idea was "how the hell are they going to cool it".
Single event upsets are already commonplace at sea level well below data center scale.
The section of the article that talks about them isn’t great. At least for FPGAs, the state of the art is to run 2-3 copies of the logic, and detect output discrepancies before they can create side effects.
I guess you could build a GPU that way, but it’d have 1/3 the parallelism of a normal one for the same die size and power budget. The article says it’d be a 2-3 order of magnitude loss.
It’s still a terrible idea, of course.
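The run-2-3-copies-and-vote mitigation described above is triple modular redundancy (TMR). A minimal illustrative sketch of the bitwise 2-of-3 voter in software form (not any particular vendor's implementation):

```python
def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote, the classic TMR voter circuit."""
    return (a & b) | (a & c) | (b & c)

# Simulate a single-event upset flipping one bit in one of three redundant copies.
correct = 0b1011_0010
upset = correct ^ (1 << 5)  # SEU flips bit 5 in copy B

# The voter masks the flip as long as only one copy is corrupted.
assert majority(correct, upset, correct) == correct
```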
The only advantage I can come up with is the background temperature being much colder than Earth surface. If you ignored the capex cost to get this launched and running in orbit, could the cooling cost be smaller? Maybe that's the gimmick being used to sell the idea. "Yes it costs more upfront but then the 40% cooling bill goes away... breakeven in X years"
Strictly speaking, the thermosphere is actually much warmer than the atmosphere we experience - on the order of hundreds or even a thousand degrees Celsius, if you're measuring by temperature (the average kinetic energy of molecules). However, since particle density is so low, the total heat content of the thermosphere is low. And because there are so few particles, conduction and convection are essentially nonexistent, which means cooling needs to rely entirely on radiation, which is much less effective at removing heat than the other modes.
In other words, a) background temperature (to the extent it's even meaningful) is much warmer than Earth's surface and b) cooling is much, much more difficult than on Earth.
Technically, radiative cooling is 100% efficient. And remarkably effective: you can cool an inert object to the temperature of the CMB (~2.7 K) without doing anything at all. However, it is rather slow, and works best if there are no nearby planets or stars.
Fun fact though: make your radiator hotter and you can dump just as much if not more energy than you typically would via convective cooling. At 1400 °C (just below the melting point of steel) you can shed about 450 kW of heat per square meter; all you need is a really fancy heat pump!
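The 450 kW figure checks out under the Stefan-Boltzmann law; a quick sketch assuming an ideal emitter (emissivity 1) radiating to cold space:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_flux(temp_c: float) -> float:
    """Ideal black-body radiative flux (W/m^2) at the given surface temperature."""
    t = temp_c + 273.15
    return SIGMA * t ** 4

# A 1400 C radiator: roughly 444 kW per square meter.
print(round(radiated_flux(1400) / 1000))  # -> 444
```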
Your hypothetical liquid metal heat pump would have a Carnot efficiency of only 25%.
How much power would a square meter at 1400C shed from convection?
I don't have firm numbers for you, since it would depend on environmental conditions. As an educated guess, though, I would say a fucking shit ton. You wouldn't want to be anywhere near the damn thing.
A sports car radiator has about that size and dumps 1 MW without boiling the coolant.
A car's "radiator" doesn't actually lose heat by radiation though. It conducts heat to the air rushing through it. That's absolutely nothing like a radiator in a vacuum.
Not much in space; there's almost no matter to convect!
Is it an advantage though ? One of the main objections in the article is exactly that.
There's no atmosphere to help with heat loss through convection, and nowhere to shed heat through conduction; all you have is radiation. It is a serious engineering challenge for spacecraft to get rid of the little heat they generate and avoid being overheated by the sun.
I think it is an advantage, the question is just how big, and assume we look only at ongoing operation cost.
- Earth temperatures are variable, and radiation only works at night
- The required radiator area is much smaller for the space installation
- The engineering is simple: CPU -> cooler -> liquid -> pipe -> radiator. We're assuming no constraint on capex so we can omit heat pumps
Radiators on earth mainly do it to air, there's no air in space.
This question is thoroughly covered in the linked article.
Pardon, but the question of "could the operational cost be smaller in space" is almost not touched at all in the article. The article mostly argues that designing thermal management systems for space applications is hard, and that the radiators required would be big, which speaks to the upfront investment cost, not ongoing opex.
Ok, sure, technically. To be fair you can't really assess the opex of technology that doesn't exist yet, but I find it hard to believe that operating brand new, huge machines that have to move fluid around (and not nice fluids either) will ever be less than it is on the surface. Better hope you never get a coolant leak. Heck, it might even be that opex=0 still isn't enough to offset the "capex". Space is already hard when you're not trying to launch record-breaking structures.
Even optimistically, capex goes up by a lot to reduce opex, which means you need a really really long breakeven time, which means a long time where nothing breaks. How many months of reduced electricity costs is wiped out if you have to send a tech to orbit?
Oh, and don't forget the radiation slowly destroying all your transistors. Does that count as opex? Can you break even before your customers start complaining about corruption?
Maintenance will be impossible or at least prohibitively expensive. Which means your only opex is ground support. But it also means your capex depreciates over whatever lifetime these things will have with zero repairs or preventive maintenance.
Cooling is more difficult in space. Yes, it's colder, but transferring heat away is harder.
But the cooling cost wouldn’t be smaller. There’s no good way to eliminate the waste heat into space. It’s actually far far harder to radiate the waste heat into space directly than it would be to get rid of it on Earth.
Which is why vacuum flasks for hot/cold drinks are a thing and work. Empty space is a pretty good insulator, as it turns out.
It’s a little worrying so many don’t know that.
I don't know about that. Look at where the power goes in a typical data center, for a 10MW DC you might spend 2MW just to blow air around. A radiating cooler in space would almost eliminate that. The problem is the initial investment is probably impractical.
>99.999% of the power put into compute turns into heat, so you're going to need to reject 8 MW of power into space with pure radiation. The ISS EATCS radiators reject 0.07 MW of power in 85 sq. m, so you're talking about 9700 sq. m of radiators, or bigger than a football field/pitch.
Now scale the radiator size for your 8MW datacenter.
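The radiator scaling above is simple proportionality; taking the ISS EATCS figures from the comment at face value (not independently verified here):

```python
# ISS EATCS figures as quoted above: 0.07 MW rejected by ~85 m^2 of radiators.
iss_reject_mw = 0.07
iss_area_m2 = 85.0

heat_load_mw = 8.0  # waste heat from the hypothetical 10 MW datacenter's compute
area_needed_m2 = heat_load_mw / iss_reject_mw * iss_area_m2
print(round(area_needed_m2))  # -> 9714, bigger than a football pitch
```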
Things on earth also have access to that coldness for about half of each day. How many data centers use radiative cooling into the night sky to supplement their regular cooling? The fact that the answer is “zero” should tell you all you need to know about how useful this is.
The atmosphere is in the way even at night, and re-radiates the energy. The effective background temperature is the temperature of the air, not to mention it would only work at night. I think there would need to be something like 50 acres of radiators for a 50MW datacenter to radiate from 60 to 30C. This would be a lot smaller in space due to the bigger temp delta. Either way, opex would be much, much less than an average Earth DC (PUE of almost 1 instead of a run-of-the-mill 1.5, or as low as 1.1 for hyperscalers). But yeah, the upfront cost would be immense.
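The acreage estimate above can be roughed out with a gray-body net-exchange calculation; the emissivity and effective sky temperature here are assumptions, so treat this as order-of-magnitude only:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_flux(radiator_c: float, sky_c: float, emissivity: float = 0.9) -> float:
    """Net radiative flux (W/m^2) from a gray panel to a sky at an effective temperature."""
    t_r, t_s = radiator_c + 273.15, sky_c + 273.15
    return emissivity * SIGMA * (t_r ** 4 - t_s ** 4)

# 60 C panels under an assumed 0 C effective night sky, for a 50 MW load:
flux = net_flux(60, 0)           # a few hundred W/m^2
acres = 50e6 / flux / 4047       # 1 acre is about 4047 m^2; lands in the tens of acres
```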
I think you’re ignoring a huge factor in how radiative cooling actually works. I thought the initial question was fine if you hadn’t read the article but understand the downvotes due to doubling down. Think of it this way. Why do thermoses have a vacuum sealed chamber between two walls in order to insulate the contents of the bottle? Because a vacuum is a fucking terrible heat convector. Putting your data center into space in order to cool it is like putting a computer inside of a thermos to cool it. It makes zero fucking sense. There is nowhere for the heat to actually radiate to so it stays inside.
Pardon but this doesn't make sense to me. A 1 m^2 radiator in space can eliminate almost a kilowatt of heat.
>vacuum is a fucking terrible heat convector
Yes, we're talking about radiation, not convection.
At what temperature?
And a kilowatt from one square meter is awful. You can do far more than that with access to an atmosphere, never mind water.
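For the temperature question: inverting the Stefan-Boltzmann law gives the black-body temperature needed for a given flux (a sketch that ignores emissivity below 1 and absorbed sunlight):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def temp_for_flux(flux_w_m2: float) -> float:
    """Black-body surface temperature (Celsius) required to radiate the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25 - 273.15

# Radiating 1 kW from one square meter takes a panel at roughly 91 C.
print(round(temp_for_flux(1000)))  # -> 91
```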
Breakeven in X years probably makes sense for storage (slow depreciation), not GPUs (depreciates in like 4 years)
I think by far the most mass in this kind of setup would go into the heat management, which could probably last a long time and could be amortized separately from the electronics.
Seriously?
I know Silicon Valley runs on out there ideas and outright BS because 0.1% of the ideas pan out and pay for the other 99.9%, but this is just laughable for the reasons pointed out in the article.
In addition to the ludicrous, unworkable physics, as it turns out, datacenters need people servicing things all the time. Even if you could get those measly three racks into space, they'd function for about a month before some hard disks were failing, network switches were down, some crap breaks in the cooling system, power system short, breakers trip, etc, and on and on.
So obviously we're not going to send some SREs into space to babysit the machines. Have everything fail in place? Have robots do it? What about the regular supply missions to keep replacing all the failing hardware (there are only so many spare HDDs you can have on hand)?
The whole thing is farcical.
Nah; let it fail in place.
See also: Any on-prem horror show that budgeted for capex, rent, cooling, network and power, but not maintenance.
Yes. Anyone who thinks you can ship a datacenter to space and save has never managed a datacenter.
Plus, in space, their electronic components would experience much more radiation (and the effects on components). They could build with rad-hardened components but those are both more expensive and several generations older than SOTA found in the habitable zone.
> So obviously we're not going to send some SREs into space to babysit the machines.
Shut up! This is the chance for one of us to go into space! I don't care if all I'm doing is swapping 1U pizza boxes in the cold hard vacuum of space, I'm down!
Datacenters in space would make 0 sense because the only way to lose heat is through radiation, which makes for terrible cooling.
If you want to avoid national laws and have great cooling, then submerse your datacenter in the ocean instead.
And they've already at least tried datacenters in the ocean.
https://news.microsoft.com/source/features/sustainability/pr...
Always remember the magic words: dual use technology. The people pushing these aren't saying to you that they want to build data centers in space because conventional data centers are at huge risk of getting bombed by foreign nations or eventually getting smashed by angry mobs. But you can bet they're saying that to the people with the dual-use technology money bag. Or even better, let them draw that conclusion themselves, to make them think it was their idea - that also has the advantage of deniability when it turns out data centers in space was a terrible solution to the problem.
It is far easier to build them in remote places and bunkers (or both). Even the middle of the ocean would make more sense and provide better cooling (see Microsoft's attempt at that).
Not exactly the middle, but close to shore is pretty good too; there's a lot of solar and wind around to feed the compute.
One of these projects is bonkers IMO: "China Has an Underwater Data Center; The US Will Build Them in Space"
https://www.forbes.com/sites/suwannagauntlett/2025/10/20/chi...
It is not far easier to distribute content from a bunker than from space.
The only vaguely valid dual-use technology I can see coming out of this is improving space-rated processing enough that deep space probes sent out to Uranus or wherever can run with more processing power than a TI-82, and thus can actually do some data processing rather than clogging up the Deep Space Network for three weeks on an uplink with less power than a lightbulb.
Who knows what tech is in space already. Maybe an “AI data center in space” would be the equivalent of a flock camera for an entire region.
The reason why we don't see satellite-targeting missiles is not because the problem is hard. All relevant actors are capable of that.
At this point I wouldn't be surprised if a non zero number of pitch meetings start with, "in order to not disrupt your life too much as the mobs of the starving and displaced beat down your door"
What makes an orbital facility at less risk of getting bombed?
Probably needs more delta-v to match orbit than a suborbital ICBM would. Not less risk—just more expensive. Depends how valuable the target is.
Nah, they are pretty similar in difficulty for interception - the first US ASAT program used essentially the same Nike Zeus missiles used for ABM duty during the late 50s
Not really. Suborbital vehicles achieve orbital heights. It's actually probably easier, since you don't need a payload. The velocity alone will do the trick.
Except you don't. You only need to match velocities if you want to dock with something.
Hitting something in orbit just requires you to be in the way at the right time.
Basically an intercept is a lot easier.
Because it's stupid, not because it's hard.
You want to push things out of orbit, not turn a massive structure into a supersonic shard field for 20 years.
I'd be most curious to see what type of processing power they would put on such a data center.
For example, the JWST uses a RAD750 ( https://en.wikipedia.org/wiki/RAD750 ) which is based on a PowerPC 750 running at 110 MHz to 200 MHz.
Its successor is the RAD5500 ( https://en.wikipedia.org/wiki/RAD5500 )... which runs at between 66 MHz and 462 MHz.
> The RAD5545 processor employs four RAD5500 cores, achieving performance characteristics of up to 5.6 giga-operations per second (GOPS) and over 3.7 GFLOPS. Power consumption is 20 watts with all peripherals operating.
That's kind of neat... but not exactly data center performance.
Back to the older RAD750...
> The RAD750 system has a price that is comparable to the RAD6000, the latter of which as of 2002 was listed at US$200,000 (equivalent to $349,639 in 2024).
That isn't exactly great price-performance. Well, unless you're constrained by "it costs millions to replace it."
So... I'm not really sure what devices they'd be putting up there.
The "data centers in space" idea is much more "space launch is a hot technology, AI and data centers are a hot technology... put the two together and it's to the moon!" (Or at least that's what we tell the investors before we try to spend all their money.)
I think the last time they put commodity hardware in orbit was via the HPE Spaceborne project [1], and the results were quite mixed, with quite high failure rates for components. In addition, the system had to run in a twin config to get any meaningful work done.
Best case scenario: custom ASICs for specialised workloads, either for edge computing of orbital workloads or military stuff. That would be with the ability to replace/upgrade components, rather than a sealed sat-like environment.
It's similar to the hype for Starlink-type sats for internet connectivity rather than a proper fiber buildout that would solve most of the issues at less cost. After the last couple of years seeing the deployments in UKR and the Sahel, it's mostly a mil tool.
[1] https://www.theregister.com/2024/01/24/updated_hpe_spaceborn...
So many ideas involving AI just seem to be built off of sci-fi (not in a good way), including this one. And like sci-fi, few practical considerations are made.
Sci-fi isn't even really about the tech. It's about what happens to us, humans, when the tech changes in dramatic ways. Sci-fi authors dream up types of technology that create new social orders, factions, rifts, types of interpersonal relationships, types of fascism, where the unforseen consequences of human ingenuity hoist us upon our collective petard.
But these buffoons only see the blinky shiny and completely miss the point of the stories. They have a child's view of SF, the same way that men in their teens and 20s thought they were supposed to be like Tyler Durden.
This is a good point and is why I prefer to refer to the genre as Speculative Fiction - not only is it broader but it better gets at the idea behind this type of fiction. Not just space lasers.
Eager Space did a pretty thorough hypothetical cost breakdown of orbital data centers that I recommend. https://www.youtube.com/watch?v=JAcR7kqOb3o
Sounds like the people behind Solar Roadways found a new project.
There are 8,000+ Starlink satellites in orbit right now. Each one has about 30 square meters of solar panels. That's 240,000 square meters. ISS has 25,000 square meters, so SpaceX has already launched almost 10 times the solar panels of ISS.
The next generation Starlink (V3) will have 250 square meters of solar panels per satellite, and they are planning on launching about 10,000 of them, so now you're at 2.5 million m^2 of panels or 100 times ISS.
All those satellites have their own radiators to manage heat. True, they lose some heat by beaming it to the ground, but data center satellites would just need proportionally larger radiators.
And, of course, all those satellite have CPUs and memory chips; they are already hardened to resist space radiation (or else they wouldn't function).
Almost every single objection to data centers in space has already been overcome at a smaller scale with Starlink. The only one that might apply is cost: if it's cheaper to build data centers on Earth, then space doesn't make sense (and it won't happen). But prices are always coming down in space, and prices on Earth keep going up (because of environmental restrictions).
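The panel-area arithmetic in the comment above, checked against its own quoted figures (the figures themselves are not verified here):

```python
v1_area_m2 = 8000 * 30      # current Starlink sats at ~30 m^2 of panels each
v3_area_m2 = 10_000 * 250   # planned V3 sats at 250 m^2 each
iss_area_m2 = 25_000        # ISS solar panel area as quoted in the comment

assert v1_area_m2 == 240_000
assert v1_area_m2 / iss_area_m2 == 9.6    # "almost 10 times" the ISS
assert v3_area_m2 / iss_area_m2 == 100    # "100 times ISS"
```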
> The only one that might apply is cost: if it's cheaper to build data centers on Earth, then space doesn't make sense (and it won't happen).
So the only problem left to be solved is that space datacenters would be millions of times more expensive per unit of compute than a ground based datacenter. And cost millions of times more to maintain.
Starlink cost maybe $10 billion. A 100,000 gpu data center costs between $20 and $40 billion to build.
Also remember that data centers last for about 5 years; after that the gpus are obsolete. That’s no different than the lifetime of a Starlink satellite.
Starlink solar panels generate at best 200 W on average. Even with 2.5 million square metres, that is a total of half a gigawatt. And the cost is not to be ignored! Most of the cost of these data centres is in the GPUs themselves, so you need to add that to the cost of building out the constellation. Unless you are arguing that the cost of supporting infrastructure (cooling, power, etc) costs $10bn to support half a gigawatt of GPUs in the typical data centre, then your numbers are simply way off.
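Checking the half-gigawatt figure, assuming the ~200 W per square meter average output stated above:

```python
panel_area_m2 = 2.5e6   # V3 constellation panel area from the parent comment
avg_w_per_m2 = 200      # assumed average output per square meter of panel

total_gw = panel_area_m2 * avg_w_per_m2 / 1e9
print(total_gw)  # -> 0.5
```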
At what price per MW of load?
The Starlink constellation cost $10 billion. That’s comparable to a small data center (maybe 50,000 gpus).
If launch costs keep dropping and environmental costs keep rising, space based data centers will make sense.
What a ridiculous waste of money.
The facts you quoted just made me even more convinced that space-based datacenters will not be cost effective any time soon. If an entire generation of satellites costing many billions of dollars can't power more GPUs than a single terrestrial datacenter, how could it possibly be cost effective?
A data center costs $20 to $40 billion! And launch costs keep dropping.
Plus, environmental costs of data centers keep rising.
>Almost every single objection to data centers in space has already been overcome at a smaller scale with Starlink
Did you not read the article? It had many objections that make it clear datacenters in space are unworkable...
Starlink is already a small data center! It has power, radiators, and compute!
It needs to be scaled up, but there is no obstacle to that (at least none that the article mentions).
The only valid objection is cost, but space prices keep dropping and earth prices keep rising.
Google’s paper [1] does talk about radiation hardening and thermal management. Maybe their ideas are naive and it’s a bad paper? I’m not an expert so I couldn’t tell from a brief skim.
It does sound to me like other concepts that Google has explored and shelved, like building data centers out of shipping container sized units and building data centers underwater.
[1] https://services.google.com/fh/files/misc/suncatcher_paper.p...
The only sentence in the whole "paper" about cooling is
> Cooling would be achieved through a thermal system of heat pipes and radiators while operating at nominal temperatures
Which is kind of similar to writing a paper about building a bridge over the Pacific and saying "The bridge would be strong enough by being built out of steel". Like you can say it, but that doesn't magically make it true.
Pedantically, Microsoft has actually submerged datacenters (UDC). Google's only tried pumping seawater for cooling.
Apparently Microsoft tried it and it worked, but they shelved it?
https://www.tomshardware.com/desktops/servers/microsoft-shel...
It didn't work, it was an utterly terrible idea and they are almost certainly lying about the sentiment that it "worked". No ability to perform maintenance is a complete nonstarter. Communications and power is a nightmare to get right. The thermal management story sucks - just because you have metal touching water doesn't mean you have effective radiation of heat. Actually scaling it up is nearly impossible because you need thicker and more expensive vessels the bigger it gets. The problems go on and on.
Presumably it didn't work well or they wouldn't have shelved it. But do you actually know about what happened or is this all based on your priors?
I don't think MS ever revealed enough information to answer that. For example, I haven't seen any explanation of how heat is transferred from the servers to the skin of the container. I can guess how they did it but I don't want to make any judgement based on guesses.
So if the big idea is to have a data center outside of legal jurisdictions, why not build a floating data center in the southern Pacific Ocean? You can power it with floating solar panels, provide data via Starlink or a regular communication satellite, and still be outside of the law. You might say that it would be vulnerable to pirates, but practically speaking, nobody is going down there. Sure, you will have to deal with weather, but overall the problems are way easier to solve than building an orbital data center.
But the real reason they won't work is because they're investor scams that were never serious in the first place.
Worth sharing Starcloud’s paper in this post:
Only legit thing I can see this being used for is redundant archival storage or just general research into hardening equipment to radiation or micrograv (eg for liquid cooling). But anything that generates significant amounts of heat seems like it'd be a huge problem.
Then again, there's lots of space in space; perhaps it's possible to isolate racks/aisles into their own individual satellites, each with massive radiant heat-shedding panels? It's an interesting problem space that would be very interesting to try to solve, but ultimately I agree with OP when we come back around to "But, why?" Research for the sake of research is a valid answer, but "for prod"? I don't see it.
Orbital data centers are very hard but this isn't a good explanation of why. There really is more light in space since certain orbits are always in daylight. Radiators are no larger than the solar panels so if you can build multi square kilometer solar arrays you can probably also build massive radiators.
If you think about it, all the existing data centers are in space already. They're just attached to a big ball of rock, water, and air that acts as a support system for them, simplifying cooling and radiation protection.
If humans are going to expand beyond the Earth, we'll certainly need to get much better at building and maintaining things in space, but we don't need to put data centers in space just to support people stuck on the ground.
Shhh, I'm begging people, if brain-dead VCs want to waste their money on things that are obviously farcical (and not actively destructive), please let them and stop doing their due diligence for them. The alternative is that they turn their impossible amounts of capital towards societally-destructive acts like buying up all the real estate in the world and turning us back into land-slaves.
I asked Google for more information about AI datacenters in space. This was the first sentence: 'AI data centers are being developed in space to handle the massive energy demands of AI, using solar power and the vacuum of space for cooling.'
After laughing at "the vacuum of space for cooling" I closed the page because there was nothing serious there. A basic high-school physics student would laugh at that sentence.
I tried Google and it pointed me to a ycombinator video about Starcloud https://youtu.be/hKw6cRKcqzY They launched a satellite with one H100 in it on Nov 2nd.
>I mean, when you tell people that within 10 years it could be the case that most new data centers are being built in space, that sounds wacky to a lot of people, but not to YC. (8:00)
I'mma guess that AI mixed up "datacenter" with "Dyson" to get nonsensical returns involving both vacuums AND space!
You can radiate the excess energy away on the non-sun facing part. In theory.
There are even commercially available prototypes of that vacuum cooling technology, if you want to perform your own experiments with that concept: https://www.amazon.com/Thermos-Stainless-Ounce-Drink-Bottle/...
To be fair, they have mirror surfaces inside. A more realistic prototype would be ultra-black for something like 10-50x better radiative heat transfer. Of course it would still be more like shitty insulation than like good conduction.
That's my water bottle. 10/10 would recommend for not passing temperature gradients.
This kind of sarcasm will go over their heads. People truly don't understand vacuums.
I absolutely don't understand how vacuum works. So I absolutely cannot model how a Dewar flask would behave if it had 15 billion light years of thickness between the inner and outer wall, with an outer wall very close to absolute zero.
I wonder if there should be levels of "in theory". Yes, theoretically black-body radiation exists, and stuff does cool down toward the background temperature via it. But the next level is the theoretical implementation: actually moving the heat from the source, and so on. Maybe this could be the spherical-cow step...
Reminds me of the hyperloop. Well yes, things in a vacuum tube go fast. Now, do enough things go fast enough to make any sense...
>Now, do enough things go fast enough to make any sense...
You're worried about rates when we can't even get the ball rolling on safety for human occupancy, maintenance, workability.
I swear, nothing on Earth more dangerous than someone with dollar signs in their eyes.
Serious question: how in theory?
I’m under the impression you need to radiate through matter (air, water, physical materials, etc).
Is my understanding of the theory just wrong?
Heat conduction requires a medium, but radiation works perfectly fine in a vacuum. Otherwise the Sun wouldn't be able to heat up the Earth. The problem for spacecraft is that you're limited by how much IR radiation is passively emitted from your heat sinks, you can't actively expel heat any faster.
Hot objects emit infrared light no matter the conditions. The hotter the object, the more light it throws off. By radiating this light away, thermal energy is necessarily consumed and transformed into light. It's kind of wild actually
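To put numbers on this, the Stefan-Boltzmann law gives the radiated flux directly. A minimal sketch; the temperatures and emissivity below are my own illustrative assumptions, not from the thread:

```python
# Radiative flux from an ideal grey-body surface into deep space:
# P/A = epsilon * sigma * T^4 (ignores absorbed sunlight and view factors).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiated_w_per_m2(temp_k: float, emissivity: float = 0.9) -> float:
    """Watts radiated per square meter by a surface at temp_k."""
    return emissivity * SIGMA * temp_k ** 4

for t in (300, 350, 400):  # plausible radiator temperatures, kelvin
    print(f"{t} K -> {radiated_w_per_m2(t):.0f} W/m^2")
```

Even at 400 K the flux is only on the order of 1 kW/m², which is why radiator area, not solar panel area, tends to dominate these designs.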
There is some medium in low Earth orbit. Not all vacuums are created equal. However, LEO vacuum is still very, very sparse compared to the air and water we use for cooling systems.
The main way that heat dissipates from space stations and satellites is through thermal radiation: https://en.wikipedia.org/wiki/Thermal_radiation.
Passively yeah. Can't imagine it's anywhere near as fast as evap or chillers
Yes. And it's an absolutely terrible way to get rid of heat. Cooling in space is a major problem because the actually effective ways to do it are not available.
It's not the Sun... it's the lack of medium.
You can radiate the excess energy away on the non-sun-facing part almost as well on Earth..., though corrosion is an issue.
"just as well"?
I mean, you totally can radiate excess heat energy on Earth, but your comment implies that the parent's idea of radiating off excess "energy", specifically HEAT energy, in space is possible, which it isn't.
You can radiate excess energy for sure, but you'd first have to convert it away from heat energy into light or radio waves or similar.
I don't think we even have that tech at this point in time, and neither do we have any concepts how this could be done in theory.
>specifically HEAT energy in space is possible, which it isn't.
I see, yes. I was thinking more along the lines of radiating heat energy at a scale that's usable for cooling, not at the more extreme levels of over 500 °C / ~1,000 °F.
That's technically correct I guess, at some temperature threshold it becomes possible to bleed some fractions of energy while the material is exceedingly hot.
There's no air and negligible thermal medium to carry heat away. The only ways heat leaves are conduction to the extremely sparse atmosphere in low Earth orbit (a hard vacuum by terrestrial standards) and thermal radiation. Both are much, much slower than convection with water or air.
Space stations need enormous radiator panels to dissipate the heat from the onboard computers and the body heat of a few humans. Cooling an entire data center would require utterly colossal radiator panels.
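For a sense of what "colossal" means here, a back-of-envelope radiator sizing; the 100 MW load, 320 K radiator temperature, and 0.9 emissivity are all assumptions of mine:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiator_area_m2(heat_w: float, temp_k: float = 320.0,
                     emissivity: float = 0.9) -> float:
    """Ideal radiator area needed to reject heat_w, assuming a perfectly
    sun-shielded panel (optimistic: real panels absorb some sunlight)."""
    return heat_w / (emissivity * SIGMA * temp_k ** 4)

# Nearly every watt of electrical input becomes heat that must be rejected.
area = radiator_area_m2(100e6)  # a 100 MW facility, modest by AI standards
print(f"~{area:,.0f} m^2 of ideal radiator (~{area / 1e4:.0f} hectares)")
```

That works out to well over ten hectares of ideal radiator for a facility that is small by hyperscale standards, before any margin for sun exposure or coolant plumbing.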
You could help by using the thumbs down button below the answer.
Why is it my job to train the machines?
If you would kindly consult your Human HR Universal Handbook (2025 Edition) and navigate to section 226.8.2F, you’ll be gently reminded that it’s the responsibility of any and all employees to train their replacements.
Where can I find a copy?
Please consult your Human HR Universal Handbook (2025 Edition) on how to request a new copy of the Human HR Universal Handbook (2025 Edition). I believe it's in Volume III Section 9912.64.1 or thereabouts.
Typically, these sorts of things are located in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard'.
So, it makes sense to always start there.
you have to steal it from the HR department. They do have a copy but they won't tell you.
Human Human Resources?
The Synthetic Human Resources Universal Handbook is in a binary format which is not understood by Organics, but seems to be useful sometimes.
don't you care about maximizing Google's ROI?
AI is a tool. If it doesn't work I'm not going to fix the tool; I'd rather find another tool that can do the job.
I would be tempted to give the thumbs up to terrible answers like that.
What if we deploy reversible computing, which does not produce heat?
Datacenters in Antarctica or floating on the ocean make more sense than space.
Building datacenters in the arctic also has the added benefit that sysadmins would have to take polar bear safety lessons, which would be pretty funny.
Pantheon style
- Costs to keep it in orbit.
- More junk whizzing around Earth.
- Inaccessibility for maintenance.
- Power costs.
- Susceptibility to solar storms and cosmic rays.
Risky/untried things aren't dumb because they're hard, they're dumb when they're more expensive/harder than cheaper/easier alternatives that already exist that do the same thing.
So what?
Of course it's stupid and it's never going to work. The same is true for Carbon Capture and Storage, blue hydrogen, etc. It's nonsense from the start, but it didn't stop governments around the world from spending billions on it.
It works like this: companies spend a few millions on PR to market a sci-fi project that’s barely plausible. Governments who really want to preserve the status quo but are pressured to “do something” can just announce that they’re sinking billions in it and voila! They’re green, they’re going to save the world.
It’s just a scam to get public money really.
Seems like it would be a nice way to keep the temperatures down.
It’s a really ridiculous idea.
None of these problems seem intractable, just really hard and probably not being solved soon, but one has to start somewhere... so at least the billionaires will fund some scientists and engineers who will do that work?
But for what benefit?
What about on the Moon? My understanding is that heat is the killer. There you could sink pipes into the surface and use that as a heat sink. There are “peaks of eternal light” near the poles where you could get 24/7 solar power.
Latency becomes high but you send large batches of work.
Probably not at all economical compared to anywhere on Earth but the physics work better than orbit where you need giant heat sinks.
It's not a viable heat sink because it's a thermal insulator that doesn't support transport of heat. The thermal conductivity of lunar regolith is lower than that of rock-wool insulation:
https://pmc.ncbi.nlm.nih.gov/articles/PMC9646997/ ("Thermophysical properties of the regolith on the lunar far side revealed by the in situ temperature probing of the Chang’E-4 mission" (2022))
https://www.engineeringtoolbox.com/thermal-conductivity-d_42...
(Imagine, for entertainment purposes, what would happen if you wrapped a running server rack in a giant ball of rock-wool insulation, 50 meters in radius).
Only way to dissipate large amounts of heat on the moon is with sky-facing radiators.
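The rock-wool-ball thought experiment can be made concrete with the steady-state spherical-shell conduction formula. The conductivity, radii, and temperature difference here are rough picks of mine; k ≈ 0.01 W/(m·K) is in the range reported for lunar regolith:

```python
import math

def shell_conduction_w(k: float, r_in: float, r_out: float,
                       delta_t: float) -> float:
    """Steady-state heat conducted through a spherical shell:
    Q = 4*pi*k*r_in*r_out*dT / (r_out - r_in)."""
    return 4 * math.pi * k * r_in * r_out * delta_t / (r_out - r_in)

# A ~1 m "rack" under 50 m of regolith-like insulation, held 300 K
# hotter than its surroundings, sheds only a few tens of watts:
q = shell_conduction_w(k=0.01, r_in=1.0, r_out=50.0, delta_t=300.0)
print(f"{q:.0f} W")  # vs. tens of kilowatts for a real rack
```

Even with an absurd 300 K temperature difference, conduction into regolith carries off three orders of magnitude less heat than a single rack produces, which is the point about it being an insulator rather than a heat sink.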
The Moon doesn't have a magnetic field, though, so the second half of the article discussing difficulties due to radiation would still apply, right?
We will need to develop very robust, space-worthy electronics eventually. We can't rely on natural magnetic fields forever.
We have been relying on natural magnetic fields for over a billion years, so we can probably continue doing so for a while.
Not if you bury it in regolith. That’s an idea for a Lunar base too. The design is called “Hobbit holes.” Bury the occupied structures in piles of basically any local mass you can bury them in.
It’s another huge problem for orbit though. Shielding would add a ton of mass and destroy the economics.
Lunar regolith is so abrasive that digging holes or tunnels isn't going to be cost effective.
You'd have most of the problems of building in space, an abrasive quasi-atmosphere of dust, half a month of darkness every month, and not as good of a heat sink as the Earth's atmosphere.
I had this same thought and mentioned it on an Ars Technica forum. There was a reply suggesting that lunar regolith wouldn't be a good heat sink, and a bit of googling makes me think this is probably true.
That said, anything has to be better than almost literally nothing, so I'm still holding out for datacenters on the moon.
Regardless of how terrible an idea it is, I wouldn't mind some billionaires funding R&D that advances the state of the art in thermal management in space.
Perhaps Elmo can move his toxic, illegal, cancer inducing, gas generators there, instead of doing it in Memphis. https://tennesseelookout.com/2025/07/07/a-billionaire-an-ai-...
You probably wanna launch these https://www.youtube.com/watch?v=mfk0vTe46ds
One thing I haven't seen talked about at all: how quickly would space heat up?
I presume Earth's gravity largely keeps the exosphere it has around it. With some modest fractional % lost year by year. There is a colossal vast volume out there! But given that there's so little matter up in space, what if any temperature rise would we expect from say a constant 1TW of heat being added?
The sun's radiation hitting Earth is on the order of 170,000 terawatts. I think we're fine with an "extra" terawatt. (It's not even extra, because it would be derived from the sun's existing energy.)
https://www.nasa.gov/wp-content/uploads/2015/03/135642main_b...
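As a sanity check on that figure, the total solar power intercepted by Earth is just the solar constant times Earth's cross-sectional area, computed here from standard textbook values rather than quoted from the thread:

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2 at the top of the atmosphere
EARTH_RADIUS = 6.371e6   # m, mean radius

# Earth presents a disc, not a sphere, to the incoming sunlight.
intercepted_w = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2
print(f"~{intercepted_w / 1e12:,.0f} TW")  # on the order of 170,000 TW
```

Either way the conclusion holds: one extra terawatt of waste heat is a rounding error against what the sun already delivers.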
It’s better than having your DC confiscated (by Putin, in Russia), or bombed (in Ukraine, by Russia). As some hyperscalers realized.
“Terrible, horrible, no good” is the new “considered harmful.”
"Mind-bogglingly poorly thought out to the degree of a cynical money-grubbing scheme worthy of the finest Cambodian slave camp" was taken and is disrespectful to the hard work and education of said slave camp's operators.
Apparently the book whose title the phrase comes from [1] was published in 1972, four years after Dijkstra published "Considered Harmful".
[1] https://en.wikipedia.org/wiki/Alexander_and_the_Terrible,_Ho...
Additionally, their distributions were different. People who read Dijkstra circa 1968 started using the phrase in their own publications within a decade, whereas people who read Viorst (or had it read to them) in 1972 and following years had at least a few decades of further delay before publishing anything using the corresponding phrase.
Except you don’t build a data center, you add a GPU to an individual starlink node. If you can do that a couple hundred or thousand times you’ve got a lot of compute in space. The next question is how would you redesign compute around your distributed power and cooling profiles? The article doesn’t talk about the actual engineering challenges. (Such as scaling down the radiative cooling design, matching compute node to the maximum feasible power profile, etc)
I’m not arguing it’ll be easy or will ultimately work, but articles like this are unhelpful because they don’t address the fundamental insight being proposed.
OpenAI has over 1 million GPUs.
Starlink satellites would be pointless for doing computation because they are spread across the Earth resulting in horrible latency. AI companies spend lots of money on super fast connects within a datacenter.
Starlink with GPUs might have some advantage for running edge GPU workloads. But most Starlink customers are close to a ground station, and it makes a lot more sense to have GPUs there. It is a lot easier to manage them than launching new satellites, which could take years.
I agree with most of this post and think the problems are harder than the proponents are making them seem.
But, 1) literally the smartest people and AI in the world will be working on this and 2) man I want to see us get to a type 2 civilisation bad.
The layout of this blog post is also very interesting: it presents a bunch of very hard items to solve, and funnily enough the last one has recently been solved with Starlink. So we can approach this problem; it requires great engineering, but it's possible. Maybe it's as complicated as CERN's LHC, but we have one of those.
Next up then is the strong why? When you’re in space, if you set the cost of electricity to zero, the equation gets massively skewed.
Thermal is the biggest challenge but if you have unlimited electricity, lots of stuff becomes possible. Fluorinert cooling, piezoelectric pumps and dual/multi stage cooling loops with step ups. We can put liquid cooling with piezos on phones now, so that technology is moving in the right direction.
For a thought experiment: if launch costs were $0/kg, would this be possible? If the answer's yes, then at some point above $0/kg it becomes uneconomical; the challenge is then to beat that number.
The problem isn't "how to cool the chips", it's "how to cool the whole friggin data center."
Any active cooling solution you can think of actually makes the problem worse (unless it's "eject hot mass").
I don't agree with the logic that "something is hard/can't be done right now" is equivalent to "this is a terrible idea and won't work."
There are dozens of companies solving each problem outlined here; if we never attempt the 'hard' thing we will never progress. The author could have easily taken a tone of 'these are all the things that are hard that we will need to solve first' but actively chose to take the 'catastrophically bad idea' angle.
From a more positive angle, I'm a big fan of Northwood Space and they're tackling the 'Communications' problem outlined in this article pretty well.
It's not that it's hard, it's that it's stupid - it's based on a misunderstanding of the physics involved which completely negates any of the benefits.
It's the opposite of engineering, where you understand a problem space and then try to determine the optimal solution given the constraints. This starts with an assumption that the solution is correct, and then tries to engineer fixes to gaps in the solution, without ever reevaluating the solution choice.
From: https://engine.xyz/resident-companies/northwood-space
> Unlike traditional parabolic dish antennas, our phased array antenna can connect with multiple satellites simultaneously.
If that's how they plan to reach more than 1 Gbps, then that's not 100 Gbps per satellite, that's 100 Gbps for a collection of satellites.
Starlink is about 100 Mbps. That's 1,000× less than 100 Gbps.
Unless thermodynamics suddenly changes, how exactly is the cooling problem being solved? Yeeting hot chunks of matter out the back? On a planetary body you have an entire massive system of matter to reject your heat into. In space, you have nothing.
The obvious solution is for half of the hardware to run on dark energy, counteracting the heat generated by the other half. Venture capitalists, use my gofundme site to give me the millions needed to research this, thanks.
That's not the argument though. The argument is "it can be done, the methods to do it are known, but the claims about space being an optimal location for AI datacenters are false, and the tradeoffs and unit economics make no sense compared with building a data centre on Earth somewhere with power and water, preferably not too hot."
But for a more nuanced and optimistic take, this one is good and highlights all the same issues and more https://www.peraspera.us/realities-of-space-based-compute/
(TLDR: the actual use cases for datacentres in space rely on the exact opposite assumption from visions of space clouds for LLMs: most of space is far away and has data transmission latency and throughput issues so you want to do a certain amount of processing for your space data collection and infrastructure and autonomous systems on the edge)
What reason is there to build datacenters in space, though? Literally, what limitation do we face in building datacenters on Earth would building them in space improve?
The limit is the surface area of the Earth, which only gets sunlight half the time and intercepts only about a billionth of the energy the sun emits, vs. the relatively unlimited surface area of solar panels in space.
Wouldn't it be easier to build multi storey datacenters than space datacenters?
Cooling data centers in space effectively can't be done right now … or ever.
There are things which are difficult and have unsolved problems, and there are things that just fundamentally make no sense.
Nobody is proposing data centers at the South Pole. This isn’t because it’s difficult. It is difficult, but that’s not the reason it’s not being looked at. Nobody’s doing it because it’s pointless. It’s a massive hassle for very little gain. It’s never going to be worth the cost no matter what problems get solved.
Data centers in space are like that. It’s not that it’s difficult. It’s that the downsides are fundamentally much worse than the advantages, because the advantages aren’t very significant. Ok, you get somewhat more consistent solar power and you can reach a wider ground area by radio or laser. And in exchange for that, you get to deal with cooling in a near perfect insulator, a significantly increased radiation environment, and difficult-to-impossible maintenance. Those challenges can be overcome, sure, but why?
This whole thing makes no sense. Maybe there’s something we just aren’t seeing, or maybe this is what happens when people are able to accumulate far too much money and nobody is willing to tell them they’re being stupid.
The one thing that space has going for itself is space. You could have way bigger datacenters than on Earth and just leave them there, assuming Starship makes it cheap enough to get them there. I think it would maybe make sense if 2 things hold:
- We are sure we will need a lot of GPUs for the next 30-40 years.
- We can make the solar panels + cooling + GPUs have a great life expectancy, so that we can just leave them up there and accumulate them.
Latency-wise it seems okay for LLM training to put them higher than Starlink, to make them last longer and avoid decelerating because of the atmosphere. And for inference, well, if the infra can be amortized over decades then it might make the inference price cheap enough to endure additional latencies.
Concerning communication, SpaceX I think already has inter-satellite laser comms on Starlink, at least as a prototype.
You can't just "leave them there" though. They orbit at high speed, which effectively means they take up vastly more space, with other objects orbiting at high speed intersecting those orbits. The orbits that are most useful are relatively narrow bands shared with a lot of other satellites and a fair amount of debris, and orbits tend to decay over time: a problem in low Earth orbit because they'll decay all the way into the atmosphere, and a problem in geostationary orbit because you'll lose the stationary position you need for maintaining comms links. This is solvable with propulsion, but that entails bringing the propellant with you and end-of-life (or an expensive refuelling operation) when it runs out. The cost of maintaining real estate in space is vastly more than outright owning land.
Similarly, making stuff have a great life expectancy is much more expensive than having it optimized for cost and operational requirements instead but stored somewhere you can replace individual components as and when they fail, and it's also much easier to maximise life expectancy somewhere bombarded by considerably less radiation.
There is lots and lots and lots of space on Earth where hardly anyone is living. Cheap rural areas can support extremely large datacenters, limited only by availability of utilities and workers.
We also have to build a lot more solar and nuclear in addition to the datacenters themselves, which we need to do anyway, but it would compound the land we use for energy production.
Yet a colossal number of servers on satellites would require the same energy-production facilities to be shipped into orbit (and to receive regular maintenance in orbit whenever they fail), which requires loads of land for launch facilities as well as processing for fuel and other consumable resources. Solar might be somewhat more efficient, but not nearly so much so as to make up for the added difficulty in cooling. One could maybe postulate asteroid mining and space manufacturing to reduce the total delta-V requirement per satellite-year, but missions to asteroids have fuel requirements of their own.
If anything, I'd expect large-scale Mars datacenters before large-scale space datacenters, if we can find viable resources there.
It makes sense. I would be curious to see the price computations done by the different space-GPU startups and Big Tech; I wonder how they are getting to a cheaper cost, or maybe it is marketing.
Space is not much of an issue for datacenters. For one thing, compute density is growing; it's not uncommon for a datacenter to be capacity limited by power and/or cooling before space becomes an issue; especially for older datacenters.
There are plenty of data centers in urban centers; most major internet exchanges have their core in a skyscraper in a significant downtown, and there will almost always be several floors of colo space surrounding that, and typically in neighboring buildings as well. But when that is too expensive, it's almost always the case that there are satellite DCs in the surrounding suburbs. Running fiber out to the warehouse district isn't too expensive, especially compared to putting things in orbit; and terrestrial power delivery has got to be a lot less expensive and more reliable too.
According to a quick search, Starlink has one 100G space laser on equipped satellites; that's peanuts for terrestrial equipment.
We have tons of space on earth. Cooling in space would be so expensive.
Falcon heavy is only $1,500/kg to LEO. This rate is considerably undercut here on Earth by me, a weasley little nerd, who will move a kilogram in exchange for a pat on the head (if your praise is desirable) or up to tens of dollars (if it isn't).
In exchange for what benefit? There is literally no benefit to having a datacenter in space.
The benefit is capturing a larger percentage of the output of the sun than what hits the earth.
Can that really work? The datacentre will surely be measurably smaller than the earth.
Does your transportation system also have a risk of exploding catastrophically mid-flight? 'cause otherwise no deal. /s
What use is having lots of space, when to actually build out that space you need mass, which is absurdly expensive to launch?
Launching a datacenter like that carries an absurd cost even with Starship type launchers. Unless TSMC moves its production to LEO it's a joke of a proposal.
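A toy launch-cost figure, for illustration only; both numbers below are assumptions of mine (the $/kg is roughly the Falcon Heavy figure quoted elsewhere in the thread, the mass budget per kW is a guess):

```python
LAUNCH_COST_PER_KG = 1500.0  # $/kg to LEO, rough Falcon Heavy figure
KG_PER_KW = 50.0             # servers + panels + radiators + structure (guess)

def launch_cost_per_kw() -> float:
    """Toy model: launch cost per kW of orbital compute capacity."""
    return LAUNCH_COST_PER_KG * KG_PER_KW

print(f"${launch_cost_per_kw():,.0f} per kW just to reach orbit")
```

Under these assumptions that's $75,000 per kW before buying any hardware at all, compared with commonly cited terrestrial datacenter build costs on the order of $10k per kW all-in; the mass budget or the $/kg has to fall a long way for the comparison to close.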
Underwater [0] is the obvious choice for both space and cooling. Seal the thing and chuck it next to an internet backbone cable.
> More than half the world’s population lives within 120 miles of the coast. By putting datacenters underwater near coastal cities, data would have a short distance to travel
> Among the components crated up and sent to Redmond are a handful of failed servers and related cables. The researchers think this hardware will help them understand why the servers in the underwater datacenter are eight times more reliable than those on land.
[0] https://news.microsoft.com/source/features/sustainability/pr...
I like the underwater idea, did not think of that.
The problem is a lot of people like the underwater idea and I’m worried we’re heading towards something like literally boiling the ocean as they say.
No worries, the oceans are cooked already.
Why does what it powers matter? As long as it can power something.
The obsolete stuff can be deorbited or recycled in space.
Starship is on a fast track to failure. It is not a cheaper way to get to orbit and will never get there at the current pace. And even if it were, it would not make getting to orbit so cheap that it would somehow make it economically viable to put a datacenter there.
You still have to build the GPUs, etc for the datacenter whether it’s on Earth or in orbit. But to put it in space you also need massive new cooling solution, radiation shielding, orbital boosting, data transmission bandwidth, and you have to launch all of that.
And then, there are zero benefits to putting a datacenter in space over building it on Earth. So why would you want to add all that extra expense?
"[..] deploying a solar array with photovoltaic cells – something essentially equivalent to what I have on the roof of my house here in Ireland, just in space. It works, but it isn't somehow magically better than installing solar panels on the ground – you don't lose that much power through the atmosphere"
As an armchair layman, this claim intuitively doesn't feel very correct.
Of course AI is far from a trustworthy source, but just using it here to get a rough idea of what it thinks about the issue:
"Ground sites average only a few kWh/m²/day compared to ~32.7 kWh/m²/day of continuous, top-of-atmosphere sunlight." .. "continuous exposure (depending on orbit), no weather, and the ability to use high-efficiency cells — all make space solar far denser in delivered energy per m² of panel."
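The 32.7 kWh/m²/day figure quoted there is just the solar constant times 24 hours; a quick comparison, where the ground-site figure is my own rough assumption for a good location:

```python
SOLAR_CONSTANT_KW = 1.361  # kW/m^2 above the atmosphere

space_kwh_per_day = SOLAR_CONSTANT_KW * 24  # always-lit orbit, no nights
ground_kwh_per_day = 5.0                    # decent terrestrial site, rough

print(f"space:  {space_kwh_per_day:.1f} kWh/m^2/day")
print(f"ground: {ground_kwh_per_day:.1f} kWh/m^2/day")
print(f"ratio:  ~{space_kwh_per_day / ground_kwh_per_day:.1f}x")
```

So a factor of roughly 5-7x per square meter of panel is plausible, which is real but modest; it has to pay for the cooling, radiation, launch, and maintenance problems the article lists.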