AI Hits a Power Wall. Starcloud Launches Data Centers Into Orbit

Philip Johnston is co-founder and CEO of Starcloud, a company building data centers in space to solve AI's power crisis. Starcloud has already launched the first NVIDIA H100 GPU into orbit and is partnering with cloud providers like Crusoe to scale orbital computing infrastructure.

As AI demand accelerates, data centers are running into a new bottleneck: access to reliable, affordable power. Grid congestion, interconnection delays, and cooling requirements are slowing the deployment of new AI data centers, even as compute demand continues to surge. Traditional data centers face 5-10 year lead times for new power projects due to permitting, interconnection queues, and grid capacity constraints.

In this episode, Philip explains why Starcloud is building data centers in orbit, where continuous solar power is available and heat can be rejected directly into the vacuum of space. He walks through Starcloud’s first on-orbit GPU deployment, the realities of cooling and radiation in space, and how orbital data centers could relieve pressure on terrestrial power systems as AI infrastructure scales.

Episode recorded on Dec 11, 2025 (Published on Jan 13, 2026)


In this episode, we cover:

  • [04:59] What Starcloud's orbital data centers look like (and how they differ from terrestrial facilities)

  • [06:37] How SpaceX Starship's reusable launch vehicles change space economics

  • [10:45] The $500/kg breakeven point for orbital data centers vs. Earth

  • [14:15] Why space solar panels produce 8x more energy than ground-based arrays 

  • [21:19] Thermal management: Cooling NVIDIA GPUs in a vacuum using radiators 

  • [25:57] Edge computing in orbit: Real-time inference on satellite imagery 

  • [29:22] The Crusoe partnership: Selling power-as-a-service in space 

  • [31:21] Starcloud's business model: Power, cooling, and connectivity 

  • [34:18] Addressing critics: What could prevent orbital data centers from working


  • Cody Simms (00:00):

    Today on Inevitable, our guest is Philip Johnston, Co-founder and CEO of Starcloud. AI is flipping an old model on its head. Instead of asking, "Where can we fit another data center?" organizations are beginning to ask, "Where's the power and how do we bring compute there?" Bitcoin and crypto helped pioneer this idea by chasing cheap, stranded energy. AI is now following the same pattern, looking for cleaner, cheaper, more reliable power, as fast as possible. But what happens if you take that logic to the extreme? If the constraint is clean, reliable 24 by seven power, where can you find effectively unlimited solar energy with no siting fights or interconnection queues? Space. Starcloud is built around that question. Instead of forcing more data centers onto already stressed grids, they're exploring what it looks like to put compute directly in orbit, where solar is continuous and heat can be rejected into the vacuum.

    (01:08):

    It's a sharp reframing from delivering more power to data centers to bringing data centers to power in the most literal way possible. There are smart people who see this as a promising direction for AI infrastructure. Google, SpaceX, and many others are actively working on it. And there are others who question whether cooling, radiation, and communications constraints make it fundamentally impractical. I don't know who's right, but it's a debate worth having in the open. Philip and his team are among the first actually trying to build this future. So let's get into it. From MCJ, I'm Cody Simms, and this is Inevitable. Climate change is inevitable. It's already here, but so are the solutions shaping our future. Join us every week to learn from experts and entrepreneurs about the transition of energy and industry. Philip, welcome to the show.

    Philip Johnston (02:14):

    Thanks so much for having me.

    Cody Simms (02:15):

    So I had a fun surprise this morning, which is I opened up X or Twitter or whatever we want to call it these days. And the very first tweet I saw, as is often the case, was Elon. And what Elon was doing is he wrote one word, which was yeah. And in that yeah, he was retweeting you. And what you were saying was you were quoting Gavin Baker, who's a very respected deep tech investor who had said that the most important thing in the world over the next three to four years will be putting data centers into space. And so I thought I'd start with that conversation. What was it like to wake up, I guess, a day or two ago and had your tweet on this topic quoted by Elon Musk himself?

    Philip Johnston (03:00):

    It's kind of crazy being retweeted by Elon because immediately I got four million views and I think I got 4,000 follows or something on X and a bunch of messages and all this kind of stuff. It blew up my X feed for a little while.

    Cody Simms (03:15):

    Anything that has come out of it that's actually been useful or helpful? Or, I guess, just awareness in general, probably.

    Philip Johnston (03:23):

    Awareness in general is always helpful. Nothing in particular at the moment, but yeah, it's definitely helpful for the awareness in general.

    Cody Simms (03:29):

    We're going to jump right into what you guys are building and what you're doing, but it begged the obvious question to me, which is like, is SpaceX going to go try to compete with you in data centers for space, or Grok, I guess, to that end? I'm sure you give that a lot of thought. Is that something that you have to worry about?

    Philip Johnston (03:47):

    I'm not sure competing with us is the right word, but they're very explicitly trying to deliver hundreds of gigawatts of orbital data centers, up to 100 gigawatts per year, within a sort of five to ten year timeframe. So that is exactly what we are also trying to do, but it is by far the largest market opportunity of all time times a billion. So I don't think there'll just be one company doing it. All of the other hyperscalers are going to realize in a couple of years, shit, if we don't have orbital compute, we cannot scale anywhere near as fast as somebody that does. And at that point, either they pay for compute from SpaceX, which is a possibility, or they build it themselves, which is also a possibility, or they partner with somebody who has that capability, at which point I think we become an interesting partner. As you mentioned, SpaceX will have a lower cost base than us because they own the launch, but we will have a lower cost base than all of the hyperscalers.

    (04:38):

    And the hyperscalers that don't have a space arm are Microsoft, Meta, Oracle, Google, and on the neocloud front, Crusoe, CoreWeave, Lambda. So all of those guys are going to find themselves in a predicament quite soon.

    Cody Simms (04:52):

    So basically everyone but Amazon with Blue Origin, I suppose. Let's take a step back. Maybe describe what Starcloud is.

    Philip Johnston (04:59):

    Yeah, Starcloud is building data centers in space. Initially, we're providing cloud and edge services to other spacecraft, particularly DoD and commercial Earth observation constellations. And then later, in a sort of three-year timeframe, we're aiming to compete on energy costs with all data centers terrestrially. That is when we have a lower launch cost with Starship.

    Cody Simms (05:22):

    And you have a prototype up in orbit today?

    Philip Johnston (05:25):

    Yes. We just launched our first spacecraft a month ago. It has the first NVIDIA H100 on board, which is about a hundred times more powerful a GPU, or AI compute, than has been in space before. And yesterday we did some press releases around that, and we've trained the first model. We trained nanoGPT from Andrej Karpathy, and he retweeted that as well. We ran the first version of Gemini in space. So the satellite is now talking to us in the same way that ChatGPT might. And we're about to do a bunch more demos. So we'll be running high-powered inference on synthetic aperture radar data, so SAR data from Capella, and some more demos and some more actual paid workloads from government customers. And then next year we're launching our second spacecraft, the Starcloud 2, which will have a hundred times the power generation of the first, by far the largest commercial deployable radiator in space, and a whole bunch more H100s, a Blackwell chip, some other chips.

    (06:20):

    And yeah, that one will be commercial.

    Cody Simms (06:22):

    We're going to spend some time diving into unpacking little bits of everything you just laid out. But before we do that, maybe walk us through how you got the notion to start this in the first place and decided that you wanted to be audacious enough to pursue it.

    Philip Johnston (06:37):

    I mean, I've been sort of fascinated by space for a long time. I also previously had another company, and decided when I would do another company, it would be in something I'm sort of passionate about and I think could have a big impact. A few years ago, I went down to Starbase, Texas, where SpaceX is building the Starship program. While the launch vehicle itself is very impressive, and it's going to lower the cost of launch by between 10 and 100x, what was really much more impressive, to me at least, was seeing the factories they're building down there, these two Starship gigafactories. So yeah, launch costs might come down by a lot. What's more impressive to me is that launch capacity, the tonnage per year that we can get to orbit, might go up by a thousand X or more. And the reason is these two Starship gigafactories can produce three Starships per day.

    (07:21):

    Each one is reusable, so that capacity builds on itself. Unlike with Falcon 9: with Falcon 9, if you build a new one every day for a year, at the end of the year, you still only have one Falcon 9 upper stage, because it's expendable. Whereas with Starship, if you build a new one every day for a year, at the end of the year, you have 365 Starships. Each one has five times the capacity.
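A toy sketch of the fleet math Philip walks through here; the one-per-day build rate is the episode's illustration, not an actual SpaceX figure:

```python
# Why reusability compounds: an expendable upper stage flies once,
# so the fleet never grows; a reusable vehicle accumulates.
# Build rate is an assumed illustration from the conversation.

def fleet_after(days, built_per_day, reusable):
    """Vehicles on hand at the end of the period."""
    return built_per_day * days if reusable else 1

falcon9_like = fleet_after(365, 1, reusable=False)   # expendable upper stage
starship_like = fleet_after(365, 1, reusable=True)   # fully reusable

print(falcon9_like, starship_like)  # 1 vs 365 vehicles after a year
```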

    Cody Simms (07:39):

    For the non-SpaceX deep experts listening, including myself, maybe unpack the difference between Starship and Falcon 9.

    Philip Johnston (07:47):

    Falcon 9 is a word you might have heard before. Falcon 9 is just what almost all mass to orbit gets to space on right now. So it's the workhorse of getting stuff to orbit. It's run by SpaceX. It's the first one that has a reusable booster, but it still has an expendable upper stage. So every time, they're throwing away $10 million or more on this upper stage. With Starship, it's completely revolutionary, because it's the first one that has both a reusable booster and a reusable upper stage. It's an incredibly hard engineering challenge, because you have to make that one reenter, but it changes the fundamental economics completely. You can think of it like if you were to fly from LA to New York and every time you landed, you had to throw away the plane and build a new one. Can you imagine how expensive the per-seat ticket would be for that flight?

    (08:30):

    That's currently what happens with space travel. Soon it will be like you land, and the plane can take off and go back again, and take off and go back again, and it lowers the cost by maybe a thousand times, because this thing is reusable. That's what's coming down the line with the Starship program: fully reusable launch vehicles.

    Cody Simms (08:48):

    Okay. So basically with Starship, you're building almost the equivalent of a space shuttle, but one that has all aspects of the rocket as part of that shuttle that goes up and down. So the boosters and everything are all connected.

    Philip Johnston (09:00):

    All reusable and connected. Yeah.

    Cody Simms (09:02):

    We haven't featured much on the show in the space ecosystem at all. This is where the entire sort of space tech world is looking: toward being able to launch and take more things into space at a dramatically lower cost than today, by using SpaceX essentially as the shuttle to get up and back from there.

    Philip Johnston (09:23):

    Yes, correct.

    Cody Simms (09:25):

    And so for you, that allows you to get a greater set of data centers into space. But back to my question at the beginning about competing with SpaceX, they get the transport for free, right? So you're still sort of navigating the cost of transport.

    Philip Johnston (09:39):

    Yeah, we are a customer of SpaceX. So we will have a higher cost base than SpaceX, and we will have a lower cost base than every other hyperscaler.

    Cody Simms (09:46):

    Getting back to the macro question, you saw this happening, and then what triggered you to think, oh, there's an opportunity for data centers here?

    Philip Johnston (09:55):

    So the point of being at Starbase, seeing the factories, it got me thinking of these sort of sci-fi concepts that I remember reading about as a kid. I think Asimov in the 40s was writing about space-based solar, which is this idea where you have these huge solar panels in space and you beam that power down somehow, which is a nice idea. And there is a breakeven launch cost where that makes sense. The problem with it is you lose sort of 90 to 95% of the energy in transmission from space to Earth. And so rather than beaming that power down, if you can find a cheap way to get the consumption endpoint to space, and almost all net-new energy projects on Earth right now are being built to power data centers anyway, so instead of beaming the power down to directly or indirectly plug into a data center, if you can move the data center to space, you don't lose 95% of the energy.

    (10:45):

    And so instead of the breakeven launch cost being around $50 a kilo, which is what we think it is for space-based solar, you have a $500 a kilo breakeven point where it makes sense for data centers in space.
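A quick back-of-envelope sketch connecting the two figures Philip quotes; the assumption here is that the tolerable launch cost scales with the fraction of energy actually delivered:

```python
# If beaming power to Earth loses ~90-95% in transmission, keeping the
# consumption endpoint in orbit raises the breakeven launch cost by
# roughly the inverse of the delivered fraction. Figures from the episode;
# the scaling relationship is the assumption.

space_solar_breakeven = 50.0    # $/kg quoted for beamed space-based solar
transmission_loss = 0.90        # low end of the 90-95% loss quoted

delivered_fraction = 1 - transmission_loss
orbital_dc_breakeven = space_solar_breakeven / delivered_fraction

print(round(orbital_dc_breakeven))  # 500, matching the quoted $500/kg
```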

    Cody Simms (10:59):

    And that's mostly recouped through cost of power. And I assume I've heard you kind of mention cost of cooling as well as being the other big cost savings side of things. Is that right?

    Philip Johnston (11:10):

    Correct.

    Cody Simms (11:11):

    So rather than assuming why space is different, let's start with the constraints around Earth. So what are the constraints that a data center has on Earth today that you maybe thought, oh, this could be different?

    Philip Johnston (11:25):

    The primary constraint is power. A secondary constraint is cooling, but cooling is kind of a function of power. And when I say power, what I mean is, if you construct a new data center, assuming that the grid is at capacity, which it currently is, mega over capacity, then you need to also, at the same time, construct a new energy project. Building a new data center is fairly easy, doesn't require too much permitting or any of that kind of stuff. Building a new energy project requires a ridiculous amount of new permitting. So you can have like a five or ten year lead time on a new energy project of that scale, be that nuclear or solar or hydro or any sort of new form of energy. You're looking at up to decades of lead time in permitting for that.

    Cody Simms (12:10):

    So in particular, it's not specifically power, but time to power, I think, is probably the big constraint, right? And so for sure, time to power, interconnect, permitting, all of that is the huge lead time right now. It's a much greater lead time than the actual physical construction of a data center. And then talk about the cooling and water side of things as well. I assume that's also an area where there's maybe some differences in what you can do in orbit.

    Philip Johnston (12:36):

    Typically, data centers on Earth use water for cooling because it's much cheaper than using air to cool, depending on where you are in the world. If you're in a warmer place like Texas, then using water is way more efficient than using air. Actually, the amount of water you use in the actual data center for cooling is less significant than the amount of water you would use if you're, for example, building a new coal-fired or nuclear power station, where they have these enormous evaporation towers for cooling. That actually uses the bulk of the water versus actually keeping the data center cold. But they both consume quite a bit of water. So that's the main constraint there.

    Cody Simms (13:18):

    Now you had the aha that maybe space can help with minimizing these constraints. Let's talk about each of those. So maybe start with energy, talk about solar in orbit and what that looks like.

    Philip Johnston (13:29):

    So we run enormous solar panels, which can generate ... On our website, we have a concept video of a five-gigawatt, four kilometer by four kilometer solar panel. The one we've got in orbit right now has a one kilowatt peak power draw. The next one will have 10 kilowatts, although much more solar than the first, because the first is running on batteries a lot of the time. For the third version, which is 100 kilowatts, you're looking at a tennis court-sized solar array, a reasonably decent-sized solar array, and then we scale up from there essentially. And yeah, we get unlimited low cost energy in the form of solar in that way.

    Cody Simms (14:02):

    In terms of power production, are there any valleys in power production in space? Does anything about the shape of orbit impact the ability to produce power or are you 24 by seven power production?

    Philip Johnston (14:15):

    It kind of depends which orbit you're in, but you can fly in what they call a dawn-dusk sun-synchronous orbit at around 1,200 kilometers. Then you have twenty-four seven solar. And what that actually means is one square meter of solar panel in space over the course of a year will produce eight times the energy of one square meter of solar panel on Earth, because you don't have seasonality, you don't have a day-night cycle, you don't have attenuation in the atmosphere. I mean, that's one of the big cost savings: fewer solar panels. It's the third cost saving, actually. So you're asking the right question. We should compare a terrestrial solar project with a solar project in space. Terrestrial solar has three big costs. One I talked about already, which is the cost of permitted land; in North America, it can be the biggest cost.

    (14:55):

    Second is the cost of battery storage, because you have a day-night cycle and you need to charge batteries so you still have power at night. And then the third is the cost of the solar cells themselves. So for number one, we don't need permitted land, so the biggest cost is gone. For number two, we don't need battery storage, so the second biggest cost is gone. And then for the last one, we need eight times fewer solar cells, since one square meter of solar panel in space produces eight times the energy. The main additional cost we have is the launch cost. All of the other costs are roughly either cheaper in space or the same. So you can see there's a breakeven point, where the launch cost is below the cost of permitted land, batteries, and the seven-eighths of the solar you no longer need.

    (15:34):

    And we see that breakeven point to be around $500 a kilo.
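A rough reconstruction of the 8x annual-energy claim; the ground capacity factor is an assumed figure folding together night, weather, seasons, and sun angle, while the solar constant is the standard above-atmosphere value:

```python
# Annual energy per square meter: continuous sunlight in a dawn-dusk
# sun-synchronous orbit vs. a ground panel limited by capacity factor.
# GROUND_CF is an assumption chosen to show the order of magnitude.

SOLAR_CONSTANT = 1361.0      # W/m^2 above the atmosphere
GROUND_PEAK = 1000.0         # W/m^2 typical clear-sky peak at the surface
GROUND_CF = 0.17             # assumed ground capacity factor

HOURS_PER_YEAR = 24 * 365
space_kwh = SOLAR_CONSTANT * HOURS_PER_YEAR / 1000          # continuous sun
ground_kwh = GROUND_PEAK * GROUND_CF * HOURS_PER_YEAR / 1000

print(round(space_kwh / ground_kwh, 1))  # ~8, the factor quoted
```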

    Cody Simms (15:38):

    Maybe describe what the solar arrays look like. This is a different kind of PV than you would have terrestrially. Is that right?

    Philip Johnston (15:44):

    Used to be. Space companies used to use what they call gallium arsenide cells, which are ever so slightly more efficient, or can be 50% more efficient. So instead of having 20% useful transformation of sun to electricity, these gallium arsenide cells have 30% on a good day. But that is going out of fashion, and people are just using bog-standard terrestrial silicon cells. And the reason is, if you're less mass constrained, which you are now with things like Starship and Falcon 9, then you don't care about that given the cost increase. So gallium arsenide is about a hundred times more expensive per watt than terrestrial silicon cells.

    Cody Simms (16:24):

    What about radiative damage? Do you worry about decay from no atmosphere getting in the way of the sun's radiation?

    Philip Johnston (16:33):

    Depends where you fly. If you're flying at that sort of 1,200 kilometer orbit, you'd most likely need cover glass, because it's much higher radiation. In the lower altitudes, it's not quite as bad and you can have much thinner film coverings.

    Cody Simms (16:45):

    You mentioned the size of these arrays. You said for a hundred kilowatt system, you're at a tennis court sized solar array roughly. Is that correct?

    Philip Johnston (16:53):

    Slightly larger than that, but yeah.

    Cody Simms (16:55):

    How does that compare to what's been launched successfully into space today? The International Space Station, for example, I assume, has a substantial amount of solar array on it, though I don't know how modern those panels are at this point.

    Yin Lu (17:10):

    Hey everyone. I'm Yin, a partner at MCJ, here to take a quick minute to tell you about the MCJ Collective membership. Globally, startups are rewriting industries to be cleaner, more profitable, and more secure. And at MCJ, we recognize that a rapidly changing business landscape requires a workforce that can adapt. MCJ Collective is a vetted member network for tech and industry leaders who are building, working for, or advising on solutions that can address the transition of energy and industry. MCJ Collective connects members with one another, with MCJ's portfolio and our broader network. We do this through a powerful member hub, timely introductions, curated events, and a unique talent matchmaking system and opportunities to learn from peers and podcast guests. We started in 2019 and have grown to thousands of members globally. If you want to learn more, head over to mcj.vc and click the membership tab at the top.

    (18:09):

    Thanks and enjoy the rest of the show.

    Philip Johnston (18:12):

    In aggregate, by far the most plentiful energy production in space right now is on Starlink. So each one of their Starlink V2s, I don't have the exact number, but I would guess it's in the five to 10 kilowatt range. V3, they're saying, is going to be 20 kilowatts. The V3 is probably more than the entire International Space Station. The radiators on the ISS dissipate around 70 kilowatts, but some of that is also just dissipating the energy from the sun.

    Cody Simms (18:43):

    So these 100 kilowatt arrays, roughly the size of a tennis court, are sizable. How many NVIDIA chips are you able to run on that amount of power production?

    Philip Johnston (18:55):

    Well, with the Blackwell chip, they're approaching one kilowatt per chip.

    Cody Simms (18:58):

    So 100 or so per instance, I guess. Or do you imagine your panels scaling up even larger than that from a volumetric perspective?

    Philip Johnston (19:07):

    What you're constrained by, with that Starlink V3 form factor, is the launch form factor, which is, we're going to be launching out of the Starship Pez dispenser, which is like this stack that shoots stuff out the side. I don't know if you've seen that. So you are constrained on volume and mass per individual satellite. And I would imagine you can probably get to about 100 to 150 kilowatts, but you can't really go above that per satellite.

    Cody Simms (19:34):

    So I'm trying to think. So it sounds like you can get roughly a hundred chips in one instance. Help me paint a mental picture of what that looks like today compared to a typical data center, which is massive, like thousands and thousands of chips. You're sort of at a different order of magnitude here, I would assume.

    Philip Johnston (19:52):

    Per launch, you're at a similar order of magnitude, because each Starship, let's say, can take 50 Starlink V3 form factors. So you're talking about five megawatts per launch, basically. I expect that can probably scale up as well.
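The arithmetic behind these figures, using the rough numbers from the conversation (roughly 1 kW per Blackwell-class chip, 100 kW per satellite, 50 satellites per launch):

```python
# Deployment scale per satellite and per launch, from the episode's
# rough figures. All inputs are approximate numbers quoted in conversation.

chip_kw = 1.0          # ~1 kW per Blackwell-class chip
sat_kw = 100.0         # ~100 kW of solar per satellite
sats_per_launch = 50   # Starlink-V3-form-factor satellites per Starship

chips_per_sat = sat_kw / chip_kw
mw_per_launch = sats_per_launch * sat_kw / 1000

print(int(chips_per_sat), mw_per_launch)  # 100 chips per satellite, 5.0 MW per launch
```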

    Cody Simms (20:07):

    Okay. So let's move on to the second area you talked about, which is cooling. So in space, you're at basically absolute zero, I assume, when you're on the dark side of anything you do. Is that true or does an object in orbit start to generate its own heat as it faces the sun?

    Philip Johnston (20:23):

    So even when you're on the dark side of the Earth, even when you're in Earth's eclipse, you actually get a lot of infrared from Earth. So for one thing, you're probably not going to be at absolute zero unless you are very shaded from both the Earth and the sun, at the back of the spacecraft. And also, yeah, it's not absolute zero, it's about three degrees kelvin. But you definitely absorb a lot of heat from the sun. What's interesting, and what most people don't realize, is you can actually emit about 80% as much waste heat, in terms of wattage, towards the sun as you can away from the sun. So if the sun is like here, and the spacecraft is a flat panel here of solar panels and radiators, if you're running radiators here, you can emit about 80% as much this way as you can that way. And the reason is most of what you're emitting to is not sun.

    (21:07):

    Most of what you're emitting to is deep space.

    Cody Simms (21:10):

    How are you expelling the heat where you don't have any evaporative property, I assume, because you don't have atmosphere?

    Philip Johnston (21:19):

    All of our heat loss must come through infrared radiation. So, as we mentioned, we don't have ... The two ways you keep a data center cold on Earth are either water past the chips or cold air, and we don't have either water or cold air. So we have a liquid that goes past the chips, and then it goes out to this radiator, and then that radiator emits the heat as infrared.

    Cody Simms (21:41):

    The radiator emits its own infrared?

    Philip Johnston (21:44):

    Yeah. So I mean, everything is glowing in infrared all the time. If you had a thermal camera on your face, you'd see that your face is glowing in infrared. When there's a temperature differential, even in a vacuum, when there's a temperature differential between two bodies, one will be emitting infrared towards the other. And so that's how it works, essentially. Our radiator will be just glowing in infrared, if we keep it at about 50 degrees C, and that will get rid of the heat.
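A sizing sketch for a purely radiative cooler at the ~50 degrees C Philip mentions, via the Stefan-Boltzmann law. The emissivity and the 100 kW heat load are assumed values, and a real radiator would also absorb some solar and Earth infrared:

```python
# Radiative heat rejection at 50 C against the deep-space background,
# per the Stefan-Boltzmann law. Emissivity and heat load are assumptions.

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W/m^2/K^4
emissivity = 0.9       # assumed radiator coating emissivity
T_rad = 323.15         # radiator at 50 C, in kelvin
T_space = 3.0          # deep-space background, effectively negligible

flux = emissivity * SIGMA * (T_rad**4 - T_space**4)  # W radiated per m^2 per side
area = 100_000 / flux                                # m^2 to reject 100 kW, one side

print(round(flux), round(area))  # roughly 556 W/m^2 and ~180 m^2
```

Radiating from both faces of a flat panel would roughly halve the required area, which is why deployable flat radiators make sense here.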

    Cody Simms (22:06):

    So those are some of the key differences when you talk about the constraints that Earth already has and how you would be different in space. But then space obviously introduces its own constraints as well that are very unique. Maintenance obviously becomes a significantly greater challenge. I assume security is a totally different challenge. It's less around physical security and more around your ability to prevent malicious hacking, I guess, of the devices, and penetration protection in that regard. Maybe describe some of the constraints that you're having to think about that an Earth-bound data center developer is not having to navigate.

    Philip Johnston (22:42):

    The two big engineering constraints are the ones I've mentioned. One is dissipating the heat in a vacuum, so building radiators. The other is making the chips work in a high radiation environment, and so that's a combination of shielding and software. The other constraints are, yeah, you mentioned security. So we will have encryption in the same way that a Starlink satellite has encryption, essentially. There's also some question around physical security. It's actually much harder to blow up a data center in space than it is to blow up a data center in Virginia, which is where most data centers in the US are. So that's not as much of a problem as people think.

    Cody Simms (23:15):

    What do you think is the constraint that is most misunderstood by people coming at it from a Earthborne lens?

    Philip Johnston (23:22):

    I would say the thermal challenge is most misunderstood. There are like three layers of understanding for the thermal. There's the first layer, which is, oh, space is cold, you can put a data center there. And then there's the kind of midwit meme, which is, space is a vacuum, there's no convection or conduction, it's impossible, you need to run crazy sized radiators. And then there's the sort of God mode meme, where it's like, okay, but there comes a point on Earth where you literally can't spew out more waste heat, otherwise you're going to be boiling the oceans. Actually, one of the key advantages of space is we can scale almost indefinitely with radiative infrared cooling. And on a long-term basis, that is actually one of the key constraints on scaling data centers on Earth: waste heat. So it's misunderstood on a few different levels, let's say.

    Cody Simms (24:09):

    Well, I'm glad I led with the midwit question. Amazing. So with all of this, what stays hard as the cost of launch sort of decreases and everyone who's trying to do this has access to launch vehicles? Basically, what ultimately becomes your moat? If the GPUs are something everyone loosely has access to, and launch is something that anyone can pay for, how do you stay sort of ahead of the pack?

    Philip Johnston (24:41):

    The core IP we're developing right now is, I'd say, around cooling and radiation shielding and hardening. Over time, it wouldn't surprise me if the Chinese get very good at manufacturing low cost and lightweight radiators. People will not be able to use Chinese satellites for data processing, so I think we have a moat against the Chinese, and we can use their components. But yeah, we're just moving way faster than anybody else in terms of innovating new solutions for this stuff. We completely ripped apart the H100 that we've got in space. We cut 80% of the mass from it, removing the heat sinks, power subsystems like the AC-to-DC converter, and the casing, and immersed everything in this liquid cooling thing. And there's a lot of new development that went into that, making that H100 work in space. And like any startup, I think our core moat for now is that we have the best engineering team in the world, moving fastest.

    Cody Simms (25:31):

    You mentioned at the very start of our conversation, the current use case in space is for using actual space data. So it's pulling, I assume it's running inference loads off of data that's collected in space and helping to draw conclusions from those so that you're not having to deal with the sort of latency of sending large training loads up from Earth or sending on demand inference loads back down to Earth. Am I following that correctly?

    Philip Johnston (25:57):

    Correct.

    Cody Simms (25:58):

    Give me some examples. What do your initial customers look like?

    Philip Johnston (26:01):

    So anybody that needs to get information about what's happening on Earth down quickly. The bottleneck right now is you have to wait for a ground station. Let's say you're taking imagery of the Strait of Taiwan. You want to know, has a ship left China towards Taiwan? What happens right now is, because these satellites are not fixed above the Earth, they're orbiting pretty fast, you take an image of the Strait of Taiwan, then you have to wait for it to pass a ground station, and then you have to downlink imagery of the entire Strait of Taiwan, which is maybe many hundreds of gigabytes or terabytes or whatever. You're not going to get that information back quickly. When we're in space, people will be able to ship that data to us with an optical terminal, either directly or through a backhaul network. So we will fly three optical terminals on our second satellite.

    (26:48):

    And optical in space offers much higher data rates than space-to-ground, because space-to-ground is RF and optical is just way, way faster. So we can run inference on that imagery on orbit and then downlink the insight from it in real time. The insight might be that there is a vessel in this location, or there's a wildfire in this location, or a ship has capsized and there's a lifeboat here: anything where latency matters.

    Cody Simms (27:14):

    Do you need to be line-of-sight to whatever satellites are collecting said data in order to do that? Or is there a cross-link capability in space that is still faster than having to deal with an uplink-downlink connection to Earth?

    Philip Johnston (27:28):

    There are three cross-link options coming online very soon. That would be one way of doing it. The other is that when we have several of our own spacecraft, the idea would be that at least one of them is in line of sight of our customer satellites.

    Cody Simms (27:45):

    Do you envision a world in the future where there is substantial uplink and downlink from Earth and you actually are running either inference or training data centers in space? Or is that pretty far out, and there is a large enough market right now just on space-based intelligence calculations that there's a near-term opportunity here?

    Philip Johnston (28:04):

    Yes, I envision that world. That world is coming extremely fast. SpaceX is talking about building a hundred gigawatts per year of compute in space. That's like the entire US power grid in three or four years. So that is coming. I mean, they're not joking when they say that.

    Cody Simms (28:18):

    So talk about where you are today, then. You've got an initial prototype running in space. As you said, you've got the first set of NVIDIA chips actually running inference in space today. What did that look like? When did that go live, and what's next?

    Philip Johnston (28:32):

    Sure. So we launched our first spacecraft on November 2nd, five weeks ago now, with the first NVIDIA H100 onboard, 100 times more powerful GPU compute than has ever been in space before. We've trained the first model in space, Andrej Karpathy's nanoGPT model. We have run high-powered inference on imagery, and we're also running a version of Gemini as a more entertaining demo. It's an amazing accomplishment by the team, to be honest, because even a few months ago most people said that you couldn't run an H100 in space, and we've proved that you can.

    Cody Simms (29:03):

    And you've announced a partnership recently with Crusoe. That's how I originally came across you guys: I was reviewing some documents I had access to from Crusoe, saw your name in there, thought, "Who are these guys?", and reached out to you. Since then, you've launched your initial prototype into space and announced this Crusoe partnership. So maybe share a little bit about what that looks like.

    Philip Johnston (29:22):

    With Crusoe, I've been mainly speaking with Cully, one of the co-founders and the president there, and we've announced two partnerships, actually. The first one is that we will be running a version of Crusoe Cloud on our second spacecraft next year. For later iterations, we've come to an agreement to provide them with power, up to 10 gigawatts from the early 2030s. Essentially, the way that would work is: we don't really have any ambition to build our own cloud, because these guys have been doing this for a long time. It's their core business, and they have a great offering. Our core business is essentially being a low-cost energy provider. So we will give Crusoe a box that has power, cooling, and connectivity. They can put whatever chip architecture they want in that box and sell it to their customers at whatever rate they want.

    (30:09):

    We give them power at three cents per kilowatt-hour, and for that we can very easily cover the cost of launching, designing, building, and all the rest of it.

    Cody Simms (30:18):

    Ultimately, then, are you a power seller? Is that the business model for Starcloud?

    Philip Johnston (30:25):

    That's definitely one end state of the business model. People can put whatever chip architecture they like on us, and we essentially sell power, cooling, and connectivity.

    Cody Simms (30:34):

    Your power would be sort of all-in: the cost of launch, the cost of maintenance of said spacecraft, the cooling, and everything else. You're basically selling them an all-in cost of power to run on. And then do you manage and operate these spacecraft?

    Philip Johnston (30:53):

    Yeah, we'll be managing and operating the spacecraft. What they choose to do inside that box is up to them. At least for the first few spacecraft, we're not going to have much maintenance capability on there, so there will be some redundancy on the critical systems and some over-provisioning. Over time, the whole industry is moving towards robotic maintenance, and we see that as the way it's going as well.

    Cody Simms (31:12):

    So it sounds like the benchmark that matters most for you, then, is dollars per kilowatt-hour for GPUs. Is that ultimately what you need to solve for?

    Philip Johnston (31:21):

    Yes.

    Cody Simms (31:25):

    Is there a different business model where you are selling hardware to other people to run and operate? Or is that, I guess, probably still to be determined? You're pretty early in the path here.

    Philip Johnston (31:35):

    We are developing very useful IP. Let's put it that way. For example, any high-energy use case in space is going to require being able to dissipate heat in a vacuum. So things like asteroid mining, refining of materials in space, manufacturing on orbit, space hotels, all of these will require dissipating large amounts of heat in a vacuum, for which you'll need a very large, low-cost, low-mass deployable radiator, which is the core IP that we are developing. So if somebody were to want to buy that as a component, I can see a world where we start selling that as well.

    Cody Simms (32:07):

    So now, going back to our original conversation, the reason why you would exist as a standalone company relative to SpaceX and Starlink, it sounds like, is: A, yes, they're always going to be cheaper to launch and get something into orbit, but you are laser-focused on just this particular problem and use case, so you will always be optimizing around operating these data center spacecraft; and B, there are going to be plenty of hyperscalers and others who don't want their actual inference loads being managed and run by SpaceX. Is that sort of the core story of Starcloud?

    Philip Johnston (32:47):

    That is an accurate summary, yes.

    Cody Simms (32:50):

    And I guess lastly, just to complete the loop here, talk about your background. What were you doing before this?

    Philip Johnston (32:56):

    My background is that I spent the first five years of my career on the engineering and software side. Before that, I studied applied math and theoretical physics for my undergrad and master's. Then I moved to the more commercial, product side of things. I was with McKinsey for a few years working with the space agencies of various governments, then founded and sold another company, and then started on this two years ago.

    Cody Simms (33:17):

    Describe a bit about the company's path so far. I think you guys went through YC. You've raised a bit of funding from some notable investors. Where are you to date?

    Philip Johnston (33:28):

    Yeah, we started in January last year, and then in June we went through Y Combinator. To date, we've raised about $34 million, and we'll potentially look to go out for a Series A in Q1 next year.

    Cody Simms (33:39):

    I mean, that's a pretty amazing accomplishment for a small team, and on relatively little capital raised, too. I guess the last question I have to ask you is: what would have to be true for Starcloud to not work? There have been some critics out there who've pointed out reasons why your solution may not be credible, from a cooling perspective, from a power perspective, et cetera. There's a NASA scientist who wrote a piece recently with a bunch of critiques based on things he's seen in his career. Maybe talk a little bit about some of those critiques and where you believe they fall short from your perspective.

    Philip Johnston (34:18):

    There's been some pretty thoughtful analysis of what we're doing. Critics often point to the cooling; I think that's solvable. If there were to be a 10x reduction in energy costs on Earth for some reason over the next 50 years, that would probably mean we are not a viable business. But I would say that if you extend life out on a more cosmic timeframe, let's say even just a thousand years, there is zero possibility you can continue to scale compute on Earth.

    (35:05):

    So at some point you definitely have to figure it out. Where that timeline falls is up for debate. If the cost of energy on Earth were to come down to 0.2 cents per kilowatt-hour, that would be one thing that would certainly stop us from being viable. But the lowest forecast cost I've ever seen for fusion is 20 cents per kilowatt-hour, and that's from Helion, who are the most incentivized to give a low forecast. So it doesn't look like it's going to happen anytime soon. If for some reason demand were to stop growing, I think we would probably not replace existing data centers on Earth. It's only if you need new data centers that you would be building in space. So if there were to be a massive drop-off in demand growth, we would not be super useful at that point.

    (35:44):

    Thermal is completely solvable, as is radiation, so in the end I wouldn't count those among them. And if launch costs were to take a very long time to come down, that would also be a problem.

    Cody Simms (35:54):

    What do you think the next five years looks like?

    Philip Johnston (35:56):

    I'll answer for 10 years. In 10 years, I think most new data centers will be built in space, and that will still only be maybe less than 1% of the total data center stock, but it will be a much faster-growing proportion. In five years, I think we'll be at rate production. So I think we'll be producing at least tens of gigawatts per year of compute, and that will scale up to probably hundreds of gigawatts per year by the end of 10 years.

    Cody Simms (36:23):

    One last question. Is there anywhere you particularly need help, or areas where our audience, if they're excited by what they've heard, can jump in and try to support you?

    Philip Johnston (36:33):

    On the hiring side, we're looking for electrical engineers in power electronics and software. We're pretty good on the mechanical, thermal, and spacecraft-design side of things.

    Cody Simms (36:43):

    Philip, this has been an incredible exercise for me, forcing me to think about the world differently. I appreciate you taking the time to join us, and congrats on what you've achieved thus far. It's an amazing accomplishment for a small team of 12 people, and I'm excited to follow your journey and see what comes next.

    Philip Johnston (37:01):

    Thanks so much, Cody. Really appreciate it.

    Cody Simms (37:04):

    Inevitable is an MCJ podcast. At MCJ, we back founders driving the transition of energy and industry and solving the inevitable impacts of climate change. If you'd like to learn more about MCJ, visit us at MCJ.vc and subscribe to our weekly newsletter at newsletter.mcj.vc. Thanks, and see you next episode.
