Turning Wasted Renewable Power into AI Compute with Rune

William Layden is Co-founder and CEO at Rune, a company building modular, behind-the-meter micro data centers that plug directly into solar and wind plants. These units operate on a fully electric, DC-to-DC architecture—bypassing the traditional grid and unlocking new economics for compute at renewable energy sites.

In this episode of Inevitable, Layden explains how solar clipping and curtailment leave vast amounts of clean power stranded—and how Rune’s “RELIC” units turn that waste into usable compute. The conversation dives into DC architecture, Bitcoin as a beachhead market, and why traditional data centers are ill-suited to an era of distributed energy. Layden also unpacks why modular infrastructure may be the fastest path to deploying AI-scale compute at the edge of the energy transition.

Episode recorded on Jan 27, 2026 (Published on Feb 17, 2026)


In this episode, we cover:

  • (3:19) An overview of Rune

  • (7:15) How energy flows and gets lost in today’s power stack

  • (10:50) Clipping: the hidden inefficiency in solar

  • (14:17) Curtailment: why the grid rejects clean energy

  • (20:47) Starting with Bitcoin before scaling to AI workloads

  • (25:50) Which compute loads can run interruptibly

  • (27:26) Rune’s business model and value to power producers

  • (33:16) Where Rune operates and who’s backing it

  • (36:10) Why modular, DC-native design matters for scale

    [00:00:00] [Cody Simms]

    [00:00:00] Today on Inevitable, our guest is William Layden, Co-founder and CEO of Rune. Rune builds modular, behind-the-meter micro data centers that sit directly at solar and wind sites, operating on a DC architecture that bypasses the grid entirely and turns otherwise stranded renewable energy into compute. In order to understand why this matters, I decided to unpack a few things with William in this conversation.

    [00:00:31] First, I wanted to look at how electricity actually flows from renewable generation into the grid today. And second, how data centers are designed to receive and consume electricity today. Both of those systems have largely been bolted onto a legacy power grid rather than designed together.

    [00:00:51] And part of what I wanted to explore with William is where Rune has identified real inefficiencies in that setup, how they're trying to address them, and whether the future of compute could start to look more like energy, becoming modular and distributed rather than continuing to concentrate into ever-larger, monolithic, construction-heavy data centers. From MCJ, I'm Cody Simms, and this is Inevitable. Climate change is inevitable.

    [00:01:24] It's already here, but so are the solutions shaping our future. Join us every week to learn from experts and entrepreneurs about the transition of energy and industry. William, welcome to the show.

    [00:01:43] [William Layden]

    [00:01:43] Hey, Cody. Thanks for having me.

    [00:01:44] [Cody Simms]

    [00:01:45] Well, I'm really excited to learn from you today all about the power electronics that come into play when it comes to renewables and data centers, and the solution you're trying to bring in the middle of that. I think it quite dramatically changes some of the flow of electricity and power in this whole world that is so important to where the economy is heading.

    [00:02:09] Maybe just give us the high-level description of Rune, and then we'll dive in from there.

    [00:02:13] [William Layden]

    [00:02:13] Yeah, yeah, absolutely. I love that you said power electronics, too. I'm so glad we're just jumping right into power electronics.

    [00:02:19] Rune, at its simplest level, we're building solar-powered data centers, solar and wind-powered data centers. We have a vision to transform the world's most abundant energy resources into the world's most flexible and scalable platform for compute, and we do that through our data center product, which is called RELIC. And RELIC stands for Renewable Energy Linked Interruptible Compute.

    [00:02:42] It's a mouthful. That's why we say RELIC. And the RELIC is a highly modular data center product.

    [00:02:48] It's different from anything else that's out there. We think it has the potential to dramatically accelerate time to compute and also scale compute in a way that maybe you have to go to outer space to do.

    [00:02:59] [Cody Simms]

    [00:02:59] We recently had an outer space-focused data center company on the show. So that is one extreme example, but I think you're trying to solve the problem terrestrially. Maybe describe for a minute what the physical Rune product looks like, and then we'll get into where it sits in the stack.

    [00:03:16] [William Layden]

    [00:03:16] You know, maybe starting by saying what it's not. So it is not a building. It is not a 40-foot shipping container, and it is not grid connected.

    [00:03:24] We are building these modular data center products, or compute clusters. The power rating is 100 kilowatts per RELIC, and the dimensions are roughly 8 feet long, 2 feet wide, 5 feet high. And that 100-kilowatt power amount and those really small dimensions, those are deliberate design choices, right?

    [00:03:45] So we are plugging into solar power plants and wind power plants. And solar especially tends to be not very energy dense. I think it's the least energy dense generation source we have.

    [00:03:56] And so we need to make our load equally as not energy dense to take advantage of all the energy we have. And so think of us as distributed load designed for distributed energy resources.

    [00:04:08] [Cody Simms]

    [00:04:09] How much do these things weigh roughly? Like how big are they?

    [00:04:11] [William Layden]

    [00:04:12] They're 2,000 pounds fully loaded. So they're 2,000 pounds fully loaded with compute, cooling, comms, the enclosure, everything like that. So maybe to walk you through the actual aspects of it, we call it modules, the different modules that make up a RELIC.

    [00:04:25] We've got a PLC, which is an onboard computer that turns the RELIC on and off. These RELICs turn on based on the availability of power and respond to the prices that the power would otherwise get.

    [00:04:37] We've got the stainless steel enclosure. So that's what actually houses the compute elements. And it's highly ruggedized. And then we've got the DC-DC converter, which back to power electronics, that's actually how we tap the high voltage DC coming out of the power plant.

    [00:04:52] And just in terms of how we deploy, they're rolled to the site on trucks, dropped off with a lift gate, put into place with a forklift, and then we take two wires and plug them in. And that whole process takes about 45 minutes. So we're energized and cash flowing in 45 minutes.

    [00:05:09] [Cody Simms]

    [00:05:09] So data centers that can essentially be constructed by being carried off a truck onto a forklift and then plugged in with, you said, two wires into whatever DC sort of power source you're getting access to.

    [00:05:22] [William Layden]

    [00:05:22] That's right. We have a philosophy, products, not construction projects. And that culminates in this design choice of the RELIC.

    [00:05:28] [Cody Simms]

    [00:05:29] And from a permitting perspective, there's got to be more to the story than just, oh, you can plug it in with two wires.

    [00:05:35] [William Layden]

    [00:05:35] Yeah, it's pretty light permitting, honestly. I mean, the main permitting we have to go through is with AHJs, or authorities having jurisdiction. It's like your fire marshal and stuff like that.

    [00:05:46] Oftentimes, these permits are voluntary. The beauty of these things is they're not permanent structures. So the permitting is very, very light.

    [00:05:56] [Cody Simms]

    [00:05:56] And then from a data connectivity perspective, is it Starlink connected, 5G? How are you actually running loads on these things and sending them up and down?

    [00:06:04] [William Layden]

    [00:06:05] Both, right? Starlink, 5G. We like to use Starlink.

    [00:06:09] And yeah, I think that's a great product. So we're primarily Starlink.

    [00:06:11] [Cody Simms]

    [00:06:12] Okay. So you've got 100 kilowatts. You've got a 100-kilowatt super micro modular data center.

    [00:06:18] I assume you're bringing them on site in rows that essentially sit underneath solar panels or sit in a cluster around a windmill. Is that the right way to think about it? A wind turbine?

    [00:06:31] [William Layden]

    [00:06:31] Yeah, yeah. That's right. So there's a concept in load or renewable energy called behind the meter.

    [00:06:38] And behind the meter refers to the substation meter. And you're tapping that substation. And so you're not really grid connected.

    [00:06:42] You're behind the meter. Very exciting. We're further behind the meter than that.

    [00:06:47] So if you think of a solar power plant, you've got the substation, you've got a step-up transformer, you've got an inverter, you've got a combiner box, and then you've got the modules themselves, the panels. And we're tapping the combiner box. So we are as integrated with the actual generation source as possible.

    [00:07:07] Yeah, so we tap in to combiner boxes.

    [00:07:08] [Cody Simms]

    [00:07:09] So pretend you didn't exist. Maybe we can use solar, you can use wind, whichever example is easier to help people get their head around, help me get my head around. Talk about what does power look like today coming from a solar panel or a wind turbine, ultimately to get to that substation where you can have a behind the meter solution, which is not the typical data center setup today.

    [00:07:32] In fact, it even goes further, which is it goes onto the grid and then as a data center, you're buying power from the grid. So maybe walk us through, what does that electricity flow, that power flow look like in the normal world today if Rune didn't exist?

    [00:07:45] [William Layden]

    [00:07:45] Let's use solar as an example. So you've got so many solar modules. The power plants we work with, they've got like 500,000, 750,000 solar panels.

    [00:07:53] So these things are just of a massive scale. So you've got a collection of solar panels that feed into a combiner box, which feeds into an inverter, which feeds into a step-up transformer, which feeds into a substation. And so you're collecting electricity throughout that entire process, transforming it from direct current into alternating current and stepping it up in voltage.

    [00:08:16] So it gets ready for the grid or the bulk transmission system.

    [00:08:21] [Cody Simms]

    [00:08:21] And then today, so that grid is using AC, alternating current, and is going out wherever it may go. And then a data center is buying power off that grid, pulling it in as alternating current, and then ultimately needing to then convert it back to DC. In fact, actually sometimes multiple times in order to operate the data center.

    [00:08:41] Is that right?

    [00:08:42] [William Layden]

    [00:08:42] Yes. You can think of that whole schematic of the solar facility that I laid out, just run in the exact opposite direction, right? So you're going from the grid to the substation, stepping down in voltage, and then rectifying that power from AC to DC.

    [00:08:56] So this is very interesting where solar produces direct current, computers run on direct current, but every electron of direct current flows through a multi-chain conversion system that was designed for alternating current and designed for really the 20th century, the factories of the past, which did mechanical work, not really creating intelligence like the factories of today, the AI factories do.

    [00:09:27] So that's really the insight that Rune has. Why are you transforming DC all the way to AC going through an incredibly complex process to power compute when you can do DC to DC and view electricity as a native input? That's our philosophy.

    [00:09:42] [Cody Simms]

    [00:09:42] Even if you're using, quote unquote, "behind the meter power" and pulling it off of a substation, is that power still going through DC to AC, AC to DC conversion?

    [00:09:52] [William Layden]

    [00:09:52] Yeah, you as a data center, if you are tapping the substation as a behind the meter developer of the data center, you are just skipping the grid part, but you've still got to transform that high voltage alternating current power back down to low voltage direct current that your computers can use.

    [00:10:09] [Cody Simms]

    [00:10:09] So I'm going to introduce two concepts that I've learned about in my prep with you, one of which I think most of our listeners have probably heard of, which is the idea of curtailment. I'm going to come back to that one because that one's actually, for me, a little easier to understand. You've also talked a lot about this concept of clipping, which has to do with this DC to AC conversion and I think is a fundamental part of the business you're building.

    [00:10:33] Can you describe what clipping is and then describe how you see that as being a fundamental driver of economics for the business you're trying to build?

    [00:10:41] [William Layden]

    [00:10:42] You know, clipping goes back to like the architecture of these solar power plants. So the solar panels themselves operate in direct current, they produce direct current, and they go into an inverter that turns DC to AC. But the solar panels themselves are oversized relative to that inverter.

    [00:10:59] So you might have 1.3 units of DC or sometimes even 1.5 units of DC going into that inverter, which can take one unit of DC and transform it into one unit of AC. So I think of it like, imagine you've got a one and a half liter bottle and you're pouring it into a one liter glass. You're going to have spillage and that water is not recoverable.

    [00:11:22] And that's exactly what's happening with the solar industry. And it's not that big of a deal in terms of wasting the power because solar, after all, is a zero fuel cost, zero carbon cost energy source. However, if I were to go to you and say, hey, you are wasting five to 10% of your product and I'm willing to buy it, that's an interesting value proposition.

    [00:11:44] And that's what we can offer by being DC connected and being connected at the combiner box level.
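
To put rough numbers on the clipping Layden describes: the 1.3 DC:AC ratio comes from the conversation, but the plant size and output figures below are hypothetical, and this is only a sketch of the instantaneous calculation, not how any real plant is metered.

```python
def clipped_power(dc_output_mw: float, inverter_ac_rating_mw: float) -> float:
    """Instantaneous clipped power: the DC the inverter cannot convert to AC."""
    return max(0.0, dc_output_mw - inverter_ac_rating_mw)

# Hypothetical plant with a 1.3 DC:AC ratio: 130 MW of panels behind a 100 MW inverter.
inverter_mw = 100.0
dc_at_noon = 128.0   # near the full DC rating around solar noon
print(clipped_power(dc_at_noon, inverter_mw))   # → 28.0 MW spilled
print(clipped_power(60.0, inverter_mw))         # → 0.0 (morning: no clipping)
```

Summed over the midday hours, that spillage is the "five to 10% of your product" Layden offers to buy.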

    [00:11:50] [Cody Simms]

    [00:11:50] And you have to prove to these solar plant operators that they are indeed clipping X percent of the power they generate, or do they already have typically this insight?

    [00:11:59] [William Layden]

    [00:12:00] And to me, this, okay, I might sound like a nerd. I'm like so interested in all this.

    [00:12:04] [Cody Simms]

    [00:12:04] William, I'm going to warn you. You've already sounded a lot like a nerd through this whole conversation, which is why we love you.

    [00:12:10] [William Layden]

    [00:12:10] Great. So to me, there's basically no solar asset owner who can tell you reliably and continuously their DC output, because what the asset owner looks at is their inverter availability and the amount of power they're putting onto the grid. And that's a meter that is AC connected. So everything behind the inverter, all the DC stuff we're talking about, they're basically blind to.

    [00:12:35] I don't want to say it's totally blind because there's ways to figure it out. They're cumbersome, costly, slow, but they don't actually know with accuracy the amount of DC power they're producing. They've got models, you know, solar is physics.

    [00:12:50] So we can tell you the amount of clipping you'd have, but reality often differs from these models. And that's been so interesting for us because we'll go to these solar power plants. We say, hey, we think you've got X amount of clipping.

    [00:13:00] And they're like, yeah, yeah, whatever. We don't think so. And it's a lot more than you expect.

    [00:13:04] And it's a lot more for a variety of reasons. So right now we've got data centers operating in the Atacama Desert in Chile, exclusively off clipping. And they're running seven hours a day on just clipped power.

    [00:13:14] [Cody Simms]

    [00:13:15] I mean, if you said it's upwards of 10%, if you've got a 300 megawatt solar farm, that's 30 megawatts of power sitting there. That's a lot of power.

    [00:13:24] [William Layden]

    [00:13:24] Yeah. Basically there's a hidden power plant behind every single solar power plant. And with Rune, every single solar power plant is a latent data center.

    [00:13:33] That's what this technology is really enabling.

    [00:13:34] [Cody Simms]

    [00:13:35] So clipping is this idea that between the power generation and the inversion, there's this lossiness. And it sounds like most solar farms aren't measuring it. You're not really seeing the power come through until it's inverted.

    [00:13:49] And that's when you start to measure the amount of power that you have.

    [00:13:53] [William Layden]

    [00:13:53] Yeah.

    [00:13:53] [Cody Simms]

    [00:13:53] And so, A, you kind of have to prove to them that it exists.

    [00:13:57] But then B, once you do prove to them it exists, it sounds like it actually can be fairly substantial. Then there's also curtailment, which is a separate problem that renewable power has. Maybe describe curtailment.

    [00:14:08] [William Layden]

    [00:14:09] Yeah. Curtailment is basically when the grid tells you, hey, we don't want your electricity. And it tells you that in two ways.

    [00:14:16] One, it might give you a reliability signal where it says, hey, there's too much electricity trying to be exported from your area. So we're just going to shut you down because the lines can't handle it. And the other way that they signal curtailment is through price, economic-based curtailment.

    [00:14:32] And so, price of electricity in real-time markets is often priced every five minutes. And when the sun comes up in these solar-rich areas like Texas or California, you can just see the price collapse. And the price will collapse to negative five bucks, let's say, because oftentimes these power plants will bid into the market assuming they're going to be able to generate a renewable energy credit or it'll be even lower because of the production tax credit.

    [00:15:00] So it's a really interesting mix of economics and policy that drives curtailment. So it's wasted power. The moment that you are likely to produce the most power is the exact moment the grid says, turn off. We don't want your power.
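
The dispatch logic implied here (run the compute when the plant is curtailed, or when the five-minute price drops below what the compute can pay) can be sketched as follows. The function name, threshold, and signals are illustrative assumptions, not Rune's actual control logic:

```python
def relic_should_run(lmp_usd_mwh: float, breakeven_usd_mwh: float,
                     curtailed: bool) -> bool:
    """Run compute when the grid is curtailing the plant (the energy would
    otherwise be wasted) or when the real-time price is below breakeven."""
    return curtailed or lmp_usd_mwh < breakeven_usd_mwh

print(relic_should_run(-5.0, 40.0, False))   # → True  (negative midday price)
print(relic_should_run(120.0, 40.0, False))  # → False (scarcity pricing: sell the power instead)
print(relic_should_run(35.0, 40.0, True))    # → True  (reliability curtailment)
```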

    [00:15:15] [Cody Simms]

    [00:15:15] And when power is curtailed, at some point, it's basically, as I understand it, just shot into the ground, right? It's actually grounded power.

    [00:15:23] Where in the value chain that we just laid out, where is that typically happening?

    [00:15:27] [William Layden]

    [00:15:27] So with a solar facility, it's the inverter. The set point that had been saying, give me one unit of DC and I'll push out one unit of AC, goes down.

    [00:15:37] The inverter's set point will say, I don't want one. I want none. And so you've got all this infrastructure behind the inverter that's perfectly capable of producing power.

    [00:15:45] It might have been producing power five minutes ago at 100% capacity. And yeah, now it just goes nowhere.

    [00:15:52] [Cody Simms]

    [00:15:52] Once again, it's the inverter that actually controls that decision to curtail today.

    [00:15:57] [William Layden]

    [00:15:57] Yeah, the inverter set point will drive that decision.

    [00:15:59] [Cody Simms]

    [00:15:59] Okay. So with all of that, that helps me understand sort of clipping, curtailment. Now, if I'm a solar farm owner, batteries seem like they also similarly solve these problems or no?

    [00:16:12] [William Layden]

    [00:16:13] Yeah, I think batteries can solve these problems. It's funny because we're deployed at a 200-megawatt solar power plant in Texas, and they've got batteries, and they're equally happy to work with us. I think that batteries, they're often AC connected.

    [00:16:29] So batteries are tapping that substation, and there's a good reason for that. With batteries, you want to be able to buy energy when it's cheap and sell it when it's expensive. And sometimes you can do that by buying directly from the power plant, but sometimes you can do that by buying directly from the market.

    [00:16:43] And so for batteries, it's really important to be AC connected. Actually, previously they thought batteries would be DC connected, but now they're doing all AC, even though energy is stored as direct current in batteries. So the difference between us and batteries, I think, is fundamentally that, number one, there is no capital expenditure for the solar power plant or the wind power plant that we're working with.

    [00:17:03] So when we go to a solar power plant owner, we say, hey, let us buy the power you can't sell, or you don't want to otherwise sell, and it's not going to cost you a capital expenditure outlay. And so the value equation is dramatically different for that proposition compared to a battery. The other thing that I think is very different between us and batteries is I think batteries are actually anti-network effect technologies, meaning we all know that telephone is the classic network effect technology.

    [00:17:31] One telephone is not valuable. Who are you going to call? But a million telephones or a billion telephones, that's really valuable.

    [00:17:37] We've connected the whole world. And I think the opposite is true for batteries. And you see this play out in the economics.

    [00:17:42] So there are some batteries, the first movers, that are going to capture so much value. But with every incremental battery you install, particularly in power markets that trade nodally, like we have in the United States, you're going to cannibalize value. Because what did we just say was the main way batteries make money?

    [00:17:59] It's through arbitrage, buying low and selling high. Well, if enough people buy low and sell high, that spread collapses. And I think that's what you've seen in battery revenues over the past three years in markets like Texas and California.

    [00:18:13] And we don't do that. We're not an anti-network effect technology. We're a network effect technology, meaning we're taking this locally constrained electricity that's subjected to the whims of the power market and weather, frankly, and we're transforming it into a higher value commodity that's a part of a global market.

    [00:18:32] And if you string together a bunch of computers, I don't think you're actually going to destroy value. I think you're going to drive material and scientific progress. And by the way, that global market for compute is way, way, way harder to saturate than the nodal market in West Texas, let's say.

    [00:18:48] [Cody Simms]

    [00:18:49] So I heard kind of three big points. One, we're happy to work with batteries. Many of the plants we work with use Rune and have batteries.

    [00:18:56] Two, batteries sit way downstream of Rune. They're at the site of substations, meaning they're operating in AC, because unless they are wholly there to back up the local facility (which is probably more of a commercial, industrial, or residential use case than a solar farm use case), they by definition need to be grid connected. Their intent is to buy low, sell high, and sell power back to the grid.

    [00:19:22] So they're thus a grid connected architecture. So they're sitting in that AC stack, which is downstream of the clippings and curtailment decisions that we talked about. And then three, a bit of a sort of theoretical view that the more batteries you add, the worse any one battery can be at arbitraging the buy low, sell high economics.

    [00:19:42] And so ultimately, batteries may have degrading returns as they become a scaled technology. Whereas your argument is that data centers and compute only increase in value as you build the footprint.

    [00:19:57] [William Layden]

    [00:19:57] Yeah, that's exactly right.

    [00:19:58] [Cody Simms]

    [00:19:59] So with that, talking about that network effect of compute, you've taken a similar approach to a company we've had on the pod a few times, Crusoe, which is you may have aspirations of serving AI workloads and things like that, but you've started with a compute business that has essentially an anonymous permissionless buyer on the other end, which is Bitcoin. Maybe describe a bit about that decision and what that sort of initial footprint looks like and how and when you decide to also add AI and other sort of compute workloads into your stack.

    [00:20:38] [William Layden]

    [00:20:39] You know, just from the jump, Crusoe has been hugely inspirational to us, to see how they took stranded power in the form of flared natural gas, turned it into Bitcoin, and then drove way up market to build Stargate and all the awesome things they're doing. We have deliberately selected Bitcoin mining as our beachhead market because it's a bit of a chicken-or-egg problem. You need to convince power producers that they should work with you.

    [00:21:05] And then if you want to serve those non-Bitcoin workloads, traditionally you need some kind of customer or offtake agreement. So which are you going to get first? Are you going to get the offtake agreement or are you going to get the power producer to agree to work with you?

    [00:21:19] And we felt that getting an enterprise customer to purchase compute or to sell into one of these channel partners to sell compute, that's a fairly established sales notion and business model. Not saying it's easy, but it's fairly established. And we took the opposite approach.

    [00:21:33] We said, working with solar and wind producers to allow us to plug into their extremely expensive infrastructure in a novel way to purchase energy that they wouldn't otherwise sell. That's a more challenging aspect of our business model. How do we validate the assumption that they want to do that?

    [00:21:50] And so we started with that angle and Bitcoin was a great way to do that because it basically said, we've got an offtaker. They always want that compute and let's work on the power stuff.

    [00:21:59] [Cody Simms]

    [00:22:00] One of the things that's interesting to me about Bitcoin is the idea that if you're basically generating commodity, you can sell into a global market and you don't have to have a business development partner on the other end. You essentially can sell it on an exchange or spot sell it to a large buyer or whatever you want to do. And you have someone to transact on the other end.

    [00:22:19] [William Layden]

    [00:22:19] That's right. You've got a guaranteed buyer of your power or of your compute power. And that's really valuable for us. In terms of where we're going over the next, let's say 18 months, absolutely.

    [00:22:30] We're rolling out upgraded versions of our RELICs on a frequent basis. We are going to be integrating energy storage onboard the RELIC. We're going to be leveraging orchestration software across the network of RELICs.

    [00:22:42] And we think that's going to allow us to move up market to tap high performance compute workloads, including AI inference.

    [00:22:51] [Cody Simms]

    [00:22:51] Now, Bitcoin is, I think by definition, an interruptible load, meaning you don't have to be mining it all the time. You can decide to mine when you have the power at the right price to mine, and you can turn off your miners when you don't. As you move from ASICs and Bitcoin mining to GPUs and AI, is the same true?

    [00:23:15] Are AI companies willing to shut off a training load in the middle of training or willing to say, we can have higher latency for this inference request and allow you to navigate the usage of your data center according to your ability to do it at a price competitive rate?

    [00:23:33] [William Layden]

    [00:23:34] That's a big, important question, and the way we decide to answer it could be very valuable. I'll start with the technical aspects and then move to the more commercial aspects. But from a technical standpoint, all of these workloads can basically be checkpointed.

    [00:23:47] The first thing you're going to do when you're doing any of these expensive workloads is to institute some kind of checkpointing because even if you've got six nines, it's not 100%. So you always need checkpointing. The degree to which folks want to checkpoint or don't want to checkpoint, or I should say, will tolerate interruptions or not, will be determined by the master of all signals, which is price.

    [00:24:07] And candidly, I don't think that for frontier-model training, an OpenAI or any of these other frontier labs will ever say, I'm willing to make the trade-off to interrupt based on price. I think the value is just so immense there that that's not who we're going to target with interruptible workloads. However, when you look at the market for compute today, this product that we're discussing already exists.

    [00:24:35] So you do have preemptible instances or spot instances. And those instances are, you know, they're virtual machines that are offered by Azure, Google Cloud, AWS. And basically they say, hey, we'll give you a 90% discount in exchange for being able to kick you off the instance with a 30-second notice.

    [00:24:55] So there is already a market for this kind of interruptible compute, and it is determined by price. And I see that's an area that we will contribute to. The things that are different now than they were five years ago are it's incredibly difficult to bring new load onto the grid and to do these giant construction projects.

    [00:25:14] So how do we allow these, you know, hyperscalers or neoclouds to continue to serve those interruptible customers without sacrificing the extremely high margin workloads that they'd like to run all the time? And I think we can actually expand that interruptible instance offering through our product.
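
The checkpointing pattern Layden mentions (persist progress so a preemption only costs the work done since the last save) might look something like this minimal sketch. The file format, step granularity, and `power_ok` signal are arbitrary choices for illustration, not any vendor's API:

```python
import json
import os
import tempfile

def run_with_checkpoints(total_steps: int, ckpt_path: str,
                         power_ok, every: int = 100) -> int:
    """Resume from the last checkpoint, work while power is available,
    and save progress so an interruption loses at most `every` steps."""
    step = 0
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            step = json.load(f)["step"]       # resume where we left off
    while step < total_steps:
        if not power_ok():                    # preemption signal: stop cleanly
            break
        step += 1                             # one unit of placeholder work
        if step % every == 0:
            with open(ckpt_path, "w") as f:
                json.dump({"step": step}, f)
    with open(ckpt_path, "w") as f:           # final save on exit
        json.dump({"step": step}, f)
    return step

# Simulate an interruption after 250 steps, then a resume that finishes the job.
ckpt = os.path.join(tempfile.mkdtemp(), "job.ckpt")
budget = iter([True] * 250 + [False] * 100)
print(run_with_checkpoints(1000, ckpt, lambda: next(budget)))  # → 250
print(run_with_checkpoints(1000, ckpt, lambda: True))          # → 1000
```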

    [00:25:33] [Cody Simms]

    [00:25:34] So maybe just to unpack that a little bit more, describe some of the loads that you think are likely to become interruptible over time.

    [00:25:41] [William Layden]

    [00:25:42] Yeah. So like even like technology companies today, like if you're designing a, let's say, wind turbine, for example, you might not have the willingness to pay to have reserved compute capacity with a hyperscaler. You might say, I'm going to schedule my workloads in a queue.

    [00:25:58] And those workloads are going to be executed via a queue based on the availability of interruptible instances. So when the interruptible instance is available, we'll continue to chop down that queue. When it's not available, we're going to pause because the work we're doing is not as lucrative as a frontier model.

    [00:26:16] So these are, you know, climate simulations, hardware simulations, just, you know, physics-based simulations that require compute. These kinds of workloads are typically what's run on interruptible instances.
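
The queue-based scheduling described above (chip away at batch jobs while an interruptible instance is available, pause when it isn't) can be roughly sketched as follows; the names and the availability signal are illustrative:

```python
from collections import deque

def drain_queue(jobs: deque, instance_available) -> list:
    """Run queued batch jobs while an interruptible instance is available;
    pause (leaving the queue intact) when it is preempted."""
    results = []
    while jobs and instance_available():
        job = jobs.popleft()
        results.append(job())
    return results

# Hypothetical batch simulations queued for cheap, interruptible capacity.
queue = deque([lambda i=i: f"sim-{i} done" for i in range(5)])
window = iter([True, True, False])               # instance preempted after two jobs
print(drain_queue(queue, lambda: next(window)))  # → ['sim-0 done', 'sim-1 done']
print(len(queue))                                # → 3 (work resumes in the next window)
```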

    [00:26:28] [Cody Simms]

    [00:26:28] Yeah. Interesting. So, you know, the theory there would be large-scale batch processes that are not directly market or transaction or real-time data oriented, in theory, could run in an on-again, off-again mode based on when there's availability.

    [00:26:48] [William Layden]

    [00:26:48] Yeah, that's right. And a lot of organizations do that today, right? They're queuing up their workloads to get executed as the compute becomes available.

    [00:26:54] [Cody Simms]

    [00:26:54] One thing we haven't really dug into is who's your actual customer. So on the one hand, you're needing to work with the renewable power plants themselves and actually get your product put on site. But are you selling hardware today?

    [00:27:06] Are you operating AI clouds? Are you partnering with data centers? What does that look like for you in terms of a business model and a customer set?

    [00:27:17] [William Layden]

    [00:27:17] Yeah, we view our customer as the power plant. If you define your customer not necessarily as who you sell to, but as how you make money, then our customer is the power plant. And so our value offering needs to be sufficiently compelling for the power plant.

    [00:27:30] The way we make money is simply we buy electricity at what we think are very attractive prices, and we convert that into a higher value product. In this case, it's Bitcoin. And we live off of our ability to manage that spread and deploy RELICs in an inexpensive way.

    [00:27:47] And being able to deploy RELICs in an inexpensive way goes back to our design choices: we're a 100% electric tech stack, we're 100% direct current, and we are 0% construction project. There is no EPC budget or spend here.
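    The spread Layden describes is straightforward to sketch as a back-of-envelope calculation. The function and every number below (power price, miner efficiency, hashprice) are illustrative assumptions, not Rune's actual figures:

```python
def mining_spread(power_price_usd_mwh, efficiency_j_per_th,
                  hashprice_usd_th_s_day):
    """Back-of-envelope margin on one MWh: mining revenue minus power cost.

    efficiency_j_per_th: joules consumed per terahash (fleet efficiency).
    hashprice_usd_th_s_day: USD earned per TH/s sustained for one day.
    """
    th_per_mwh = 3.6e9 / efficiency_j_per_th  # 1 MWh = 3.6e9 joules
    # One TH/s held for a full day produces 86,400 TH, so revenue per
    # terahash is hashprice / 86,400.
    revenue_per_mwh = th_per_mwh * hashprice_usd_th_s_day / 86_400
    return revenue_per_mwh - power_price_usd_mwh

# Illustrative: $10/MWh curtailed power, a 20 J/TH fleet, $0.05 hashprice.
spread = mining_spread(10, 20, 0.05)  # roughly $94 of margin per MWh
```

    Managing the spread then amounts to keeping the left side (power cost) low and watching the right side (hashprice, fleet efficiency) as market conditions move.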

    [00:28:04] [Cody Simms]

    [00:28:05] So today you have to be good at actually building the physical micro data center that you've developed, the RELIC. You have to be good at understanding price signals coming off of these power plants. Is there curtailment happening?

    [00:28:17] Is there clipping happening? How can you price access to that with the power plant providers? And then you separately have to be good at actually running essentially a Bitcoin arbitrage business, a Bitcoin mining business, which it sounds like is part of your vision, but maybe not the long-term vision of what the company needs to be excellent at over time.

    [00:28:40] Am I following that correctly?

    [00:28:42] [William Layden]

    [00:28:42] Yeah, we want to transform the world's most abundant energy resources into the world's most scalable and flexible compute platform. And those adjectives are very deliberately selected. So when we talk about scalable, well, there's more energy that hits the earth in one hour of sunshine than all of humanity consumes in one year.

    [00:29:00] So we are focused on tapping the most underutilized energy resource we have, and we think that can provide us with immense scale. And then the flexibility angle goes back to this entire framework around direct current, electric tech stack, and modularity. And if you give me a 10 megawatt solar facility in South America, I'll be able to work with you.

    [00:29:21] If you give me a 400 megawatt solar facility in the United States, I'll be able to work with you with the same product. And as our ability to purchase energy changes, meaning if we're just buying wasted power, we can do Bitcoin mining. If we are able to buy wasted power and power that would otherwise be exported to the grid, we can do higher value workloads.

    [00:29:43] So it's a very flexible product, scalable, flexible platform for compute.
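    That dispatch logic, picking a workload tier based on what kind of power is available to buy at a site, can be sketched as a simple rule. The category names and return values here are hypothetical, not Rune's actual logic:

```python
def choose_workload(curtailed_mw, exportable_mw):
    """Pick a workload tier from the power available to buy at a site.
    Tiers and inputs are illustrative only."""
    if curtailed_mw > 0 and exportable_mw > 0:
        # Wasted power plus would-be grid exports: firmer supply,
        # so higher-value compute is feasible.
        return "hpc-batch"
    if curtailed_mw > 0:
        # Wasted power only: fully interruptible Bitcoin mining.
        return "bitcoin-mining"
    return "idle"

choose_workload(5, 0)   # wasted power only
choose_workload(5, 20)  # wasted plus exportable power
```

    The same product runs in both cases; only the workload it is fed changes with the quality of the power it can purchase.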

    [00:29:48] [Cody Simms]

    [00:29:49] But you're not selling the hardware. So in the price that you are offering to these power plants, you have to absorb and account for profitability that includes the hardware, installation of the hardware, servicing and ongoing maintenance of the hardware, and then ultimately your ability to transact on the other end of your compute load in a profitable way.

    [00:30:10] [William Layden]

    [00:30:10] Yeah, that's exactly right. We are developers and operators of the data center. Correct.

    [00:30:15] [Cody Simms]

    [00:30:15] As you move into AI workloads, I assume then are you running cloud instances and selling that cloud instance to neoclouds or other people like that to then resell to an end customer?

    [00:30:28] Or how do you see that evolving?

    [00:30:29] [William Layden]

    [00:30:29] That's right. Yeah. I think the vision for the AI business or the high-performance computing business is to sell into those neoclouds or hyperscalers, contribute to their compute capacity rather than try to build a neocloud from scratch.

    [00:30:44] [Cody Simms]

    [00:30:44] And William, we jumped right into this and got into the nitty-gritty details from the start. We didn't even let you introduce yourself, but you've got quite a background in this space. This is not your first rodeo when it comes to building distributed compute, Bitcoin, power, et cetera.

    [00:31:01] Maybe describe a bit about your background and how ultimately you came to this thesis for building Rune.

    [00:31:07] [William Layden]

    [00:31:07] I started my career working for President Obama in the White House. And then at the end of the administration, I moved over to a hydropower company. We were buying and operating hydropower assets.

    [00:31:17] And one thing that always stuck out to me is that hydropower was used to make things. We would buy hydropower plants connected to pulp and paper mills, connected to aluminum smelters. And all those things were defunct.

    [00:31:29] And I felt that the power market just wasn't valuing hydropower's ability to make things. And so my question was, what can we make in America today that is energy intensive? And this was back in 2017, and I landed on Bitcoin mining.

    [00:31:46] And so we ended up spinning up one of the first vertically integrated, clean Bitcoin mining facilities. We sold that business in 2019 and then moved over to SoftBank Energy. And that's really where I got exposed to solar power.

    [00:32:01] And I wanted to run the same play that I ran with hydro. How do we use solar to make things? But solar is a very different shape.

    [00:32:09] It's a very different technology. Solar is the only way we make energy without spinning things. So you need a new way to consume that power and a new load to be optimal with that energy source.

    [00:32:22] And that's what Rune is. It's the company I wish I had when I was operating solar power plants. And, you know, it's been about two years building this company with an amazing team and an amazing collection of investors and partners.

    [00:32:34] [Cody Simms]

    [00:32:34] Can you describe a bit about where Rune is today, where you are from a sort of rollout perspective, how you've capitalized the company, anything to give us a sense of where you are on the spectrum from white paper theory to physically using electricity to actually do compute?

    [00:32:50] [William Layden]

    [00:32:51] You know, we're a small team based out of Mountain View, California. Despite that small team, we're highly leveraged and highly efficient. We've got three projects operating on three different continents.

    [00:33:03] So we've got RELICs operating at solar facilities in Texas. We've got RELICs operating at solar facilities in Chile. And we've got RELICs operating at wind facilities in Sweden.

    [00:33:15] We're a seed stage company and we're backed by, you know, great investors like Union Square Ventures, Lowercarbon Capital, and Vestas, which is the largest wind turbine producer globally.

    [00:33:26] [Cody Simms]

    [00:33:26] So, you know, if you follow the recent news, there's all this discussion about everything going on with PJM. The Trump administration now has this idea that essentially they can create auctions and allow hyperscalers and data centers to show up at PJM and essentially offer to bring their own power and sort of reserve capacity to the grid. The Wall Street Journal podcast, The Journal, was calling it BYOP, bring your own power, which I think is such a fascinating concept. You know, we've talked about Crusoe.

    [00:34:02] They've obviously really sort of been a big pioneer of that, of, you know, we're going to commission a large data center and we're going to bring the power to the conversation so that the hyperscaler doesn't even have to fully think about that. And we're going to go get the deal done with the grid somehow. Where do you think that goes?

    [00:34:19] Like I asked Chase this question on an episode a few months ago, and, you know, he basically said he thinks that in the future, the tech companies will be the primary power producers in the world and the grid will essentially buy residual power from whatever the tech companies maybe aren't directly producing, but are essentially commissioning. Do you think that vision is true as well?

    [00:34:45] [William Layden]

    [00:34:45] That's a really interesting vision of where things could go. I think fundamentally the demand for compute is going to drive a lot of changes. There's an immense demand for this compute, and an immense demand to have it come quickly. And that's really what this is all about.

    [00:35:05] It's all about new strategies to deliver conditioned power to compute quickly. And I think that's certainly one way of doing it. You know, the bring your own power aspect, the way that Crusoe or XAI or any of these other players are doing it.

    [00:35:19] That's very interesting. I think that we certainly have a different way of doing it. And I think it's a very exciting way.

    [00:35:26] Even if you've somehow obviated the grid through bringing your own power, there still remains the aspect of the traditional power system supply chain. And that's something that we avoid. So I think that we actually have the fastest way to deliver conditioned power for compute.

    [00:35:45] [Cody Simms]

    [00:35:45] That's a fascinating way to think about it, which is you don't have to wait for or expect the entire power dynamics in the energy markets to change. You're basically saying, hey, there's already this large deployed resource out there in terms of renewables, in terms of solar and wind: double-digit percentages of power generation in the United States, and growing substantially, 80-plus percent of new power generation in the US last year, 2024, I guess. And you can take advantage of the inefficiencies of that existing system that is still trying to power the grid.

    [00:36:23] They're not, you know, built for data. Data centers can use them, but they're not exclusively built for data centers. They're built to power our lives.

    [00:36:30] And there's this inefficiency in them that you can take advantage of without having to wait for the world around to change how power is bought and sold.

    [00:36:37] [William Layden]

    [00:36:38] That's exactly right. Every solar power plant with Rune is a latent data center. So we need to convert those underutilized resources into AI factories.

    [00:36:47] The way that we think is the best way to do that is through the electric tech stack, because we don't wait for transformers. We think direct current to direct current is the best way to do it because it's much faster, much cheaper, much easier to implement. I don't even have to do any conduit, right?

    [00:37:04] I don't even have to put cables underground to do this. And the modular approach allows us to be so much faster. Today it's end of January.

    [00:37:13] We just had a major snowstorm. Good luck pouring concrete in 10 degree weather. It is not happening.

    [00:37:19] We don't use any concrete. You get a forklift, you plop the RELIC down. It's up and running in 30 minutes.

    [00:37:24] I don't want to say all of these AI workloads can be run on the RELIC in its current form, but a substantial amount of compute can be run on the RELIC today in our current form factor. And we'll continue to expand the range of workloads we can capture using these three design choices: electric tech stack, direct current, and modularity.

    [00:37:45] [Cody Simms]

    [00:37:46] And even if things like the 800 volt DC data center future shows up, which is what I hear a lot of NVIDIA and everybody talking about, you're still upstream of all of that.

    [00:37:58] [William Layden]

    [00:37:59] I certainly welcome that 800 volt DC busbar. We're tapping 800 to 1500 volt DC today and delivering it to compute. Right now our product is Bitcoin, but we've also delivered that 800 volt DC power to GPUs.

    [00:38:14] So we're already living in the high voltage DC era now. And I'm excited to see where that all goes.

    [00:38:20] [Cody Simms]

    [00:38:20] William, anything else we should have covered? Anywhere you need help? Anything you want to put in the minds of people who are listening, who are interested in what you're doing?

    [00:38:28] [William Layden]

    [00:38:28] We're always hiring. We're looking for talented engineers, power electronics folks, mechanical folks, AI and ML engineers. So check out our website, www.Rune.energy for more information there.

    [00:38:42] [Cody Simms]

    [00:38:42] Thanks for your time today.

    [00:38:43] [William Layden]

    [00:38:43] Thank you.

    [00:38:43] [Cody Simms]

    [00:38:44] Inevitable is an MCJ podcast. At MCJ, we back founders driving the transition of energy and industry and solving the inevitable impacts of climate change. If you'd like to learn more about MCJ, visit us at mcj.vc and subscribe to our weekly newsletter at newsletter.mcj.vc. Thanks and see you next episode.
