Headlines warn that data centers are straining the Texas grid. The reality is more interesting: data centers — through their own flexibility and by supporting distributed flexibility markets — can strengthen the grid.
I explored that topic and a lot more with Astrid Atkinson, CEO and co-founder of Camus Energy and former senior reliability engineer at Google. At Google, she led teams responsible for keeping the world’s search engine online, matching computing load to available capacity across continents. Her lessons from that experience translate well to the grid: reliability doesn’t come from scale alone.
Reliability comes from flexibility and orchestration.
Astrid calls it “grid orchestration”: coordinating and optimizing both supply and demand in real time, all the way down to homes and businesses, but starting with better data and better management of the distribution grid. We’re moving toward a more decentralized network of flexible resources: batteries, EVs, thermostats, and yes, data centers. We’re going to need a much smarter, much better orchestrated grid.
Texas already has the raw material for this shift.
Rooftop solar, batteries, and EVs are scaling faster than ever. We now have over 6 gigawatts of distributed resources in ERCOT, roughly the size of six large power plants.
But they’re not well coordinated, and that disorganized integration means we’re leaving cost savings and reliability benefits on the table. Part of the problem is that market signals aren’t reaching distributed resources at anything near their actual value.
That’s where data centers could potentially come in.
If data center developers fund load flexibility, they could potentially put money into consumers’ pockets and increase their speed to interconnection.
[D]ata centers fundamentally are not really budget constrained for getting these things built. They’re really time constrained. And so, I think in there is the opportunity to start thinking about off-market or kind of secondary market opportunities to get value for flexibility, both from the site itself, but also from you, me, batteries [and other DERs]…
Astrid’s experience offers two key lessons for Texas:
Automation must be simple and local. The best systems don’t depend on constant central control.
The biggest savings aren’t in wholesale prices; they’re in avoided infrastructure. Flexible demand can defer costly upgrades to poles and wires, easing pressure on bills.
We’re seeing movement in the right direction: ERCOT’s efforts to integrate distributed energy resources, electric cooperatives piloting new demand response tools, and increasing talk of creating distribution-level markets where buyers and sellers can trade flexibility directly.
Texas has always led by embracing what’s next before anyone else believed it could work. This is the next frontier: flexibility, orchestration, and coordination of DERs.
“There’s never been a more exciting time to work in this industry,” Astrid said.
She’s right. We have the tools, the data, and the entrepreneurial spirit. What we need now is the will to connect them.
The path forward isn’t about choosing between growth, affordability, and reliability. If we build smart, Texas can have it all.
If this perspective resonates, share it with someone who cares about where Texas energy goes next and subscribe to stay part of that conversation.
Watch the Interview Here:
Timestamps:
00:00 – Intro
02:30 – Astrid’s background and Camus
05:00 – Google reliability lessons
06:30 – Texas load growth reality
11:00 – Contracting flexibility, framing the problem
12:30 – Internet-scale orchestration parallels
14:30 – Major reliability event takeaways
18:00 – What a flexible grid requires
20:00 – Paying Texans for household flexibility
27:30 – Visibility before control (DSO layer)
31:30 – Intelligent automation, local control
34:00 – Value is in avoided T&D spend
38:00 – Co-ops and munis as testbeds
46:30 – Edge markets and price signals
56:30 – Bills down, capacity up, resilience
58:30 – Closing thoughts
Resources:
Guest & Company
Astrid Atkinson (LinkedIn)
Company & Industry News
“So What Does Camus Do Exactly?” (Camus Energy blog)
Camus wins Innovation Challenge Award at Data Center World (Camus Energy)
“Getting ahead of the EV tipping point,” AES and Camus white paper (Camus Energy)
Voltus “Bring Your Own Capacity” Announcement (Voltus)
ERCOT Selects GE Vernova to Help Drive Innovation in DERs Announcement (ERCOT)
ERCOT Grid Research, Innovation, and Transformation Announcement (ERCOT)
Community pressure mounts against CPS disconnection policy, rate structures (San Antonio Express News)
Energy Hardship Report (National Energy Assistance Directors Association)
Google’s new plan to keep its data centers from stressing the grid (Canary Media)
Texas law gives ERCOT authority to disconnect data centers in emergencies (Utility Dive)
Texas data center buildout, stranded-cost risks and planning challenges (Utility Dive)
MIT: Data center flexibility can cut costs, emissions vary by region (Utility Dive)
Related Podcasts by Doug
How Load Flexibility Could Unlock Energy Abundance (with Tyler Norris)
Why the Old Utility Business Model Doesn’t Fit Anymore with Lynne Kiesling (Part 1)
AI, Outage Risks, and Market Opportunities with Lynne Kiesling (Part 2)
YouTube clip — Texas Grid Growth Depends on Data Center Flexibility:
Related Substack Posts:
Transcript:
Doug Lewin (00:05.356)
Welcome to the Energy Capital Podcast. I’m your host, Doug Lewin. My guest this week is Astrid Atkinson. She is the CEO and co-founder of Camus Energy. The conversation was a really great one. We got into one of my favorite topics these days, which is how can data centers coming onto the grid actually increase the reliability of the grid and improve affordability for customers? So we talked a lot about data centers potentially actually creating funds to get demand reductions and demand flexibility in people’s homes and businesses that would put money back into their pockets while strengthening the grid, giving data centers speed to power. We talked about particularly the DSO model, distribution system operator model, and what that might mean in the United States. These are common in other parts of the world, but we really don’t have designated entities in the US that are DSOs. We talked about intelligent automation and how processes being automated can actually make it easier for human operators. The economics of DERs, distributed energy resources, so much of the value of DERs is the potential reduction in cost in the distribution system. Transmission and distribution utilities have so many different investment opportunities, but how do you prioritize those to make sure that you are rate-basing the most important investments for reliability and expansion and all of the things that we need, while also ensuring that where distributed energy resources might defer or even make unnecessary the need for additional investment, you are tapping those distributed resources. We even got into ERCOT’s demand response proposal, which is live right now at ERCOT, and the ADER pilot in Texas. We covered a whole lot. Astrid is incredibly smart and brings a wealth of experience to this area, and I really enjoyed spending this hour with her. I hope you’ll enjoy it as well. Please like, rate, and review this wherever you listen to your podcasts. Share it with friends, family, and colleagues. And thank you so much for listening.
Astrid Atkinson, welcome to the Energy Capital Podcast. So excited to have you here. So excited to learn from you. You are a wealth of knowledge on so many of the issues I love to talk about and work on. Why don’t we just start, if you would, just tell the audience a little bit about yourself and Camus and also your background coming out of Google and how your work there kind of informed what you’re doing now.
Astrid Atkinson (02:22.046)
Hi, it’s great to be here. Yeah, absolutely. So I’m CEO and one of the co-founders for a company called Camus Energy. And we provide grid software primarily for grid operators, but also we work with folks that are developing assets that need to get connected to the grid as well. So thinking both about how we manage the grid, but also about how we plug people into it. My background prior to co-founding this company about six years ago was on the big tech side. I was at Google for a really formative period from about 2004 until I started the company in 2019. And that was a period of time during which Google and the tech industry went through a really massive period of growth and just a fundamental change in how we think about software, the role of software in the world, and also the physical infrastructure that we use to provide that. So I was really fortunate to be part of the original push towards data center scale computing and cloud scale computing when that was being invented. I was part of the team that helped build the internal cloud that powers all of Google’s public facing products today. And in particular, I spent the majority of my career there on a team called Site Reliability Engineering, which deals with basically the interface between physical and built infrastructure. So data centers, networks, et cetera, the computers and servers that actually do work for software systems, and then the software and data infrastructure that we use to operate and manage those. My team was responsible, for about five years, for Google’s public-facing web presence, Google’s homepage. If you went to google.com to see if your internet was working between about 2007 and 2012, I was running the pager-carrying team, carrying a pager myself, which was responsible for maintaining five nines of uptime for that service. And, you know, we’d get woken up at three in the morning if it went down. So really strong tie between like direct kind of hands-on operation, but also building the software and systems and data that made all of that possible on a distributed infrastructure of unreliable parts and thinking about the role of the network, the data center, computers, and also the software to make all of that go. So when we started Camus, a big part of the goal for that was to leverage some of what we had learned about running global scale distributed systems and put that to work in thinking about how we manage the grid. There are a lot of parallels.
Doug Lewin (05:11.832)
Yeah. I mean, first of all, Site Reliability Engineer, like, do you say that whenever you walk into a utility to meet with them? You’ve got to have like instant credibility because they feel seen, like you understand what their job number one is, right? You couldn’t have Google just like go down or crash. It had to stay up. And during that period, you’re looking at this kind of decentralization, right? Of infrastructure to actually, if I’m understanding right, to increase reliability, less sort of single points of failure, more distributed architecture for resiliency and reliability. It sounds a lot like what we’re dealing with on the grid.
Astrid Atkinson (05:50.22)
Yeah, absolutely. I will definitely say that the operations background has been really helpful when we talk to utilities. It’s actually part of the reason I was interested in working with utilities in the first place is that I really liked that part of my job, thinking about critical infrastructure, thinking about how you design systems in ways that can add complexity, but still also increase reliability and resilience. So that’s been actually one of the things I love most about working with utilities.
Doug Lewin (06:16.034)
So you guys are very focused on, if I’m understanding right, distribution grid, distribution side. Like how fast are we seeing distributed energy resources? I mean, that’s a lot of the work, right? Is this orchestration? Like what is the pace of change that you’re seeing out there?
Astrid Atkinson (06:33.206)
Yeah, well, I will firstly say that we have in the last year started doing a lot more work on the interface with the transmission side and transmission system change as well. And a big part of the reason for that is that we are seeing a tremendous pace of change and that’s coming from a bunch of different forces, but load growth is probably the most urgent piece of it.
Doug Lewin (06:55.106)
I’ve heard about that. It’s happening, right? Yeah.
Astrid Atkinson (06:57.87)
A lot of that’s coming in from the data center side. And that’s particularly interesting to me and to my team because we know data centers. Data centers are a friend of mine. We spent a lot of time thinking about how you build them, how you run them, how you build reliable systems on top of them. And probably upwards of 50% literally of my team’s time in any role that I held at Google was optimizing data center utilization so that Google could make the most possible practical use of their limited compute resources. So these questions around, you know, how do we get more power? How do we get more compute capacity? How do we do that really quickly? Were actually a really big part of my job on the tech side as well. And it’s been kind of fun and really interesting to see like those two worlds come together.
Doug Lewin (07:56.91)
Yeah. So we’re seeing big data center growth in Texas. I think the growth of what we’re seeing in distributed energy resources is a little slower, but still quite fast. We’re up to, as of the end of last year, the end of 2024 is the most recent data I believe I’ve seen. We were at like 6, 6.2 gigawatts, something like that. So obviously with a peak demand of like 85, it’s not quite 10%, but it’s, you know, it’s not nothing either. And the growth, particularly in the solar and battery sectors. Although distributed natural gas, as you’d imagine in Texas, right, a lot of like generators, particularly in Houston, growing pretty rapidly too. So we’re seeing all kinds of increase in distributed assets here. You work around the country and I believe even around the world, you’re from Australia, so you have a view there. Like, are we seeing kind of exponential kind of growth rates? Is this happening? And where are sort of the hotspots for DERs you’re seeing?
Astrid Atkinson (08:52.694)
I think the growth rate in the US obviously varies regionally depending on local incentives and kind of where the communities are at and those kinds of things. I would put the US on the lower end of DER growth relative to Australia being at the very kind of high end where in many parts of their market, they’re upwards of 50% of local capacity served from rooftop solar kind of at peak. Wow. The UK is sort of somewhere in between. They have really, I think, increasingly robust programs around leveraging flexibility, particularly thinking about using that to manage limited grid capacity. And then if you look at places like Europe, there’s a lot of quietly pragmatic innovation in thinking about how do you get big fleet charging sites connected to the grid? Well, of course you’d throw a battery in. And you don’t really hear that as part of the big conversation about like DERs. But even in our work in the US, in the last few years we’ve been increasingly focused on growth in the sort of larger end of the DER spectrum. Thinking about the role that flexible assets can play, not just thinking about like solar and rooftop solar and thermostats and those kinds of things. There’s definitely a place for demand side management. But starting to really look at whether there are opportunities to get more flexibility out of things like C&I solar or backup power, batteries that folks are putting in as part of the power portfolio, industrial sites, manufacturing, those kinds of things. Thinking about the role that really big flexible assets like data centers can play. And a big part of the reason for that is just that if you don’t actually focus on getting those things connected with flexibility, they just get plugged in as big dumb loads. We have a lot of issues with load growth. And every time we plug in a new industrial facility or site without contracting flexibility from the batteries that they also have on site for resilience purposes, like a butterfly loses its wings. Like God kicks a puppy.
Doug Lewin (10:59.467)
Yes, bad outcomes all around, yes.
Astrid Atkinson (11:01.474)
Yes. Yeah. Yes.
Doug Lewin (11:02.989)
So let’s talk for a minute. We’re definitely going to talk about data centers and orchestrating flexible loads and all that stuff. I want to start at a little bit of a higher level. And can you just describe, I’ve heard you use the phrase many times in past podcasts you’ve been on and conferences and things like that, talking about grid orchestration. This is a term I really love because we’re starting to see these very high penetrations of distributed energy resources, whether they be very large or very small, it does require some amount of orchestration. Can you talk about what that actually kind of means in practice?
Astrid Atkinson (11:37.378)
Yeah. So, you know, in its broadest sense, the idea of orchestration is the idea that there should be mechanisms for managing supply and demand in something like real time, used broadly at all levels of the grid. And so, you know, when we think about the mechanisms that we have for managing supply and demand on the transmission side, those are pretty mature and pretty robust. In a lot of ways, what an ISO is doing is basically orchestrating those things day ahead in real time to make sure that we’ve got reliable power supply.
Doug Lewin (12:09.592)
On the supply side.
Astrid Atkinson (12:09.16)
On the supply side, yes, treating demand as effectively like a constant. Yeah, okay. And so there’s been plenty of digital ink spilled on the idea that as we progress through the energy growth, energy transition, whatever you want to call it, adding controls on the demand side gives us that much more optionality and flexibility in terms of how we solve that supply and demand matching problem. I want to give a little bit of perspective from my past work in this space, working at Google, because Google actually has incredibly robust systems for matching supply and demand in real time. And we actually started in a place very similar to where the grid is at today, where, believe it or not, demand was originally considered inflexible. It came in when it came in, usually as a result of people waking up, getting to work, using the internet, Googling stuff, and a lot of the system operation role was thinking about flexing supply to meet it. We had some tools the grid doesn’t have; you know, computers don’t blow up if they’re underloaded, but they do crash if they’re overloaded. So you don’t have quite the same real time balancing problem that the electrical system has, but it’s not dissimilar either. It was like, this hose of load is coming at you all the time, and then you have to think about orchestrating your resource mix, like this data center, that data center, these server pools, those server pools, to be able to meet that need. And if you get it wrong, the whole system crashes and then all your operators are really sad and you lose money. And so, you know, we kind of started in this place where demand was inflexible the same way that it is in the grid. And in order to meet the kind of scale and growth and system demand and load that we were seeing, which increased probably some 10,000 to a hundred thousand times over during the time I was working on these systems, we had to start thinking about moving away from a peak driven capacity planning model. This will sound really familiar.
Doug Lewin (14:00.544)
Yes.
Astrid Atkinson (14:00.544)
Where you’re thinking about N plus one, N plus two capacity for the service as a whole in every region, very statically provisioned. So like these servers might sit idle most of the time, but we need them for that like one day a year when demand goes up by like a hundred times for some event. You know, Michael Jackson dying was actually a really big one. This is how long I’ve been doing this.
Doug Lewin (14:28.91)
2009, right? It was summer, summer 09, I believe.
Astrid Atkinson (14:32.296)
It was a big reliability event for us. And so the system is provisioned to that sort of peak, N plus two, so you could take a data center out for service, and you could tolerate a failure. And that was tremendously expensive. When I said that we spent a lot of my time optimizing server usage, a lot of it was about saying, hey, could we move away from the statically provisioned capacity model to one in which maybe we have some tools to flex demand? Maybe we could just shunt some of it off. Maybe we could shed some, maybe we could think about changing our capacity planning model so that we can distribute these statically provisioned resources between multiple users. Maybe we can over-provision parts of the system. All of these things are going to sound really familiar to folks who work in the grid space.
Doug Lewin (15:19.15)
I mean, so, so, so familiar, right? Like, so many p-
Astrid Atkinson (15:22.726)
Yeah. And we had to do literally all of it. We did all those things I just described. We also moved a lot of resources to edge serving. So both what I think of as inner edge, which you could think of as being equivalent to capacity in substations. It’s still owned by Google, but it’s closer to where the user is. So you’re actually saving and optimizing network capacity at that point, which had to be done for video. And also third party edge, which you could think of as being like residential battery or something like that, where it’s like serving capacity that’s located off network, basically sacrificial capacity, but you can use it as a tool in different ways to kind of increase the steady state efficiency of the system. So we had to do a tremendous amount of work on this because the volume of growth that we were seeing was so large that there was nothing to do other than get creative.
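To make the capacity-planning parallel concrete, here is a rough, purely illustrative sketch (not anything from Google, ERCOT, or Camus; the unit size and the 15% flexibility assumption are made up) of how much less capacity you need to build when even a modest slice of peak demand can flex:

```python
# Illustrative capacity-planning comparison. All numbers are made up;
# nothing here comes from Google, ERCOT, or Camus.
import math

def static_n_plus_2(peak_demand_mw: float, unit_size_mw: float) -> float:
    """Capacity built if you provision to peak plus two spare units
    (one out for maintenance, one to tolerate a failure)."""
    units_for_peak = math.ceil(peak_demand_mw / unit_size_mw)
    return (units_for_peak + 2) * unit_size_mw

def flexible_plan(peak_demand_mw: float, unit_size_mw: float, flex_fraction: float) -> float:
    """Same rule, but assume flex_fraction of peak demand can be shifted, shed,
    or served from edge resources during the handful of true peak hours."""
    effective_peak = peak_demand_mw * (1 - flex_fraction)
    units_for_peak = math.ceil(effective_peak / unit_size_mw)
    return (units_for_peak + 2) * unit_size_mw

peak_mw, unit_mw = 1000.0, 100.0                      # 1,000 MW peak served by 100 MW "units"
static_build = static_n_plus_2(peak_mw, unit_mw)
flexible_build = flexible_plan(peak_mw, unit_mw, flex_fraction=0.15)  # 15% of peak is flexible

print(f"Static N+2 build:          {static_build:,.0f} MW")
print(f"With 15% peak flexibility: {flexible_build:,.0f} MW "
      f"({static_build - flexible_build:,.0f} MW of capacity avoided)")
```

The point of the toy numbers is simply that peak-driven, statically provisioned planning pays for capacity that sits idle almost all year; a little flexibility at the peak shrinks the build.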
Doug Lewin (16:14.72)
I just love like in general the parallels between things that people aren’t necessarily connecting and actually there was the term, you know, implied parallel, but actually like connecting them. And that is just such a fascinating one because it really is so close when you talk about like static, what did you call it? Static capacity or something along those lines, like the move from that kind of static system to a much more dynamic distributed flexible system obviously is so, so close to a lot of the changes that we’re seeing. So I think one other thing I want to talk about and just kind of introduce to the audience, most of the audience will have heard this term, but I think it’s important to kind of define terms at the outset and something you’ve talked about a lot. Before, I do want to talk about data centers and their power needs much, much more, but there’s one more thing I want to talk about before that. You’ve written some papers on this on DSO, Distribution System Operator. Can you talk a little bit about what that concept is and how it’s relevant to the challenges that we’re looking at on the system? And then we can dive a little bit into sort of who in Texas is a DSO, but first, what is it?
Astrid Atkinson (17:22.379)
Yeah, so DSO is the idea of having a version of the system operator that applies at the distribution level. So, you know, there’s a few different models for what that could look like, but most typically we’d be talking about like the distribution utility taking on something more like a system operator role and probably optimizing locally for services within that grid, while also acting as part of a larger system is kind of the most typical version of that. You asked about definitions of orchestration. And so, you know, if the idea is to be able to have controls over supply on the one hand and demand on the other, and to be able to flex both of those intelligently against each other in real time and over time, which is the actual definition I’d give you of orchestration. You need an orchestrator, like something that can actually take an intelligent view of those system needs, and that needs to extend all the way into the distribution side because that’s where the demand is mostly connected. So if we think about what would it take to actually be able to flex demand in that broader system operator model, you’ve got to go to where the demand is. And that is in the distribution grid for the most part.
Doug Lewin (18:29.388)
Yeah, it’s interesting when I bring this up, you know, in like tweets or LinkedIn posts, I inevitably get some kind of reply that is along the lines of all this DER kind of stuff is just so complicated. We just need to build enough dispatchable generation and just have that meet load. Two problems with that. One, just as you were describing with Google, like when they were trying to do this for data center capacity, it is very, very expensive. And even for an entity trying to do this, Google.
Astrid Atkinson (18:57.39)
Tremendously expensive.
Doug Lewin (18:58.698)
Right. Even for an entity like Google, making a lot of money, obviously, like not terribly starved for cash. Like nobody wants to just waste money. That’s not right. Like we are supposed to be using, we’re supposed to have an efficient economy because that creates wealth, right? So A, it’s wasteful and inefficient. B, it is also in many important ways, not as reliable and resilient because again, you have more of kind of the single points of failure, which is one of the things that we saw during winter storm Uri is like a major problem on the gas system, gas plants freezing up, wind turbines freezing, like these big generators and big systems failing. If you had had more edge systems that could have continued to operate, particularly, and I think kind of in this, in some of the maybe, not first wave, probably already into a first or second or third wave, whatever, but in some of the early stages, making sure critical infrastructure, your hospitals, your police stations, your communications facilities and water treatment facilities all have those kinds of generating assets also makes the system more reliable and more resilient. But it is challenging, right? Because then you used to have a system where you kind of knew here’s where that generation is coming from. I can maybe not count on my hands, but like, you know, there’s like a hundred or 200 points and like, I know where they are and they’re telemetered and I know what’s going on at them. Now you’re talking about this world where there’s millions. So we don’t want to minimize the challenge, but if we do it right, if we get that orchestration piece right, we can have a lower cost, higher reliability system. Correct?
Astrid Atkinson (20:33.838)
Absolutely. And so firstly, I think it’s really important to keep that goal centered that we want a low cost, high reliability system. And so there’s going to be a bunch of choices about how we operate it that comes from that. But it’s certainly true that adding more local storage capacity that’s co-located with loads is a big way that we get that. I went through this myself. I’m in an area of California that saw a lot of power outages over the last number of years. We had one year that was so bad that we had 45 days of power outage between New Year’s and March 30th.
Doug Lewin (21:07.982)
You’re in PG&E service territory, right?
Astrid Atkinson (21:09.97)
I am. But it’s also a particularly challenging mountain region. And I will say, in PG&E’s favor, that the reliability profile has improved. But also, I own batteries now. Like many of my neighbors and many of your neighbors, one set of experiences with that kind of outage, even as a household, is enough to motivate investment in local storage. And when you look at that from the perspective of folks who have a real obligation to the community or there’s public infrastructure that’s dependent on having power, like water treatment facilities, emergency response, those kinds of things, they really kind of have an obligation to look at investing in those resilience capabilities. And so in some ways, yes, that makes the grid more complex. But on the other hand, that’s a complexity that we must embrace. But the good news is that if we think about it from a system design perspective that also provides us with all of these extra tools to think about increasing reliability for the system as a whole. So, you know, you sort of talked about the complexity of adding all this stuff, all these points of control. How do we think about managing them? One of the biggest things that we went through during that massive scaling period at Google was thinking about how to manage what we talked about as cognitive bandwidth for people building and operating the systems.
Because if you’re going through, you know, adding hundreds, thousands, tens of thousands, millions of new like subservices, different components, I think we counted at some point that I had 10,000 different product teams that were customers of my shared infrastructure. The traditional way of scaling that and the place that we are in the utility industry today is to say like, look, we can add more operators maybe, but we can’t add them infinitely. Maybe we can put a halt to the complexity, but the only real way to cope with that is thinking about better tooling for the people building and operating within the system that manages that complexity down. And so there’s a really important critical role for intelligent automation within this system that is going to make it possible to deal with that kind of scale. You know, the scale that’s coming into the grid is significant, but it’s not the only place we’ve ever dealt with this problem, right? Like there are a lot of parallels to things we’ve dealt with within the kind of broader tech space. We have the technology. We don’t have the technology necessarily within the grid landscape today, but this is a tractable problem. We can get this done. But thinking about it from the perspective of managing that complexity so that human operators can continue to reason about the system and make good human decisions, I think is a really useful lens when we think about what kinds of solutions are most helpful.
Doug Lewin (23:58.702)
I think what you’re talking about, and tell me if you were going in a different direction and I just missed it, but I think what you’re talking about is effectively using AI as a tool to make things better and, in a sense, having some intelligence, you just said intelligent automation, in the system. I mean, there’s different ways of saying these things, but if you have AI that’s sort of layered in, because again, you’ve got millions of little points. I think who knows if this is two years, five years, 10 years, 15 years away. But refrigerators with mini batteries, air conditioners are now starting to get sold with mini batteries in them. We’ve already got EVs in Texas, and we’re way behind other states on EV adoption, though EV adoption is happening here at a fairly good clip. I think we’re up to 400,000 vehicles, which is probably something like 25 gigawatt hours of batteries rolling around on four wheels in the state of Texas. To coordinate and orchestrate all that stuff basically needs to be done in an automated way enhanced by artificial intelligence with some human. Okay, go ahead. Yes, please correct me.
Astrid Atkinson (25:07.84)
So I actually want to put a pretty heavy caveat on the role of actual AI, in the sense that people are thinking about it when they talk about it today, in appropriately intelligent automation. And the big reason is that the AI technologies that are really exciting people at the moment, driving a lot of this load growth, and that people think about when you say AI are really mostly LLMs, large language models. There’s a lot of things that people mean when they say AI.
Doug Lewin (25:37.026)
True.
Astrid Atkinson (25:37.026)
LLMs are not particularly useful for these types of operational problems because they make shit up. Sorry, you might have to edit that. They make things up. So when I talk about intelligent automation, what I really mean is intelligent in the sense of intelligently applied.
Doug Lewin (25:57.39)
Okay.
Astrid Atkinson (25:57.39)
There is definitely a role for AI technologies. Some of them are very boring. Things like really effective forecasting really benefit from AI technologies, right? For sure. Complex system modeling and anything that really requires complex pattern recognition is a really good candidate for having AI applied to produce much better results. And these are all critical inputs to operating these more complicated systems effectively. There’s also a really important role for applying intelligently considered system design in the sense that you can add relatively simple local automation that makes the big problem a lot easier. So for example, the challenge of building a centrally managed, centrally monitored, centrally controlled orchestration system that both manages transmission system capacity and your fridge is a very big one.
Doug Lewin (26:36.224)
Yes.
Astrid Atkinson (26:36.224)
Like that’s not something that’s easy to sort of do well in one central system. But if you take a slightly different view on that and say like, hey, maybe the fridge’s job is managing the fridge. Maybe what we need to do is give that fridge enough inputs just to know like, is there going to be a storm such that you need to charge your battery ahead of time instead of like operating in a steady state, just like charge and discharge. Now the job of the fridge is handled. You know, likewise, a household level controller can apply that and can respond to that sort of signal. But if it’s cut off from the central brain, the house will just do the right thing in that case. And again, a lot of parallels to the way that we thought about this in terms of like data center scale computing. Like if a Google data center is cut off from the herd, it can continue to operate in steady state without access to the central brain. The last sensible set of instructions that it got, which it gets periodically around how to distribute and serve load, is usually safe to operate on in a steady state under most circumstances for most services. And most disruptions are relatively short. It can rejoin the herd later and go do the right thing again. And so when we think about designing control systems, thinking about layers of abstraction becomes really important. This gets back to your question from a moment ago of who takes the role of system operator and what that looks like. You can make this problem a lot simpler by taking kind of a layer cake view of it, right? Like your fridge battery manages your fridge, your house battery or control system manages your house. Perhaps your distribution system operator’s control system manages the distribution system and does things like intelligent load shedding in emergency conditions, manages things like charging and then dispatching localized batteries, manages things like optimizing within local areas of the grid to maintain efficiency under most conditions, but then drops some stuff off if we really need that capacity back for a peak. Those ideas that are really human ideas actually, around like layers of abstraction and managing complexity, are going to be a really big part of that future system. Our AI LLM friends will have important small roles to play. But I think it’s useful to think about the system design piece first and then where you apply those technologies second when we think about the role they’ll play.
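Here is a minimal sketch of that layer-cake idea (the class name, thresholds, and rules are hypothetical, not Camus’s or Google’s actual design): a device-level controller keeps following the last schedule it received, plus a simple local safety rule, even when it loses contact with the central coordinator.

```python
# Hypothetical sketch of hierarchical control with a local fallback.
# Names, thresholds, and rules are illustrative, not a real Camus or Google design.
import time
from dataclasses import dataclass, field

@dataclass
class LocalController:
    """Device-level controller: 'the fridge's job is managing the fridge.'

    It periodically receives a schedule from an upstream coordinator. If that
    link goes quiet, it keeps following the last schedule it was given, plus a
    simple local safety rule, instead of waiting on the central brain."""
    last_schedule: dict = field(default_factory=lambda: {"target_soc": 0.5})
    last_contact: float = field(default_factory=time.time)
    stale_after_s: float = 3600.0  # treat the coordinator as unreachable after an hour

    def receive_schedule(self, schedule: dict) -> None:
        self.last_schedule = schedule
        self.last_contact = time.time()

    def decide(self, storm_warning: bool, soc: float) -> str:
        # Local safety rule applies whether or not we can reach the coordinator.
        if storm_warning and soc < 0.9:
            return "charge"  # top off ahead of a likely outage
        target = self.last_schedule.get("target_soc", 0.5)
        if time.time() - self.last_contact > self.stale_after_s:
            # Cut off from the herd: hold near the last instructions received.
            return "charge" if soc < target else "hold"
        return "charge" if soc < target else "idle"

controller = LocalController()
controller.receive_schedule({"target_soc": 0.8})        # coordinator asks it to carry more charge
print(controller.decide(storm_warning=False, soc=0.6))  # "charge": follow the schedule
print(controller.decide(storm_warning=True, soc=0.85))  # "charge": local storm-prep rule
```

The design choice worth noticing is that the upstream layers only ever send targets and context; the device stays safe and useful on its own when the connection drops.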
Doug Lewin (29:09.102)
No, no, that makes a lot of sense. And really, there’s different technologies that are going to be brought to bear, kind of a little bit on the different sides of the meter, I suppose. And is it fair to say you guys at Camus are much more focused on kind of the distribution utility side of the meter? There really isn’t. Yeah, I mean, I honestly, I think both sides need more focus. It makes a lot of sense to pick a focus and you could obviously do a lot worse than focusing on the distribution system, which seems to be behind a little bit. I mean, we’ve seen this, right? I mean, that was one of the biggest problems with Winter Storm Uri. You talk about intelligent load shedding. There was no, I mean, no is too strong a word, but it wasn’t very intelligent the way it happened, right? I think it’s fair to say.
Astrid Atkinson (29:33.112)
But yeah.
Astrid Atkinson (29:50.679)
Not very.
Astrid Atkinson (29:56.962)
I think we’ve all learned a lot from recent emergencies, whether it’s wildfires in California or Winter Storm Uri around the need for better tools for this.
Doug Lewin (30:06.882)
Yeah, and basically if you’re doing like a hierarchy of need, like the fridge battery is a cool concept, but like getting like load shedding on the distribution system right, that like people aren’t dying because they’re freezing in their homes is higher on Maslow’s hierarchy, you know, or however you want to say that. Okay, great. So I do want to talk about AI. You’ve been kind of teeing this up the whole time and like sort of large loads, how those are playing in sort of grid management and some of the opportunities that come from that. So let’s just start there, just start talking about what you see as the opportunities there. I have some ideas I want to bounce off you, but rather than lead the witness, why don’t you start with what your ideas are top of your head. Yeah.
Astrid Atkinson (30:48.736)
Yeah, well, you know, we started our company with a real focus on orchestration within the distribution environment. So thinking about orchestrating those end user resources, whether that’s thermostats or batteries or, you know, EV charging or whatever. And I think there’s a really important role that that’s going to play. You know, I mentioned the importance from a system design perspective of having levers of control on load. But one thing that makes it really hard in the market today is that the economics are just really not exactly there in most utility markets and regions to actually be able to pay for those flexibility services in a way that incents their creation, maintenance, and growth at very large scale. Most regions will have some sort of demand response program. It kind of pays for you to flex like two to 10 times a year. That is actually a really big help to the grid. But it doesn’t really provide the sort of financial incentive that you would need to get like everybody signed up, nor does it really provide like opportunities for system optimization that would allow the kind of really rich set of control points and kind of efficiencies of scale that let you do things like optimize substation utilization by, you know, turning all the knobs. That’s been like an industry goal for so long and we’re just not there. And a big part of the reason for that is that the economics aren’t there either. We’ve done this analysis with a few different utilities, but we published a white paper with AES a couple of years ago, their Indiana utility, basically analyzing the long-term impact of load growth as a result of electrification, particularly EV adoption in their case, in terms of the time to upgrade for every single component on the grid driven by potential EV load growth. Like when would you have to upgrade the conductor, the transformer, the substation, et cetera, for every component with the intention of effectively quantifying like, okay, well, if you could manage the growth, could you avoid the upgrades? How long and what would you save if you did? And that was really intended to kind of get at this core question about like, what is the value of flexibility? And is there more out there than we can reach through demand response type programs today? And what we found in that analysis was that about three quarters of the value of demand flexibility is locked up in basically capital deferral. There is a tremendous amount of value in being able to manage load growth, even down to the very edges of the distribution system, in terms of not having to update every transformer over the next five years, if you can manage those additions. And that’s cool. That could inform different programs that potentially incent people to participate in VPPs, demand flexibility programs, all of those sorts of things. But boy, we’re just like nowhere close to being able to do that in most utilities today. Like this is a pipe dream as it stands for most utilities today. There’s not a regulatory structure that really supports that kind of incentive. So we’re sort of stuck with this place where it’s like, okay, you can pay like pennies for demand response and then you have to pay something to the software provider and then to the VPP operator and then back to the people who participate. And it’s just like not there. 
And this is all a long way of saying that if you could figure out how to unlock the value that that flexibility provides from a poles and wires perspective, you actually might be able to use it in the way that it could be used. And so this is where I think the data center boom and large load additions are really interesting. And this is like kind of a long way around to that.
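Here is a simplified, illustrative version of the kind of per-component analysis Astrid describes (the asset data, growth rates, and 7% discount rate are made-up numbers, not figures from the AES/Camus white paper): estimate when each asset hits its rating under unmanaged versus managed load growth, then value the deferred upgrade.

```python
# Illustrative per-component capital-deferral estimate. The asset data, growth
# rates, and discount rate are made-up examples, not figures from the AES/Camus paper.
import math

def years_until_overload(load_kw: float, rating_kw: float, growth_rate: float) -> float:
    """Years until load grows past the asset's rating at a compound annual growth rate."""
    if load_kw >= rating_kw:
        return 0.0
    return math.log(rating_kw / load_kw) / math.log(1 + growth_rate)

def deferral_value(upgrade_cost: float, years_deferred: float, discount_rate: float = 0.07) -> float:
    """Present-value saving from pushing an upgrade 'years_deferred' further into the future."""
    return upgrade_cost * (1 - 1 / (1 + discount_rate) ** years_deferred)

assets = [
    # name,                 today's load (kW), rating (kW), upgrade cost ($)
    ("service transformer",               40,          50,          15_000),
    ("feeder segment",                  3800,        4500,         900_000),
]

for name, load, rating, cost in assets:
    t_unmanaged = years_until_overload(load, rating, growth_rate=0.06)  # unmanaged EV growth
    t_managed = years_until_overload(load, rating, growth_rate=0.03)    # managed charging
    saved = deferral_value(cost, t_managed - t_unmanaged)
    print(f"{name}: upgrade needed in {t_unmanaged:.1f} vs {t_managed:.1f} years; "
          f"deferral worth about ${saved:,.0f}")
```

Run across every transformer, conductor, and substation on a feeder, that kind of counterfactual is what lets a utility compare paying for flexibility against paying for poles and wires.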
Doug Lewin (34:29.214)
So worth it.
Doug Lewin (34:29.565)
It is so worth the journey. Take your time. The beauty of podcasts. We’re in no hurry, Astrid.
Astrid Atkinson (34:36.022)
Yeah, so we simultaneously have this situation arising where suddenly a lot of people want to build a lot of data centers. And our ability to meet that need, both from an overall power supply perspective, but also from a like, poles and wires, and every part of the system perspective, it’s not yet clear how we’re going to do that. But asking those loads themselves to be flexible certainly gives us more optionality. And you’ve had a couple of recent podcast guests talk about different ways that flexibility can be an unlock for being able to add those resources more cost efficiently from a network perspective as well as from a capacity perspective, open the door to oversubscription on both sides, those kinds of things. And there’s also an incentive on the data center side, they want to get connected quickly. When I worked at Google, the reason that we were doing all that optimization for data centers was not because Google wanted to save money on building new data centers. It was because we literally couldn’t build enough quickly enough to meet Google’s organic load growth demand. Like there was no other way to scale Google services than becoming more efficient. And what we’re seeing playing out in the market now is that suddenly data centers are willing to consider flexibility. They’re like, hey, if I could flex, could I get connected sooner? And that conversation is starting to play out, especially in Texas.
Doug Lewin (35:57.228)
Yes.
Astrid Atkinson (35:57.228)
I think what ERCOT is doing on this is really, and the Texas legislature is doing on this is really interesting. But it’s also just really being driven by business need from the data center. But it’s tricky because the reason they want to build that capacity, same as back in the day, Google wanted that capacity, is so they can run their business. And the opportunity cost of not having that capacity available is very, very, very high. We estimate that it’s on the order of $7 billion a year for a gigawatt-sized site for a hyperscaler. And that applies both to delays in building, but also it does kind of apply to not using the site once built. So there’s this opportunity that data centers have maybe to get connected more quickly by providing flexibility, especially at the site and getting creative about the ways that they power that, whether that’s local generation, on-site storage, flexing within data centers to load those kinds of things. But I think there’s also a really interesting opportunity to leverage other kinds of flexibility. So we started to see this emerge.
There was an announcement just this week from Voltus and Cloverleaf that they’re putting together a third-party VPP program to provide flexibility basically alongside, but entirely separately from the data center itself to start offsetting some of that load. And I think that’s really interesting because data centers fundamentally are not really budget constrained for getting these things built. They’re really time constrained. And so like, I think in there is the opportunity to start thinking about like off market or kind of secondary market opportunities to get value for flexibility, both from the site itself, but also from, you know, you, me, my battery or your fridge, which I think is really new and interesting.
Doug Lewin (37:41.27)
So, so interesting. So many different questions based on that one. But I think one of the things I’m thinking about is, you said like off market or secondary market. I am generally obsessed with markets, but I’m now in a particularly obsessive moment about markets, largely because of the podcast I recorded with Lynne Kiesling and then her recommendation to read. Well, I think I brought it up as she recommended I read it, this Hayek biography and looking at like, you know, the time that he was sort of forming his economic theories in the thirties, where like, everybody was into central planning. I didn’t realize until I read this book, like how much like Western Europe was like central planning is the thing. And again, like interesting parallels and connections to be made to what’s going on on the grid right now. And obviously you need some planning of a grid, particularly when you’re talking about things like high voltage lines and things like that. But there’s so much of this that could be done through markets. So one thing Astrid that’s going on right now that I think is a very interesting connection to what you’re talking about, literally today, the day we’re recording, October the 3rd, there was a meeting, which I haven’t listened to yet, but I will soon, where the Wholesale Market Subcommittee at ERCOT was discussing a proposal that ERCOT put forward for residential demand response. And the way they did it was they said, we’re going to, we just want to pay $140 a kilowatt year for 500 megawatts. Now $140 a kilowatt year, right? Most houses have about a kilowatt of demand flex from their thermostat, maybe from one or two other things. Maybe it’s a kilowatt and a half, but that’s $140 per year that then the load serving entity has to somehow share with the customer. So that’s probably going to come down to $50 or $70 if it’s halfway, right? Then that’s 70 bucks for a customer to have their air conditioning cycle. That’s not per month, that’s per year. I’m talking about like five, six bucks a month. Like, I just don’t think that many people are actually gonna say, I don’t wanna be pessimistic. I wanna be real clear. I’m being careful to say this whenever I talk about this. I give a ton of credit to ERCOT for recognizing residential demand response, residential demand flexibility. These things are important. And that’s great. They made a really important first step there. But I think part of the problem they’re having, Astrid, and frankly, I think they’re having this on the aggregated distributed energy resource pilot we have as well, which is a pilot to get stationary batteries in folks’ garages into the ancillary service market. Well, the ancillary service market is a tiny sliver of the overall market. So when I hear you say three quarters of the value from this AES study was in capital deferrals. Like if we can’t get after that part, if all we’re dealing with is the energy portion and the ancillary service portion, well in Texas, distribution rates, if they aren’t already half or a little more than half of the bill, they will be very shortly, right? The energy portion of the bill keeps dropping largely because of renewables, but also because of low cost gas. There’s obviously a multitude of factors and T&D keeps going up. So that’s just a long way. You gave a long answer. I’m asking a very long question, but basically I think the question is like, how do we get to that?
And how do we get to the point where there is a price signal for the value that these distributed resources provide, whether they be on site with a data center or in this kind of secondary market, and where it is a market where somebody isn’t just picking a flat number per kilowatt year and a flat number of megawatts? There could be many times that number of megawatts available at a much lower rate, but we wouldn’t know it because they’re not really setting it up as a market. And this stuff is hard. I don’t expect you to have an answer to this, but I just wanted to kind of explore it more and unpack it more, because I think everybody’s trying to figure this out, including ERCOT.
Astrid Atkinson (41:29.708)
Yeah, I’d give maybe just like kind of a two part answer to your question, which is to say that there’s sort of a practical and technical component to this, which is like, is there an entity that wants to take on the problem of potentially forming such a market or incenting retail level participation in the broader optimization of the system in this way? And so that’s kind of one piece of it. And from that perspective, it’s actually really cool to see ERCOT moving quite seriously in that direction.
Doug Lewin (41:58.316)
Yes.
Astrid Atkinson (41:58.316)
There was an announcement just this week also that they recently did a procurement for software capabilities for real-time monitoring for DERs across the state. And I will tell you that from my operations background, visibility should always come before control. So I think that was something that we’ve been tracking for a while. We actually participated in that bid, although sadly we didn’t win it. But I think that definitely demonstrates concrete commitment from ERCOT to start really thinking about putting in place mechanisms that could answer the second part of your question, which is, could we actually use a market instead of these fixed incentives that we’re stuck with today? And I think this is where the potential role of the data centers comes in, because the market has two sides. The reason that construct that you just mentioned is kind of unappealing is because they’re not paying very much. When it’s hot and I’m running my air conditioner. I have to admit that having a smart thermostat and also not being entirely clear at various points with different programs going on, like what programs I might be signed up into, every time it’s a little warm in my house, I’m like, is Google turning down my thermostat? Even though I worked at Google and I know they’re probably not, except maybe by accident. So that problem of thinking about like, what would you actually have to pay for people to be willing and excited to participate in these programs is a very real starting place. And I think today that price is nowhere near being realized.
Doug Lewin (43:30.254)
That’s my sense, yeah.
Astrid Atkinson (43:30.254)
But, you know, markets typically do have two sides. Like, what would a hyperscaler bid to be able to, in the broader sense, potentially connect a load that otherwise would have to wait five to seven years because they’re buying gas turbine parts or waiting for a large transmission system build out or something like that. We know that in some cases, folks are willing to think about flexing their own loads, but a lot of data centers will also say that that’s not really feasible for them. Although I think the rise of co-locating generation assets or flexibility provides tools even for the folks who don’t feel like they can turn down loads. But there’s kind of a toolbox of mechanisms of flexibility where maybe data centers are willing to really self-manage this within their own site. But maybe they prefer to just worry about their own stuff and pay somebody else to reduce their load. And that’s where it’s not exactly getting to that capital deferral value of flexibility.
Doug Lewin (44:32.17)
We still gotta figure that part out.
Astrid Atkinson (44:32.17)
Part of the value stack. Maybe a new part of the value stack. But what you really have is this ability to turn opportunity cost into perhaps a direct-to-consumer payment. If you had the data center bid into the other side of that and be like, look, I’m willing to offer $500 for someone to flex a kilowatt year at this moment, now that’s more like a market. And I think that’s really interesting.
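To put rough numbers on that two-sided picture, here is a back-of-the-envelope sketch using only the illustrative figures from the conversation (a roughly 1 kW flexible household, an assumed 50/50 split with the load-serving entity, the $140 per kilowatt-year administrative price, the hypothetical $500 bid, and Astrid’s ~$7 billion per gigawatt-year opportunity-cost estimate); none of these are real market prices.

```python
# Back-of-the-envelope only. The prices, the 50/50 pass-through, and the bid
# levels are the illustrative figures from the conversation, not market data.
ADMIN_PRICE = 140.0          # $/kW-year: the administratively set DR payment Doug cites
DATA_CENTER_BID = 500.0      # $/kW-year: the hypothetical hyperscaler bid Astrid floats
HOUSEHOLD_FLEX_KW = 1.0      # roughly a thermostat's worth of flexible demand
LSE_SHARE_TO_CUSTOMER = 0.5  # assume the load-serving entity passes half through

def annual_customer_payment(price_per_kw_year: float) -> float:
    return price_per_kw_year * HOUSEHOLD_FLEX_KW * LSE_SHARE_TO_CUSTOMER

for label, price in [("Administrative program", ADMIN_PRICE), ("Data center bid", DATA_CENTER_BID)]:
    annual = annual_customer_payment(price)
    print(f"{label:22s}: ~${annual:.0f}/year (~${annual / 12:.0f}/month) to the household")

# Why a bid that size is plausible: Astrid's ~$7B/year opportunity-cost estimate
# for a stranded gigawatt-scale site works out to roughly $7,000 per kW-year,
# an order of magnitude above even the $500 bid.
print(f"Opportunity cost of delay: ~${7e9 / 1e6:,.0f} per kW-year")
```

The gap between five or six dollars a month and twenty-plus dollars a month is the difference between a program people shrug at and one they sign up for.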
Doug Lewin (44:58.262)
I love this for so many reasons. I mean, number one, you know, affordability is a big, big, big deal, right? I mean, Texas has fairly low rates, though our bills, just as a function of it being so hot, so many days of the year, even with low rates, our bills are quite high. And we are seeing, there was an article, I’ll put it in the show notes so that if I get it wrong, people can look up exactly what it is and I’ll put it in there that I was wrong if I was wrong. But I think it was something like, I think it was earlier this year, CPS Energy in San Antonio said something like one out of six of their customers was in arrears.
Astrid Atkinson (45:26.55)
Yeah.
Doug Lewin (45:26.55)
And that is not uncommon, right? NASUCA, the state utility advocates organization had some similar numbers out last year. There’s a real problem. And you have a system in Texas, it’s different everywhere, but in Texas, you can really reduce your bills by moving your load around the four coincident peaks. And that’s probably going to change. But there’s also emergency response service. There’s all these different ways that large users, Bitcoin mines, big industrial manufacturers, data centers can reduce their bills. There’s a very small amount of residential opportunity there. So this would create that. This would help put money into people’s pockets, help the data centers, especially those that for whatever reason can’t really move their load around or markets or what’s that?
Astrid Atkinson (46:17.484)
Who care about affordability.
Astrid Atkinson (46:20.218)
There’s an argument being made for actually caring about affordability and overall system costs as well.
Doug Lewin (46:25.408)
Absolutely. And this is what markets are for, right? If you’re a data center developer and you’re like, okay, I’m going to put all this backup gen and storage and all this stuff on site. But if the next increment of that costs more than whatever this other distributed market is, and this other distributed market’s putting money back into the pockets of people living in this state, and I know I need public support to be able to build these data centers, it’s just kind of one of those, one of those sort of like stereotypical win-win-win-wins. I do want to talk about the capital deferral part, but anything else you want to say about that before I go there?
Astrid Atkinson (47:01.678)
Well, yes, actually, I would say that some of this is kind of speculative and forward looking, but the thing that I like about it is that there is a lot of motivation to get stuff done quickly in this space. And that’s a little bit unique in the sorts of changes that we’ve been managing in the grid and kind of power industry more broadly. We’ve been talking about local markets for forever, but they haven’t been strongly incented. And I think the thing that is potentially a little bit different about this moment is that there is a tremendous sense of urgency from those large loads. And so, you know, it’s a multi-part problem and a multi-part solution, but starting to really kind of socialize and fix the idea that like flexibility requirements when they connect and flexibility opportunities to connect more quickly could unlock a lot of good is a really helpful thing to have in people’s heads.
Doug Lewin (47:50.036)
Absolutely. And so let’s just extend that for just a minute to the distribution grid. Like you said, markets have to have like a buyer and a seller. So in the example you’re giving there, the data centers could be the purchaser, could be the sort of counterparty. What I’m thinking about on the distribution side is starting to get to a system where, and this has been talked about a little bit at the legislature, particularly post-Beryl, where there were some discussions around like performance-based regulation and aligning the financial interest of utilities with the financial interests of their customers, of having the Public Utility Commission, maybe with some assistance from ERCOT as they’re getting this, you know, as their GE Vernova work comes on and starts looking at, you know, getting them visibility into distributed energy resources, is to be able to stack up, like, what are the different investments you’re making on the distribution grid? To be very clear, the distribution utility is still going to make a lot of money. Like the investor-owned utility is still going to, there’s so much investment needed here. We’re not having a death spiral discussion right now. Right? Like it’s not, it’s like, it’s not going to happen. Like the grid needs a lot of investment. Utilities are not starved for investment opportunities, but there could be some of those that could be deferred or obviated by smart orchestration of distributed resources. So maybe in that sense, you know, the PUC and ERCOT, working with the utilities, are identifying a couple of projects that look, because of that visibility they’re getting into the distribution grid, like these are really high potential to have distributed energy resources meet the need. Thus, you’re not socializing the cost of tens or hundreds of millions of dollars. Instead, private capital is going in to pay for resources at people’s homes and businesses. It isn’t socialized to the whole system. So that’s preferable. And then there actually is some payment from the distribution utilities, because while that is a payment that would be socialized and rate-based, it could be far less than what they were gonna spend otherwise. And we know what that amount, we know what they’re gonna spend. Like that is a knowable, right? It’s an empirical matter. They’re going to recover it in rates. I don’t know. Could that work or is that just too complicated?
Astrid Atkinson (50:01.998)
Yeah, I think it could. So a couple of things on that. Firstly, local utilities, distribution utilities, all utilities really, have their own set of incentives for how they make a profit, mostly centered on capital expenditure and the rate of return on that. On one of your previous episodes, Lynne Kiesling was talking about the alternative perspective of perhaps a rate cap, or what's usually referred to as a totex model, where instead of looking at just capex, you look at the whole cost structure for the utility. The idea is that it's the utility's business to optimize between operating expense and build-out; you just have to provide good service, and the overall rate of return becomes a function of all of those things. I think that's very useful. But I don't think we necessarily have to change everything about how we regulate utilities for there to be a really good case for things like investing in alternatives to physical infrastructure upgrades. The way we like to think about this is not exactly capital deferral, but rather capital efficiency. Utilities don't really have a shortage of opportunities to do upgrades at this point; to some degree, their business structure encourages upgrades. What they have is more like a target-rich environment that's starting to cause issues with affordability. There's only so much you can pass along to ratepayers, reasonably speaking. Utilities now need to be increasingly careful to make sure that where they invest, it's in places with the most public good, places that benefit the broader set of ratepayers. And that does, I think, create inherent incentives to start optimizing, especially at the very edges of the grid: managing edge resources and using technology to defer, or even completely avoid, local upgrades through more efficient edge management. There's a lot to be done there. What I will say is that a bigger blocker in my mind than the regulatory structure is actually the current state of software and data capabilities within utilities. And I'm a little biased on this, because I run a company that provides software and data capabilities.
Doug Lewin (52:25.71)
I was just going to say we're right back into your wheelhouse. So I see what you did there. No, but you're right, it actually is a real issue. You can't do this kind of thing without a really strong data foundation.
Astrid Atkinson (52:35.138)
Right.
And most utilities today don't have the data capabilities they would need to say, on a really robust and ongoing basis: this is exactly what we expect to spend on every piece, these are the potential efficiencies, and this is the counterfactual if you managed it differently. And even looking at it on more of an operations basis, thinking about moving toward a DSO, most utilities don't have the ability to see even what's happening with rooftop solar, let alone all the things that are getting plugged in. The state of the art from a software perspective in the industry is not great. Most of these systems come from large legacy providers. They're mostly running on-prem, and even where they're not, they don't have the scale and robust large-system capabilities that we relied on when I was running large systems at Google, and that every equivalent provider does. When I was an operator at Google, I could see every one of the hundreds of thousands to millions of machines in the collective fleet serving all of my web search requests. If we had a local outage, I'd get notified right away. I could drop into any single server and take a look at it. But I also had really robust tools to analyze flows of traffic across different parts of the network, to drop into high-performance analytics tools and say, there's a pattern of outages here causing this system problem where this area has blinked out. For those kinds of capabilities, we are in the stone age in terms of what's available to grid operators today. And I really think that without those capabilities, it's very hard to envision a more dynamic way of managing real physical reliability considerations, like blowing up transformers. Trusting software to not blow up a transformer is a really big change in the way that utilities do things. And it's true that changing the regulatory structure to allow them to pay for software would be nice, because software is typically on the opex side today, which is not part of the rate base.
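A minimal sketch of the kind of edge visibility being described here, not Camus Energy's actual software: aggregate hypothetical smart-meter interval data up to each transformer and flag the ones approaching their ratings. All names, ratings, and data shapes are assumptions; a real system would pull from AMI head-ends, GIS connectivity models, and SCADA rather than in-memory dicts.

```python
# Minimal sketch: flag distribution transformers at risk of overload
# from smart-meter interval data. Everything here is hypothetical.
from collections import defaultdict

# Hypothetical mapping of meters to the transformer that serves them
METER_TO_TRANSFORMER = {
    "meter_001": "xfmr_A", "meter_002": "xfmr_A",
    "meter_003": "xfmr_B", "meter_004": "xfmr_B",
}

# Hypothetical nameplate ratings in kW
TRANSFORMER_RATING_KW = {"xfmr_A": 50.0, "xfmr_B": 25.0}


def flag_overloaded_transformers(interval_kw, threshold=0.9):
    """interval_kw maps meter_id -> kW demand for one 15-minute interval.
    Returns transformers whose aggregate load exceeds `threshold` of rating."""
    load_by_transformer = defaultdict(float)
    for meter_id, kw in interval_kw.items():
        xfmr = METER_TO_TRANSFORMER.get(meter_id)
        if xfmr is not None:
            load_by_transformer[xfmr] += kw

    flagged = []
    for xfmr, load in load_by_transformer.items():
        rating = TRANSFORMER_RATING_KW[xfmr]
        if load >= threshold * rating:
            flagged.append((xfmr, load, rating))
    return flagged


if __name__ == "__main__":
    # One hypothetical interval: EV charging pushes xfmr_B near its limit
    reading = {"meter_001": 12.0, "meter_002": 18.0,
               "meter_003": 11.0, "meter_004": 13.0}
    for xfmr, load, rating in flag_overloaded_transformers(reading):
        print(f"{xfmr}: {load:.1f} kW of {rating:.0f} kW rating")
```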
Doug Lewin (54:50.444)
Yeah, to put a finer point on that: today, if they just take the transformer out and put a bigger transformer in, or add a second or third or fourth transformer, whatever it is, they're getting 9%, 10%, 11% on every dollar they spend. If they go through all the trouble to upgrade their data systems, figure out exactly how these distributed energy resources are working, and maybe even bring a few more onto the grid so they don't need the transformers, congratulations, now you don't get paid.
Astrid Atkinson (55:18.466)
Yep. Like even for a utility that absolutely wants to bring really intelligent solutions to saving money for ratepayers, that is a structural disincentive. That’s a real problem.
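To put rough numbers on that structural disincentive; the figures below are hypothetical and come from neither the conversation nor any real utility:

```python
# Hypothetical back-of-the-envelope comparison of the incentive Doug and
# Astrid are describing. All numbers are illustrative.

transformer_upgrade_capex = 2_000_000   # $ rate-based capital spend
allowed_return = 0.10                   # ~10% regulated return on capex
der_software_opex = 400_000             # $ annual non-wires alternative cost

# Under traditional cost-of-service regulation, capex earns a return...
utility_earnings_from_upgrade = transformer_upgrade_capex * allowed_return

# ...while opex (software, DER payments) is typically passed through at cost.
utility_earnings_from_der_solution = 0.0

# Ratepayers may pay less under the DER solution, even before counting
# depreciation on the new asset, yet the utility earns nothing on it.
print(f"Utility return, wires upgrade:   ${utility_earnings_from_upgrade:,.0f}/yr")
print(f"Utility return, DER alternative: ${utility_earnings_from_der_solution:,.0f}/yr")
print(f"Annual cost of DER alternative:  ${der_software_opex:,.0f}")
print(f"Capex avoided if deferred:       ${transformer_upgrade_capex:,.0f}")
```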
Doug Lewin (55:28.938)
Yes.
So just one last thing on this, and I know we're nearly out of time, but I think this is really important in the Texas context. Texas is mostly a competitive market with monopoly, fully regulated transmission and distribution utilities. Most of the conversation we've just had is about that world, but 25% or so of Texans are served by municipal utilities or co-ops. That's not a small number of people; it's something like six or seven million people in the state. How does this look for those folks? Those utilities are basically vertically integrated, not exactly, because there are G&Ts and retail-only co-ops and things like that, but basically. In some ways it should be simpler, maybe not easier, for all the reasons you just said about the data being far behind and all of those issues. But at least in the sense that if you're a CPS Energy, you own generation, you own the poles and wires, and you have the customer relationship. So, with the huge caveat that you've got to do a lot of work with data, you should be able to create a virtual power plant type of thing within your service territory to save money on transmission and distribution, save money on generation, put money back into your customers' pockets, and free up capacity so you can bring data centers into your service territory. What am I missing there, and how are you thinking about that in Texas? And as part of that, I think you're doing work with a co-op in Texas, so it'd be interesting to hear about that before we end.
Astrid Atkinson (56:53.142)
Yeah, we currently do some work with a co-op in Texas in the Dallas area. I think the situation for co-ops and munis is in some ways really rich with opportunity and in other ways, I think, really difficult. We've done quite a bit of work with co-ops; we got our start working in that part of the market. They actually have quite a bit of flexibility because they are vertically integrated and because they have a very unambiguous charter within their community. They're nonprofits, a lot of what they do is directly tied to service to their member-owners, and they have both the tools and the obligation to keep rates low within that community.
The things that they don't have: they are typically really resource-strapped. They're usually small organizations, they don't have a lot of time, and functions that would have a 50-person team at a large utility might have a fraction of one person's time at a co-op. So it's hard for them to take on challenges that are really resource-intensive. On the other hand, we loved working, and continue to work, with folks in the co-op sector, because if you can go in and help them with the heavy lift of moving data around, getting it assembled, stitching over gaps, and all of those kinds of things, they have all the same data sources and many of the same challenges that larger utilities do, and they can sometimes move a lot more quickly. Part of what was great about starting our work with co-ops is that we developed really robust capabilities for things like stitching together data sets, and we used a lot of small AI to fill in gaps within the data, which has helped us a lot as we've begun working with larger utilities as well. The thing that's really difficult for co-ops, I think, is that they can end up with the short end of the stick in situations where there's a lot of risk or a lot of money moving around. In particular, Winter Storm Uri put some of the Texas co-ops into situations where they basically went bankrupt because they-
Doug Lewin (58:47.924)
Yeah.
Astrid Atkinson (58:47.924)
Because they had contract structures in place that obligated them to pay for power that maybe wasn't even necessarily available to provide service. It's a very complex situation, but they are not necessarily in a fantastic position to benefit from the efficiencies of scale of something like the ERCOT market, yet they sometimes end up bearing some of the costs. And we saw those impacts from Uri radiate up into Colorado and over into Arizona, with gas shortages and price spikes and all of the above. So they've got their own challenges, but what is fantastic about co-ops is that they're very responsive to customer needs. That can show up as innovation in affordability. It can also show up as innovation in thinking about how to bring on a large load in a way that helps offset costs for everybody else on the system, and maybe helps that large load get connected more quickly. So I think we're going to continue to see a lot of innovation, perhaps disproportionately on the co-op side. There are a lot of them, and they can try different things. But I also think they're managing some really significant and increasing challenges around system costs that have to be balanced there.
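Going back to the data-stitching point: a toy illustration of filling gaps in meter data. The actual approach Astrid describes ("small AI") is far more sophisticated; this simple interpolation, with made-up readings, only shows the shape of the problem.

```python
# Toy sketch of filling gaps in hourly meter data with linear interpolation.
# Values and data shapes are made up for illustration.

def fill_gaps(readings):
    """readings is a list of hourly kWh values with None where data is missing.
    Fills interior gaps by linear interpolation between known neighbors."""
    filled = list(readings)
    n = len(filled)
    for i, value in enumerate(readings):
        if value is not None:
            continue
        # Find the nearest known readings on each side of the gap
        left = next((j for j in range(i - 1, -1, -1) if readings[j] is not None), None)
        right = next((j for j in range(i + 1, n) if readings[j] is not None), None)
        if left is not None and right is not None:
            frac = (i - left) / (right - left)
            filled[i] = readings[left] + frac * (readings[right] - readings[left])
    return filled


if __name__ == "__main__":
    hourly_kwh = [1.2, 1.1, None, None, 2.0, 1.8]
    print(fill_gaps(hourly_kwh))  # gaps filled, roughly [1.2, 1.1, 1.4, 1.7, 2.0, 1.8]
```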
Doug Lewin (01:00:20.788)
That makes sense. And I think it also speaks to why it might make more sense to have these distributed markets either linked to each other somehow, or to have a broader market, because you could get a data center going into a co-op territory and get some residential or distributed demand-side reductions in that service territory, but the co-op also may not have that large a load, period. So even if you offset 100% of the load, it might only be 100 or 200 megawatts. Like...
Astrid Atkinson (01:00:48.398)
Honestly, the data centers that we’re talking about dwarf most of the co-ops that I’ve worked with in terms of overall load.
Doug Lewin (01:00:54.158)
Right. So we need to make sure that those folks, if there is some kind of distributed market, have a chance to participate and lower their bills, period. And there might be others as well.
Astrid Atkinson (01:01:05.614)
Absolutely. And so this is where I think ERCOT has a really unique position.
Doug Lewin (01:01:05.614)
Yes.
Astrid Atkinson (01:01:05.614)
Because they are in a good position to do something like that.
Doug Lewin (01:01:14.326)
And they did. We'll put a link in the show notes; just days before we recorded this, they announced not only the data piece you talked about, but also the launch of something called GRIT. I forget exactly what the acronym stands for, something like grid research and innovation technology. And DERs were one of the first two or three things on their list of things they really want to get into. So I do think this is a really, really fruitful area, and I think we're going to see a lot of growth and a lot of dynamism in it. Astrid, you'll be at the center of it. Your thought leadership on this, and the leadership of your company, is really interesting, and I'm looking forward to keeping in touch with you and learning more about the company as it grows. Before we end, is there anything I should have asked you that I didn't, or anything else you'd like to say in closing?
Astrid Atkinson (01:02:00.29)
I think it’s a really interesting time in the industry today, and I think we’re likely to see a lot of changes in the next couple of years. You know, there’s really, I think, never been a more exciting time to be working in this industry. I’m really happy to be part of the conversation. Thanks for having me on the show. I really appreciate it.
Doug Lewin (01:02:16.844)
Yep, thanks for all you do, Astrid. Appreciate you.
Astrid Atkinson (01:02:19.832)
Thank you.
Doug Lewin (01:02:22.392)
Thanks for tuning in to the Energy Capital Podcast. If you got something out of this conversation, please share the podcast with a friend, family member, or colleague, and subscribe to the newsletter at douglewin.com. That's where you'll find all the stories where I break down the biggest things happening in Texas energy, national energy policy, markets, and technology policy. It's all there. You can also follow along on LinkedIn, on Twitter at Doug Lewin Energy, and on YouTube at Doug Lewin Energy. Please follow me in all the places. Big thanks to Nathan Peavey, our producer, for making these episodes sound crystal clear, and to Ari Lewin for writing the music.