“Everyone hates data centers.”
That was the subject line on the email newsletter from Heatmap Daily the day before I sat down with Dr. Varun Sivaram, co-founder and CEO of Emerald AI. Communities see huge new loads coming onto the grid, hear about billions in new infrastructure, and worry that their bills will go up.
It doesn’t have to work that way.
Varun argues there are two paths. On the villain path, AI data centers drive up power bills and increase the likelihood of outages. On the hero path, they become flexible grid assets that help us use existing capacity better, absorb much of the cost of new grid infrastructure, and help residential and small commercial customers pay for distributed batteries, heat pumps, and more.
Texas and ERCOT are at that fork in the road.
Two futures for AI data centers
Varun calls this a “critical juncture.” If ratepayers have to pay more and grid reliability takes a hit, communities start pushing projects away and the U.S. falls behind in the global AI race.
The alternative is the hero path, where data centers show up as flexible partners:
Data centers in this hero path are going to contribute to grid reliability and help us to avoid rolling blackouts. I think we can get there, but we’re not on that path right now and folks are right to worry. And this is the moment where we switch from the villain to the hero.
Texas has a chance to innovate — both technologically and with policy. Regulatory innovation is as important as technological innovation — maybe more so.
Turning AI load into flexibility
Emerald AI is a software layer that makes AI workloads flexible. Varun breaks it down into four kinds of flexibility:
Temporal. Once you know what can move, you can shift it in time. Training a big model at 6 p.m., when ERCOT is tight, is very different from running it at 2 a.m., when prices are low and resources are abundant.
Spatial. Many jobs can move across locations. If a Texas node is stressed and another region is fine, traffic can be shifted without changing the user experience.
Resource. Onsite batteries, fuel cells, and backup generators can carry part of the load for limited periods, reducing draw from the grid. Emerald co-dispatches these onsite resources alongside computational flexibility when necessary.
Adjacent. Data centers can purchase flexibility — putting money into the pockets of residential and small commercial customers — from distributed batteries, HVAC systems, and other controllable equipment.
Put together, these layers make a data center behave less like a rigid block of demand and more like a flexible grid asset when conditions require it.
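Here is a minimal sketch of how the temporal piece might work in software. The job names and delay tolerances are hypothetical; this illustrates the concept rather than Emerald AI’s actual code:

```python
# Hypothetical illustration of temporal flexibility: classify jobs by how
# much delay they tolerate, then shift only the ones that can outlast the
# grid event. Job names and thresholds are made up for this sketch.
JOBS = [
    {"name": "911-call-routing", "max_delay_s": 0},       # must run now
    {"name": "chatbot-inference", "max_delay_s": 2},      # near real time
    {"name": "model-fine-tune", "max_delay_s": 3_600},    # can pause an hour
    {"name": "llm-training-run", "max_delay_s": 28_800},  # overnight is fine
]

def deferrable(job: dict, event_duration_s: int) -> bool:
    """A job can shift in time if it tolerates more delay than the event lasts."""
    return job["max_delay_s"] >= event_duration_s

event_s = 3 * 3_600  # a three-hour evening peak
print([j["name"] for j in JOBS if deferrable(j, event_s)])
# -> ['llm-training-run']: only the long training job rides through the event
```

The spatial, resource, and adjacent layers add options to the same decision: if a job cannot shift in time, maybe it can move regions, or a battery can cover it.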
ERCOT’s stakes and the Texas choice
Varun shared a conversation with ERCOT CEO Pablo Vegas. Vegas said he did not just want a tool that jumps in during emergencies. He wanted something that keeps the grid from getting to an emergency. Don’t wait for the flashing red lights; have data centers contribute flexibility when the lights are flashing yellow.
That is the heart of the hero path.
ERCOT was already dealing with intense load growth from industrial projects, crypto miners, traditional data centers, a growing population, hotter temperatures, and now AI data centers. Texans will not accept anything less than high reliability and lower bills. If the PUC and ERCOT treat AI as inflexible, we will need to build far more capacity and infrastructure than we otherwise would.
If we require and reward flexibility, we can serve more load at lower cost, then add new infrastructure when truly needed.
Final Thoughts
The hardware and software inside AI data centers mean they are already some of the most controllable loads connected to the system. With the right tools, incentives, and market structures, AI factories can act as shock absorbers instead of stress multipliers.
Texas leads on gas. Texas leads on wind. Texas leads on solar and storage. We can also lead on making AI an ally to the grid, not a villain. That will take work, but it is possible. It’s a choice we can make.
If you enjoyed this podcast, please share it with a friend or colleague or family member or neighbor. The more Texans engage with these decisions, the better chance we have for a grid that is reliable, affordable, and cleaner for everyone.
Timestamps:
00:00 – Intro, Varun bio, Emerald AI
02:15 – The villain and hero paths for AI data centers
05:30 – Phoenix pilot as a tangible example of the hero path
09:00 – California simulation of 2020 outages
10:00 – Possibility of doing a pilot in ERCOT, Pablo Vegas’s comments
12:00 – What exactly does Emerald AI do?
14:00 – Breaking down four flexibilities: temporal, spatial, onsite resource flexibility, adjacent
20:00 – Emerald AI’s focus is on onsite flexibility
24:00 – Real-world stress test results
27:00 – What excites Varun about AI
32:00 – How AI can help lower power bills: the central tenet of the hero path
36:00 – Why ERCOT is potentially the global model for speed to power
40:00 – Connect-and-manage for loads
43:00 – A reference design for AI factories from a pilot in Virginia
46:30 – The hero and villain path for AI and emissions
49:00 – Optimizing the system to buy time until nuclear, geothermal, etc. are ready
51:30 – Getting a win-win-win: on affordability, on AI innovation, and sustainable, reliable systems
52:30 – Final thoughts: the Emerald AI team
Resources:
Host, Guest & Company
• Varun Sivaram - LinkedIn
• Emerald AI - LinkedIn
• Doug Lewin - LinkedIn, Twitter (X), Bluesky, & YouTube
Company News
• Sharing Our Seed Extension - Press Release
• National Grid and Emerald AI announce strategic partnership - Press Release
• How AI Factories Can Help Relieve Grid Stress - Press Release
Books & Articles
• The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI by Dr. Fei-Fei Li
• The Country’s Biggest Grid Has a Plan to Manage Data Centers’ Power Use. Everyone Hates It. - Heatmap News
• The mechanics of data center flexibility - Catalyst Podcast (Latitude Media)
• How the world’s first flexible AI factory will work in tandem with the grid by Arushi Sharma Frank - Latitude Media
Policy & Reports
• Report on disorganized integration of data centers - Texas Reliability Entity
• 2025 State of Reliability - NERC
• Arushi Sharma Frank’s ERCOT Planning Guide Revision Request
• Retail Electricity Price and Cost Trends: 2024 - Lawrence Berkeley National Laboratory
• Rethinking Load Growth - Tyler Norris and Duke University
• ANOPR on Large Load Interconnection - FERC
• Emerald AI: presentation to ERCOT Large Flexible Load Task Force
• PGRR 135: Large Load Interconnection Queue Process Revision
Related Podcasts by Doug
• How Data Centers Strengthen the Grid - Astrid Atkinson
• Texas’ Load Growth Challenges – And Opportunities, with Arushi Sharma Frank
• How Load Flexibility Could Unlock Energy Abundance with Tyler Norris
Related Substack Posts by Doug
• AI Data Centers Aren’t Causing Higher Prices
• Demand Side Resources Could Enable Load Growth
• Can AI Data Centers Lower Costs for Residential Consumers?
Transcript:
Doug Lewin (00:05.154)
Welcome to the Energy Capital Podcast. I’m your host, Doug Lewin. My guest this week is Dr. Varun Sivaram. Varun is one of the most interesting guests I’ve had in the three years now I’ve been doing podcasts, both the Energy Capital Podcast and going back to the Texas Power Podcast. He is the founder of Emerald AI, a company which is transforming energy-intensive data centers into grid assets and grid allies. We talked about all the different ways that data centers, if integrated right... The Texas Reliability Entity brought this up in a report: the possible disorganized integration of data centers into the grid is one of the biggest reliability risks. I would argue, and clearly Dr. Sivaram argues, the opposite is true as well. The organized integration of data centers can actually make grids more reliable and spread costs out to more customers.
We got into all of that. Just a couple of notes on Varun: He was formerly the chief strategy and innovation officer at Ørsted. He was the chief technology officer of India’s largest clean energy company, ReNew Power. He was a diplomat at the US State Department. He is currently a senior fellow at the Council on Foreign Relations. He was named to Time Magazine’s Time 100 Next list of emerging leaders. MIT Technology Review named him one of its 35 Innovators Under 35. You get the idea. He also has a PhD in condensed matter physics from Oxford. This bio is kind of ridiculous. Clearly one of the smartest people out there in this space, and this company, Emerald AI, is really doing some super innovative things with some really high-level partners, including NVIDIA and others. I think you’ll enjoy this conversation as much as I did. Please leave us a five-star review wherever you listen.
And most importantly, if you are not already a subscriber at douglewin.com, please go there and become a subscriber today. Your support for the podcast really makes it possible. And with that, here is my conversation with Dr. Varun Sivaram.
Varun Sivaram, welcome to the Energy Capital Podcast.
Varun Sivaram (02:19.256)
Thanks so much for having me. It’s an honor.
Doug Lewin (02:21.514)
Hey, it’s great to talk with you. I have been reading article after article about Emerald AI. I saw your presentation to the large load task force a couple of months ago in Texas and have been meaning to do this for a while. So thanks so much for taking the time. We’re going to obviously talk about Emerald AI. We’re going to talk about Texas and data center growth. We’re going to talk about all of these things, but I just want to start from a very high level, Varun. We are recording here on November 7th. Yesterday, Heatmap News had a very provocative headline: “Everyone hates data centers.” I don’t know that that’s actually true, but I know what they’re trying to say. There certainly is a lot of opposition to data centers right now. You are doing a lot of work, obviously, around data centers, data center flexibility, just from a grid perspective, thinking about affordability, reliability, lower emissions—all these aspects of data centers. Why should people not hate data centers?
Varun Sivaram (03:19.928)
Well, look, Doug, I think we’re at a critical juncture, and that juncture is between what I consider to be the villain path for data centers and the hero path for data centers. And I don’t think either one of them is preordained. I think that folks may be right to say that they’re worried about the impact of data centers in their community the way things are headed today, right? The average annual household power bill in Columbus, Ohio rose by $240 in 2025, directly attributable to data centers. And you’ve seen NERC studies, for example, and other reports showing that the advent of AI data centers could cause grid reliability issues. So there is a scary villain path that I’m worried about in which data centers come to town and communities don’t want them. They raise rates, they destabilize grids, and as a result, you just have fewer data centers getting built. I think the villain path is not just bad for the AI industry. The villain path is very bad for America because America needs AI infrastructure and AI data centers to compete in the 21st century in the most important economic sector we’ve ever seen. And data centers can provide economic development, and they can help us to compete with China. So we absolutely need a lot more AI data centers.
The hero path is the one that I’m obsessed with getting us onto because I actually think AI data centers, far from being the thing that undermines the grid, can actually be the asset that saves the grid. And in that hero path, if we get on it, data centers come to town, they actually lower your rates, or at least they arrest the increase in rates because they’re more efficiently utilizing your existing system. We can connect far more data centers much more quickly to existing power systems and defer the massively expensive overbuild of infrastructure and more prudently expand our grid and expand our generation. And by the way, data centers in this hero path are going to contribute to grid reliability and help us to avoid rolling blackouts. I think we can get there, but we’re not on that path right now and folks are right to worry. And this is the moment where we switch from the villain to the hero.
Doug Lewin (05:25.878)
Yeah, I definitely want to talk a lot about both paths because I think we’re seeing elements of both of them. But obviously, to me anyway, the hero path is much more interesting. Maybe this is my sunny outlook on life or something, but I think that there really is an opportunity here when you see the scale of investment that you see, and we know that the grid has been underinvested in. This is an opportunity to bring a lot of investment to the grid.
Where I think I want to go next is I do want to ask you about Emerald AI. And I think the way I kind of want to bring that in here is this test you guys did in Phoenix recently. Can you talk a little bit about what you did there and connect that, obviously, to the hero path?
Varun Sivaram (06:10.528)
Absolutely. So Phoenix, Arizona: deployment one of now four Emerald AI deployments. In that first deployment, we went to Phoenix with a range of very credible and authoritative partners like EPRI, the utility association that runs DCFlex, Oracle, in whose data center we were operating, NVIDIA, who’s both our investor and partner in this demonstration, as well as the local utility Salt River Project. The goal was to prove that on a grid that faces summertime strain—let’s say you’ve got a peak moment sometime in the summer where a million air conditioners are straining in Phoenix, Arizona—that a data center can actually flex its power consumption. It can reduce its power consumption at that very moment to provide badly needed relief. And in doing so, that data center demonstrates the kind of behavior where you say, “This is one of those grid-friendly data centers that if one of these comes to interconnect to my grid, I would love to have this hero of a data center connect—not in seven years, requiring me to build out transmission lines and power plants, but right now in seven months because it can provide badly needed relief when I need it and it’s not going to raise my peak load unsustainably.”
And so we went out to Phoenix and we worked with our partners, as well as the chief scientist of Databricks, to design a representative set of customer workloads running on this cluster of NVIDIA GPUs—a representative set of workloads across inference, fine-tuning, training of large language models. We went ahead and said, “Is it possible that if we get a signal during the peak demand on that day from the local utility that we can then reduce the power consumption by 25% for three hours?” And those were the parameters set to us by our utility partners. The test succeeded. In fact, the test didn’t just succeed once, it succeeded many times. And we’re really pleased—I think this is the first time I’m sharing this publicly, Doug—we’re very pleased that those results have now been formally peer-reviewed and accepted for publication at one of the world’s top scientific journals, Nature Energy.
So we’re delighted that this is kind of an inaugural first demonstration of AI computational flexibility where the data center itself is changing the way it operates in a way that is supporting the reliability standards of the AI customers. We made sure that our AI customers and partners were happy with the performance of their AI workloads while at the same time, the grid got exactly the performance it needed—to see that reduction over a controlled ramp rate, that 25% reduction over a three-hour period, which is what the instruction to us from the grid was, and then a controlled ramp back and no snapback beyond the baseline energy consumption. That’s the kind of behavior that if you replicated across many data centers can save a grid from a blackout.
Just very briefly, I’ll say, Doug, we also simulated a real California event. We simulated what happened in California a little over five years ago, in August 2020, when a 500-megawatt gas plant just tripped offline. Had we had Emerald AI deployed on data centers in that service territory, we could have avoided the rolling blackouts that ensued. And what we demonstrated in this trial—again, this was in Phoenix, but now responding to a CAISO emergency need—we demonstrated that we could first reduce the consumption by a little bit and then reduce by a further amount if that’s what the grid operator signals that we need to do. And so this kind of dynamic ability to respond to the grid’s needs as they evolve while protecting the performance of the AI workloads, the most valuable workloads in human history—that’s the dual optimization that Emerald AI enabled in this Arizona test. And it’s just the first of many deployments that we’re super excited about.
Doug Lewin (09:48.246)
Now you had, I think a few months ago, again at that large load working group, said you guys were at least considering doing some kind of a test in Texas. Is that happening or on the roadmap?
Varun Sivaram (09:58.996)
It is my deep desire to get that test up and running. You know, ever since the Arizona demo, we went ahead and did another commercial demonstration again with EPRI DCFlex, and we’ll be excited once we finish EPRI’s independent technical validation to present those results to the public. We have a test that’s been announced and that will be done in the United Kingdom, our first international expansion to London, with National Grid, the national utility there, and a large data center with the most advanced NVIDIA GPUs. And then last week we made a major announcement about our fourth deployment—happy to talk more about it—a commercial scale of nearly 100 megawatts in Virginia with NVIDIA. So there’s a lot of excitement for what’s to come.
I really would like to do this in ERCOT. And I’ll just share that one of the most impactful statements I have heard came from Pablo Vegas, the CEO of ERCOT, who shared with me, he said, “Look, I would like your technology not just to try and relieve the grid when we’re in an emergency moment—imagine, you know, all signals flashing red—but rather when they’re flashing yellow.” That’s right. When they’re flashing yellow and it looks like, you know, we might be approaching a scarcity event, that’s when our grid-friendly data centers can really help bring the grid right back to that green zone and therefore avoid ever coming into the emergency situation to begin with. It’s why I’m so enthused about this hero pathway. If you have grid-friendly AI data centers—and Doug, we should talk about why Nvidia calls them AI factories—if you have these grid-friendly assets, they help every day to keep you in balance and avoid you entering that emergency condition from which then you have to take drastic action to recover from.
Doug Lewin (11:39.542)
Yeah, so I think there’s a ton of applicability here. What I’m trying to think through, and I do want to come back to talking some about AI and why they’re AI factories and not data centers and all of that is really interesting. I just want to get grounded a little bit more in what the technology is. So I’m going to repeat it back and then you’re going to tell me where I got it wrong and/or expand on it. You’re certainly going to tell me where I got it wrong. I’ll get it wrong somehow.
But let me see if I can break this down. So with an AI data center or data factory or however you want to call it, you’ve got a lot of different things going on. You’ve got some inference that could be doing things like, for instance, routing calls to a 911 call center or helping an autonomous vehicle interpret that that thing moving across the street is a pedestrian and it needs to stop. So there are certain functions that you can’t shut off. Those need to run and they’ve got to run quickly. You can’t even necessarily move them to a different data center. Latency is a major issue for those kinds of applications. That’s kind of on the extreme end. On the other extreme might be a large language model that’s training over a long period of time. Could be done from anywhere, could be done almost any time. And then there’s like a whole lot in between there. And what you’ve done is developed a software that understands all of those different use cases and can, within those use cases, kind of move workloads around and even sort of maximize the efficiency of the chip performance. This part I don’t understand. This part seems like magic and maybe this is just the magic of the technology you’ve developed and maybe you don’t want to say too much about it, but it seems like from what I’ve read, you’re maximizing the efficiency of that GPU in that moment such that for those use cases where they can’t be shifted, you’re still getting the exact same output, but at less energy use, which is basically the definition of energy efficiency.
Okay, so if I got that all wrong, you could just start over and describe what you do. If I got it partly right, then you could correct the other parts I got wrong. What did I get right, what did I get wrong? Grade my paper, Varun.
Varun Sivaram (13:49.858)
Doug, you get an A. You did a great job there. I’m really impressed. We’ve got to bring you on staff here. Look, let me just go up a level just to explain the broad framework of flexibility here. I think of kind of four components of flexibility for a data center.
Component number one is what we call temporal flexibility. Within a data center, you might have, just like you said, Doug, you might have some really mission-critical time-sensitive workloads that you can’t pause or slow down. You might have some other ones that you can pause or slow down and everything in between. And temporal flexibility takes advantage of slowing or pausing certain workloads. And you can do that in many ways, and we take advantage of all of these different ways as we demonstrated in Arizona, whether it’s changing the clock frequency on the Nvidia GPUs or it’s rescheduling workloads, or it is changing the resource allocation—what’s called in the industry auto-scaling—the GPU utilization for particular workloads, et cetera, et cetera. So there are lots of different things you can do, but basically in one data center, temporarily over time, I can slow or pause to create flexibility, reduce the energy draw from the grid in time.
The second way is what I call spatial flexibility. This is a very unique trait that data centers have, but other large energy users like electric vehicles don’t have. Data centers can move their workloads from one location to another at the speed of light over the fiber optic network. Again, this works for some workloads, but not others. It’s probably not gonna work for a large training run because there’s too much data to transfer, but it can work for an inference query, for example. You can move queries from one location to another so long as you have the model set up in multiple locations. So the second is spatial flexibility, where if you have a problem in Arizona, the grid is strained, you move your query over to Dallas, right?
Doug Lewin (15:39.438)
Before you go to the third one, just real quick on that, because I think that’s where a lot of listeners will have some familiarity and direct experience with AI. If you’ve used ChatGPT or Gemini or Grok or whichever you prefer, it will often say “thinking,” right? And you get that little lag there. Sometimes you can change that and optimize it to give you a real fast answer; sometimes you want it to think more. But that thinking, when you’re dealing with fiber optics, could go around the world multiple times in a second. You think 15 seconds is a long time or a short time or whatever you think it is, but you could move that query around the world a bunch of times. And the reason I want to dive into that, Varun, is one of the things I hear so often in conversations with energy people about data centers is there just is no flexibility in these things. The data centers are paying so much for these GPUs, they’ve got to run 100% of the time, not even 99.999%. They just got to run all the time. And that is true for some use cases, but for some of them, and perhaps the ones that are the biggest use case, maybe you can expound on that a little bit, there is a lot of this spatial flexibility, because somebody’s asking a question and 10, 15 seconds is a perfectly fine response time.
Varun Sivaram (17:01.462)
Yeah, absolutely. You make a great point, Doug. The latency or the delay that you’ll face if you move a query from Phoenix, Arizona to Dallas or San Antonio is not going to be a second. It’s going to be measured in milliseconds, right? You will not notice it. You absolutely will not notice it. And that’s important because AI workloads in many ways are different from the historical class of workloads that data centers used. And again, we should talk, Doug, about why we’re moving toward this paradigm called AI factories. Yes, a factory that’s optimized for converting electricity into tokens of artificial intelligence. Historically, data centers have done all kinds of heterogeneous or different things. Those include routing 911 calls, by the way. No one ever wants to mess with routing a 911 call. No one ever wants to mess with Doug Lewin trying to send a Venmo transaction to Varun. I know you haven’t done this yet, but no one wants to mess with—
Doug Lewin (17:51.446)
Do I owe you money? Are you trying to tell me I owe you money? Oh dear.
Varun Sivaram (17:57.848)
Sorry, inapt example. But no one wants to mess with a transaction like that, right? Going forward, the set of AI workloads, there are these new sets of flexibility parameters. If you’re fine-tuning a model, for example, you might have some temporal flexibility. It may be okay for that fine-tuning operation to pause for an hour or two. When it comes to inference, you mentioned these chatbots. Yes, the time to first token, which is the time you wait around waiting for that first response or that first word back to you from ChatGPT, that can take a second, it can take a few seconds. And so the few milliseconds of moving the query is not noticeable.
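A rough back-of-the-envelope check on those milliseconds; the distance and fiber speed below are approximations for illustration, not figures from the episode:

```python
# Rough check: signals in optical fiber travel at about two-thirds the
# speed of light. Distance and speed are approximations for illustration.
c_fiber_km_s = 200_000        # ~2/3 of c in glass fiber
phoenix_dallas_km = 1_430     # approximate great-circle distance
rtt_ms = 2 * phoenix_dallas_km / c_fiber_km_s * 1_000
print(f"round trip ~ {rtt_ms:.0f} ms")  # ~14 ms, versus ~1,000 ms to first token
```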
I’ll also say that although Doug, you and I interact with ChatGPT every day, that’s not the only use case for AI. There are lots of use cases. You might be a scientist, for example, and you might send a request for protein folding configurations and expect that request back in the morning or next week. Right? That is what we call a batchable request, which you might be able to pause again for an hour if the Phoenix grid is strained. So there’s a range of different ways we’ll use AI and almost all of them have some kind of temporal or spatial flexibility.
So again, we had four ways. I told you about the first two: temporal and spatial. They’re computational flexibility. The third one is what I call resource flexibility. This one’s intuitive. You might have some batteries on site and you’re able to reduce your grid draw because you’ve got fully charged batteries and for that limited amount of time that your batteries can run, you can locally power your data center or part of your data center. And the fourth kind of flexibility is what I call adjacent flexibility. Adjacent means not on site. You might have, for example, at the same transmission node, you might have neighborhoods with batteries or Nest thermostats, and you may be able to aggregate them all together and provide a little bit of flexibility to that substation, for example, and the utility might be willing to treat that as flexibility that counts toward the data center’s own flexibility.
So these are the four types. The first three are onsite. The first two are computational, temporal, spatial. The third one is onsite resource flexibility. And the fourth one is adjacent flexibility. You put all those together, and I sincerely believe we can make AI a flexible resource. And Emerald AI is the software layer that sits above and enables this flexibility. We are orchestrating the temporal and the spatial computational flexibility. We are co-optimizing that with the resource flexibility, the batteries on site, for example, so that you can best harmonize what you’re doing on the compute side with what you’re doing with the resources on site. And my sincere hope is over time, we will also harmonize all of this with adjacent flexibility and all the other resources that are offsite.
Doug Lewin (20:38.456)
So is your software already doing this or set up to do the adjacent part of it? Because that part I’m really fascinated with, because you talk about a hero path, right? The potential for data centers to pay for reductions from customers and, to be very clear, and people will get tired of me saying this, but I’m going to say it every time I talk about this: on a voluntary basis. Nobody will ever be required to do this. If you just want to pay a lot for electricity and you just don’t care, that is your right as an American and I will defend it. But if you would like to lower your bill by participating, you could actually have data centers paying for thermostats and batteries, and this is what markets are all about, right? There’s a price for that. You figure out what the price is. What’s the price to add another increment of battery on site? If the price is lower to put money into the pockets of the people in the neighborhood just across the way and be a hero and save money, right? The data center... I mean, you talk about a win-win-win all around. I’m really fascinated by that piece. I’m really glad that you laid it out, because I was going to ask about it, but you gave it without a prompt. I’m curious though, is that something that is part of the software, or is your software more like one and two, temporal and spatial, and then that’s done by somebody else?
Varun Sivaram (21:55.02)
So look, right now, if I’m being perfectly transparent, Emerald AI is focused on flexibility at the data center. So we’re doing temporal flexibility, spatial flexibility, and coordinating with onsite resource flexibility, right? Emerald AI makes it possible to co-dispatch an onsite battery alongside with your computational flexibility. So we do the first three, but I’m a big fan of all the different buckets.
Let me just say a moment on this hero path. Look, data centers are already striving to be heroes. My friend, Chase Lochmiller runs Crusoe. He’s an investor in Emerald. And they’re a standout example of a company that, when they come to town to Abilene, Texas, for example, they invest in the community, they invest in the workforce. Chase actually has this vision of building even more generation than the data center will need so that you’re actually reducing overall power system costs. And by the way, to your point, Doug, it would be wonderful if data centers, through their high willingness to pay for their compute costs, are also willing to therefore subsidize flexibility in the adjoining communities.
And another good friend, I was recently on a panel with Justin Lopez at Base Power. Base Power is an example of a company that is putting together a range of battery resources in a neighborhood or in a community, and they’re able to bid that in or dispatch that as an adjacent flexibility to data centers. So I’m just delighted that all of these great innovators are coming with solutions. Emerald seeks to be this glue that makes it possible for the data center itself to flex and to play nicely with all that adjacent flex out there.
Doug Lewin (23:24.928)
And so, okay, that’s super helpful. I appreciate that. And so it’s obviously like, you don’t necessarily need a company to do all pieces of that. I had Astrid Atkinson from Camus on recently and they do a lot of that aggregation of the adjacent flexibility you were talking about. There are lots of different companies that do that. I was just wanting to be clear on what you guys do, but you do those first three and I’m interested in how you think about, and I guess maybe it’s not you thinking about it, it’s the software, but you’re sort of training the software like, what are the kind of trade-offs between those things? I mean, it’s got to still be early days, right? There’s not a ton of information yet, I wouldn’t think, but maybe there is, about how to sort of stack those things against each other. We’re going to switch a workload to a different data center, we’re going to move the time, or we’re going to use the resource that’s on site, or we don’t have enough resource on site and we need to add more. That, I assume, is what is going on with the tests you’re doing and the early deployments: you’re getting the information to train the model to continue to refine it? Is that accurate?
Varun Sivaram (24:26.262)
Yeah, absolutely. Doug, you asked, you know, “Hey, Varun, are you doing the thinking here?” You better hope I’m not the one thinking because... Far too slow, far too slow to pull this off. The magic, the secret sauce behind Emerald AI—the thing that takes my breath away—is the autonomous intelligence, the closed-loop functionality, the way the Emerald AI set of agents actually just operate at scale. And so there was a recent test where, you know, we were just watching it, but it was kind of epiphenomenal. We had no ability to change it. We were just like watching the results pour in. And one of our partners was like maniacally changing the workload mix, starting and stopping workloads. And at one point, one of the workloads failed because I think it was improperly prepared. And the Emerald system did some behavior we had never seen it... We’d never designed it to do, but it was fascinating. It enabled the overall power draw to look to the utility like we were still very smoothly ramping down by 20, 25% and holding steady, even while under the surface, there’s this churn of all of these different workload behaviors. Some are starting, stopping, some are even failing, which we had never encountered before.
So I love watching the system autonomously and intelligently make these trade-offs in real time by taking into account, hey, what’s the user okay with? What’s the user’s priority level for these jobs? What can be tolerated in terms of temporal or spatial flexibility? And then to your question Doug, how does this stack along with the battery, right? Because the battery comes with its own set of constraints. It has a particular state of charge. You can dispatch a battery but you then need to recharge it before you dispatch it again. What if you get two back-to-back events without a recharge time in between? This is why we’re going to need a combination of flexibility approaches. It’s why compute flex is so impressive, right? Compute flex is powerful because we can do multiple events in one single day. We can do long events, even if you haven’t sized your battery to achieve an eight-hour event. Let’s say you’re in PJM and there is a long event. This has historically happened. Compute flex can really bail you out if you exceed the capacity of your batteries.
I like to think about a supply curve. There’s a supply curve along many dimensions of different interventions you’ve got at the data center. You’ve got your temporal flex, your spatial flex, you’ve got your batteries on site, maybe you’ve got a fuel cell. You’ve got a diesel generator that you’re only allowed to run for X number of hours because of the air permit reasons. And you’re stacking all of these interventions and intelligently and autonomously, you’ve got to make good decisions in the moment because, and we should talk about this Doug, the utility and ERCOT are counting on you not to screw this up, right? They’re counting on you that if there is a curtailment signal, you better perform. And so for the data center, the goal is to make sure you perform while protecting the sanctity of these customer workloads, which are again, the most economically valuable workloads in history. Don’t screw them up.
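The supply-curve framing maps naturally onto a merit-order dispatch. A toy sketch, with invented options, costs, and limits rather than Emerald AI’s actual logic:

```python
# Toy merit-order stack of flexibility interventions at one data center.
# Tuples are (name, MW of relief, $/MWh cost, available right now?).
def stack(options, target_mw):
    """Dispatch cheapest-first until the curtailment target is met."""
    plan, met = [], 0.0
    for name, mw, cost, available in sorted(options, key=lambda o: o[2]):
        if met >= target_mw:
            break
        if available:  # e.g., battery still charged, diesel run-hours left
            plan.append(name)
            met += mw
    return plan, met

options = [
    ("pause fine-tuning jobs", 10, 20, True),
    ("route inference elsewhere", 8, 35, True),
    ("discharge onsite battery", 12, 60, True),   # limited by state of charge
    ("run diesel generator", 15, 120, False),     # air permit hours exhausted
]
print(stack(options, target_mw=25))  # the three cheapest cover 25 MW
```

The availability flags are where the real complexity lives: battery state of charge, permitted diesel run-hours, and workload priorities all change from hour to hour.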
Doug Lewin (27:21.08)
So before we talk about ERCOT in just a minute, I do want to linger on this for a minute. There are two things I want to unpack a little bit more because I think they’re fundamentally important on a foundational level. Because again, back to that Heatmap headline, “Everyone hates data centers.” I feel like there’s a disconnect with the general public here. I could just see you light up as you’re talking about this. Would you talk about some of the use cases of AI that excite you the most? I just finished reading, it’s sitting over there, The Worlds I See by, I don’t know if I’m pronouncing her name right, Dr. Fei-Fei Li, who’s one of your investors, right? It’s a beautiful book. I highly recommend folks read it. She gives some of those insights into how AI could actually improve healthcare outcomes. I feel like sometimes, and it is an energy podcast, so of course we’re gonna talk about energy, but we sometimes skip over that, and I think that’s some of the disconnect with the public. So just take a minute, on a human level: what excites you about AI? It’s a big question.
Varun Sivaram (28:22.626)
Everything. The best way I can frame it is, you know, just last week, the Queen Elizabeth Prize for Engineering was presented to Dr. Fei-Fei Li, our investor, as well as to Jensen Huang, the CEO of NVIDIA, which is another big investor in Emerald AI, and to NVIDIA’s chief scientist, Bill Dally. Jensen says, you know, there is this paradigm shift we’re seeing. AI is different from other inventions. Other inventions have been tools. AI can actually use tools. This agentic AI future in which AI agents are using tools that formerly humans would use opens a whole new world of discovery.
It’s why I love Dr. Fei-Fei Li’s book that you just mentioned, The Worlds I See. You know, fine, putting my science fiction hat on, I fully expect that it’s AI that will cure cancer. It’s AI that will enable the end of road fatalities as we get far safer road transportation with autonomous vehicles. It’s AI that will end the rigmarole of meaningless work and open up far more leisure activities for everyday working-class citizens who don’t have to do things that we can now automate. And for those worried about job displacement, it’s AI that I believe will create almost unbounded economic gains. Those gains will, I hope, help the United States become more fiscally sound through this incredible economic growth and revenue. And I hope they will create enough of an economic bounty that even ordinary working-class citizens just get to share in those rewards and live meaningful, productive lives.
But we do not get there unless we invest right now. I do fear that if we don’t take this seriously—first, if we’re uncompetitive with other countries in the world, and second, if we just slow our trajectory compared with what we could achieve—we won’t realize these rewards. And every year we go that we haven’t cured every kind of cancer is just a year of unnecessary deaths. I know I’m exaggerating in some sense, but seeing AI solve protein folding or solve... these are fundamental advances.
Doug Lewin (30:39.514)
This is something I think about a lot. I haven’t talked about this publicly before, but my dad has Parkinson’s. It’s a devastating, just devastating disease that doctors just don’t... they just don’t know. It’s kind of a shrug. We don’t know what causes it. We don’t know really how to treat it. They can give you some medicine that does a little bit here and there. But my God, if we could use AI to try to understand what causes that... I mean, just the amount of human suffering from Alzheimer’s and dementia and Parkinson’s. There was just a constitutional amendment in Texas to establish some additional Alzheimer’s research. Now, I don’t want to be Pollyannaish. We all know AI could be used for bad stuff too, right? When you talk about hero path and villain path, there’s that on the grid, and then there’s that more generally. All these things we should talk about, and it’s why we’re talking about it right now. But I think a lot of times people are not thinking enough about how much it can help.
You mentioned cars. 40,000 people die on the roads every year. I think the math is something like the equivalent of a couple of plane crashes every week where everybody on board dies. If that was happening, the public would be up in arms, right? We wouldn’t accept it. We absolutely would not accept it. But somehow with cars, we all just kind of go, “Oh well.” It is unnecessary. We’re already seeing it with Waymo. They’re running around Austin all the time, and the rates of both injuries and of any kind of crash are down. I forget what the numbers are, 70, 80, 90%, something like that. So anyway, okay. Anything else you want to say about that before we move on? We’re going to go to ERCOT in a minute, but the other piece I wanted to drill into a little bit more, and this relates to the ERCOT discussion as well, is affordability. So we were talking about that fourth bucket, adjacent, where data centers could pay for reductions. I’m real excited about that, but there is a more fundamental way that data centers actually can help lower costs. This was in that LBL study that’s gotten a lot of traction over the last couple of weeks: states that have higher energy use actually have seen their rates go down. Now that’s uneven. Rates are going down more for large users than for residential. And that’s something we need to talk about and work on.
But overall, like the math is pretty simple, right? You have a fixed cost of a system and the more you spread those fixed costs out over multiple users, the lower costs go. So I don’t know if you want to say anything about that, but it’s just something I want to put out there more and more for folks to think about. That’s sort of part of the hero path as well, is as long as we have the right regulatory systems in place and ERCOT will be working on changing some of the transmission cost allocation and all that, there’s a real potential for costs for everybody to go down just from that simple math equation before you get into any of the whiz-bang exciting things AI can actually do to make the grid more reliable or affordable.
Varun Sivaram (33:33.826)
Doug, you’ve nailed it, so I won’t spend too much time repeating what you said, but it is a simple equation, but it’s central to the hero pathway. Because if you have to build out infrastructure faster than you’re bringing on the revenue from new kilowatt-hours, in other words, if you have to pay for new kilowatts faster than you get revenue from new kilowatt-hours, then everybody’s rates go up. Today we have a cost allocation problem because peak demand is rising rapidly, you have to pay for all of these new pieces of grid infrastructure, transmission lines, substations, as well as generation. And there aren’t as many kilowatt-hours getting paid for in order to make the math work. And so we socialize the cost. And then we have arguments over, “Well, should data centers pay more or should communities pay more?” et cetera. You can sidestep a lot of that through data center flexibility that allows data centers to better utilize the existing infrastructure.
Look, I still think we’re gonna need more. We’re gonna need to build more grid infrastructure and to modernize it, we’re gonna need more power generation capacity, but we can build it out prudently. And so the pace of kilowatt increases is outpaced by the kilowatt-hours that we get productive revenue from and that pays for all of this. So you have less of a cost allocation problem because you don’t have to fund this and therefore, it’s less about “What should communities pay or should data centers pay?” It’s that, as you said, flexible data centers coming on the system should actually reduce costs for everybody because the new kilowatt-hour payments really ought to pay for more than their share of what the kilowatt capacity increases are. So that’s the simple equation that I want us to keep in mind as the central tenet of the hero path.
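To make that simple equation concrete, here is a toy calculation; all the numbers are invented for illustration:

```python
# Toy numbers for the "simple equation": if flexible new load adds energy
# sales (kWh) without forcing new peak-capacity (kW) investment, the fixed
# costs are spread over more sales and the average rate falls.
fixed_cost = 1_000_000_000        # $/yr of grid infrastructure (illustrative)
kwh_existing = 10_000_000_000     # existing annual sales
kwh_new_flexible = 2_000_000_000  # flexible data center load, off-peak heavy

rate_before = fixed_cost / kwh_existing * 100                     # cents/kWh
rate_after = fixed_cost / (kwh_existing + kwh_new_flexible) * 100
print(f"{rate_before:.1f} -> {rate_after:.2f} cents/kWh")  # 10.0 -> 8.33
```

The hedge, as Varun notes, is that this only holds if the new load adds little to peak; flexibility is what keeps the kilowatt side of the ledger from growing as fast as the kilowatt-hour side.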
Doug Lewin (35:17.934)
Lest anybody think that just sounds very futuristic, this has actually happened over the last two years in Texas: the highest peak demand we had in 2023, we did not reach in 2024 or 2025. But our minimums are up: the minimum demand in 2025 is up every single month compared to 2024. Our electric use is up like 11, 12% over the last two years. EIA thinks it’s going up 14% next year alone. So overall usage of the system is going up while the peak is not. Exactly what you’re describing is not just a futuristic thing; it has happened over the last two years. Whether we keep it going or not is gonna depend on good policy and market structures and all that kind of thing. Which brings me to: let’s talk about ERCOT and good market structures. You made a presentation to the Large Load Working Group. You’ve talked in this conversation about Pablo Vegas, ERCOT CEO; I saw you describe him somewhere else as a very visionary thinker. Texas passed Senate Bill 6. Let’s just start wherever you want to start. What is most interesting about Texas? What are you either watching or participating in as far as how large loads are dealt with in the ERCOT market?
Varun Sivaram (36:28.93)
Let me first say, Doug, and I’m not just saying this because I’m your podcast guest today and you’re in Texas: I am a huge fan of ERCOT. Just to give you a sense, I used to work in India. I was the chief technology officer of a big Indian power producer. And in New Delhi, when folks would ask me, “Hey, what electricity sector reform should we be doing?” I’d say, “Look at ERCOT first. That is the electricity system you want to emulate.” I didn’t say any other system. I said ERCOT. So I’m a huge fan and I believe ERCOT does a lot of things right, whether it’s connect and manage on the generation side or using market signals to drive investment. Customers in ERCOT have a lot of choice, and they also have lower costs than almost everywhere else. It’s not mandates and government preferences driving the way you decide to get your energy; you get a lot of choice and flexibility in the process. I’m ideologically a very big fan of ERCOT.
So why do I think ERCOT’s a big opportunity here? Three reasons. First, I think data centers and tech companies want to come to ERCOT. It’s a great opportunity for economic development for the state and for Texas to be a leader in AI. And as you know, that’s been a little difficult as we run into constraints on connecting new loads. So that’s the first point. Second though is, I think there’s a lot of headroom in ERCOT. Look, let me try this experiment out and you tell me if I’m doing this wrong, but your video producer started me off like this. He said, “Hey Varun, look at all that headroom.” For those of you listening on audio, I just tilted my camera up. And so now my head’s in the bottom of the frame. And he said, “You need to reduce that headroom.” Well, this is ERCOT today. There’s a lot of headroom, right? There’s 10 gigawatts or 15, depending on how you read Tyler Norris’ Duke study. And this is what I want to do. I want data centers to be flexible enough to take advantage of all of that spare headroom. So, you know, point number two is I think there’s a lot of headroom. And point number three is I think ERCOT moves fast. And we should talk about some of the fast-moving proposals right now, but I think ERCOT has this unique governance ability to move fast and lead the rest of the country. I’d love to see large flexible loads, AI factories that are flexible take root in ERCOT first in the nation.
Doug Lewin (38:40.334)
Everybody’s talking about speed to power, right? And I said earlier, you know, a lot of conversations I have, I just hear this all the time, right? That like, there’s just not flexibility in these data centers. It’s not that, usually when I drill down, when I ask the follow-up, “What do you mean there’s not flexibility?” Typically the answer I get, Varun, is that chips are just so expensive, and the next chips that are coming out in 12 or 18 or 24 months or six months, right? Or three months, like this thing is moving so fast, you have to be able to maximize the investment you made in the generation of chips you have so you don’t have any flexibility. You’ve got to just run all the time. But as you were just talking about, there’s flexibility, temporal, spatial, resource, all the things you said earlier, adjacent, like there’s flexibility there. And then there’s also that like you might have the chips or a line to get the chips or whatever, but if you don’t have the power, those chips don’t do you a lot of good, right? So it’s kind of speed to power. And then to kind of connect that to something ERCOT does very well on the generation side, but I don’t think it’s doing very well yet on the load side, but I agree with you. I’m very bullish on ERCOT. I think ERCOT is doing things quite well, and let’s be clear, relative to everywhere else, right? It’s like you have to grade all this on a curve, right? Because there’ll be a lot of people listening, they’re like, “I’m dealing with ERCOT and I’m very frustrated.” Okay. But you’re probably less frustrated here than you are in MISO or PJM or New York or CAISO or other places around the world even. But this connect and manage on the generation side, I keep thinking like where we need to get to is connect and manage on the load side. That yeah, we’ll connect you, we’re gonna manage that while we’re building out transmission. And that’s kind of exactly what you’re doing. Does that concept work or is there something that’s just too simplistic about trying to apply that to load? It applies to generation, but it doesn’t quite work for load. What are your thoughts on all that?
Varun Sivaram (40:35.362)
Look, the inspiration is absolutely right, Doug. Connect and Manage has worked so well for ERCOT on generation that the rest of the country is trying to copy it and they’re stumbling over themselves to do it. And we’re so proud that on the load side, our Emerald AI Senior Advisor, Arushi Sharma Frank, has recently put forward and worked very closely with ERCOT to develop this planning guide revision request related to this concept, Connect and Manage for the load side, to enable a large flexible load like an AI data center to connect and provide this kind of flexibility as a controllable load resource and thereby reduce its interconnection time below a couple of years. I mean, you said it right, Doug. The largest value here is speed to power and energy is the critical bottleneck. It’s no longer chips in the supply chain. You can get your chips now, but if you’re stuck waiting in the queue, that’s billions of dollars. We actually have $4 trillion of investment sitting on the sidelines waiting to build these AI data centers and speed to power is by far the most important value proposition to get a faster and a larger power interconnection.
So we’re delighted with that proposal. We co-signed it, and I actually urge you to take a look both at the filing and at the reply comments from another Emerald AI senior advisor, Peter Hirschboeck at Impact ECI. He’s got this magnum opus coming out on the four pillars of flexibility. But what Arushi has done, she’s been a real leader in Texas at Tesla and we’re very lucky to get to work with her. I know she was on your podcast recently and she’s done some great work on this. But I will say, I will say Doug, not everything is analogous. It is the case that a generator that connects will be subject in the connect and manage framework to curtailment at any time. And a load, a large load, a data center... Look, there are limits to what data centers are going to be willing to tolerate. You and I have talked now three times in this podcast, Doug, about how economically valuable AI is. You don’t want to curtail AI 2,000 hours a year, right?
Now the good news is that 10 to 15 gigawatts of headroom in ERCOT is achievable with very minimal curtailment: 25% reductions in load for a couple hours at a time, for up to 200 or fewer hours a year. So it’s not a whole lot of curtailment, right? It’s on the order of 0.5% curtailment, is what Duke University found. And so connect and manage with some guardrails along the lines of, you know, a data center that connects in this way can expect that it still gets to operate normally most of the time. That’s a very attractive proposition for AI factories. And I think, you know, we’re so proud that we are working with NVIDIA and with others like Digital Realty and EPRI in PJM, not yet ERCOT, but delighted to come to ERCOT and do this. We’re working with them in Virginia on the world’s first power-flexible AI factory. And that’s going to be a 96-megawatt, almost a hundred megawatt facility. But more importantly, it’s the reference design for future AI factories. So the next thousand AI factories will have this capability to be power flexible, protecting your users’ workloads, but also precisely meeting what the grid needs from you in order to get that early connection. And you’re seeing this connect and manage framework in ERCOT; we hope that Arushi’s proposal is taken forward. And in addition, federally, you’re seeing Energy Secretary Chris Wright directing FERC, the Federal Energy Regulatory Commission, to consider what’s called an ANOPR, an advance notice of proposed rulemaking, to potentially speed the interconnection of flexible loads. So I’m seeing this pop up everywhere. I think it’s a wonderful idea. And again, I hope the rest of the country takes ERCOT’s lead.
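The Duke figure is easy to sanity-check from the parameters Varun cites:

```python
# Sanity check of the curtailment share: a 25% load reduction for up to
# 200 hours a year, as a fraction of a year's energy at full load.
reduction = 0.25        # depth of curtailment
hours = 200             # curtailed hours per year (upper end cited)
hours_per_year = 8_760
print(f"{reduction * hours / hours_per_year:.2%}")  # ~0.57%, on the order of 0.5%
```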
Doug Lewin (44:18.434)
Yeah, and a little bit more on that. Kristi Hobbs, VP of, I think it’s System Operations or something like that at ERCOT, gave a presentation to the PUC just a week or two ago. I’ll just read one sentence from her slide. This is an official ERCOT slide; we’ll put it in the show notes. “With an evolving grid, large loads that are willing to respond to system conditions could have an opportunity to interconnect if they can adjust their consumption until transmission and/or resource additions can be incorporated on the system.” The key takeaway from the slide: large loads that are flexible could utilize available transmission capacity if they’re willing to curtail under certain conditions. So like you said, they don’t want to just be curtailed. This notion of ERCOT with some big red button they’re going to press and shut it down isn’t going to work. But as long as there is telemetry and there are measured, verified reductions that can happen on site, that should be able to speed the interconnection of some of these massive investments. Anything you want to add to that or amend?
Varun Sivaram (45:15.032)
Well, look, that’s so important. And I’ll just say, you mentioned a big red button. We need to do this in a way that is amenable to these data center operators, America’s critical economic infrastructure. So you really don’t want the data center to feel like at any given time a big red button is going to come down, the circuit breaker is going to be flipped. And that’s why at Emerald AI, what we’re building is what we hope to be an elegant interface between the data center and the grid, enabling the data center to credibly, verifiably, and enforceably control its own load in response to signals that it receives and to prove that it’s doing that. This elegant software interface should give the grid a lot of comfort that the data center is going to perform the way that it needs to perform. And the data center operator, a lot of comfort that, look, I and my cloud tenant and the AI users of the compute are all comfortable that our workloads are going to continue at the level of stringent performance requirements that they demand. If you have any other inelegant solution, you threaten to drive away data centers from your service territory. And then you’re on neither the hero nor the villain path. You’re on the path where AI just doesn’t get better. And that’s the worst path of all.
Doug Lewin (46:28.812)
Yeah. So I want to cover this before we end, Varun. This has been great. We've obviously covered reliability and affordability in a lot of different ways, but I don't want to leave out sustainability, especially because I know you're somebody who's worked on climate issues throughout your career. And this is, again, something I hear a whole lot in conversations: AI is driving up emissions, and we do see AI data centers purchasing a lot of gas, in a lot of cases just kind of old refurbished turbines. From your perspective as somebody who has worked for many, many years on climate issues and obviously deeply cares about them, what do you say to folks who are really concerned about climate change and worried about the impact AI data centers will have on emissions? What's the villain path and the hero path on climate for AI data centers?
Varun Sivaram (47:23.318)
It's the correct framing, Doug. Those folks who say, "Look, this is just not gonna matter," I think are wrong. Because AI is so transformative, and Doug, I don't have to convince you, I think AI could become the world's largest energy user this half of the century. Within a decade or decade and a half, you could see AI start to use 25% of power on the grid. And therefore, the villain path is one in which AI drastically increases emissions alongside increasing rates and reducing reliability. The hero path, which I think is very achievable, again runs through flexibility. First, we don't have to radically overbuild our infrastructure, because we're utilizing existing grid capacity. Second, we can integrate a wide range of energy sources: firm, dispatchable, clean sources such as nuclear and geothermal, which we've bought ourselves the time to get to, and variable, intermittent sources like solar and wind, which are cheap and abundant but, I will grant, unreliable. If AI data centers are a little bit flexible, acting as giant shock absorbers on the grid, I believe they make a system with more variable renewable energy more stable.
And the third way, of course, is by turbocharging AI innovation itself, because AI can help advance clean energy systems. We're seeing this in a small way at Emerald AI: it's our AI intelligence that makes it possible for these AI data centers to be orchestrated, to be flexible. And at a large scale, I think AI will make a very big impact in operating very efficient grids, which are, again, great for affordability but also great for emissions. So AI's ability to intelligently use tools, to invent new materials, to invent new clean energy technologies, and to operate very efficient grids only comes about if we improve AI itself. It's extremely meta and recursive, but Emerald AI is an AI for AI to make better AI.
Doug Lewin (49:32.972)
Love it. And look, we haven't even talked about things like dynamic line ratings, just using the transmission system better, right? We talked a little bit, through the adjacent flexibility, about the distribution grid: optimizing devices around your house, like when your water heater actually heats up. You probably don't care when it heats, as long as the water is hot when you turn on the shower. There's so much intelligence that can be brought to equipment that literally has no intelligence today. The example I love to use is a hot Texas summer evening when the sun's going down. Thankfully, solar has mostly solved the summertime problem around 4 p.m., but when the sun goes down, you can still get hours when energy is scarce. How many pools are in the Dallas and Houston areas? All those pool pumps just running at seven o'clock. It's really kind of a dumb grid; we've got all these devices just running whenever.
AI can do a lot of optimization there too. And I love the way you put that, Varun. On nuclear and geothermal and buying time: I was at the Texas Energy Summit this week, and the ERCOT chairman, Bill Flores, spoke there. He was asked a question about nuclear, as often happens at these energy summits; everybody's talking about nuclear these days. Someone asked, "How quick do you think it can be?" He's like, "Am I most optimistic? I'd love to see it be five years. I think realistically it's probably 10." And I think that's probably right. Who knows, right? We're all guessing, but somewhere in that five-to-ten-year range. But if you can actually optimize the system you have now, get speed to power with solar and wind and some gas and a lot of batteries, and then buy the time to get to some of those clean, firm, dispatchable technologies, there definitely is a hero path that I can see for AI data centers. I know not everybody's seeing it, but I would encourage the audience, especially those who are particularly concerned about climate, to be open-minded and to work toward that future: to engage with data centers wherever they are in your community and say, we need you to be low-emission, reliable for the grid, and affordable for consumers. Those things are possible. But it's going to take a lot of discussion and participation from a lot of different thinkers to get there. So I'm going to let you respond to that. And as we're wrapping up here, Varun, is there anything I didn't ask that you wish I had? I'd love you to speak to that, and to anything else you want to leave the audience with in closing.
Varun Sivaram (51:56.842)
Absolutely. I'll say just two things. The first is that it's rare in life to find these triple wins, right? You don't get free lunches very often, and politicians who tell you otherwise are probably wrong; there are real trade-offs in life. I started Emerald AI because I was astonished that there was a real win-win-win here. You could win on affordability. You could win on AI innovation, getting data centers built way faster. And you could win on sustainability and reliable power systems, and you get it all at the same time. In fact, you don't even have to care about emissions: if what you're focused on is getting data centers connected really fast and making sure rates don't skyrocket, AI flexibility gives you the emissions win for free. So I feel blessed to have stumbled upon a win-win-win, and that's why I'm making Emerald AI my life's work.
But the second thing I'll say is that it's not just my life's work. I'm the talking head, Doug, and Emerald AI is an intelligence, a superintelligence, doing its own thing, but in between is an extraordinarily committed team. I talked about some of our senior advisors, Tyler and Arushi and Peter, but we're also so grateful to have the world's best team: our chief scientist, Professor Ayşe Coşkun, who spent over a decade building grid-friendly data center technologies at Boston University; Shayan Sengupta, who came from Amazon, where he ran a million GPUs; Aroon Vijaykar, who ran virtual power plants at Sunrun; and so many other PhD AI scientists and others. So let me thank them for making it possible for me to be the talking head talking to you, Doug. And thanks to you; the work you do is amazing. ERCOT is the most exciting place in the land to work on energy issues, and I can't wait to come down and do our next demonstration there, or ideally a big commercial deployment, because as you know, everything needs to be bigger in Texas.
Doug Lewin (53:44.962)
Let’s make it happen. Let me know how I can help. And Varun, I’m just so delighted by this conversation, so excited about Emerald AI and to watch as this company develops—an incredible team you put together, incredible technology. We’re all rooting for your success because your success will be all of our success in making all this stuff work. And with that, thank you so much for being on the Energy Capital Podcast.
Varun Sivaram (54:07.118)
Thank you, Doug.
Doug Lewin (54:09.112)
Thanks for tuning in to the Energy Capital Podcast. If you got something out of this conversation, please share the podcast with a friend, family member, or colleague, and subscribe to the newsletter at douglewin.com. That's where you'll find all the stories where I break down the biggest things happening in Texas energy and national energy policy: markets, technology, policy, it's all there. You can also follow along on LinkedIn, and you can find me on Twitter and YouTube at Doug Lewin Energy. Please follow me in all the places. Big thanks to Nathan Peeby, our producer, for making these episodes sound crystal clear, and to Ari Lewin for writing the music. Until next time, please stay curious and stay engaged. Let's keep building a better energy future. Thanks for listening.