A brief history of “three-parent babies”

This week we heard that eight babies have been born in the UK following an experimental form of IVF that involves DNA from three people. The approach was used to prevent women with genetic mutations from passing mitochondrial diseases to their children. You can read all about the results, and the reception to them, here.

But these eight babies aren’t the first “three-parent” children out there. Over the last decade, several teams have been using variations of this approach to help people have babies. This week, let’s consider the other babies born from three-person IVF.

I can’t go any further without talking about the term we use to describe these children. Journalists, myself included, have called them “three-parent babies” because they are created using DNA from three people. Briefly, the approach typically involves using the DNA from the nuclei of the intended parents’ egg and sperm cells. That’s where most of the DNA in a cell is found.

But it also makes use of mitochondrial DNA (mtDNA)—the DNA found in the energy-producing organelles of a cell—from a third person. The idea is to avoid using the mtDNA from the intended mother, perhaps because it is carrying genetic mutations. Other teams have done this in the hope of treating infertility.

mtDNA, which is usually inherited from a person’s mother, makes up a tiny fraction of total inherited DNA. It includes only 37 genes, all of which are thought to play a role in how mitochondria work (as opposed to, say, eye color or height).

That’s why some scientists despise the term “three-parent baby.” Yes, the baby has DNA from three people, but those three can’t all be considered parents, critics argue. For the sake of argument, this time around I’ll use the term “three-person IVF” from here on out.

So, about these babies. The first were reported back in the 1990s. Jacques Cohen, then at Saint Barnabas Medical Center in Livingston, New Jersey, and his colleagues thought they might be able to treat some cases of infertility by injecting the mitochondria-containing cytoplasm of healthy eggs into eggs from the intended mother. Seventeen babies were ultimately born this way, according to the team. (Side note: In their paper, the authors describe potential resulting children as “three-parental individuals.”)

But two fetuses appeared to have genetic abnormalities. And one of the children started to show signs of a developmental disorder. In 2002, the US Food and Drug Administration put a stop to the research.

The babies born during that study are in their 20s now. But scientists still don’t know why they saw those abnormalities. Some think that mixing mtDNA from two people might be problematic.

Newer approaches to three-person IVF aim to include mtDNA from just the donor, completely bypassing the intended mother’s mtDNA. John Zhang at the New Hope Fertility Center in New York City tried this approach for a Jordanian couple in 2016. The woman carried genes for a fatal mitochondrial disease and had already lost two children to it. She wanted to avoid passing it on to another child.

Zhang took the nucleus of the woman’s egg and inserted it into a donor egg that had had its own nucleus removed—but still had its mitochondria-containing cytoplasm. That egg was then fertilized with the woman’s husband’s sperm.

Because it was still illegal in the US, Zhang controversially did the procedure in Mexico, where, as he told me at the time, “there are no rules.” The couple eventually welcomed a healthy baby boy. Less than 1% of the boy’s mitochondria carried his mother’s mutation, so the procedure was deemed a success.

There was a fair bit of outrage from the scientific community, though. Mitochondrial donation had been made legal in the UK the previous year, but no clinic had yet been given a license to do it. Zhang’s experiment seemed to have been conducted with no oversight. Many questioned how ethical it was, although Sian Harding, who reviewed the ethics of the UK procedure, then told me it was “as good as or better than what we’ll do in the UK.”

The scandal had barely died down by the time the next “three-person IVF” babies were announced. In 2017, a team at the Nadiya Clinic in Ukraine announced the birth of a little girl to parents who’d had the treatment for infertility. The news brought more outrage from some quarters, as scientists argued that the experimental procedure should only be used to prevent severe mitochondrial diseases.

It wasn’t until later that year that the UK’s fertility authority granted a team in Newcastle a license to perform mitochondrial donation. That team launched a trial in 2017. It was big news—the first “official” trial to test whether the approach could safely prevent mitochondrial disease.

But it was slow going. And meanwhile, other teams were making progress. The Nadiya Clinic continued to trial the procedure in couples with infertility. Pavlo Mazur, a former embryologist who worked at that clinic, tells me that 10 babies were born there as a result of mitochondrial donation.

Mazur then moved to another clinic in Ukraine, where he says he used a different type of mitochondrial donation to achieve another five healthy births for people with infertility. “In total, it’s 15 kids made by me,” he says.

But he adds that other clinics in Ukraine are also using mitochondrial donation, without sharing their results. “We don’t know the actual number of those kids in Ukraine,” says Mazur. “But there are dozens of them.”

In 2020, Nuno Costa-Borges of Embryotools in Barcelona, Spain, and his colleagues described another trial of mitochondrial donation. This trial, performed in Greece, was also designed to test the procedure for people with infertility. It involved 25 patients. So far, seven children have been born. “I think it’s a bit strange that they aren’t getting more credit,” says Heidi Mertes, a medical ethicist at Ghent University in Belgium.

The newly announced UK births are only the latest “three-person IVF” babies. And while their births are being heralded as a success story for mitochondrial donation, the story isn’t quite so simple. Three of the eight babies were born with a not-insignificant proportion of mutated mitochondria, ranging between 5% and 20%, depending on the baby and the sample.

Dagan Wells of the University of Oxford, who is involved in the Greece trial, says that two of the seven babies in their study also appear to have inherited mtDNA from their intended mothers. Mazur says he has seen several cases of this “reversal” too.

This isn’t a problem for babies whose mothers don’t carry genes for mitochondrial disease. But it might be for those whose mothers do.

I don’t want to pour cold water over the new UK results. It was great to finally see the results of a trial that’s been running for eight years. And the births of healthy babies are something to celebrate. But it’s not a simple success story. Mitochondrial donation doesn’t guarantee a healthy baby. We still have more to learn, not only from these babies, but from the others that have already been born.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

A major AI training data set contains millions of examples of personal data

Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.

Thousands of images—including identifiable faces—were found in a small subset of DataComp CommonPool, a major AI training set for image generation scraped from the web. Because the researchers audited just 0.1% of CommonPool’s data, they estimate that the real number of images containing personally identifiable information, including faces and identity documents, is in the hundreds of millions. The study that details the breach was published on arXiv earlier this month.

The bottom line, says William Agnew, a postdoctoral fellow in AI ethics at Carnegie Mellon University and one of the coauthors, is that “anything you put online can [be] and probably has been scraped.”

The researchers found thousands of instances of validated identity documents—including images of credit cards, driver’s licenses, passports, and birth certificates—as well as over 800 validated job application documents (including résumés and cover letters), which were confirmed through LinkedIn and other web searches as being associated with real people. (In many more cases, the researchers did not have time to validate the documents or were unable to because of issues like image clarity.) 

A number of the résumés disclosed sensitive information including disability status, the results of background checks, birth dates and birthplaces of dependents, and race. When résumés were linked to people with online presences, researchers also found contact information, government identifiers, sociodemographic information, face photographs, home addresses, and the contact information of other people (like references).

Examples of identity-related documents found in CommonPool’s small-scale data set show a credit card, a Social Security number, and a driver’s license. For each sample, the type of URL site is shown at the top, the image in the middle, and the caption in quotes below. All personal information has been replaced, and text has been paraphrased to avoid direct quotations. Images have been redacted to show the presence of faces without identifying the individuals.

When it was released in 2023, DataComp CommonPool, with its 12.8 billion data samples, was the largest existing data set of publicly available image-text pairs, which are often used to train generative text-to-image models. While its curators said that CommonPool was intended for academic research, its license does not prohibit commercial use.

CommonPool was created as a follow-up to the LAION-5B data set, which was used to train models including Stable Diffusion and Midjourney. It draws on the same data source: web scraping done by the nonprofit Common Crawl between 2014 and 2022. 

While commercial models often do not disclose what data sets they are trained on, the shared data sources of DataComp CommonPool and LAION-5B mean that the data sets are similar, and that the same personally identifiable information likely appears in LAION-5B, as well as in other downstream models trained on CommonPool data. CommonPool researchers did not respond to emailed questions.

And since DataComp CommonPool has been downloaded more than 2 million times over the past two years, it is likely that “there [are] many downstream models that are all trained on this exact data set,” says Rachel Hong, a PhD student in computer science at the University of Washington and the paper’s lead author. Those models would carry similar privacy risks.

Good intentions are not enough

“You can assume that any large-scale web-scraped data always contains content that shouldn’t be there,” says Abeba Birhane, a cognitive scientist and tech ethicist who leads Trinity College Dublin’s AI Accountability Lab—whether it’s personally identifiable information (PII), child sexual abuse imagery, or hate speech (which Birhane’s own research into LAION-5B has found). 

Indeed, the curators of DataComp CommonPool were themselves aware it was likely that PII would appear in the data set, and they did take some measures to preserve privacy, including automatically detecting and blurring faces. But in the small subset they audited, Hong’s team found and validated over 800 faces that the algorithm had missed, and they estimated that overall, the algorithm had missed 102 million faces in the entire data set. The curators also did not apply filters that could have recognized known PII character strings, like emails or Social Security numbers.

“Filtering is extremely hard to do well,” says Agnew. “They would have had to make very significant advancements in PII detection and removal that they haven’t made public to be able to effectively filter this.”  

Examples of résumé documents and personal disclosures found in CommonPool’s small-scale data set. For each sample, the type of URL site is shown at the top, the image in the middle, and the caption in quotes below. All personal information has been replaced, and text has been paraphrased to avoid direct quotations. Images have been redacted to show the presence of faces without identifying the individuals. Image courtesy of the researchers.

There are other privacy issues that the face blurring doesn’t address. While the blurring filter is automatically applied, it is optional and can be removed. Additionally, the captions that often accompany the photos, as well as the photos’ metadata, often contain even more personal information, such as names and exact locations.

Another privacy mitigation measure comes from Hugging Face, a platform that distributes training data sets and hosts CommonPool, which integrates with a tool that theoretically allows people to search for and remove their own information from a data set. But as the researchers note in their paper, this would require people to know that their data is there to start with. When asked for comment, Florent Daudens of Hugging Face said that “maximizing the privacy of data subjects across the AI ecosystem takes a multilayered approach, which includes but is not limited to the widget mentioned,” and that the platform is “working with our community of users to move the needle in a more privacy-grounded direction.” 

In any case, just getting your data removed from one data set probably isn’t enough. “Even if someone finds out their data was used in a training data set and … exercises their right to deletion, technically the law is unclear about what that means,” says Tiffany Li, an associate professor of law at the University of San Francisco School of Law. “If the organization only deletes data from the training data sets—but does not delete or retrain the already trained model—then the harm will nonetheless be done.”

The bottom line, says Agnew, is that “if you web-scrape, you’re going to have private data in there. Even if you filter, you’re still going to have private data in there, just because of the scale of this. And that’s something that we [machine-learning researchers], as a field, really need to grapple with.”

Reconsidering consent

CommonPool was built on web data scraped between 2014 and 2022, meaning that essentially all of the images predate the release of ChatGPT in late 2022. So even if it’s theoretically possible that some people consented to having their information publicly available to anyone on the web, they could not have consented to having their data used to train large AI models that did not yet exist.

And with web scrapers often scraping data from each other, an image that was originally uploaded by the owner to one specific location would often find its way into other image repositories. “I might upload something onto the internet, and then … a year or so later, [I] want to take it down, but then that [removal] doesn’t necessarily do anything anymore,” says Agnew.

The researchers also found numerous examples of children’s personal information, including depictions of birth certificates, passports, and health status, but in contexts suggesting that they had been shared for limited purposes.

“It really illuminates the original sin of AI systems built off public data—it’s extractive, misleading, and dangerous to people who have been using the internet with one framework of risk, never assuming it would all be hoovered up by a group trying to create an image generator,” says Ben Winters, the director of AI and privacy at the Consumer Federation of America.

Finding a policy that fits

Ultimately, the paper calls for the machine-learning community to rethink the common practice of indiscriminate web scraping and also lays out the possible violations of current privacy laws represented by the existence of PII in massive machine-learning data sets, as well as the limitations of those laws’ ability to protect privacy.

“We have the GDPR in Europe, we have the CCPA in California, but there’s still no federal data protection law in America, which also means that different Americans have different rights protections,” says Marietje Schaake, a Dutch lawmaker turned tech policy expert who currently serves as a fellow at Stanford’s Cyber Policy Center. 

Besides, these privacy laws apply to companies that meet certain criteria for size and other characteristics. They do not necessarily apply to researchers like those who were responsible for creating and curating DataComp CommonPool.

And even state laws that do address privacy, like the California Consumer Privacy Act, have carve-outs for “publicly available” information. Machine-learning researchers have long operated on the principle that if it’s available on the internet, then it is public and no longer private information, but Hong, Agnew, and their colleagues hope that their research challenges this assumption.

“What we found is that ‘publicly available’ includes a lot of stuff that a lot of people might consider private—résumés, photos, credit card numbers, various IDs, news stories from when you were a child, your family blog. These are probably not things people want to just be used anywhere, for anything,” says Hong.  

Hopefully, Schaake says, this research “will raise alarm bells and create change.” 

This article previously misstated Tiffany Li’s affiliation. This has been fixed.

In defense of air-conditioning

I’ll admit that I’ve rarely hesitated to point an accusing finger at air-conditioning. I’ve outlined in many stories and newsletters that AC is a significant contributor to global electricity demand, and it’s only going to suck up more power as temperatures rise.

But I’ll also be the first to admit that it can be a life-saving technology, one that may become even more necessary as climate change intensifies. And in the wake of Europe’s recent deadly heat wave, it’s been oddly villainized.

We should all be aware of the growing electricity toll of air-conditioning, but the AC hate is misplaced. Yes, AC is energy intensive, but so is heating our homes, something that’s rarely decried in the same way that cooling is. Both are tools for comfort and, more important, for safety.  So why is air-conditioning cast as such a villain?

In the last days of June and the first few days of July, temperatures hit record highs across Europe. Over 2,300 deaths during that period were attributed to the heat wave, according to early research from World Weather Attribution, an academic collaboration that studies extreme weather. And human-caused climate change accounted for 1,500 of those deaths, the researchers found. (That is, the death toll would have been under 800 if not for the higher temperatures driven by climate change.)

We won’t have the official death toll for months, but these early figures show just how deadly heat waves can be. Europe is especially vulnerable, because in many countries, particularly in the northern part of the continent, air-conditioning is not common.

Popping on a fan, drawing the shades, or opening the windows on the hottest days used to cut it in many European countries. Not anymore. The UK was 1.24 °C (2.23 °F) warmer over the past decade than it was between 1961 and 1990, according to the Met Office, the UK’s national climate and weather service. One recent study found that homes across the country are uncomfortably or dangerously warm much more frequently than they used to be.

The reality is, some parts of the world are seeing an upward shift in temperatures that’s not just uncomfortable but dangerous. As a result, air-conditioning usage is going up all over the world, including in countries with historically low rates.

The reaction to this long-term trend, especially in the face of the recent heat wave, has been apoplectic. People are decrying AC across social media and opinion pages, arguing that we need to suck it up and deal with being a little bit uncomfortable.

Now, let me preface this by saying that I do live in the US, where roughly 90% of homes are cooled with air-conditioning today. So perhaps I am a little biased in favor of AC. But it baffles me when people talk about air-conditioning this way.

I spent a good amount of my childhood in the southeastern US, where it’s very obvious that heat can be dangerous. I was used to many days where temperatures were well above 90 °F (32 °C), and the humidity was so high your clothes would stick to you as soon as you stepped outdoors. 

For some people, being active or working in those conditions can lead to heatstroke. Prolonged exposure, even if it’s not immediately harmful, can lead to heart and kidney problems. Older people, children, and those with chronic conditions can be more vulnerable.

In other words, air-conditioning is more than a convenience; in certain conditions, it’s a safety measure. That should be an easy enough concept to grasp. After all, in many parts of the world we expect access to heating in the name of safety. Nobody wants to freeze to death. 

And it’s important to clarify here that while air-conditioning does use a lot of electricity in the US, heating actually has a higher energy footprint. 

In the US, about 19% of residential electricity use goes to air-conditioning. That sounds like a lot, and it’s significantly more than the 12% of electricity that goes to space heating. However, we need to zoom out to get the full picture, because electricity makes up only part of a home’s total energy demand. A lot of homes in the US use natural gas for heating—that’s not counted in the electricity being used, but it’s certainly part of the home’s total energy use.

When we look at the total, space heating accounts for a full 42% of residential energy consumption in the US, while air conditioning accounts for only 9%.
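
To make that zoom-out concrete, here is a small illustrative calculation (not official energy statistics). The share of total home energy delivered as electricity is a hypothetical figure chosen only to show the mechanics; the 19%, 12%, 42%, and 9% figures come from the paragraphs above.

```python
# Illustrative arithmetic only; the electricity share of total home energy below
# is an assumption picked so the article's figures roughly reconcile.
electricity_share_of_home_energy = 0.47   # hypothetical assumption

ac_share_of_electricity = 0.19            # from the article
heating_share_of_electricity = 0.12       # electric space heating, from the article

ac_share_of_total = ac_share_of_electricity * electricity_share_of_home_energy
electric_heating_share_of_total = heating_share_of_electricity * electricity_share_of_home_energy

print(f"AC as a share of total home energy: {ac_share_of_total:.0%} (article: ~9%)")
print(f"Electric space heating alone: {electric_heating_share_of_total:.0%} of total energy")
# The 42% figure for space heating also counts natural gas and other fuels,
# which never show up in an electricity-only comparison.
```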

I’m not letting AC off the hook entirely here. There’s obviously a difference between running air-conditioning (or other, less energy-intensive technologies) when needed to stay safe and blasting systems at max capacity because you prefer it chilly. And there’s a lot of grid planning we’ll need to do to make sure we can handle the expected influx of air-conditioning around the globe. 

But the world is changing, and temperatures are rising. If you’re looking for a villain, look beyond the air conditioner and into the atmosphere.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

How to run an LLM on your laptop

MIT Technology Review’s How To series helps you get things done. 

Simon Willison has a plan for the end of the world. It’s a USB stick, onto which he has loaded a couple of his favorite open-weight LLMs—models that have been shared publicly by their creators and that can, in principle, be downloaded and run with local hardware. If human civilization should ever collapse, Willison plans to use all the knowledge encoded in their billions of parameters for help. “It’s like having a weird, condensed, faulty version of Wikipedia, so I can help reboot society with the help of my little USB stick,” he says.

But you don’t need to be planning for the end of the world to want to run an LLM on your own device. Willison, who writes a popular blog about local LLMs and software development, has plenty of compatriots: r/LocalLLaMA, a subreddit devoted to running LLMs on your own hardware, has half a million members.

For people who are concerned about privacy, want to break free from the control of the big LLM companies, or just enjoy tinkering, local models offer a compelling alternative to ChatGPT and its web-based peers.

The local LLM world used to have a high barrier to entry: In the early days, it was impossible to run anything useful without investing in pricey GPUs. But researchers have had so much success in shrinking down and speeding up models that anyone with a laptop, or even a smartphone, can now get in on the action. “A couple of years ago, I’d have said personal computers are not powerful enough to run the good models. You need a $50,000 server rack to run them,” Willison says. “And I kept on being proved wrong time and time again.”

Why you might want to download your own LLM

Getting into local models takes a bit more effort than, say, navigating to ChatGPT’s online interface. But the very accessibility of a tool like ChatGPT comes with a cost. “It’s the classic adage: If something’s free, you’re the product,” says Elizabeth Seger, the director of digital policy at Demos, a London-based think tank. 

OpenAI, which offers both paid and free tiers, trains its models on users’ chats by default. It’s not too difficult to opt out of this training, and it also used to be possible to remove your chat data from OpenAI’s systems entirely, until a recent legal decision in the New York Times’ ongoing lawsuit against OpenAI required the company to maintain all user conversations with ChatGPT.

Google, which has access to a wealth of data about its users, also trains its models on both free and paid users’ interactions with Gemini, and the only way to opt out of that training is to set your chat history to delete automatically—which means that you also lose access to your previous conversations. In general, Anthropic does not train its models using user conversations, but it will train on conversations that have been “flagged for Trust & Safety review.” 

Training may present particular privacy risks because of the ways that models internalize, and often recapitulate, their training data. Many people trust LLMs with deeply personal conversations—but if models are trained on that data, those conversations might not be nearly as private as users think, according to some experts.

“Some of your personal stories may be cooked into some of the models, and eventually be spit out in bits and bytes somewhere to other people,” says Giada Pistilli, principal ethicist at the company Hugging Face, which runs a huge library of freely downloadable LLMs and other AI resources.

For Pistilli, opting for local models as opposed to online chatbots has implications beyond privacy. “Technology means power,” she says. “And so who[ever] owns the technology also owns the power.” States, organizations, and even individuals might be motivated to disrupt the concentration of AI power in the hands of just a few companies by running their own local models.

Breaking away from the big AI companies also means having more control over your LLM experience. Online LLMs are constantly shifting under users’ feet: Back in April, ChatGPT suddenly started sucking up to users far more than it had previously, and just last week Grok started calling itself MechaHitler on X.

Providers tweak their models with little warning, and while those tweaks might sometimes improve model performance, they can also cause undesirable behaviors. Local LLMs may have their quirks, but at least they are consistent. The only person who can change your local model is you.

Of course, any model that can fit on a personal computer is going to be less powerful than the premier online offerings from the major AI companies. But there’s a benefit to working with weaker models—they can inoculate you against the more pernicious limitations of their larger peers. Small models may, for example, hallucinate more frequently and more obviously than Claude, GPT, and Gemini, and seeing those hallucinations can help you build up an awareness of how and when the larger models might also lie.

“Running local models is actually a really good exercise for developing that broader intuition for what these things can do,” Willison says.

How to get started

Local LLMs aren’t just for proficient coders. If you’re comfortable using your computer’s command-line interface, which allows you to browse files and run apps using text prompts, Ollama is a great option. Once you’ve installed the software, you can download and run any of the hundreds of models it offers with a single command.
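
For a flavor of what that looks like in practice, here is a minimal sketch (in Python rather than the shell) that talks to a locally running Ollama server over its REST API. It assumes Ollama is installed and listening on its default port, and that a model such as llama3.2 has already been pulled; the model name is just an example, not a recommendation.

```python
# A minimal sketch of querying a local Ollama model, assuming the Ollama server
# is running on its default port (11434) and the "llama3.2" model has been pulled.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(ask_local_model("In one sentence, what is a local LLM?"))
```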

If you don’t want to touch anything that even looks like code, you might opt for LM Studio, a user-friendly app that takes a lot of the guesswork out of running local LLMs. You can browse models from Hugging Face from right within the app, which provides plenty of information to help you make the right choice. Some popular and widely used models are tagged as “Staff Picks,” and every model is labeled according to whether it can be run entirely on your machine’s speedy GPU, needs to be shared between your GPU and slower CPU, or is too big to fit onto your device at all. Once you’ve chosen a model, you can download it, load it up, and start interacting with it using the app’s chat interface.

As you experiment with different models, you’ll start to get a feel for what your machine can handle. According to Willison, every billion model parameters require about one GB of RAM to run, and I found that approximation to be accurate: My own 16 GB laptop managed to run Alibaba’s Qwen3 14B as long as I quit almost every other app. If you run into issues with speed or usability, you can always go smaller—I got reasonable responses from Qwen3 8B as well.
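
As a back-of-the-envelope check on that rule of thumb, a few lines of Python are enough; the one-gigabyte-per-billion-parameters ratio is the rough estimate quoted above, and the two gigabytes of headroom for the operating system and other apps is an arbitrary assumption, so treat the result as a screening guess rather than a guarantee.

```python
# Rough screening check based on the ~1 GB of RAM per billion parameters rule of thumb.
def fits_in_ram(params_billions: float, ram_gb: float, headroom_gb: float = 2.0) -> bool:
    """Return True if a model of this size plausibly fits alongside the OS and other apps."""
    return params_billions * 1.0 + headroom_gb <= ram_gb

for size in (1, 8, 14, 70):
    print(f"{size}B parameters on a 16 GB laptop: {fits_in_ram(size, 16)}")
```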

And if you go really small, you can even run models on your cell phone. My beat-up iPhone 12 was able to run Meta’s Llama 3.2 1B using an app called LLM Farm. It’s not a particularly good model—it very quickly goes off into bizarre tangents and hallucinates constantly—but trying to coax something so chaotic toward usability can be entertaining. If I’m ever on a plane sans Wi-Fi and desperate for a probably false answer to a trivia question, I now know where to look.

Some of the models that I was able to run on my laptop were effective enough that I can imagine using them in my journalistic work. And while I don’t think I’ll depend on phone-based models for anything anytime soon, I really did enjoy playing around with them. “I think most people probably don’t need to do this, and that’s fine,” Willison says. “But for the people who want to do this, it’s so much fun.”

These four charts show where AI companies could go next in the US

No one knows exactly how AI will transform our communities, workplaces, and society as a whole. Because it’s hard to predict the impact AI will have on jobs, many workers and local governments are left trying to read the tea leaves to understand how to prepare and adapt.

A new interactive report released today by the Brookings Institution attempts to map how embedded AI companies and jobs are in different regions of the United States in order to prescribe policy treatments to those struggling to keep up. 

While the impact of AI on tech hubs like San Francisco and Boston is already being felt, AI proponents believe it will transform work everywhere, and in every industry. The report uses various proxies for what the researchers call “AI readiness” to document how unevenly this supposed transformation is taking place. 

Here are four charts to help understand where that could matter. 

1. AI development is still highly focused in tech hubs.

Brookings divides US cities into five categories based on how ready they are to adopt AI-related industries and job offerings. To do so, it looked at local talent pool development, innovations in local institutions, and adoption potential among local companies. 

One of those categories, the “AI Superstars,” unsurprisingly covers parts of the San Francisco Bay Area, which are such outliers that they get a category of their own. The “Star AI Hubs,” on the other hand, include large metropolitan areas known for tech work, including Boston, Seattle, and Miami.

2. Concentration of workers and startups is highly centralized, too.

The data shows that the vast majority of people working with AI and startups focused on AI are clustered in the tech hubs above. The report found that almost two-thirds of workers advertising their AI skills work there, and well over 75% of AI startups were founded there. The so-called “Star AI Hubs,” from the likes of New York City and Seattle down to Columbus, Ohio, and Boulder, Colorado, take up another significant portion of the pie. 

It’s clear that most of the developments in AI are concentrated in certain large cities, and this pattern can end up perpetuating itself. According to the report, though, “AI activity has spread into most regional economies across the country,” highlighting the need for policy that encourages growth through AI without sacrificing other areas of the country.

3. Emerging centers of AI show promise but are lacking in one way or another.

Beyond the big, obvious tech-hub cities, Brookings claims, there are 14 regions that show promise in AI development and worker engagement with AI. Among these are cities surrounding academic institutions like the University of Wisconsin in Madison or Texas A&M University in College Station, and regional cultural centers like Pittsburgh, Detroit, and Nashville. 

However, according to Brookings, these places are lacking in some respect or another that limits their development. Take Columbia, South Carolina, for example. Despite a sizable regional population of about 860,000 people and the University of South Carolina right there, the report says the area has struggled with talent development; relatively few students graduate with science and engineering degrees, and few showcase AI skills in their job profiles. 

On the other hand, the Tampa, Florida, metropolitan area struggles with innovation, owing in large part to lagging productivity of local universities. The majority of the regions Brookings examined struggle with adoption, which in the report is measured largely by company engagement with AI-related tools like enterprise data and cloud services.

4. Emerging centers are generally leaning toward industry or government contracts, not both.

Still, these emerging centers show plenty of promise, and funders are taking note. To measure innovation and adoption of AI, the report tallies federal contracts for AI research and development as well as venture capital funding deals. 

If you examine how these emerging centers are collecting each, it appears that many of them are specializing as centers for federal research, like Huntsville, Alabama, or places for VC firms to scout, like the Sacramento area in California. 

While VC interest can beget VC interest, and likewise for government, this may give some indication of where these places have room to grow. “University presence is a tremendous influence on success here,” says Mark Muro, one of the authors of the report. Fostering the relationship between academia and industry could be key to improving the local AI ecosystem. 

Researchers announce babies born from a trial of three-person IVF

Eight babies have been born in the UK thanks to a technology that uses DNA from three people: the two biological parents plus a third person who supplies healthy mitochondrial DNA. The babies were born to mothers who carry genes for mitochondrial diseases and risked passing on severe disorders. The eight babies are healthy, say the researchers behind the trial.

“Mitochondrial disease can have a devastating impact on families,” Doug Turnbull of Newcastle University, one of the researchers behind the study, said in a statement. “Today’s news offers fresh hope to many more women at risk of passing on this condition, who now have the chance to have children growing up without this terrible disease.”

The study, which makes use of a technology called mitochondrial donation, has been described as a “tour de force” and “a remarkable accomplishment” by others in the field. In the team’s approach, patients’ eggs are fertilized with sperm, and the DNA-containing nuclei of those cells are transferred into donated fertilized eggs that have had their own nuclei removed. The new embryos contain the DNA of the intended parents along with a tiny fraction of mitochondrial DNA from the donor, floating in the embryos’ cytoplasm. 

“The concept of [mitochondrial donation] has attracted much commentary and occasionally concern and anxiety,” Stuart Lavery, a consultant in reproductive medicine at University College Hospitals NHS Foundation Trust, said in a statement. “The Newcastle team have demonstrated that it can be used in a clinically effective and ethically acceptable way to prevent disease and suffering.”

Not everyone sees the trial as a resounding success. While five of the children were born “with no health problems,” one developed a fever and a urinary tract infection, and another had muscle jerks. A third was treated for an abnormal heart rhythm. Three of the babies were born with a low level of the very mitochondrial-DNA mutations the treatment was designed to prevent.

Heidi Mertes, a medical ethicist at Ghent University, says she is “moderately optimistic.” “I’m happy that it worked,” she says. “But at the same time, it’s concerning … it’s a call for caution and treading carefully.”

Pavlo Mazur, a former embryologist who has used a similar approach in the conception of 15 babies in Ukraine, believes that trials like this one should be paused until researchers figure out what’s going on. Others believe that researchers should study the technique in people who don’t have mitochondrial mutations, to lower the risk of passing any disease-causing mutations to children.

Long time coming

The news of the births has been long awaited by researchers in the field. Mitochondrial donation was first made legal in the UK in 2015. Two years later, the Human Fertilisation and Embryology Authority (HFEA), which regulates fertility treatment and research in the UK, granted a fertility clinic in Newcastle the sole license to perform the procedure. Newcastle Fertility Centre at Life launched a trial of mitochondrial donation in 2017 with the aim of treating 25 women a year.

That was eight years ago. Since then, the Newcastle team have been extremely tight-lipped about the trial. That’s despite the fact that other teams elsewhere have used mitochondrial donation to help people achieve pregnancy. A New York–based doctor used a type of mitochondrial donation to help a Jordanian couple conceive in Mexico in 2016. Mitochondrial donation has also been trialed by teams in Ukraine and Greece.

But as the only trial overseen by the HFEA, the Newcastle team’s study was viewed by many as the most “official.” Researchers have been itching to hear how the work has been going, given the potential implications for researchers elsewhere (mitochondrial donation was officially made legal in Australia in 2022). “I’m very glad to see [the results] come out at last,” says Dagan Wells, a reproductive biologist at the University of Oxford who worked on the Greece trial. “It would have been nice to have some information out along the way.”

At the Newcastle clinic, each patient must receive approval from the HFEA to be eligible for mitochondrial donation. Since the trial launched in 2017, 39 patients have won this approval. Twenty-five of them underwent hormonal stimulation to release multiple eggs that could be frozen in storage.

Nineteen of those women went on to have mitochondrial donation. So far, seven of the women have given birth (one had twins), and an eighth is still pregnant. The oldest baby is two years old. The results were published today in the New England Journal of Medicine.

“As parents, all we ever wanted was to give our child a healthy start in life,” one of the mothers, who is remaining anonymous, said in a statement. “Mitochondrial donation IVF made that possible. After years of uncertainty this treatment gave us hope—and then it gave us our baby … Science gave us a chance.”

When each baby was born, the team collected a blood and urine sample to look at the child’s mitochondrial DNA. They found that the levels of mutated DNA were far lower than they would have expected without mitochondrial donation. Three of the mothers were “homoplasmic”—100% of their mitochondrial DNA carried the mutation. But blood tests showed that in the women’s four babies (including the twins), 5% or less of the mitochondrial DNA had the mutation, suggesting they won’t develop disease.

A mixed result

The researchers see this as a positive result. “Children who would otherwise have inherited very high levels are now inheriting levels that are reduced by 77% to 100%,” coauthor Mary Herbert, a professor of reproductive biology at Newcastle University and Monash University, told me during a press briefing.

But three of the eight babies had health symptoms. At seven months, one was diagnosed with a rare form of epilepsy, which seemed to resolve within the following three months. Another baby developed a urinary tract infection.

A third baby developed “prolonged” jaundice, high levels of fat in the blood, and a disturbed heart rhythm that required treatment. The baby seemed to have recovered by 18 months, and doctors believe that the symptoms were not related to the mitochondrial mutations, but the team members admit that they can’t be sure. Given the small sample size, it’s hard to make comparisons with babies conceived in other ways. 

And they acknowledge that a phenomenon called “reversal” is happening in some of the babies. In theory, the children shouldn’t inherit any “bad” mitochondrial DNA from their mothers. But three of them did. The levels of “bad” mitochondrial DNA in the babies’ blood ranged between 5% and 16%. And they were higher in the babies’ urine—the highest figure being 20%.

The researchers don’t know why this is happening. When an embryologist pulls out the nucleus of a fertilized egg, a bit of mitochondria-containing cytoplasm will inevitably be dragged along with it. But the team didn’t see any link between the amount of carried-over cytoplasm and the level of “bad” mitochondria. “We continue to investigate this issue,” Herbert said.

“As long as they don’t understand what’s happening, I would still be worried,” says Mertes.

Such low levels aren’t likely to cause mitochondrial diseases, according to experts contacted by MIT Technology Review. But some are concerned that the percentage of mutated DNA could be higher in different tissues, such as the brain or muscle, or that the levels might change with age. “You never know which tissues [reversal] will show up in,” says Mazur, who has seen the phenomenon in babies born through mitochondrial donation to parents who didn’t have mitochondrial mutations. “It’s chaotic.”

The Newcastle team says it hasn’t looked at other tissues, because it designed the study to be noninvasive.

There has been at least one case in which similar levels of “bad” mitochondria have caused symptoms, says Joanna Poulton, a mitochondrial geneticist at the University of Oxford. She thinks it’s unlikely that the children in the trial will develop any symptoms but adds that “it’s a bit of a worry.”

The age of reversal

No one knows exactly when this reversal happens. But Wells and his colleagues have some idea. In their study in Greece, they looked at the mitochondrial DNA of embryos and checked them again during pregnancy and after birth. The trial was designed to study the impact of mitochondrial donation for infertility—none of the parents involved had genes for mitochondrial disease.

The team has seen mitochondrial reversal in two of the seven babies born in the trial, says Wells. If you put the two sets of results together, mitochondrial donation “seems to have this possibility of reversal occurring in maybe about a third of children,” he says.

In his study, the reversal seemed to occur early on in the embryos’ development, Wells says. Five-day-old embryos “look perfect,” but mitochondrial mutations start showing up in tests taken at around 15 weeks of pregnancy, he says. After that point, the levels appear to be relatively stable. The Newcastle researchers say they will monitor the children until they are five years old.

People enrolling in future trials might opt for amniocentesis, which involves sampling fluid from the amniotic sac surrounding the fetus at around 15 to 18 weeks, suggests Mertes. That test might reveal the likely level of mitochondrial mutations in the resulting child. “Then the parents could decide what to do,” says Mertes. “If you could see there was a 90% mutation load [for a] very serious mitochondrial disease, they would still have an option to cancel the pregnancy,” she says.

Wells thinks the Newcastle team’s results are “generally reassuring.” He doesn’t think the trials should be paused. But he wants people to understand that mitochondrial donation is not without risk. “This can only be viewed as a risk reduction strategy, and not a guarantee of having an unaffected child,” he says.

And, as Mertes points out, there’s another option for women who carry mitochondrial DNA mutations: egg donation. Donor eggs fertilized with a partner’s sperm and transferred to a woman’s uterus won’t have her disease-causing mitochondria. 

That option won’t appeal to people who feel strongly about having a genetic link to their children. But Poulton asks: “If you know whose uterus you came out of, does it matter that the [egg] came from somewhere else?”

AI’s giants want to take over the classroom

School’s out and it’s high summer, but a bunch of teachers are plotting how they’re going to use AI this upcoming school year. God help them. 

On July 8, OpenAI, Microsoft, and Anthropic announced a $23 million partnership with one of the largest teachers’ unions in the United States to bring more AI into K–12 classrooms. Called the National Academy for AI Instruction, the initiative will train teachers at a New York City headquarters on how to use AI both for teaching and for tasks like planning lessons and writing reports, starting this fall.

The companies could face an uphill battle. Right now, most of the public perceives AI’s use in the classroom as nothing short of ruinous—a surefire way to dampen critical thinking and hasten the decline of our collective attention span (a viral story from New York magazine, for example, described how easy it now is to coast through college thanks to constant access to ChatGPT). 

Amid that onslaught, AI companies insist that AI promises more individualized learning, faster and more creative lesson planning, and quicker grading. The companies sponsoring this initiative are, of course, not doing it out of the goodness of their hearts.

No—as they hunt for profits, their goal is to make users out of teachers and students. Anthropic is pitching its AI models to universities, and OpenAI offers free courses for teachers. In an initial training session for teachers by the new National Academy for AI Instruction, representatives from Microsoft showed teachers how to use the company’s AI tools for lesson planning and emails, according to the New York Times.

It’s early days, but what does the evidence actually say about whether AI is helping or hurting students? There’s at least some data to support the case made by tech companies: A recent survey of 1,500 teens conducted by Harvard’s Graduate School of Education showed that kids are using AI to brainstorm and answer questions they’re afraid to ask in the classroom. Studies examining settings ranging from math classes in Nigeria to college physics courses at Harvard have suggested that AI tutors can lead students to become more engaged.

And yet there’s more to the story. The same Harvard survey revealed that kids are also frequently using AI for cheating and shortcuts. And an oft-cited paper from Microsoft found that relying on AI can reduce critical thinking. Not to mention the fact that “hallucinations” of incorrect information are an inevitable part of how large language models work.

There’s a lack of clear evidence that AI can be a net benefit for students, and it’s hard to trust that the AI companies funding this initiative will give honest advice on when not to use AI in the classroom.

Despite the fanfare around the academy’s launch, and the fact the first teacher training is scheduled to take place in just a few months, OpenAI and Anthropic told me they couldn’t share any specifics. 

It’s not as if teachers themselves aren’t already grappling with how to approach AI. One such teacher, Christopher Harris, who leads a library system covering 22 rural school districts in New York, has created a curriculum aimed at AI literacy. Topics range from privacy when using smart speakers (a lesson for second graders) to misinformation and deepfakes (instruction for high schoolers). I asked him what he’d like to see in the curriculum used by the new National Academy for AI Instruction.

“The real outcome should be teachers that are confident enough in their understanding of how AI works and how it can be used as a tool that they can teach students about the technology as well,” he says. The thing to avoid would be overfocusing on tools and pre-built prompts that teachers are instructed to use without knowing how they work. 

But all this will be for naught without an adjustment to how schools evaluate students in the age of AI, Harris says: “The bigger issue will be shifting the fundamental approaches to how we assign and assess student work in the face of AI cheating.”

The new initiative is led by the American Federation of Teachers, which represents 1.8 million members, as well as the United Federation of Teachers, which represents 200,000 members in New York. If they win over these groups, the tech companies will have significant influence over how millions of teachers learn about AI. But some educators are resisting the use of AI entirely, including several hundred who signed an open letter last week.

Helen Choi is one of them. “I think it is incumbent upon educators to scrutinize the tools that they use in the classroom to look past hype,” says Choi, an associate professor at the University of Southern California, where she teaches writing. “Until we know that something is useful, safe, and ethical, we have a duty to resist mass adoption of tools like large language models that are not designed by educators with education in mind.”

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

AI text-to-speech programs could “unlearn” how to imitate certain people

A technique known as “machine unlearning” could teach AI models to forget specific voices—an important step in stopping the rise of audio deepfakes, where someone’s voice is copied to carry out fraud or scams.

Recent advances in artificial intelligence have revolutionized the quality of text-to-speech technology so that people can convincingly re-create a piece of text in any voice, complete with natural speaking patterns and intonations, instead of having to settle for a robotic voice reading it out word by word. “Anyone’s voice can be reproduced or copied with just a few seconds of their voice,” says Jong Hwan Ko, a professor at Sungkyunkwan University in Korea and the coauthor of a new paper that demonstrates one of the first applications of machine unlearning to speech generation.

Copied voices have been used in scams, disinformation, and harassment. Ko, who researches audio processing, and his collaborators wanted to prevent this kind of identity fraud. “People are starting to demand ways to opt out of the unknown generation of their voices without consent,” he says. 

AI companies generally keep a tight grip on their models to discourage misuse. For example, if you ask ChatGPT to give you someone’s phone number or instructions for doing something illegal, it will likely just tell you it cannot help. However, as many examples over time have shown, clever prompt engineering or model fine-tuning can sometimes get these models to say things they otherwise wouldn’t. The unwanted information may still be hiding somewhere inside the model so that it can be accessed with the right techniques. 

At present, companies tend to deal with this issue by applying guardrails; the idea is to check whether the prompts or the AI’s responses contain disallowed material. Machine unlearning instead asks whether an AI can be made to forget a piece of information that the company doesn’t want it to know. The technique takes a leaky model and the specific training data to be redacted and uses them to create a new model—essentially, a version of the original that never learned that piece of data. While machine unlearning has ties to older techniques in AI research, it’s only in the past couple of years that it’s been applied to large language models.

Jinju Kim, a master’s student at Sungkyunkwan University who worked on the paper with Ko and others, sees guardrails as fences around the bad data put in place to keep people away from it. “You can’t get through the fence, but some people will still try to go under the fence or over the fence,” says Kim. But unlearning, she says, attempts to remove the bad data altogether, so there is nothing behind the fence at all. 

The way current text-to-speech systems are designed complicates this a little more, though. These so-called “zero-shot” models use examples of people’s speech to learn to re-create any voice, including those not in the training set—with enough data, it can be a good mimic when supplied with even a small sample of someone’s voice. So “unlearning” means a model not only needs to “forget” voices it was trained on but also has to learn not to mimic specific voices it wasn’t trained on. All the while, it still needs to perform well for other voices. 

To demonstrate how to get those results, Kim taught a re-creation of VoiceBox, a speech generation model from Meta, to respond with a random voice whenever it was prompted to produce a text sample in one of the voices to be redacted. To make these voices realistic, the model “teaches” itself using random voices of its own creation.
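
The sketch below is a toy illustration of that idea, not the authors’ code or the real VoiceBox interface. A stand-in model maps text features and a voice embedding to a “speech” vector, and during the unlearning step the training target for any speaker on the forget list is replaced with the model’s own output for a randomly sampled voice, so prompts for those voices stop reproducing them. Every class and function name here is hypothetical.

```python
# Toy sketch of voice unlearning (illustrative only; not the paper's method or VoiceBox).
import torch
import torch.nn as nn

class ToyTTS(nn.Module):
    """Stand-in for a voice-conditioned speech generator."""
    def __init__(self, text_dim=16, voice_dim=8, speech_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + voice_dim, 64),
            nn.ReLU(),
            nn.Linear(64, speech_dim),
        )

    def forward(self, text_feat, voice_emb):
        return self.net(torch.cat([text_feat, voice_emb], dim=-1))

def unlearning_step(model, optimizer, text_feat, voice_emb, speaker_ids,
                    target_speech, forget_ids, voice_dim=8):
    """One gradient step: prompts for forgotten speakers are retargeted to a random voice."""
    targets = target_speech.clone()
    for i, speaker in enumerate(speaker_ids):
        if speaker in forget_ids:
            random_voice = torch.randn(voice_dim)       # a voice the model invents itself
            with torch.no_grad():
                targets[i] = model(text_feat[i], random_voice)
    loss = nn.functional.mse_loss(model(text_feat, voice_emb), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Tiny usage example with fake data: speaker 3 is on the "forget" list.
model = ToyTTS()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
text, voices, speech = torch.randn(4, 16), torch.randn(4, 8), torch.randn(4, 32)
print(unlearning_step(model, optimizer, text, voices, [1, 2, 3, 4], speech, forget_ids={3}))
```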

According to the team’s results, which are to be presented this week at the International Conference on Machine Learning, prompting the model to imitate a voice it has “unlearned” gives back a result that—according to state-of-the-art tools that measure voice similarity—mimics the forgotten voice more than 75% less effectively than the model did before. In practice, this makes the new voice unmistakably different. But the forgetfulness comes at a cost: The model is about 2.8% worse at mimicking permitted voices. While these percentages are a bit hard to interpret, the demo the researchers released online offers very convincing results, both for how well redacted speakers are forgotten and how well the rest are remembered. A sample from the demo is given below. 

The demo pairs a voice sample from a speaker the model is meant to forget with two generated clips: the original model’s text-to-speech imitation of that voice, and the output of the unlearned model given the same prompt.

Ko says the unlearning process can take “several days,” depending on how many speakers the researchers want the model to forget. Their method also requires an audio clip about five minutes long for each speaker whose voice is to be forgotten.

In machine unlearning, pieces of data are often replaced with randomness so that they can’t be reverse-engineered back to the original. In this paper, the randomness for the forgotten speakers is very high—a sign, the authors claim, that they are truly forgotten by the model. 

 “I have seen people optimizing for randomness in other contexts,” says Vaidehi Patil, a PhD student at the University of North Carolina at Chapel Hill who researches machine unlearning. “This is one of the first works I’ve seen for speech.” Patil is organizing a machine unlearning workshop affiliated with the conference, and the voice unlearning research will also be presented there. 

She points out that unlearning itself involves inherent trade-offs between efficiency and forgetfulness because the process can take time, and can degrade the usability of the final model. “There’s no free lunch. You have to compromise something,” she says.

Machine unlearning may still be at too early a stage for, say, Meta to introduce Ko and Kim’s methods into VoiceBox. But there is likely to be industry interest. Patil is researching unlearning for Google DeepMind this summer, and while Meta did not respond with a comment, it has hesitated for a long time to release VoiceBox to the wider public because it is so vulnerable to misuse. 

The voice unlearning team seems optimistic that its work could someday get good enough for real-life deployment. “In real applications, we would need faster and more scalable solutions,” says Ko. “We are trying to find those.”

Google’s generative video model Veo 3 has a subtitles problem

As soon as Google launched its latest video-generating AI model at the end of May, creatives rushed to put it through its paces. Released just months after its predecessor, Veo 3 allows users to generate sounds and dialogue for the first time, sparking a flurry of hyperrealistic eight-second clips stitched together into ads, ASMR videos, imagined film trailers, and humorous street interviews. Academy Award–nominated director Darren Aronofsky used the tool to create a short film called Ancestra. During a press briefing, Demis Hassabis, Google DeepMind’s CEO, likened the leap forward to “emerging from the silent era of video generation.” 

But others quickly found that in some ways the tool wasn’t behaving as expected. When it generates clips that include dialogue, Veo 3 often adds nonsensical, garbled subtitles, even when the prompts it’s been given explicitly ask for no captions or subtitles to be added. 

Getting rid of them isn’t straightforward—or cheap. Users have been forced to resort to regenerating clips (which costs them more money), using external subtitle-removing tools, or cropping their videos to get rid of the subtitles altogether.

Josh Woodward, vice president of Google Labs and Gemini, posted on X on June 9 that Google had developed fixes to reduce the gibberish text. But over a month later, users are still logging issues with it in Google Labs’ Discord channel, demonstrating how difficult it can be to correct issues in major AI models.

Like its predecessors, Veo 3 is available to paying members of Google’s subscription tiers, which start at $249.99 a month. To generate an eight-second clip, users enter a text prompt describing the scene they’d like to create into Google’s AI filmmaking tool Flow, Gemini, or other Google platforms. Each Veo 3 generation costs a minimum of 20 AI credits, and the account can be topped up at a cost of $25 per 2,500 credits.
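At those rates a credit works out to one cent, so each eight-second generation costs at least 20 cents, before counting any clips that have to be regenerated.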

Mona Weiss, an advertising creative director, says that regenerating her scenes in a bid to get rid of the random captions is becoming expensive. “If you’re creating a scene with dialogue, up to 40% of its output has gibberish subtitles that make it unusable,” she says. “You’re burning through money trying to get a scene you like, but then you can’t even use it.”

When Weiss reported the problem to Google Labs through its Discord channel in the hopes of getting a refund for her wasted credits, its team pointed her to the company’s official support team, which offered her a refund for the cost of Veo 3 but not for the credits. Weiss declined, as accepting would have meant losing access to the model altogether. Google Labs’ Discord support team has been telling users that subtitles can be triggered by speech, and that it is aware of the problem and working to fix it.

So why does Veo 3 insist on adding these subtitles, and why does it appear to be so difficult to solve the problem? It probably comes down to what the model has been trained on.  

Although Google hasn’t made this information public, that training data is likely to include YouTube videos, clips from vlogs and gaming channels, and TikTok edits, many of which come with subtitles. These embedded subtitles are part of the video frames rather than separate text tracks layered on top, meaning it’s difficult to remove them before they’re used for training, says Shuo Niu, an assistant professor at Clark University in Massachusetts who studies video sharing platforms and AI.

“The text-to-video model is trained using reinforcement learning to produce content that mimics human-created videos, and if such videos include subtitles, the model may ‘learn’ that incorporating subtitles enhances similarity with human-generated content,” he says.

“We’re continuously working to improve video creation, especially with text, speech that sounds natural, and audio that syncs perfectly,” a Google spokesperson says. “We encourage users to try their prompt again if they notice an inconsistency and give us feedback using the thumbs up/down option.”

As for why the model ignores instructions such as “No subtitles,” negative prompts (telling a generative AI model not to do something) are usually less effective than positive ones, says Tuhin Chakrabarty, an assistant professor at Stony Brook University who studies AI systems. 

To fix the problem, Google would have to check every frame of each video Veo 3 has been trained on, and either get rid of or relabel those with captions before retraining the model—an endeavor that would take weeks, he says. 
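To get a sense of the scale of that cleanup, here is a rough sketch of what a frame-by-frame caption check might look like. The tooling choices (OpenCV for decoding, Tesseract for spotting text) are assumptions made for illustration and say nothing about how Google’s data pipeline actually works.

```python
# Rough illustration of frame-by-frame caption detection. OpenCV and Tesseract are
# stand-ins chosen for this sketch; this is not a description of Google's pipeline.
import cv2
import pytesseract

def frames_with_burned_in_text(video_path: str, sample_every: int = 10) -> list[int]:
    """Return the indices of sampled frames where an OCR pass finds any text."""
    flagged = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if pytesseract.image_to_string(gray).strip():
                flagged.append(idx)
        idx += 1
    cap.release()
    return flagged
```

Even sampling only one frame in ten, running a check like this over millions of training clips and then relabeling or discarding the flagged ones is the weeks-long job Chakrabarty describes.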

Katerina Cizek, a documentary maker and artistic director at the MIT Open Documentary Lab, believes the problem exemplifies Google’s willingness to launch products before they’re fully ready. 

“Google needed a win,” she says. “They needed to be the first to pump out a tool that generates lip-synched audio. And so that was more important than fixing their subtitle issue.”  

California is set to become the first US state to manage power outages with AI

California’s statewide power grid operator is poised to become the first in North America to deploy artificial intelligence to manage outages, MIT Technology Review has learned. 

“We wanted to modernize our grid operations. This fits in perfectly with that,” says Gopakumar Gopinathan, a senior advisor on power system technologies at the California Independent System Operator—known as the CAISO and pronounced KAI-so. “AI is already transforming different industries. But we haven’t seen many examples of it being used in our industry.” 

At the DTECH Midwest utility industry summit in Minneapolis on July 15, CAISO is set to announce a deal to run a pilot program using new AI software called Genie, from the energy-services giant OATI. The software uses generative AI to carry out real-time analyses for grid operators and has the potential to autonomously make decisions about key functions on the grid, a switch that might resemble going from uniformed traffic officers to sensor-equipped stoplights.

But while CAISO may deliver electrons to cutting-edge Silicon Valley companies and laboratories, the actual task of managing the state’s electrical system is surprisingly analog. 

Today, CAISO engineers scan outage reports for keywords about maintenance that’s planned or in the works, read through the notes, and then load each item into the grid software system to run calculations on how a downed line or transformer might affect power supply.

“Even if it takes you less than a minute to scan one on average, when you amplify that over 200 or 300 outages, it adds up,” says Abhimanyu Thakur, OATI’s vice president of platforms, visualization, and analytics. “Then different departments are doing it for their own respective keywords. Now we consolidate all of that into a single dictionary of keywords and AI can do this scan and generate a report proactively.” 
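The “single dictionary of keywords” idea is simple enough to sketch. The department names and keywords below are invented for illustration; they are not CAISO’s categories or OATI’s Genie.

```python
# Illustrative sketch of consolidating per-department keyword scans into one pass.
# The keyword lists and departments are made up for this example.
OUTAGE_KEYWORDS = {
    "transmission": ["line outage", "derate", "conductor"],
    "generation": ["unit trip", "forced outage", "maintenance"],
    "protection": ["relay", "misoperation", "breaker"],
}

def scan_outage_notes(notes: list[str]) -> dict[str, list[str]]:
    """Return, for each department, the outage notes that mention one of its keywords."""
    hits: dict[str, list[str]] = {dept: [] for dept in OUTAGE_KEYWORDS}
    for note in notes:
        lowered = note.lower()
        for dept, keywords in OUTAGE_KEYWORDS.items():
            if any(kw in lowered for kw in keywords):
                hits[dept].append(note)
    return hits

if __name__ == "__main__":
    sample = [
        "Planned maintenance on unit 4, forced outage possible",
        "Relay misoperation reported at substation 12",
    ]
    print(scan_outage_notes(sample))
```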

If CAISO finds that Genie produces reliable, more efficient data analyses for managing outages, Gopinathan says, the operator may consider automating more functions on the grid. “After a few rounds of testing, I think we’ll have an idea about what is the right time to call it successful or not,” he says. 

Regardless of the outcome, the experiment marks a significant shift. Most grid operators are using the same systems that utilities have used “for decades,” says Richard Doying, who spent more than 20 years as a top executive at the Midcontinent Independent System Operator, the grid operator for an area encompassing 15 states from the upper Midwest down to Louisiana. 

“These organizations are carved up for people working on very specific, specialized tasks and using their own proprietary tools that they’ve developed over time,” says Doying, now a vice president at the consultancy Grid Strategies. “To the extent that some of these new AI tools are able to draw from data across different areas of an organization and conduct more sophisticated analysis, that’s only helpful for grid operators.”

Last year, a Department of Energy report found that AI had the potential to speed up studies on grid capacity and transmission, improve weather forecasting to help predict how much energy wind and solar plants would produce at a given time, and optimize planning for electric-vehicle charging networks. Another report, from the Energy Department’s Loan Programs Office, concluded that adding more “advanced” technology such as sensors to various pieces of equipment will generate data that can enable AI to do much more over time.

In April, the PJM Interconnection—the nation’s largest grid system, spanning 13 states along the densely populated mid-Atlantic and Eastern Seaboard—took a big step toward embracing AI by inking a deal with Google to use its Tapestry software to improve regional planning and speed up grid connections for new power generators. 

ERCOT, the Texas grid system, is considering adopting technology similar to what CAISO is now set to use, according to a source with knowledge of the plans who requested anonymity because they were not authorized to speak publicly. ERCOT did not respond to a request for comment. 

Australia offers an example of what the future may look like. In New South Wales, where grid sensors and smart technology are more widely deployed, AI software rolled out in February is now predicting the production and flow of electricity from rooftop solar units across the state and automatically adjusting how much power from those panels can enter the grid. 

Until now, much of the discussion around AI and energy has focused on the electricity demands of AI data centers (check out MIT Technology Review’s Power Hungry series for more on this).

“We’ve been talking a lot about what the grid can do for AI and not nearly as much about what AI can do for the grid,” says Charles Hua, a coauthor of one of last year’s Energy Department reports who now serves as executive director of PowerLines, a nonprofit that advocates for improving the affordability and reliability of US grids. “In general, there’s a huge opportunity for grid operators, regulators, and other stakeholders in the utility regulatory system to use AI effectively and harness it for a more resilient, modernized, and strengthened grid.”

For now, Gopinathan says, he’s remaining cautiously optimistic. 

“I don’t want to overhype it,” he says. 

Still, he adds, “it’s a first step for bigger automation.”

“Right now, this is more limited to our outage management system. Genie isn’t talking to our other parts yet,” he says. “But I see a world where AI agents are able to do a lot more.”