Why Microsoft made a deal to help restart Three Mile Island

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Nuclear power is coming back to Three Mile Island.

That nuclear power plant is typically associated with a very specific event. One of its reactors, Unit 2, suffered a partial meltdown in 1979 in what remains the most significant nuclear accident in US history. It has been shuttered ever since.

But the site, in Pennsylvania, is also home to another reactor—Unit 1, which consistently and safely generated electricity for decades until it was shut down in 2019. The site’s owner announced last week that it has plans to reopen the plant and signed a deal with Microsoft. The company will purchase the plant’s entire electric generating capacity over the next 20 years.  

This news is fascinating for so many reasons. Obviously this site holds a certain significance in the history of nuclear power in the US. There’s a possibility this would be one of the first reactors in the country to reopen after shutting down. And Microsoft will be buying all the electricity from the reactor. Let’s dig into what this says about the future of the nuclear industry and Big Tech’s power demand.  

Unit 2 at Three Mile Island operated for just a few months before the accident, in March 1979. At the time, Unit 1 was down for refueling. That reactor started back up, to some controversy, in the mid-1980s and produced enough electricity for hundreds of thousands of homes in the area for more than 30 years.

Eventually, though, the plant faced economic struggles. Even though it was operating at relatively high efficiency and with low costs, it was driven out of business by record-low prices for natural gas and the introduction of relatively cheap, subsidized renewable energy to the grid, says Patrick White, research director of the Nuclear Innovation Alliance, a nonprofit think tank. 

That situation has shifted in just the past few years, White says. There’s more money available now for nuclear, including new technology-agnostic tax credits in the Inflation Reduction Act. And there’s also rising concern about the increased energy demand on the power grid, in part from tech giants looking to power data centers like those needed to run AI.

In announcing its deal with Microsoft, Constellation Energy, the owner of Three Mile Island Unit 1, also shared that the plant is getting a rebrand—the site will be renamed the Crane Clean Energy Center. (Not sure if that one’s going to stick.)  

The confluence of the particular location of this reactor and the fact that the electricity will go to power data centers (and other infrastructure) makes this whole announcement instantly attention-grabbing. As one headline put it, “Microsoft AI Needs So Much Power It’s Tapping Site of US Nuclear Meltdown.”

For some people in climate circles, this deal makes a lot of sense. Nuclear power remains one of the most expensive forms of electricity today. But experts say it could play a crucial role on the grid, since the plants typically put out a consistent amount of electricity—it’s often referred to as “firm power,” in contrast with renewables like wind and solar that are intermittently available.

Without guaranteed money, there's a chance this reactor would simply have been decommissioned as planned. Reopening plants that shuttered recently could provide an opportunity to get the benefits of nuclear power without having to build an entirely new project. 

In March, the Palisades Nuclear Plant in Michigan got a loan guarantee from the US Department of Energy’s Loan Programs Office to the tune of over $1.5 billion to help restart. Palisades shut down in 2022, and the site’s owner says it hopes to get it back online by late 2025. It will be the first shuttered reactor in the US to come back online, if everything goes as planned. (For more details, check out my story from earlier this year.)

Three Mile Island may not be far behind—Constellation says the reactor could be running again by 2028. (Interestingly, the facility will need to separately undergo a relicensing process in just a few years, as it’s currently only licensed to run through 2034. A standard 20-year extension could have it running until 2054.)

If Three Mile Island comes back online, Microsoft will be the one benefiting, as its long-term power purchase agreement would secure it enough energy to power roughly 800,000 homes every year. In this case, though, that power will be used to help run the company's data center infrastructure in the region.

This isn’t the first recent sign Big Tech is jumping in on nuclear power: Earlier this year, Amazon purchased a data center site right next to the Susquehanna nuclear power plant, also in Pennsylvania.

While Amazon will use only part of the output of the Susquehanna plant, Microsoft will buy all the power that Three Mile Island produces. That raises the question of who’s paying for what in this whole arrangement. Ratepayers won’t be expected to shoulder any of the costs to restart the facility, Constellation CEO Joe Dominguez told the Washington Post. The company also won’t seek any special subsidies from the state, he added.

However, Dominguez also told the Post that federal money is key in allowing this project to go forward. Specifically, there are tax credits in the Inflation Reduction Act set aside for existing nuclear plants. 

The company declined to give the Post a value for the potential tax credits and didn’t respond to my request for comment, but I busted out a calculator and did my own math. Assuming an 835-megawatt plant running at 96.3% capacity (the figure Constellation gave for the plant’s final year of operation) and a $15-per-megawatt-hour tax credit, that could add up to about $100 million each year, assuming requirements for wages and price are met.
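The back-of-envelope estimate above can be reproduced directly from the figures cited in the paragraph, a quick sketch rather than Constellation's own accounting:

```python
# Rough estimate of the annual IRA production tax credit for the
# restarted reactor, using only the figures cited in the article.
capacity_mw = 835          # nameplate capacity of the plant
capacity_factor = 0.963    # Constellation's figure for the final year of operation
credit_per_mwh = 15        # $/MWh tax credit, assuming wage and price requirements are met
hours_per_year = 8760

annual_mwh = capacity_mw * capacity_factor * hours_per_year
annual_credit = annual_mwh * credit_per_mwh
print(f"~${annual_credit / 1e6:.0f} million per year")  # about $106 million
```

That lands at roughly $106 million a year, consistent with the "about $100 million" figure above.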

It’ll be interesting to see how much further this trend of restarting plants might go. The Duane Arnold nuclear plant in Iowa is one potential candidate—it shuttered in 2020 after 45 years, and the site’s owner has made public comments about the potential of reopening. 

Restarting any or all of these three sites could be the latest sign of an approaching nuclear resurgence. Big tech companies need lots of energy, and bringing old nuclear plants onto the grid—or, better yet, keeping aging ones open—seems to me like a great way to meet demand.

But given the relative rarity of opportunities to snag power from recently closed or closing plants, I think the biggest question for the industry is whether this wave of interest will translate into building new reactors as well.  


Now read the rest of The Spark

Related reading

Read my story from earlier this year for all the details on what it takes to reopen a shuttered nuclear power plant and what we might see at Palisades. 

In the latest in our virtual events series, my colleagues James Temple, Melissa Heikkilä, and David Rotman are having a discussion about AI’s climate impacts. Subscribers can join them for the discussion live at 12:30 p.m. Eastern today, September 25, or check out the recording later. 

AI is an energy hog, but the effects of the technology on emissions are a bit complicated, as I covered in this newsletter.  

Three more things

It’s been a busy week for the climate team here at MIT Technology Review, so let’s do a rapid-fire round: 

  1. Countries including Germany, Sweden, and New Zealand are ending EV subsidies. I wrote about why some experts are worried that the move is coming too soon for some of them.
  2. A proposal to connect two of the US’s largest grids could be crucial to cleaning up our electricity system. The project just got a major boost in the form of hundreds of millions of dollars, and it could represent a long-awaited success for energy entrepreneur Michael Skelly, as my colleague James Temple covered in a new story.  
  3. Finally, there’s just one week until we drop our 2024 list of 15 Climate Tech Companies to Watch. Check out this preview story about the list, and keep your eyes peeled next week for the reveal. 

Keeping up with climate  

The US Department of Energy just announced $3 billion in funding to boost the battery and EV supply chain. (E&E News)

→ A single Minnesota mine could unlock billions of tax credits in the US. (MIT Technology Review)

Cheap solar panels are making that energy source abundantly available in Pakistan. But the boom also threatens to make power from the grid unaffordable. (Financial Times)

Individual action alone won’t solve the climate crisis, but there are some things people can do. Check out this package on how to decarbonize your life through choices about everything from food to transportation. (Heatmap News)

A group of major steel buyers wants a million tons of low-emissions steel in North America by 2028. These kinds of commitments from customers could help clean up heavy industry. (Canary Media)

This startup wants to use ground-up rocks and the ocean to soak up carbon dioxide. The result could transform the oceans. (New York Times)

North America’s largest food companies are struggling to cut emissions. The biggest culprit is their supply chains—the ingredients they use and the transportation needed to move them around. (Inside Climate News)

California is suing ExxonMobil, claiming the company misled consumers by perpetuating the myth that recycling could solve the plastic waste crisis. Only a small fraction of plastic waste is ever recycled. (The Verge)

Space travel is dangerous. Could genetic testing and gene editing make it safer?

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Recently, global news has been pretty bleak. So this week, I’ve decided to focus my thoughts beyond Earth’s stratosphere and well into space. A couple of weeks ago, SpaceX launched four private astronauts into orbit, where they performed the first ever spacewalk undertaken by private citizens (as opposed to astronauts trained by national agencies).

The company has more ambitious plans for space travel, and it’s not alone. Elon Musk, the founder of SpaceX, claimed on Sunday that he would launch uncrewed missions to Mars within two years, and crewed missions four years after that if the uncrewed missions were successful. (Other SpaceX timelines for reaching the Red Planet haven’t panned out.) NASA refers to Mars as its “horizon goal for human exploration.” China previously announced plans for a human mission as early as 2033 and recently moved up its timeline for an uncrewed sample return mission by two years. And the UAE has a 100-year plan to construct a habitable community on Mars by 2117.

None of this will be straightforward. Long-distance space travel can wreak havoc on human health. There’s radiation and microgravity to contend with, as well as the psychological toll of isolation and confinement. Research on identical twin astronauts has also revealed a slew of genetic changes that happen when a person spends a year in space.

That’s why some bioethicists are exploring the idea of radical treatments for future astronauts. Once we’ve figured out all the health impacts of space travel, they argue, we should edit the genomes of astronauts ahead of launch to offer them the best protection. Some have even suggested this might result in the creation of an all-new species: Homo spatialis. If this is starting to sound a bit like sci-fi, that’s because—for now, at least—it is. But there are biotechnologies we can use to help space travelers now, too.

Space travel is risky. When it comes down to it, a space launch essentially involves strapping humans into a capsule and exploding a bomb beneath them, says Paul Root Wolpe, who served as NASA’s senior bioethicist for 15 years.

Once you’re in space, you’re subject to far higher levels of radiation than you’d encounter on Earth. Too much radiation can increase a person’s risk of cancer and neurological disorders. It can also harm body tissues, resulting in cataracts or digestive diseases, for example. That’s why agencies like the US Department of Labor’s Occupational Safety and Health Administration set limits on radiation exposure. (NASA also sets limits on the amount of radiation astronauts can be exposed to.)

Then there’s microgravity. Our bodies have adapted to Earth’s gravity. Without that gravitational pull, strange things can happen. For one thing, internal fluids can start to pool at the top of the body. Muscles don’t need to work as hard when there’s no gravity, and astronauts tend to experience loss of muscle mass as well as bone.

Five years ago, scientists working with NASA published the results of a groundbreaking study comparing two identical twins—one of whom spent a year in space while the other remained on Earth. The twins, Mark and Scott Kelly, were both trained astronauts. And because they have the same set of genes, researchers were able to compare them to assess the impact of long-term space travel on how genes work.

The researchers found that both twins experienced some changes to the way their genes worked over that period, but they changed in different ways. Some of the effects in the space-faring brother lasted for more than six months. These changes are thought to be a response to the stress of space travel and perhaps a reaction to the DNA damage caused by space radiation.

Space travel comes with other risks, including weight loss, permanent eye damage caused by what is known as “spaceflight-associated neuro-ocular syndrome,” and psychological distress as a result of being far from friends and loved ones.

And if all that weren’t enough, injuries are also common on space missions, says Wolpe, who is now founding director of the Center for Peace Building and Conflict Transformation at Emory University. Tools and equipment can float around, knocking into people. Bungee cords snap. “Astronauts are supposed to wear safety goggles at all times, but they didn’t,” says Wolpe. “The injury list is lengthy … it’s really surprising how many injuries were [sustained] by astronauts on the space station.”

Commercial space travel brings a new set of dangers. Until very recently, the only people who traveled to space went through rigorous health tests and training programs overseen by national agencies. That isn’t the case for private space travel, where the rules are determined by the individual company, says Wolpe.

Astronauts are screened for common conditions like high blood pressure and diabetes. Space tourists might not be. We’re still learning the basics when it comes to the impact of space travel on health. We have no idea how it might affect a person who has various disorders and takes multiple medications.

Could gene editing protect astronauts from these potential problems? People who have adapted to high altitudes on Earth have genetic factors that allow them to thrive in low-oxygen environments—what if we could confer these factors to astronauts? And while we’re at it, why not throw in some more genetic changes—ones that might protect them from bone or muscle loss, for example?

Here’s where we get into Homo spatialis territory—the idea of a new species better suited to a life in space, or on a planet other than Earth. For the time being, this approach is not an option—there are currently no gene therapies that have been designed for people undertaking space travel. But one day “it might be in the best interests of the astronauts to undergo some genetic intervention, like gene editing, to safeguard them,” says Rosario Isasi, a bioethicist at the University of Miami. “It might be more than a duty, but a condition for an astronaut going on these missions.”

Wolpe is not keen on the idea. “There is some integrity to being human, and to the human body, that should not be breached,” he says. “These kinds of modifications are going to … end up with a number of disasters.” Isasi also hopes that advances in precision medicine, which will make possible bespoke treatments for individuals, might sidestep the need for genetic modifications.

In the meantime, genetic testing could be helpful for both astronauts and space tourists, says Wolpe. Some body tissues are more vulnerable to radiation damage, including the thyroid gland. Genetic tests that reveal a person’s risk of thyroid cancer might be useful for those considering space travel, he says.

Whether people are going into space as tourists, employees, scientists, or research subjects, figuring out how to send them safely is vitally important. After all, space tourism is nothing like regular tourism. “You’re putting [people] in a situation the human body was never designed to be in,” says Wolpe.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Scientists can test-drive space missions in extreme and remote environments here on Earth. “Analogue astronaut facilities,” which have been set up in deserts and in the Antarctic, simulate the isolating experience of real space travel, Sarah Scoles reports.

Astronaut meals could be set for a slightly weird overhaul. The prepackaged food currently used has a shelf life of around a year and a half. Making food from astronauts’ breath could one day be an alternative solution for longer space missions, writes Jonathan O’Callaghan.

Only 11 people can fit on the International Space Station at once. Perhaps a self-assembling space habitat—complete with a sea-anemone-inspired sofa—could provide alternative living quarters, writes Sarah Ward.

More than a dozen robotic vehicles are scheduled to land on the moon in the 2020s, and there are plans in the works for “lunar economies” and “permanent settlements,” reports Jonathan O’Callaghan in this piece that explores what’s next for the moon.

The International Space Station is getting old, and there are plans to destroy it by 2030. Now NASA is partnering with private companies to develop new commercial space stations for research, manufacturing, and tourism, reports David W. Brown.

From around the web

The team that earned the Nobel Prize for developing CRISPR is asking to cancel two of their own seminal patents. My colleague Antonio Regalado has the scoop. (MIT Technology Review)

In an attempt to protect young children from allergic reactions, did pediatricians inadvertently create an epidemic of peanut allergies? (Wall Street Journal)

Only 6% of the plastic produced in the US in 2021 ended up getting recycled, according to a Greenpeace report. It’s one of the reasons why microplastics are so ubiquitous. (National Geographic)

Axolotls age slowly, and no one really knows when they die. It now appears they pause at least one aspect of the aging process partway through their lives. (New Scientist)

“Mpox” has become the established name for a viral disease that has been responsible for over 200 deaths in the last couple of years—but only in the English language. Multiple names are still used in Spanish, French, and Portuguese, some of which have racist connotations. (The Lancet)

Being a living kidney donor today is less risky than it was a couple of decades ago. Data collected between 1994 and 2009 estimated 3.1 deaths within 90 days per 10,000 donations. This figure declined in the years between 2013 and 2022, to less than 1 death per 10,000 donations. (JAMA Network)

Why one developer won’t quit fighting to connect the US’s grids

Michael Skelly hasn’t learned to take no for an answer.

For much of the last 15 years, the Houston-based energy entrepreneur has worked to develop long-haul transmission lines to carry wind power across the Great Plains, Midwest, and Southwest, delivering clean electricity to cities like Albuquerque, Chicago, and Memphis. But so far, he has little to show for the effort. 

Skelly has long argued that building such lines and linking together the nation’s grids would accelerate the shift from coal- and natural-gas-fueled power plants to the renewables needed to cut the pollution driving climate change. But his previous business, Clean Line Energy Partners, shut down in 2019, after halting two of its projects and selling off interests in three more.

Skelly contends he was early, not wrong, about the need for such lines, and that the market and policymakers are increasingly coming around to his perspective. Indeed, the US Department of Energy just blessed his latest company’s proposed line with hundreds of millions in grants. 

The North Plains Connector would stretch about 420 miles from southeast Montana to the heart of North Dakota and create the first major connection between the US’s two largest grids, enabling system operators to draw on electricity generated by hydro, solar, wind, and other resources across much of the country. This could help keep regional power systems online during extreme weather events and boost the overall share of electricity generated by those clean sources. 

Skelly says he’s already secured the support of nine utilities around the region for the project, as well as more than 90% of the landowners along the route.

Michael Skelly founded Clean Line Energy Partners in 2009.
GRID UNITED

He says that more and more local energy companies have come to recognize that rising electricity demands, the growing threat storms and fires pose to power systems, and the increasing reliance on renewables have hastened the need for more transmission lines to stitch together and reinforce the country’s fraying, fractured grids.

“There’s a real understanding, really, across the country of the need to invest more in the grid,” says Skelly, now chief executive of Grid United, the Houston-based transmission development firm he founded in 2021. “We need more wires in the air.” 

Still, proposals to build long transmission lines frequently stir up controversy in the communities they would cross. It remains to be seen whether this growing understanding will be enough for Skelly’s project to succeed, or to get the US building anywhere near the number of transmission lines it now desperately needs.

Linking grids

Transmission lines are the unappreciated linchpin of the clean-energy transition, arguably as essential as solar panels in cutting emissions and as important as seawalls in keeping people safe.

These long, high, thick wires are often described as the highways of our power systems. They connect the big wind farms, hydroelectric plants, solar facilities, and other power plants to the edges of cities, where substations step down the voltage before delivering electricity into homes and businesses along distribution lines that are more akin to city streets. 

There are three major grid systems in the US: the Western Interconnection, the Eastern Interconnection, and the Texas Interconnected System. Regional grid operators such as the California Independent System Operator, the Midcontinent Independent System Operator, and the New York Independent System Operator oversee smaller local grids that are connected, to a greater or lesser extent, within those larger networks.

Transmission lines that could add significant capacity for sharing electricity back and forth across the nation’s major grid systems are especially valuable for cutting emissions and improving the stability of the power system. That’s because they allow those independent system operators to draw on a far larger pool of electricity sources. So if solar power is fading in one part of the country, they could still access wind or hydropower somewhere else. The ability to balance out fluctuations in renewables across regions and seasons, in turn, reduces the need to rely on the steady output of fossil-fuel plants. 

“There’s typically excess wind or hydro or other resources somewhere,” says James Hewett, manager of the US policy lobbying group at Breakthrough Energy, the Bill Gates–backed organization focusing on clean energy and climate issues. “But today, the limiting constraint is the ability to move resources from the place where they’re excessive to where they’re needed.” 

(Breakthrough Energy Ventures, the investment arm of the firm, doesn’t hold any investments in the North Plains Connector project or Grid United.)

It also means that even if regional wildfires, floods, hurricanes, or heat waves knock out power lines and plants in one area, operators may still be able to tap into adjacent systems to keep the lights on and air-conditioning running. That can be a matter of life and death in the event of such emergencies, as we’ve witnessed in the aftermath of heat waves and hurricanes in recent years.  

Studies have shown that weaving together the nation’s grids can boost the share of electricity that renewables reliably provide, significantly cut power-sector emissions, and lower system costs. A recent study by the Lawrence Berkeley National Lab found that the lines interconnecting the US’s major grids and the regions within them offer the greatest economic value among transmission projects, potentially providing more than $100 million in cost savings per year for every additional gigawatt of added capacity. (The study presupposes that the lines are operated efficiently and to their full capacity, among other simplifying assumptions.)

Experts say that grid interconnections can more than pay for themselves over time because, among other improved efficiencies, they allow grid operators to find cheaper sources of electricity at any given time and enable regions to get by with fewer power plants by relying on the redundancy provided by their neighbors.

But as it stands, the meager links between the Eastern Interconnection and Western Interconnection amount to “tiny little soda straws connecting two Olympic swimming pools,” says Rob Gramlich, president of Grid Strategies, a consultancy in Washington, DC. 

“A win-win-win”

Grid United’s North Plains Connector, in contrast, would be a fat pipe.

The $3.2 billion, three-gigawatt project would more than double the amount of electricity that could zip back and forth between those grid systems, and it would tightly interlink a trio of grid operators that oversee regional parts of those larger systems: the Western Electricity Coordinating Council, the Midcontinent Independent System Operator, and the Southwest Power Pool. If the line is developed, each could then more easily tap into the richest, cheapest sources at any given time across a huge expanse of the nation, be it hydropower generated in the Northwest, wind turbines cranking across the Midwest, or solar power produced anywhere.

The North Plains Connector transmission line would stretch from southeast Montana to the heart of North Dakota, connecting the nation’s two biggest grids.
COURTESY: ALLETE

This would ensure that utilities could get greater economic value out of those energy plants, which are expensive to build but relatively cheap to operate, and it would improve the reliability of the system during extreme weather, Skelly says.

“If you’ve got a heat dome in the Northwest, you can send power west,” he says. “If you have a winter storm in the Midwest, you can send power to the east.”

Grid United is developing the project as a joint venture with Allete, an energy company in Duluth, Minnesota, that operates several utilities in the region. 

The Department of Energy granted $700 million to a larger regional effort, known as the North Plains Connector Interregional Innovation project, which encompasses two smaller proposals in addition to Grid United’s. The grants will be issued through a more than $10 billion program established under the Bipartisan Infrastructure Law, enacted by President Joe Biden in 2021. 

That funding will likely be distributed to regional utilities and other parties as partial matching grants, designed to incentivize investments in the project among those likely to benefit from it. That design may also help address a chicken-and-egg problem that plagues independent transmission developers like Grid United, Breakthrough’s Hewett says. 

Regional utilities can pass along the costs of projects to their electricity customers. Companies like Grid United, however, generally can’t sign up the power producers that will pay to use their lines until they’ve got project approval, but they also often can’t secure traditional financing until they’ve lined up customers.

The DOE funding could ease that issue by providing an assurance of capital that would help get the project through the lengthy permitting process, Hewett says. 

“The states are benefiting, local utilities are benefiting, and the developer will benefit,” he says. “It’s a win-win-win.”

Transmission hurdles

Over the years, developers have floated various proposals to more tightly interlink the nation’s major grid systems. But it’s proved notoriously difficult to build any new transmission lines in the US—a problem that has only worsened in recent years. 

In the 2020s, the nation has been developing only 20% of the transmission capacity per year that it built in the early 2010s. On average, interstate transmission lines take eight to 10 years to develop “if they succeed at all,” according to a report from the Niskanen Center.

The biggest challenge in adding connections between grids, says Gramlich of Grid Strategies, is that there are no clear processes for authorizing lines that cross multiple jurisdictions and no dedicated regional or federal agencies overseeing such proposals. The fact that numerous areas may benefit from such lines also sparks interregional squabbling over how the costs should be allocated. 

In addition, communities often balk at the sight of wires and towers, particularly if the benefits of the lines mostly accrue around the end points, not necessarily in all the areas the wires cross. Any city, county, or state, or even one landowner, can hold up a project for years, if not kill it.

But energy companies themselves share much of the blame as well. Regional energy agencies, grid operators, and utilities have actively fought proposals from independent developers to erect wires passing through their territories. They often simply don’t want to forfeit control of their systems, invite added competition, or deal with the regulatory complexity of such projects. 

The long delays in building new grid capacity have become a growing impediment to building new energy projects.

As of last year, there were 2,600 gigawatts’ worth of proposed energy generation or storage projects waiting in the wings for transmission capacity that would carry their electricity to customers, according to a recent analysis by Lawrence Berkeley National Lab. That’s roughly the electricity output of 2,600 nuclear reactors, or more than double the nation’s entire power system. 
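The reactor comparison above is simple arithmetic; a minimal sketch, assuming a typical large reactor produces about 1 gigawatt and taking roughly 1,200 gigawatts as US installed generating capacity (an approximate figure, not from the article):

```python
# Rough arithmetic behind the comparison of queued projects to
# nuclear reactors and to the existing US power system.
queued_gw = 2600       # proposed generation/storage awaiting transmission
reactor_gw = 1         # assumed output of one large nuclear reactor
us_capacity_gw = 1200  # approximate US installed capacity (assumption)

reactor_equivalents = queued_gw / reactor_gw   # about 2,600 reactors
multiple_of_grid = queued_gw / us_capacity_gw  # about 2.2, i.e. more than double
print(reactor_equivalents, multiple_of_grid)
```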

The capacity of projects in the queue has risen almost eightfold from a decade ago, and about 95% of them are solar, wind, or battery proposals.

“Grid interconnection remains a persistent bottleneck,” Joseph Rand, an energy policy researcher at the lab and the lead author of the study, said in a statement.

The legacy of Clean Line Energy

Skelly spent the aughts as the chief development officer of Horizon Wind Energy, a large US wind developer that the Portuguese energy giant EDP snapped up in 2007 for more than $2 billion. Skelly then made a spirited though ill-fated run for Congress in 2008, as the Democratic nominee for the 7th Congressional District of Texas. He ran on a pro-renewables, pro-education campaign but lost by a sizable margin in a district that was solidly Republican.

The following year, he founded Clean Line Energy Partners. The company raised tens of millions of dollars and spent a decade striving to develop five long-range transmission projects that could connect the sorts of wind projects Skelly had worked to build before.

The company did successfully earn some of the permits required for several lines. But it was forced to shut down or offload its projects amid pushback from landowner groups and politicians opposed to renewables, as well as from regional utilities and public utility commissions. 

“He was going to play in other people’s sandboxes and they weren’t exactly keen on having him in there,” says Russell Gold, author of Superpower: One Man’s Quest to Transform American Energy, which recounted Skelly’s and Clean Line Energy’s efforts and failures.

Ultimately, those obstacles dragged out the projects beyond the patience of the company’s investors, who declined to continue throwing more money at them, he says. 

The company was forced to halt the Centennial West line through New Mexico and the Rock Island project across the Midwest. In addition, it sold off its stake in the Grain Belt Express, which would stretch from Kansas to Indiana, to Invenergy; the Oklahoma portion of the Plains and Eastern line to NextEra Energy; and the Western Spirit line through New Mexico, along with an associated wind farm project, to Pattern Development. 

Clean Line Energy itself wound down in 2019.

The Western Spirit transmission line was electrified in late 2021, but the other two projects are still slogging through planning and permitting.

“These things take a long time,” Skelly says. 

For all the challenges the company faced, Gold still credits it with raising awareness about the importance and necessity of long-distance interregional transmission. He says it helped spark conversations that led the Federal Energy Regulatory Commission to eventually enact rules to support regional transmission planning and encouraged other big players to focus more on building transmission lines.

“I do believe that there is a broader social, political, and commercial awareness now that the United States needs to interconnect its grids,” Gold says. 

Lessons learned

Skelly spent a few years as a senior advisor at Lazard, consulting with companies on renewable energy. But he was soon ready to take another shot at developing long-haul transmission lines and started Grid United in 2021.

The new company has proposed four transmission projects in addition to the North Plains Connector—one between Arizona and New Mexico, one between Colorado and Oklahoma, and one each within Texas and Wyoming.

Asked what he thinks the legacy of Clean Line Energy is, Skelly says it’s mixed. But he soon adds that the history of US infrastructure building is replete with projects that didn’t move ahead. The important thing, he says, is to draw the right lessons from those failures.

“When we’re smart about it, we look at the past to see what we can learn,” he says. “We certainly do that today in our business.”

Skelly says one of the biggest takeaways was that it’s important to do the expensive upfront work of meeting with landowners well in advance of applying for permitting, and to use their feedback to guide the routing of the line. 

Anne Hedges, director of policy and legislative affairs at the Montana Environmental Information Center, confirms that this is the approach Grid United has taken in the region so far.

“A lot of developers seem to be more focused on drawing a straight line on a map rather than working with communities to figure out the best placement for the transmission system,” she says. “Grid United didn’t do that. They got out on the ground and talked to people and planned a route that wasn’t linear.”

The other change that may make Grid United’s project there more likely to move forward has more to do with what the industry’s learned than what Skelly has.  

Gramlich says regional grid operators and utilities have become more receptive to collaborating with developers on transmission lines—and for self-interested reasons. They’ll need greater capacity, and soon, to stay online and meet the growing energy demands of data centers, manufacturing facilities, electric vehicles, and buildings, and address the risks to power systems from extreme weather events.

Industry observers are also hopeful that an energy permitting reform bill pending in Congress, along with the added federal funding and new rules requiring transmission providers to do more advance planning, will help accelerate development. The bipartisan bill promises to shorten the approval process for projects that are determined to be in the national interest. It would also require neighboring areas to work together on interregional transmission planning.

Hundreds of environmental groups have sharply criticized the proposal, which would also streamline approvals for certain oil and gas operations.

“This legislation guts bedrock environmental protections, endangers public health, opens up tens of millions of acres of public lands and hundreds of millions of acres of offshore waters to further oil and gas leasing, gives public lands to mining companies, and would de facto rubberstamp gas export projects that harm frontline communities and perpetuate the climate crisis,” argued a letter signed by 350.org, Earthjustice, the Center for Biological Diversity, the Union of Concerned Scientists, and hundreds of other groups.

But a recent analysis by Third Way, a center-left think tank in Washington, DC, found that the emissions benefits from accelerating transmission permitting could significantly outweigh the added climate pollution from the fossil-fuel provisions in the bill. It projects that the bill would, on balance, reduce global emissions by 400 million to 16.6 billion tons of carbon dioxide through 2050. 

“Guardedly optimistic” 

Grid United expects to begin applying for county and state permits in the next few months and for federal permits toward the end of the year. It hopes to begin construction within the next four years and switch the line on in 2032.

Since the applications haven’t yet been filed, it’s not clear which individuals or groups will oppose the project—though, given the history of such lines, some surely will.

Hedges says the Montana Environmental Information Center is reserving judgment until it sees the actual application. She says the organization will be particularly focused on any potential impact on water and wildlife across the region, “making sure that they’re not harming what are already struggling resources in this area.”

So if Skelly was too early with his last company, the obvious question is: Are the market, regulatory, and societal conditions now ripe for interregional transmission lines?

“We’re gonna find out if they are, right?” he says. “We don’t know yet.”

Skelly adds that he doesn’t think the US is going to build as much transmission as it needs to. But he does believe we’ll start to see more projects moving forward—including, he hopes, the North Plains Connector.

“You just can’t count on anything, and you’ve just got to keep going and push, push, push,” he says. “But we’re making good progress. There’s a lot of utility interest. We have a big grant from the DOE, which will help bring down the cost of the project. So knock on wood, we’re guardedly optimistic.”

A tiny new open-source AI model performs as well as powerful big ones

The Allen Institute for Artificial Intelligence (Ai2), a research nonprofit, is releasing a family of open-source multimodal language models, called Molmo, that it says perform as well as top proprietary models from OpenAI, Google, and Anthropic. 

The organization claims that its biggest Molmo model, which has 72 billion parameters, outperforms OpenAI’s GPT-4o, which is estimated to have over a trillion parameters, in tests that measure things like understanding images, charts, and documents.  

Meanwhile, Ai2 says a smaller Molmo model, with 7 billion parameters, comes close to OpenAI’s state-of-the-art model in performance, an achievement it ascribes to vastly more efficient data collection and training methods. 

What Molmo shows is that open-source AI development is now on par with closed, proprietary models, says Ali Farhadi, the CEO of Ai2. And open-source models have a significant advantage, as their open nature means other people can build applications on top of them. A demo of Molmo is available online, and the models will be available for developers to tinker with on the Hugging Face website. (Certain elements of the most powerful Molmo model are still shielded from view.) 

Other large multimodal language models are trained on vast data sets containing billions of images and text samples that have been hoovered from the internet, and they can include several trillion parameters. This process introduces a lot of noise to the training data and, with it, hallucinations, says Ani Kembhavi, a senior director of research at Ai2. In contrast, Ai2’s Molmo models have been trained on a significantly smaller and more curated data set containing only 600,000 images, and they have between 1 billion and 72 billion parameters. This focus on high-quality data, versus indiscriminately scraped data, has led to good performance with far fewer resources, Kembhavi says.

Ai2 achieved this by getting human annotators to describe the images in the model’s training data set in excruciating detail over multiple pages of text. They asked the annotators to talk about what they saw instead of typing it. Then they used AI techniques to convert their speech into data, which made the training process much quicker while reducing the computing power required. 

These techniques could prove really useful if we want to meaningfully govern the data that we use for AI development, says Yacine Jernite, who is the machine learning and society lead at Hugging Face, and was not involved in the research. 

“It makes sense that in general, training on higher-quality data can lower the compute costs,” says Percy Liang, the director of the Stanford Center for Research on Foundation Models, who also did not participate in the research. 

Another impressive capability is that the model can “point” at things, meaning it can analyze elements of an image by identifying the pixels that answer queries.

In a demo shared with MIT Technology Review, Ai2 researchers took a photo outside their office of the local Seattle marina and asked the model to identify various elements of the image, such as deck chairs. The model successfully described what the image contained, counted the deck chairs, and accurately pointed to other things in the image as the researchers asked. It was not perfect, however. It could not locate a specific parking lot, for example. 

Other advanced AI models are good at describing scenes and images, says Farhadi. But that’s not enough when you want to build more sophisticated web agents that can interact with the world and can, for example, book a flight. Pointing allows people to interact with user interfaces, he says. 

Jernite says Ai2 is operating with a greater degree of openness than we’ve seen from other AI companies. And while Molmo is a good start, he says, its real significance will lie in the applications developers build on top of it, and the ways people improve it.

Farhadi agrees. AI companies have drawn massive, multitrillion-dollar investments over the past few years. But in the past few months, investors have expressed skepticism about whether that investment will bring returns. Big, expensive proprietary models won’t do that, he argues, but open-source ones can. He says the work shows that open-source AI can also be built in a way that makes efficient use of money and time. 

“We’re excited about enabling others and seeing what others would build with this,” Farhadi says. 

Two Nobel Prize winners want to cancel their own CRISPR patents in Europe

In the decade-long fight to control CRISPR, the super-tool for modifying DNA, it’s been common for lawyers to try to overturn patents held by competitors by pointing out errors or inconsistencies.

But now, in a surprise twist, the team that earned the Nobel Prize in chemistry for developing CRISPR is asking to cancel two of their own seminal patents, MIT Technology Review has learned. The decision could affect who gets to collect the lucrative licensing fees on using the technology.

The request to withdraw the pair of European patents, by lawyers for Nobelists Emmanuelle Charpentier and Jennifer Doudna, comes after a damaging August opinion from a European technical appeals board, which ruled that the duo’s earliest patent filing didn’t explain CRISPR well enough for other scientists to use it and doesn’t count as a proper invention.

The Nobel laureates’ lawyers say the decision is so wrong and unfair that they have no choice but to preemptively cancel their patents, a scorched-earth tactic whose aim is to prevent the unfavorable legal finding from being recorded as the reason. 

“They are trying to avoid the decision by running away from it,” says Christoph Then, founder of Testbiotech, a German nonprofit that is among those opposing the patents, who provided a copy of the technical opinion and response letter to MIT Technology Review. “We think these are some of the earliest patents and the basis of their licenses.”

Discovery of the century

CRISPR has been called the biggest biotech discovery of the century, and the battle to control its commercial applications—such as gene-altered plants, modified mice, and new medical treatments—has raged for a decade.

The dispute primarily pits Charpentier and Doudna, who were honored with the Nobel Prize in 2020 for developing the method of genome editing, against Feng Zhang, a researcher at the Broad Institute of MIT and Harvard, who claimed to have invented the tool first on his own.

Back in 2014, the Broad Institute carried out a coup de main when it managed to win, and later defend, the controlling US patent on CRISPR’s main uses. But the Nobel pair could, and often did, point to their European patents as bright points in their fight. In 2017, the University of California, Berkeley, where Doudna works, touted its first European patent as exciting, “broad,” and “precedent” setting.

After all, a region representing more than 30 countries had not only recognized the pair’s pioneering discovery; it had set a standard for other patent offices around the world. It also made the US Patent Office look like an outlier whose decisions favoring the Broad Institute might not hold up long term. A further appeal challenging the US decisions is pending in federal court.

Long-running saga

But now the European Patent Office is also saying—for different reasons—that Doudna and Charpentier can’t claim their basic invention. And that’s a finding their attorneys think is so damaging, and reached in such an unjust way, that they have no choice but to sacrifice their own patents. “The Patentees cannot be expected to expose the Nobel-prize winning invention … to the repercussions of a decision handed down under such circumstances,” says the 76-page letter sent by German attorneys on their behalf on September 20.

The chief intellectual-property attorney at the University of California, Randi Jenkins, confirmed the plan to revoke the two patents but downplayed their importance. 

“These two European patents are just another chapter in this long-running saga involving CRISPR-Cas9,” Jenkins said. “We will continue pursuing claims in Europe, and we expect those ongoing claims to have meaningful breadth and depth of coverage.”

The patents being voluntarily disavowed are EP2800811, granted in 2017, and EP3401400, granted in 2019. Jenkins added the Nobelists still share one issued CRISPR patent in Europe, EP3597749, and one that is pending. That tally doesn’t include a thicket of patent claims covering more recent research from Doudna’s Berkeley lab that were filed separately.

Freedom to operate

The cancellation of the European patents will affect a broad network of biotech companies that have bought and sold rights as they seek to achieve either commercial exclusivity to new medical treatments or what’s called “freedom to operate”—the right to pursue gene-slicing research unmolested by doubts over who really owns the technique. 

These companies include Editas Medicine, allied with the Broad Institute; Caribou Biosciences and Intellia Therapeutics in the US, both cofounded by Doudna; and Charpentier’s companies, CRISPR Therapeutics and ERS Genomics.

ERS Genomics, which is based in Dublin and calls itself “the CRISPR licensing company,” was set up in Europe specifically to collect fees from others using CRISPR. It claims to have sold nonexclusive access to its “foundational patents” to more than 150 companies, universities, and organizations that use CRISPR in their labs, manufacturing, or research products.

For example, earlier this year Laura Koivusalo, founder of a small Finnish biotech company, StemSight, agreed to a “standard fee” because her company is researching an eye treatment using stem cells that were previously edited using CRISPR.

Although not every biotech company thinks it’s necessary to pay for patent rights long before it even has a product to sell, Koivusalo decided it would be the right thing to do. “The reason we got the license was the Nordic mentality of being super honest. We asked them if we needed a license to do research, and they said yes, we did,” she says.

A slide deck from ERS available online lists the fee for small startups like hers at $15,000 a year. Koivusalo says she agreed to buy a license to the same two patents that are now being canceled. She adds: “I was not aware they were revoked. I would have expected them to give a heads-up.” 

A spokesperson for ERS Genomics said its customers still have coverage in Europe based on the Nobelists’ remaining CRISPR patent and pending application.

In the US, the Broad Institute has also been selling licenses to use CRISPR. And the fees can get big if there’s an actual product involved. That was the case last year, when Vertex Pharmaceuticals won approval to sell the first CRISPR-based treatment, for sickle-cell disease. To acquire rights under the Broad Institute’s CRISPR patents, Vertex agreed to pay $50 million on the barrelhead—and millions more in the future.

PAM problem

There’s no doubt that Charpentier and Doudna were first to publish, in a 2012 paper, how CRISPR can function as a “programmable” means of editing DNA. And their patents in Europe withstood an initial round of formal oppositions filed by lawyers.

But this August, in a separate analysis, a technical body decided that Berkeley had omitted a key detail from its earliest patent application, making it so that “the skilled person could not carry out the claimed method,” according to the finding. That is, it said, the invention wasn’t fully described or enabled.

The omission relates to a feature of DNA molecules called “protospacer adjacent motifs,” or PAMs. These features, a bit like runway landing lights, determine at what general locations in a genome the CRISPR gene scissors are able to land and make cuts, and where they can’t.

In the 76-page reply letter sent by lawyers for the Nobelists, they argue there wasn’t really any need to mention these sites, which they say were so obvious that “even undergraduate students” would have known they were needed. 

The lengthy letter leaves no doubt the Nobel team feels they’ve been wronged. In addition to disavowing the patents, the text runs on because it seeks to “make of public record the reasons for which we strongly disagree with [the] assessment on all points” and to “clearly show the incorrectness” of the decision, which, they say, “fails to recognize the nature and origin of the invention, misinterprets the common general knowledge, and additionally applies incorrect legal standards.”

Want AI that flags hateful content? Build it.

Humane Intelligence, an organization focused on evaluating AI systems, is launching a competition that challenges developers to create a computer vision model that can track hateful image-based propaganda online. Organized in partnership with the Nordic counterterrorism group Revontulet, the bounty program opens September 26. It is open to anyone, 18 or older, who wants to compete and promises $10,000 in prizes for the winners.

This is the second of a planned series of 10 “algorithmic bias bounty” programs from Humane Intelligence, a nonprofit that investigates the societal impact of AI and was launched by the prominent AI researcher Rumman Chowdhury in 2022. The series is supported by Google.org, Google’s philanthropic arm.

“The goal of our bounty programs is to, number one, teach people how to do algorithmic assessments,” says Chowdhury, “but also, number two, to actually solve a pressing problem in the field.” 

Its first challenge asked participants to evaluate gaps in sample data sets that may be used to train models—gaps that may specifically produce output that is factually inaccurate, biased, or misleading. 

The second challenge deals with tracking hateful imagery online—an incredibly complex problem. Generative AI has enabled an explosion in this type of content, and AI is also deployed to manipulate content so that it won’t be removed from social media. For example, extremist groups may use AI to slightly alter an image that a platform has already banned, quickly creating hundreds of different copies that can’t easily be flagged by automated detection systems. Extremist networks can also use AI to embed a pattern into an image that is undetectable to the human eye but will confuse and evade detection systems. It has essentially created a cat-and-mouse game between extremist groups and online platforms. 
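To see why slight alterations defeat automated detection, here is a minimal illustrative sketch (not part of the bounty program): flipping a single byte of an image file produces a completely different cryptographic hash, so a blocklist of known-bad hashes misses the altered copy. Platforms use more robust perceptual hashes for this reason, but those too can be evaded with somewhat larger perturbations.

```python
import hashlib

# Stand-in for raw image data; a real attack would tweak one pixel value
# in a banned image before re-uploading it.
original = bytes([128] * 1024)
altered = bytearray(original)
altered[512] ^= 1  # imperceptible one-byte change

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(altered)).hexdigest()

# The hashes no longer match, so exact-match blocklisting fails.
print(h1 != h2)  # True
```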

The challenge asks for two different models. The first, a task for those with intermediate skills, is one that identifies hateful images; the second, considered an advanced challenge, is a model that attempts to fool the first one. “That actually mimics how it works in the real world,” says Chowdhury. “The do-gooders make one approach, and then the bad guys make an approach.” The goal is to engage machine-learning researchers on the topic of mitigating extremism, which may lead to the creation of new models that can effectively screen for hateful images.  

A core challenge of the project is that hate-based propaganda can be very dependent on its context. And someone who doesn’t have a deep understanding of certain symbols or signifiers may not be able to tell what even qualifies as propaganda for a white nationalist group. 

“If [the model] never sees an example of a hateful image from a part of the world, then it’s not going to be any good at detecting it,” says Jimmy Lin, a professor of computer science at the University of Waterloo, who is not associated with the bounty program.

This effect is amplified around the world, since many models don’t have a vast knowledge of cultural contexts. That’s why Humane Intelligence decided to partner with a non-US organization for this particular challenge. “Most of these models are often fine-tuned to US examples, which is why it’s important that we’re working with a Nordic counterterrorism group,” says Chowdhury.

Lin, though, warns that solving these problems may require more than algorithmic changes. “We have models that generate fake content. Well, can we develop other models that can detect fake generated content? Yes, that is certainly one approach to it,” he says. “But I think overall, in the long run, training, literacy, and education efforts are actually going to be more beneficial and have a longer-lasting impact. Because you’re not going to be subjected to this cat-and-mouse game.”

The challenge will run till November 7, 2024. Two winners will be selected, one for the intermediate challenge and another for the advanced; they will receive $4,000 and $6,000, respectively. Participants will also have their models reviewed by Revontulet, which may decide to add them to its current suite of tools to combat extremism. 

An AI script editor could help decide what films get made in Hollywood

Every day across Hollywood, scores of film school graduates and production assistants work as script readers. Their job is to find the diamonds in the rough from the 50,000 or so screenplays pitched each year and flag any worth pursuing further. Each script runs anywhere from 100 to 150 pages, and it can take half a day to read one and write up a “coverage,” or summary of the strengths and weaknesses. With only about 50 of these scripts selling in a given year, readers are trained to be ruthless. 

Now the film-focused tech company Cinelytic, which works with major studios like Warner Bros. and Sony Pictures to analyze film budgets and box office potential, aims to offer script feedback with generative AI. 

Today it launched a new tool called Callaia, which amateur writers and professional script readers alike can use to analyze scripts at $79 each. Using AI, it takes Callaia less than a minute to write its own coverage, which includes a synopsis, a list of comparable films, grades for areas like dialogue and originality, and actor recommendations. It also makes a recommendation on whether or not the film should be financed, giving it a rating of “pass,” “consider,” “recommend,” or “strongly recommend.” Though the foundation of the tool is built with ChatGPT’s API, the team had to coach the model on script-specific tasks like evaluating genres and writing a movie’s logline, which summarizes the story in a sentence. 

“It helps people understand the script very quickly,” says Tobias Queisser, Cinelytic’s cofounder and CEO, who also had a career as a film producer. “You can look at more stories and more scripts, and not eliminate them based on factors that are detrimental to the business of finding great content.”

The idea is that Callaia will give studios a more analytical way to predict how a script may perform on the screen before spending on marketing or production. But, the company says, it’s also meant to ease the bottleneck that script readers create in the filmmaking process. With such a deluge to sort through, many scripts can make it to decision-makers only if they have a recognizable name attached. An AI-driven tool would democratize the script selection process and allow better scripts and writers to be discovered, Queisser says.

The tool’s introduction may further fuel the ongoing Hollywood debate about whether AI will help or harm its creatives. Since the public launch of ChatGPT in late 2022, the technology has drawn concern everywhere from writers’ rooms to special effects departments, where people worry that it will cheapen, augment, or replace human talent.  

In this case, Callaia’s success will depend on whether it can provide critical feedback as well as a human script reader can. 

That’s a challenge because of what GPT and other AI models are built to do, according to Tuhin Chakrabarty, a researcher who studied how well AI can analyze creative works during his PhD in computer science at Columbia University. In one of his studies, Chakrabarty and his coauthors had various AI models and a group of human experts—including professors of creative writing and a screenwriter—analyze the quality of 48 stories: 12 that appeared in the New Yorker and 36 that were AI-generated. His team found that the two groups virtually never agreed on the quality of the works. 

“Whenever you ask an AI model about the creativity of your work, it is never going to say bad things,” Chakrabarty says. “It is always going to say good things, because it’s trained to be a helpful, polite assistant.”

Cinelytic CTO Dev Sen says this trait did present a hurdle in the design of Callaia, and that the initial output of the model was overly positive. That improved with time and tweaking. “We don’t necessarily want to be overly critical, but aim for a more balanced analysis that points out both strengths and weaknesses in the script,” he says. 

Vir Srinivas, an independent filmmaker whose film Orders from Above won Best Historical Film at Cannes in 2021, agreed to look at an example of Callaia’s output to see how well the AI model can analyze a script. I showed him what the model made of a 100-page script about a jazz trumpeter on a journey of self-discovery in San Francisco, which Cinelytic provided. Srinivas says that the coverage generated by the model didn’t go deep enough to present genuinely helpful feedback to a screenwriter.

“It’s approaching the script in too literal a sense and not a metaphorical one—something which human audiences do intuitively and unconsciously,” he says. “It’s as if it’s being forced to be diplomatic and not make any waves.”

There were other flaws, too. For example, Callaia predicted that the film would need a budget of just $5 to $10 million but also suggested that expensive A-listers like Paul Rudd would have been well suited for the lead role.

Cinelytic says it’s currently at work improving the actor recommendation component, and though the company did not provide data on how well its model analyzes a given script, Sen says feedback from 100 script readers who beta-tested the model was overwhelmingly positive. “Most of them were pretty much blown away, because they said that the coverages were on the order of, if not better than, the coverages they’re used to,” he says. 

Overall, Cinelytic is pitching Callaia as a tool meant to quickly provide feedback on lots of scripts, not to replace human script readers, who will still read and adjust the tool’s findings. Queisser, who is cognizant that whether AI can effectively write or edit creatively is hotly contested in Hollywood, is hopeful the tool will allow script readers to more quickly identify standout scripts while also providing an efficient source of feedback for writers.

“Writers that embrace our tool will have something that can help them refine their scripts and find more opportunities,” he says. “It’s positive for both sides.”

OpenAI released its advanced voice mode to more people. Here’s how to get it.

OpenAI is broadening access to Advanced Voice Mode, a feature of ChatGPT that lets you speak more naturally with the AI model. You can interrupt its responses midsentence, and it can sense and interpret your emotions from your tone of voice and adjust its responses accordingly. 

These features were teased back in May when OpenAI unveiled GPT-4o, but they were not released until July—and then just to an invite-only group. (At least initially, there seem to have been some safety issues with the model; OpenAI gave several Wired reporters access to the voice mode back in May, but the magazine reported that the company “pulled it the next morning, citing safety concerns.”) Users who’ve been able to try it have largely described the model as an impressively fast, dynamic, and realistic voice assistant—which has made its limited availability particularly frustrating to some other OpenAI users. 

Today is the first time OpenAI has promised to bring the new voice mode to a wide range of users. Here’s what you need to know.

What can it do? 

Though ChatGPT currently offers a standard voice mode to paid users, its interactions can be clunky. In the mobile app, for example, you can’t interrupt the model’s often long-winded responses with your voice, only with a tap on the screen. The new version fixes that, and also promises to modify its responses on the basis of the emotion it’s sensing from your voice. As with other versions of ChatGPT, users can personalize the voice mode by asking the model to remember facts about themselves. The new mode also has improved its pronunciation of words in non-English languages.

AI investor Allie Miller posted a demo of the tool in August, which highlighted a lot of the same strengths of OpenAI’s own release videos: The model is fast and adept at changing its accent, tone, and content to match your needs.

The update also adds new voices. Shortly after the launch of GPT-4o, OpenAI was criticized for the similarity between the female voice in its demo videos, named Sky, and that of Scarlett Johansson, who played an AI love interest in the movie Her. OpenAI then removed the voice. Now it has launched five new voices, named Arbor, Maple, Sol, Spruce, and Vale, which will be available in both the standard and advanced voice modes. MIT Technology Review has not heard them yet, but OpenAI says they were made using professional voice actors from around the world. “We interviewed dozens of actors to find those with the qualities of voices we feel people will enjoy talking to for hours—warm, approachable, inquisitive, with some rich texture and tone,” a company spokesperson says. 

Who can access it and when?

For now, OpenAI is rolling out access to Advanced Voice Mode to Plus users, who pay $20 per month for a premium version, and Team users, who pay $30 per month and have higher message limits. The next group to receive access will be those in the Enterprise and Edu tiers. The exact timing, though, is vague; an OpenAI spokesperson says the company will “gradually roll out access to all Plus and Team users and will roll out to Enterprise and Edu tiers starting next week.” The company hasn’t committed to a firm deadline for when all users in these categories will have access. A message in the ChatGPT app indicates that all Plus users will have access by “the end of fall.”

There are geographic limitations. The new feature is not yet available in the EU, the UK, Switzerland, Iceland, Norway, or Liechtenstein.

There is no immediate plan to release Advanced Voice Mode to free users. (The standard mode remains available to all paid users.)

What steps have been taken to make sure it’s safe?

As the company noted upon the initial release in July and again emphasized this week, Advanced Voice Mode has been safety-tested by external experts “who collectively speak a total of 45 different languages, and represent 29 different geographies.” The GPT-4o system card details how the underlying model handles issues like generating violent or erotic speech, imitating people’s voices without their consent, and generating copyrighted content.

Still, OpenAI’s models are not open-source. Open-source models are more transparent about their training data and the “model weights” that govern how the AI produces responses, which makes OpenAI’s closed-source models harder for independent researchers to evaluate for safety, bias, and harm.

Why virologists are getting increasingly nervous about bird flu

Bird flu has been spreading in dairy cows in the US—and the scale of the spread is likely to be far worse than it looks. In addition, 14 human cases have been reported in the US since March. Both are worrying developments, say virologists, who fear that the country’s meager response to the virus is putting the entire world at risk of another pandemic.

The form of bird flu that has been spreading over the last few years has been responsible for the deaths of millions of birds and tens of thousands of marine and land mammals. But infections in dairy cattle, first reported back in March, brought us a step closer to human spread. Since then, the situation has only deteriorated. The virus appears to have passed from cattle to poultry on multiple occasions. “If that virus sustains in dairy cattle, they will have a problem in their poultry forever,” says Thomas Peacock, a virologist at the Pirbright Institute in Woking, UK.

Worse, this form of bird flu that is now spreading among cattle could find its way back into migrating birds. It might have happened already. If that’s the case, we can expect these birds to take the virus around the world.

“It’s really troubling that we’re not doing enough right now,” says Seema Lakdawala, a virologist at the Emory University School of Medicine in Atlanta, Georgia. “I am normally very moderate in terms of my pandemic-scaredness, but the introduction of this virus into cattle is really troubling.”

Not just a flu for birds

Bird flu is so named because it spreads stably in birds. The type of H5N1 that has been decimating bird populations for the last few years was first discovered in the late 1990s. But in 2020, H5N1 began to circulate in Europe “in a big way,” says Peacock. The virus spread globally, via migrating ducks, geese, and other waterfowl. Over the following months and years, it made it to the Americas, Africa, Asia, and eventually even Antarctica, where it was detected earlier this year.

And while many ducks and geese seem to be able to survive being infected with the virus, other bird species are much more vulnerable. H5N1 is especially deadly for chickens, for example—their heads swell, they struggle to breathe, and they experience extreme diarrhea. Seabirds like puffins and guillemots also seem to be especially susceptible to the virus, although it’s not clear why. Over the last few years, we’ve seen the worst ever outbreak of bird flu in birds. Millions of farmed birds have died, and an unknown number of wild birds—in the tens of thousands at the very least—have also succumbed. “We have no idea how many just fell into the sea and were never seen again,” says Peacock.

Alarmingly, animals that hunt and scavenge affected birds have also become infected with the virus. The list of affected mammals includes bears, foxes, skunks, otters, dolphins, whales, sea lions, and many more. Some of these animals appear to be able to pass the virus to other members of their species. In 2022, an outbreak of H5N1 in sea lions that started in Chile spread to Argentina and eventually to Uruguay and Brazil. At least 30,000 died. The sea lions may also have passed the virus to nearby elephant seals in Argentina, around 17,000 of which have succumbed to the virus.

This is bad news—not just for the affected animals, but for people, too. It’s not just a bird flu anymore. And when a virus can spread in other mammals, it’s a step closer to being able to spread in humans. That is even more likely when the virus spreads in an animal that people tend to spend a lot of time interacting with.

This is partly why the virus’s spread in dairy cattle is so troubling. The form of the virus that is spreading in cows is slightly different from the one that had been circulating in migrating birds, says Lakdawala. The mutations in this virus have likely enabled it to spread more easily among the animals.

Evidence suggests that the virus is spreading through the use of shared milking machinery within cattle herds. Infected milk can contaminate the equipment, allowing the virus to infect the udder of another cow. The virus is also spreading between herds, possibly by hitching a ride on people who work on multiple farms, or via other animals, or potentially via airborne droplets.

Milk from infected cows can look thickened and yogurt-like, and farmers tend to pour it down drains. This ends up irrigating farms, says Lakdawala. “Unless the virus is inactivated, it just remains infectious in the environment,” she says. Other animals could be exposed to the virus this way.

Hidden infections

So far, 14 states have reported a total of 208 infected cattle herds. Some states have reported only one or two cases among their cattle. But this is extremely unlikely to represent the full picture, given how rapidly the virus is spreading among herds in states that are doing more testing, says Peacock. In Colorado, where state-licensed dairy farms that sell pasteurized milk are required to submit milk samples for weekly testing, 64 herds have been reported to be affected. Neighboring Wyoming, which does not have the same requirements, has reported only one affected herd.

We don’t have a good idea of how many people have been infected either, says Lakdawala. The official count from the CDC is 14 people since April 2024, but testing is not routine, and because symptoms are currently fairly mild in people, we’re likely to be missing a lot of cases.

“It’s very frustrating, because there are just huge gaps in the data that’s coming out,” says Peacock. “I don’t think it’s unfair to say that a lot of outside observers don’t think this outbreak is being taken particularly seriously.”

And the virus is already spreading from cows back into wild birds and poultry, says Lakdawala: “There is definitely a concern that the virus is going to [become more widespread] in birds and cattle … but also other animals that ruminate, like goats.”

It may already be too late to rid America’s cattle herds of the bird flu virus. If it continues to circulate, it could become stable in the population. This is what has happened with flu in pigs around the world. That could also spell disaster—not only would the virus represent a constant risk to humans and other animals that come into contact with the cows, but it could also evolve over time. We can’t predict how this evolution might take shape, but there’s a chance the result could be a form of the virus that is better at spreading in people or causing fatal infections.

So far, it is clear that the virus has mutated but hasn’t yet acquired any of these more dangerous mutations, says Michael Tisza, a bioinformatics scientist at Baylor College of Medicine in Houston. That being said, Tisza and his colleagues have been looking for the virus in wastewater from 10 cities in Texas—and they have found H5N1 in all of them.

Tisza and his colleagues don’t know where this virus is coming from—whether it’s coming from birds, milk, or infected people, for example. But the team didn’t find any signal of the virus in wastewater during 2022 or 2023, when there were outbreaks in migratory birds and poultry. “In 2024, it’s been a different story,” says Tisza. “We’ve seen it a lot.”

Together, the evidence that the virus is evolving and spreading among mammals, and specifically cattle, has put virologists on high alert. “This virus is not causing a human pandemic right now, which is great,” says Tisza. “But it is a virus of pandemic potential.”

How AI can help spot wildfires

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

In February 2024, a broken utility pole brought down power lines near the small town of Stinnett, Texas. In the following weeks, the fire reportedly sparked by that equipment grew to burn over 1 million acres, the biggest wildfire in the state’s history.

Anything from stray fireworks to lightning strikes can start a wildfire. While it’s natural for many ecosystems to see some level of fire activity, the hotter, drier conditions brought on by climate change are fueling longer fire seasons with larger fires that burn more land.

This means that the need to spot wildfires earlier is becoming ever more crucial, and some groups are turning to technology to help. My colleague James Temple just wrote about a new effort from Google to fund an AI-powered wildfire-spotting satellite constellation. Read his full story for the details, and in the meantime, let’s dig into how this project fits into the world of fire-detection tech and some of the challenges that lie ahead.

The earliest moments in the progression of a fire can be crucial. Today, many fires are reported to authorities by bystanders who happen to spot them and call emergency services. Technologies could help officials by detecting fires earlier, well before they grow into monster blazes.

One such effort is called FireSat. It’s a project from the Earth Fire Alliance, a collaboration between Google’s nonprofit and research arms, the Environmental Defense Fund, Muon Space (a satellite company), and others. This planned system of 52 satellites should be able to spot fires as small as five by five meters (about 16 feet by 16 feet), and images will refresh every 20 minutes.

These wouldn’t be the first satellites to help with wildfire detection, but most existing systems deliver either high-resolution images or frequent refreshes, not both. The new project aims to offer both.

A startup based in Germany, called OroraTech, is also working to launch new satellites that specialize in wildfire detection. The small satellites (around the size of a shoebox) will orbit close to Earth and use sensors that detect heat. The company’s long-term goal is to launch 100 of the satellites into space and deliver images every 30 minutes.

Other companies are staying on Earth, deploying camera stations that can help officials identify, confirm, and monitor fires. Pano AI is using high-tech camera stations to try to spot fires earlier. The company mounts cameras on high vantage points, like the tops of mountains, and spins them around to get a full 360-degree view of the surrounding area. It says the tech can spot wildfire activity within a 15-mile radius. The cameras pair up with algorithms to automatically send an alert to human analysts when a potential fire is detected.

Having more tools to help detect wildfires is great. But whenever I hear about such efforts, I’m struck by a couple of major challenges for this field. 

First, prevention of any sort can often be undervalued, since a problem that never happens feels much less urgent than one that needs to be solved.

Pano AI, which has a few camera stations deployed, points to examples in which its technology detected fires earlier than bystander reports. In one case in Oregon, the company’s system issued a warning 14 minutes before the first emergency call came in, according to a report given to TechCrunch.

Intuitively, it makes sense that catching a blaze early is a good thing. And modeling can show what might have happened if a fire hadn’t been caught early. But it’s really difficult to determine the impact of something that didn’t happen. These systems will need to be deployed for a long time, and researchers will need to undertake large-scale, systematic studies, before we’ll be able to say for sure how effective they are at preventing damaging fires. 

The prospect of cost is also a tricky piece of this for me to wrap my head around. It’s in the public interest to prevent wildfires that will end up producing greenhouse-gas emissions, not to mention endangering human lives. But who’s going to pay for that?

Each of Pano AI’s stations costs something like $50,000 per year. The company’s customers include utilities, which have a vested interest both in making sure their equipment doesn’t start fires and in watching out for blazes that could damage their infrastructure.

The electric utility Xcel, whose equipment allegedly sparked that fire in Texas earlier this year, is facing lawsuits over its role. And utilities can face huge costs after fires. Last year’s deadly blazes in Hawaii caused billions of dollars in damages, and Hawaiian Electric recently agreed to pay roughly $2 billion for its role in those fires. 

The proposed satellite system from the Earth Fire Alliance will cost more than $400 million all told. The group has secured about two-thirds of what it needs for the first phase of the program, which includes the first four launches, but it’ll need to raise a lot more money to make its AI-powered wildfire-detecting satellite constellation a reality.


Now read the rest of The Spark

Related reading

Read more about how an AI-powered satellite constellation can help spot wildfires faster here.

Other companies are aiming to use balloons that will surf on wind currents to track fires. Urban Sky is deploying balloons in Colorado this year.

Satellite images can also be used to tally up the damage and emissions caused by fires. Earlier this year I wrote about last year’s Canadian wildfires, which produced more emissions than fossil fuels did in most countries in 2023.

Another thing

We’re just two weeks away from EmTech MIT, our signature event on emerging technologies. I’ll be on stage speaking with tech leaders on topics like net-zero buildings and emissions from Big Tech. We’ll also be revealing our 2024 list of Climate Tech Companies to Watch. 

For a preview of the event, check out this conversation I had with MIT Technology Review executive editor Amy Nordrum and editor in chief Mat Honan. You can register to join us on September 30 and October 1 at the MIT campus or online—hope to see you there!

Keeping up with climate  

The US Postal Service is finally getting its long-awaited electric vehicles. They’re funny-looking, and the drivers seem to love them already. (Associated Press)

→ Check out this timeline I made in December 2022 of the multi-year saga it took for the agency to go all in on EVs. (MIT Technology Review)

Microsoft is billing itself as a leader in AI for climate innovation. At the same time, the tech giant is selling its technology to oil and gas companies. Check out this fascinating investigation from my former colleague Karen Hao. (The Atlantic)

Imagine solar panels that aren’t affected by a cloudy day … because they’re in space. Space-based solar power sounds like a dream, but advances in solar tech and falling launch costs have proponents arguing that it’s a dream closer than ever to becoming reality. Many are still skeptical. (Cipher)

Norway is the first country with more EVs on the road than gas-powered cars. Diesel vehicles are still the most common, though. (Washington Post)

The emissions cost of delivering Amazon packages keeps ticking up. A new report from Stand.earth estimates that delivery emissions have increased by 75% since just 2019. (Wired)

BYD has been dominant in China’s EV market. The company is working to expand, but to compete in the UK and Europe, it will need to win over wary drivers. (Bloomberg)

Some companies want to make air-conditioning systems in big buildings smarter to help cut emissions. Grid-interactive efficient buildings can cut energy costs and demand at peak hours. (Canary Media)