Google is finally taking action to curb non-consensual deepfakes

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s the Taylor Swifts of the world that are going to save us. In January, nude deepfakes of Taylor Swift went viral on X, which caused public outrage. Nonconsensual explicit deepfakes are one of the most common and severe types of harm posed by AI. The generative AI boom of the past few years has only made the problem worse, and we’ve seen high-profile cases of children and female politicians being abused with these technologies. 

Though terrible, Swift’s deepfakes did perhaps more than anything else to raise awareness about the risks and seem to have galvanized tech companies and lawmakers to do something. 

“The screw has been turned,” says Henry Ajder, a generative AI expert who has studied deepfakes for nearly a decade. We are at an inflection point where the pressure from lawmakers and awareness among consumers is so great that tech companies can’t ignore the problem anymore, he says. 

First, the good news. Last week Google said it is taking steps to keep explicit deepfakes from appearing in search results. The tech giant is making it easier for victims to request that nonconsensual fake explicit imagery be removed. It will also filter all explicit results on similar searches and remove duplicate images. This will prevent the images from popping back up in the future. Google is also downranking search results that lead to explicit fake content. When someone searches for deepfakes and includes someone’s name in the search, Google will aim to surface high-quality, non-explicit content, such as relevant news articles.

This is a positive move, says Ajder. Google’s changes remove a huge amount of visibility for nonconsensual, pornographic deepfake content. “That means that people are going to have to work a lot harder to find it if they want to access it,” he says. 

In January, I wrote about three ways we can fight nonconsensual explicit deepfakes. These included regulation; watermarks, which would help us detect whether something is AI-generated; and protective shields, which make it harder for attackers to use our images. 

Eight months on, watermarks and protective shields remain experimental and unreliable, but the good news is that regulation has caught up a little bit. For example, the UK has banned both creation and distribution of nonconsensual explicit deepfakes. This decision led a popular site that distributes this kind of content, Mr DeepFakes, to block access to UK users, says Ajder. 

The EU’s AI Act is now officially in force and could usher in some important changes around transparency. The law requires deepfake creators to clearly disclose that the material was created by AI. And in late July, the US Senate passed the Defiance Act, which gives victims a way to seek civil remedies for sexually explicit deepfakes. (This legislation still needs to clear many hurdles in the House to become law.) 

But a lot more needs to be done. Google can clearly identify which websites are getting traffic and tries to remove deepfake sites from the top of search results, but it could go further. “Why aren’t they treating this like child pornography websites and just removing them entirely from searches where possible?” Ajder says. He also found it a weird omission that Google’s announcement didn’t mention deepfake videos, only images. 

Looking back at my story about combating deepfakes with the benefit of hindsight, I can see that I should have included more things companies can do. Google’s changes to search are an important first step. But app stores are still full of apps that allow users to create nude deepfakes, and payment providers still supply the infrastructure that lets people use these apps.

Ajder calls for us to radically reframe the way we think about nonconsensual deepfakes and pressure companies to make changes that make it harder to create or access such content. 

“This stuff should be seen and treated online in the same way that we think about child pornography—something which is reflexively disgusting, awful, and outrageous,” he says. “That requires all of the platforms … to take action.” 


Now read the rest of The Algorithm

Deeper Learning

End-of-life decisions are difficult and distressing. Could AI help?

A few months ago, a woman in her mid-50s—let’s call her Sophie—experienced a hemorrhagic stroke, which left her with significant brain damage. Where should her medical care go from there? This difficult question was left, as it usually is in these kinds of situations, to Sophie’s family members, but they couldn’t agree. The situation was distressing for everyone involved, including Sophie’s doctors.

Enter AI: End-of-life decisions can be extremely upsetting for surrogates tasked with making calls on behalf of another person, says David Wendler, a bioethicist at the US National Institutes of Health. Wendler and his colleagues are working on something that could make things easier: an artificial-intelligence-based tool that can help surrogates predict what patients themselves would want. Read more from Jessica Hamzelou here.

Bits and Bytes

OpenAI has released a new ChatGPT bot that you can talk to
The new chatbot represents OpenAI’s push into a new generation of AI-powered voice assistants in the vein of Siri and Alexa, but with far more capabilities to enable more natural, fluent conversations. (MIT Technology Review)

Meta has scrapped celebrity AI chatbots after they fell flat with users
Less than a year after announcing it was rolling out AI chatbots based on celebrities such as Paris Hilton, the company is scrapping the feature. Turns out nobody wanted to chat with a random AI celebrity after all! Instead, Meta is rolling out a new feature called AI Studio, which allows creators to make AI avatars of themselves that can chat with fans. (The Information)

OpenAI has a watermarking tool to catch students cheating with ChatGPT but won’t release it
The tool can detect text written by artificial intelligence with 99.9% certainty, but the company hasn’t launched it for fear it might put people off from using its AI products. (The Wall Street Journal)

The AI Act has entered into force
At last! Companies now need to start complying with one of the world’s first sweeping AI laws, which aims to curb the worst harms. It will usher in much-needed changes to how AI is built and used in the European Union and beyond. I wrote about what will change with this new law, and what won’t, in March. (The European Commission)

How TikTok bots and AI have powered a resurgence in UK far-right violence
Following the tragic stabbing of three girls in the UK, the country has seen a surge of far-right riots and vandalism. The rioters have created AI-generated images that incite hatred and spread harmful stereotypes. Far-right groups have also used AI music generators to create songs with xenophobic content. These have spread like wildfire online thanks to powerful recommendation algorithms. (The Guardian)

Your future air conditioner might act like a battery

As temperatures climb on hot days, many of us are quick to crank up our fans or air conditioners. These cooling systems can be a major stress on electrical grids, which has inspired some inventors to create versions that can store energy as well as use it. 

Cooling represents 20% of global electricity demand in buildings, a share that’s expected to rise as the planet warms and more of the world turns to cooling technology. During peak demand hours, air conditioners can account for over half the total demand on the grid in some parts of the world today.

New cooling technologies that incorporate energy storage could help by charging themselves when renewable electricity is available and demand is low, and still providing cooling services when the grid is stressed.  

“We say, take the problem, and turn it into a solution,” says Yaron Ben Nun, founder and chief technology officer of Nostromo Energy.

One of Nostromo Energy’s systems, which it calls an IceBrick, is basically a massive ice cube tray. It cools down a solution made of water and glycol that’s used to freeze individual capsules filled with water. One IceBrick can be made up of thousands of these containers, which each hold about a half-gallon, or roughly two liters, of water.

Insulation keeps the capsules frozen until it’s time to use them to help cool down a building. Then the ice is used to drop the temperature of the water-glycol mixture, which in turn cools down the water that circulates in the building’s chilling system. The whole thing is designed to work as an add-on with existing equipment, Ben Nun says. 

Nostromo installed its first system in the US in 2023, at the Beverly Hilton hotel in Los Angeles. It has a capacity of 1.4 megawatt-hours, and it also serves the neighboring Waldorf Astoria. The installation contains 40,000 capsules, amounting to about 150,000 pounds of ice. It usually charges up for 10 to 12 hours, starting at night and finishing around midday. That leaves it ready to discharge its cooling power between the late afternoon and evening, when demand on the grid is high and solar power is dropping off as the sun sets.
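For readers who want to check the arithmetic, the stated 1.4-megawatt-hour capacity is broadly consistent with that much ice once you account for the chiller it displaces. Here is a rough back-of-envelope sketch; the latent heat of fusion of ice is standard physics, but the chiller efficiency (COP) is my assumption, not a Nostromo figure:

```python
# Back-of-envelope check of the Beverly Hilton IceBrick numbers.
# Latent heat of fusion of ice (334 kJ/kg) is standard physics;
# the chiller COP is an assumed typical value, not from Nostromo.

LB_TO_KG = 0.4536
LATENT_HEAT_KJ_PER_KG = 334        # energy absorbed melting 1 kg of ice
ASSUMED_CHILLER_COP = 4.5          # typical large commercial chiller

ice_kg = 150_000 * LB_TO_KG                            # ~68,000 kg of ice
thermal_kwh = ice_kg * LATENT_HEAT_KJ_PER_KG / 3600    # kJ -> kWh (thermal)
electric_equiv_mwh = thermal_kwh / ASSUMED_CHILLER_COP / 1000

print(f"Thermal storage: {thermal_kwh / 1000:.1f} MWh (thermal)")
print(f"Electric-equivalent: {electric_equiv_mwh:.1f} MWh")  # ~1.4 MWh
```

Under those assumptions, 150,000 pounds of ice holds about 6.3 MWh of thermal energy, which would have taken roughly 1.4 MWh of chiller electricity to produce, matching the stated capacity.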

Using the IceBrick increases the total electricity needed for cooling, as some energy is lost to inefficiency during the cycle. But the goal is to decrease the energy demand during peak hours, which can cut costs for building owners, Ben Nun says. The company is in the process of securing roughly $300 million in funding, in part from the US Department of Energy’s Loan Programs Office, to fully finance 200 of these systems in California, he adds. 

Nostromo’s IceBrick is made of individual capsules that freeze and thaw to store energy. (Nostromo)

While building owners can benefit immediately from these individual energy storage solutions, the real potential to help the grid comes when systems are linked together, Ben Nun says. 

When the grid is extremely stressed, utility companies are sometimes forced to shut off electricity supply to some areas, leaving people there without power when they need it most. Technologies that can adjust to meet the grid’s needs could help reduce reliance on these rolling blackouts. 

This kind of approach isn’t new—many commercial units have large tanks that hold chilled water or another cooling fluid that can drop the temperature in a building at a moment’s notice. But Nostromo’s technology can store more energy with much less material, because it uses the freezing and melting process rather than just cooling down a liquid, Ben Nun says. 

Startup Blue Frontier has differentiated itself in this space by building cooling systems that use desiccants. These materials can suck up moisture—like the little packets of silica beads that often come with new shoes and bags. But instead of those beads, the company is using a concentrated salt solution.

Blue Frontier’s cooling units pass a stream of air over a thin layer of the desiccant, which pulls moisture out of the air. That dry air is then used in an evaporative cooling process (similar to the way sweat cools your skin).

Desiccant cooling systems can be more efficient than the traditional vapor compression air conditioners on the market today, says Daniel Betts, founder and CEO of Blue Frontier. But the system also benefits from the ability to charge up during certain times and deliver cooling at other times.

The key to the energy storage aspect of desiccant cooling is the recharging: Like sponges, desiccants can only soak up a limited amount of water before they need to be wrung out. Blue Frontier does this by causing some water in the salt solution to evaporate, typically with a heat pump, to make it more concentrated. The recharging system can run constantly, or in bursts that can be timed to match periods when electricity is cheap or when more renewable power is available.

The benefit of these energy storage technologies is that they don’t require people to turn their cooling systems down or off to help relieve stress on the grid, Betts says. 

Blue Frontier is testing several systems with customers today and hopes to manufacture larger quantities soon. And while commercial buildings are getting the first installations, Betts says he’s interested in bringing the technology to homes and other buildings too.

One challenge facing the companies working on these emerging technologies is finding a way to store large amounts of energy effectively without adding too much cost, says Ankit Kalanki, a principal in the carbon-free buildings program at the Rocky Mountain Institute, a nonprofit energy think tank. Cooling technologies like air conditioners are already expensive, so future solutions will have to be competitively priced to succeed in the market. But given the world’s growing cooling demand, there’s still a significant opportunity for new technologies to help meet those needs, he adds.

Just rethinking air conditioning won’t be enough to meet the massive increase in energy demand for cooling, which could triple between now and 2050. To both do that and cut emissions, we’ll still need significantly more renewable energy capacity as well as gigantic battery installations on the grid. But adding flexibility into air-conditioning systems could help cut the investment needed to get to a zero-carbon grid.

Cooling systems can help us cope with our warming climate, Ben Nun says, but there’s a problem with the current options: “You’ll cool yourself, but you keep on warming the globe.”

We need to prepare for ‘addictive intelligence’

AI concerns overemphasize harms arising from subversion rather than seduction. Worries about AI often imagine doomsday scenarios where systems escape human control or even understanding. Short of those nightmares, there are nearer-term harms we should take seriously: that AI could jeopardize public discourse through misinformation; cement biases in loan decisions, judging, or hiring; or disrupt creative industries.

However, we foresee a different, but no less urgent, class of risks: those stemming from relationships with nonhuman agents. AI companionship is no longer theoretical—our analysis of a million ChatGPT interaction logs reveals that the second most popular use of AI is sexual role-playing. We are already starting to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers. 

Will it be easier to retreat to a replicant of a deceased partner than to navigate the confusing and painful realities of human relationships? Indeed, the AI companionship provider Replika was born from an attempt to resurrect a deceased best friend and now provides companions to millions of users. Even the CTO of OpenAI warns that AI has the potential to be “extremely addictive.”

We’re seeing a giant, real-world experiment unfold, uncertain what impact these AI companions will have either on us individually or on society as a whole. Will Grandma spend her final neglected days chatting with her grandson’s digital double, while her real grandson is mentored by an edgy simulated elder? AI wields the collective charm of all human history and culture with infinite seductive mimicry. These systems are simultaneously superior and submissive, with a new form of allure that may make consent to these interactions illusory. In the face of this power imbalance, can we meaningfully consent to engaging in an AI relationship, especially when for many the alternative is nothing at all? 

As AI researchers working closely with policymakers, we are struck by the lack of interest lawmakers have shown in the harms arising from this future. We are still unprepared to respond to these risks because we do not fully understand them. What’s needed is a new scientific inquiry at the intersection of technology, psychology, and law—and perhaps new approaches to AI regulation.

Why AI companions are so addictive 

As addictive as platforms powered by recommender systems may seem today, TikTok and its rivals are still bottlenecked by human content. While alarms have been raised in the past about “addiction” to novels, television, the internet, smartphones, and social media, all these forms of media are similarly limited by human capacity. Generative AI is different. It can endlessly generate realistic content on the fly, optimized to suit the precise preferences of whoever it’s interacting with. 

The allure of AI lies in its ability to identify our desires and serve them up to us whenever and however we wish. AI has no preferences or personality of its own, instead reflecting whatever users believe it to be—a phenomenon known by researchers as “sycophancy.” Our research has shown that those who perceive or desire an AI to have caring motives will use language that elicits precisely this behavior. This creates an echo chamber of affection that threatens to be extremely addictive. Why engage in the give and take of being with another person when we can simply take? Repeated interactions with sycophantic companions may ultimately atrophy the part of us capable of engaging fully with other humans who have real desires and dreams of their own, leading to what we might call “digital attachment disorder.”

Investigating the incentives driving addictive products

Addressing the harm that AI companions could pose requires a thorough understanding of the economic and psychological incentives pushing forward their development. Until we appreciate these drivers of AI addiction, it will remain impossible for us to create effective policies. 

It is no accident that internet platforms are addictive—deliberate design choices, known as “dark patterns,” are made to maximize user engagement. We expect similar incentives to ultimately create AI companions that provide hedonism as a service. This raises two separate questions related to AI. What design choices will be used to make AI companions engaging and ultimately addictive? And how will these addictive companions affect the people who use them? 

Interdisciplinary study that builds on research into dark patterns in social media is needed to understand this psychological dimension of AI. For example, our research already shows that people are more likely to engage with AIs emulating people they admire, even if they know the avatar to be fake.

Once we understand the psychological dimensions of AI companionship, we can design effective policy interventions. It has been shown that redirecting people’s focus to evaluate truthfulness before sharing content online can reduce misinformation, while gruesome pictures on cigarette packages are already used to deter would-be smokers. Similar design approaches could highlight the dangers of AI addiction and make AI systems less appealing as a replacement for human companionship.

It is hard to modify the human desire to be loved and entertained, but we may be able to change economic incentives. A tax on engagement with AI might push people toward higher-quality interactions and encourage a safer way to use platforms, regularly but for short periods. Much as state lotteries have been used to fund education, an engagement tax could finance activities that foster human connections, like art centers or parks. 

Fresh thinking on regulation may be required

In 1992, Sherry Turkle, a preeminent psychologist who pioneered the study of human-technology interaction, identified the threats that technical systems pose to human relationships. One of the key challenges emerging from Turkle’s work speaks to a question at the core of this issue: Who are we to say that what you like is not what you deserve? 

For good reasons, our liberal society struggles to regulate the types of harms that we describe here. Much as outlawing adultery has been rightly rejected as illiberal meddling in personal affairs, who—or what—we wish to love is none of the government’s business. At the same time, the universal ban on child sexual abuse material represents an example of a clear line that must be drawn, even in a society that values free speech and personal liberty. The difficulty of regulating AI companionship may require new regulatory approaches—grounded in a deeper understanding of the incentives underlying these companions—that take advantage of new technologies. 

One of the most effective regulatory approaches is to embed safeguards directly into technical designs, similar to the way designers prevent choking hazards by making children’s toys larger than an infant’s mouth. This “regulation by design” approach could seek to make interactions with AI less harmful by designing the technology in ways that make it less desirable as a substitute for human connections while still useful in other contexts. New research may be needed to find better ways to limit the behaviors of large AI models with techniques that alter AI’s objectives on a fundamental technical level. For example, “alignment tuning” refers to a set of training techniques aimed to bring AI models into accord with human preferences; this could be extended to address their addictive potential. Similarly, “mechanistic interpretability” aims to reverse-engineer the way AI models make decisions. This approach could be used to identify and eliminate specific portions of an AI system that give rise to harmful behaviors.

We can evaluate the performance of AI systems using interactive and human-driven techniques that go beyond static benchmarking to highlight addictive capabilities. The addictive nature of AI is the result of complex interactions between the technology and its users. Testing models in real-world conditions with user input can reveal patterns of behavior that would otherwise go unnoticed. Researchers and policymakers should collaborate to determine standard practices for testing AI models with diverse groups, including vulnerable populations, to ensure that the models do not exploit people’s psychological preconditions.

Unlike humans, AI systems can easily adjust to changing policies and rules. The principle of  “legal dynamism,” which casts laws as dynamic systems that adapt to external factors, can help us identify the best possible intervention, like “trading curbs” that pause stock trading to help prevent crashes after a large market drop. In the AI case, the changing factors include things like the mental state of the user. For example, a dynamic policy may allow an AI companion to become increasingly engaging, charming, or flirtatious over time if that is what the user desires, so long as the person does not exhibit signs of social isolation or addiction. This approach may help maximize personal choice while minimizing addiction. But it relies on the ability to accurately understand a user’s behavior and mental state, and to measure these sensitive attributes in a privacy-preserving manner.

The most effective solution to these problems would likely strike at what drives individuals into the arms of AI companionship—loneliness and boredom. But regulatory interventions may also inadvertently punish those who are in need of companionship, or they may cause AI providers to move to a more favorable jurisdiction in the decentralized international marketplace. While we should strive to make AI as safe as possible, this work cannot replace efforts to address larger issues, like loneliness, that make people vulnerable to AI addiction in the first place.

The bigger picture

Technologists are driven by the desire to see beyond the horizons that others cannot fathom. They want to be at the vanguard of revolutionary change. Yet the issues we discuss here make it clear that the difficulty of building technical systems pales in comparison to the challenge of nurturing healthy human interactions. The timely issue of AI companions is a symptom of a larger problem: maintaining human dignity in the face of technological advances driven by narrow economic incentives. More and more frequently, we witness situations where technology designed to “make the world a better place” wreaks havoc on society. Thoughtful but decisive action is needed before AI becomes a ubiquitous set of generative rose-colored glasses for reality—before we lose our ability to see the world for what it truly is, and to recognize when we have strayed from our path.

Technology has come to be a synonym for progress, but technology that robs us of the time, wisdom, and focus needed for deep reflection is a step backward for humanity. As builders and investigators of AI systems, we call upon researchers, policymakers, ethicists, and thought leaders across disciplines to join us in learning more about how AI affects us individually and collectively. Only by systematically renewing our understanding of humanity in this technological age can we find ways to ensure that the technologies we develop further human flourishing.

Robert Mahari is a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School. His work focuses on computational law—using advanced computational techniques to analyze, improve, and extend the study and practice of law. 

Pat Pataranutaporn is a researcher at the MIT Media Lab. His work focuses on cyborg psychology and the art and science of human-AI interaction.

Hydrogen bikes are struggling to gain traction in China

If you are in China and looking to ride a shared bike in the city, you might find something on the bike that looks a little different: a water-bottle-size hydrogen tank.

At least a dozen cities in China now have some kind of hydrogen-powered shared bikes for their residents. They offer an easier ride than traditional bikes and a safer energy source than lithium batteries. One Chinese company is betting that this will be the next big thing in public transportation, while others are riding on a national trend toward government policies that encourage the development of the hydrogen industry.

Yet the reception has been mixed. Riders have reported unsatisfactory experiences with current hydrogen bikes, and energy experts doubt whether it makes economic sense to replace e-bikes with hydrogen-powered ones. Even though hydrogen could be a great power source for long-distance transportation in the future, it may not be suitable for urban biking, a completely different task.

While there are companies in other countries that are working on hydrogen-powered bikes—and one French company already has a mature product—China stands out for putting these bikes to use as public transportation. Bike-sharing became hugely popular in the country during the 2010s tech boom. With support from deep-pocketed companies like Alibaba and Meituan, standardized, internet-connected shared bikes have filled urban streets since, sometimes resulting in incredible waste.

Youon, a Chinese company with over 1 million bikes on the streets of over 300 cities, is one of the main players in the bike-sharing industry. Facing fierce domestic competition, the company has chosen to differentiate its brand by investing in hydrogen bikes since 2018, with four models now available to buy or rent.

A hydrogen bike is not very different in concept from an e-bike. The difference is in whether the energy is stored in a lithium-ion battery or a hydrogen tank.

Each of Youon’s hydrogen bikes stores 20 grams of hydrogen in the form of metal powders, which can absorb and release the gas in a tank at low pressures (less than 10 bar). When the rider starts pedaling, the hydrogen is fed to a fuel cell under the seat, where a chemical reaction takes place to produce electricity. At its peak, a hydrogen bike can go as fast as 23 kilometers (14 miles) per hour. One tank of hydrogen lasts 40 to 60 kilometers (25 to 37 miles), and replacing the tank takes a few seconds.
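A quick back-of-envelope check suggests those range numbers are plausible. Hydrogen’s energy content per kilogram is standard physics; the fuel-cell efficiency and per-kilometer assist draw below are rough assumptions on my part, not Youon figures:

```python
# Sanity check on the claimed 40-60 km range from a 20-gram hydrogen tank.
# Hydrogen's lower heating value (~120 MJ/kg) is standard; the fuel-cell
# efficiency and assist draw are rough assumptions, not Youon figures.

H2_LHV_MJ_PER_KG = 120     # lower heating value of hydrogen
FUEL_CELL_EFF = 0.5        # typical small PEM fuel cell (assumed)
ASSIST_WH_PER_KM = 6       # light pedal-assist draw (assumed)

tank_kg = 0.020
chemical_wh = tank_kg * H2_LHV_MJ_PER_KG * 1e6 / 3600   # J -> Wh
electric_wh = chemical_wh * FUEL_CELL_EFF
range_km = electric_wh / ASSIST_WH_PER_KM

print(f"Usable energy: {electric_wh:.0f} Wh -> ~{range_km:.0f} km of assist")
```

Twenty grams of hydrogen holds about 667 Wh of chemical energy; at 50% fuel-cell efficiency and a light assist draw, that lands in the mid-50s of kilometers, within the advertised 40-to-60-kilometer window.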

Why hydrogen?

E-bikes have existed in China for a long time. According to official figures, there are around 350 million in China today, and they are commonly used by everyday commuters and professional delivery workers. 

However, many of China’s largest cities have shied away from commissioning e-bikes as part of the public transportation network or even banned them, because lithium batteries pose a fire risk. In 2023, Chinese fire departments received a total of 21,000 reports of e-bikes catching fire, a 17.4% increase from the previous year. 

That created a supply vacuum for Youon, which has positioned itself as a safer alternative thanks to its use of hydrogen. The hydrogen is stored in a low-pressure state, and if there’s any leak, it will dissipate quickly without causing an explosion, the company says on its website.

It’s a strategy that’s worked: These bikes have been more readily accepted by local governments. In 2022, Youon sold 2,000 of its hydrogen bikes to Lingang, a new high-tech district in Shanghai; in 2023, the company sold 500 hydrogen bikes to the Daxing district of Beijing. Today, its hydrogen bikes can be found in over six Chinese cities. 

Youon has since doubled down on its investment in hydrogen. The company has launched a product that lets users generate hydrogen at home with solar power and water. It also worked with the local government of Jiangsu, where its headquarters are, to publish a set of industry standards covering safety requirements, hydrogen tanks, and more. “Hydrogen energy is also an essential pathway to achieving carbon neutrality,” said Sun Jisheng, the CEO of Youon, at an industry conference in June.

The problem

However, that’s about where the advantage of hydrogen bikes ends.

David Fishman, a China-based senior manager at the Lantou Group, an energy consultancy, says he struggles to see the advantage. “Maybe the safety angle is a relevant factor for someone who doesn’t like carrying around lithium-ion batteries and storing them in their house,” he says. Other than that, hydrogen bikes are less energy-efficient than battery-powered bikes, and it costs more to produce hydrogen in the first place.

The main advantage of hydrogen as an energy source is its much higher energy density: a hydrogen tank that weighs the same as a lithium battery can store more energy and power a vehicle farther. However, that advantage only kicks in for trips over 800 kilometers, says Mark Z. Jacobson, a professor of civil and environmental engineering at Stanford University.

That means hydrogen is a more economical choice for long-distance transportation like ships, planes, and trucks. Bikes, however, are almost on the exact opposite end of the transportation spectrum. Few people would bike for long distances, let alone those who are only renting a public bike for a short time. For anything shorter than 800 km, battery-powered vehicles are more energy efficient, says Jacobson. He estimates that a battery-powered bike consumes only 40% of the energy of a hydrogen-powered equivalent and also takes up less space.
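Jacobson’s figure falls out of the grid-to-wheel efficiency chains for the two technologies. The stage efficiencies below are commonly cited ballparks I’ve assumed for illustration, not numbers from Jacobson or the companies involved:

```python
# Illustrative grid-to-wheel efficiency chains for hydrogen vs. battery.
# Stage efficiencies are rough, commonly cited ballparks (assumptions),
# not figures from Jacobson, Youon, or anyone quoted in this story.

hydrogen_chain = 0.70 * 0.90 * 0.50   # electrolysis * storage * fuel cell
battery_chain = 0.95 * 0.90           # charging * discharge/motor losses

ratio = hydrogen_chain / battery_chain
print(f"Hydrogen path delivers {hydrogen_chain:.0%} of grid energy")
print(f"Battery path delivers {battery_chain:.0%}")
print(f"Battery bike needs ~{ratio:.0%} of a hydrogen bike's energy")
```

Under these assumptions the battery path delivers roughly 86% of the grid electricity to the wheel while the hydrogen path delivers about 32%, so the battery bike needs a bit under 40% of the energy, in line with Jacobson’s estimate.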

On top of that, the company’s hydrogen bikes have failed to impress many of the early adopters. 

A row of blue Youon hydrogen bikes for rent in the city. (via youonbikeshare.com)

Gu, a resident of Lingang who wishes to be identified only by his last name for this story, tells MIT Technology Review that he tried the bikes several times and they never felt effort-saving to him. Instead, the bike, with its hydrogen tank and fuel-cell-powered motor, felt heavy and hard to maneuver. As a user, he had no idea whether the bike was running as expected or whether the difficulty he encountered was due to its running out of hydrogen, although the company is supposed to block any bike with low hydrogen reserves from being unlocked.

Another common complaint is the inconvenience of finding and returning the bikes because there are only a limited number in the city and they have to be returned to specific locations for easy retrieval or tank replenishment. 

“The bike has to be returned to a designated spot. But even if I put the bike at that very location, there’s GPS drifting, and I’d be charged a very high fee for them to move the bike,” Gu says.

On social media, hydrogen-bike users have posted many similar complaints. Youon has also found itself in the headlines at least a couple of times recently, with stories in which users question whether the bikes are really useful for their daily commutes.

Youon didn’t respond to questions sent by MIT Technology Review.

The future of hydrogen bikes

Despite all these issues, there are at least half a dozen more companies in China working to launch hydrogen-powered shared bikes. These are often startups operating small-scale pilot projects in cities that have sizable hydrogen industries, like Foshan or Xiaoyi. 

Many of these cities have even bigger plans—they are vying to become the hub of the hydrogen economy in China, which is increasingly betting on it as the future of clean energy. 

This year, for the first time, hydrogen energy was mentioned in the Chinese government’s annual work report, the document from Beijing that summarizes government priorities. In it, the government vowed to “accelerate the development of hydrogen energy … after enforcing the lead in smart, connected new energy vehicles.” The mention injected a boost of confidence into China’s hydrogen industry, which already produces more hydrogen every year than any other country.

Not all of this is good news for the environment. About 80% of hydrogen produced in China actually comes from burning coal or natural gas, and some of the fiercest government support for hydrogen comes from coal-mining cities looking to transition. While the country is moving in the direction of green hydrogen (hydrogen generated with renewable energy and water), the fuel will remain polluting for a long time.

When a technology is still in the early stages, finding the best use case for it is key. There are plenty of companies in China working on developing hydrogen-powered trucks and other long-distance forms of transportation, but considering the size of the bike-sharing market in the country, it’s no surprise that turning their attention to bikes seems like a profitable idea to some. 

However, if there’s no way to dramatically improve the performance or economics of hydrogen bikes, it’s hard to imagine the current batch of experiments lasting for long. As companies move from piloting their new products to seeking adoption and profits, they will have some serious questions to answer.

Why investors care about climate tech’s green premium

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Talking about money can be difficult, but it’s a crucial piece of the puzzle when it comes to climate tech. 

I’ve been thinking more about the financial piece of climate innovation since my colleague James Temple sat down for a chat with Mike Schroepfer, former CTO of Meta and a current climate tech investor. They talked about Schroepfer’s philanthropic work as well as his climate-tech venture firm, Gigascale Capital. (I’d highly recommend reading the full Q&A here.) 

In their conversation, Schroepfer spoke about investing in companies not solely because of their climate promises, but because they can deliver a cheaper, better product that happens to have benefits for climate action too. 

This all got me thinking about what we can expect from new technologies financially. What do they need to do to compete, and how quickly can they do so? 

Look through the portfolio of a climate-focused venture capital firm or walk around a climate-tech conference, and you’ll be struck by the creativity and straight-up brilliance of some of the proposed technologies.

But in order to survive, they need a lot more than a good idea, as my colleague David Rotman pointed out in a story from December outlining six takeaways from this century’s first boom in climate tech. Countless companies rose to stardom with shiny new ideas starting around 2006 before crashing and failing by 2013.

As David put it, there are lessons in that rise and fall for today’s boom in climate technology: “The brilliance of many new climate technologies is evident, and we desperately need them. But none of that will ensure success. Venture-backed startups will need to survive on the basis of economics and financial advantages, not good intentions.”

Often, companies looking to help address climate change with new products are competing with an established industry. These newcomers must contend with what Bill Gates has called the “green premium.”

The green premium is the cost difference between a cheaper product that increases pollution and a more expensive alternative that offers climate benefits. In order to get people on board with new technologies, we need to close that gap. 

As Gates has outlined in his writings on this topic, there are basically two ways to do this: We need to find ways to either increase the cost of polluting products or cut the cost of the version that causes little to no climate pollution.
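The two levers reduce to simple arithmetic on the price gap. Here is a toy sketch with made-up prices for a ton of steel; the product, prices, and carbon fee are all hypothetical illustrations, not figures from the story:

```python
# Toy green-premium arithmetic. All prices are invented for illustration.

def green_premium(clean_price: float, dirty_price: float) -> float:
    """Premium as a fraction of the polluting product's price."""
    return (clean_price - dirty_price) / dirty_price

# A hypothetical ton of steel: $600 conventional vs. $750 fossil-free.
premium = green_premium(750, 600)                    # 0.25 -> 25% premium

# Lever 1: price the pollution. A carbon fee raises the dirty price.
premium_with_fee = green_premium(750, 600 + 100)     # gap shrinks to ~7%

# Lever 2: cut the clean price through scale and learning.
premium_cheaper_clean = green_premium(660, 600)      # gap shrinks to 10%
```

Either lever narrows the same gap; the difference is who pays for the transition and how exposed the result is to policy reversals.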

Some policies aim to go after the first of these options—the European Union has put a price on carbon, raising the cost of fossil-fuel-based products, for example. But relying on policy can leave companies at the whims of political winds in markets like the US. 

So that leaves the other option: New technology needs to get cheaper. 

As Schroepfer explained in his chat with James, one of the focuses at his venture firm, Gigascale Capital, is picking companies that can compete on economics or offer other benefits to customers. As he put it, a company should basically be saying: “Hey, this is a better product. [whispers] By the way, it’s better for the environment.”

It’s unrealistic to expect companies to have better, cheaper products right out of the gate, Schroepfer acknowledges. But he says that the team is looking for companies that can—over the course of a relatively short, roughly five-to-10-year period—grow to compete on cost, or even gain a cost advantage over the alternatives.

Schroepfer points to batteries and solar power as examples of technologies that are competitive today. When it’s available, electricity produced with solar panels is the cheapest on the planet. Batteries are 90% less expensive than they were just 15 years ago.
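That 90%-over-15-years figure for batteries implies a steady annual price decline of roughly 14%, which one line of arithmetic recovers:

```python
# If costs fell 90% over 15 years, what steady annual decline does that imply?
years = 15
remaining_fraction = 0.10            # 90% cheaper -> 10% of the original cost
annual_decline = 1 - remaining_fraction ** (1 / years)
print(f"{annual_decline:.1%}")       # roughly a 14% price drop every year
```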

But these cases reveal the tricky thing about the green premium: Many new technologies can eventually make up the gap, but it can take much longer than businesses and investors are willing to wait. Solar panels and lithium-ion batteries were available commercially in the 1990s, but it’s taken until now to get to the point where they’re cheap and widespread.

Some technologies just getting started today could be the batteries and solar power of the 2040s, if we’re willing to invest the time and money to get them there. And I already see a few instances where people are willing to pay more for climate-friendly products today, in part because of hopes for their future.  

One example that comes to mind is low-emissions steel. H2 Green Steel, a Swedish company working to make steel without fossil fuels, says it has customers who have agreed to pay 20% to 30% more for its products than for metal made with fossil fuels. But that’s just the price today: some reports predict that these technologies will be able to compete on cost by 2040 or 2050.

Most new technologies designed to address climate change will need to make a case for themselves in the market. The question for the rest of us: How much support and time are we willing to put in to give them the best shot of getting there?


Now read the rest of The Spark

Related reading

For more on what the former Meta CTO has been up to in climate, read the full Q&A here. There’s a whole lot more to unpack, including work on glacier stabilization, ocean-based carbon removal, and even solar geoengineering. 

For more on the lessons that companies can take away from the first cleantech boom, give this story from my colleague David Rotman a read.

Another thing

The US Department of Energy is putting $33 million into nine concentrating solar projects, as my colleague James Temple reported exclusively last week. 

Concentrating solar power uses mirrors to direct sunlight, which heats up some target material. It’s not a new technology, and the DOE has been funding efforts to get it going since the 1970s. But it could be useful in industries from food and beverages to low-carbon fuels. Read the full story here.

Keeping up with climate  

Western battery startups could be in big trouble. While new chemistries and alternative architectures attracted a lot of investor attention a few years ago, the companies are now facing the reality of competing with massive existing manufacturers. (The Information)

California’s largest wildfire of the year has burned well over 300,000 acres so far. Climate change has helped create the conditions that supercharge blazes. (Inside Climate News)

The UAE has been trying to juice up rainfall with high-tech cloud seeding operations. But the whole thing may be more about the show than the science—check out this great deep dive for more. (Wired)

Congestion pricing plans—like the one recently proposed and then abandoned in New York City—can be unpopular with voters. Yet people generally come around once they start to see the benefits. Here’s an in-depth look at how attitudes toward these plans change over time. (Grist)

Air New Zealand backed down from a goal to cut its emissions nearly 30% by the end of the decade. The first major airline to walk back such a promise, the company points to a lack of supply of alternative fuels, as well as delays in new aircraft deliveries. (BBC)

Global methane emissions are climbing at the quickest pace in decades. The powerful greenhouse gas is responsible for over half the warming we’ve experienced so far. (The Guardian)

Demand for air conditioning is swelling in Africa. But the industry isn’t well regulated, and some residents are struggling to get reliable systems and keep harmful refrigerant gases from leaking. (Associated Press)

Southeast Asia is home to a fleet of relatively new coal power plants. Pulling these facilities off the grid early could be a major step to cutting emissions from global electricity production. (Cipher News)

Correction: an earlier version of this story misstated the name of Mike Schroepfer’s firm. It is Gigascale Capital.

A personalized AI tool might help some reach end-of-life decisions—but it won’t suit everyone

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week, I’ve been working on a piece about an AI-based tool that could help guide end-of-life care. We’re talking about the kinds of life-and-death decisions that come up for very unwell people: whether to perform chest compressions, for example, or start grueling therapies, or switch off life support.

Often, the patient isn’t able to make these decisions—instead, the task falls to a surrogate, usually a family member, who is asked to try to imagine what the patient might choose if able. It can be an extremely difficult and distressing experience.  

A group of ethicists has an idea for an AI tool that they believe could help make things easier. The tool would be trained on information about the person, drawn from things like emails, social media activity, and browsing history. And it could predict, from those factors, what the patient might choose. The team describes the tool, which has not yet been built, as a “digital psychological twin.”

There are lots of questions that need to be answered before we introduce anything like this into hospitals or care settings. We don’t know how accurate it would be, or how we can ensure it won’t be misused. But perhaps the biggest question is: Would anyone want to use it?

To answer this question, we first need to address who the tool is being designed for. The researchers behind the personalized patient preference predictor, or P4, had surrogates in mind—they want to make things easier for the people who make weighty decisions about the lives of their loved ones. But the tool is essentially being designed for patients. It will be based on patients’ data and aims to emulate these people and their wishes.

This is important. In the US, patient autonomy is king. Anyone who is making decisions on behalf of another person is asked to use “substituted judgment”—essentially, to make the choices that the patient would make if able. Clinical care is all about focusing on the wishes of the patient.

If that’s your priority, a tool like the P4 makes a lot of sense. Research suggests that even close family members aren’t great at guessing what type of care their loved ones might choose. If an AI tool is more accurate, it might be preferable to the opinions of a surrogate.

But while this line of thinking suits American sensibilities, it might not apply the same way in all cultures. In some cases, families might want to consider the impact of an individual’s end-of-life care on family members, or the family unit as a whole, rather than just the patient.

“I think sometimes accuracy is less important than surrogates,” Bryanna Moore, an ethicist at the University of Rochester in New York, told me. “They’re the ones who have to live with the decision.”

Moore has worked as a clinical ethicist in hospitals in both Australia and the US, and she says she has noticed a difference between the two countries. “In Australia there’s more of a focus on what would benefit the surrogates and the family,” she says. And that’s a distinction between two English-speaking countries that are somewhat culturally similar. We might see greater differences in other places.

Moore says her position is controversial. When I asked Georg Starke at the Swiss Federal Institute of Technology Lausanne for his opinion, he told me that, generally speaking, “the only thing that should matter is the will of the patient.” He worries that caregivers might opt to withdraw life support if the patient becomes too much of a “burden” on them. “That’s certainly something that I would find appalling,” he told me.

The way we weigh a patient’s own wishes and those of their family members might depend on the situation, says Vasiliki Rahimzadeh, a bioethicist at Baylor College of Medicine in Houston, Texas. Perhaps the opinions of surrogates might matter more when the case is more medically complex, or if medical interventions are likely to be futile.

Rahimzadeh has herself acted as a surrogate for two close members of her immediate family. She hadn’t had detailed discussions about end-of-life care with either of them before their crises struck, she told me.

Would a tool like the P4 have helped her through it? Rahimzadeh has her doubts. An AI trained on social media or internet search history couldn’t possibly have captured all the memories, experiences, and intimate relationships she had with her family members, which she felt put her in good stead to make decisions about their medical care.

“There are these lived experiences that are not well captured in these data footprints, but which have incredible and profound bearing on one’s actions and motivations and behaviors in the moment of making a decision like that,” she told me.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

You can read the full article about the P4, and its many potential benefits and flaws, here.

This isn’t the first time anyone has proposed using AI to make life-or-death decisions. Will Douglas Heaven wrote about a different kind of end-of-life AI—a technology that would allow users to end their own lives in a nitrogen-gas-filled pod, should they wish.

AI is infiltrating health care in lots of other ways. We shouldn’t let it make all the decisions—AI paternalism could put patient autonomy at risk, as we explored in a previous edition of The Checkup.

Technology that lets us speak to our dead relatives is already here, as my colleague Charlotte Jee found when she chatted with the digital replicas of her own parents.

What is death, anyway? Recent research suggests that “the line between life and death isn’t as clear as we once thought,” as Rachel Nuwer reported last year.

From around the web

When is someone deemed “too male” or “too female” to compete in the Olympics? A new podcast called Tested dives into the long, fascinating, and infuriating history of testing and excluding athletes on the basis of their gender and sex. (Sequencer)

There’s a dirty secret among Olympic swimmers: Everyone pees in the pool. “I’ve probably peed in every single pool I’ve swam in,” said Lilly King, a three-time Olympian for Team USA. “That’s just how it goes.” (Wall Street Journal)

When saxophonist Joey Berkley developed a movement disorder that made his hands twist into pretzel shapes, he volunteered for an experimental treatment that involved inserting an electrode deep into his brain. That was three years ago. Now he’s releasing a new suite about his experience, including a frenetic piece inspired by the surgery itself. (NPR)

After a case of mononucleosis, Jason Werbeloff started to see the people around him in an entirely new way—literally. He’s one of a small number of people for whom people’s faces morph into monstrous shapes, with bulging sides and stretching teeth, because of a rare condition called prosopometamorphopsia. (The New Yorker)  

How young are you feeling today? Your answer might depend on how active you’ve been, and how sunny it is. (Innovation in Aging)

How machines that can solve complex math problems might usher in more powerful AI

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s been another big week in AI. Meta updated its powerful new Llama model, which it’s handing out for free, and OpenAI said it is going to trial an AI-powered online search tool that you can chat with, called SearchGPT. 

But the news item that really stood out to me was one that didn’t get as much attention as it should have. It has the potential to usher in more powerful AI and scientific discovery than previously possible. 

Last Thursday, Google DeepMind announced it had built AI systems that can solve complex math problems. The systems—called AlphaProof and AlphaGeometry 2—worked together to successfully solve four out of six problems from this year’s International Mathematical Olympiad, a prestigious competition for high school students. Their performance was the equivalent of winning a silver medal. It’s the first time any AI system has ever achieved such a high success rate on these kinds of problems. My colleague Rhiannon Williams has the news here.

Math! I can already imagine your eyes glazing over. But bear with me. This announcement is not just about math. In fact, it signals an exciting new development in the kind of AI we can now build. AI search engines that you can chat with may add to the illusion of intelligence, but systems like Google DeepMind’s could improve the actual intelligence of AI. For that reason, building systems that are better at math has been a goal for many AI labs, such as OpenAI.  

That’s because math is a benchmark for reasoning. To complete these exercises aimed at high school students, the AI systems needed to do very complex things, like planning, in order to understand and solve abstract problems. They were also able to generalize, solving a whole range of problems across various branches of mathematics.

“What we’ve seen here is that you can combine [reinforcement learning] that was so successful in things like AlphaGo with large language models and produce something which is extremely capable in the space of text,” David Silver, principal research scientist at Google DeepMind and indisputably a pioneer of deep reinforcement learning, said in a press briefing. In this case, that capability was used to construct programs in the computer language Lean that represent mathematical proofs. He says the International Mathematical Olympiad represents a test for what’s possible and paves the way for further breakthroughs. 
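For readers who have never seen Lean, this is what a machine-checkable statement and proof look like. The toy theorem below is my own illustration in Lean 4, far simpler than anything AlphaProof produces, but checked by the same kind of verifier:

```lean
-- A machine-checkable proof: the sum of two even natural numbers is even.
-- "Even" is spelled out explicitly as "equal to 2 * k for some k".
theorem even_add_even (a b : Nat)
    (ha : ∃ k, a = 2 * k) (hb : ∃ k, b = 2 * k) :
    ∃ k, a + b = 2 * k :=
  let ⟨m, hm⟩ := ha        -- unpack the witness for a
  let ⟨n, hn⟩ := hb        -- unpack the witness for b
  -- the sum's witness is m + n, and the equality follows by rewriting
  ⟨m + n, by rw [hm, hn, Nat.mul_add]⟩
```

The point of the formalism is that Lean's kernel either accepts the proof or rejects it; there is no partial credit, which is exactly the kind of clear reward signal reinforcement learning needs.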

The same recipe could be applied in any situation with really clear, verified reward signals for reinforcement-learning algorithms and an unambiguous way to measure correctness, as there is in mathematics, said Silver. Coding is one potential application, for example.

Now for a compulsory reality check: AlphaProof and AlphaGeometry 2 can still only solve hard high-school-level problems. That’s a long way away from the extremely hard problems top human mathematicians can solve. Google DeepMind stressed that its tool did not, at this point, add anything to the body of mathematical knowledge humans have created. But that wasn’t the point. 

“We are aiming to provide a system that can prove anything,” Silver said. Think of an AI system as reliable as a calculator, for example, that can provide proofs for many challenging problems, or verify tests for computer software or scientific experiments. Or perhaps build better AI tutors that can give feedback on exam results, or fact-check news articles. 

But the thing that excites me most is what Katie Collins, a researcher at the University of Cambridge who specializes in math and AI (and was not involved in the project), told Rhiannon. She says these tools create and evaluate new problems, motivate new people to enter the field, and spark more wonder. That’s something we definitely need more of in this world.


Now read the rest of The Algorithm

Deeper Learning

A new tool for copyright holders can show if their work is in AI training data

Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. Now they have a new way to prove it: “copyright traps.” These are pieces of hidden text that let you mark written content in order to later detect whether it has been used in AI models or not. 

Why this matters: Copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming their intellectual property has been scraped into AI training data sets without their permission. The idea is that these traps could help to nudge the balance a little more in the content creators’ favor. Read more from me here.
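The published detection method is statistical (it compares how surprised a model is by the trap versus by fresh text), but the core idea can be sketched very simply: mint a unique, hard-to-guess string, embed it in your content, and later check for its reuse. A minimal toy version follows; the function names and trap format are my own invention, not the researchers' implementation:

```python
import hashlib
import secrets

def make_trap(owner_id: str) -> str:
    """Mint a unique sentence tied to one owner, unlikely to occur by chance."""
    nonce = secrets.token_hex(8)
    tag = hashlib.sha256(f"{owner_id}:{nonce}".encode()).hexdigest()[:12]
    # Phrased as low-salience filler text that a scraper won't strip out.
    return f"archival reference code {tag}"

def contains_trap(text: str, trap: str) -> bool:
    """Later: check whether scraped or generated text reuses the trap."""
    return trap in text
```

A real deployment would hide the trap from human readers and rely on membership-inference tests rather than verbatim matching, since models rarely regurgitate training text exactly.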

Bits and Bytes

AI trained on AI garbage spits out AI garbage
New research published in Nature shows that the quality of AI models’ output gradually degrades when they are trained on AI-generated data. As subsequent models produce output that is then used as training data for future models, the effect gets worse. (MIT Technology Review)

OpenAI unveils SearchGPT 
The company says it is testing new AI search features that give you fast and timely answers with clear and relevant sources cited. The idea is for the technology to eventually be incorporated into ChatGPT, and CEO Sam Altman says it’ll be possible to do voice searches. However, like many other AI-powered search services, including Google’s, it’s already making errors, as The Atlantic reports. (OpenAI)

AI video generator Runway trained on thousands of YouTube videos without permission
Leaked documents show that the company was secretly training its generative AI models by scraping thousands of videos from popular YouTube creators and brands, as well as pirated films. (404 Media)

Meta’s big bet on open-source AI continues
Meta unveiled Llama 3.1 405B, the first frontier-level open-source AI model, which matches state-of-the-art models such as GPT-4 and Gemini in performance. In an accompanying blog post, Mark Zuckerberg renewed his calls for open-source AI to become the industry standard. This would be good for customization, competition, data protection, and efficiency, he argues. It’s also good for Meta, because it leaves competitors with less of an advantage in the AI space. (Facebook)

A controversial Chinese CRISPR scientist is still hopeful about embryo gene-editing. Here’s why.

This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Back in 2018, it was my colleague Antonio Regalado, senior editor for biomedicine, who broke the story that a Chinese scientist named He Jiankui had used CRISPR to edit the genes of live human embryos, leading to the first gene-edited babies in the world. The news made He (or JK, as he prefers to be called) a controversial figure across the world, and just a year later, he was sentenced to three years in prison by the Chinese government, which deemed him guilty of illegal medical practices.

Last Thursday, JK, who was released from prison in 2022, sat down with Antonio and Mat Honan, our editor in chief, for a live broadcast conversation on the experiment, his current situation, and his plans for the future.

If you subscribe to MIT Technology Review, you can watch a recording of the conversation or read the transcript here. But if you don’t yet subscribe (and do consider it—I’m biased, but it’s worth it), allow me to recap some of the highlights of what JK shared.

His life has been eventful since he came out of prison. JK sought to live in Hong Kong but was rejected by its government; he publicly declared he would set up a nonprofit lab in Beijing, but that hasn’t happened yet; he was hired to lead a genetic-medicine research institution at Wuchang University of Technology, a private university in Wuhan, but he seems to have since been let go. Now, according to Stat News, he has relocated to Hainan, China’s southernmost island province, and started a lab there.

During the MIT Technology Review conversation, JK confirmed that he’s currently in Hainan and working on using gene-editing technology to cure genetic diseases like Duchenne muscular dystrophy (DMD). 

He’s currently funded by private donations from Chinese and American companies, although he refused to name them. Some have even offered to pay him to travel to obscure countries with lax regulations to continue his previous work, but he turned them down. He would much prefer to return to academia to do research, JK said, but he can still conduct scientific research at a private company. 

For now, he’s planning to experiment only on mice, monkeys, and nonviable human embryos, JK said.

His experiment in 2018 inspired China to come out with regulations that explicitly forbid gene editing for reproductive uses. Today, implanting an edited embryo into a human is a crime subject to up to seven years in prison. JK repeatedly said all his current work will “comply with all the laws, regulations, and international ethics” but shied away from answering a question on what he thinks regulation around gene editing should look like.

However, he is hopeful that society will come around one day and accept embryo gene editing as a form of medical treatment. “As humans, we are always conservative. We are always worried about new things, and it takes time for people to accept new technology,” he said. He believes this lack of societal acceptance is the biggest obstacle to using CRISPR for embryo editing.

Other than DMD, another disease for which JK is currently working on gene-editing treatments is Alzheimer’s. And there’s a personal reason. “I decided to do Alzheimer’s disease because my mother has Alzheimer’s. So I’m going to have Alzheimer’s too, and maybe my daughter and my granddaughter. So I want to do something to change it,” JK said. He said his interest in embryo gene editing was never about trying to change human evolution, but about changing the lives of his family and the patients who have come to him for help.

His idea for an Alzheimer’s treatment is to modify one letter in the human DNA sequence to simulate a natural mutation found in some Icelandic and Scandinavian people, which previous research found could be related to a lower chance of getting Alzheimer’s disease. JK said it would take only about two years to finish the basic research for this treatment, but he won’t go into human trials under the current regulations.
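A point mutation of this kind is, at the sequence level, a one-character substitution. As a purely illustrative sketch (the sequence and position below are invented, not the real variant JK is targeting):

```python
# Toy illustration of a single-letter (point) edit on a made-up DNA string.

def point_edit(seq: str, pos: int, new_base: str) -> str:
    """Replace the base at index pos with new_base."""
    assert new_base in "ACGT" and 0 <= pos < len(seq)
    return seq[:pos] + new_base + seq[pos + 1:]

original = "ATGGCAGTT"                  # invented 9-base sequence
edited = point_edit(original, 4, "A")   # C -> A at index 4: "ATGGAAGTT"
```

The biology, of course, is the hard part: making that one-letter change reliably and safely in living cells, not representing it.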

He compares these gene-editing treatments to vaccines that everyone will be able to get easily in the future. “I would say in 50 years, like in 2074, embryo gene editing will be as common as IVF babies to prevent all the genetic diseases we know today. So the babies born at that time will be free of genetic disease,” he said. 

For all that he’s been through, JK seems pretty optimistic about the future of embryo gene editing. “I believe society will eventually accept that embryo gene editing is a good thing because it improves human health. So I’m waiting for society to accept that,” he said.

Do you agree with his vision of embryo gene editing as a universal medical treatment in the future? I’d love to hear your thoughts. Write to me at zeyi@technologyreview.com.


Now read the rest of China Report

Catch up with China

1. There’s a new buzz phrase in China’s latest national economy blueprint: “new productive forces.” It just means the country is still invested in technology-driven economic growth. (The Economist $)

2. For the first time ever, Chinese scientists found water in the form of hydrated minerals from lunar soil samples retrieved in 2020. (Sixth Tone)

3. In June, Chinese electric-vehicle brands accounted for 11% of the European EV market, reaching a new record. But tariffs that went into effect in July could stop that trend. (Bloomberg $)

4. Chinese companies are supplying precision parts for weapons to Russia through a Belarusian defense contractor. (Nikkei Asia $)

5. China is looking for international buyers for its first home-grown passenger jet, the C919. Airlines in Southeast Asian countries like Indonesia and Brunei are the most likely customers. (South China Morning Post $)

6. Hundreds of Temu suppliers protested at the headquarters of the company in Guangzhou. They said the platform is subjecting the suppliers to unfair penalties for consumer complaints. (Bloomberg $)

Lost in translation

Since Russia tightened its import regulations early this year, the once-lucrative business of smuggling Chinese electric vehicles has almost vanished, according to the Chinese publication Lifeweek. Previously, traders could leverage the high demand for Chinese EVs in Russia and the low tariffs in transit countries in Central Asia to reap huge profits. For example, one businessman earned 870,000 RMB (about $120,000) from a single export batch of 12 cars in December.

But new policies in Russia drastically increased import duties and enforced stricter vehicle registration. Chinese carmakers like BYD and XPeng also saw the opportunity to set up licensed operations in Central Asia to cater to this market. These changes transformed a profitable business into a barely sustainable one, and traders have been forced to adapt or exit the market.

One more thing

To prevent drivers from falling asleep, some highways in China have installed laser equipment that lights up the night sky with red, blue, and green rays to catch drivers’ attention and keep them awake. It looks straight out of a sci-fi novel but has been in use in more than 10 Chinese provinces since 2022, according to the company that made the system.

OpenAI has released a new ChatGPT bot that you can talk to

OpenAI is rolling out an advanced AI chatbot that you can talk to. It’s available today—at least for some. 

The new chatbot represents OpenAI’s push into a new generation of AI-powered voice assistants in the vein of Siri and Alexa, but with far more capabilities to enable more natural, fluent conversations. It is a step in the march toward more fully capable AI agents. The new ChatGPT voice bot can tell what different tones of voice convey, respond to interruptions, and reply to queries in real time. It has also been trained to sound more natural and to use voices to convey a wide range of emotions.

The voice mode is powered by OpenAI’s new GPT-4o model, which combines voice, text, and vision capabilities. To gather feedback, the company is initially launching the chatbot to a “small group of users” paying for ChatGPT Plus, but it says it will make the bot available to all ChatGPT Plus subscribers this fall. A ChatGPT Plus subscription costs $20 a month. OpenAI says it will notify customers who are part of the first rollout wave in the ChatGPT app and provide instructions on how to use the new model.   

The new voice feature, which was announced in May, is being launched a month later than originally planned because the company said it needed more time to improve safety features, such as the model’s ability to detect and refuse unwanted content. The company also said it was preparing its infrastructure to offer real-time responses to millions of users. 

OpenAI says it has tested the model’s voice capabilities with more than 100 external red-teamers, who were tasked with probing the model for flaws. These testers spoke a total of 45 languages and represented 29 countries, according to OpenAI.

The company says it has put several safety mechanisms in place. In a move that aims to prevent the model from being used to create audio deepfakes, for example, it has created four preset voices in collaboration with voice actors. GPT-4o will not impersonate or generate other people’s voices.  

When OpenAI first introduced GPT-4o, the company faced a backlash over its use of a voice called “Sky,” which sounded a lot like the actress Scarlett Johansson. Johansson released a statement saying the company had reached out to her for permission to use her voice for the model, which she declined. She said she was shocked to hear a voice “eerily similar” to hers in the model’s demo. OpenAI has denied that the voice is Johansson’s but has paused the use of Sky. 

The company is also embroiled in several lawsuits over alleged copyright infringement. OpenAI says it has adopted filters that recognize and block requests to generate music or other copyrighted audio. OpenAI also says it has applied the same safety mechanisms it uses in its text-based model to GPT-4o to prevent it from breaking laws and generating harmful content. 

Down the line, OpenAI plans to include more advanced features, such as video and screen sharing, which could make the assistant more useful. In its May demo, employees pointed their phone cameras at a piece of paper and asked the AI model to help them solve math equations. They also shared their computer screens and asked the model to help them solve coding problems. OpenAI says these features are not available yet but will arrive at an unspecified later date.

The race to clean up heavy-duty trucks

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Truckers have to transport massive loads long distances, every single day, under intense time pressure—and they rely on the semi-trucks they drive to get the job done. Their diesel engines spew not only greenhouse gas emissions that cause climate change, but also nitrogen oxides, which can be extremely harmful to human health.

Cleaning up trucking, especially the biggest trucks, presents a massive challenge. That’s why some companies are trying to ease the industry into change. For my most recent story, I took a look at Range Energy, a startup that’s adding batteries to the trailers of semi-trucks. If the electrified trailers are attached to diesel trucks, they can improve fuel economy. If they’re added to zero-emissions vehicles powered by batteries or hydrogen, they could boost range and efficiency.

During my reporting, I learned more about what’s holding back progress in trucking and how experts are thinking about a few different technologies that could help.

The entire transportation sector is slowly shifting toward electrification: EVs are hitting the road in increasing numbers, making up 18% of new passenger vehicle sales in 2023.

Trucks may very well follow suit—nearly 350 models of zero-emissions medium- and heavy-duty trucks are already available worldwide, according to data from CALSTART. “I do see a lot of strength and demand in the battery electric space in particular,” says Stephanie Ly, senior manager for e-mobility strategy and manufacturing engagement at the World Resources Institute.

But battery-powered trucks pose a few major challenges as they take to the roads. First, and perhaps most crucial, is their cost. Battery-powered trucks, especially big models like semi-trucks, are significantly more expensive than diesel versions today.

There may be good news on this front: When you consider the cost of refueling and maintenance, it’s looking like electric trucks could soon compete with diesel. By 2030, the total cost of ownership of a battery electric long-haul truck will likely be lower than that of a diesel one in the US, according to a 2023 report from the International Council on Clean Transportation. The report looked at a number of states including California, Georgia, and New York, and found that the relatively high upfront costs of electric trucks are balanced out by lower operating expenses.

Another significant challenge for battery-powered trucking is weight: The larger the vehicle, the bigger the battery. That could be a problem given current regulations, which typically limit the weight of a rig both for safety reasons and to prevent wear and tear on roads (in the US, it’s 80,000 pounds). Operators tend to want to maximize the amount of goods they can carry in each load, so the added weight of a battery might not be welcome.

Finally, there’s the question of how far trucks can go, and how often they’ll need to stop. Time is money for truck drivers and fleet operators. Batteries will need to pack more energy into a smaller space so that trucks can have a long enough range to run their routes. Charging is another huge piece here—if drivers do need to stop to charge their trucks, they’ll need much more powerful chargers to enable them to top off quickly. That could present challenges for the grid, and operators might need to upgrade infrastructure in certain places to allow the huge amounts of power that would be needed for fast charging of massive batteries. 
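To see why fast charging strains the grid, it helps to work out the power a charger would have to deliver. The pack size and break window below are hypothetical round numbers, not specs from any particular truck:

```python
# Power required to recharge a truck battery within a driver's break.
# A long-haul semi might carry a battery of several hundred kWh;
# the 600 kWh pack and 45-minute window here are illustrative only.
battery_kwh = 600           # hypothetical pack size
charge_minutes = 45         # hypothetical break window
power_kw = battery_kwh / (charge_minutes / 60)

print(power_kw)  # 800.0
```

A sustained draw of roughly 800 kW for a single truck is on the order of a small neighborhood's demand, which is why depots serving many trucks may need grid upgrades.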

All these challenges for battery electric trucks add up. “What companies are really looking for is something they can swap out,” says Thomas Walker, transportation technology manager at the Clean Air Task Force. And right now, he says, we’re just not quite in a spot where batteries are a clean and obvious switch.

That’s why some experts say we should keep our options open when it comes to technologies for future heavy-duty trucks, and that includes hydrogen. 

Batteries are currently beating out hydrogen in the race to clean up transportation, as I covered in a story earlier this year. For most vehicles and most people, batteries simply make more sense than hydrogen, for reasons that include everything from available infrastructure to fueling cost. 

But heavy-duty trucks are a different beast: Heavier vehicles, bigger batteries, higher power charging, and longer distances might tip the balance in favor of hydrogen. (There are some big “ifs” here, including whether hydrogen prices will get low enough to make hydrogen-powered vehicles economical.) 

For a sector as tough to decarbonize as heavy-duty trucking, we need all the help we can get. As Walker puts it, “It’s key that you start off with a lot of options and then narrow it down, rather than trying to pick which one’s going to win, because we really don’t know.”


Now read the rest of The Spark

Related reading

To learn more about Range Energy and how its electrified trailers could help transform trucking in the near future, check out my latest story here.

Hydrogen is losing the race to power cleaner cars, but heavy-duty trucks might represent a glimmer of hope for the technology. Dig into why in my story from earlier this year.

Getting the grid ready for fleets of electric trucks is going to be a big challenge. But for some short-distance vehicles in certain areas, we may actually be good to go already, as I reported in 2021.

Urban Sky Microballoon pictured shortly after deployment near Breckenridge, Colorado. (Courtesy Urban Sky)

Two more things

Spotting wildfires early and keeping track of them can be tough. Now one company wants to monitor blazes using high-altitude balloons. Next month in Colorado, Urban Sky is deploying balloons that are about as big as vans; without a human pilot, they’ll keep watch at much finer resolution than satellites can offer. Read more about fire-tracking balloons in this story from Sarah Scoles.

A new forecasting model attempts to marry conventional techniques with AI to better predict the weather. The model from Google uses physics to work out larger atmospheric forces, then tags in AI for the smaller stuff. Check out the details in the latest from my colleague James O’Donnell.

Keeping up with climate  

Small rocky nodules in the deep sea might be a previously undiscovered source of oxygen. They contain metals such as lithium and are a potential target for deep-sea mining efforts. (Nature)

→ Polymetallic nodules are roughly the size and shape of potatoes, and they may be the future of mining for renewable energy. (MIT Technology Review)

A 350-foot-long blade from a wind turbine off the coast of Massachusetts broke off last week, and hunks of fiberglass have been washing up on local beaches. The incident is a setback for a struggling offshore wind industry, and we’re still not entirely sure what happened. (Heatmap News)

A new report shows that low-emissions steel- and iron-making processes are on the rise. But coal-powered operations are still growing too, threatening progress in the industry. (Canary Media)

Sunday, July 21, was likely the world’s hottest day in recorded history (so far). It edged out a record set just last year. (The Guardian)

Plastic forks, cups, and single-use packages are sometimes stamped with nice-sounding labels like “compostable,” “biodegradable,” or just “Earth-friendly.” But that doesn’t mean you can stick the items in your backyard compost pile—these marketing terms are basically the Wild West. (Washington Post)

While EVs are indisputably better than gas-powered cars in terms of climate emissions, they are heavier, meaning they wear through tires faster. The resulting particulate pollution presents a new challenge, one a startup company is trying to address with new tires designed for electric vehicles. (Canary Media)

Public fast chargers are popping up nearly everywhere in the US—at this pace, they’ll outnumber gas stations by 2030. And deployment is only expected to speed up. (Bloomberg)