AI-generated content doesn’t seem to have swayed recent European elections 

AI-generated falsehoods and deepfakes seem to have had no effect on election results in the UK, France, and the European Parliament this year, according to new research. 

Since the beginning of the generative-AI boom, there has been widespread fear that AI tools could boost bad actors’ ability to spread fake content with the potential to interfere with elections or even sway the results. Such worries were particularly heightened this year, when billions of people were expected to vote in over 70 countries. 

Those fears seem to have been unwarranted, says Sam Stockwell, a researcher at the Alan Turing Institute who conducted the study. He focused on three elections over a four-month period from May to August 2024, collecting data on public reports and news articles on AI misuse. Stockwell identified 16 cases of AI-enabled falsehoods or deepfakes that went viral during the UK general election and only 11 cases in the EU and French elections combined, none of which appeared to definitively sway the results. The fake AI content was created by both domestic actors and groups linked to hostile countries such as Russia. 

These findings are in line with recent warnings from experts that the focus on election interference is distracting us from deeper and longer-lasting threats to democracy.   

AI-generated content seems to have been ineffective as a disinformation tool in most European elections this year so far. This, Stockwell says, is because most of the people who were exposed to the disinformation already believed its underlying message (for example, that levels of immigration to their country are too high). Stockwell’s analysis showed that people who were actively engaging with these deepfake messages by resharing and amplifying them had some affiliation or previously expressed views that aligned with the content. So the material was more likely to strengthen preexisting views than to influence undecided voters. 

Tried-and-tested election interference tactics, such as flooding comment sections with bots and exploiting influencers to spread falsehoods, remained far more effective. Bad actors mostly used generative AI to rewrite news articles with their own spin or to create more online content for disinformation purposes. 

“AI is not really providing much of an advantage for now, as existing, simpler methods of creating false or misleading information continue to be prevalent,” says Felix Simon, a researcher at the Reuters Institute for the Study of Journalism, who was not involved in the research. 

However, it’s hard to draw firm conclusions about AI’s impact upon elections at this stage, says Samuel Woolley, a disinformation expert at the University of Pittsburgh. That’s in part because we don’t have enough data.

“There are less obvious, less trackable, downstream impacts related to uses of these tools that alter civic engagement,” he adds.

Stockwell agrees: Early evidence from these elections suggests that AI-generated content could be more effective for harassing politicians and sowing confusion than for changing people’s opinions on a large scale. 

Politicians in the UK, such as former prime minister Rishi Sunak, were targeted by AI deepfakes that, for example, showed them promoting scams or admitting to financial corruption. Female candidates were also targeted with nonconsensual sexual deepfake content, intended to disparage and intimidate them. 

“There is, of course, a risk that in the long run, the more that political candidates are on the receiving end of online harassment, death threats, deepfake pornographic smears—that can have a real chilling effect on their willingness to, say, participate in future elections, but also obviously harm their well-being,” says Stockwell. 

Perhaps more worrying, Stockwell says, his research indicates that people are increasingly unable to discern the difference between authentic and AI-generated content in the election context. Politicians are also taking advantage of that. For example, political candidates in the European Parliament elections in France have shared AI-generated content amplifying anti-immigration narratives without disclosing that it had been made with AI. 

“This covert engagement, combined with a lack of transparency, presents in my view a potentially greater risk to the integrity of political processes than the use of AI by the general population or so-called ‘bad actors,’” says Simon. 

Google is funding an AI-powered satellite constellation that will spot wildfires faster

Early next year, Google and its partners plan to launch the first in a series of satellites that together would provide close-up, frequently refreshed images of wildfires around the world, offering data that could help firefighters battle blazes more rapidly, effectively, and safely.

The online search giant’s nonprofit and research arms have collaborated with the Moore Foundation, the Environmental Defense Fund, the satellite company Muon Space, and others to deploy 52 satellites equipped with custom-developed sensors over the coming years. 

The FireSat satellites will be able to spot fires as small as 5 by 5 meters (16 by 16 feet) on any speck of the globe. Once the full constellation is in place, the system should be capable of updating those images about every 20 minutes, the group says.

Those capabilities together would mark a significant upgrade over what’s available from the satellites that currently provide data to fire agencies. Generally, they can provide either high-resolution images that aren’t updated rapidly enough to track fires closely or frequently refreshed images that are relatively low-resolution.

The Earth Fire Alliance collaboration will also leverage Google’s AI wildfire tools, which have been trained to detect early indications of wildfires and track their progression, to draw additional insights from the data.

The images and analysis will be provided free to fire agencies around the world, helping to improve understanding of where fires are, where they’re moving, and how hot they’re burning. The information could help agencies stamp out small fires before they turn into raging infernos, place limited firefighting resources where they’ll do the most good, and evacuate people along the safest paths.

“In the satellite image of the Earth, a lot of things can be mistaken for a fire: a glint, a hot roof, smoke from another fire,” says Chris Van Arsdale, climate and energy research lead at Google Research and chairman of the Earth Fire Alliance. “Detecting fires becomes a game of looking for needles in a world of haystacks. Solving this will enable first responders to act quickly and precisely when a fire is detected.”

Some details of FireSat were unveiled earlier this year. But the organizations involved will announce additional information about their plans today, including the news that Google.org, the company’s charitable arm, has provided $13 million to the program and that the inaugural launch is scheduled to occur next year. 

Reducing the fog of war

The news comes as large fires rage across millions of acres in the western US, putting people and property at risk. The blazes include the Line Fire in Southern California, the Shoe Fly Fire in central Oregon, and the Davis Fire south of Reno, Nevada.

Wildfires have become more frequent, extreme, and dangerous in recent decades. That, in part, is a consequence of climate change: Rising temperatures suck the moisture from trees, shrubs, and grasses. But fires increasingly contribute to global warming as well. A recent study found that the fires that scorched millions of acres across Canada last year pumped out 3 billion tons of carbon dioxide, four times the annual pollution produced by the airline industry.


Humans have also increased fire risk by suppressing natural fires for decades, which has allowed fuel to build up in forests and grasslands, and by constructing communities on the edge of wilderness areas without appropriate rules, materials, and safeguards.

Observers say that FireSat could play an important role in combating fires, both by enabling fire agencies to extinguish small ones before they grow into large ones and by informing effective strategies for battling them once they’ve crossed that point.

“What these satellites will do is reduce the fog of war,” says Michael Wara, director of the climate and energy policy program at Stanford University’s Woods Institute for the Environment, who is focused on fire policy issues. “Like when a situation is really dynamic and very dangerous for firefighters and they’re trying to make decisions very quickly about whether to move in to defend structures or try to evacuate people.” 

(Wara serves on the advisory board of the Moore Foundation’s Wildfire Resilience Initiative.)

Some areas, like California, already have greater visibility into the current state of fires or early signs of outbreaks, thanks to technology like Department of Defense satellites, remote camera networks, and planes, helicopters, and drones. But FireSat will be especially helpful for “countries that have less-well-resourced wildland fighting capability,” Wara adds.

Better images, more data, and AI will not be able to fully counter the increased fire dangers. Wara and other fire experts argue that regions need to use prescribed burns and other efforts to more aggressively reduce the buildup of fuel, rethink where and how we build communities in fire-prone areas, and do more to fund and support the work of firefighters on the ground. 

Sounding an earlier alarm for fires will only help reduce dangers when regions have, or develop, the added firefighting resources needed to combat the most dangerous ones quickly and effectively. Communities will also need to put in place better policies to determine what types of fires should be left to burn, and under what conditions.

‘A game changer’

Kate Dargan Marquis, a senior wildfire advisor to the Moore Foundation who previously served as state fire marshal for California, says she can “personally attest” to the difference that such tools will make to firefighters in the field.

“It is a game changer, especially as wildfires are becoming more extreme, more frequent, and more dangerous for everyone,” she says. “Information like this will make a lifesaving difference for firefighters and communities around the globe.”


Google Research developed the sensors for the satellite and tested them as well as the company’s AI fire detection models by conducting flights over controlled burns in California. Google intends to work with Earth Fire Alliance “to ensure AI can help make this data as useful as possible, and also that wildfire information is shared as widely as possible,” the company said.

Google’s Van Arsdale says that providing visual images of every incident around the world from start to finish will be enormously valuable to scientists studying wildfires and climate change. 

“We can combine this data with Google’s existing models of the Earth to help advance our understanding of fire behavior and fire dynamics across all of Earth’s ecosystems,” he says. “All this together really has the potential to help mitigate the environmental and social impact of fire while also improving people’s health and safety.”

Specifically, it could improve assessments of fire risk, as well as our understanding of the most effective means of preventing or slowing the spread of fires. For instance, it could help communities determine where it would be most cost-effective to remove trees and underbrush. 

Figuring out the best ways to conduct such interventions is another key goal of the program, given their high cost and the limited funds available for managing wildlands, says Genny Biggs, the program director for the Moore Foundation’s Wildfire Resilience Initiative.

The launch

The idea for FireSat grew out of a series of meetings that began with a 2019 workshop hosted by the Moore Foundation, which provided the first philanthropic funding for the program. 

The first satellite, scheduled to be launched aboard a SpaceX rocket early next year, will be fully functional aside from some data transmission features. The goals of the “protoflight” mission include testing the onboard systems and the data they send back. The Earth Fire Alliance will work with a handful of early-adopter agencies to prepare for the next phases. 

The group intends to launch three fully operational satellites in 2026, with additional deployments in the years that follow. Muon Space will build and operate the satellites. 

Agencies around the world should be able to receive hourly wildfire updates once about half of the constellation is operational, says Brian Collins, executive director of the Earth Fire Alliance. It hopes to launch all 52 satellites by around the end of this decade.

Each satellite is designed to last about five years, so the organization will eventually need to deploy 10 more each year to maintain the constellation.
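That replacement cadence follows from simple arithmetic: with a 52-satellite constellation and a roughly five-year design life per satellite, steady-state attrition works out to about 10 satellites a year. A back-of-envelope check (the constellation size and lifetime are from the article; the steady-turnover assumption is ours):

```python
# Back-of-envelope check: replacements per year needed to sustain
# the FireSat constellation, assuming satellites age out at a
# steady rate over their design lifetime.
CONSTELLATION_SIZE = 52   # planned number of satellites
LIFETIME_YEARS = 5        # design life of each satellite

replacements_per_year = CONSTELLATION_SIZE / LIFETIME_YEARS
print(f"~{replacements_per_year:.1f} replacement launches per year")  # ~10.4
```

This matches the "10 more each year" figure, with the fraction absorbed by rounding and by the fact that real launches are batched.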

The Earth Fire Alliance has secured about two-thirds of the funding it needs for the first phase of the program, which includes the first four launches. The organization will need to raise additional money from government agencies, international organizations, philanthropies, and other groups to deploy, maintain, and operate the full constellation. It estimates the total cost will exceed $400 million, which Collins notes “is 1/1000th of the economic losses due to extreme wildfires annually in the US alone.”

Asked if commercial uses of the data could also support the program, including potentially military ones, Collins said in an email: “Adjacent applications range from land use management and agriculture to risk management and industrial impact and mitigation.” 

“At the same time, we know that as large agencies and government agencies adopt FireSat data to support a broad public safety mandate, they may develop all-hazard, emergenc[y] management, and security related uses of data,” he added. “As long as opportunities are in balance with our charter to advance a global approach to wildfire and climate resilience, we welcome new ideas and applications of our data.”

‘Living with fire’

A wide variety of startups have emerged in recent years promising to use technology to reduce the frequency and severity of wildfires—for example, by installing cameras and sensors in forests and grasslands, developing robots to carry out controlled burns, deploying autonomous helicopters that can drop suppressant, and harnessing AI to predict wildfire behavior and inform forest and fire management strategies.

So far, even with all these new tools, it’s still been difficult for communities to keep pace with the rising dangers.

Dargan Marquis—who founded her own wildfire software company, Intterra—says she is confident the incidence of disastrous fires can be meaningfully reduced with programs like FireSat, along with other improved technologies and policies. But she says it’s likely to take decades to catch up with the growing risks, as the world continues warming up.

“We’re going to struggle in places like California, these Mediterranean climates around the world, while our technology and our capabilities and our inventions, etc., catch up with that level of the problem,” she says. 

“We can turn that corner,” she adds. “If we work together on a comprehensive strategy with the right data and a convincing plan over the next 50 years, I do think that by the end of the century, we absolutely can be living with fire.”

Neuroscientists and architects are using this enormous laboratory to make buildings better

Have you ever found yourself lost in a building that felt impossible to navigate? Thoughtful building design should center on the people who will be using those buildings. But that’s no mean feat.

It’s not just about navigation, either. Just think of an office that left you feeling sleepy or unproductive, or perhaps a health center that had a less-than-reviving atmosphere. A design that works for some people might not work for others. People have different minds and bodies, and varying wants and needs. So how can we factor them all in?

To answer that question, neuroscientists and architects are joining forces at an enormous laboratory in East London—one that allows researchers to build simulated worlds. In this lab, scientists can control light, temperature, and sound. They can create the illusion of a foggy night, or the tinkle of morning birdsong.

And they can study how volunteers respond to these environments, whether they be simulations of grocery stores, hospitals, pedestrian crossings, or schools. That’s how I found myself wandering around a fake art gallery, wearing a modified baseball cap with a sensor that tracked my movements.

I first visited the Person-Environment-Activity Research Lab, referred to as PEARL, back in July. I’d been chatting to Hugo Spiers, a neuroscientist based at University College London, about the use of video games to study how people navigate. Spiers had told me he was working on another project: exploring how people navigate a lifelike environment, and how they respond during evacuations (which, depending on the situation, could be a matter of life or death).

For their research, Spiers and his colleagues set up what they call a “mocked-up art gallery” within PEARL. The center in its entirety is pretty huge as labs go, measuring around 100 meters in length and 40 meters across, with 10-meter-high ceilings in places. There’s no other research center in the world like this, Spiers told me.

The gallery setup looked a little like a maze from above, with a pathway created out of hanging black sheets. The exhibits themselves were videos of dramatic artworks that had been created by UCL students.

When I visited in July, Spiers and his colleagues were running a small pilot study to trial their setup. As a volunteer participant, I was handed a numbered black cap with a square board on top, marked with a large QR code. This code would be tracked by cameras above and around the gallery. The cap also carried a sensor, transmitting radio signals to devices around the maze that could pinpoint my location within a range of 15 centimeters.

At first, all the volunteers (most of whom seemed to be students) were asked to explore the gallery as we would any other. I meandered around, watching the videos, and eavesdropping on the other volunteers, who were chatting about their research and upcoming dissertation deadlines. It all felt pretty pleasant and calm.

That feeling dissipated in the second part of the experiment, when we were each given a list of numbers, told that each one referred to a numbered screen, and informed that we had to visit all the screens in the order in which they appeared on our lists. “Good luck, everybody,” Spiers said.

Suddenly everyone seemed to be rushing around, slipping past each other and trying to move quickly while avoiding collisions. “It’s all got a bit frantic, hasn’t it?” I heard one volunteer comment as I accidentally bumped into another. I hadn’t managed to complete the task by the time Spiers told us the experiment was over. As I walked to the exit, I noticed that some people were visibly out of breath.

The full study took place on Wednesday, September 11. This time, there were around 100 volunteers (I wasn’t one of them). And while almost everyone was wearing a modified baseball cap, some had more complicated gear, including EEG caps to measure brainwaves, or caps that use near-infrared spectroscopy to measure blood flow in the brain. Some people were even wearing eye-tracking devices that monitored which direction they were looking.

“We will do something quite remarkable today,” Spiers told the volunteers, staff, and observers as the experiment started. Taking such detailed measurements from so many individuals in such a setting represented “a world first,” he said.

I have to say that being an observer was much more fun than being a participant. Gone was the stress of remembering instructions and speeding around a maze. Here in my seat, I could watch as the data collected from the cameras and sensors was projected onto a screen. The volunteers, represented as squiggly colored lines, made their way through the gallery in a way that reminded me of the game Snake.

The study itself was similar to the pilot study, although this time the volunteers were given additional tasks. At one point, they were given an envelope with the name of a town or city in it, and asked to find others in the group who had been given the same one. It was fascinating to see the groups form. Some had the names of destination cities like Bangkok, while others had been assigned fairly nondescript English towns like Slough, made famous as the setting of the British television series The Office. At another point, the volunteers were asked to evacuate the gallery from the nearest exit.

The data collected in this study represents something of a treasure trove for researchers like Spiers and his colleagues. The team is hoping to learn more about how people navigate a space, and whether they move differently if they are alone or in a group. How do friends and strangers interact, and does this depend on whether they have certain types of material to bond over? How do people respond to evacuations—will they take the nearest exit as directed, or will they run on autopilot to the exit they used to enter the space in the first place?

All this information is valuable to neuroscientists like Spiers, but it’s also useful to architects like his colleague Fiona Zisch, who is based at UCL’s Bartlett School of Architecture. “We do really care about how people feel about the places we design for them,” Zisch tells me. The findings can guide not only the construction of new buildings, but also efforts to modify and redesign existing ones.

PEARL was built in 2021 and has already been used to help engineers, scientists, and architects explore how neurodivergent people use grocery stores, and the ideal lighting to use for pedestrian crossings, for example. Zisch herself is passionate about creating equitable spaces—particularly for health and education—that everyone can make use of in the best possible way.

In the past, models used in architecture have been developed with typically built, able-bodied men in mind. “But not everyone is a 6’2″ male with a briefcase,” Zisch tells me. Age, gender, height, and a range of physical and psychological factors can all influence how a person will use a building. “We want to improve not just the space, but the experience of the space,” says Zisch. Good architecture isn’t just about creating stunning features; it’s about subtle adaptations that might not even be noticeable to most people, she says.

The art gallery study is just the first step for researchers like Zisch and Spiers, who plan to explore other aspects of neuroscience and architecture in more simulated environments at PEARL. The team won’t have results for a while yet. But it’s a fascinating start. Watch this space.



Read more from MIT Technology Review’s archive

Brain-monitoring technology has come a long way, and tech designed to read our minds and probe our memories is already being used. Futurist and legal ethicist Nita Farahany explained why we need laws to protect our cognitive liberty in a previous edition of The Checkup.

Listening in on the brain can reveal surprising insights into how this mysterious organ works. One team of neuroscientists found that our brains seem to oscillate between states of order and chaos.

Last year, MIT Technology Review published our design issue of the magazine. If you’re curious, this piece on the history and future of the word “design,” by Nicholas de Monchaux, head of architecture at MIT, might be a good place to start.

Design covers much more than buildings, of course. Designers are creating new ways for users of prosthetic devices to feel more comfortable in their own skin—some of which have third thumbs, spikes, or “superhero skins.”

Achim Menges is an architect creating what he calls “self-shaping” structures with wood, which can twist and curve with changes in humidity. His approach is a low-energy way to make complex curved architectures, Menges told John Wiegand.

From around the web

Scientists are meant to destroy research samples of the poliovirus, as part of efforts to eradicate the disease it causes. But lab leaks of the virus may be more common than we’d like to think. (Science)

Neurofeedback allows people to watch their own brain activity in real time, and learn to control it. It could be a useful way to combat the impacts of stress. (Trends in Neurosciences)

Microbes, some of which cause disease in people, can travel over a thousand miles on wind, researchers have shown. Some appear to be able to survive their journey. (The Guardian)

Is the X chromosome involved in Alzheimer’s disease? A study of over a million people suggests so. (JAMA Neurology)

A growing number of men are paying thousands of dollars a year for testosterone therapies that are meant to improve their physical performance. But some are left with enlarged breasts, shrunken testicles, blood clots, and infertility. (The Wall Street Journal)

Meet the radio-obsessed civilian shaping Ukraine’s drone defense

Serhii “Flash” Beskrestnov hates going to the front line. The risks terrify him. “I’m really not happy to do it at all,” he says. But to perform his particular self-appointed role in the Russia-Ukraine war, he believes it’s critical to exchange the relative safety of his suburban home north of the capital for places where the prospect of death is much more immediate. “From Kyiv,” he says, “nobody sees the real situation.”

So about once a month, he drives hundreds of kilometers east in a homemade mobile intelligence center: a black VW van in which stacks of radio hardware connect to an array of antennas on the roof that stand like porcupine quills when in use. Two small devices on the dash monitor for nearby drones. Over several days at a time, Flash studies the skies for Russian radio transmissions and tries to learn about the problems facing troops in the fields and in the trenches.

He is, at least in an unofficial capacity, a spy. But unlike other spies, Flash does not keep his work secret. In fact, he shares the results of these missions with more than 127,000 followers—including many soldiers and government officials—on several public social media channels. Earlier this year, for instance, he described how he had recorded five different Russian reconnaissance drones in a single night—one of which was flying directly above his van.

“Brothers from the Armed Forces of Ukraine, I am trying to inspire you,” he posted on his Facebook page in February, encouraging Ukrainian soldiers to learn how to recognize enemy drone signals as he does. “You will spread your wings, you will understand over time how to understand distance and, at some point, you will save the lives of dozens of your colleagues.”

Drones have come to define the brutal conflict that has now dragged on for more than two and a half years. And most rely on radio communications—a technology that Flash has obsessed over since childhood. So while Flash is now a civilian, the former officer has still taken it upon himself to inform his country’s defense in all matters related to radio.

As well as the frontline information he shares on his public channels, he runs a “support service” for almost 2,000 military communications specialists on Signal and writes guides for building anti-drone equipment on a tight budget. “He’s a celebrity,” one special forces officer recently shouted to me over the thump of music in a Kyiv techno club. He’s “like a ray of sun,” an aviation specialist in Ukraine’s army told me. Flash tells me that he gets 500 messages every day asking for help.

Despite this reputation among rank-and-file service members—and maybe because of it—Flash has also become a source of some controversy among the upper echelons of Ukraine’s military, he tells me. The Armed Forces of Ukraine declined multiple requests for comment, but Flash and his colleagues claim that some high-ranking officials perceive him as a security threat, worrying that he shares too much information and doesn’t do enough to secure sensitive intel. As a result, some refuse to support or engage with him. Others, Flash says, pretend he doesn’t exist. Either way, he believes they are simply insecure about the value of their own contributions—“because everybody knows that Serhii Flash is not sitting in Kyiv like a colonel in the Ministry of Defense,” he tells me in the abrasive fashion that I’ve come to learn is typical of his character. 

But above all else, hours of conversations with numerous people involved in Ukraine’s defense, including frontline signalmen and volunteers, have made clear that even if Flash is a complicated figure, he’s undoubtedly an influential one. His work has become vitally important to those fighting on the ground, and he recently received formal recognition from the military for his contributions to the fight, with two medals of commendation—one from the commander of Ukraine’s ground forces, the other from the Ministry of Defense. 

With a handheld directional antenna and a spectrum analyzer, Flash can scan for hostile signals.

Despite the advent of a small number of semi-autonomous machines that rely less on radio communications, the drones that saturate the skies above the battlefield will continue to depend largely on this technology for the foreseeable future. And in this race for survival—as each side constantly tries to best the other, only to start all over again when the other inevitably catches up—Ukrainian soldiers need to develop creative solutions, and fast. As Ukraine’s wartime radio guru, Flash may just be one of their best hopes for doing that. 

“I know nothing about his background,” says “Igrok,” who works with drones in Ukraine’s 110th Mechanized Brigade and whom we are identifying by his call sign, as is standard military practice. “But I do know that most engineers and all pilots know nothing about radios and antennas. His job is definitely one of the most powerful forces keeping Ukraine’s aerial defense in good condition.”

And given the mounting evidence that both militaries and militant groups in other parts of the world are now adopting drone tactics developed in Ukraine, it’s not only his country’s fate that Flash may help to determine—but also the ways that armies wage war for years to come.

A prescient hobby

Before I can even start asking questions during our meeting in May, Flash is rummaging around in the back of the Flash-mobile, pulling out bits of gear for his own version of show-and-tell: a drone monitor with a fin-shaped antenna; a walkie-talkie labeled with a sticker from Russia’s state security service, the FSB; an approximately 1.5-meter-long foldable antenna that he says probably came from a US-made Abrams tank.

Flash has parked on a small wooded road beside the Kyiv Sea, an enormous water reservoir north of the capital. He’s wearing a khaki sweat-wicking polo shirt, combat trousers, and combat boots, with a Glock 19 pistol strapped to his hip. (“I am a threat to the enemy,” he tells me, explaining that he feels he has to watch his back.) As we talk, he moves from one side to the other, as if the electromagnetic waves that he’s studied since childhood have somehow begun to control the motion of his body.

Now 49, Flash grew up in a suburb of Kyiv in the ’80s. His father, who was a colonel in the Soviet army, recalls bringing home broken radio equipment for his preteen son to tinker with. Flash showed talent from the start. He attended an after-school radio club, and his father fixed an antenna to the roof of their apartment for him. Later, Flash began communicating with people in countries beyond the Iron Curtain. “It was like an open door to the big world for me,” he says.

Flash recalls with amusement a time when a letter from the KGB arrived at his family home, giving his father the fright of his life. His father didn’t know that his son had sent a message on a prohibited radio frequency, and someone had noticed. Following the letter, when Flash reported to the service’s office in downtown Kyiv, his teenage appearance confounded them. Boy, what are you doing here? Flash recalls an embarrassed official saying. 

Ukraine had been a hub of innovation as part of the Soviet Union. But by the time Flash graduated from military communications college in 1997, Ukraine had been independent for six years, and corruption and a lack of investment had stripped away the armed forces’ former grandeur. Flash spent just a year working in a military radio factory before he joined a private communications company developing Ukraine’s first mobile network, where he worked with technologies far more advanced than what he had used in the military. The project was called “Flash.”

A decade and a half later, Flash had risen through the ranks of the industry to become head of department at the progenitor to the telecommunications company Vodafone Ukraine. But boredom prompted him to leave and become an entrepreneur. His many projects included a successful e-commerce site for construction services and a popular video game called Isotopium: Chernobyl, which he and a friend based on the “really neat concept,” according to a PC Gamer review, of allowing players to control real robots (fitted with radios, of course) around a physical arena. Released in 2019, it also received positive reviews from Reuters and BBC News.

But within just a few years, an unexpected attack would hurl his country into chaos—and upend Flash’s life. 


By early 2022, rumors were growing of a potential attack from Russia. Though he was still working on Isotopium, Flash began to organize a radio network across the northern suburbs of Kyiv in preparation. Near his home, he set up a repeater about 65 meters above ground level that could receive and then rebroadcast transmissions from all the radios in its network across a 200-square-kilometer area. Another radio amateur programmed and distributed handheld radios.

When Russian forces did invade, on February 24, they took both fiber-optic and mobile networks offline, as Flash had anticipated. The radio network became the only means of instant communications for civilians and, critically, volunteers mobilizing to fight in the region, who used it to share information about Russian troop movements. Flash fed this intel to several professional Ukrainian army units, including a unit of special reconnaissance forces. He later received an award from the head of the district’s military administration for his part in Kyiv’s defense. The head of the district council referred to Flash as “one of the most worthy people” in the region.

Yet it was another of Flash’s projects that would earn him renown across Ukraine’s military.

Despite being more than 100 years old, radio technology is still critical in almost all aspects of modern warfare, from secure communications to satellite-guided missiles. But the decline of Ukraine’s military, coupled with the movement of many of the country’s young techies into lucrative careers in the growing software industry, created a vacuum of expertise. Flash leaped in to fill it.

Within roughly a month of Russia’s incursion, Flash had created a private group called “Military Signalmen” on the encrypted messaging platform Signal, and invited civilian radio experts from his personal network to join alongside military communications specialists. “I am here to help you with technical issues,” he remembers writing to the group. “Ask me anything and I will try to find the answer for you.”

The kinds of questions that Flash and his civilian colleagues answered in the first months were often basic. Group members wanted to know how to update the firmware on their devices, reset their radios’ passwords, or set up the internal communications networks for large vehicles. Many of the people drafted as communications specialists in the Ukrainian military had little relevant experience; Flash claims that even professional soldiers lacked appropriate training and has referred to large parts of Ukraine’s military communications courses as “either nonsense or junk.” (The Korolov Zhytomyr Military Institute, where many communications specialists train, declined a request for comment.)

After Russia’s invasion of Ukraine, Flash transformed his VW van into a mobile radio intelligence center.
EMRE ÇAYLAK

He demonstrates handheld spectrum analyzers with custom Ukrainian firmware.

News of the Signal group spread by word of mouth, and it soon became a kind of 24-hour support service that communications specialists in every sector of Ukraine’s frontline force subscribed to. “Any military engineer can ask anything and receive the answer within a couple of minutes,” Flash says. “It’s a nice way to teach people very quickly.” 

As the war progressed into its second year, Military Signalmen became, to an extent, self-sustaining. Its members had learned enough to answer one another’s questions themselves. And this is where several members tell me that Flash has contributed the most value. “The most important thing is that he brought together all these communications specialists in one team,” says Oleksandr “Moto,” a technician at an EU mission in Kyiv and an expert in Motorola equipment, who has advised members of the group. (He asked to not be identified by his surname, due to security concerns.) “It became very efficient.”

Today, Flash and his partners continue to answer occasional questions that require more advanced knowledge. But over the past year, as the group demanded less of his time, Flash has begun to focus on a rapidly proliferating weapon for which his experience had prepared him almost perfectly: the drone.  

A race without end

The Joker-10 drone, one of Russia’s latest additions to its arsenal, is equipped with a hibernation mechanism, Flash warned his Facebook followers in March. This feature allows the operator to fly it to a hidden location, leave it there undetected, and then awaken it when it’s time to attack. “It is impossible to detect the drone using radio-electronic means,” Flash wrote. “If you twist and turn it in your hands—it will explode.” 

This is just one example of the frequent developments in drone engineering that Ukrainian and Russian troops are adapting to every day. 

Larger strike drones similar to the US-made Reaper have been familiar in other recent conflicts, but sophisticated air defenses have rendered them less dominant in this war. Ukraine and Russia are developing and deploying vast numbers of other types of drones—including the now-notorious “FPV,” or first-person view, drone that pilots operate by wearing goggles that stream video of its perspective. These drones, which can carry payloads large enough to destroy tanks, are cheap (costing as little as $400), easy to produce, and difficult to shoot down. They use direct radio communications to transmit video feeds, receive commands, and navigate.

A Ukrainian soldier prepares an FPV drone equipped with dummy ammunition for a simulated flight operation.
MARCO CORDONE/SOPA IMAGES/SIPA USA VIA AP IMAGES

But their reliance on radio technology is a major vulnerability, because enemies can drown out the radio links the drones depend on—making them far less effective, if not inoperable. This form of electronic warfare—which most often involves transmitting a more powerful signal on the same frequency the operator is using—is called “jamming.”

Jamming, though, is an imperfect solution. Like drones, jammers themselves emit radio signals that can enable enemies to locate them. There are also effective countermeasures to bypass jammers. For example, a drone operator can use a tactic called “frequency hopping,” rapidly jumping between different frequencies to avoid a jammer’s signal. But even this method can be disrupted by algorithms that calculate the hopping patterns.
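The hopping tactic can be sketched in a few lines. This is a toy model, not a real radio protocol—the channel count, seed, and jammer behavior are all illustrative assumptions—but it shows why a jammer parked on one frequency catches only a small fraction of hops when both ends derive the same pseudo-random sequence from a shared seed.

```python
import random

# Toy model of frequency hopping (illustrative, not a real radio protocol).
# Operator and drone derive the same pseudo-random hop sequence from a
# shared seed; a jammer parked on one channel blocks only the hops that
# happen to land there.

CHANNELS = 50   # hypothetical number of usable channels
HOPS = 1000     # number of hops simulated

def hop_sequence(seed, n, channels):
    rng = random.Random(seed)   # deterministic PRNG, reproducible at both ends
    return [rng.randrange(channels) for _ in range(n)]

operator = hop_sequence(seed=42, n=HOPS, channels=CHANNELS)
drone = hop_sequence(seed=42, n=HOPS, channels=CHANNELS)
assert operator == drone   # the shared seed keeps both ends in sync

jammer_channel = 7          # a jammer stuck on a single frequency
jammed = sum(1 for ch in operator if ch == jammer_channel)
print(f"hops jammed: {jammed}/{HOPS}")   # roughly 1 in CHANNELS
```

A real adversary, as the article notes, would try to predict the hop pattern rather than sit on one channel—which is exactly why algorithms that recover the sequence defeat this simple scheme.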

For this reason, jamming is a frequent focus of Flash’s work. In a January post on his Telegram channel, for instance, which people viewed 48,000 times, Flash explained how jammers used by some Ukrainian tanks were actually disrupting their own communications. “The cause of the problems is not direct interference with the reception range of the radio station, but very powerful signals from several [electronic warfare] antennae,” he wrote, suggesting that other tank crews experiencing the same problem might try spreading their antennas across the body of the tank. 

It is all part of an existential race in which Russia and Ukraine are constantly hunting for new methods of drone operation, drone jamming, and counter-jamming—and there’s no end in sight. In March, for example, Flash says, a frontline contact sent him photos of a Russian drone with what looks like a 10-kilometer-long spool of fiber-optic cable attached to its rear—one particularly novel method to bypass Ukrainian jammers. “It’s really crazy,” Flash says. “It looks really strange, but Russia showed us that this was possible.”

Flash’s trips to the front line make it easier for him to track developments like this. Not only does he monitor Russian drone activity from his souped-up VW, but he can study the problems that soldiers face in situ and nurture relationships with people who may later send him useful intel—or even enemy equipment they’ve seized. “The main problem is that our generals are located in Kyiv,” Flash says. “They send some messages to the military but do not understand how these military people are fighting on the front.”

Besides the advice he provides to Ukrainian troops, Flash also publishes online his own manuals for building and operating equipment that can offer protection from drones. Building their own tools can be soldiers’ best option, since Western military technology is typically expensive and domestic production is insufficient. Flash recommends buying most of the parts on AliExpress, the Chinese e-commerce platform, to reduce costs.

While all his activity suggests a close or at least cooperative relationship between Flash and Ukraine’s military, he sometimes finds himself on the outside looking in. In a post on Telegram in May, as well as during one of our meetings, Flash shared one of his greatest disappointments of the war: the military’s refusal of his proposal to create a database of all the radio frequencies used by Ukrainian forces. But when I mentioned this to an employee of a major electronic warfare company, who requested anonymity to speak about the sensitive subject, he suggested that the only reason Flash still complains about this is that the military hasn’t told him it already exists. (Given its sensitivity, MIT Technology Review was unable to independently confirm the existence of this database.) 

Flash believes that generals in Kyiv “do not understand how these military people are fighting on the front.” So even though he dislikes the risks involved, he travels to the front line about once a month.
EMRE ÇAYLAK

This anecdote is emblematic of Flash’s frustration with a military complex that may not always want his involvement. Ukraine’s armed forces, he has told me on several occasions, make no attempt to collaborate with him in an official manner. He claims not to receive any financial support, either. “I’m trying to help,” he says. “But nobody wants to help me.”

Both Flash and Yurii Pylypenko, another radio enthusiast who helps Flash manage his Telegram channel, say military officials have accused Flash of sharing too much information about Ukraine’s operations. Flash claims to verify every member of his closed Signal groups, which he says only discuss “technical issues” in any case. But he also admits the system is not perfect and that Russians could have gained access in the past. Several of the soldiers I interviewed for this story also claimed to have entered the groups without Flash’s verification process. 

It’s ultimately difficult to determine if some senior staff in the military hold Flash at arm’s length because of his regular, often strident criticism—or whether Flash’s criticism is the result of being held at arm’s length. But it seems unlikely either side’s grievances will subside soon; Pylypenko claims that senior officers have even tried to blackmail him over his involvement in Flash’s work. “They blame my help,” he wrote to me over Telegram, “because they think Serhii is a Russian agent reposting Russian propaganda.” 

Is the world prepared?

Flash’s greatest concern now is the prospect of Russia overwhelming Ukrainian forces with cheap FPV drones. When FPVs first appeared, both sides used them almost exclusively against expensive equipment. But as production has ramped up, they are now targeting individual soldiers, too. Because of Russia’s production superiority, this poses a serious danger—both physical and psychological—to Ukrainian soldiers. “Our army will be sitting under the ground because everybody who goes above ground will be killed,” Flash says. Some reports suggest that the prevalence of FPVs is already making it difficult for soldiers to expose themselves at all on the battlefield.

To combat this threat, Flash has a grand yet straightforward idea. He wants Ukraine to build a border “wall” of jamming systems that cover a broad range of the radio spectrum all along the front line. Russia has already done this itself with expensive vehicle-based systems, but these present easy targets for Ukrainian drones, which have destroyed several of them. Flash’s idea is to use a similar strategy, albeit with smaller, cheaper systems that are easier to replace. He claims, however, that military officials have shown no interest.

Although Flash is unwilling to divulge more details about this strategy (and who exactly he pitched it to), he believes that such a wall could provide a more sustainable means of protecting Ukrainian troops. Nevertheless, it’s difficult to say how long such a defense might last. Both sides are now developing artificial-intelligence programs that allow drones to lock on to a target while still outside enemy jamming range, making them effectively jammer-proof once they fly into it. Flash admits he is concerned—and he doesn’t appear to have a solution.


He’s not alone. The world is entirely unprepared for this new type of warfare, says Yaroslav Kalinin, a former Ukrainian intelligence officer and the CEO of Infozahyst, a manufacturer of electronic-warfare equipment. Kalinin recalls speaking at an electronic-warfare conference in Washington, DC, last December where representatives from some Western defense companies couldn’t recognize the basic radio signals emitted by different types of drones. “Governments don’t count [drones] as a threat,” he says. “I need to run through the streets like a prophet—the end is near!”

Nevertheless, Ukraine has become, in essence, a laboratory for a new era of drone warfare—and, many argue, a new era of warfare entirely. Ukraine’s and Russia’s soldiers are its technicians. And Flash, who sometimes sleeps curled up in the back of his van while on the road, is one of its most passionate researchers. “Military developers from all over the world come to us for experience and advice,” he says. Only time will tell whether their contributions will be enough to see Ukraine through to the other side of this war. 

Charlie Metcalfe is a British journalist. He writes for magazines and newspapers, including Wired, the Guardian, and MIT Technology Review.

Meet 2024’s climate innovators under 35

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

One way to know where a field is going? Take a look at what the sharpest new innovators are working on.

Good news for all of us: MIT Technology Review’s list of 35 Innovators Under 35 just dropped. And a decent number of the people who made the list are working in fields that touch climate and energy in one way or another.

Looking through, I noticed a few trends that might provide some hints about the future of climate tech. Let’s dig into this year’s list and consider what these innovators’ work might mean for efforts to combat climate change.

Power to the people

Perhaps unsurprisingly, quite a few innovators on this list are working on energy—and many of them have an interest in making energy consistently available where and when it’s needed. Wind and solar are getting cheap, but we need solutions for when the sun isn’t shining and the wind isn’t blowing.

Tim Latimer cofounded Fervo Energy, a geothermal company hoping to provide consistently available, carbon-free energy using Earth’s heat. You may be familiar with his work, since Fervo was on our list of 15 Climate Tech Companies to Watch in 2023.

Another energy-focused innovator on the list is Andrew Ponec of Antora Energy, a company working to build thermal energy storage systems. Basically, the company’s technology heats up blocks when cheap renewables are available, and then stores that heat and delivers it to industrial processes that need constant power. (You, the readers, named thermal energy storage the readers’ choice on this year’s 10 Breakthrough Technologies list.)

Rock stars

While new ways of generating electricity and storing energy can help cut our emissions in the future, other people are focused on how to clean up the greenhouse gases already in the atmosphere. At this point, removing carbon dioxide from the atmosphere is basically required for any scenario where we limit warming to 1.5 °C over preindustrial levels. A few of the new class of innovators are turning to rocks for help soaking up and locking away atmospheric carbon. 

Noah McQueen cofounded Heirloom Carbon Technologies, a carbon removal company. The technology works by tweaking the way minerals soak up carbon dioxide from the air (before releasing it under controlled conditions, so they can do it all again). The company has plans for facilities that could remove hundreds of thousands of tons of carbon dioxide each year. 

Another major area of research focuses on how we might store captured carbon dioxide. Claire Nelson is the cofounder of Cella Mineral Storage, a company working on storage methods to better trap carbon dioxide underground once it’s been mopped up.  

Material world

Finally, some of the most interesting work on our new list of innovators is in materials. Some people are finding new ones that could help us address our toughest problems, and others are trying to reinvent old ones to clean up their climate impacts.

Julia Carpenter found a way to make a foam-like material from metal. Its high surface area makes it a stellar heat sink, meaning it can help cool things down efficiently. It could be a huge help in data centers, where 40% of energy demand goes to cooling.

And I spoke with Cody Finke, cofounder and CEO of Brimstone, a company working on cleaner ways of making cement. Cement alone is responsible for nearly 7% of global greenhouse-gas emissions, and about half of those come from chemical reactions necessary to make it. Finke and Brimstone are working to wipe out the need for these reactions by using different starting materials to make this crucial infrastructural glue.

Addressing climate change is a sprawling challenge, but the researchers and founders on this list are tackling a few of the biggest issues I think about every day. 

Ensuring that we can power our grid, and all the industrial processes that we rely on for the stuff in our daily lives, is one of the most substantial remaining challenges. Removing carbon dioxide from the atmosphere in an efficient, cheap process could help limit future warming and buy us time to clean up the toughest sectors. And finding new materials, and new methods of producing old ones, could be a major key to unlocking new climate solutions. 

To read more about the folks I mentioned here and other innovators working in climate change and beyond, check out the full list.


Now read the rest of The Spark

Related reading

Fervo Energy (cofounded by 2024 innovator Tim Latimer) showed last year that its wells can be used like a giant underground battery.

A growing number of companies—including Antora Energy, whose CEO Andrew Ponec is a 2024 innovator—are working to bring thermal energy storage systems to heavy industry.

Cement is one of our toughest challenges, as Brimstone CEO and 2024 innovator Cody Finke will tell you. I wrote about Brimstone and other efforts to reinvent cement earlier this year.

A plant with yellow flowers

Another thing

We need a whole lot of metals to address climate change, from the copper in transmission lines to the nickel in lithium-ion batteries that power electric vehicles. Some researchers think plants might be able to help. 

Roughly 750 species of plants are so-called hyperaccumulators, meaning they naturally soak up and tolerate relatively high concentrations of metal. A new program is funding research into how we might use this trait to help source nickel, and potentially other metals, in the future. Read the full story here.

Keeping up with climate  

A hurricane that recently formed in the Gulf of Mexico is headed for Louisiana, ending an eerily quiet few weeks of the season. (Scientific American)

→ After forecasters predicted a particularly active season, the lull in hurricane activity was surprising. (New Scientist)

Rising sea levels are one of the symptoms of a changing climate, but nailing down exactly what “sea level” means is more complicated than you might think. We’ve gotten better at measuring sea level over the past few centuries, though. (New Yorker)

The US Department of Energy’s Loan Programs Office has nearly $400 billion in lending authority. This year’s election could drastically shift the office’s focus, making it a bellwether of how the results could affect energy priorities. (Bloomberg)

What if fusion power ends up working, but it’s too expensive to play a significant role on the grid? Some modelers think the technology will remain expensive and could come too late to make a dent in emissions. (Heatmap)

Electric-vehicle sales are up overall, but some major automakers are backing away from goals on zero-emissions vehicles. Even though sales are increasing, uptake is slower than many thought it would be, contributing to the nervous energy in the industry. (Canary Media)

It’s a tough time to be in the business of next-generation batteries. The woes of three startups reveal that difficult times are here, likely for a while. (The Information)

Google’s new tool lets large language models fact-check their responses

As long as chatbots have been around, they have made things up. Such “hallucinations” are an inherent part of how AI models work. However, they’re a big problem for companies betting big on AI, like Google, because they make the responses it generates unreliable. 

Google is releasing a tool today to address the issue. Called DataGemma, it uses two methods to help large language models fact-check their responses against reliable data and cite their sources more transparently to users. 

The first of the two methods is called Retrieval-Interleaved Generation (RIG), which acts as a sort of fact-checker. If a user prompts the model with a question—like “Has the use of renewable energy sources increased in the world?”—the model will come up with a “first draft” answer. Then RIG identifies what portions of the draft answer could be checked against Google’s Data Commons, a massive repository of data and statistics from reliable sources like the United Nations or the Centers for Disease Control and Prevention. Next, it runs those checks and replaces any incorrect original guesses with correct facts. It also cites its sources to the user.
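The shape of that RIG pipeline can be sketched in a few lines. Everything below is a hedged illustration—the in-memory dictionary stands in for Data Commons, the figures are made up, and `draft_answer` and `rig` are hypothetical names, not Google’s API—but it shows the core move: draft first, then swap the model’s guessed statistic for a retrieved one and attach a source.

```python
# Toy sketch of retrieval-interleaved generation (RIG). The dictionary
# below stands in for Data Commons; all names and figures are invented
# for illustration, not Google's actual API or data.

DATA_COMMONS = {
    "share of global electricity from renewables, 2022": "30%",
}

def draft_answer(question):
    # Stand-in for the LLM's first draft: the text contains the model's
    # own guessed statistic, paired with the query that could verify it.
    text = "Renewable sources supplied about [25%] of global electricity in 2022."
    checkable = {"[25%]": "share of global electricity from renewables, 2022"}
    return text, checkable

def rig(question):
    text, checkable = draft_answer(question)
    for guess, query in checkable.items():
        fact = DATA_COMMONS.get(query)
        if fact is not None:
            # Replace the model's guess with the retrieved figure and cite it.
            text = text.replace(guess, f"{fact} (source: Data Commons)")
    return text

print(rig("Has the use of renewable energy sources increased in the world?"))
```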

The second method, which is commonly used in other large language models, is called Retrieval-Augmented Generation (RAG). Consider a prompt like “What progress has Pakistan made against global health goals?” In response, the model examines which data in the Data Commons could help it answer the question, such as information about access to safe drinking water, hepatitis B immunizations, and life expectancies. With those figures in hand, the model then builds its answer on top of the data and cites its sources.
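RAG inverts that order: retrieve first, answer second. Again a toy sketch with invented names and figures rather than Google’s implementation—a small statistics table stands in for Data Commons, and a template stands in for the model.

```python
# Toy sketch of retrieval-augmented generation (RAG): fetch relevant
# figures first, then build the answer on top of them. The table and
# all figures are invented for illustration, not real Data Commons data.

STATS = {
    "Pakistan": {
        "access to safe drinking water": "90% of population (2020)",
        "hepatitis B immunization": "75% of one-year-olds (2021)",
    },
}

def retrieve(question):
    # Pull every statistic whose subject appears in the question.
    return {topic: figure
            for country, table in STATS.items() if country in question
            for topic, figure in table.items()}

def answer_with_rag(question):
    facts = retrieve(question)
    cited = "; ".join(f"{topic}: {figure}" for topic, figure in facts.items())
    # A real system would hand these figures to the model as grounding
    # context; here a fixed template stands in for the model's answer.
    return f"Based on Data Commons figures ({cited}), progress is mixed."

print(answer_with_rag("What progress has Pakistan made against global health goals?"))
```

The design difference matters: RIG checks a draft after the fact, while RAG grounds the answer before generation—which is why RAG depends so heavily on retrieving the right figures up front.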

“Our goal here was to use Data Commons to enhance the reasoning of LLMs by grounding them in real-world statistical data that you could source back to where you got it from,” says Prem Ramaswami, head of Data Commons at Google. Doing so, he says, will “create more trustable, reliable AI.”

It is only available to researchers for now, but Ramaswami says access could widen further after more testing. If it works as hoped, it could be a real boon for Google’s plan to embed AI deeper into its search engine.  

However, it comes with a host of caveats. First, the usefulness of the methods is limited by whether the relevant data is in the Data Commons, which is more of a data repository than an encyclopedia. It can tell you the GDP of Iran, but it’s unable to confirm the date of the First Battle of Fallujah or when Taylor Swift released her most recent single. In fact, Google’s researchers found that with about 75% of the test questions, the RIG method was unable to obtain any usable data from the Data Commons. And even if helpful data is indeed housed in the Data Commons, the model doesn’t always formulate the right questions to find it. 

Second, there is the question of accuracy. When testing the RAG method, researchers found that the model gave incorrect answers 6% to 20% of the time. Meanwhile, the RIG method pulled the correct stat from Data Commons only about 58% of the time (though that’s a big improvement over the 5% to 17% accuracy rate of Google’s large language models when they’re not pinging Data Commons). 

Ramaswami says DataGemma’s accuracy will improve as it gets trained on more and more data. The initial version has been trained on only about 700 questions, and fine-tuning the model required his team to manually check each individual fact it generated. To further improve the model, the team plans to increase that data set from hundreds of questions to millions.

Chatbots can persuade people to stop believing in conspiracy theories

The internet has made it easier than ever before to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and even leading to unnecessary deaths.

Now, researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%—even among participants who claimed that their beliefs were important to their identity. The research is published today in the journal Science.

The findings could represent an important step forward in how we engage with and educate people who espouse such baseless theories, says Yunhao (Jerry) Zhang, a postdoctoral fellow affiliated with the Psychology of Technology Institute who studies AI’s impacts on society.

“They show that with the help of large language models, we can—I wouldn’t say solve it, but we can at least mitigate this problem,” he says. “It points out a way to make society better.” 

Few interventions have been proven to change conspiracy theorists’ minds, says Thomas Costello, a research affiliate at MIT Sloan and the lead author of the study. Part of what makes it so hard is that different people tend to latch on to different parts of a theory. This means that while presenting certain bits of factual evidence may work on one believer, there’s no guarantee that it’ll prove effective on another.

That’s where AI models come in, he says. “They have access to a ton of information across diverse topics, and they’ve been trained on the internet. Because of that, they have the ability to tailor factual counterarguments to particular conspiracy theories that people believe.”

The team tested its method by asking 2,190 crowdsourced workers to participate in text conversations with GPT-4 Turbo, OpenAI’s latest large language model.

Participants were asked to share details about a conspiracy theory they found credible, why they found it compelling, and any evidence they felt supported it. These answers were used to tailor responses from the chatbot, which the researchers had prompted to be as persuasive as possible.

Participants were also asked to indicate how confident they were that their conspiracy theory was true, on a scale from 0 (definitely false) to 100 (definitely true), and then rate how important the theory was to their understanding of the world. Afterwards, they entered into three rounds of conversation with the AI bot. The researchers chose three to make sure they could collect enough substantive dialogue.

After each conversation, participants were asked the same rating questions. The researchers followed up with all the participants 10 days after the experiment, and then two months later, to assess whether their views had changed following the conversation with the AI bot. The participants reported a 20% reduction of belief in their chosen conspiracy theory on average, suggesting that talking to the bot had fundamentally changed some people’s minds.

“Even in a lab setting, 20% is a large effect on changing people’s beliefs,” says Zhang. “It might be weaker in the real world, but even 10% or 5% would still be very substantial.”

The authors sought to safeguard against AI models’ tendency to make up information—known as hallucinating—by employing a professional fact-checker to evaluate the accuracy of 128 claims the AI had made. Of these, 99.2% were found to be true, while 0.8% were deemed misleading. None were found to be completely false. 

One explanation for this high degree of accuracy is that a lot has been written about conspiracy theories on the internet, making them very well represented in the model’s training data, says David G. Rand, a professor at MIT Sloan who also worked on the project. The adaptable nature of GPT-4 Turbo means it could easily be connected to different platforms for users to interact with in the future, he adds.

“You could imagine just going to conspiracy forums and inviting people to do their own research by debating the chatbot,” he says. “Similarly, social media could be hooked up to LLMs to post corrective responses to people sharing conspiracy theories, or we could buy Google search ads against conspiracy-related search terms like ‘Deep State.’”

The research upended the authors’ preconceived notions about how receptive people were to solid evidence debunking not only conspiracy theories, but also other beliefs that are not rooted in good-quality information, says Gordon Pennycook, an associate professor at Cornell University who also worked on the project. 

“People were remarkably responsive to evidence. And that’s really important,” he says. “Evidence does matter.”

Google says it’s made a quantum computing breakthrough that reduces errors

Google researchers claim to have made a breakthrough in quantum error correction, one that could pave the way for quantum computers that finally live up to the technology’s promise.

Proponents of quantum computers say the machines will be able to benefit scientific discovery in fields ranging from particle physics to drug and materials design—if only their builders can make the hardware behave as intended. 

One major challenge has been that quantum computers can store or manipulate information incorrectly, preventing them from executing algorithms that are long enough to be useful. The new research from Google Quantum AI and its academic collaborators demonstrates that adding components can actually reduce these errors. Previously, because of engineering limitations, adding more components to a quantum computer tended to introduce more errors. Ultimately, the work bolsters the idea that error correction is a viable strategy toward building a useful quantum computer. Some critics had doubted that it was an effective approach, according to physicist Kenneth Brown of Duke University, who was not involved in the research. 

“This error correction stuff really works, and I think it’s only going to get better,” wrote Michael Newman, a member of the Google team, on X. (Google, which posted the research to the preprint server arXiv in August, declined to comment on the record for this story.) 

Quantum computers encode data using objects that behave according to the principles of quantum mechanics. In particular, they store information not only as 1s and 0s, as a conventional computer does, but also in “superpositions” of 1 and 0. Storing information in the form of these superpositions and manipulating their value using quantum interactions such as entanglement (a way for particles to be connected even over long distances) allows for entirely new types of algorithms.
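The state-vector picture behind this can be sketched in a few lines. The following is a standard textbook illustration, not Google-specific code: a qubit is a normalized pair of amplitudes, and the probability of reading 0 or 1 is the squared magnitude of each amplitude.

```python
import math

# A single qubit's state is a pair of (complex) amplitudes (a, b)
# with |a|^2 + |b|^2 = 1. Measurement yields 0 with probability
# |a|^2 and 1 with probability |b|^2.

# An equal superposition of 0 and 1:
a = b = 1 / math.sqrt(2)

prob_zero = abs(a) ** 2
prob_one = abs(b) ** 2

print(round(prob_zero, 3), round(prob_one, 3))  # 0.5 0.5
assert math.isclose(prob_zero + prob_one, 1.0)  # normalization holds
```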

In practice, however, developers of quantum computers have found that errors quickly creep in because the components are so sensitive. A quantum computer represents 1, 0, or a superposition by putting one of its components in a particular physical state, and it is too easy to accidentally alter those states. A component then ends up in a physical state that does not correspond to the information it’s supposed to represent. These errors accumulate over time, which means that the quantum computer cannot deliver accurate answers for long algorithms without error correction.
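The accumulation effect can be made concrete with a toy model: if each operation independently succeeds with probability 1 − p, the chance that a long run stays error-free decays exponentially with its length. The error rate below is illustrative, not a measured figure for any real machine.

```python
# Toy model of error accumulation: with a per-operation error
# probability p, the probability that n operations all succeed
# is (1 - p) ** n, which decays exponentially in n.
def success_probability(p: float, n: int) -> float:
    return (1 - p) ** n

p = 0.001  # illustrative 0.1% error rate per operation
for n in (100, 1_000, 10_000):
    print(n, success_probability(p, n))
```

Even at a 0.1% per-operation error rate, a ten-thousand-operation algorithm almost never finishes cleanly, which is why long algorithms need error correction.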

To perform error correction, researchers must encode information in the quantum computer in a distinctive way. Quantum computers are made of individual components known as physical qubits, which can be made from a variety of different materials, such as single atoms or ions. In Google’s case, each physical qubit consists of a tiny superconducting circuit that must be kept at an extremely cold temperature. 

Early experiments on quantum computers stored each unit of information in a single physical qubit. Now researchers, including Google’s team, have begun experimenting with encoding each unit of information in multiple physical qubits. They refer to this constellation of physical qubits as a single “logical” qubit, which can represent 1, 0, or a superposition of the two. By design, the single “logical” qubit can hold onto a unit of information more robustly than a single “physical” qubit can. Google’s team corrects the errors in the logical qubit using an algorithm known as a surface code, which makes use of the logical qubit’s constituent physical qubits.

In the new work, Google made a single logical qubit out of varying numbers of physical qubits. Crucially, the researchers demonstrated that a logical qubit composed of 105 physical qubits suppressed errors more effectively than a logical qubit composed of 72 qubits. That suggests that putting increasing numbers of physical qubits together into a logical qubit “can really suppress the errors,” says Brown. This charts a potential path to building a quantum computer with a low enough error rate to perform a useful algorithm, although the researchers have yet to demonstrate they can put multiple logical qubits together and scale up to a larger machine. 
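The qualitative claim, that growing the logical qubit suppresses errors, can be illustrated with a toy exponential-suppression model of the kind often used to describe surface codes. The numbers below are assumptions chosen for illustration, not Google's measured values.

```python
# Toy model of surface-code error suppression: below the error
# threshold, each step up in code distance (which costs more
# physical qubits) divides the logical error rate by a constant
# suppression factor. All numbers here are illustrative assumptions.
def logical_error_rate(base_rate: float, suppression: float, steps: int) -> float:
    return base_rate / (suppression ** steps)

base_rate = 1e-2   # assumed logical error rate at the smallest code
suppression = 2.0  # assumed suppression factor per distance step

for steps in range(4):
    print(steps, logical_error_rate(base_rate, suppression, steps))
```

The point of the model is the trend: as long as physical qubits are good enough to sit below the threshold, spending more of them on one logical qubit buys exponentially better reliability.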

The researchers also report that the lifetime of the logical qubit exceeds the lifetime of its best constituent physical qubit by a factor of 2.4. Put another way, Google’s work essentially demonstrates that it can store data in a reliable quantum “memory.”

However, this demonstration is just a first step toward an error-corrected quantum computer, says Jay Gambetta, the vice president of IBM’s quantum initiative. He points out that while Google has demonstrated a more robust quantum memory, it has not performed any logical operations on the information stored in that memory. 

“At the end of the day, what matters is: How big of a quantum circuit could you run?” he says. (A “quantum circuit” is a series of logic operations executed on a quantum computer.) “And do you have a path to show how you’re going to run bigger and bigger quantum circuits?”

IBM, whose quantum computers are also composed of qubits made from superconducting circuits, is taking an error correction approach that’s different from Google’s surface code method. It believes this method, known as a low-density parity-check code, will be easier to scale, with each logical qubit requiring fewer physical qubits to achieve comparable error suppression. By 2026, IBM intends to demonstrate that it can make 12 logical qubits out of 244 physical qubits, says Gambetta.

Other researchers are exploring other promising approaches, too. Instead of superconducting circuits, a team affiliated with the Boston-based quantum computing company QuEra uses neutral atoms as physical qubits. Earlier this year, it published in Nature a study showing that it had executed algorithms using up to 48 logical qubits made of rubidium atoms.

Gambetta cautions researchers to be patient and not to overhype the progress. “I just don’t want the field to think error correction is done,” he says. Hardware development simply takes a long time because the cycle of designing, building, and troubleshooting is time consuming, especially when compared with software development. “I don’t think it’s unique to quantum,” he says. 

To execute algorithms with guaranteed practical utility, a quantum computer needs to perform around a billion logical operations, says Brown. “And no one’s near a billion operations yet,” he says. Another milestone would be to create a quantum computer with 100 logical qubits, which QuEra has set as a goal for 2026. A quantum computer of that size would be capable of simulations beyond the reach of classical computers. Google scientists have made a single high-quality logical qubit—but the next step is to show that they can actually do something with it.

Why a ruling against the Internet Archive threatens the future of America’s libraries

I was raised in the 1980s and ’90s, and for my generation and generations before us, the public library was an equalizing force in every town, helping anyone move toward the American dream. In Chantilly, Virginia, where I grew up, it didn’t matter if you didn’t have a computer or your parents lacked infinite money for tutors—you could get a lifetime’s education for free at the public library. A ruling from the US Second Circuit against the Internet Archive and in favor of publisher Hachette has just thrown that promise of equality into doubt by limiting libraries’ access to digital lending.

To understand why this is so important to the future of libraries, you first have to understand the dire state of library e-book lending. 

Libraries have traditionally operated on a basic premise: Once they purchase a book, they can lend it out to patrons as much (or as little) as they like. Library copies often come from publishers, but they can also come from donations, used book sales, or other libraries. However the library obtains the book, once the library legally owns it, it is theirs to lend as they see fit. 

Not so for digital books. To make licensed e-books available to patrons, libraries have to pay publishers multiple times over. First, they must subscribe (for a fee) to aggregator platforms such as OverDrive. Aggregators, like streaming services such as HBO’s Max, have total control over adding or removing content from their catalogs. Content can be removed at any time, for any reason, without input from your local library. The decision happens not at the community level but at the corporate one, thousands of miles from the patrons affected. 

Then libraries must purchase each individual copy of each individual title that they want to offer as an e-book. These e-book copies are not only priced at a steep markup—up to 300% over consumer retail—but are also time- and loan-limited, meaning the files self-destruct after a set period or number of loans. The library then needs to repurchase the same book, at a new price, in order to keep it in stock. 

This upending of the traditional order puts massive financial strain on libraries and the taxpayers that fund them. It also opens up a world of privacy concerns; while libraries are restricted in the reader data they can collect and share, private companies are under no such obligation.

Some libraries have turned to another solution: controlled digital lending, or CDL, a process by which a library scans the physical books it already has in its collection, makes secure digital copies, and lends those out on a one-to-one “owned to loaned” ratio. The Internet Archive was an early pioneer of this technique.

When the digital copy is loaned, the physical copy is sequestered from borrowing; when the physical copy is checked out, the digital copy becomes unavailable. The benefits to libraries are obvious: delicate books can be circulated without fear of damage, volumes can be moved off-site for facilities work without interrupting patron access, and older and endangered works become searchable and can get a second chance at life. Library patrons, who fund their local library’s purchases with their tax dollars, also benefit from the ability to freely access the books.
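The one-to-one “owned to loaned” rule is simple to state precisely: at any moment, the number of copies on loan (physical plus digital) may not exceed the number of physical copies the library owns. A minimal sketch of that invariant, using a hypothetical class rather than the Internet Archive's actual system:

```python
# Minimal sketch of controlled digital lending (CDL): a library may
# lend at most as many copies (physical + digital combined) as it
# physically owns. Illustrative model, not the Internet Archive's code.
class CdlTitle:
    def __init__(self, owned_physical_copies: int):
        self.owned = owned_physical_copies
        self.loaned = 0  # physical and digital loans counted together

    def check_out(self) -> bool:
        if self.loaned < self.owned:  # enforce the owned-to-loaned ratio
            self.loaned += 1
            return True
        return False  # every owned copy is already on loan

    def check_in(self) -> None:
        if self.loaned > 0:
            self.loaned -= 1

book = CdlTitle(owned_physical_copies=1)
print(book.check_out())  # True: the single copy goes out (digitally or physically)
print(book.check_out())  # False: the copy is sequestered while on loan
book.check_in()
print(book.check_out())  # True: available again after return
```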

Publishers are, unfortunately, not fans of this model, and in 2020 four of them sued the Internet Archive over its CDL program. The suit ultimately focused on the Internet Archive’s lending of 127 books that were already commercially available through licensed aggregators. The publisher plaintiffs accused the Internet Archive of mass copyright infringement, while the Internet Archive argued that its digitization and lending program was a fair use. The trial court sided with the publishers, and on September 4, the Court of Appeals for the Second Circuit affirmed that decision with some alterations to the underlying reasoning. 

This decision harms libraries. It locks them into an e-book ecosystem designed to extract as much money as possible while harvesting (and reselling) reader data en masse. It leaves local communities’ reading habits at the mercy of curatorial decisions made by four dominant publishing companies thousands of miles away. It steers Americans away from one of the few remaining bastions of privacy protection and funnels them into a surveillance ecosystem that, like Big Tech, becomes more dangerous with each passing data breach. And by increasing the price for access to knowledge, it puts up even more barriers between underserved communities and the American dream.

It doesn’t stop there. This decision also renders the fair use doctrine—legally crucial in everything from parody to education to news reporting—almost unusable. And while there were occasional moments of sanity (such as recognizing that a “Donate here” button does not magically turn a nonprofit into a commercial enterprise), this decision fractured, rather than clarified, the law. 

If the courts won’t recognize CDL-based library lending as fair use, then the next step falls to Congress. Libraries are in crisis, caught between shrinking budgets and growing demand for services. Congress must act now to ensure that a pillar of equality in our communities isn’t sacrificed on the altar of profit. 

Chris Lewis is president and CEO of Public Knowledge, a consumer advocacy group that works to shape technology policy in the public interest. Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works.

What impact will AI have on video game development?

This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

Video game development has long been plagued by fear of the “crunch”—essentially, being forced to work overtime on a game to meet a deadline. In the early days of video games, the crunch was often viewed as a rite of passage: In the last days before release, an obsessed group of scrappy developers would work late into the night to perfect their dream game. 

However, nowadays the crunch is less likely to be glamorized than to be seen as a form of exploitation that risks causing mental illness and burnout. Part of the issue is that crunch time used to be just before a game launched, but now whole game development periods are “crunchy.” With games getting more expensive, companies are incentivized to make even more short-term profits by squeezing developers. 

But what if AI could help alleviate game-development hell? It may already be happening. According to a recent poll by a16z, 87% of studios are using generative AI tools such as Midjourney to create in-game environments. Others are using AI for game testing and bug hunting, while Ubisoft is experimenting with using it to generate basic dialogue options.

And even more help is coming. A tool developed by the team at Roblox aims to allow developers to make 3D environments and scenes in an instant with nothing but text prompts. Typically, creating an environment may take a week for a small game or much longer for a studio project, depending on how complex the designs are. But Roblox aims to let developers almost instantly bring their personal vision to life. 

For example, let’s say you wanted your game to be set in a spaceship with the interior design of a Buddhist temple. You’d just put that into a prompt—“Create a spaceship …”—and BAM! Your one-of-a-kind environment would be generated immediately.

The technology behind this can be used for any 3D environment, not just Roblox. My article here goes into more depth, but essentially, if ChatGPT’s tokens are words, the Roblox system’s tokens are 3D cubes that form a larger scene, enabling for 3D generation what ChatGPT does for text. This means the model could potentially be used to generate a whole city in the Grand Theft Auto universe. That said, the demo I saw from Roblox was far smaller, generating only a racetrack. So more realistically, I imagine it would be used to build one aspect of a city in Grand Theft Auto, like a stadium—at least for now.
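The token analogy can be made concrete with a toy sketch: where a language model predicts the next word token, a 3D scene model of the kind described would emit the next cube in a grid. This is a schematic illustration of the idea only, not Roblox's actual system.

```python
import random

# Toy sketch of the "3D cubes as tokens" idea: a scene is a grid of
# cube tokens, emitted one at a time, the way a language model emits
# one word token at a time. (A real model would predict each token
# from context; here tokens are sampled uniformly for simplicity.)
CUBE_VOCAB = ["air", "stone", "wood", "water", "leaf"]

def generate_scene(width: int, depth: int, seed: int = 0) -> list[list[str]]:
    rng = random.Random(seed)  # seeded for a reproducible "scene"
    scene = []
    for _ in range(depth):
        row = [rng.choice(CUBE_VOCAB) for _ in range(width)]
        scene.append(row)
    return scene

scene = generate_scene(width=4, depth=2)
print(len(scene), len(scene[0]))  # 2 4
```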

Roblox claims you’re also able to modify a scene with prompts. So let’s say you get bored of the Buddhist temple aesthetic. You can prompt the model again—“Make the spaceship interior a forest”—and within an instant, all the Buddhist statues will turn to trees.

A lot of this can already be done manually, of course, but it takes a lot of time. Ideally, this kind of technology will allow 3D artists to offload some of the tedium of their job to an AI. (Though some of them may argue that building the environment is creatively fulfilling—maybe even one of their favorite parts of the job. Having an AI spawn an environment in an instant may take away some of the joy of slowly watching it come together as you build it.)

Personally, I’m fairly skeptical of AI in video games. As a former developer myself, I cringe a little bit when I hear about AI being used to write dialogue for characters. I worry about terribly stilted results and the possibility that writers will lose their jobs. In the same vein, I worry about putting 3D artists out of work and ending up with 3D environments that look off, or obviously generated by AI without care or thought.

It’s clear that the big AI wave is crashing upon us. And whether it leads to better work-life balance for game developers is going to be determined by how these systems are implemented. Will developers have a tool to reduce tedium and eliminate repetitive tasks, or will they have fewer colleagues, and new colleagues who insist on using words like “delves” and “showcasing” in every other sentence? 

Now read the rest of The Algorithm


Deeper learning

AI is already being used in games to police inappropriate language
This new Roblox development comes after the company introduced AI to analyze in-game voice chat in real time last fall. Other games, like Call of Duty, have implemented similar systems. If the AI determines that a player is using foul language, it will issue a warning, and then a ban if the restricted language continues. 
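The warn-then-ban escalation described here amounts to a simple per-player state machine. A minimal sketch, with a hypothetical function and placeholder word list rather than any game's actual moderation code:

```python
# Minimal sketch of warn-then-ban chat moderation: the first flagged
# message draws a warning; further violations draw a ban. Hypothetical
# illustration, not Roblox's or Call of Duty's actual system.
RESTRICTED_WORDS = {"badword1", "badword2"}  # placeholder word list

def moderate(strikes: dict[str, int], player: str, message: str) -> str:
    if not any(word in RESTRICTED_WORDS for word in message.lower().split()):
        return "ok"
    strikes[player] = strikes.get(player, 0) + 1
    return "warning" if strikes[player] == 1 else "ban"

history: dict[str, int] = {}
print(moderate(history, "p1", "nice race everyone"))  # ok
print(moderate(history, "p1", "you badword1"))        # warning
print(moderate(history, "p1", "badword2 again"))      # ban
```

A production system would of course need the nuance discussed below; this only captures the escalation logic.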

Why this matters: As we’ve written previously, content moderation with AI has proved to be tricky. It seems like an obvious way to make good use of the technology’s ability to look at masses of information and make quick assessments, but AI still has a hard time with nuance and cultural contexts. That hasn’t stopped it from being implemented in video games, which have been and will continue to be one of the testing grounds for the latest innovations in AI. My colleague Niall explains in his recent piece how it could make virtual worlds more immersive and flexible.

Bits and bytes

What this futuristic Olympics video says about the state of generative AI
Filmmaker Josh Kahn used AI to create a short video that imagines what an Olympics in LA might look like in the year 3028, which he shared exclusively with MIT Technology Review. The short demonstrates AI’s immense power for video creation, but it also highlights some of the issues with using the technology for that purpose. 
(MIT Technology Review)

A Dutch regulator has slapped Clearview AI with a $33 million fine 
Years ago, Clearview AI scraped images of people from the internet without their permission. Now the Dutch data protection authority has fined the company, ruling that Clearview’s database is illegal because it violates individuals’ right to privacy. Clearview hasn’t paid past fines and doesn’t plan to pay this one, arguing that Dutch authorities have no jurisdiction over it since it has no business presence in the Netherlands. The Dutch regulator is considering holding Clearview’s directors personally financially liable.
(The Verge)

How OpenAI is changing
OpenAI continues to evolve; recent moves include adding the former director of the US National Security Agency to its board and considering plans to restructure the company to be more attractive for investors. Additionally, there are talks over a new investment into OpenAI that would value it at over $100 billion. It sure feels like a long time since OpenAI could credibly claim to just be a research lab. 
(The New York Times)

NaNoWriMo says condemning AI is “classist and ableist”
The organizers of the “write a book in a month” challenge have gotten themselves into hot water recently, with a big backlash against their decision to support the use of AI by writers. They’ve countered the haters by claiming that opposing the use of AI in writing is both classist and ableist, as some people require extra assistance and accommodation from AI tools. 
(404 Media)