NASA has made an air traffic control system for drones

On Thanksgiving weekend of 2013, Jeff Bezos, then Amazon’s CEO, took to 60 Minutes to make a stunning announcement: Amazon was a few years away from deploying drones that would deliver packages to homes in less than 30 minutes. 

It lent urgency to a problem that Parimal Kopardekar, director of the NASA Aeronautics Research Institute, had begun thinking about earlier that year.

“How do you manage and accommodate large-scale drone operations without overloading the air traffic control system?” Kopardekar, who goes by PK, recalls wondering. Busy managing all airplane takeoffs and landings, air traffic controllers clearly wouldn’t have the capacity to oversee the fleets of package-delivering drones Amazon was promising. 

The solution PK devised, which subsequently grew into a collaboration between federal agencies, researchers, and industry, is a system called unmanned-aircraft-system traffic management, or UTM. Instead of verbally communicating with air traffic controllers, drone operators using UTM share their intended flight paths with each other via a cloud-based network.

This highly scalable approach may finally open the skies to a host of commercial drone applications that have yet to materialize. Amazon Prime Air, for example, launched in 2022 but was put on hold after crashes at a testing facility. On any given day, only 8,500 or so unmanned aircraft fly in US airspace, the vast majority of which are used for recreational purposes rather than for services like search and rescue missions, real estate inspections, video surveillance, or farmland surveys. 

One obstacle to wider use has been concern over possible midair drone-to-drone collisions. (Drones are typically restricted to airspace below 400 feet and their access to airports is limited, which significantly lowers the risk of drone-airplane collisions.) Under Federal Aviation Administration regulations, drones generally cannot fly beyond an operator’s visual line of sight, limiting flights to about a third of a mile. This prevents most collisions but also most use cases, such as delivering medication to a patient’s doorstep or dispatching a police drone to an active crime scene so first responders can better prepare before arriving.

Now, though, drone operators are increasingly incorporating UTM into their flights. The system uses path planning algorithms, like those that run in Google Maps, to chart a course that considers not only weather and obstacles like buildings and trees but the flight paths of nearby drones. It’ll automatically reroute a flight before takeoff if another drone has reserved the same volume of airspace at the same time, making the new flight trajectory visible to subsequent pilots. Drones can then fly autonomously to and from their destination, and no air traffic controller is required. 
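Conceptually, that pre-takeoff check is a simple comparison: does any segment of the proposed route overlap, in both space and time, with a volume of airspace another operator has already reserved? The sketch below is a minimal, hypothetical Python illustration of that idea, not any operator’s actual UTM implementation; the data structures and function names are invented for clarity.

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    """A 4D airspace reservation: a lat/lon/altitude box plus a time window."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float
    min_alt_ft: float
    max_alt_ft: float
    start_s: float  # window start, seconds since epoch
    end_s: float    # window end

def _ranges_overlap(lo1: float, hi1: float, lo2: float, hi2: float) -> bool:
    return lo1 <= hi2 and lo2 <= hi1

def conflicts(a: Reservation, b: Reservation) -> bool:
    """True if two reservations share airspace and overlap in time."""
    return (_ranges_overlap(a.min_lat, a.max_lat, b.min_lat, b.max_lat)
            and _ranges_overlap(a.min_lon, a.max_lon, b.min_lon, b.max_lon)
            and _ranges_overlap(a.min_alt_ft, a.max_alt_ft, b.min_alt_ft, b.max_alt_ft)
            and _ranges_overlap(a.start_s, a.end_s, b.start_s, b.end_s))

def plan_is_clear(plan: list[Reservation], shared: list[Reservation]) -> bool:
    """Pre-flight ("strategic") deconfliction: a plan is cleared only if none
    of its segments conflict with reservations already shared on the network."""
    return not any(conflicts(seg, other) for seg in plan for other in shared)

# In a real system, a rejected plan would be rerouted or rescheduled before
# takeoff, and the accepted plan published so later flights can see it.
```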

Over the past decade, NASA and industry have demonstrated to the FAA through a series of tests that drones can safely maneuver around each other by adhering to UTM. And last summer, the agency gave the go-ahead for multiple drone delivery companies using UTM to begin flying simultaneously in the same airspace above Dallas—a first in US aviation history. Drone operators without in-house UTM capabilities have also begun licensing UTM services from FAA-approved third-party providers.

UTM only works if all participants abide by the same rules and agree to share data, and it’s enabled a level of collaboration unusual for companies competing to gain a foothold in a young, hot field, notes Peter Sachs, head of airspace integration strategy at Zipline, a drone delivery company based in South San Francisco that’s approved to use UTM. 

“We all agree that we need to collaborate on the practical, behind-the-scenes nuts and bolts to make sure that this preflight deconfliction for drones works really well,” Sachs says. (“Strategic deconfliction” is the technical term for processes that minimize drone-drone collisions.) Zipline and the drone delivery companies Wing, Flytrex, and DroneUp all operate in the Dallas area and are racing to expand to more cities, yet they disclose where they’re flying to one another in the interest of keeping the airspace conflict-free.

Greater adoption of UTM may be on the way. The FAA is expected to soon release a new rule called Part 108 that may allow operators to fly beyond visual line of sight if, among other requirements, they have some UTM capability, eliminating the need for the difficult-to-obtain waiver the agency currently requires for these flights. To safely manage this additional drone traffic, drone companies will have to continue working together to keep their aircraft out of each other’s way. 

Yaakov Zinberg is a writer based in Cambridge, Massachusetts.

We need targeted policies, not blunt tariffs, to drive “American energy dominance”

President Trump and his appointees have repeatedly stressed the need to establish “American energy dominance.” 

But the White House’s profusion of executive orders and aggressive tariffs, along with its determined effort to roll back clean-energy policies, is moving the industry in the wrong direction, creating market chaos and economic uncertainty that are making it harder for both legacy players and emerging companies to invest, grow, and compete.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy.


The current 90-day pause on rolling out most of the administration’s so-called “reciprocal” tariffs presents a critical opportunity. Rather than defaulting to broad, blunt tariffs, the administration should use this window to align trade policy with a focused industrial strategy—one aimed at winning the global race to become a manufacturing powerhouse in next-generation energy technologies. 

By tightly aligning tariff design with US strengths in R&D and recent government investments in the energy innovation lifecycle, the administration can turn a regressive trade posture into a proactive plan for economic growth and geopolitical advantage.

The president is right to point out that America is blessed with world-leading energy resources. Over the past decade, the country has grown from being a net importer to a net exporter of oil and the world’s largest producer of oil and gas. These resources are undeniably crucial to America’s ability to reindustrialize and rebuild a resilient domestic industrial base, while also providing strategic leverage abroad. 

But the world is slowly but surely moving beyond the centuries-old model of extracting and burning fossil fuels, a change driven initially by climate risks but increasingly by economic opportunities. America will achieve true energy dominance only by evolving beyond being a mere exporter of raw, greenhouse-gas-emitting energy commodities—and becoming the world’s manufacturing and innovation hub for sophisticated, high-value energy technologies.

Notably, the nation took a lead role in developing essential early components of the cleantech sector, including solar photovoltaics and electric vehicles. Yet too often, the fruits of that innovation—especially manufacturing jobs and export opportunities—have ended up overseas, particularly in China.

China, which is subject to Trump’s steepest tariffs and wasn’t granted any reprieve in the 90-day pause, has become the world’s dominant producer of lithium-ion batteries, EVs, wind turbines, and other key components of the clean-energy transition.

Today, the US is again making exciting strides in next-generation technologies, including fusion energy, clean steel, advanced batteries, industrial heat pumps, and thermal energy storage. These advances can transform industrial processes, cut emissions, improve air quality, and maximize the strategic value of our fossil-fuel resources. That means not simply burning them for their energy content, but instead using them as feedstocks for higher-value materials and chemicals that power the modern economy.

The US’s leading role in energy innovation didn’t develop by accident. For several decades, legislators on both sides of the political divide supported increasing government investments into energy innovation—from basic research at national labs and universities to applied R&D through ARPA-E and, more recently, to the creation of the Office of Clean Energy Demonstrations, which funds first-of-a-kind technology deployments. These programs have laid the foundation for the technologies we need—not just to meet climate goals, but to achieve global competitiveness.

Early-stage companies in competitive, global industries like energy do need extra support to help them get to the point where they can stand up on their own. This is especially true for cleantech companies whose overseas rivals have much lower labor, land, and environmental compliance costs.

That’s why, for starters, the White House shouldn’t work to eliminate federal investments made in these sectors under the Bipartisan Infrastructure Law and the Inflation Reduction Act, as it’s reportedly striving to do as part of the federal budget negotiations.

Instead, the administration and its Republican colleagues in Congress should preserve and refine these programs, which have already helped expand America’s ability to produce advanced energy products like batteries and EVs. Success should be measured not only in barrels produced or watts generated, but in dollars of goods exported, jobs created, and manufacturing capacity built.

The Trump administration should back this industrial strategy with smarter trade policy as well. Steep, sweeping tariffs won’t build long-term economic strength. 

But there are certain instances where reasonable, modern, targeted tariffs can be a useful tool in supporting domestic industries or countering unfair trade practices elsewhere. That’s why we’ve seen leaders of both parties, including Presidents Biden and Obama, apply them in recent years.

Such levies can be used to protect domestic industries where we’re competing directly with geopolitical rivals like China, and where American companies need breathing room to scale and thrive. These aims can be achieved by imposing tariffs on specific strategic technologies, such as EVs and next-generation batteries.

But to be clear, targeted tariffs on a few strategic sectors are starkly different from Trump’s tariffs, which now include 145% levies on most Chinese goods, a 10% “universal” tariff on other nations, and 25% fees on steel and aluminum. 

Another option is implementing a broader border adjustment policy, like the Foreign Pollution Fee Act recently reintroduced by Senators Cassidy and Graham, which is designed to create a level playing field that would help clean manufacturers in the US compete with heavily polluting businesses overseas.  

Just as important, the nation must avoid counterproductive tariffs on critical raw materials like steel, aluminum, and copper or retaliatory restrictions on critical minerals—all of which are essential inputs for US manufacturing. The nation does not currently produce enough of these materials to meet demand, and it would take years to build up that capacity. Raising input costs through tariffs only slows our ability to keep or bring key industries home.

Finally, we must be strategic in how we deploy the country’s greatest asset: our workforce. Americans are among the most educated and capable workers in the world. Their time, talent, and ingenuity shouldn’t be spent assembling low-cost, low-margin consumer goods like toasters. Instead, we should focus on building cutting-edge industrial technologies that the world is demanding. These are the high-value products that support strong wages, resilient supply chains, and durable global leadership.

The worldwide demand for clean, efficient energy technologies is rising rapidly, and the US cannot afford to be left behind. The energy transition presents not just an environmental imperative but a generational opportunity for American industrial renewal.

The Trump administration has a chance to define energy dominance not just in terms of extraction, but in terms of production—of technology, of exports, of jobs, and of strategic influence. Let’s not let that opportunity slip away.

Addison Killean Stark is the chief executive and cofounder of AtmosZero, an industrial steam heat pump startup based in Loveland, Colorado. He was previously a fellow at the Department of Energy’s ARPA-E division, which funds research and development of advanced energy technologies.

How a 1980s toy robot arm inspired modern robotics

The child of an electronics engineer, I spent a lot of time in our local Radio Shack as a kid. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm. 

A drawing from the patent application for the Armatron robotic arm.
COURTESY OF TAKARA TOMY

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was the rare toy that lived up to the hype printed on the front of the box. This was a legit robotic arm. You could rotate the arm to spin around its base, tilt it up and down, bend it at the “elbow” joint, rotate the “wrist,” and open and close the bright-orange articulated hand in elegant chords of movement, all using only the twistable twin joysticks. 

Anyone who played with this toy will also remember the sound it made. Once you slid the power button to the On position, you heard a constant whirring sound of plastic gears turning and twisting. And if you tried to push it past its boundaries, it twitched and protested with a jarring “CLICK … CLICK … CLICK.”

It wasn’t just kids who found the Armatron so special. It was featured on the cover of the November/December 1982 issue of Robotics Age magazine, which noted that the $31.95 toy (about $96 today) had “capabilities usually found only in much more expensive experimental arms.”

Pieces of the Armatron disassembled and arranged on a table.

JIM GOLDEN

A few years ago I found my Armatron, and when I opened the case to get it working again, I was startled to find that other than the compartment for the pair of D-cell batteries, a switch, and a tiny three-volt DC motor, this thing was totally devoid of any electronic components. It was purely mechanical. Later, I found the patent drawings for the Armatron online and saw how incredibly complex the schematics of the gearbox were. This design was the work of a genius—or a madman.

The man behind the arm

I needed to know the story of this toy. I reached out to the manufacturer, Tomy (now known as Takara Tomy), which has been in business in Japan for over 100 years. It put me in touch with Hiroyuki Watanabe, a 69-year-old engineer and toy designer living in Tokyo. He’s retired now, but he worked at Tomy for 49 years, building many classic handheld electronic toys of the ’80s, including Blip, Digital Diamond, Digital Derby, and Missile Strike. Watanabe’s name can be found on 44 patents, and he was involved in bringing between 50 and 60 products to market. Watanabe answered emailed questions via video, and his responses were translated from Japanese.

“I didn’t have a period where I studied engineering professionally. Instead, I enrolled in what Japan would call a technical high school that trains technical engineers, and I actually [entered] the electrical department there,” he told me. 

Afterward, he worked at Komatsu Manufacturing—because, he said, he liked bulldozers. But in 1974, he saw that Tomy was hiring, and he wanted to make toys. “I was told that it was the No. 1 toy company in Japan, so I decided [it was worth a look],” he said. “I took a night train from Tohoku to Tokyo to take a job exam, and that’s how I ended up joining the company.”

The inspiration for the Armatron came from a newspaper clipping that Watanabe’s boss brought to him one day. “It showed an image of a [mechanical arm] holding an egg with three fingers. I think we started out thinking, ‘This is where things are heading these days, so let’s make this,’” he recalled. 

As the lead of a small team, Watanabe briefly turned his attention to another project, and by the time he returned to the robotic arm, the team had a prototype. But it was quite different from the Armatron’s final form. “The hand stuck out from the main body to the side and could only move about 90 degrees. The control panel also had six movement positions, and they were switched using six switches. I personally didn’t like that,” said Watanabe. So he went back to work.

The Armatron’s inventor, Hiroyuki Watanabe, in Tokyo in 2025
COURTESY OF TAKARA TOMY

Watanabe’s breakthrough was inspired by the radio-controlled helicopters he operated as a hobby. Holding up a radio remote controller with dual joystick controls, he told me, “This stick operation allows you to perform four movements with two arms, but I thought that if you twist this part, you can use six movements.”

Watanabe at work at Tomy in Tokyo in 1982.
COURTESY OF HIROYUKI WATANABE

“I had always wanted to create a system that could rotate 360 degrees, so I thought about how to make that system work,” he added.

Watanabe stressed that while he is listed as the Armatron’s primary inventor, it was a team effort. A designer created the case, colors, and logo, adding touches to mimic features seen on industrial robots of the time, such as the rubber tubes (which are just for looks). 

When the Armatron first came out, in 1981, robotics engineers started contacting Watanabe. “I wasn’t so much hearing from people at toy stores, but rather from researchers at university laboratories, factories, and companies that were making industrial robots,” he said. “They were quite encouraging, and we often talked together.”

The long reach of the robot at Radio Shack

The bold look and function of the Armatron made quite an impression on many young kids who would one day have a career in robotics.

One of them was Adam Borrell, a mechanical design engineer who has been building robots for 15 years at Boston Dynamics, including Petman, the YouTube-famous Atlas, and the dog-size quadruped called Spot. 

Borrell grew up a few blocks away from a Radio Shack in New York City. “If I was going to the subway station, we would walk right by Radio Shack. I would stop in and play with it and set the timer, do the challenges,” he says. “I know it was a toy, but that was a real robot.” The Armatron was the hook that lured him into Radio Shack and then sparked his lifelong interest in engineering: “I would roll pennies and use them to buy soldering irons and solder at Radio Shack.” 

Borrell had a fateful reunion with the toy while in grad school for engineering. “One of my office mates had an Armatron at his desk,” he recalls, “and it was broken. We took it apart together, and that was the first time I had seen the guts of it. 

“It had this fantastic mechanical gear train to just engage and disengage this one motor in a bunch of different ways. And it was really fascinating that it had done so much—the one little motor. And that sort of got me back thinking about industrial robot arms again.” 

Eric Paulos, a professor of electrical engineering and computer science at the University of California, Berkeley, recalls nagging his parents about what an educational gift Armatron would make. Ultimately, he succeeded in his lobbying. 

“It was just endless exploration of picking stuff up and moving it around and even just watching it move. It was mesmerizing to me. I felt like I really owned my own little robot,” he recalls. “I cherish this thing. I still have it to this day, and it’s still working.” 

The Armatron on the cover of the November/December 1982 issue of Robotics Age magazine.
PUBLIC DOMAIN

Today, Paulos builds robots and teaches his students how to build their own. He challenges them to solve problems within constraints, such as building with cardboard or Play-Doh; he believes the restrictions facing Watanabe and his team ultimately forced them to be more creative in their engineering.

It’s not very hard to draw connections between the Armatron—an impossibly analog robot—and highly advanced machines that are today learning to move in incredible new ways, powered by AI advancements like computer vision and reinforcement learning.

Paulos sees parallels between the problems he tackled as a kid with his Armatron and those that researchers are still trying to deal with today: “What happens when you pick things up and they’re too heavy, but you can sort of pick it up if you approach it from different angles? Or how do you grip things? There’s research to this day using AI to try to figure out optimal ways to grab objects that [a robot] sees in a bin or out in the world.”

While AI may be taking over the world of robotics, the field still requires engineers—builders and tinkerers who can problem-solve in the physical world. 

A page from the 1984 Radio Shack catalogue, featuring the Armatron for $31.95.
COURTESY OF RADIOSHACKCATALOGS.COM

The Armatron encouraged kids to explore these analog mechanics, a reminder that not all breakthroughs happen on a computer screen. And that hands-on curiosity hasn’t faded. Today, a new generation of fans are rediscovering the Armatron through online communities and DIY modifications. Dozens of Armatron videos are on YouTube, including one where the arm has been modified to run on steam power.

“I’m very happy to see people who love mechanisms are amazed,” Watanabe told me. “I’m really happy that there are still people out there who love our products in this way.” 

Jon Keegan writes about technology and AI and publishes Beautiful Public Data, a curated collection of government data sets (beautifulpublicdata.com).

These four charts sum up the state of AI and energy

While it’s rare to look at the news without finding some headline related to AI and energy, a lot of us are stuck waving our hands when it comes to what it all means.

Sure, you’ve probably read that AI will drive an increase in electricity demand. But how that fits into the context of the current and future grid can feel less clear from the headlines. That’s true even for people working in the field. 

A new report from the International Energy Agency digs into the details of energy and AI, and I think it’s worth looking at some of the data to help clear things up. Here are four charts from the report that sum up the crucial points about AI and energy demand.

1. AI is power hungry, and the world will need to ramp up electricity supply to meet demand. 

This point is the most obvious, but it bears repeating: AI is exploding, and it’s going to lead to higher energy demand from data centers. “AI has gone from an academic pursuit to an industry with trillions of dollars at stake,” as the IEA report’s executive summary puts it.

Data centers used less than 300 terawatt-hours of electricity in 2020. That could increase to nearly 1,000 terawatt-hours in the next five years, which is more than Japan’s total electricity consumption today.

Today, the US has about 45% of the world’s data center capacity, followed by China. Those two countries will continue to represent the overwhelming majority of capacity through 2035.  

2. The electricity needed to power data centers will largely come from fossil fuels like coal and natural gas in the near term, but nuclear and renewables could play a key role, especially after 2030.

The IEA report is relatively optimistic on the potential for renewables to power data centers, projecting that nearly half of global growth by 2035 will be met with renewables like wind and solar. (In Europe, the IEA projects, renewables will meet 85% of new demand.)

In the near term, though, natural gas and coal will also expand. An additional 175 terawatt-hours from gas will help meet demand in the next decade, largely in the US, according to the IEA’s projections. Another report, published this week by the energy consultancy BloombergNEF, suggests that fossil fuels will play an even larger role than the IEA projects, accounting for two-thirds of additional electricity generation between now and 2035.

Nuclear energy, a favorite of big tech companies looking to power operations without generating massive emissions, could start to make a dent after 2030, according to the IEA data.

3. Data centers are just a small piece of expected electricity demand growth this decade.

We should be talking more about appliances, industry, and EVs when we talk about energy! Electricity demand is on the rise from a whole host of sources: Electric vehicles, air-conditioning, and appliances will each drive more electricity demand than data centers between now and the end of the decade. In total, data centers make up a little over 8% of the electricity demand growth expected between now and 2030.

There are interesting regional effects here, though. Growing economies will see more demand from the likes of air-conditioning than from data centers. On the other hand, the US has seen relatively flat electricity demand from consumers and industry for years, so newly rising demand from high-performance computing will make up a larger chunk. 

4. Data centers tend to be clustered together and close to population centers, making them a unique challenge for the power grid.  

The grid is no stranger to facilities that use huge amounts of energy: Cement plants, aluminum smelters, and coal mines all pull a lot of power in one place. However, data centers are a unique sort of beast.

First, they tend to be closely clustered together. Globally, data centers make up about 1.5% of total electricity demand. However, in Ireland, that number is 20%, and in Virginia, it’s 25%. That trend looks likely to continue, too: Half of data centers under development in the US are in preexisting clusters.

Data centers also tend to be closer to urban areas than other energy-intensive facilities like factories and mines. 

Since data centers are close both to each other and to communities, they could have significant impacts on the regions where they’re situated, whether by bringing on more fossil fuels close to urban centers or by adding strain to the local grid. Or both.

Overall, AI and data centers more broadly are going to be a major driving force for electricity demand. It’s not the whole story, but it’s a part of our energy picture worth watching in the years ahead. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

How creativity became the reigning value of our time

Americans don’t agree on much these days. Yet even at a time when consensus reality seems to be on the verge of collapse, there remains at least one quintessentially modern value we can all still get behind: creativity. 

We teach it, measure it, envy it, cultivate it, and endlessly worry about its death. And why wouldn’t we? Most of us are taught from a young age that creativity is the key to everything from finding personal fulfillment to achieving career success to solving the world’s thorniest problems. Over the years, we’ve built creative industries, creative spaces, and creative cities and populated them with an entire class of people known simply as “creatives.” We read thousands of books and articles each year that teach us how to unleash, unlock, foster, boost, and hack our own personal creativity. Then we read even more to learn how to manage and protect this precious resource. 

Given how much we obsess over it, the concept of creativity can feel like something that has always existed, a thing philosophers and artists have pondered and debated throughout the ages. While it’s a reasonable assumption, it’s one that turns out to be very wrong. As Samuel Franklin explains in his recent book, The Cult of Creativity, the first known written use of creativity didn’t actually occur until 1875, “making it an infant as far as words go.” What’s more, he writes, before about 1950, “there were approximately zero articles, books, essays, treatises, odes, classes, encyclopedia entries, or anything of the sort dealing explicitly with the subject of ‘creativity.’”

This raises some obvious questions. How exactly did we go from never talking about creativity to always talking about it? What, if anything, distinguishes creativity from other, older words, like ingenuity, cleverness, imagination, and artistry? Maybe most important: How did everyone from kindergarten teachers to mayors, CEOs, designers, engineers, activists, and starving artists come to believe that creativity isn’t just good—personally, socially, economically—but the answer to all life’s problems?

Thankfully, Franklin offers some potential answers in his book. A historian and design researcher at the Delft University of Technology in the Netherlands, he argues that the concept of creativity as we now know it emerged during the post–World War II era in America as a kind of cultural salve—a way to ease the tensions and anxieties caused by increasing conformity, bureaucracy, and suburbanization.

“Typically defined as a kind of trait or process vaguely associated with artists and geniuses but theoretically possessed by anyone and applicable to any field, [creativity] provided a way to unleash individualism within order,” he writes, “and revive the spirit of the lone inventor within the maze of the modern corporation.”

Brainstorming, a new method for encouraging creative thinking, swept corporate America in the 1950s. A response to pressure for new products and new ways of marketing them, as well as a panic over conformity, it inspired passionate debate about whether true creativity should be an individual affair or could be systematized for corporate use.
INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS

I spoke to Franklin about why we continue to be so fascinated by creativity, how Silicon Valley became the supposed epicenter of it, and what role, if any, technologies like AI might have in reshaping our relationship with it. 

I’m curious what your personal relationship to creativity was growing up. What made you want to write a book about it?

Like a lot of kids, I grew up thinking that creativity was this inherently good thing. For me—and I imagine for a lot of other people who, like me, weren’t particularly athletic or good at math and science—being creative meant you at least had some future in this world, even if it wasn’t clear what that future would entail. By the time I got into college and beyond, the conventional wisdom among the TED Talk register of thinkers—people like Daniel Pink and Richard Florida—was that creativity was actually the most important trait to have for the future. Basically, the creative people were going to inherit the Earth, and society desperately needed them if we were going to solve all of these compounding problems in the world. 

On the one hand, as someone who liked to think of himself as creative, it was hard not to be flattered by this. On the other hand, it all seemed overhyped to me. What was being sold as the triumph of the creative class wasn’t actually resulting in a more inclusive or creative world order. What’s more, some of the values embedded in what I call the cult of creativity seemed increasingly problematic—specifically, the focus on self-realization, doing what you love, and following your passion. Don’t get me wrong—it’s a beautiful vision, and I saw it work out for some people. But I also started to feel like it was just a cover for what was, economically speaking, a pretty bad turn of events for many people.

Staff members at the University of California’s Institute of Personality Assessment and Research simulate a situational procedure involving group interaction, called the Bingo Test. Researchers of the 1950s hoped to learn how factors in people’s lives and environments shaped their creative aptitude.
INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS

Nowadays, it’s quite common to bash the “follow your passion,” “hustle culture” idea. But back when I started this project, the whole move-fast-and-break-things, disrupter, innovation-economy stuff was very much unquestioned. In a way, the idea for the book came from recognizing that creativity was playing this really interesting role in connecting two worlds: this world of innovation and entrepreneurship and this more soulful, bohemian side of our culture. I wanted to better understand the history of that relationship.

When did you start thinking about creativity as a kind of cult, one that we’re all a part of? 

Similar to something like the “cult of domesticity,” it was a way of describing a historical moment in which an idea or value system achieves a kind of broad, uncritical acceptance. I was finding that everyone was selling stuff based on the idea that it boosted your creativity, whether it was a new office layout, a new kind of urban design, or the “Try these five simple tricks” type of thing. 

You start to realize that nobody is bothering to ask, “Hey, uh, why do we all need to be creative again? What even is this thing, creativity?” It had become this unimpeachable value that no one, regardless of what side of the political spectrum they fell on, would even think to question. That, to me, was really unusual, and I think it signaled that something interesting was happening.

Your book highlights midcentury efforts by psychologists to turn creativity into a quantifiable mental trait and the “creative person” into an identifiable type. How did that play out? 

The short answer is: not very well. To study anything, you of course need to agree on what it is you’re looking at. Ultimately, I think these groups of psychologists were frustrated in their attempts to come up with scientific criteria that defined a creative person. One technique was to go find people who were already eminent in fields that were deemed creative—writers like Truman Capote and Norman Mailer, architects like Louis Kahn and Eero Saarinen—and just give them a battery of cognitive and psychoanalytic tests and then write up the results. This was mostly done by an outfit called the Institute of Personality Assessment and Research (IPAR) at Berkeley. Frank Barron and Don MacKinnon were the two biggest researchers in that group.

Another way psychologists went about it was to say, all right, that’s not going to be practical for coming up with a good scientific standard. We need numbers, and lots and lots of people to certify these creative criteria. This group of psychologists theorized that something called “divergent thinking” was a major component of creative accomplishment. You’ve heard of the brick test, where you’re asked to come up with many creative uses for a brick in a given amount of time? They basically gave a version of that test to Army officers, schoolchildren, rank-and-file engineers at General Electric, all kinds of people. It’s tests like those that ultimately became stand-ins for what it means to be “creative.”

Are they still used? 

When you see a headline about AI making people more creative, or actually being more creative than humans, the tests they are basing that assertion on are almost always some version of a divergent thinking test. It’s highly problematic for a number of reasons. Chief among them is the fact that these tests have never been shown to have predictive value—that’s to say, a third grader, a 21-year-old, or a 35-year-old who does really well on divergent thinking tests doesn’t seem to have any greater likelihood of being successful in creative pursuits. The whole point of developing these tests in the first place was to both identify and predict creative people. None of them have been shown to do that. 

Reading your book, I was struck by how vague and, at times, contradictory the concept of “creativity” was from the beginning. You characterize that as “a feature, not a bug.” How so?

Ask any creativity expert today what they mean by “creativity,” and they’ll tell you it’s the ability to generate something new and useful. That something could be an idea, a product, an academic paper—whatever. But the focus on novelty has remained an aspect of creativity from the beginning. It’s also what distinguishes it from other similar words, like imagination or cleverness. But you’re right: Creativity is a flexible enough concept to be used in all sorts of ways and to mean all sorts of things, many of them contradictory. I think I write in the book that the term may not be precise, but that it’s vague in precise and meaningful ways. It can be both playful and practical, artsy and technological, exceptional and pedestrian. That was and remains a big part of its appeal. 

Is that emphasis on novelty and utility a part of why Silicon Valley likes to think of itself as the new nexus for creativity?

Absolutely. The two criteria go together. In techno-solutionist, hypercapitalist milieus like Silicon Valley, novelty isn’t any good if it’s not useful (or at least marketable), and utility isn’t any good (or marketable) unless it’s also novel. That’s why they’re often dismissive of boring-but-important things like craft, infrastructure, maintenance, and incremental improvement, and why they support art—which is traditionally defined by its resistance to utility—only insofar as it’s useful as inspiration for practical technologies.

At the same time, Silicon Valley loves to wrap itself in “creativity” because of all the artsy and individualist connotations. It has very self-consciously tried to distance itself from the image of the buttoned-down engineer working for a large R&D lab of a brick-and-mortar manufacturing corporation and instead raise up the idea of a rebellious counterculture type tinkering in a garage making weightless products and experiences. That, I think, has saved it from a lot of public scrutiny.

Up until recently, we’ve tended to think of creativity as a human trait, maybe with a few exceptions from the rest of the animal world. Is AI changing that?

When people started defining creativity in the ’50s, the threat of computers automating white-collar work was already underway. They were basically saying, okay, rational and analytical thinking is no longer ours alone. What can we do that the computers can never do? And the assumption was that humans alone could be “truly creative.” For a long time, computers didn’t do much to really press the issue on what that actually meant. Now they’re pressing the issue. Can they do art and poetry? Yes. Can they generate novel products that also make sense or work? Sure.

I think that’s by design. The kinds of LLMs that Silicon Valley companies have put forward are meant to appear “creative” in those conventional senses. Now, whether or not their products are meaningful or wise in a deeper sense, that’s another question. If we’re talking about art, I happen to think embodiment is an important element. Nerve endings, hormones, social instincts, morality, intellectual honesty—those are not things essential to “creativity” necessarily, but they are essential to putting things out into the world that are good, and maybe even beautiful in a certain antiquated sense. That’s why I think the question of “Can machines be ‘truly creative’?” is not that interesting, but the questions of “Can they be wise, honest, caring?” are more important if we’re going to be welcoming them into our lives as advisors and assistants. 

This interview is based on two conversations and has been edited and condensed for clarity.

Bryan Gardiner is a writer based in Oakland, California.

Longevity clinics around the world are selling unproven treatments

The quest for long, healthy life—and even immortality—is probably almost as old as humans are, but it’s never been hotter than it is right now. Today my newsfeed is full of claims about diets, exercise routines, and supplements that will help me live longer.

A lot of it is marketing fluff, of course. It should be fairly obvious that a healthy, plant-rich diet and moderate exercise will help keep you in good shape. And no drugs or supplements have yet been proved to extend human lifespan.

The growing field of longevity medicine is apparently aiming for something in between these two ends of the wellness spectrum. By combining the established tools of clinical medicine (think blood tests and scans) with some more experimental ones (tests that measure your biological age), these clinics promise to help their clients improve their health and longevity.

But a survey of longevity clinics around the world, carried out by an organization that publishes updates and research on the industry, is revealing a messier picture. In reality, these clinics—most of which cater only to the very wealthy—vary wildly in their offerings.

Today, the number of longevity clinics is thought to be somewhere in the hundreds. The proponents of these clinics say they represent the future of medicine. “We can write new rules on how we treat patients,” Eric Verdin, who directs the Buck Institute for Research on Aging, said at a professional meeting last year.

Phil Newman, who runs Longevity.Technology, a company that tracks the longevity industry, says he knows of 320 longevity clinics operating around the world. Some operate multiple centers on an international scale, while others involve a single “practitioner” incorporating some element of “longevity” into the treatments offered, he says. To get a better idea of what these offerings might be, Newman and his colleagues conducted a survey of 82 clinics around the world, including the US, Australia, Brazil, and multiple countries in Europe and Asia.

Some of the results are not all that surprising. Three-quarters of the clinics said that most of their clients were Gen Xers, aged between 44 and 59. This makes sense—anecdotally, it’s around this age that many people start to feel the effects of aging. And research suggests that waves of molecular changes associated with aging hit us in our 40s and again in our 60s. (Longevity influencers Bryan Johnson, Andrew Huberman, and Peter Attia all fall into this age group too.)

And I wasn’t surprised to see that plenty of clinics are offering aesthetic treatments, focusing more on how old their clients look. Of the clinics surveyed, 28% said they offered Botox injections, 35% offered hair loss treatments, and 38% offered “facial rejuvenation procedures.” “The distinction between longevity medicine and aesthetic medicine remains blurred,” Andrea Maier of the National University of Singapore, cofounder of a private longevity clinic, wrote in a commentary on the report.

Maier is also former president of the Healthy Longevity Medicine Society, an organization that was set up with the aim of establishing clinical standards and credibility for longevity clinics. Other results from the survey underline how much of a challenge this will be; many clinics are still offering unproven treatments. Over a third of the clinics said they offered stem-cell treatments, for example. There is no evidence that those treatments will help people live longer—and they are not without risk, either.

I was a little surprised to see that most of the clinics are also offering prescription medicines off label. In other words, drugs that have been approved for specific medical issues are apparently being prescribed for aging instead. This is also not without risks—all medicines have side effects. And, again, none of them have been proved to slow or reverse human aging.

And these prescriptions are coming from certified medical doctors. More than 80% of clinics reported that their practice was overseen by a medical doctor with more than 10 years of clinical experience.

It was also a little surprising to learn that despite their high fees, most of these clinics are not making a profit. For clients, the annual costs of attending a longevity clinic range between $10,000 and $150,000, according to Fountain Life, a company with clinics in Florida and Prague. But only 39% of the surveyed clinics said they were turning a profit and 30% said they were “approaching breaking even,” while 16% said they were operating at a loss.

Proponents of longevity clinics have high hopes for the field. They see longevity medicine as nothing short of a revolution—a move away from reactive treatments and toward proactive health maintenance. But these survey results show just how far they have to go.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The world’s biggest space-based radar will measure Earth’s forests from orbit

Forests are the second-largest carbon sink on the planet, after the oceans. To understand exactly how much carbon they trap, the European Space Agency and Airbus have built a satellite called Biomass that will use a long-prohibited band of the radio spectrum to see below the treetops around the world. It will lift off from French Guiana toward the end of April and will boast the largest space-based radar in history, a record soon to be matched by the US-India NISAR imaging satellite, due to launch later this year.

Roughly half of a tree’s dry mass is made of carbon, so getting a good measure of how much a forest weighs can tell you how much carbon dioxide it’s taken from the atmosphere. But scientists have no way of measuring that mass directly. 
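To make that arithmetic concrete: carbon converts to CO2 by the ratio of their molecular and atomic masses (44/12, about 3.67). Here is a minimal illustrative sketch in Python using the rough one-half carbon fraction mentioned above; the example biomass figure is made up.

```python
CARBON_FRACTION = 0.5         # ~half of a tree's dry mass is carbon
CO2_PER_CARBON = 44.0 / 12.0  # molecular mass of CO2 over atomic mass of C

def co2_stored(dry_biomass_tonnes: float) -> float:
    """Approximate tonnes of CO2 a stand of trees has drawn from the
    atmosphere, given an estimate of its dry biomass in tonnes."""
    return dry_biomass_tonnes * CARBON_FRACTION * CO2_PER_CARBON

# Example: a hypothetical hectare holding 300 tonnes of dry biomass
# corresponds to roughly 300 * 0.5 * 3.67, or about 550 tonnes of CO2.
print(round(co2_stored(300.0)))  # -> 550
```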

“To measure biomass, you need to cut the tree down and weigh it, which is why we use indirect measuring systems,” says Klaus Scipal, manager of the Biomass mission. 

These indirect systems rely on a combination of field sampling—foresters roaming among the trees to measure their height and diameter—and remote sensing technologies like lidar scanners, which can be flown over the forests on airplanes or drones and used to measure treetop height along lines of flight. This approach has worked well in North America and Europe, which have well-established forest management systems in place. “People know every tree there, take lots of measurements,” Scipal says. 

But most of the world’s trees are in less-mapped places, like the Amazon jungle, where less than 20% of the forest has been studied in depth on the ground. To get a sense of the biomass in those remote, mostly inaccessible areas, space-based forest sensing is the only feasible option. The problem is, the satellites we currently have in orbit are not equipped for monitoring trees. 

Tropical forests seen from space look like green plush carpets, because all we can see are the treetops; from imagery like this, we can’t tell how high or thick the trees are. Radars we have on satellites like Sentinel 1 use short radio wavelengths like those in the C band, which fall between 3.9 and 7.5 centimeters. These bounce off the leaves and smaller branches and can’t penetrate the forest all the way to the ground. 

This is why for the Biomass mission ESA went with P-band radar. P-band radio waves, which are about 10 times longer in wavelength, can see bigger branches and the trunks of trees, where most of their mass is stored. But fitting a P-band radar system on a satellite isn’t easy. The first problem is the size. 
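Wavelength is just the speed of light divided by frequency, which is where the rough factor of 10 comes from. Below is a small sketch using representative frequencies that are assumptions rather than figures from the article (Sentinel-1’s C-band radar operates near 5.4 GHz, and P-band radars sit around 435 MHz):

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_cm(freq_hz: float) -> float:
    """Wavelength in centimeters for a given radar frequency."""
    return C / freq_hz * 100.0

# Representative frequencies (assumed typical values, not from the article):
c_band = wavelength_cm(5.405e9)  # ~5.5 cm, e.g. Sentinel-1's C-band radar
p_band = wavelength_cm(435e6)    # ~69 cm, a typical P-band frequency

print(f"C band: {c_band:.1f} cm, P band: {p_band:.1f} cm")
# Longer waves pass through leaves and small branches and scatter off trunks
# and large limbs, which is why P band can "see" where a forest's mass sits.
```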

“Radar systems scale with wavelengths—the longer the wavelength, the bigger your antennas need to be. You need bigger structures,” says Scipal. To enable it to carry the P-band radar, Airbus engineers had to make the Biomass satellite two meters wide, two meters thick, and four meters tall. The antenna for the radar is 12 meters in diameter. It sits on a long, multi-joint boom, and Airbus engineers had to fold it like a giant umbrella to fit it into the Vega C rocket that will lift it into orbit. The unfolding procedure alone is going to take several days once the satellite gets to space. 

Sheer size, though, is just one reason we have generally avoided sending P-band radars to space. Operating such radar systems in space is banned by International Telecommunication Union regulations, and for a good reason: interference. 

Workers roll the BIOMASS satellite out into a cleanroom to be inspected before the launch
ESA-CNES-ARIANESPACE/OPTIQUE VIDÉO DU CSG–S. MARTIN

“The primary frequency allocation in P band is for huge SOTR [single-object-tracking radars] Americans use to detect incoming intercontinental ballistic missiles. That was, of course, a problem for us,” Scipal says. To get an exemption from the ban on space-based P-band radars, ESA had to agree to several limitations, the most painful of which was turning the Biomass radar off over North America and Europe to avoid interfering with SOTR coverage.

“This was a pity. It’s a European mission, so we wanted to do observations in Europe,” Scipal says. The rest of the world, though, is fair game.

The Biomass mission is scheduled to last five years. Calibration of the radar and other systems is going to take the first five months. After that, Biomass will enter its tomography phase, gathering data to create detailed biomass maps of the forests in India, Australia, Siberia, South America, Africa—everywhere but North America and Europe. “Tomography will work like a CT scan in a hospital. We will take images of each area from various different positions and create the 3D map of the forests,” Scipal says. 

Getting full, global coverage is expected to take 18 months. Then, for the rest of the mission, Biomass will switch to a different measurement method, capturing one full global map every nine months to measure how the condition of our forests changes over time. 

“The scientific goal here is to really understand the role of forests in the global carbon cycle. The main interest is the tropics because it’s the densest forest which is under the biggest threat of deforestation and the one we know the least about,” Scipal says.

Biomass is going to provide hectare-scale-resolution 3D maps of those tropical forests, including everything from the tree heights to ground topography—something we’ve never had before. But there are limits to what it can do. 

“One drawback is that we won’t get insights into seasonal deviations in forest throughout the year because of the time it takes for Biomass to do global coverage,” says Irena Hajnsek, a professor of Earth observation at ETH Zurich, who is not involved in the Biomass mission. And Biomass is still going to leave some of our questions about carbon sinks unanswered.

“In all our estimations of climate change, we know how much carbon is in the atmosphere, but we do not know so much about how much carbon is stored on land,” says Hajnsek. Biomass will have its limits, she says, since significant amounts of carbon are trapped in the soil in permafrost areas, which the mission won’t be able to measure.

“But we’re going to learn how much carbon is stored in the forests and also how much of it is getting released due to disturbances like deforestation or fires,” she says. “And that is going to be a huge contribution.”

This spa’s water is heated by bitcoin mining

At first glance, the Bathhouse spa in Brooklyn looks not so different from other high-end spas. What sets it apart is out of sight: a closet full of cryptocurrency-mining computers that not only generate bitcoins but also heat the spa’s pools, marble hammams, and showers. 

When cofounder Jason Goodman opened Bathhouse’s first location in Williamsburg in 2019, he used conventional pool heaters. But after diving deep into the world of bitcoin, he realized he could fit cryptocurrency mining seamlessly into his business. That’s because the process, in which special computers (called miners) make trillions of guesses per second to try to land on the string of numbers that will earn a bitcoin, consumes tremendous amounts of electricity, which in turn produces plenty of heat that usually goes to waste. 
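Those “guesses” are a brute-force hash search: vary a nonce, hash the block data, and check whether the result falls under a difficulty target. The toy Python sketch below shows the shape of that loop; real bitcoin mining uses double SHA-256 over an 80-byte block header and a vastly harder target, so this is an illustration, not the actual protocol.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Toy proof-of-work: find a nonce whose SHA-256 hash starts with
    `difficulty` zero hex digits. Real miners make this kind of attempt
    trillions of times per second on specialized hardware."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block", difficulty=4)
print(nonce, digest)
# Every attempt is electrical work done by the chips, and nearly all of it
# ends up as heat, which is what the spa captures.
```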

“I thought, ‘That’s interesting; we need heat,’” Goodman says of Bathhouse. Mining facilities typically use fans or water to cool their computers. And pools of water, of course, are a prominent feature of the spa. 

It takes six miners, each roughly the size of an Xbox One console, to maintain a hot tub at 104 °F. At Bathhouse’s Williamsburg location, miners hum away quietly inside two large tanks, tucked in a storage closet among liquor bottles and teas. To keep them cool and quiet, the units are immersed directly in non-conductive oil, which absorbs the heat they give off and is pumped through tubes beneath Bathhouse’s hot tubs and hammams. 

Mining boilers, which cool the computers by pumping in cold water that comes back out at 170 °F, are now also being used at the site. A thermal battery stores excess heat for future use. 

Goodman says his spas aren’t saving energy by using bitcoin miners for heat, but they’re also not using any more than they would with conventional water heating. “I’m just inserting miners into that chain,” he says. 

Goodman isn’t the only one to see the potential in heating with crypto. In Finland, Marathon Digital Holdings turned fleets of bitcoin miners into a district heating system to warm the homes of 80,000 residents. HeatCore, an integrated energy service provider, has used bitcoin mining to heat a commercial office building in China and to keep pools at a constant temperature for fish farming. This year it will begin a pilot project to heat seawater for desalination. On a smaller scale, bitcoin fans who also want some extra warmth can buy miners that double as space heaters. 

Crypto enthusiasts like Goodman think much more of this is coming, especially under the Trump administration, which has announced plans to create a bitcoin reserve. This prospect alarms environmentalists. 

The energy required for a single bitcoin transaction varies, but as of mid-March it was equivalent to the energy consumed by an average US household over 47.2 days, according to the Bitcoin Energy Consumption Index, run by the economist Alex de Vries. 

Among the various cryptocurrencies, bitcoin mining gobbles up the most energy by far. De Vries points out that others, like ethereum, have eliminated mining and implemented less energy-intensive algorithms. But bitcoin users resist any change to their currency, so de Vries is doubtful a shift away from mining will happen anytime soon. 

One key barrier to using bitcoin for heating, de Vries says, is that the heat can only be transported short distances before it dissipates. “I see this as something that is extremely niche,” he says. “It’s just not competitive, and you can’t make it work at a large scale.” 

The more renewable sources that are added to electric grids to replace fossil fuels, the cleaner crypto mining will become. But even if bitcoin is powered by renewable energy, “that doesn’t make it sustainable,” says Kaveh Madani, director of the United Nations University Institute for Water, Environment, and Health. Mining burns through valuable resources that could otherwise be used to meet existing energy needs, Madani says. 

For Goodman, relaxing into bitcoin-heated water is a completely justifiable use of energy. It soothes the muscles, calms the mind, and challenges current economic structures, all at the same time. 

Carrie Klein is a freelance journalist based in New York City.

A Google Gemini model now has a “dial” to adjust how much it reasons

Google DeepMind’s latest update to a top Gemini AI model includes a dial to control how much the system “thinks” through a response. The new feature is ostensibly designed to save money for developers, but it also concedes a problem: Reasoning models, the tech world’s new obsession, are prone to overthinking, burning money and energy in the process.

Since 2019, there have been a couple of tried and true ways to make an AI model more powerful. One was to make it bigger by using more training data, and the other was to give it better feedback on what constitutes a good answer. But toward the end of last year, Google DeepMind and other AI companies turned to a third method: reasoning.

“We’ve been really pushing on ‘thinking,’” says Jack Rae, a principal research scientist at DeepMind. Such models, which are built to work through problems logically and spend more time arriving at an answer, rose to prominence earlier this year with the launch of the DeepSeek R1 model. They’re attractive to AI companies because they can make an existing model better by training it to approach a problem pragmatically. That way, the companies can avoid having to build a new model from scratch. 

When the AI model dedicates more time (and energy) to a query, it costs more to run. Leaderboards of reasoning models show that one task can cost upwards of $200 to complete. The promise is that this extra time and money help reasoning models do better at handling challenging tasks, like analyzing code or gathering information from lots of documents. 

“The more you can iterate over certain hypotheses and thoughts,” says Google DeepMind chief technical officer Koray Kavukcuoglu, the more “it’s going to find the right thing.”

This isn’t true in all cases, though. “The model overthinks,” says Tulsee Doshi, who leads the product team at Gemini, referring specifically to Gemini Flash 2.5, the model released today that includes a slider for developers to dial back how much it thinks. “For simple prompts, the model does think more than it needs to.” 

When a model spends longer than necessary on a problem, it makes the model expensive to run for developers and worsens AI’s environmental footprint.

Nathan Habib, an engineer at Hugging Face who has studied the proliferation of such reasoning models, says overthinking is abundant. In the rush to show off smarter AI, companies are reaching for reasoning models like hammers even where there’s no nail in sight, Habib says. Indeed, when OpenAI announced a new model in February, it said it would be the company’s last nonreasoning model. 

The performance gain is “undeniable” for certain tasks, Habib says, but not for many others where people normally use AI. Even when reasoning is used for the right problem, things can go awry. Habib showed me an example of a leading reasoning model that was asked to work through an organic chemistry problem. It started out okay, but halfway through its reasoning process the model’s responses started resembling a meltdown: It sputtered “Wait, but …” hundreds of times. It ended up taking far longer than a nonreasoning model would spend on one task. Kate Olszewska, who works on evaluating Gemini models at DeepMind, says Google’s models can also get stuck in loops.

Google’s new “reasoning” dial is one attempt to solve that problem. For now, it’s built not for the consumer version of Gemini but for developers who are making apps. Developers can set a budget for how much computing power the model should spend on a certain problem, the idea being to turn down the dial if the task shouldn’t involve much reasoning at all. Outputs from the model are about six times more expensive to generate when reasoning is turned on.
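In practice, the dial is exposed to developers as a cap on how many “thinking” tokens the model may spend on a request. Here is a minimal sketch of what that looks like, assuming the google-genai Python SDK and its ThinkingConfig option; the parameter names, the model identifier “gemini-2.5-flash,” and the placeholder API key are assumptions drawn from Google’s developer documentation, not details from this story.

from google import genai
from google.genai import types

# Assumed: the google-genai SDK, with a placeholder API key.
client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed identifier for Gemini Flash 2.5
    contents="What is 12 x 9?",  # a simple prompt that shouldn't need much reasoning
    config=types.GenerateContentConfig(
        # Cap the number of "thinking" tokens the model may spend on this request.
        # A larger budget lets it deliberate longer on harder tasks.
        thinking_config=types.ThinkingConfig(thinking_budget=128),
    ),
)

print(response.text)

According to Google’s documentation, setting the budget to zero turns reasoning off entirely for this model, which is the lever developers would reach for on the kind of simple prompts Doshi describes.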

Another reason for this flexibility is that it’s not yet clear when more reasoning will be required to get a better answer.

“It’s really hard to draw a boundary on, like, what’s the perfect task right now for thinking?” Rae says. 

Obvious tasks include coding (developers might paste hundreds of lines of code into the model and then ask for help), or generating expert-level research reports. The dial would be turned way up for these, and developers might find the expense worth it. But more testing and feedback from developers will be needed to find out when medium or low settings are good enough.

Habib says the amount of investment in reasoning models is a sign that the old paradigm for how to make models better is changing. “Scaling laws are being replaced,” he says. 

Instead, companies are betting that the best responses will come from longer thinking times rather than bigger models. It’s been clear for several years that AI companies are spending more money on inferencing—when models are actually “pinged” to generate an answer for something—than on training, and this spending will accelerate as reasoning models take off. Inferencing is also responsible for a growing share of emissions.

(While on the subject of models that “reason” or “think”: an AI model cannot perform these acts in the way we normally use such words when talking about humans. I asked Rae why the company uses anthropomorphic language like this. “It’s allowed us to have a simple name,” he says, “and people have an intuitive sense of what it should mean.” Kavukcuoglu says that Google is not trying to mimic any particular human cognitive process in its models.)

Even if reasoning models continue to dominate, Google DeepMind isn’t the only game in town. When the results from DeepSeek began circulating in December and January, it triggered a nearly $1 trillion dip in the stock market because it promised that powerful reasoning models could be had for cheap. The model is referred to as “open weight”—in other words, its internal settings, called weights, are made publicly available, allowing developers to run it on their own rather than paying to access proprietary models from Google or OpenAI. (The term “open source” is reserved for models that disclose the data they were trained on.) 

So why use proprietary models from Google when open ones like DeepSeek are performing so well? Kavukcuoglu says that coding, math, and finance are cases where “there’s high expectation from the model to be very accurate, to be very precise, and to be able to understand really complex situations,” and he expects models that deliver on that, open or not, to win out. In DeepMind’s view, this reasoning will be the foundation of future AI models that act on your behalf and solve problems for you.

“Reasoning is the key capability that builds up intelligence,” he says. “The moment the model starts thinking, the agency of the model has started.”

This story was updated to clarify the problem of “overthinking.”

US office that counters foreign disinformation is being eliminated

The only office within the US State Department that monitors foreign disinformation is to be eliminated, according to US Secretary of State Marco Rubio, confirming reporting by MIT Technology Review.

The Counter Foreign Information Manipulation and Interference (R/FIMI) Hub is a small office in the State Department’s Office of Public Diplomacy that tracks and counters foreign disinformation campaigns. 

In shutting R/FIMI, the department’s controversial acting undersecretary, Darren Beattie, is delivering a major win to conservative critics who have alleged that the office censors conservative voices. R/FIMI was created at the end of 2024 in a reorganization of the Global Engagement Center (GEC), a larger office with a similar mission that conservatives had long accused of censoring Americans despite its international mandate. In 2023, Elon Musk called the center the “worst offender in US government censorship [and] media manipulation” and a “threat to our democracy.”

The culling of the office leaves the State Department without a way to actively counter the increasingly sophisticated disinformation campaigns from foreign governments like those of Russia, Iran, and China.

Shortly after publication, employees at R/FIMI received an email inviting them to an 11:15 a.m. meeting with Beattie, where they were told that the office and their jobs had been eliminated.

Have more information on this story or a tip for something else that we should report? Using a non-work device, reach the reporter on Signal at eileenguo.15 or tips@technologyreview.com.

Then, Secretary of State Marco Rubio confirmed our reporting in a blog post in The Federalist, which had sued GEC last year alleging that it had infringed on its freedom of speech. “It is my pleasure to announce the State Department is taking a crucial step toward keeping the president’s promise to liberate American speech by abolishing forever the body formerly known as the Global Engagement Center (GEC),” he wrote. And he told Mike Benz, a former first-term Trump official who also reportedly has alt-right views, during a YouTube interview, “We ended government-sponsored censorship in the United States through the State Department.”

Censorship claims

For years, conservative voices both in and out of government have accused Big Tech of censoring conservative views—and they often charged GEC with enabling such censorship. 

GEC had its roots as the Center for Strategic Counterterrorism Communications (CSCC), created by an Obama-era executive order, but shifted its mission to fight propaganda and disinformation from foreign governments and terrorist organizations in 2016, becoming the Global Engagement Center. It was always explicitly focused on the international information space, but some of the organizations that it funded also did work in the United States. It shut down last December after a measure to reauthorize its $61 million budget was blocked by Republicans in Congress, who accused it of helping Big Tech censor American conservative voices. 

R/FIMI had a similar goal to fight foreign disinformation, but it was smaller: the newly created office had a $51.9 million budget and a small staff that, by mid-April, was down to just 40 employees, from 125 at GEC. In a Wednesday morning meeting, those employees were told that they would be put on administrative leave and terminated within 30 days.

With the change in administrations, R/FIMI had never really gotten off the ground. Beattie, a controversial pick for undersecretary—he was fired as a speechwriter during the first Trump administration for attending a white nationalism conference, has suggested that the FBI organized the January 6 attack on Congress, and has said that it’s not worth defending Taiwan from China—had instructed the few remaining staff to be “pencils down,” one State Department official told me, meaning to pause their work.

The administration’s executive order on “countering censorship and restoring freedom of speech” reads like a summary of conservative accusations against GEC:

“Under the guise of combatting “misinformation,” “disinformation,” and “malinformation,” the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.  Government censorship of speech is intolerable in a free society.”

In 2023, The Daily Wire, founded by conservative media personality Ben Shapiro, joined The Federalist in suing GEC for allegedly infringing on the company’s First Amendment rights by funding two nonprofit organizations, the London-based Global Disinformation Index and the New York-based NewsGuard, which had labeled The Daily Wire as “unreliable,” “risky,” and/or (per GDI) susceptible to foreign disinformation. Those particular projects, however, were not funded by GEC. The lawsuit alleged that this amounted to censorship by “starving them of advertising revenue and reducing the circulation of their reporting and speech.”

In 2022, the Republican attorneys general of Missouri and Louisiana named GEC among the federal agencies that, they alleged, were pressuring social networks to censor conservative views. Though the case eventually made its way to the Supreme Court, which found no First Amendment violations, a lower court had already removed GEC’s name from the list of defendants, ruling there was “no evidence” that GEC’s communications with the social media platforms had gone beyond “educating the platforms on ‘tools and techniques used by foreign actors.’”

The stakes

The GEC—and now R/FIMI—was targeted as part of a wider campaign to shut down groups accused of being “weaponized” against conservatives. 

Conservative critics railing against what they have alternatively called a disinformation- or censorship-industrial complex have also taken aim at the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the Stanford Internet Observatory, a prominent research group that conducted widely cited research on the flows of disinformation during elections.

CISA’s former director, Chris Krebs, was personally targeted in an April 9 White House memo. And in response to the criticism and millions of dollars in legal fees, Stanford University shuttered the Stanford Internet Observatory ahead of the 2024 presidential election.

But this targeting comes at a time when foreign disinformation campaigns—especially by Russia, China, and Iran—have become increasingly sophisticated. 

According to one estimate, Russia spends $1.5 billion per year on foreign influence campaigns. In 2022, the Islamic Republic of Iran Broadcasting, that country’s primary foreign propaganda arm, had a $1.26 billion budget. And a 2015 estimate suggests that China spent up to $10 billion per year on media targeting non-Chinese foreigners—a figure that has almost certainly grown.

In September 2024, the Justice Department indicted two employees of RT, a Russian state-owned propaganda agency, in a $10 million scheme to create propaganda aimed at influencing US audiences through a media company that has since been identified as the conservative Tenet Media. 

The GEC was one effort to counter such campaigns. Its recent projects included developing AI models to detect memes and deepfakes and exposing Russian propaganda efforts to influence Latin American public opinion against the war in Ukraine.

By law, the Office of Public Diplomacy has to provide Congress with 15-day advance notice of any intent to reassign any funding allocated by Congress over $1 million. Congress then has time to respond, ask questions, and challenge the decisions—though to judge from its record with other unilateral executive-branch decisions to gut government agencies, it is unlikely to do so. 

We have reached out to the State Department for comment. 

This story was updated at 11:55am to note that R/FIMI employees have confirmed that the office closed.
This story was updated at 12:37am to include confirmation about R/FIMI’s shutdown from Marco Rubio.
This story was updated at 6:10pm to add an identifier for Mike Benz.