NASA has made an air traffic control system for drones

On Thanksgiving weekend of 2013, Jeff Bezos, then Amazon’s CEO, took to 60 Minutes to make a stunning announcement: Amazon was a few years away from deploying drones that would deliver packages to homes in less than 30 minutes. 

It lent urgency to a problem that Parimal Kopardekar, director of the NASA Aeronautics Research Institute, had begun thinking about earlier that year.

“How do you manage and accommodate large-scale drone operations without overloading the air traffic control system?” Kopardekar, who goes by PK, recalls wondering. Busy managing all airplane takeoffs and landings, air traffic controllers clearly wouldn’t have the capacity to oversee the fleets of package-delivering drones Amazon was promising. 

The solution PK devised, which subsequently grew into a collaboration between federal agencies, researchers, and industry, is a system called unmanned-aircraft-system traffic management, or UTM. Instead of verbally communicating with air traffic controllers, drone operators using UTM share their intended flight paths with each other via a cloud-based network.

This highly scalable approach may finally open the skies to a host of commercial drone applications that have yet to materialize. (Amazon Prime Air, for example, launched in 2022 but was put on hold after crashes at a testing facility.) On any given day, only 8,500 or so unmanned aircraft fly in US airspace, the vast majority of which are used for recreational purposes rather than for services like search and rescue missions, real estate inspections, video surveillance, or farmland surveys.

One obstacle to wider use has been concern over possible midair drone-to-drone collisions. (Drones are typically restricted to airspace below 400 feet and their access to airports is limited, which significantly lowers the risk of drone-airplane collisions.) Under Federal Aviation Administration regulations, drones generally cannot fly beyond an operator’s visual line of sight, limiting flights to about a third of a mile. This prevents most collisions but also most use cases, such as delivering medication to a patient’s doorstep or dispatching a police drone to an active crime scene so first responders can better prepare before arriving.

Now, though, drone operators are increasingly incorporating UTM into their flights. The system uses path planning algorithms, like those that run in Google Maps, to chart a course that considers not only weather and obstacles like buildings and trees but the flight paths of nearby drones. It’ll automatically reroute a flight before takeoff if another drone has reserved the same volume of airspace at the same time, making the new flight trajectory visible to subsequent pilots. Drones can then fly autonomously to and from their destination, and no air traffic controller is required. 
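
Under the hood, that preflight check is a four-dimensional overlap test: two flights conflict only if their reserved volumes intersect in space and in time. Here is a minimal sketch of the idea, assuming reservations are simple boxes with a time window; the field and function names are hypothetical simplifications, not the actual UTM data model.

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    # A simplified 4D airspace reservation: horizontal bounding box,
    # altitude band, and time window.
    min_x: float
    max_x: float
    min_y: float
    max_y: float
    min_alt_ft: float
    max_alt_ft: float
    start_s: float
    end_s: float

def conflicts(a: Reservation, b: Reservation) -> bool:
    """Two reservations conflict only if they overlap on every axis:
    x, y, altitude, and time."""
    return (a.min_x < b.max_x and b.min_x < a.max_x
            and a.min_y < b.max_y and b.min_y < a.max_y
            and a.min_alt_ft < b.max_alt_ft and b.min_alt_ft < a.max_alt_ft
            and a.start_s < b.end_s and b.start_s < a.end_s)

def can_fly(requested: Reservation, shared_ledger: list[Reservation]) -> bool:
    # A real UTM service would reroute or re-time the flight rather than
    # simply refuse it; the ledger lives in the shared cloud network.
    return not any(conflicts(requested, r) for r in shared_ledger)
```

If the check fails, a planner searches for a new route or departure time and, once one clears, publishes the reservation so subsequent pilots can plan around it.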

Over the past decade, NASA and industry have demonstrated to the FAA through a series of tests that drones can safely maneuver around each other by adhering to UTM. And last summer, the agency gave the go-ahead for multiple drone delivery companies using UTM to begin flying simultaneously in the same airspace above Dallas—a first in US aviation history. Drone operators without in-house UTM capabilities have also begun licensing UTM services from FAA-approved third-party providers.

UTM only works if all participants abide by the same rules and agree to share data, and it’s enabled a level of collaboration unusual for companies competing to gain a foothold in a young, hot field, notes Peter Sachs, head of airspace integration strategy at Zipline, a drone delivery company based in South San Francisco that’s approved to use UTM. 

“We all agree that we need to collaborate on the practical, behind-the-scenes nuts and bolts to make sure that this preflight deconfliction for drones works really well,” Sachs says. (“Strategic deconfliction” is the technical term for processes that minimize drone-drone collisions.) Zipline and the drone delivery companies Wing, Flytrex, and DroneUp all operate in the Dallas area and are racing to expand to more cities, yet they disclose where they’re flying to one another in the interest of keeping the airspace conflict-free.

Greater adoption of UTM may be on the way. The FAA is expected to soon release a new rule called Part 108 that may allow operators to fly beyond visual line of sight if, among other requirements, they have some UTM capability, eliminating the need for the difficult-to-obtain waiver the agency currently requires for these flights. To safely manage this additional drone traffic, drone companies will have to continue working together to keep their aircraft out of each other’s way.

Yaakov Zinberg is a writer based in Cambridge, Massachusetts.

We need targeted policies, not blunt tariffs, to drive “American energy dominance”

President Trump and his appointees have repeatedly stressed the need to establish “American energy dominance.” 

But the White House’s profusion of executive orders and aggressive tariffs, along with its determined effort to roll back clean-energy policies, is moving the industry in the wrong direction, creating market chaos and economic uncertainty that are making it harder for both legacy players and emerging companies to invest, grow, and compete.


Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.


The current 90-day pause on rolling out most of the administration’s so-called “reciprocal” tariffs presents a critical opportunity. Rather than defaulting to broad, blunt tariffs, the administration should use this window to align trade policy with a focused industrial strategy—one aimed at winning the global race to become a manufacturing powerhouse in next-generation energy technologies. 

By tightly aligning tariff design with US strengths in R&D and recent government investments in the energy innovation lifecycle, the administration can turn a regressive trade posture into a proactive plan for economic growth and geopolitical advantage.

The president is right to point out that America is blessed with world-leading energy resources. Over the past decade, the country has grown from being a net importer to a net exporter of oil and the world’s largest producer of oil and gas. These resources are undeniably crucial to America’s ability to reindustrialize and rebuild a resilient domestic industrial base, while also providing strategic leverage abroad. 

But the world is slowly but surely moving beyond the centuries-old model of extracting and burning fossil fuels, a change driven initially by climate risks but increasingly by economic opportunities. America will achieve true energy dominance only by evolving beyond being a mere exporter of raw, greenhouse-gas-emitting energy commodities—and becoming the world’s manufacturing and innovation hub for sophisticated, high-value energy technologies.

Notably, the nation took a lead role in developing essential early components of the cleantech sector, including solar photovoltaics and electric vehicles. Yet too often, the fruits of that innovation—especially manufacturing jobs and export opportunities—have ended up overseas, particularly in China.

China, which is subject to Trump’s steepest tariffs and wasn’t granted any reprieve in the 90-day pause, has become the world’s dominant producer of lithium-ion batteries, EVs, wind turbines, and other key components of the clean-energy transition.

Today, the US is again making exciting strides in next-generation technologies, including fusion energy, clean steel, advanced batteries, industrial heat pumps, and thermal energy storage. These advances can transform industrial processes, cut emissions, improve air quality, and maximize the strategic value of our fossil-fuel resources. That means not simply burning them for their energy content, but instead using them as feedstocks for higher-value materials and chemicals that power the modern economy.

The US’s leading role in energy innovation didn’t develop by accident. For several decades, legislators on both sides of the political divide supported increasing government investments in energy innovation—from basic research at national labs and universities to applied R&D through ARPA-E and, more recently, to the creation of the Office of Clean Energy Demonstrations, which funds first-of-a-kind technology deployments. These programs have laid the foundation for the technologies we need—not just to meet climate goals, but to achieve global competitiveness.

Early-stage companies in competitive, global industries like energy do need extra support to help them get to the point where they can stand up on their own. This is especially true for cleantech companies whose overseas rivals have much lower labor, land, and environmental compliance costs.

That’s why, for starters, the White House shouldn’t work to eliminate federal investments made in these sectors under the Bipartisan Infrastructure Law and the Inflation Reduction Act, as it’s reportedly striving to do as part of the federal budget negotiations.

Instead, the administration and its Republican colleagues in Congress should preserve and refine these programs, which have already helped expand America’s ability to produce advanced energy products like batteries and EVs. Success should be measured not only in barrels produced or watts generated, but in dollars of goods exported, jobs created, and manufacturing capacity built.

The Trump administration should back this industrial strategy with smarter trade policy as well. Steep, sweeping tariffs won’t build long-term economic strength.

But there are certain instances where reasonable, modern, targeted tariffs can be a useful tool in supporting domestic industries or countering unfair trade practices elsewhere. That’s why we’ve seen leaders of both parties, including Presidents Biden and Obama, apply them in recent years.

Such levies can be used to protect domestic industries where we’re competing directly with geopolitical rivals like China, and where American companies need breathing room to scale and thrive. These aims can be achieved by imposing tariffs on specific strategic technologies, such as EVs and next-generation batteries.

But to be clear, targeted tariffs on a few strategic sectors are starkly different from Trump’s tariffs, which now include 145% levies on most Chinese goods, a 10% “universal” tariff on other nations, and 25% fees on steel and aluminum.

Another option is implementing a broader border adjustment policy, like the Foreign Pollution Fee Act recently reintroduced by Senators Cassidy and Graham, which is designed to create a level playing field that would help clean manufacturers in the US compete with heavily polluting businesses overseas.  

Just as important, the nation must avoid counterproductive tariffs on critical raw materials like steel, aluminum, and copper or retaliatory restrictions on critical minerals—all of which are essential inputs for US manufacturing. The nation does not currently produce enough of these materials to meet demand, and it would take years to build up that capacity. Raising input costs through tariffs only slows our ability to keep or bring key industries home.

Finally, we must be strategic in how we deploy the country’s greatest asset: our workforce. Americans are among the most educated and capable workers in the world. Their time, talent, and ingenuity shouldn’t be spent assembling low-cost, low-margin consumer goods like toasters. Instead, we should focus on building cutting-edge industrial technologies that the world is demanding. These are the high-value products that support strong wages, resilient supply chains, and durable global leadership.

The worldwide demand for clean, efficient energy technologies is rising rapidly, and the US cannot afford to be left behind. The energy transition presents not just an environmental imperative but a generational opportunity for American industrial renewal.

The Trump administration has a chance to define energy dominance not just in terms of extraction, but in terms of production—of technology, of exports, of jobs, and of strategic influence. Let’s not let that opportunity slip away.

Addison Killean Stark is the chief executive and cofounder of AtmosZero, an industrial steam heat pump startup based in Loveland, Colorado. He was previously a fellow at the Department of Energy’s ARPA-E division, which funds research and development of advanced energy technologies.

How a 1980s toy robot arm inspired modern robotics

As the child of an electronic engineer, I spent a lot of time in our local Radio Shack growing up. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm.

A drawing from the patent application for the Armatron robotic arm.
COURTESY OF TAKARA TOMY

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was the rare toy that lived up to the hype printed on the front of the box. This was a legit robotic arm. You could rotate the arm to spin around its base, tilt it up and down, bend it at the “elbow” joint, rotate the “wrist,” and open and close the bright-orange articulated hand in elegant chords of movement, all using only the twistable twin joysticks.

Anyone who played with this toy will also remember the sound it made. Once you slid the power button to the On position, you heard a constant whirring sound of plastic gears turning and twisting. And if you tried to push it past its boundaries, it twitched and protested with a jarring “CLICK … CLICK … CLICK.”

It wasn’t just kids who found the Armatron so special. It was featured on the cover of the November/December 1982 issue of Robotics Age magazine, which noted that the $31.95 toy (about $96 today) had “capabilities usually found only in much more expensive experimental arms.”

Pieces of the Armatron disassembled and arranged on a table.
JIM GOLDEN

A few years ago I found my Armatron, and when I opened the case to get it working again, I was startled to find that other than the compartment for the pair of D-cell batteries, a switch, and a tiny three-volt DC motor, this thing was totally devoid of any electronic components. It was purely mechanical. Later, I found the patent drawings for the Armatron online and saw how incredibly complex the schematics of the gearbox were. This design was the work of a genius—or a madman.

The man behind the arm

I needed to know the story of this toy. I reached out to the manufacturer, Tomy (now known as Takara Tomy), which has been in business in Japan for over 100 years. It put me in touch with Hiroyuki Watanabe, a 69-year-old engineer and toy designer living in Tokyo. He’s retired now, but he worked at Tomy for 49 years, building many classic handheld electronic toys of the ’80s, including Blip, Digital Diamond, Digital Derby, and Missile Strike. Watanabe’s name can be found on 44 patents, and he was involved in bringing between 50 and 60 products to market. Watanabe answered emailed questions via video, and his responses were translated from Japanese.

“I didn’t have a period where I studied engineering professionally. Instead, I enrolled in what Japan would call a technical high school that trains technical engineers, and I actually [entered] the electrical department there,” he told me. 

Afterward, he worked at Komatsu Manufacturing—because, he said, he liked bulldozers. But in 1974, he saw that Tomy was hiring, and he wanted to make toys. “I was told that it was the No. 1 toy company in Japan, so I decided [it was worth a look],” he said. “I took a night train from Tohoku to Tokyo to take a job exam, and that’s how I ended up joining the company.”

The inspiration for the Armatron came from a newspaper clipping that Watanabe’s boss brought to him one day. “It showed an image of a [mechanical arm] holding an egg with three fingers. I think we started out thinking, ‘This is where things are heading these days, so let’s make this,’” he recalled. 

As the lead of a small team, Watanabe briefly turned his attention to another project, and by the time he returned to the robotic arm, the team had a prototype. But it was quite different from the Armatron’s final form. “The hand stuck out from the main body to the side and could only move about 90 degrees. The control panel also had six movement positions, and they were switched using six switches. I personally didn’t like that,” said Watanabe. So he went back to work.

The Armatron’s inventor, Hiroyuki Watanabe, in Tokyo in 2025
COURTESY OF TAKARA TOMY

Watanabe’s breakthrough was inspired by the radio-controlled helicopters he operated as a hobby. Holding up a radio remote controller with dual joystick controls, he told me, “This stick operation allows you to perform four movements with two arms, but I thought that if you twist this part, you can use six movements.”

Watanabe at work at Tomy in Tokyo in 1982.
COURTESY OF HIROYUKI WATANABE

“I had always wanted to create a system that could rotate 360 degrees, so I thought about how to make that system work,” he added.

Watanabe stressed that while he is listed as the Armatron’s primary inventor, it was a team effort. A designer created the case, colors, and logo, adding touches to mimic features seen on industrial robots of the time, such as the rubber tubes (which are just for looks). 

When the Armatron first came out, in 1981, robotics engineers started contacting Watanabe. “I wasn’t so much hearing from people at toy stores, but rather from researchers at university laboratories, factories, and companies that were making industrial robots,” he said. “They were quite encouraging, and we often talked together.”

The long reach of the robot at Radio Shack

The bold look and function of the Armatron made quite an impression on many young kids who would one day have a career in robotics.

One of them was Adam Borrell, a mechanical design engineer who has been building robots for 15 years at Boston Dynamics, including Petman, the YouTube-famous Atlas, and the dog-size quadruped called Spot. 

Borrell grew up a few blocks away from a Radio Shack in New York City. “If I was going to the subway station, we would walk right by Radio Shack. I would stop in and play with it and set the timer, do the challenges,” he says. “I know it was a toy, but that was a real robot.” The Armatron was the hook that lured him into Radio Shack and then sparked his lifelong interest in engineering: “I would roll pennies and use them to buy soldering irons and solder at Radio Shack.” 

Borrell had a fateful reunion with the toy while in grad school for engineering. “One of my office mates had an Armatron at his desk,” he recalls, “and it was broken. We took it apart together, and that was the first time I had seen the guts of it. 

“It had this fantastic mechanical gear train to just engage and disengage this one motor in a bunch of different ways. And it was really fascinating that it had done so much—the one little motor. And that sort of got me back thinking about industrial robot arms again.” 

Eric Paulos, a professor of electrical engineering and computer science at the University of California, Berkeley, recalls nagging his parents about what an educational gift Armatron would make. Ultimately, he succeeded in his lobbying. 

“It was just endless exploration of picking stuff up and moving it around and even just watching it move. It was mesmerizing to me. I felt like I really owned my own little robot,” he recalls. “I cherish this thing. I still have it to this day, and it’s still working.” 

The Armatron on the cover of the November/December 1982 issue of Robotics Age magazine.
PUBLIC DOMAIN

Today, Paulos builds robots and teaches his students how to build their own. He challenges them to solve problems within constraints, such as building with cardboard or Play-Doh; he believes the restrictions facing Watanabe and his team ultimately forced them to be more creative in their engineering.

It’s not very hard to draw connections between the Armatron—an impossibly analog robot—and highly advanced machines that are today learning to move in incredible new ways, powered by AI advancements like computer vision and reinforcement learning.

Paulos sees parallels between the problems he tackled as a kid with his Armatron and those that researchers are still trying to deal with today: “What happens when you pick things up and they’re too heavy, but you can sort of pick it up if you approach it from different angles? Or how do you grip things? There’s research to this day using AI to try to figure out optimal ways to grab objects that [a robot] sees in a bin or out in the world.”

While AI may be taking over the world of robotics, the field still requires engineers—builders and tinkerers who can problem-solve in the physical world. 

A page from the 1984 Radio Shack catalogue, featuring the Armatron for $31.95.
COURTESY OF RADIOSHACKCATALOGS.COM

The Armatron encouraged kids to explore these analog mechanics, a reminder that not all breakthroughs happen on a computer screen. And that hands-on curiosity hasn’t faded. Today, a new generation of fans is rediscovering the Armatron through online communities and DIY modifications. Dozens of Armatron videos are on YouTube, including one where the arm has been modified to run on steam power.

“I’m very happy to see people who love mechanisms are amazed,” Watanabe told me. “I’m really happy that there are still people out there who love our products in this way.” 

Jon Keegan writes about technology and AI and publishes Beautiful Public Data, a curated collection of government data sets (beautifulpublicdata.com).

These four charts sum up the state of AI and energy

While it’s rare to look at the news without finding some headline related to AI and energy, a lot of us are stuck waving our hands when it comes to what it all means.

Sure, you’ve probably read that AI will drive an increase in electricity demand. But how that fits into the context of the current and future grid can feel less clear from the headlines. That’s true even for people working in the field. 

A new report from the International Energy Agency digs into the details of energy and AI, and I think it’s worth looking at some of the data to help clear things up. Here are four charts from the report that sum up the crucial points about AI and energy demand.

1. AI is power hungry, and the world will need to ramp up electricity supply to meet demand. 

This point is the most obvious, but it bears repeating: AI is exploding, and it’s going to lead to higher energy demand from data centers. “AI has gone from an academic pursuit to an industry with trillions of dollars at stake,” as the IEA report’s executive summary puts it.

Data centers used less than 300 terawatt-hours of electricity in 2020. That could increase to nearly 1,000 terawatt-hours in the next five years, which is more than Japan’s total electricity consumption today.

Today, the US has about 45% of the world’s data center capacity, followed by China. Those two countries will continue to represent the overwhelming majority of capacity through 2035.  

2. The electricity needed to power data centers will largely come from fossil fuels like coal and natural gas in the near term, but nuclear and renewables could play a key role, especially after 2030.

The IEA report is relatively optimistic on the potential for renewables to power data centers, projecting that nearly half of global growth by 2035 will be met with renewables like wind and solar. (In Europe, the IEA projects, renewables will meet 85% of new demand.)

In the near term, though, natural gas and coal will also expand. An additional 175 terawatt-hours from gas will help meet demand in the next decade, largely in the US, according to the IEA’s projections. Another report, published this week by the energy consultancy BloombergNEF, suggests that fossil fuels will play an even larger role than the IEA projects, accounting for two-thirds of additional electricity generation between now and 2035.

Nuclear energy, a favorite of big tech companies looking to power operations without generating massive emissions, could start to make a dent after 2030, according to the IEA data.

3. Data centers are just a small piece of expected electricity demand growth this decade.

We should be talking more about appliances, industry, and EVs when we talk about energy! Electricity demand is on the rise from a whole host of sources: Electric vehicles, air-conditioning, and appliances will each drive more electricity demand than data centers between now and the end of the decade. In total, data centers make up a little over 8% of the electricity demand growth expected between now and 2030.

There are interesting regional effects here, though. Growing economies will see more demand from the likes of air-conditioning than from data centers. On the other hand, the US has seen relatively flat electricity demand from consumers and industry for years, so newly rising demand from high-performance computing will make up a larger chunk. 

4. Data centers tend to be clustered together and close to population centers, making them a unique challenge for the power grid.  

The grid is no stranger to facilities that use huge amounts of energy: Cement plants, aluminum smelters, and coal mines all pull a lot of power in one place. However, data centers are a unique sort of beast.

First, they tend to be closely clustered together. Globally, data centers make up about 1.5% of total electricity demand. However, in Ireland, that number is 20%, and in Virginia, it’s 25%. That trend looks likely to continue, too: Half of data centers under development in the US are in preexisting clusters.

Data centers also tend to be closer to urban areas than other energy-intensive facilities like factories and mines. 

Since data centers are close both to each other and to communities, they could have significant impacts on the regions where they’re situated, whether by bringing on more fossil fuels close to urban centers or by adding strain to the local grid. Or both.

Overall, AI and data centers more broadly are going to be a major driving force for electricity demand. It’s not the whole story, but it’s a unique part of our energy picture worth watching going forward.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

How creativity became the reigning value of our time

Americans don’t agree on much these days. Yet even at a time when consensus reality seems to be on the verge of collapse, there remains at least one quintessentially modern value we can all still get behind: creativity. 

We teach it, measure it, envy it, cultivate it, and endlessly worry about its death. And why wouldn’t we? Most of us are taught from a young age that creativity is the key to everything from finding personal fulfillment to achieving career success to solving the world’s thorniest problems. Over the years, we’ve built creative industries, creative spaces, and creative cities and populated them with an entire class of people known simply as “creatives.” We read thousands of books and articles each year that teach us how to unleash, unlock, foster, boost, and hack our own personal creativity. Then we read even more to learn how to manage and protect this precious resource. 

Given how much we obsess over it, the concept of creativity can feel like something that has always existed, a thing philosophers and artists have pondered and debated throughout the ages. While it’s a reasonable assumption, it’s one that turns out to be very wrong. As Samuel Franklin explains in his recent book, The Cult of Creativity, the first known written use of creativity didn’t actually occur until 1875, “making it an infant as far as words go.” What’s more, he writes, before about 1950, “there were approximately zero articles, books, essays, treatises, odes, classes, encyclopedia entries, or anything of the sort dealing explicitly with the subject of ‘creativity.’”

This raises some obvious questions. How exactly did we go from never talking about creativity to always talking about it? What, if anything, distinguishes creativity from other, older words, like ingenuity, cleverness, imagination, and artistry? Maybe most important: How did everyone from kindergarten teachers to mayors, CEOs, designers, engineers, activists, and starving artists come to believe that creativity isn’t just good—personally, socially, economically—but the answer to all life’s problems?

Thankfully, Franklin offers some potential answers in his book. A historian and design researcher at the Delft University of Technology in the Netherlands, he argues that the concept of creativity as we now know it emerged during the post–World War II era in America as a kind of cultural salve—a way to ease the tensions and anxieties caused by increasing conformity, bureaucracy, and suburbanization.

“Typically defined as a kind of trait or process vaguely associated with artists and geniuses but theoretically possessed by anyone and applicable to any field, [creativity] provided a way to unleash individualism within order,” he writes, “and revive the spirit of the lone inventor within the maze of the modern corporation.”

Brainstorming, a new method for encouraging creative thinking, swept corporate America in the 1950s. A response to pressure for new products and new ways of marketing them, as well as a panic over conformity, it inspired passionate debate about whether true creativity should be an individual affair or could be systematized for corporate use.
INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS

I spoke to Franklin about why we continue to be so fascinated by creativity, how Silicon Valley became the supposed epicenter of it, and what role, if any, technologies like AI might have in reshaping our relationship with it. 

I’m curious what your personal relationship to creativity was growing up. What made you want to write a book about it?

Like a lot of kids, I grew up thinking that creativity was this inherently good thing. For me—and I imagine for a lot of other people who, like me, weren’t particularly athletic or good at math and science—being creative meant you at least had some future in this world, even if it wasn’t clear what that future would entail. By the time I got into college and beyond, the conventional wisdom among the TED Talk register of thinkers—people like Daniel Pink and Richard Florida—was that creativity was actually the most important trait to have for the future. Basically, the creative people were going to inherit the Earth, and society desperately needed them if we were going to solve all of these compounding problems in the world. 

On the one hand, as someone who liked to think of himself as creative, it was hard not to be flattered by this. On the other hand, it all seemed overhyped to me. What was being sold as the triumph of the creative class wasn’t actually resulting in a more inclusive or creative world order. What’s more, some of the values embedded in what I call the cult of creativity seemed increasingly problematic—specifically, the focus on self-realization, doing what you love, and following your passion. Don’t get me wrong—it’s a beautiful vision, and I saw it work out for some people. But I also started to feel like it was just a cover for what was, economically speaking, a pretty bad turn of events for many people.

Staff members at the University of California’s Institute of Personality Assessment and Research simulate a situational procedure involving group interaction, called the Bingo Test. Researchers of the 1950s hoped to learn how factors in people’s lives and environments shaped their creative aptitude.
INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS

Nowadays, it’s quite common to bash the “follow your passion,” “hustle culture” idea. But back when I started this project, the whole move-fast-and-break-things, disrupter, innovation-economy stuff was very much unquestioned. In a way, the idea for the book came from recognizing that creativity was playing this really interesting role in connecting two worlds: this world of innovation and entrepreneurship and this more soulful, bohemian side of our culture. I wanted to better understand the history of that relationship.

When did you start thinking about creativity as a kind of cult, one that we’re all a part of?

Similar to something like the “cult of domesticity,” it was a way of describing a historical moment in which an idea or value system achieves a kind of broad, uncritical acceptance. I was finding that everyone was selling stuff based on the idea that it boosted your creativity, whether it was a new office layout, a new kind of urban design, or the “Try these five simple tricks” type of thing. 

You start to realize that nobody is bothering to ask, “Hey, uh, why do we all need to be creative again? What even is this thing, creativity?” It had become this unimpeachable value that no one, regardless of what side of the political spectrum they fell on, would even think to question. That, to me, was really unusual, and I think it signaled that something interesting was happening.

Your book highlights midcentury efforts by psychologists to turn creativity into a quantifiable mental trait and the “creative person” into an identifiable type. How did that play out? 

The short answer is: not very well. To study anything, you of course need to agree on what it is you’re looking at. Ultimately, I think these groups of psychologists were frustrated in their attempts to come up with scientific criteria that defined a creative person. One technique was to go find people who were already eminent in fields that were deemed creative—writers like Truman Capote and Norman Mailer, architects like Louis Kahn and Eero Saarinen—and just give them a battery of cognitive and psychoanalytic tests and then write up the results. This was mostly done by an outfit called the Institute of Personality Assessment and Research (IPAR) at Berkeley. Frank Barron and Don MacKinnon were the two biggest researchers in that group.

Another way psychologists went about it was to say, all right, that’s not going to be practical for coming up with a good scientific standard. We need numbers, and lots and lots of people to certify these creative criteria. This group of psychologists theorized that something called “divergent thinking” was a major component of creative accomplishment. You’ve heard of the brick test, where you’re asked to come up with many creative uses for a brick in a given amount of time? They basically gave a version of that test to Army officers, schoolchildren, rank-and-file engineers at General Electric, all kinds of people. It’s tests like those that ultimately became stand-ins for what it means to be “creative.”

Are they still used? 

When you see a headline about AI making people more creative, or actually being more creative than humans, the tests they are basing that assertion on are almost always some version of a divergent thinking test. It’s highly problematic for a number of reasons. Chief among them is the fact that these tests have never been shown to have predictive value—that’s to say, a third grader, a 21-year-old, or a 35-year-old who does really well on divergent thinking tests doesn’t seem to have any greater likelihood of being successful in creative pursuits. The whole point of developing these tests in the first place was to both identify and predict creative people. None of them have been shown to do that. 

Reading your book, I was struck by how vague and, at times, contradictory the concept of “creativity” was from the beginning. You characterize that as “a feature, not a bug.” How so?

Ask any creativity expert today what they mean by “creativity,” and they’ll tell you it’s the ability to generate something new and useful. That something could be an idea, a product, an academic paper—whatever. But the focus on novelty has remained an aspect of creativity from the beginning. It’s also what distinguishes it from other similar words, like imagination or cleverness. But you’re right: Creativity is a flexible enough concept to be used in all sorts of ways and to mean all sorts of things, many of them contradictory. I think I write in the book that the term may not be precise, but that it’s vague in precise and meaningful ways. It can be both playful and practical, artsy and technological, exceptional and pedestrian. That was and remains a big part of its appeal. 

Is that emphasis on novelty and utility a part of why Silicon Valley likes to think of itself as the new nexus for creativity?

Absolutely. The two criteria go together. In techno-solutionist, hypercapitalist milieus like Silicon Valley, novelty isn’t any good if it’s not useful (or at least marketable), and utility isn’t any good (or marketable) unless it’s also novel. That’s why they’re often dismissive of boring-but-important things like craft, infrastructure, maintenance, and incremental improvement, and why they support art—which is traditionally defined by its resistance to utility—only insofar as it’s useful as inspiration for practical technologies.

At the same time, Silicon Valley loves to wrap itself in “creativity” because of all the artsy and individualist connotations. It has very self-consciously tried to distance itself from the image of the buttoned-down engineer working for a large R&D lab of a brick-and-mortar manufacturing corporation and instead raise up the idea of a rebellious counterculture type tinkering in a garage making weightless products and experiences. That, I think, has saved it from a lot of public scrutiny.

Up until recently, we’ve tended to think of creativity as a human trait, maybe with a few exceptions from the rest of the animal world. Is AI changing that?

When people started defining creativity in the ’50s, the threat of computers automating white-collar work was already underway. They were basically saying, okay, rational and analytical thinking is no longer ours alone. What can we do that the computers can never do? And the assumption was that humans alone could be “truly creative.” For a long time, computers didn’t do much to really press the issue on what that actually meant. Now they’re pressing the issue. Can they do art and poetry? Yes. Can they generate novel products that also make sense or work? Sure.

I think that’s by design. The kinds of LLMs that Silicon Valley companies have put forward are meant to appear “creative” in those conventional senses. Now, whether or not their products are meaningful or wise in a deeper sense, that’s another question. If we’re talking about art, I happen to think embodiment is an important element. Nerve endings, hormones, social instincts, morality, intellectual honesty—those are not things essential to “creativity” necessarily, but they are essential to putting things out into the world that are good, and maybe even beautiful in a certain antiquated sense. That’s why I think the question of “Can machines be ‘truly creative’?” is not that interesting, but the questions of “Can they be wise, honest, caring?” are more important if we’re going to be welcoming them into our lives as advisors and assistants. 

This interview is based on two conversations and has been edited and condensed for clarity.

Bryan Gardiner is a writer based in Oakland, California.

Longevity clinics around the world are selling unproven treatments

The quest for long, healthy life—and even immortality—is probably almost as old as humans are, but it’s never been hotter than it is right now. Today my newsfeed is full of claims about diets, exercise routines, and supplements that will help me live longer.

A lot of it is marketing fluff, of course. It should be fairly obvious that a healthy, plant-rich diet and moderate exercise will help keep you in good shape. And no drugs or supplements have yet been proved to extend human lifespan.

The growing field of longevity medicine is apparently aiming for something in between these two ends of the wellness spectrum. By combining the established tools of clinical medicine (think blood tests and scans) with some more experimental ones (tests that measure your biological age), these clinics promise to help their clients improve their health and longevity.

But a survey of longevity clinics around the world, carried out by an organization that publishes updates and research on the industry, is revealing a messier picture. In reality, these clinics—most of which cater only to the very wealthy—vary wildly in their offerings.

Today, the number of longevity clinics is thought to be somewhere in the hundreds. The proponents of these clinics say they represent the future of medicine. “We can write new rules on how we treat patients,” Eric Verdin, who directs the Buck Institute for Research on Aging, said at a professional meeting last year.

Phil Newman, who runs Longevity.Technology, a company that tracks the longevity industry, says he knows of 320 longevity clinics operating around the world. Some operate multiple centers on an international scale, while others involve a single “practitioner” incorporating some element of “longevity” into the treatments offered, he says. To get a better idea of what these offerings might be, Newman and his colleagues conducted a survey of 82 clinics around the world, including the US, Australia, Brazil, and multiple countries in Europe and Asia.

Some of the results are not all that surprising. Three-quarters of the clinics said that most of their clients were Gen Xers, aged between 44 and 59. This makes sense—anecdotally, it’s around this age that many people start to feel the effects of aging. And research suggests that waves of molecular changes associated with aging hit us in our 40s and again in our 60s. (Longevity influencers Bryan Johnson, Andrew Huberman, and Peter Attia all fall into this age group too.)

And I wasn’t surprised to see that plenty of clinics are offering aesthetic treatments, focusing more on how old their clients look. Of the clinics surveyed, 28% said they offered Botox injections, 35% offered hair loss treatments, and 38% offered “facial rejuvenation procedures.” “The distinction between longevity medicine and aesthetic medicine remains blurred,” Andrea Maier of the National University of Singapore, cofounder of a private longevity clinic, wrote in a commentary on the report.

Maier is also former president of the Healthy Longevity Medicine Society, an organization that was set up with the aim of establishing clinical standards and credibility for longevity clinics. Other results from the survey underline how much of a challenge this will be; many clinics are still offering unproven treatments. Over a third of the clinics said they offered stem-cell treatments, for example. There is no evidence that those treatments will help people live longer—and they are not without risk, either.

I was a little surprised to see that most of the clinics are also offering prescription medicines off label. In other words, drugs that have been approved for specific medical issues are apparently being prescribed for aging instead. This is also not without risks—all medicines have side effects. And, again, none of them have been proved to slow or reverse human aging.

And these prescriptions are coming from certified medical doctors. More than 80% of clinics reported that their practice was overseen by a medical doctor with more than 10 years of clinical experience.

It was also a little surprising to learn that despite their high fees, most of these clinics are not making a profit. For clients, the annual costs of attending a longevity clinic range between $10,000 and $150,000, according to Fountain Life, a company with clinics in Florida and Prague. But only 39% of the surveyed clinics said they were turning a profit and 30% said they were “approaching breaking even,” while 16% said they were operating at a loss.

Proponents of longevity clinics have high hopes for the field. They see longevity medicine as nothing short of a revolution—a move away from reactive treatments and toward proactive health maintenance. But these survey results show just how far they have to go.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The world’s biggest space-based radar will measure Earth’s forests from orbit

Forests are the second-largest carbon sink on the planet, after the oceans. To understand exactly how much carbon they trap, the European Space Agency and Airbus have built a satellite called Biomass that will use a long-prohibited band of the radio spectrum to see below the treetops around the world. It will lift off from French Guiana toward the end of April and will boast the largest space-based radar in history, though it will soon be tied in orbit by the US-India NISAR imaging satellite, due to launch later this year.

Roughly half of a tree’s dry mass is made of carbon, so getting a good measure of how much a forest weighs can tell you how much carbon dioxide it’s taken from the atmosphere. But scientists have no way of measuring that mass directly. 
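
The conversion behind that claim is simple bookkeeping: about half the dry mass is carbon, and each tonne of carbon corresponds to 44/12 tonnes of CO2, the ratio of the molecular weights. A back-of-the-envelope sketch, with the 300-tonne hectare chosen purely for illustration:

```python
def co2_locked_up_tonnes(dry_biomass_tonnes: float) -> float:
    # ~50% of a tree's dry mass is carbon; each tonne of carbon
    # corresponds to 44/12 tonnes of CO2 drawn from the atmosphere.
    carbon_tonnes = 0.5 * dry_biomass_tonnes
    return carbon_tonnes * 44 / 12

# A hypothetical hectare of dense forest holding 300 tonnes of dry biomass:
print(co2_locked_up_tonnes(300))  # 550.0 tonnes of CO2
```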

“To measure biomass, you need to cut the tree down and weigh it, which is why we use indirect measuring systems,” says Klaus Scipal, manager of the Biomass mission. 

These indirect systems rely on a combination of field sampling—foresters roaming among the trees to measure their height and diameter—and remote sensing technologies like lidar scanners, which can be flown over the forests on airplanes or drones and used to measure treetop height along lines of flight. This approach has worked well in North America and Europe, which have well-established forest management systems in place. “People know every tree there, take lots of measurements,” Scipal says. 

But most of the world’s trees are in less-mapped places, like the Amazon jungle, where less than 20% of the forest has been studied in depth on the ground. To get a sense of the biomass in those remote, mostly inaccessible areas, space-based forest sensing is the only feasible option. The problem is, the satellites we currently have in orbit are not equipped for monitoring trees. 

Tropical forests seen from space look like green plush carpets, because all we can see are the treetops; from imagery like this, we can’t tell how high or thick the trees are. Radars we have on satellites like Sentinel-1 use short radio wavelengths like those in the C band, which fall between 3.9 and 7.5 centimeters. These bounce off the leaves and smaller branches and can’t penetrate the forest all the way to the ground.

This is why for the Biomass mission ESA went with P-band radar. P-band radio waves, which are about 10 times longer in wavelength, can see bigger branches and the trunks of trees, where most of their mass is stored. But fitting a P-band radar system on a satellite isn’t easy. The first problem is the size. 
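
Wavelength and frequency are linked by λ = c/f, so the scale of the problem is easy to check. In the sketch below, the 5.405 GHz figure for Sentinel-1 and the roughly 435 MHz figure for Biomass’s P-band radar are assumptions drawn from public band allocations, not from the mission specs quoted here.

```python
C_M_PER_S = 299_792_458  # speed of light

def wavelength_cm(freq_hz: float) -> float:
    return C_M_PER_S / freq_hz * 100

print(wavelength_cm(5.405e9))  # ~5.5 cm (C band): scatters off leaves and twigs
print(wavelength_cm(435e6))    # ~69 cm (P band): reaches trunks and large branches
```

A wavelength roughly ten times longer demands an antenna roughly ten times larger, which is what drives the 12-meter reflector described below.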

“Radar systems scale with wavelengths—the longer the wavelength, the bigger your antennas need to be. You need bigger structures,” says Scipal. To enable it to carry the P-band radar, Airbus engineers had to make the Biomass satellite two meters wide, two meters thick, and four meters tall. The antenna for the radar is 12 meters in diameter. It sits on a long, multi-joint boom, and Airbus engineers had to fold it like a giant umbrella to fit it into the Vega C rocket that will lift it into orbit. The unfolding procedure alone is going to take several days once the satellite gets to space. 

Sheer size, though, is just one reason we have generally avoided sending P-band radars to space. Operating such radar systems in space is banned by International Telecommunication Union regulations, and for a good reason: interference. 

Workers roll the Biomass satellite out into a cleanroom to be inspected before the launch.
ESA-CNES-ARIANESPACE/OPTIQUE VIDÉO DU CSG–S. MARTIN

“The primary frequency allocation in P band is for huge SOTR [single-object-tracking radars] Americans use to detect incoming intercontinental ballistic missiles. That was, of course, a problem for us,” Scipal says. To get an exemption from the ban on space-based P-band radars, ESA had to agree to several limitations, the most painful of which was turning the Biomass radar off over North America and Europe to avoid interfering with SOTR coverage.

“This was a pity. It’s a European mission, so we wanted to do observations in Europe,” Scipal says. The rest of the world, though, is fair game.

The Biomass mission is scheduled to last five years. Calibration of the radar and other systems is going to take the first five months. After that, Biomass will enter its tomography phase, gathering data to create detailed biomass maps of the forests in India, Australia, Siberia, South America, Africa—everywhere but North America and Europe. “Tomography will work like a CT scan in a hospital. We will take images of each area from various positions and create the 3D map of the forests,” Scipal says.

Getting full, global coverage is expected to take 18 months. Then, for the rest of the mission, Biomass will switch to a different measurement method, capturing one full global map every nine months to measure how the condition of our forests changes over time. 

“The scientific goal here is to really understand the role of forests in the global carbon cycle. The main interest is the tropics because it’s the densest forest which is under the biggest threat of deforestation and the one we know the least about,” Scipal says.

Biomass is going to provide hectare-scale-resolution 3D maps of those tropical forests, including everything from the tree heights to ground topography—something we’ve never had before. But there are limits to what it can do. 

“One drawback is that we won’t get insights into seasonal deviations in forests throughout the year because of the time it takes for Biomass to do global coverage,” says Irena Hajnsek, a professor of Earth observation at ETH Zurich, who is not involved in the Biomass mission. And Biomass is still going to leave some of our questions about carbon sinks unanswered.

“In all our estimations of climate change, we know how much carbon is in the atmosphere, but we do not know so much about how much carbon is stored on land,” says Hajnsek. Biomass will have its limits, she says, since significant amounts of carbon are trapped in the soil in permafrost areas, which the mission won’t be able to measure.

“But we’re going to learn how much carbon is stored in the forests and also how much of it is getting released due to disturbances like deforestation or fires,” she says. “And that is going to be a huge contribution.”

This spa’s water is heated by bitcoin mining

At first glance, the Bathhouse spa in Brooklyn looks not so different from other high-end spas. What sets it apart is out of sight: a closet full of cryptocurrency-mining computers that not only generate bitcoins but also heat the spa’s pools, marble hammams, and showers.

When cofounder Jason Goodman opened Bathhouse’s first location in Williamsburg in 2019, he used conventional pool heaters. But after diving deep into the world of bitcoin, he realized he could fit cryptocurrency mining seamlessly into his business. That’s because the process, in which special computers (called miners) make trillions of guesses per second to try to land on the string of numbers that will earn a bitcoin, consumes tremendous amounts of electricity, which in turn produces plenty of heat that usually goes to waste.
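
A toy version of that guessing game fits in a few lines. This is only a sketch of the proof-of-work idea with an artificially easy difficulty; real miners hash an 80-byte block header against a vastly harder target, which is why purpose-built chips and megawatts of electricity are involved.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Guess nonces until the double SHA-256 of header + nonce falls
    below the target. Every failed guess is wasted work -- and heat."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# A 16-bit toy difficulty succeeds after ~65,000 guesses on average;
# the real network's difficulty is astronomically higher.
print(mine(b"toy block header", 16))
```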

“I thought, ‘That’s interesting; we need heat,’” Goodman says of Bathhouse. Mining facilities typically use fans or water to cool their computers. And pools of water, of course, are a prominent feature of the spa.

It takes six miners, each roughly the size of an Xbox One console, to maintain a hot tub at 104 °F. At Bathhouse’s Williamsburg location, miners hum away quietly inside two large tanks, tucked in a storage closet among liquor bottles and teas. To keep them cool and quiet, the units are immersed directly in non-conductive oil, which absorbs the heat they give off and is pumped through tubes beneath Bathhouse’s hot tubs and hammams.

Mining boilers, which cool the computers by pumping in cold water that comes back out at 170 °F, are now also being used at the site. A thermal battery stores excess heat for future use. 

Goodman says his spas aren’t saving energy by using bitcoin miners for heat, but they’re also not using any more than they would with conventional water heating. “I’m just inserting miners into that chain,” he says. 

Goodman isn’t the only one to see the potential in heating with crypto. In Finland, Marathon Digital Holdings turned fleets of bitcoin miners into a district heating system to warm the homes of 80,000 residents. HeatCore, an integrated energy service provider, has used bitcoin mining to heat a commercial office building in China and to keep pools at a constant temperature for fish farming. This year it will begin a pilot project to heat seawater for desalination. On a smaller scale, bitcoin fans who also want some extra warmth can buy miners that double as space heaters. 

Crypto enthusiasts like Goodman think much more of this is coming, especially under the Trump administration, which has announced plans to create a bitcoin reserve. This prospect alarms environmentalists.

The energy required for a single bitcoin transaction varies, but as of mid-March it was equivalent to the energy consumed by an average US household over 47.2 days, according to the Bitcoin Energy Consumption Index, run by the economist Alex de Vries. 
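
For a rough sense of that equivalence in kilowatt-hours, here is the arithmetic, assuming the EIA’s oft-cited ballpark of about 10,800 kWh per year for an average US household (a figure assumed here, not taken from the index):

```python
HOUSEHOLD_KWH_PER_YEAR = 10_800  # assumed US average (EIA ballpark)

kwh_per_day = HOUSEHOLD_KWH_PER_YEAR / 365
print(round(kwh_per_day * 47.2))  # ~1,397 kWh for one bitcoin transaction
```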

Among the various cryptocurrencies, bitcoin mining gobbles up the most energy by far. De Vries points out that others, like ethereum, have eliminated mining and implemented less energy-intensive algorithms. But bitcoin users resist any change to their currency, so de Vries is doubtful a shift away from mining will happen anytime soon.

One key barrier to using bitcoin for heating, de Vries says, is that the heat can only be transported short distances before it dissipates. “I see this as something that is extremely niche,” he says. “It’s just not competitive, and you can’t make it work at a large scale.” 

The more renewable sources that are added to electric grids to replace fossil fuels, the cleaner crypto mining will become. But even if bitcoin is powered by renewable energy, “that doesn’t make it sustainable,” says Kaveh Madani, director of the United Nations University Institute for Water, Environment, and Health. Mining burns through valuable resources that could otherwise be used to meet existing energy needs, Madani says. 

For Goodman, relaxing into bitcoin-heated water is a completely justifiable use of energy. It soothes the muscles, calms the mind, and challenges current economic structures, all at the same time. 

Carrie Klein is a freelance journalist based in New York City.

Ecommerce Investor on Turnaround Tactics

Mehtab Bhogal is the co-founder of Karta Ventures, a Canada-based acquirer of troubled ecommerce businesses. The firm seeks companies with “issues,” such as unpaid taxes, regulatory problems, and founder disputes.

He says buying distressed companies is like salvaging a crashed car. “What are the parts worth?” he asks.

Mehtab and I recently spoke. He addressed identifying hidden value, turnaround tactics, seller concerns, and more.

The entire audio of our conversation is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Give us a quick rundown.

Mehtab Bhogal: I’m the co-founder of Karta Ventures. We invest in consumer brands in distressed situations, such as tax issues, regulatory problems, founder disputes, things like that. We move fast and write checks quickly. Our portfolio ranges from a direct-to-consumer succulent plant farm to traditional apparel companies.

Early on, we invested in companies with both income statement and balance sheet problems. Now, I prefer one or the other. We also focus on size and will shrink a company if necessary. If we’re buying a business outright, the optimal size is scaling it down to $15–$20 million in annual D2C revenue.

For example, we looked at a retailer once that had peaked at $110 million, was doing $70 million, but we believed it operated most profitably at $30–$40 million in revenue.

Bandholz: How do you find the right deals?

Bhogal: In 2018, when we began, we sent cold emails to over 2,000 companies. We used BuiltWith to analyze tech stacks and backends to estimate revenue. From there, we targeted businesses generating a few million annually. Most of our deal flow now comes from word of mouth, especially since other investors tend to avoid turnarounds.

We also invest in profitable companies with big projects. One company needed help building a new manufacturing facility, which we’re good at. If there’s value to unlock, we’re interested.

Buying distressed companies is like salvaging a crashed car: What are the parts worth, and what could a skilled mechanic do with them? We sometimes acquire the right to buy equity before full diligence. That lets us move quickly, cut costs, and create breathing room while we dig deeper. We often reduce expenses by six to seven figures within a week or so. Meanwhile, we gain insights, and the existing management determines if they want to work with us.

Bandholz: How can you make those cuts in a single week?

Bhogal: It’s all about context. We can usually tell whether growth came from good marketing or a great product.

For example, I know a founder doing 30% net margins on $30–$40 million in annual revenue. He has no idea what he’s doing on the ecommerce side. But his product is incredible: strong patents, hard to copy, perfect market fit. That’s why it works.

We’ve developed pattern recognition from working with many companies. We spot inefficiencies quickly, such as bloated teams, sloppy ad accounts, and underutilized staff. For instance, if a company needs only four raw materials, why does it have an entire supply chain team?

Or why does a CFO at a $20 million company have a huge support staff?

Founders are sometimes great at marketing but weak at finance or operations. I can log into a Google Ads account and quickly see if targeting and spend are optimized. That’s the type of stuff we jump on fast.

Bandholz: Is your goal to flip a business or hold it?

Bhogal: It depends. Sometimes we buy the business outright; other times we invest as minority shareholders with no control — both models work for us.

Take the succulent plant business we invested in back in 2018. We helped restructure debt, acquired a farm to integrate vertically, and began growing and shipping plants ourselves from Riverside, California. We’ve held that position and won’t exit unless the founder wants to. That was our agreement — get our cash out in one to two years and go from there.

Other founders want to optimize and sell in six to 12 months. That works too. The key is alignment: Everyone should have the same end goal and roadmap. If those are in place, things rarely go wrong.

Bandholz: What’s your daily focus, researching deals or operating businesses?

Bhogal: We’re hands-on. We teach teams how to manage recurring tasks. But for one-off strategic decisions, such as evaluating whether to use a 3PL or in-house fulfillment, we’re directly involved. The same goes for setting up manufacturing or optimizing marketing. We’ve performed those analyses so many times that we can quickly run the numbers.

We don’t want to be in the weeds long-term, but we’ll dive in initially to gain clarity and speed things up. We want the company to operate without needing our daily involvement. But we’re very engaged for the first few months.

Bandholz: How should founders evaluate debt financing?

Bhogal: First, understand the deal. Model your payments and liabilities. Know if there’s a personal guarantee, if the loan is secured, and what happens if revenue dips. Research lenders on PACER to review their legal history — some are reputable, others not so much. Ecommerce lenders, in particular, can be volatile. Many raised venture money and spent it recklessly.

Ask yourself: Where is this lender getting its money? Is it sustainable, or will its problems become yours in a downturn? In uncertain consumer markets, flexibility matters. We’d rather pay more for a dependable, traditional lender than risk a deal that could backfire if the economy shifts.
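
As a minimal sketch of the payment modeling Bhogal recommends, the standard amortization formula can be run in a few lines of Python; the principal, rate, and term below are placeholders, not figures from the conversation.

# Monthly payment on an amortizing loan: P * r / (1 - (1 + r)**-n),
# where r is the monthly rate and n the total number of payments.
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Placeholder example: a $500,000 loan at 9% over five years.
payment = monthly_payment(500_000, 0.09, 5)
print(f"Monthly payment: ${payment:,.2f}")
print(f"Total cost of debt: ${payment * 60 - 500_000:,.2f}")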

Bandholz: Where can people contact you?

Bhogal: Our site is KartaVentures.com. I’m on X and LinkedIn.

Competitor Analysis In Local SEO And How To Gain An Edge

In every community, multiple businesses and business types vie for prominence within a limited geographic radius.

As such, when it comes to online visibility and local SEO, competitor analysis isn’t just a best practice – it’s a necessity.

Understanding and responding to your rivals’ strategies, strengths, and weaknesses is the cornerstone of a winning local SEO campaign.

For SEO professionals, this means going beyond surface-level observations and diving deep into data to uncover actionable insights.

Let’s explore how to conduct a thorough competitor analysis and leverage any findings to gain an edge in local organic search, drive targeted traffic, and improve the bottom line.

Identifying Your Local Online Competition

Before you can analyze your competition, you need to identify them.

A business you consider a competitor offline may or may not be a competitor online, and that distinction determines whether you can learn and apply anything from how its web presence is structured.

Furthermore, at a local level – and depending on the service or product – you may very well find large players like big box stores or ecommerce offerings appearing in the search results.

Here, too, there may not be much to be learned from a tactical perspective, but you do need to understand what and who you are up against in order to develop strategies for any given keyword or topic.

Understanding who the competition is and how far ahead they may or may not be will help you determine where to focus your attention.

A good starting point for any SEO strategy is a position of strength: in other words, the areas where your business has already established some authority and visibility relative to your competitors.

If you have limited to no authority or visibility, it may be worthwhile focusing your attention elsewhere and considering paid search or social advertising strategies to bridge the gap.

A simple incognito Google search for your primary keywords in your target location will display a list of relevant businesses along with local directories and industry-specific websites, which all represent competition for your customers’ attention.

Alternatively, SEO tools like Ahrefs or Semrush will call out domains/websites, i.e., competitors found to be ranking for the same keywords your domain does.

These tools provide a wealth of content and keyword gap information, which will be used for much of the analysis outlined below.
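
As a sketch of how that gap data can be triaged, the snippet below assumes each tool has produced a CSV export with "keyword" and "position" columns; the file and column names are hypothetical.

import csv

def load_rankings(path: str) -> dict[str, int]:
    # Expects a CSV export with "keyword" and "position" columns.
    with open(path, newline="") as f:
        return {row["keyword"]: int(row["position"]) for row in csv.DictReader(f)}

ours = load_rankings("our_keywords.csv")
theirs = load_rankings("competitor_keywords.csv")

# Keywords the competitor ranks for that we do not (the content gap),
# listed best-ranking first.
gap = {kw: pos for kw, pos in theirs.items() if kw not in ours}
for kw, pos in sorted(gap.items(), key=lambda item: item[1])[:20]:
    print(f"{pos:>3}  {kw}")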

Key Areas Of Local SEO Competitor Analysis

1. On-Page SEO Analysis

Any effective competitor analysis will naturally begin with a review of competitors’ websites to see “how” they may be able to outrank you and/or what they may be trying to rank for.

Examine the primary website content of any competitors outranking your website, focusing on relevant local keywords you want to be found for.

What keywords do they use in their titles, page headings, and link anchor text? These are presumably the keywords they have optimized for.

Keep in mind that when reviewing a competitive website or content, the assumption is that it was created and published with SEO in mind. However, this may not always be the case, so don’t be surprised if your competitor’s pages are not optimized; rather, look at this as an opportunity.

Are there pages for specific neighborhoods or landmarks? In other words, are your competitors looking to target customers in areas where you are or are not?

Analyze their structured data/schema markup, which helps search engines understand the context of their content.

Tools like Google’s Rich Results Test or the Schema Markup Validator (the successor to Google’s retired Structured Data Testing Tool) can help with this. Structured data is also an important consideration in optimizing for AI search, a topic we’ll leave for another day.
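
For reference, local business markup is typically embedded as JSON-LD; here is a minimal sketch generated with Python, where every business detail is a placeholder.

import json

# Minimal LocalBusiness JSON-LD per schema.org; all details are placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Hardware Store",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
    },
    "telephone": "+1-555-555-0100",
    "url": "https://www.example.com",
}

# Embed the output on the page in a <script type="application/ld+json"> tag.
print(json.dumps(local_business, indent=2))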

Assess their website’s user experience and mobile-friendliness by running a Google PageSpeed Insights report on any of the competition’s ranking pages, and the same report on your own, to see what gaps exist.

Google prioritizes mobile-first indexing, so it goes without saying that a mobile-friendly website is essential for local SEO.
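
That comparison can also be scripted against the public PageSpeed Insights API; a minimal sketch follows (requests is a third-party library, the URLs are placeholders, and production use would add an API key and error handling).

import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def performance_score(url: str) -> float:
    # Returns the Lighthouse mobile performance score, from 0 to 1.
    resp = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": "mobile"})
    resp.raise_for_status()
    return resp.json()["lighthouseResult"]["categories"]["performance"]["score"]

# Placeholder URLs: one of your ranking pages versus a competitor's.
for page in ("https://www.example.com/services", "https://competitor.example/services"):
    print(page, performance_score(page))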

2. Google Business Profile (GBP) Optimization

A survey of SEO professionals by BrightLocal found that GBP optimization is the most valuable local SEO service, followed by creating content and web design.

For many businesses, their GBP is as (if not more) important than their website.

As such, reviewing your competitors’ GBP can reveal how often you need to post content or how many reviews you need to compete.

Your competitors’ GBP is a treasure trove of information. Analyze their chosen categories, keyword usage in business descriptions, the quality and quantity of photos and posts, and their engagement in the Q&A section.

Pay close attention to their posting frequency. Are they regularly sharing updates, offers, and events?

According to Google, “Businesses that add photos to their Business Profiles receive 42% more requests for directions on Google Maps, and 35% more clicks through to their websites than businesses that don’t.”

3. Local Citation Analysis

Citations that include your business name, address, and phone number (NAP) help strengthen local SEO because they confirm your geographic relevance to Google.

Local directory submission still very much matters when it comes to establishing local authority and visibility.

Here, too, you can conduct an incognito search and review local directories, or you can use tools like Whitespark or BrightLocal to identify where your competitors are listed.

Focus on the consistency and accuracy of their NAP information. Inconsistent citations can confuse search engines and negatively impact rankings.

A study by Moz found citation accuracy is a key factor in local search rankings.
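
At a small scale, you can audit consistency yourself by normalizing each listing’s NAP fields and flagging mismatches; a sketch, with invented sample data, follows.

import re

def norm_text(value: str) -> str:
    # Lowercase, drop punctuation, collapse whitespace. (Expanding
    # abbreviations such as "St" versus "Street" is left as a refinement.)
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", " ", value)).strip().lower()

def norm_phone(value: str) -> str:
    return re.sub(r"\D", "", value)  # compare digits only

# Invented sample listings collected from local directories.
listings = [
    {"source": "GBP", "name": "Acme Plumbing", "address": "123 Main St", "phone": "555-555-0100"},
    {"source": "Yelp", "name": "Acme Plumbing", "address": "123 Main Street", "phone": "(555) 555-0100"},
]

ref = listings[0]
for listing in listings[1:]:
    for field, norm in (("name", norm_text), ("address", norm_text), ("phone", norm_phone)):
        if norm(listing[field]) != norm(ref[field]):
            print(f"{listing['source']}: {field} differs from {ref['source']}: {listing[field]!r}")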

4. Local Link Building Analysis

As with citations, obtaining links from relevant local sources, such as local blogs, newspapers, and chambers of commerce, is highly valuable: backlinks validate both the localness and the service or product focus of a business.

Building relationships with local influencers and businesses can also help you acquire high-quality local backlinks.

Use tools like Ahrefs or Semrush to analyze your competitors’ backlink profiles. Identify their link sources and assess the quality of those links to see if it would be worthwhile to pursue the same.

Review your competitors’ websites to see if they’ve established local partnerships, and then see if those partners have linked to or mentioned them on their sites.

5. Review And Reputation Analysis

Reviews are a critical factor in terms of establishing customer trust and, by extension, local search authority and rankings.

Effective reputation management can significantly impact local SEO performance.

Analyze the volume, sentiment, and recency of your competitors’ Google, Yelp, or local/industry directory reviews.

Pay attention to how quickly, and in what manner, your competition responds to reviews, both positive and negative.
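
Given an export of a competitor’s reviews (the fields below are assumptions about what such an export contains), volume, rating, recency, and response rate are straightforward to quantify.

from datetime import date

# Invented sample of exported competitor reviews.
reviews = [
    {"date": date(2025, 3, 1), "rating": 5, "responded": True},
    {"date": date(2025, 1, 15), "rating": 2, "responded": False},
    {"date": date(2024, 11, 3), "rating": 4, "responded": True},
]

volume = len(reviews)
avg_rating = sum(r["rating"] for r in reviews) / volume
newest = max(r["date"] for r in reviews)
response_rate = sum(r["responded"] for r in reviews) / volume

print(f"{volume} reviews, {avg_rating:.1f} average, newest {newest}, "
      f"{response_rate:.0%} responded to")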

6. Local Content Strategy

Content is still king, and a well-planned and effective content marketing strategy can set a local business apart.

Creating and sharing relevant, high-quality content that your customers and prospects want to read, like, and share is key to providing expertise while building authority and trust.

In fact, it can be argued that creating content that will answer all of your customers’ questions about selecting, purchasing, and using your products and services is the basis of modern SEO, local or otherwise.

Analyze the types of content your competitors are producing. Are they creating blog posts about local events, neighborhood guides, or customer success stories?

Identify content gaps and opportunities to create unique and valuable content for your local audience.

Leveraging local news and events can create very relevant content.

Expanded Strategies To Gain An Edge Through Competitor Analysis

Identify Gaps And Opportunities

This is the foundational step in leveraging the competitive analysis you’ve done.

Your competitor analysis should reveal where your rivals are falling short. These gaps represent opportunities for you to excel and surpass them.

Don’t just note the gaps; prioritize them. Which weaknesses, if addressed, will yield the most significant impact on your local SEO?

Consider factors like search volume for related keywords and the potential for increased customer engagement.

For example, if you note that your competitors have not taken advantage of certain sub-categories in their Google Business Profile, claim those sub-categories yourself and key on them with content, such as blog posts, images, or videos, that you create and share via GBP posts and elsewhere.

Reverse Engineering Successful Strategies

Reverse engineering what your competitors have done doesn’t mean blindly copying their website, content, or campaigns.

It’s about understanding why their strategies work and adapting them to your unique business. Again, be sure to select your true online competitors validated by performance data.

Analyze the elements of their successful strategies. Is it their content, their link-building tactics, their GBP optimization, or something else?

Once you identify the key components that appear to be boosting their presence relative to yours, brainstorm ways to improve upon what they and you are doing. Focus on adding value and differentiation.

For example, a local fitness studio might observe a competitor’s blog posts on “healthy meal prep” generating significant engagement on social media.

It analyzes the competitor’s content, noting the use of high-quality images, easy-to-follow recipes, and local ingredient recommendations.

It then creates its own blog posts on the same topic, but it also includes video tutorials, printable shopping lists, and interviews with local nutritionists, providing a more comprehensive and engaging experience.

Hyperlocal Content Creation

As discussed, content built to resonate with your local audience is essential for local SEO. It can be the difference between you and your competition in terms of both organic search ranking and engagement with your customer base.

Go beyond generic content. Focus on creating content specific to your target location. This could include neighborhood guides, local event calendars, or interviews with like-minded local business owners.

The goal is to establish your business as the trusted source of local information, particularly in areas where you have unique expertise and experience.

As an example, a local bookstore creates a blog series called “Neighborhood Spotlight,” featuring interviews with residents about their reading habits and favorite books or magazines.

It also creates a “Local Author” section on its website, showcasing books by writers from the area.

Local stories about neighbors can be of real interest to residents of a community and can initiate conversations about the business both online and off.

Another example could be a local hardware store or plumber creating, publishing, and sharing “how to” videos on their website, YouTube, and GBP centered around local weather conditions.

In a northern community, one such video might be titled “How to prepare your pipes for a winter freeze in [town name],” while in the south, it becomes “How to guard against flooding in [town name] during hurricane season.”

Building Local Relationships

Networking and partnering with other local businesses and influencers can significantly boost your local authority and visibility.

You may notice thriving local businesses or competitors in your area leveraging these types of partnerships, and if so, this should be a clear signal for doing the same.

If not, this becomes an important opportunity to differentiate your business.

Building relationships takes time and effort, but that investment will pay off if the relationships are properly planned and nurtured.

Participate in local events, join local business associations, and collaborate with other like-minded businesses on joint promotions.

The goal is to create a network of local connections that can amplify your reach, credibility, and trust.

A local coffee shop partners with a nearby bakery to offer a “coffee and pastry” combo deal. The shop also collaborates with a local artist to display their artwork, creating a unique and engaging atmosphere.

All three businesses benefit from increased exposure to each other’s customer base.

In another example, a local clothing boutique sees a competitor gain traction from partnering with and supporting a local community organization.

It then looks to establish similar relationships but ensures the return on investment (ROI) of each partnership by co-creating content, running events, and providing unique, branded promotional codes and URLs for tracking engagement and sales.

Monitoring And Adapting

Local SEO is an ongoing process. You need to regularly monitor your competitors, identify any new entrants or tactics, and adapt your strategies as needed.

Track key metrics, such as keyword rankings, organic website traffic, content publishing/sharing, and review volume. The tools noted above can help you do so on a scheduled basis.
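
One lightweight way to spot movement between scheduled exports is to diff two rank snapshots; the sketch below assumes weekly CSV exports with "keyword" and "position" columns, and the file names are hypothetical.

import csv

def snapshot(path: str) -> dict[str, int]:
    with open(path, newline="") as f:
        return {row["keyword"]: int(row["position"]) for row in csv.DictReader(f)}

last_week = snapshot("rankings_last_week.csv")
this_week = snapshot("rankings_this_week.csv")

# Positive delta means the keyword moved up (a smaller position number).
for kw in sorted(set(last_week) & set(this_week)):
    delta = last_week[kw] - this_week[kw]
    if delta:
        print(f"{kw}: {last_week[kw]} -> {this_week[kw]} ({delta:+d})")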

Turning Competitor Insights Into Local SEO Success

Competitor analysis is an indispensable component of any successful local SEO strategy.

By properly identifying the competition and understanding your rivals’ strengths and weaknesses, you can identify opportunities to improve your local search visibility and stave off any threats as they arise.

Remember, SEO is a continuous process that requires ongoing monitoring and adaptation.

By leveraging the tools and strategies outlined here, you can work towards gaining and maintaining an edge.
