Coming soon: 10 Things That Matter in AI Right Now

Each year we compile our 10 Breakthrough Technologies list, featuring our educated predictions for which technologies will have the biggest impact on how we live and work.

This year, however, we had a dilemma. While our final picks encompass all our core coverage areas (energy, AI, and biotech, plus a few more), our 2026 list was harder to wrangle than normal. Why? We had so many worthy AI candidates we couldn’t fit them all in! (The ones that made it were AI companions, mechanistic interpretability, generative coding, and hyperscale data centers.) Many great ideas fell by the wayside to keep the list as wide-ranging as possible.

Well, that got us thinking: What if we made an entirely new list that was all about AI? We got excited about that idea—and before we knew it we had the beginnings of what we’re calling 10 Things That Matter in AI Right Now. It’s an entirely new annual list that we’re proud to be publishing for the first time on April 21, 2026. We’ll unveil it on stage for attendees at our signature AI conference, EmTech AI, held on MIT’s campus (it’s not too late to get tickets), and then publish the list online later that day.

The process for coming up with the list was similar to the way we pick our 10 Breakthrough Technologies. We asked our AI team of reporters and editors to propose ideas, put them all in a document, and engaged in some robust discussion. Eventually, we voted for our favorites and whittled the long list down to a final 10.

But there’s a slight difference between this list and our 10 Breakthrough Technologies. AI is already such a big part of our lives that we didn’t want to restrict ourselves to nominating only technologies. Instead, we wanted to put together a definitive annual list that highlights what we believe are the biggest ideas, topics, and research directions in AI right now. So yes, it will include cutting-edge AI technologies, but it will also feature other trends and developments in AI that we want to bring to our subscribers’ attention.

Think of it as a sneak peek inside the collective brain of our crack AI reporting team: These are the things that our reporters will be watching this year. We intend to follow the items on this list closely, and you will see that focus reflected in the news and feature stories we publish in 2026.

For us, 10 Things That Matter in AI Right Now is a guide to how we view the current AI landscape. It will be a source of discussion, debate, and maybe some arguments! We are so excited to share it with you on April 21. If you want to be among the first to see it, join us at EmTech AI or become a subscriber to livestream the announcement.

NASA is building the first nuclear-reactor-powered interplanetary spacecraft. How will it work?


MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Just before Artemis II began its historic slingshot around the moon, Jared Isaacman, the recently confirmed NASA administrator, made a flurry of announcements from the agency’s headquarters in Washington, DC. He said the US would soon undertake far more regular moon missions and establish the foundations for a base at the lunar south pole before the end of the decade. He also affirmed the space agency’s commitment to putting a nuclear reactor on the lunar surface.

These goals were largely expected—but there was still one surprise. Isaacman also said NASA would build the first-ever nuclear-reactor-powered interplanetary spacecraft and fly it to Mars by the end of 2028. It’s called the Space Reactor-1 Freedom, or SR-1 for short. “After decades of study, and billions spent on concepts that have never left Earth, America will finally get underway on nuclear power in space,” he said at the event. “We will launch the first-of-its-kind interplanetary mission.”

A successful mission would herald a new era in spaceflight, one in which traveling between Earth, the moon, and Mars would—according to a range of experts—be faster and easier than ever. And it might just give the US the edge in the race against China—allowing the country to beat its greatest geopolitical rival to landing astronauts on another planet.

While experts agree the timeline is extremely tight, they’re excited to see if America’s space agency and its industry partners can deliver an engineering miracle. “You wake up to that announcement, and it puts a big smile on your face,” says Simon Middleburgh, co-director of the Nuclear Futures Institute at Bangor University in Wales.

Little detail on SR-1 is publicly available, and NASA’s own spaceflight researchers did not respond to requests for comment. But MIT Technology Review spoke to several nuclear power and propulsion experts to find out how the new nuclear-powered spacecraft might work.

Nuclear propulsion 101

Traditionally, spaceflight has been powered by chemical propulsion. Propellants such as liquid hydrogen and liquid oxygen are mixed and ignited within a rocket; the searingly hot exhaust from this combustion is ejected through a nozzle, which propels the rocket forward.

Chemical propulsion offers a significant amount of thrust and will, for the foreseeable future, still be used to launch spacecraft from Earth. But nuclear propulsion would enable spacecraft to fly through the solar system for far longer, and faster, than is currently possible. 

“You get more bang per kilogram,” says Middleburgh. A nuclear fuel source is far more energy-dense than its conventional cousin, which means it’s orders of magnitude more efficient. “It’s really, really, really high efficiency,” says Lindsey Holmes, an expert in space nuclear technology and the vice president of advanced projects at Analytical Mechanics Associates, an aerospace company in Virginia. 
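To put rough numbers on that “more bang per kilogram” claim, here is a minimal sketch comparing the energy density of uranium fission with that of hydrogen-oxygen combustion. The figures are standard textbook ballpark values, not SR-1 specifics:

```python
# Rough energy-density comparison: uranium fission vs. chemical combustion.
# Both figures are approximate, widely cited reference values.

FISSION_U235_J_PER_KG = 8.0e13   # complete fission of U-235, per kg of fuel
CHEMICAL_H2_O2_J_PER_KG = 1.3e7  # H2/O2 combustion, per kg of propellant mix

ratio = FISSION_U235_J_PER_KG / CHEMICAL_H2_O2_J_PER_KG
print(f"Fission releases roughly {ratio:.0e} times more energy per kilogram")
# On the order of a million times more, which is where the
# "orders of magnitude" efficiency claim comes from.
```

In practice a reactor fissions only a fraction of its fuel and converts only a fraction of the heat to useful work, but even after those losses the gap remains enormous.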

The approach also removes one other element of the traditional power equation: solar. Spacecraft, including the Artemis II mission’s Orion space capsule, often rely on the sun for power. But that can be a problem: sunlight isn’t always available, particularly when a planet or moon gets in the way, and as you head toward the outer solar system, beyond Mars, there’s simply less of it.

To circumvent this issue, nuclear energy sources have been used in spacecraft plenty of times before—including on both Voyager missions and the Saturn-interrogating Cassini probe. Known as radioisotope thermoelectric generators, or RTGs, these use plutonium, which radioactively decays and generates heat in the process. That heat is then converted into electricity for the spacecraft to use. RTGs, however, aren’t the same as nuclear reactors; they are more akin to radioactive batteries—more rudimentary and considerably less powerful.
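An RTG’s output falls steadily as its plutonium decays, which is one reason it differs from a reactor you can throttle. The following sketch models that decay; the half-life of plutonium-238 is about 87.7 years, and the ~470 W launch-time figure for Voyager is approximate (real electrical output also degrades as the thermocouples age):

```python
import math

# Thermal output of an RTG falls with the radioactive decay of its
# plutonium-238 fuel: P(t) = P0 * 0.5 ** (t / half_life).

HALF_LIFE_PU238_YEARS = 87.7

def rtg_power(p0_watts: float, years_elapsed: float) -> float:
    """Power remaining after exponential decay of Pu-238 fuel."""
    return p0_watts * 0.5 ** (years_elapsed / HALF_LIFE_PU238_YEARS)

# Voyager launched in 1977; by 2026 that's roughly 49 years of decay,
# leaving a bit over two-thirds of the original output.
print(f"{rtg_power(470, 49):.0f} W remaining")
```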

So how will a nuclear-reactor-powered spacecraft work? 

Despite operational differences, the fundamentals of running a nuclear reactor in space are much the same as they are on Earth. First, get some uranium fuel; then bombard it with neutrons. This ruptures the uranium’s unstable atomic nuclei, which expel a torrent of extra neutrons—and that rapidly escalates into a self-sustaining, roasting-hot nuclear fission reaction. Its prodigious heat output can then be used to produce electricity.

Doing this in space may sound like an act of lunacy, but it’s not: The idea, and even a lot of the basic technology, has been around for decades. The Soviet Union sent dozens of nuclear reactors into orbit (often to power spy satellites), while the US deployed just one, known as SNAP-10A, back in 1965—a technological demonstration to see if it would operate normally in space. The aim was for the reactor to generate electricity for at least a year, but it ran for just over a month before a high-voltage failure in the spacecraft caused it to malfunction and shut down. 

Now, more than half a century later, the US wants its second-ever space-based nuclear reactor to do something totally different: power an interplanetary spacecraft.

To be clear, the US has started, and terminated, myriad programs looking into nuclear propulsion. The latest casualty was DRACO, a collaboration between NASA and the Department of Defense, which ended in 2025. Like several previous efforts, DRACO was canceled because of a mix of high experimentation costs, lower prices for conventional rocket propulsion, and the difficulty of ensuring that ground tests could be performed safely and effectively (such tests involve an incredibly powerful nuclear reaction, after all).

But now external considerations may be changing the calculus. The Artemis program has jump-started America’s return to the moon, and the new space race has palpable momentum behind it. The first nation to deploy nuclear propulsion would have a serious advantage navigating through deep space. 

“I think it’s a very doable technology,” says Philip Metzger, a spaceflight engineering researcher at the Florida Space Institute. “I’m happy to see them finally doing this.”

One version of this technology is known as nuclear thermal propulsion, or NTP. You start with a nuclear reactor, one that’s cooking at around 5,000°F. Then “you’ve got a cold gas, and you squirt cold gas over the hot reactor,” says Middleburgh. “The gas expands, you shoot it out the back of a nozzle, and you have an impulse. And that impulse drives you forward.” 

Because the thrust depends on the speed of the gas being ejected, the propellant gas needs to be light, making hydrogen a popular choice. But hydrogen is a corrosive and explosive substance, so using it in NTP engines can make them precarious to operate. On top of this, NTP doesn’t necessarily have a very long operating life.
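The preference for light propellant falls out of basic gas physics: for an ideal expansion, exhaust velocity scales with the square root of temperature divided by molar mass. A minimal sketch, assuming frozen-flow ideal expansion to vacuum and typical heat-capacity ratios (real nozzles achieve less):

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def ideal_exhaust_velocity(temp_k: float, molar_mass_kg: float, gamma: float) -> float:
    """Upper-bound exhaust velocity for ideal expansion of a hot gas to vacuum:
    v = sqrt(2 * gamma / (gamma - 1) * (R / M) * T)."""
    return math.sqrt(2 * gamma / (gamma - 1) * (R / molar_mass_kg) * temp_k)

T = 3033  # K, roughly the 5,000 deg F reactor temperature quoted above
v_h2 = ideal_exhaust_velocity(T, 0.002016, 1.4)    # hydrogen
v_h2o = ideal_exhaust_velocity(T, 0.018015, 1.33)  # steam, for comparison

print(f"H2: {v_h2:.0f} m/s, H2O: {v_h2o:.0f} m/s")
# Hydrogen comes out roughly three times faster at the same temperature,
# which is why NTP designs accept its corrosiveness and explosiveness.
```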

Alternatively, there’s nuclear electric propulsion, or NEP, which “is very low thrust, but very efficient, so you can use it for a long period of time,” says Sebastian Corbisiero, the US Department of Energy’s national technical director of space reactor programs. This method uses heat from a fission reactor to generate power. That power is used to electrify a gas and then blast it out of the spacecraft, generating thrust.
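The “very low thrust, very efficient, for a long time” trade can be made concrete with a back-of-the-envelope sketch. None of these figures come from NASA; the power level matches the ~20 kW reactor discussed below, while the thruster efficiency, exhaust velocity, and spacecraft mass are assumptions typical of ion-engine studies:

```python
SECONDS_PER_YEAR = 3.156e7

# Hypothetical NEP sizing sketch, constant-mass approximation.
power_w = 20_000           # electric power fed to the thruster
efficiency = 0.6           # assumed jet efficiency
exhaust_velocity = 30_000  # m/s (~specific impulse of 3,000 s)
mass_kg = 10_000           # assumed spacecraft mass

thrust = 2 * efficiency * power_w / exhaust_velocity  # from jet power P = F*v/2
accel = thrust / mass_kg
delta_v = accel * SECONDS_PER_YEAR

print(f"Thrust: {thrust:.2f} N, delta-v after one year: {delta_v:.0f} m/s")
```

Less than a newton of thrust, yet after a year of continuous firing the velocity change rivals a substantial chemical burn, with a tiny fraction of the propellant mass.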

Both NTP and NEP have been investigated by US researchers, because both have the added benefit of making it easier and safer for human beings to explore the solar system. Astronauts in space are exposed to harmful cosmic radiation, but because nuclear propulsion makes spacecraft speedier and more agile, they’d spend less time exposed to it. “It solves the radiation problem,” says Metzger. “That’s one of the main motivations for inventing better propulsion to and from Mars.”

How to build a nuclear-powered spaceship

For SR-1, NASA has opted for nuclear electric propulsion. NEP is “a much simpler affair” than its thermal counterpart, says Middleburgh. Essentially, you just need to plug a nuclear reactor into a power-and-propulsion system. Luckily for NASA, it’s already got one.

For many years, NASA—along with its space agency partners in Canada, Europe, Japan, and the Middle East—was preparing for Gateway, meant to be humanity’s first space station to orbit around the moon. Isaacman canceled the project in March, but that doesn’t mean its technology will go to waste; the power-and-propulsion element of the nixed space station will be used in SR-1 instead. This contraption was going to be powered by solar energy. It’ll now be attached to an in-development nuclear reactor custom built to survive in space.

What might the SR-1 look like? MIT Technology Review saw a presentation by Steve Sinacore, program executive of NASA’s Space Reactor Office, that offers some clues. So far, the concept art makes it look like a colossal fletched arrow. At the back will be the power-and-propulsion system, while its tip will hold a 20-kilowatt-or-greater uranium-filled nuclear reactor. (For context, a typical nuclear plant on Earth is 50,000 times more powerful, producing a gigawatt of power.) 

Image credit: NASA

The “fletches” on SR-1 are large fins that allow the reactor to cool down. “You have to have really large radiators,” says Holmes, since the nuclear fission process produces so much heat that much of it has to be vented into space—otherwise, the reactor and spacecraft will melt.
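Why the radiators have to be so large follows from the Stefan-Boltzmann law: in a vacuum, a panel can shed heat only by radiating it. A minimal sketch with hypothetical numbers (a 20 kW-electric reactor at an assumed ~20% conversion efficiency dumping roughly 80 kW of waste heat, 500 K panels, emissivity 0.9, sunlight and back-radiation ignored):

```python
STEFAN_BOLTZMANN = 5.67e-8  # W/(m^2 * K^4)

def radiator_area(waste_heat_w: float, temp_k: float,
                  emissivity: float = 0.9, sides: int = 2) -> float:
    """Panel area needed to radiate waste heat into space
    (Stefan-Boltzmann law), ignoring absorbed sunlight."""
    flux = emissivity * STEFAN_BOLTZMANN * temp_k ** 4  # W per m^2 per side
    return waste_heat_w / (flux * sides)

print(f"{radiator_area(80_000, 500):.1f} m^2 of two-sided panel")
```

Because the radiated flux scales with the fourth power of temperature, running the panels hotter shrinks them dramatically, but the panels must stay well below the reactor's own temperature for heat to flow into them at all.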

According to that presentation, the spacecraft’s hardware development is due to start this June. By January 2028, SR-1’s systems should be ready for assembly and testing. And by that October, the spacecraft will arrive at the launch site, ready for liftoff before the year’s end. Will the nuclear reactor manage to hold itself together? “Going through the launch safely is going to be a challenge,” says Middleburgh. “You are being shaken, rattled, and rolled.” 

Then, he says, “once you’re up in space, once you’ve got through that few minutes of hell in getting there, it’s zero-gravity considerations you have to worry about.” The question then becomes: Will the mechanics of the reactor, built on terra firma, still work? 

For safety reasons, the nuclear reactor will be switched on around two days post-launch, when it’s comfortably in space. Uranium isn’t tremendously dangerous by itself, but that can’t be said of the nuclear waste products that emerge when the reactor is activated, so you don’t want any of that to fall back to Earth. 

If this schedule is adhered to, and SR-1 works as planned, it’s expected to reach Mars about a year after launch. “It’s an aggressive timeline,” says Holmes, something she suspects is being driven partly by China’s and Russia’s own deep-space nuclear ambitions. The two countries aim to place their own nuclear reactor on the moon’s surface to power the planned International Lunar Research Station—a jointly operated lunar base—by 2035. 

Whether it flies or fails in space, SR-1’s operations should help NASA put a nuclear reactor on the moon soon after. “All of the things we’d be learning about how that system operates in space [are] very helpful for a surface application, because basically it’s the same,” says Corbisiero. “There’s still no air on the moon.”

And if SR-1 does triumph, it will be a game-changing victory for NASA. It will also be “a massive win for the human race, frankly,” says Middleburgh. “It will be a marvel of engineering, and it will move the dial in humans potentially taking a step on Mars.” Like many of his colleagues, including Holmes, he remains thrilled by the prospect of the first-ever nuclear-powered interplanetary spacecraft—even with the incredibly ambitious timeline. 

“These are the things that get us up in the morning,” he says. “These are the sorts of things we will remember when we’re old.”

The Download: the state of AI, and protecting bears with drones

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Want to understand the current state of AI? Check out these charts. 

If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock. Stanford’s 2026 AI Index—the field’s annual report card—cuts through the noise.  

The data reveals a technology evolving faster than we can manage. From the China-US rivalry and model breakthroughs to public sentiment and the impact on jobs, here are the index’s key findings on the state of AI today.

—Michelle Kim 

Why opinion on AI is so divided 

Stanford’s 2026 AI Index is full of striking stats. It also reveals a field riddled with inconsistencies, most notably in the gap between experts and non-experts.  

On jobs, 73% of US experts view AI’s impact positively, compared to just 23% of the public. Similar divides emerged on the economy and healthcare. What’s driving this disconnect? 

Part of the answer may lie in their diverging experiences. Those using AI for coding and technical work see it at its best, while everyone else gets a more mixed bag. The result is two very different realities. Read the full story on what they are—and why they matter.

This story is from The Algorithm, our weekly newsletter on AI. Sign up to receive it in your inbox every Monday. 

—Will Douglas Heaven 

Job titles of the future: Wildlife first responder 

Grizzly bears have made such a comeback across eastern Montana that in 2017, the state hired its first-ever prairie-based grizzly manager: wildlife biologist Wesley Sarmento.  

For seven years, Sarmento worked to keep both bears and humans out of trouble. He acted like a first responder, trying to defuse potentially dangerous situations. He even got caught in some himself, which led him to a new wildlife safety tool: drones. Find out the results of his experiments in digital ecology.
 
 —Emily Senkosky 

This article is from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands on Wednesday, April 22.  

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 Human scientists still trounce the top AI agents at complex tasks
The best agents perform only half as well as experts with PhDs. (Nature)
+ Can AI really help us discover new materials? (MIT Technology Review)

2 OpenAI is escalating its fight with Anthropic while pulling away from Microsoft
A leaked memo exposes plans to attack Anthropic. (Axios)
+ And says Microsoft “limited our ability” to reach clients. (The Information $)
+ While touting a budding alliance with Amazon. (CNBC)

3 Carbon removal technology is stalling—and that may be good news
Better solutions could now emerge. (New Scientist)
+ Here are three that are set to break through. (MIT Technology Review)

4 AI is finding bugs faster than we can fix them—and hackers will benefit
Welcome to the bug armageddon. (WSJ $)
+ AI may soon be capable of fully automated attacks. (MIT Technology Review)

5 A Texas man has been charged with the attempted murder of Sam Altman
He allegedly threw a Molotov cocktail at the OpenAI CEO’s home last Friday. (NPR)
+ The suspect reportedly had a list of other AI leaders. (NYT $)

6 AI is beginning to transform mathematics
It’s proving new results at a rapid pace. (Quanta)
+ One AI startup plans to unearth new mathematical patterns. (MIT Technology Review)

7 Students are turning away from computer science
It’s had a massive drop in enrollments. (WP $)
+ AI coding tools have diminished the degree’s value. (NYT $)

8 India’s bid to become a data center hub is sparking a fierce backlash
Farmers are protesting Delhi’s courtship of hyperscalers. (Rest of World)

9 Meta is set to overtake Google in advertising revenue this year
And become the world’s largest digital ad platform for the first time. (WSJ)

10 AI influencers are taking over Coachella
Synthetic content creators are “everywhere” at the festival. (The Verge)

Quote of the day 

“These people are almost nothing like you. They are most likely sociopathic/psychopathic and, in the case of Altman, consistently reported to be a pathological liar.” 

—The alleged firebomber of Sam Altman’s home shares his distrust of AI leaders in a blog post. 

One More Thing 

Image credit: Francesco Francavilla

We’ve never understood how hunger works. That might be about to change. 

A few years ago, Brad Lowell, a Harvard University neuroscientist, figured out how to crank the food drive to the maximum. He did it by stimulating neurons in mice. Now, he’s following known parts of the neural hunger circuits into uncharted parts of the brain.

The work could have important implications for public health. More than 1.9 billion adults worldwide are overweight, and more than 650 million are obese. Understanding the circuits involved could shed new light on why these numbers are skyrocketing. 

Read the full story

—Adam Piore 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

Top image credit: Stephanie Arnett/MIT Technology Review | Getty Images 

+ Someone built a mechanical version of Tony Hawk’s Pro Skater from Lego. 
+ Enjoy this wholesome clip of toddlers discovering the existence of hugs. 
+ This interactive body map shows exactly which exercises you need. 
+ Jon McCormack’s photos of nature’s patterns are breathtaking. 

Redefining the future of software engineering

Software engineering has experienced two seismic shifts this century. First was the rise of the open source movement, which gradually made code accessible to developers and engineers everywhere. Second, the adoption of development operations (DevOps) and agile methodologies took software from siloed to collaborative development and from batch to continuous delivery. Now, a third such shift looks to be taking shape with the adoption of agentic AI in software engineering.

Thus far, engineering teams have mainly used AI to assist with coding, testing, and other individual tasks, within tightly designed parameters. But with agentic capabilities, AI agents become reasoning, self-directing entities that can manage not just discrete tasks but entire software projects—and do so largely autonomously. If adopted and fully embraced by engineering teams, agentic AI will usher in end-to-end software process automation and, ultimately, agent-managed development and product lifecycle automation.

This report, which is based on a survey of 300 engineering and technology executives, finds that software engineering teams are seeing the potential in agentic AI and are beginning to put it to use, but so far in a mainly limited fashion. Their ambitions for it are high, but most realize it will take time and effort to reduce the barriers to its full diffusion in software operations. As with DevOps and agile, reaping the full benefits of agentic AI in engineering will require sometimes difficult organizational and process change to accompany technology adoption. But the gains to be won in speed, efficiency, and quality promise to make any such pain well worthwhile.

Key findings include the following:

Adoption momentum is building. While half of organizations deem agentic AI a top investment priority for software engineering today, it will be a leading investment for over four-fifths in two years. That spending is driving accelerated adoption. Agentic AI is in (mostly limited) use by 51% of software teams today, and 45% have plans to adopt it within the next 12 months.

Early gains will be incremental. It will take time for software teams’ investments in agentic AI to start bearing fruit. Over the next two years, most expect the improvements from agent use to be slight (14%) or at best moderate (52%). But around one-third (32%) have higher expectations, and 9% think the improvements will be game changing.

Agents will accelerate time-to-market. The chief gains from agentic AI use over that two-year time frame will come from greater speed. Nearly all respondents (98%) expect their teams’ delivery of software projects from pilot to production to accelerate, with the anticipated increase in speed averaging 37% across the group.

The goal for most is full agentic lifecycle management. Teams’ ambitions for scaling agentic AI are high. Most aim for AI agents to be managing the product development and software development lifecycles (PDLC and SDLC) end to end relatively quickly. At 41% of organizations, teams aim to achieve this for most or all products in 18 months. That figure will rise to 72% two years from now, if expectations are met.

Compute costs and integration pose key early challenges. For all survey respondents—but especially in early-adopter verticals such as media and entertainment and technology hardware—integrating agents with existing applications and the cost of computing resources are the main challenges they face with agentic AI in software engineering. The experts we interviewed, meanwhile, emphasize the bigger change management difficulties teams will face in changing workflows.

Download the report

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Big Difference in E.U., U.S. Return Rules

The rules for online product returns differ significantly in the E.U. and the U.S.

Distance buying in the European Union includes a statutory right of withdrawal. Consumers can cancel an online purchase within 14 days of delivery without giving a reason, subject to exceptions such as personalized goods and perishables. The rule applies across member states and forms part of the legal structure of ecommerce transactions.

In the United States, retailers set their own return policies. There is no federal law. Policies vary widely across merchants and categories, shaped by competition and customer expectations.

Map of E.U.

The E.U. rules for online product returns are mainly statutory.

Returns Volume

The National Retail Federation estimated in October 2025 that buyers will return 19.3% of U.S. online sales during that year.

Statista reports that E.U. customers returned around 7% of overall ecommerce revenue in 2024, with sharp variation by country. Germany leads, with 55% of online buyers returning at least one product.

The legally mandated withdrawal right in a high-return category creates a predictable but significant cost layer in the E.U. In the U.S., merchants can theoretically limit exposure.

Under the Consumer Rights Directive, E.U. merchants must issue refunds within 14 days of receiving a consumer’s withdrawal notification. However, merchants may withhold the refund until they have received the returned items or the consumer provides proof of dispatch. Merchants must process refunds using the original payment method.

The 14-day deadline pressures cash flow for businesses with high return volumes.

In the U.S., retailers determine refund timing. Most process refunds within several business days, but again, there is no statutory requirement. Payment networks settle disputes, but they too impose no universal timeframe.

Hence U.S. merchants can align refunds with operational realities and customer expectations.

Shipping Costs

E.U. consumers are typically responsible for return shipping if it’s clearly disclosed before purchase, although merchants must reimburse the original delivery cost.

Merchants in the E.U. can reduce refunds for products with diminished value through use.

In the U.S., merchants have more flexibility. They can pay return shipping to remain competitive, or not. They can impose restocking fees and deductions, or not. Amazon, notably, offers free return shipping with no additional cost to buyers.

Despite this flexibility, competitive dynamics frequently lead to similar outcomes in the two regions, though the legal frameworks remain distinct.

Return-related losses are driven less by policy and more by execution. In the E.U., failure to clearly disclose return conditions can shift cost responsibility back to the merchant.

In the U.S., generous policies can increase return rates, especially in categories where customers order multiple variations with the intention of returning part of the order.

Across both regions, reverse logistics costs extend beyond shipping. Inspection, repackaging, restocking, and potential markdowns all contribute to the total cost.

Expansion Planning

Thus merchants selling in both regions need separate return strategies. A single global policy creates either compliance risk in the E.U. or unnecessary costs in the U.S.

In the E.U., the priority is disclosure: clearly communicate the withdrawal right, return shipping responsibilities, and refund timelines before checkout. Maintain current documentation and refund workflows within the 14-day statutory window.

In the U.S., the priority is optimization: benchmark return policies against category averages, track return generosity versus conversion rates, and model return costs into pricing.

Regardless, merchants who model return costs into expansion planning strengthen their positioning versus those who treat it as an afterthought.

New Google Search Console Message Glitch Gives SEOs A Scare via @sejournal, @martinibuster

Google Search Console erroneously sent out emails to site owners advising them that Google has just started to record impressions beginning on April 12th. The implication of the message is that Search Console has not previously been collecting those impressions, which is incorrect.

Search Console Impressions

The Search Console impressions report shows how often a site appeared in Google’s search results, regardless of whether users clicked. The impressions number by itself is not the metric to focus on; the meaningful metrics are the associated keywords and their positions in the search results. These enable an SEO to identify high-value keyword performance and make better decisions about addressing performance shortcomings.

The report breaks queries down by:

1. Queries (What people searched)

2. Pages (Which URLs showed up)

3. Countries (Where searchers were located geographically)

4. Devices (Desktop, Mobile, and Tablet)

5. Search Appearance (shows if the impressions are from Rich Results, Videos, Web Light, and Merchant Listings)

Actual Search Console Reporting Errors

Google sent the following message to Search Console users:

“Google systems confirm that on April 12, 2026 we started collecting Google Search impressions for your website in Search Console. This means that pages from your website are now appearing in Google search results for some queries. Here’s how you can monitor your site’s Search performance using Search Console.”

This is an interesting message because it comes after it was disclosed that Google had been incorrectly reporting impressions since May 13, 2025. A note in a Google Support page from April 3 explained:
https://support.google.com/webmasters/answer/6211453#performance-reports-search-results-discover-google-news&zippy=%2Cperformance-reports-search-results-discover-google-news

“A logging error is preventing Search Console from accurately reporting impressions from May 13, 2025 onward. This issue will be resolved over the next few weeks; as a result, you may notice a decrease in impressions in the Search Console Performance report. Clicks and other metrics were not affected by the error, and this issue affected data logging only.”

Is today’s erroneous note related to any fixes made to the impressions report? Google’s John Mueller described it as just a glitch.

Mueller posted remarks on Bluesky about the message in response to a query about it:

“Sorry – this is just a normal glitch, unrelated to anything else.”

It’s curious because it appears that the impression reporting errors and this erroneous messaging may be related. Are they, or is it just a glitch?

Google Chrome Skills Turn Gemini Prompts Into Reusable Workflows via @sejournal, @MattGSouthern

Google announced Skills in Chrome, a new Gemini in Chrome feature that lets you save prompts and rerun them as one-click tools across selected pages and tabs.

What’s New

Skills turn a prompt you’ve already written into a saved tool you can trigger again later. After running a prompt in Gemini’s Chrome side panel, you can save it as a Skill from your chat history. The next time you need it, type a forward slash or click the plus sign in Gemini in Chrome, select the Skill, and it runs on whatever page you’re viewing.

The feature also works across tabs. You can select additional open tabs when running a Skill, which means a single saved prompt can pull information from multiple pages at once.

Google is launching a library of prebuilt Skills that includes workflows for breaking down product ingredients, comparing specs across tabs, and cross-referencing a gift budget with a recipient’s interests. You can add any library Skill to your saved collection and edit the underlying prompt to customize it.

Why This Matters

This update changes how Chrome’s AI features work together. Over the past year, Google has added page-aware prompts and multi-tab context, connected apps like Gmail and Calendar, and auto-browse for multi-step tasks. Skills add reusability to those capabilities.

A saved prompt that reads a page, compares it against two other open tabs, and drafts a summary email through a connected app is closer to a lightweight automated workflow than a chatbot conversation.

How It Helps

For SEO and marketing work, the multi-tab capability creates several possibilities. You could save a Skill that compares competitor pages against yours, or one that extracts structured data from product pages you’re auditing. A repeatable prompt that checks title tags, meta descriptions, and heading structure across client sites would save time during routine audits.

The launch categories focus on shopping, productivity, and wellness rather than developer or enterprise tools. That suggests Skills is intended more as a consumer productivity feature than as a power-user API.

Looking Ahead

Skills is the latest in a series of Chrome updates that have upgraded the browser’s AI capabilities.

Taken together, they point to Chrome becoming a more persistent AI assistant rather than a one-off side panel.


Featured Image: Google, 2026. 

Why Your Webinar Program Isn’t Working (So, Copy Ours) via @sejournal, @hethr_campbell

Five years ago, I stepped into the role of webinar ringmaster. I said yes to moderating, but it was all the background work that gave me butterflies. I’d second- and third-guess everything. Is the title right? What if it flops? Is the content level right for the attendees? Will the right attendees sign up? Is my hair ok?!

But the program grew, and you all were so welcoming. Through the years we’ve tested on you all (it’s fun for us!), and I’ve learned so many tricks that I want to give back to fellow marketers struggling to recoup the ROI of their webinar programs.

The Actual Live Webinar is the Easy Part

Just last week, I interviewed ten marketers running webinar programs right now to see where your pain really is. And I heard the same thing over and over: I’d run more webinars but the prep time and quality of leads are real struggles.

Showing up to moderate, keeping the conversation going, or delivering the content: that’s the easy part. But what I’ve come to understand after five years, 300+ sessions, and over 350,000 leads generated is that the 4-8 weeks of preparation before “live day” are the real work.

That’s what’s going to drive the right ICPs (i.e. help you reach your goals).

Topic selection. Campaign strategy. Promotion timing. Messaging that compels. Audience alignment. Lead scoring. Post-event nurture that doesn’t let a warm lead go cold. None of that happens on stage. Yet, all of it determines whether your program generates real pipeline… or just fills a calendar.

This is your opportunity to see how we do it, and ask your questions.

This session is meant to show you how you can have a fully successful, lead generating webinar program, even if you’re a lean team. We get it, we are too.

Why This Session Is Different

We’re opening up our full playbook! On April 23, I’m sitting down with my co-owner of webinar strategy (and successes), Jennifer McDonald, to talk webinars and we’re leaving plenty of time for Q&A.

You’ll get the full step-by-step walkthrough of our webinar approach that we’ve refined across hundreds of sessions.

This will be a jam-packed webinar. We’ll run through our whole secret formula, and deep dive into the processes that require the most attention (and intention) to drive lead quality over lead volume.

If you’re struggling… attendance is flat, leads aren’t converting, you can’t make the ROI case internally… this webinar about webinars is for you.

What You’ll Walk Away With

  • How to choose topics using real data so you’re attracting the right registrants
  • The session format we use to drive thought leadership over sales pitch energy, and why that distinction matters for conversion
  • A post-event nurture sequence that works on both attendees and no-shows, and keeps leads warm until they’re ready to move

This is just the first of some cool, new resources we’re bringing to the SEJ community around deeper education, and I couldn’t be more excited about it.

Whether you’re just getting started or you’ve been at this for years, there’s something in this session for you. Register free and come ready with your questions. Jennifer and I will be there live and we’ll make it worth your time.

And, you know the drill … can’t make it? Register anyway and we’ll get you the recording.

Shorter, Focused Content Wins In ChatGPT via @sejournal, @Kevin_Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

For years, SEOs have operated on a simple assumption: The more ground your content covers, the more likely it is to surface in AI-generated answers. In fact, every “best practice” in classic SEO content pushes you toward more: more subtopics, more sections, more words. Build the “ultimate guide.”

An analysis of 815,000 query-page pairs across 16,851 queries and 353,799 pages says otherwise:

  • Fan-out coverage is nearly irrelevant to citation rates.
  • Two signals actually predict whether ChatGPT cites your page.
  • Six concrete changes to your existing content library help.

1. The Study

AirOps ran 16,851 queries through the ChatGPT UI three times each, capturing every fan-out sub-query, every URL searched, every citation made, and every page scraped. Oshen Davidson built the pipeline. I analyzed the data.

Each query generates an average of two fan-out queries. ChatGPT retrieves roughly 10 URLs per sub-search, reads through them, then selects which ones to cite. We scored how well each page’s H2-H4 subheadings matched those fan-out queries using cosine similarity on bge-base-en-v1.5 embeddings. That score is what we call fan-out coverage: the share of subtopics a page addresses at a 0.80 similarity threshold. (The 0.80 threshold decides whether a subheading counts as a match to a fan-out query. Think of it as a relevance bar.)
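The coverage metric described above can be sketched in a few lines. This is a simplified illustration, not the AirOps pipeline: the toy 2-dimensional vectors stand in for real bge-base-en-v1.5 embeddings, and the function simply counts the share of fan-out queries that at least one subheading matches at or above the 0.80 bar.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def fan_out_coverage(heading_vecs, query_vecs, threshold=0.80):
    # Share of fan-out queries matched by at least one H2-H4
    # subheading at or above the similarity threshold.
    if not query_vecs:
        return 0.0
    matched = sum(
        1 for q in query_vecs
        if any(cosine(h, q) >= threshold for h in heading_vecs)
    )
    return matched / len(query_vecs)

# Toy vectors standing in for real embeddings:
# two subheadings, three fan-out queries.
headings = [[1.0, 0.0], [0.6, 0.8]]
queries = [[0.9, 0.1], [-1.0, 0.2], [0.7, 0.7]]
print(fan_out_coverage(headings, queries))  # 2 of 3 queries covered
```

With real embeddings you would encode each subheading and each fan-out query with the same model before scoring; the thresholding logic stays the same.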

The question: Do pages with higher fan-out coverage get cited more?

You’ll find even more information in the co-written AirOps report.

2. Density Barely Moves The Needle

Across 815,484 rows, the relationship between fan-out coverage and citation is weak.

Covering 100% of subtopics adds 4.6 percentage points over covering none. That gap shrinks further when you control for query match (how well the page’s best heading matches the original query). Among pages with strong query match (>= 0.80 cosine similarity):

Image Credit: Kevin Indig

Moderate coverage (26-50%) outperforms exhaustive coverage. Pages that cover everything score lower than pages that cover a quarter of the subtopics. The “ultimate guide” strategy produces worse results than a focused article that covers two to three related angles well.

3. What Actually Predicts Citation

These two signals dominate: retrieval rank and query match.

1. Retrieval rank is the strongest predictor by a wide margin. A page at position 0 in ChatGPT’s web search results (the first URL returned by its search tool) has a 58% citation rate. By position 10, that drops to 14%. We ran each prompt three times consecutively for this analysis, and pages cited in all three runs have a median retrieval rank of 2.5. Pages never cited: median rank 13.

Image Credit: Kevin Indig

2. Query match (cosine similarity between the query and the page’s best heading) is the strongest content signal. Pages with a 0.90+ heading match have a 41% citation rate compared to the 30% rate for pages below 0.50. Even among top-ranked pages (position 0-2), higher query match adds 19 percentage points.

Fan-out coverage, word count, heading count, domain authority: all secondary. Some are flat. Some are inversely correlated.

4. The Wikipedia Exception

One site type breaks the pattern. Wikipedia has the worst retrieval rank in the dataset (median 24) and the lowest query match score (0.576). It still achieves the highest citation rate: 59%.

Wikipedia pages average 4,383 words, 31 lists, and 6.6 tables. They are encyclopedic in the literal sense. ChatGPT cites Wikipedia from deep in the search results where every other site type gets ignored.

This is density working as a signal, but at a scale no publisher can replicate. Wikipedia’s content is exhaustive, richly structured, and cross-linked across millions of topics. A 3,000-word corporate blog post with 15 subheadings is not the same thing.

5. The Bimodal Reality

58% of pages retrieved by ChatGPT in this dataset are never cited. 25% are always cited when they appear. Only 17% fall in between.

The always-cited and never-cited groups look nearly identical on most content metrics: similar word counts (~2,200), similar heading counts (~20), similar readability scores (~12 FK grade), similar domain authority (~54). The on-page signals we can measure do not separate winners from losers.

What separates them is retrieval rank. Always-cited pages rank near the top when they surface. Never-cited pages rank in the bottom half. The retrieval system, whatever signals it uses internally, is the gatekeeper. Everything else is a tiebreaker.

6. What This Means For Your Content

Conventional SEO content writing wisdom says cover more subtopics, add more sections, build density. The data says the conventional approach produces “mixed” pages, the 17% in the middle that get cited sometimes and ignored other times.

Mixed pages have the highest word counts, the most headings, and the highest domain authority in the dataset. They are the “ultimate guides.” They are also the least reliable performers in ChatGPT.

The pages that win consistently are focused. They:

  • Match the query directly in their headings,
  • Tend to be shorter (the citation sweet spot is 500-2,000 words), and
  • Have enough structure (7-20 subheadings) to organize the content without diluting it.

Build the page that is the best answer to one question. Not the page that adequately answers 20.


Featured Image: Tero Vesalainen/Shutterstock; Paulo Bobita/Search Engine Journal

Google Lists 9 Scenarios That Explain How It Picks Canonical URLs via @sejournal, @martinibuster

Google’s John Mueller answered a question on Reddit about why Google picks one web page over another when multiple pages have duplicate content, also explaining why Google sometimes appears to pick the wrong URL as the canonical.

Canonical URLs

The word canonical was previously mostly used in the religious sense to describe what writings or beliefs were recognized to be authoritative. In the SEO community, the word is used to refer to which URL is the true web page when multiple web pages share the same or similar content.

Google enables site owners and SEOs to provide a hint of which URL is the canonical through an HTML attribute called rel=canonical. SEOs often refer to rel=canonical as an HTML element, but it’s not. Rel=canonical is an attribute of the link element. An HTML element is a building block for a web page; an attribute is markup that modifies the element.
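The element-versus-attribute distinction is easy to see in code. A minimal sketch using Python’s standard-library HTML parser (the example URL is hypothetical): the element is link, and rel="canonical" is just one attribute on it.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Finds the canonical URL hint in a page's HTML.

    rel="canonical" is an attribute on the <link> element,
    not an element of its own.
    """

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

html = '<head><link rel="canonical" href="https://example.com/page"></head>'
finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # https://example.com/page
```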

Why Google Picks One URL Over Another

A person on Reddit asked Mueller to provide a deeper dive on the reasons why Google picks one URL over another.

They asked:

“Hey John, can I please ask you to go a little deeper on this? Let’s say I want to understand why Google thinks two pages are duplicate and it chooses one over the other and the reason is not really in plain sight. What can one do to better understand why a page is chosen over another if they cover different topics? Like, IDK, red panda and “regular” panda 🐼. TY!!”

Mueller answered with about nine different reasons why Google chooses one page over another, including technical reasons why Google appears to get it wrong when in reality it’s sometimes due to something that the site owner or SEO overlooked.

Here are the nine reasons he cited for canonical choices:

  1. Exact duplicate content
    The pages are fully identical, leaving no meaningful signal to distinguish one URL from another.
  2. Substantial duplication in main content
    A large portion of the primary content overlaps across pages, such as the same article appearing in multiple places.
  3. Too little unique main content relative to template content
    The page’s unique content is minimal, so repeated elements like navigation, menus, or layout dominate and make pages appear effectively the same.
  4. URL parameter patterns inferred as duplicates
    When multiple parameterized URLs are known to return the same content, Google may generalize that pattern and treat similar parameter variations as duplicates.
  5. Mobile version used for comparison
    Google may evaluate the mobile version instead of the desktop version, which can lead to duplication assessments that differ from what is manually checked.
  6. Googlebot-visible version used for evaluation
    Canonical decisions are based on what Googlebot actually receives, not necessarily what users see.
  7. Serving Googlebot alternate or non-content pages
    If Googlebot is shown bot challenges, pseudo-error pages, or other generic responses, those may match previously seen content and be treated as duplicates.
  8. Failure to render JavaScript content
    When Google cannot render the page, it may rely on the base HTML shell, which can be identical across pages and trigger duplication.
  9. Ambiguity or misclassification in the system
    In some cases, a URL may be treated as duplicate simply because it appears “misplaced” or due to limitations in how the system interprets similarity.

Here’s Mueller’s complete answer:

“There is no tool that tells you why something was considered duplicate – over the years people often get a feel for it, but it’s not always obvious. Matt’s video “How does Google handle duplicate content?” is a good starter, even now.

Some of the reasons why things are considered duplicate are (these have all been mentioned in various places – duplicate content about duplicate content if you will :-)): exact duplicate (everything is duplicate), partial match (a large part is duplicate, for example, when you have the same post on two blogs; sometimes there’s also just not a lot of content to go on, for example if you have a giant menu and a tiny blog post), or – this is harder – when the URL looks like it would be duplicate based on the duplicates found elsewhere on the site (for example, if /page?tmp=1234 and /page?tmp=3458 are the same, probably /page?tmp=9339 is too — this can be tricky & end up wrong with multiple parameters, is /page?tmp=1234&city=detroit the same too? how about /page?tmp=2123&city=chicago ?).

Two reasons I’ve seen people get thrown off are: we use the mobile version (people generally check on desktop), and we use the version Googlebot sees (and if you show Googlebot a bot-challenge or some other pseudo-error-page, chances are we’ve seen that before and might consider it a duplicate). Also, we use the rendered version – but this means we need to be able to render your page if it’s using a JS framework for the content (if we can’t render it, we might take the bootstrap HTML page and, chances are it’ll be duplicate).

It happens that these systems aren’t perfect in picking duplicate content, sometimes it’s also just that the alternative URL feels obviously misplaced. Sometimes that settles down over time (as our systems recognize that things are really different), sometimes it doesn’t.

If it’s similar content then users can still find their way to it, so it’s generally not that terrible. It’s pretty rare that we end up escalating a wrong duplicate – over the years the teams have done a fantastic job with these systems; most of the weird ones are unproblematic, often it’s just some weird error page that’s hard to spot.”

Takeaway

Mueller offered a deep dive into the reasons why Google chooses canonicals. He described the process as a fuzzy sorting system built from overlapping signals, with Google comparing content, URL patterns, rendered output, and crawler-visible versions, while borderline classifications (“weird ones”) are given a pass because they don’t pose a problem.

Featured Image by Shutterstock/Garun .Prdt