Discover Core Update, AI Mode Ads & Crawl Policy – SEO Pulse via @sejournal, @MattGSouthern

Welcome to this week’s SEO Pulse: updates that affect how Google ranks content in Discover, how it plans to monetize AI search, and what content you serve to bots.

Here’s what matters for you and your work.

Google Releases Discover-Only Core Update

Google launched the February 2026 Discover core update, a broad ranking change targeting the Discover feed rather than Search. The rollout may take up to two weeks.

Key Facts: The update is initially limited to English-language users in the United States. Google plans to expand it to more countries and languages, but hasn’t provided a timeline. Google described it as designed to “improve the quality of Discover overall.” Existing core update and Discover guidance apply.

Why This Matters For SEOs

Google has historically rolled Discover ranking changes into broader core updates that affected Search as well. Announcing a Discover-specific core update means rankings in the feed can now move without any corresponding change in Search results.

That distinction creates a monitoring problem. When you track performance in Search Console, you should check Discover traffic independently over the next two weeks. Traffic drops that look like a core update penalty may be Discover-only. Treating them as Search problems leads to the wrong diagnosis.
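If you use the Search Console API, you can pull the two surfaces separately rather than eyeballing the combined report. Here is a minimal sketch, assuming the google-api-python-client library, valid credentials in creds, and your own verified property in place of the example domain:

```python
# Minimal sketch: pull Search ("web") and Discover clicks separately so the two
# surfaces can be monitored independently. Assumes `creds` holds valid OAuth or
# service-account credentials and that the property URL is replaced with yours.
from googleapiclient.discovery import build

def surface_clicks(creds, prop="sc-domain:example.com",
                   start="2026-01-20", end="2026-02-19"):
    service = build("searchconsole", "v1", credentials=creds)
    totals = {}
    for surface in ("web", "discover"):
        body = {
            "startDate": start,
            "endDate": end,
            "type": surface,  # "web" = Search results, "discover" = Discover feed
            "dimensions": ["date"],
        }
        rows = service.searchanalytics().query(
            siteUrl=prop, body=body).execute().get("rows", [])
        totals[surface] = sum(row["clicks"] for row in rows)
    return totals
```

Comparing the two series day by day makes it clear whether a drop is Discover-only or shared with Search.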

Discover traffic concentration has grown for publishers. NewzDash CEO John Shehata reported that Discover accounts for roughly 68% of Google-sourced traffic to news sites. A core update targeting that surface independently raises the stakes for any publisher relying on the feed.

Read our full coverage: Google Releases Discover-Focused Core Update

Alphabet Q4 Earnings Reveal AI Mode Monetization Plans

Alphabet reported Q4 2025 earnings, showing Search revenue grew 17% to $63 billion. The call included the first detailed look at how Google plans to monetize AI Mode.

Key Facts: CEO Sundar Pichai said AI Mode queries are three times longer than traditional searches. Chief Business Officer Philipp Schindler described the resulting ad inventory as reaching queries that were “previously challenging to monetize.” Google is testing ads below AI Mode responses.

Why This Matters For SEOs

The monetization details matter more than the revenue headline. Google is treating AI Mode as additive inventory, not a replacement for traditional search ads. Longer queries create new ad surfaces that didn’t exist when users typed three-word searches. For paid search practitioners, that means new campaign territory in conversational queries.

The metrics Google celebrated on this call describe users staying on Google longer. Google framed longer AI Mode sessions as a growth driver, and the monetization infrastructure follows that logic. The tradeoff to watch is referral traffic.

AI Mode creates a seamless path from AI Overviews, as detailed in our coverage last week. The earnings data suggest Google sees that containment as part of the growth story.

Read our full coverage: Alphabet Q4 2025: AI Mode Monetization Tests And Search Revenue Growth

Mueller Pushes Back On Serving Markdown To LLM Bots

Google Search Advocate John Mueller pushed back on the idea of serving Markdown files to LLM crawlers instead of standard HTML, calling the concept “a stupid idea” on Bluesky and raising technical concerns on Reddit.

Key Facts: A developer described plans to serve raw Markdown to AI bots to reduce token usage. Mueller questioned whether LLM bots can recognize Markdown on a website as anything other than a text file, or follow its links. He asked what would happen to internal linking, headers, and navigation. On Bluesky, he was more direct, calling the conversion “a stupid idea.”

Why This Matters For SEOs

The practice exists because developers assume LLMs process Markdown more efficiently than HTML. Mueller’s response treats this as a technical problem, not an optimization. Stripping pages to Markdown can remove the structure that bots need to understand relationships between pages.

Mueller’s guidance here is consistent with his earlier advice on multi-domain crawling and crawl slumps, and it fits a pattern of drawing clear lines around bot-specific content formats. He previously compared llms.txt to the keywords meta tag, and SE Ranking’s analysis of 300,000 domains found no connection between having an llms.txt file and LLM citation rates.

Read our full coverage: Google’s Mueller Calls Markdown-For-Bots Idea ‘A Stupid Idea’

Google Files Bugs Against WooCommerce Plugins For Crawl Issues

Google’s Search Relations team said on the Search Off the Record podcast that they filed bugs against WordPress plugins. The plugins generate unnecessary crawlable URLs through action parameters like add-to-cart links.

Key Facts: Certain plugins create URLs that Googlebot discovers and attempts to crawl. The result is wasted crawl budget on pages with no search value. Google filed a bug with WooCommerce and flagged other plugin issues that remain unfixed. The team’s response targeted plugin developers rather than expecting individual sites to fix the problem.

Why This Matters For SEOs

Google intervening at the plugin level is unusual. Normally, crawl efficiency falls on individual sites. Filing bugs upstream suggests the problem is widespread enough that one-off fixes won’t solve it.

Ecommerce sites running WooCommerce should audit their plugins for URL patterns that generate crawlable action parameters. Check your crawl stats in Search Console for URLs containing cart or checkout parameters that shouldn’t be indexed.
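Server logs give you the same picture from the other direction. Below is a minimal sketch that counts Googlebot requests to WooCommerce-style action URLs in a combined-format access log; the file path and parameter names are illustrative and should be adjusted to your own server and plugins.

```python
# Minimal sketch: count Googlebot hits on WooCommerce-style action URLs in a
# combined-format access log. Paths are aggregated without query strings so
# varying product IDs collapse into one line. Parameter names are examples.
import re
from collections import Counter

ACTION_PARAMS = ("add-to-cart", "remove_item", "wc-ajax")  # illustrative only

def wasted_crawls(log_path="access.log", top_n=20):
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            if "Googlebot" not in line:
                continue
            match = re.search(r'"(?:GET|POST) (\S+)', line)
            if match and any(p in match.group(1) for p in ACTION_PARAMS):
                hits[match.group(1).split("?")[0]] += 1
    return hits.most_common(top_n)
```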

Read our full coverage: Google’s Crawl Team Filed Bugs Against WordPress Plugins

LinkedIn Shares What Worked For AI Search Visibility

LinkedIn published findings from internal testing on what drives visibility in AI-generated search results. The company reported that non-brand, awareness-driven traffic declined by up to 60% across the industry for a subset of B2B topics.

Key Facts: LinkedIn’s testing found that structured content performed better in AI citations, particularly pages with named authors, visible credentials, and clear publication dates. The company is developing new analytics to identify a traffic source for LLM-driven visits and to monitor LLM bot behavior in CMS logs.
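If you want to start approximating that kind of measurement before dedicated analytics exist, one rough approach is to classify server log lines by user agent and referrer. A minimal sketch follows; the bot tokens and referrer domains listed are illustrative examples, not a complete or authoritative list.

```python
# Minimal sketch: split log lines into LLM bot crawls (matched by user agent)
# and LLM-driven visits (matched by referrer). Token and referrer lists below
# are illustrative only and will need to be maintained as platforms change.
from collections import Counter

LLM_BOT_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot")
LLM_REFERRERS = ("chatgpt.com", "perplexity.ai",
                 "copilot.microsoft.com", "gemini.google.com")

def classify_llm_traffic(log_path="access.log"):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            if any(token in line for token in LLM_BOT_TOKENS):
                counts["llm_bot_crawl"] += 1
            elif any(ref in line for ref in LLM_REFERRERS):
                counts["llm_referred_visit"] += 1
    return counts
```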

Why This Matters For SEOs

What caught my attention is how much this overlaps with what AI platforms themselves are saying. Search Engine Journal’s Roger Montti recently interviewed Jesse Dwyer, head of communications at Perplexity. The AI platform’s own guidance on what drives citations lines up closely with what LinkedIn found. When both the cited source and the citing platform arrive at the same conclusions independently, that gives you something beyond speculation.

Read our full coverage: LinkedIn Shares What Works For AI Search Visibility

Theme Of The Week: Google Is Splitting The Dashboard

Every story this week points to the same realization. “Google” is no longer one thing to monitor.

Google is now announcing Discover core updates separately from Search core updates. AI Mode carries ad formats and checkout features that don’t exist in traditional results. Mueller drew a policy line around how bots consume content. Google filed crawl bugs upstream at the plugin level, and LinkedIn is building a separate measurement for AI-driven traffic.

A year ago, you could check one traffic graph in Search Console and get a reasonable picture. The picture now fragments across Discover, Search, AI Mode, and LLM-driven traffic. Ranking signals and update cycles differ, and the gaps between them haven’t been closed.

Top Stories Of The Week:

This week’s coverage spanned five developments across Discover updates, search monetization, crawl policy, and AI visibility.

Featured Image: Accogliente Design/Shutterstock

Microsoft’s Publisher Marketplace, Google Tag Update & Multi-Party Approvals – PPC Pulse via @sejournal, @brookeosmundson

Welcome to PPC Pulse. This week’s PPC updates come from both Microsoft and Google, all dedicated to more “behind-the-scenes” work.

Microsoft announced a new Publisher Content Marketplace, where it is starting to rethink how content is compensated amid the increased use of AI.

On the Google front, Google now says the standard tag is no longer the recommended setup. And in a rare security upgrade, Google Ads rolled out multi-party approvals to protect accounts from unauthorized activity.

Here’s what matters for advertisers and why.

Microsoft Ads Announces Publisher Content Marketplace

On February 3, Microsoft Ads and Microsoft AI introduced the Publisher Content Marketplace. The platform is designed to keep high-quality content publishers at the forefront of AI-driven experiences. The marketplace creates a new, transparent licensing system between content publishers and AI builders.

In the blog announcement, Tim Frank, corporate vice president of Microsoft AI Monetization, explained the need for this:

“The open web was built on an implicit value exchange where publishers made content accessible, and distribution channels – like search – helped people find it. That model does not translate cleanly to an AI-first world, where answers are increasingly delivered in a conversation. At the same time, much of the authoritative content lives behind paywalls or within specialized archives. As the AI web grows, publishers need sustainable, transparent ways to govern how their premium content is used and to license it when it makes the most sense.”

The platform allows publishers to define their own licensing terms and get paid based on how their content is used in AI responses. AI builders, in turn, get scalable access to licensed content without needing individual agreements with every publisher.

According to the announcement, Microsoft’s testing with Copilot showed that premium content “meaningfully improves response quality.” The marketplace includes usage-based reporting so publishers can see where their content is being used and how it’s valued.

Why This Matters For Advertisers

The launch of Publisher Content Marketplace matters less for what it does right now and more for what it signals about where AI advertising might be headed.

If premium content becomes a differentiator for AI platforms, the quality of the information feeding those systems could directly impact things like ad relevance and targeting.

For advertisers, that means the platforms with better content licensing deals may end up with better-performing ad products. It also suggests that Microsoft is betting on a future where AI answers aren’t just pulling from the open web but from curated, licensed content sources that have economic incentives to keep their information accurate and current.

Additionally, if Microsoft can differentiate Copilot’s ad inventory based on content quality while Google is still negotiating those types of relationships, it creates an opportunity for Microsoft to position itself as the premium option for certain verticals.

What PPC Professionals Are Saying

Navah Hopkins, Microsoft Ads liaison, also shared the announcement on LinkedIn and highlighted how “content ownership and respect for human autonomy are foundational to getting the AI web right.” Her perspective emphasized content quality over volume, which aligns with Microsoft’s positioning against competitors who may prioritize reach over accuracy.

Christoph Waldstein, senior client director Strategic Sales at Microsoft, also showed his support for the marketplace, stating, “Great to see so many premium partners join us to keep content quality high in an Agentic world!”

The marketplace is voluntary to join, so it will be interesting to see how many publishers opt in and whether the content licensing translates into measurable quality improvements for advertisers running on Microsoft.

Google Says Standard Tag Is No Longer The Recommended Setup

Google communicated through various channels, including YouTube Shorts and LinkedIn, that the standard tag setup is no longer the recommended configuration for advertisers.

From the sounds of it, standard client-side tagging is being phased out in favor of Google Tag Gateway or full server-side tagging setups.

Tag Gateway works by serving Google tags from your own domain instead of from Google’s servers. This approach improves data accuracy by reducing the impact of browser privacy features and ad blockers, extends cookie lifespans in restrictive browsers like Safari, and positions the tracking infrastructure as first-party rather than third-party.
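As a quick way to see which setup a page is currently running, you can check where its Google tag scripts load from. The sketch below is an assumption-heavy diagnostic, not an official check: the URL is a placeholder, and tags injected at runtime through Google Tag Manager won’t be fully visible to a plain HTML fetch.

```python
# Minimal sketch: list the hosts a page's Google tag scripts load from. Your
# own domain (or a relative path) suggests first-party serving, e.g. via Tag
# Gateway; "www.googletagmanager.com" suggests the standard third-party setup.
import re
from urllib.parse import urlparse
import requests

def google_tag_hosts(page_url="https://www.example.com/"):
    html = requests.get(page_url, timeout=10).text
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html)
    return {urlparse(src).netloc or "(relative / first-party path)"
            for src in srcs if "gtag/js" in src or "gtm.js" in src}

print(google_tag_hosts())
```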

The platform is also promoting Tag Gateway through partnerships and integrations like Webflow, which automate much of the configuration that previously required technical expertise.

With Google Ads for Webflow, marketers can now connect campaign performance to first-party data, as well as launch and optimize campaigns inside the Webflow dashboard.

Google stated that it’s bringing integrations with more platforms soon.

Why This Matters For Advertisers

The practical implication is that advertisers who haven’t upgraded their tagging infrastructure are likely seeing degraded data quality without realizing it. As browsers continue tightening privacy restrictions, that gap is likely going to widen.

Looking at Google’s choice of communication channels for this update, it feels like right now this is more of a technical “recommendation” to get more advertisers on board. My assumption is that it will become mandatory in the future.

To me, it signals that accounts that choose to run on outdated tag configurations won’t have the best data signal strength to compete in automated bidding environments where data quality has a huge impact on performance. That was also echoed in the first episode of Ads Decoded last week, where they talked a lot about data strength.

Google also touts the upgrade to Tag Gateway as “effortless,” saying advertisers can set it up with the CDN or CMS of their choice directly in Google Ads, Google Analytics, or Google Tag Manager. That removes a barrier for many small businesses and should help get more advertisers on board more quickly.

What PPC Professionals Are Saying

Most comments on Google’s LinkedIn post are in agreement with the move to Google Tag Gateway.

Alexandr Stambari, performance marketing specialist at ASBC Moldova, gave positive feedback but also raised some potential gaps in transparency that I’m sure many advertisers would also ask about:

“The move toward first-party tagging and Google tag gateway makes sense in today’s environment, especially with increasing cookie restrictions and a stronger focus on AI-driven optimization.

At the same time, it would be great to see more transparency on where the actual uplift comes from — the technology itself versus overall improvements in models and media mix. For many advertisers, the entry barrier (infrastructure, resources, and implementation clarity) is still not entirely clear.”

However, some PPCers are against using Google Tag Gateway and had been talking about it before Google posted its videos on the subject.

In a post last week, Luc Nugteren, tracking specialist, said he’s not using Google Tag Gateway because “server-side tagging offers more benefits” and because SST “isn’t restricted to Google and enables you to use a custom loader, it will help you measure more.”

Google Ads Introduces Multi-Party Approval For Account Changes

Google Ads rolled out multi-party approval (MPA), a security feature that requires a second administrator to verify high-risk account changes before they take effect. The feature was first spotted by Hana Kobzova, founder of PPCNewsFeed.com, who shared the update on LinkedIn.

Multi-party approval applies to actions like adding new users, removing existing users, or changing user roles within an account. When someone initiates one of these changes, all eligible administrators receive an in-product notification to approve or deny the request. There are no email notifications currently, which means administrators need to check the platform directly to see pending approvals.

Requests expire after 20 days if no action is taken. The system automatically blocks expired requests, and the person who initiated the change needs to restart the process if the action is still necessary. Read-only roles are exempt from the approval process.

Why This Matters For Advertisers

This seems like the right move from Google after multiple reports of account owners and agency owners having their Google Ads accounts hacked.

While it may add some extra friction in operations, it’s more of a justified annoyance in the name of security.

For agencies managing multiple client accounts, the operational impact could be significant. If every user addition or role change requires coordination between two administrators, that adds time to onboarding processes and makes emergency access requests more complicated.

The lack of email notifications is a notable gap. Administrators who don’t log into Google Ads regularly may not see pending approval requests until they’ve already expired, which could create delays for legitimate account changes. Google will likely add email support based on user feedback, but for now, it’s a manual check-in process.

The other consideration is what happens when the only other administrator is unavailable. Google’s support documentation makes it clear that support teams can’t approve or deny requests on behalf of account owners, which means if your backup admin is on vacation or no longer with the company, you’re stuck until they respond or the request expires.

What PPC Professionals Are Saying

Many advertisers seem to be in favor of this move by Google.

Dan Kabakov, founder of Online Labs, stated:

“About time Google addressed this. The account hijacking attacks over the past few months have been brutal for agencies.”

Ana Kostic, co-founder of Bigmomo, said that “it’s a bit annoying but it’s much better than the alternative,” while in the comments Fintan Riordan, founder of VouchFlow.ai said he is “glad to see Google taking this seriously.”

Theme Of The Week: Infrastructure Upgrades May Become Requirements

This week’s updates share a common thread: What used to be optional infrastructure improvements are likely becoming baseline requirements for running competitive advertising campaigns.

Microsoft’s Publisher Content Marketplace is building the foundation for how content gets licensed in an AI-first ecosystem. Google’s push away from standard tags toward Tag Gateway is (not quite) forcing advertisers to upgrade their measurement infrastructure. And multi-party approval is adding procedural safeguards that change how account administration works.

In each case, the platforms are signaling that the old way of doing things is no longer sustainable.

Featured Image: beast01/Shutterstock

This is the most misunderstood graph in AI

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Every time OpenAI, Google, or Anthropic drops a new frontier large language model, the AI community holds its breath. It doesn’t exhale until METR, an AI research nonprofit whose name stands for “Model Evaluation & Threat Research,” updates a now-iconic graph that has played a major role in the AI discourse since it was first released in March of last year. The graph suggests that certain AI capabilities are developing at an exponential rate, and more recent model releases have outperformed that already impressive trend.

That was certainly the case for Claude Opus 4.5, the latest version of Anthropic’s most powerful model, which was released in late November. In December, METR announced that Opus 4.5 appeared to be capable of independently completing a task that would have taken a human about five hours—a vast improvement over what even the exponential trend would have predicted. One Anthropic safety researcher tweeted that he would change the direction of his research in light of those results; another employee at the company simply wrote, “mom come pick me up i’m scared.”

But the truth is more complicated than those dramatic responses would suggest. For one thing, METR’s estimates of the abilities of specific models come with substantial error bars. As METR explicitly stated on X, Opus 4.5 might be able to regularly complete only tasks that take humans about two hours, or it might succeed on tasks that take humans as long as 20 hours. Given the uncertainties intrinsic to the method, it was impossible to know for sure. 

“There are a bunch of ways that people are reading too much into the graph,” says Sydney Von Arx, a member of METR’s technical staff.

More fundamentally, the METR plot does not measure AI abilities writ large, nor does it claim to. In order to build the graph, METR tests the models primarily on coding tasks, evaluating the difficulty of each by measuring or estimating how long it takes humans to complete it—a metric that not everyone accepts. Claude Opus 4.5 might be able to complete certain tasks that take humans five hours, but that doesn’t mean it’s anywhere close to replacing a human worker.

METR was founded to assess the risks posed by frontier AI systems. Though it is best known for the exponential trend plot, it has also worked with AI companies to evaluate their systems in greater detail and published several other independent research projects, including a widely covered July 2025 study suggesting that AI coding assistants might actually be slowing software engineers down. 

But the exponential plot has made METR’s reputation, and the organization appears to have a complicated relationship with that graph’s often breathless reception. In January, Thomas Kwa, one of the lead authors on the paper that introduced it, wrote a blog post responding to some criticisms and making clear its limitations, and METR is currently working on a more extensive FAQ document. But Kwa isn’t optimistic that these efforts will meaningfully shift the discourse. “I think the hype machine will basically, whatever we do, just strip out all the caveats,” he says.

Nevertheless, the METR team does think that the plot has something meaningful to say about the trajectory of AI progress. “You should absolutely not tie your life to this graph,” says Von Arx. “But also,” she adds, “I bet that this trend is gonna hold.”

Part of the trouble with the METR plot is that it’s quite a bit more complicated than it looks. The x-axis is simple enough: It tracks the date when each model was released. But the y-axis is where things get tricky. It records each model’s “time horizon,” an unusual metric that METR created—and that, according to Kwa and Von Arx, is frequently misunderstood.

To understand exactly what model time horizons are, it helps to know all the work that METR put into calculating them. First, the METR team assembled a collection of tasks ranging from quick multiple-choice questions to detailed coding challenges—all of which were somehow relevant to software engineering. Then they had human coders attempt most of those tasks and evaluated how long it took them to finish. In this way, they assigned the tasks a human baseline time. Some tasks took the experts mere seconds, whereas others required several hours.

When METR tested large language models on the task suite, they found that advanced models could complete the fast tasks with ease—but as the models attempted tasks that had taken humans more and more time to finish, their accuracy started to fall off. From a model’s performance, the researchers calculated the point on the time scale of human tasks at which the model would complete about 50% of the tasks successfully. That point is the model’s time horizon. 
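METR’s exact estimation procedure is laid out in its paper; as a rough illustration of the idea, you can think of it as fitting success probability against the logarithm of human completion time and solving for the 50% point. The sketch below uses made-up data and a simple least-squares logistic fit, and is only meant to show the shape of the calculation, not METR’s method.

```python
# Toy illustration of a 50% "time horizon": fit success probability as a
# logistic function of log human completion time, then solve for the task
# length where predicted success is 50%. Data is invented; METR's real
# procedure and datasets are described in its paper.
import numpy as np
from scipy.optimize import curve_fit

human_minutes = np.array([0.1, 0.5, 2, 8, 30, 120, 480])   # human baseline times
success       = np.array([1,   1,   1, 1, 0,  1,   0])     # did the model solve it?

def logistic(log_t, slope, midpoint):
    return 1.0 / (1.0 + np.exp(slope * (log_t - midpoint)))

(slope, midpoint), _ = curve_fit(logistic, np.log(human_minutes), success,
                                 p0=(1.0, np.log(30)))
print(f"50% time horizon ≈ {np.exp(midpoint):.0f} human-minutes")
```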

All that detail is in the blog post and the academic paper that METR released along with the original time horizon plot. But the METR plot is frequently passed around on social media without this context, and so the true meaning of the time horizon metric can get lost in the shuffle. One common misapprehension is that the numbers on the plot’s y-axis—around five hours for Claude Opus 4.5, for example—represent the length of time that the models can operate independently. They do not. They represent how long it takes humans to complete tasks that a model can successfully perform.  Kwa has seen this error so frequently that he made a point of correcting it at the very top of his recent blog post, and when asked what information he would add to the versions of the plot circulating online, he said he would include the word “human” whenever the task completion time was mentioned.

As complex and widely misinterpreted as the time horizon concept might be, it does make some basic sense: A model with a one-hour time horizon could automate some modest portions of a software engineer’s job, whereas a model with a 40-hour horizon could potentially complete days of work on its own. But some experts question whether the amount of time that humans take on tasks is an effective metric for quantifying AI capabilities. “I don’t think it’s necessarily a given fact that because something takes longer, it’s going to be a harder task,” says Inioluwa Deborah Raji, a PhD student at UC Berkeley who studies model evaluation. 

Von Arx says that she, too, was originally skeptical that time horizon was the right measure to use. What convinced her was seeing the results of her and her colleagues’ analysis. When they calculated the 50% time horizon for all the major models available in early 2025 and then plotted each of them on the graph, they saw that the time horizons for the top-tier models were increasing over time—and, moreover, that the rate of advancement was speeding up. Every seven-ish months, the time horizon doubled, which means that the most advanced models could complete tasks that took humans nine seconds in mid 2020, 4 minutes in early 2023, and 40 minutes in late 2024. “I can do all the theorizing I want about whether or not it makes sense, but the trend is there,” Von Arx says.
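As a quick sanity check on the compounding in that trend, a short calculation reproduces the rough shape of those numbers. The month counts below are approximations chosen for illustration, not METR’s fitted values, so the outputs land in the same ballpark as the figures above rather than matching them exactly.

```python
# Back-of-the-envelope version of the trend described above: a time horizon of
# roughly nine seconds in mid-2020 that doubles about every seven months.
# Month counts are approximate and for illustration only.
def horizon_seconds(months_since_mid_2020, start_seconds=9, doubling_months=7):
    return start_seconds * 2 ** (months_since_mid_2020 / doubling_months)

for label, months in [("early 2023", 32), ("late 2024", 53)]:
    print(f"{label}: ~{horizon_seconds(months) / 60:.0f} human-minutes")
```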

It’s this dramatic pattern that made the METR plot such a blockbuster. Many people learned about it when they read AI 2027, a viral sci-fi story cum quantitative forecast positing that superintelligent AI could wipe out humanity by 2030. The writers of AI 2027 based some of their predictions on the METR plot and cited it extensively. In Von Arx’s words, “It’s a little weird when the way lots of people are familiar with your work is this pretty opinionated interpretation.”

Of course, plenty of people invoke the METR plot without imagining large-scale death and destruction. For some AI boosters, the exponential trend indicates that AI will soon usher in an era of radical economic growth. The venture capital firm Sequoia Capital, for example, recently put out a post titled “2026: This is AGI,” which used the METR plot to argue that AI that can act as an employee or contractor will soon arrive. “The provocation really was like, ‘What will you do when your plans are measured in centuries?’” says Sonya Huang, a general partner at Sequoia and one of the post’s authors. 

Just because a model achieves a one-hour time horizon on the METR plot, however, doesn’t mean that it can replace one hour of human work in the real world. For one thing, the tasks on which the models are evaluated don’t reflect the complexities and confusion of real-world work. In their original study, Kwa, Von Arx, and their colleagues quantify what they call the “messiness” of each task according to criteria such as whether the model knows exactly how it is being scored and whether it can easily start over if it makes a mistake (for messy tasks, the answer to both questions would be no). They found that models do noticeably worse on messy tasks, although the overall pattern of improvement holds for both messy and non-messy ones.

And even the messiest tasks that METR considered can’t provide much information about AI’s ability to take on most jobs, because the plot is based almost entirely on coding tasks. “A model can get better at coding, but it’s not going to magically get better at anything else,” says Daniel Kang, an assistant professor of computer science at the University of Illinois Urbana-Champaign. In a follow-up study, Kwa and his colleagues did find that time horizons for tasks in other domains also appear to be on exponential trajectories, but that work was much less formal.

Despite these limitations, many people admire the group’s research. “The METR study is one of the most carefully designed studies in the literature for this kind of work,” Kang told me. Even Gary Marcus, a former NYU professor and professional LLM curmudgeon, described much of the work that went into the plot as “terrific” in a blog post.

Some people will almost certainly continue to read the METR plot as a prognostication of our AI-induced doom, but in reality it’s something far more banal: a carefully constructed scientific tool that puts concrete numbers to people’s intuitive sense of AI progress. As METR employees will readily agree, the plot is far from a perfect instrument. But in a new and fast-moving domain, even imperfect tools can have enormous value.

“This is a bunch of people trying their best to make a metric under a lot of constraints. It is deeply flawed in many ways,” Von Arx says. “I also think that it is one of the best things of its kind.”

Three questions about next-generation nuclear power, answered

Nuclear power continues to be one of the hottest topics in energy today, and in our recent online Roundtables discussion about next-generation nuclear power, hyperscale AI data centers, and the grid, we got dozens of great audience questions.

These ran the gamut, and while we answered quite a few (and I’m keeping some in mind for future reporting), there were a bunch we couldn’t get to, at least not in the depth I would have liked.

So let’s answer a few of your questions about advanced nuclear power. I’ve combined similar ones and edited them for clarity.

How are the fuel needs for next-generation nuclear reactors different, and how are companies addressing the supply chain?

Many next-generation reactors don’t use the low-enriched uranium used in conventional reactors.

It’s worth looking at high-assay low-enriched uranium, or HALEU, specifically. This fuel is enriched to higher concentrations of fissile uranium than conventional nuclear fuel, with a proportion of the isotope U-235 that falls between 5% and 20%. (In conventional fuel, it’s below 5%.)

HALEU can be produced with the same technology as low-enriched uranium, but the geopolitics are complicated. Today, Russia basically has a monopoly on HALEU production. In 2024, the US banned the import of Russian nuclear fuel through 2040 in an effort to reduce dependence on the country. Europe hasn’t taken the same measures, but it is working to move away from Russian energy as well.

That leaves companies in the US and Europe with the major challenge of securing the fuel they need when their regular Russian supply has been cut off or restricted.

The US Department of Energy has a stockpile of HALEU, which the government is doling out to companies to help power demonstration reactors. In the longer term, though, there’s still a major need to set up independent HALEU supply chains to support next-generation reactors.

How is safety being addressed, and what’s happening with nuclear safety regulation in the US?

There are some ways that next-generation nuclear power plants could be safer than conventional reactors. Some use alternative coolants that would prevent the need to run at the high pressure required in conventional water-cooled reactors. Many incorporate passive safety shutoffs, so if there are power supply issues, the reactors shut down harmlessly, avoiding risk of meltdown. (These can be incorporated in newer conventional reactors, too.)

But some experts have raised concerns that in the US, the current administration isn’t taking nuclear safety seriously enough.

A recent NPR investigation found that the Trump administration had secretly rewritten nuclear rules, stripping environmental protections and loosening safety and security measures. The government shared the new rules with companies that are part of a program building experimental nuclear reactors, but not with the public.

I’m reminded of a talk during our EmTech MIT event in November, where Koroush Shirvan, an MIT professor of nuclear engineering, spoke on this issue. “I’ve seen some disturbing trends in recent times, where words like ‘rubber-stamping nuclear projects’ are being said,” Shirvan said during that event.  

During the talk, Shirvan shared statistics showing that nuclear power has a very low rate of injury and death. But that’s not inherent to the technology, and there’s a reason injuries and deaths have been low for nuclear power, he added: “It’s because of stringent regulatory oversight.”  

Are next-generation reactors going to be financially competitive?

Building a nuclear power plant is not cheap. Let’s consider the up-front investment required.

Plant Vogtle in Georgia hosts the most recent additions to the US nuclear fleet—Units 3 and 4 came online in 2023 and 2024. Together, they had a capital cost of $15,000 per kilowatt, adjusted for inflation, according to a recent report from the US Department of Energy. (This wonky unit I’m using divides the total cost to build the reactors by their expected power output, so we can compare reactors of different sizes.)

That number’s quite high, partly because those were the first of their kind built in the US, and because there were some inefficiencies in the planning. It’s worth noting that China builds reactors for much less, somewhere between $2,000/kW and $3,000/kW, depending on the estimate.
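To make the per-kilowatt unit concrete, here’s a quick back-of-the-envelope calculation. The combined capacity figure for Vogtle Units 3 and 4 (roughly 2.2 gigawatts) is an outside approximation, not a number from the DOE report, and the second line is purely hypothetical.

```python
# Worked example of the capital-cost-per-kilowatt unit used above. The ~2.2 GW
# combined capacity for Vogtle Units 3 and 4 is an approximate outside figure.
capacity_kw = 2.2e6        # ~2,200 MW expressed in kilowatts
cost_per_kw = 15_000       # inflation-adjusted capital cost from the DOE report

print(f"Implied Vogtle 3 & 4 capital cost: ${capacity_kw * cost_per_kw / 1e9:.0f} billion")

# Hypothetical comparison: a 300 MW small modular reactor at $8,000/kW.
print(f"300 MW SMR at $8,000/kW: ${300_000 * 8_000 / 1e9:.1f} billion up front")
```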

The up-front capital cost for first-of-a-kind advanced nuclear plants will likely run between $6,000 and $10,000 per kilowatt, according to that DOE report. That could come down by up to 40% after the technologies are scaled up and mass-produced.

So new reactors will (hopefully) be cheaper than the ultra-over-budget and behind-schedule Vogtle project, but they aren’t necessarily significantly cheaper than efficiently built conventional plants, if you normalize by their size.

It’ll certainly be cheaper to build new natural-gas plants (setting aside the equipment shortages we’re likely to see for years). Today’s most efficient natural-gas plants cost just $1,600/kW on the high end, according to data from Lazard.

An important caveat: Capital cost isn’t everything—running a nuclear plant is relatively inexpensive, which is why there’s so much interest in extending the lifetime of existing plants or reopening shuttered ones.

Ultimately, by many metrics, nuclear plants of any type are going to be more expensive than other sources, like wind and solar power. But they provide something many other power sources don’t: a reliable, stable source of electricity that can run for 60 years or more.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: attempting to track AI, and the next generation of nuclear power

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This is the most misunderstood graph in AI

Every time OpenAI, Google, or Anthropic drops a new frontier large language model, the AI community holds its breath. It doesn’t exhale until METR, an AI research nonprofit whose name stands for “Model Evaluation & Threat Research,” updates a now-iconic graph that has played a major role in the AI discourse since it was first released in March of last year. 

The graph suggests that certain AI capabilities are developing at an exponential rate, and more recent model releases have outperformed that already impressive trend.

That was certainly the case for Claude Opus 4.5, the latest version of Anthropic’s most powerful model, which was released in late November. In December, METR announced that Opus 4.5 appeared to be capable of independently completing a task that would have taken a human about five hours—a vast improvement over what even the exponential trend would have predicted.

But the truth is more complicated than those dramatic responses would suggest. Read the full story.

—Grace Huckins

This story is part of MIT Technology Review Explains: our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Three questions about next-generation nuclear power, answered

Nuclear power continues to be one of the hottest topics in energy today, and in our recent online Roundtables discussion about next-generation nuclear power, hyperscale AI data centers, and the grid, we got dozens of great audience questions.

These ran the gamut, and while we answered quite a few (and I’m keeping some in mind for future reporting), there were a bunch we couldn’t get to, at least not in the depth I would have liked. So let’s answer a few of your questions about advanced nuclear power.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic’s new coding tools are rattling the markets 
Fields as diverse as publishing, coding, law, and advertising are paying attention. (FT $)
+ Legacy software companies, beware. (Insider $)
+ Is “software-mageddon” nigh? It depends who you ask. (Reuters)

2 This Apple setting prevented the FBI from accessing a reporter’s iPhone
Lockdown Mode has proved remarkably effective—for now. (404 Media)
+ Agents were able to access Hannah Natanson’s laptop, however. (Ars Technica)

3 Last month’s data center outage disrupted all TikTok categories
Not just the political content that some users claimed. (NPR)

4 Big Tech is pouring billions into AI in India
A newly announced 20-year tax break should help to speed things along. (WSJ $)
+ India’s female content moderators are watching hours of abuse content to train AI. (The Guardian)
+ Officials in the country are weighing up restricting social media for minors. (Bloomberg $)
+ Inside India’s scramble for AI independence. (MIT Technology Review)

5 YouTubers are harassing women using body cams
They’re abusing freedom of information laws to humiliate their targets. (NY Mag $)
+ AI was supposed to make police bodycams better. What happened? (MIT Technology Review)

6 Jokers have created a working version of Jeffrey Epstein’s inbox
Complete with notable starred threads. (Wired $)
+ Epstein’s links with Silicon Valley are vast and deep. (Fast Company $)
+ The revelations are driving rifts between previously friendly factions. (NBC News)

7 What’s the last thing you see before you die?
A new model might help to explain near-death experiences—but not all researchers are on board. (WP $)
+ What is death? (MIT Technology Review)

8 A new app is essentially TikTok for vibe-coded apps
Words which would have made no sense 15 years ago. (TechCrunch)
+ What is vibe coding, exactly? (MIT Technology Review)

9 Rogue TV boxes are all the rage
Viewers are sick of the soaring prices of streaming services, and are embracing less legal means of watching their favorite shows. (The Verge)

10 Climate change is threatening the future of the Winter Olympics ⛷
Artificial snow is one (short term) solution. (Bloomberg $)
+ Team USA is using AI to try to gain an edge on its competition. (NBC News)

Quote of the day

“We’ve heard from many who want nothing to do with AI.”

—Ajit Varma, head of Mozilla’s web browser Firefox, explains why the company is reversing its previous decision to transform Firefox into an “AI browser,” PC Gamer reports.

One more thing

A major AI training data set contains millions of examples of personal data

Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.

Thousands of images—including identifiable faces—were found in a small subset of DataComp CommonPool, a major AI training set for image generation scraped from the web. Because the researchers audited just 0.1% of CommonPool’s data, they estimate that the real number of images containing personally identifiable information, including faces and identity documents, is in the hundreds of millions. 

The bottom line? Anything you put online can be and probably has been scraped. Read the full story.

—Eileen Guo

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re crazy enough to be training for a marathon right now, here’s how to beat boredom on those long, long runs.
+ Mark Cohen’s intimate street photography is a fascinating window into humanity.
+ A seriously dedicated gamer has spent days painstakingly recreating a Fallout vault inside the Sims 4.
+ Here’s what music’s most stylish men are wearing right now—from leather pants to khaki parkas.

Consolidating systems for AI with iPaaS

For decades, enterprises reacted to shifting business pressures with stopgap technology solutions. To rein in rising infrastructure costs, they adopted cloud services that could scale on demand. When customers shifted their lives onto smartphones, companies rolled out mobile apps to keep pace. And when businesses began needing real-time visibility into factories and stockrooms, they layered on IoT systems to supply those insights.

Each new plug-in or platform promised better, more efficient operations. And individually, many delivered. But as more and more solutions stacked up, IT teams had to string together a tangled web to connect them—less an IT ecosystem and more of a make-do collection of ad-hoc workarounds.

That reality has led to bottlenecks and maintenance burdens, and the impact is showing up in performance. Today, fewer than half of CIOs (48%) say their current digital initiatives are meeting or exceeding business outcome targets. Another 2025 survey found that operations leaders point to integration complexity and data quality issues as top culprits for why investments haven’t delivered as expected.

Achim Kraiss, chief product officer of SAP Integration Suite, elaborates on the wide-ranging problems inherent in patchwork IT: “A fragmented landscape makes it difficult to see and control end-to-end business processes,” he explains. “Monitoring, troubleshooting, and governance all suffer. Costs go up because of all the complex mappings and multi-application connectivity you have to maintain.”

These challenges take on new significance as enterprises look to adopt AI. As AI becomes embedded in everyday workflows, systems are suddenly expected to move far larger volumes of data, at higher speeds, and with tighter coordination than yesterday’s architectures were built to sustain.

As companies now prepare for an AI-powered future, whether that is generative AI, machine learning, or agentic AI, many are realizing that the way data moves through their business matters just as much as the insights it generates. As a result, organizations are moving away from scattered integration tools and toward consolidated, end-to-end platforms that restore order and streamline how systems interact.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

5 Content Marketing Ideas for March 2026

Whether it’s spring cleaning tips, peanut butter recipes, or honoring Mr. Spock, March 2026 is full of opportunities to use content to promote your company and its products.

The aim of content marketing is to attract, engage, and retain customers. It works on the principle of reciprocity. Provide valuable content to shoppers, and they may repay with purchases.

What follows are five content marketing ideas your shop can use in March 2026.

Spring Renewal


Spring invites small changes that make everyday life feel new again.

March marks the slow turn toward spring in the Northern Hemisphere. Days grow longer. The weather turns warmer. And we shift our focus.

Many of us begin thinking about small changes that lead to larger improvements.

For some, that means spring cleaning. For others, it means reorganizing, replacing worn items, or restarting habits that faded over winter.

This seasonal mindset can lead to useful, product-centered content. Shoppers browse for ideas and look for help with practical tasks.

The best content, then, helps folks complete a project or learn a skill while aligning with what the store sells. A home goods retailer can explain how to refresh a kitchen or bedroom. A clothing merchant can address updating a wardrobe. A fitness brand might outline a plan for restarting workouts.

When it helps a potential customer accomplish something meaningful, a business builds trust. That trust often turns into future sales.

National Peanut Butter Lover’s Day


Peanut butter has many fans.

March 1, 2026, is National Peanut Butter Lover’s Day in the United States. The pseudo-holiday began appearing on calendars in the 1990s.

Food-focused content can perform well in search, social, and, presumably, AI answers.

With this in mind, peanut butter-related articles will likely work best for merchants selling kitchenware, specialty foods, and fitness products such as protein powders and supplements.

For these merchants, the content ideas are straightforward.

  • “5 Peanut Butter Desserts to Make in Under 30 Minutes”
  • “Homemade Peanut Butter That’s Better Than Store-Bought”
  • “Easy Snacks for Peanut Butter Lovers”

Peanut butter has a broad cultural appeal. Creative content marketers in other niches can likely find ways to participate.

For example, a retailer specializing in science and math kits might publish educational or curiosity-driven content such as:

  • “Who Invented Peanut Butter?”
  • “Peanut Butter and the World’s Fair”
  • “10 Shocking Peanut Butter Patents”

The goal is to create timely, interesting content that aligns with the spirit of the day and introduces shoppers to the store.

Live Long and Prosper Day


Star Trek’s Mr. Spock, played by Leonard Nimoy, famously coined the phrase “live long and prosper.”

March 26, 2026, is Live Long and Prosper Day, a nod to Leonard Nimoy and his iconic portrayal of Mr. Spock in Star Trek: The Original Series.

“Live long and prosper” was a catchphrase from the series, but it has long since become shorthand for science fiction’s hopeful view of the future and humanity’s relationship with technology.

Science fiction remains one of the most popular and influential genres in popular culture. Its ideas shape how people imagine innovation, design, and progress. That influence creates an opportunity for ecommerce content marketers to frame products through a science-fiction lens without needing licensed merchandise.

Content can focus on everyday products that feel futuristic, minimalist, or inspired by speculative design. Electronics, home office gear, tools, apparel, books, hobby supplies, and gift items can all fit this theme.

The age of AI means almost everything feels like science fiction.

Make Up Your Own Holiday Day

Create a holiday that fits your business.

If Star Trek’s Mr. Spock is not a great fit, March 26 is also Make Up Your Own Holiday Day, a lighthearted invitation to invent a celebration from scratch.

For ecommerce marketers, the date offers a chance to blend entertaining content with promotions.

Unlike regular holidays, this do-it-yourself occasion can focus on a product category, a use case, or a habit.

An online liquor store, for example, might introduce “Buy Your Spouse a Bottle Day.” A coffee merchant could launch “Perfect Cup of Coffee Day.” An office supply retailer might create “Organize Your Desk Day.” And a game shop could celebrate “Family Game Night Day.”

Each of these should feature relevant products. The day is permission to have fun while driving sales.

America at 250


Content from U.S. merchants in 2026 can focus on craftsmanship and domestically made products.

In 2026, the United States will celebrate its 250th year as a nation. It’s an opportunity for U.S. merchants to focus on product-centered storytelling.

One angle is to emphasize goods made in the United States or rooted in long-standing domestic craftsmanship. These products naturally connect to themes of durability, heritage, and continuity without requiring overt patriotic messaging.

Shoppers learn where products come from, how they are made, and why that matters today. Example article titles might include:

  • “15 Heritage Brands That Have Stood the Test of Time”
  • “American Craftsmanship in Everyday Products”
  • “5 American-Made Products Worth Paying For”

These pieces can become part of a continuing “America 250” series that expands through spring and into summer, gradually building a library of heritage-focused content tied to merchandise.

Google Shows How To Check Passage Indexing via @sejournal, @martinibuster

Google’s John Mueller was asked how many megabytes of HTML Googlebot crawls per page. The question was whether Googlebot indexes two megabytes (MB) or fifteen megabytes of data. Mueller’s answer minimized the technical aspect of the question and went straight to the heart of the issue, which is really about how much content is indexed.

Googlebot And Other Bots

In the middle of an ongoing discussion on Bluesky, someone revived the question of whether Googlebot crawls and indexes 2 or 15 megabytes of data.

They posted:

“Hope you got whatever made you run 🙂

It would be super useful to have more precisions, and real-life examples like “My page is X Mb long, it gets cut after X Mb, it also loads resource A: 15Kb, resource B: 3Mb, resource B is not fully loaded, but resource A is because 15Kb < 2Mb”.”

Panic About 2 Megabyte Limit Is Overblown

Mueller said that it’s not necessary to weigh bytes, and implied that what ultimately matters isn’t constraining how many bytes are on a page but whether important passages are indexed.

Furthermore, Mueller said that it is rare for a site to exceed two megabytes of HTML, dismissing the idea that a website’s content might not get indexed because the page is too big.

He also said that Googlebot isn’t the only bot that crawls a web page, apparently to explain why 2 megabytes and 15 megabytes aren’t limiting factors. Google publishes a list of all the crawlers they use for various purposes.
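If you want to sanity-check your own pages against that two-megabyte figure, a quick measurement is enough. Here’s a minimal sketch; the URL is a placeholder, and it measures only the raw HTML response, not images or other resources.

```python
# Minimal sketch: measure how many megabytes of HTML a URL returns, as a
# sanity check against the two-megabyte figure discussed above.
import requests

def html_size_mb(url="https://www.example.com/"):
    html = requests.get(url, timeout=10).text
    return len(html.encode("utf-8")) / 1_000_000

print(f"{html_size_mb():.2f} MB of HTML")
```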

How To Check If Content Passages Are Indexed

Lastly, Mueller’s response confirmed a simple way to check whether or not important passages are indexed.

Mueller answered:

“Google has a lot of crawlers, which is why we split it. It’s extremely rare that sites run into issues in this regard, 2MB of HTML (for those focusing on Googlebot) is quite a bit. The way I usually check is to search for an important quote further down on a page – usually no need to weigh bytes.”
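A small helper makes that spot-check repeatable. The sketch below simply builds a quoted, site-restricted search URL for a passage you care about; the domain and passage are placeholders.

```python
# Minimal helper for Mueller's spot-check: build a quoted, site-restricted
# search URL for a distinctive passage from deep in a page. If the page shows
# up for the exact-match query, that passage made it into the index.
from urllib.parse import quote_plus

def passage_check_url(passage, site="example.com"):
    query = f'"{passage}" site:{site}'
    return "https://www.google.com/search?q=" + quote_plus(query)

print(passage_check_url("a distinctive sentence from near the end of the page"))
```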

Passages For Ranking

People have short attention spans except when they’re reading about a topic that they are passionate about. That’s when a comprehensive article may come in handy for those readers who really want to take a deep dive to learn more.

From an SEO perspective, I can understand why some may feel that a comprehensive article might not be ideal for ranking if a document provides deep coverage of multiple topics, any one of which could be a standalone article.

A publisher or an SEO needs to step back and assess whether users are satisfied by an overview of a topic or need a deeper treatment. There are also different levels of comprehensiveness: one with granular detail, and another with overview-level coverage that links out to deeper articles.

In other words, sometimes users require a view of the forest and sometimes they require a view of the trees.

Google has long been able to rank document passages with their passage ranking algorithms. Ultimately, in my opinion, it really comes down to what is useful to users and is likely to result in a higher level of user satisfaction.

If comprehensive topic coverage excites people and makes them passionate enough to share it with others, then that is a win.

If comprehensive coverage isn’t useful for a specific topic, it may be better to split the content into shorter pieces that better align with why people come to that page in the first place.

Takeaways

While most of these takeaways aren’t represented in Mueller’s response, they do in my opinion represent good practices for SEO.

  • Questions about HTML size limits usually mask deeper concerns about content length and indexing visibility
  • Megabyte thresholds are rarely a practical constraint for real-world pages
  • Counting bytes is less useful than verifying whether content actually appears in search
  • Searching for distinctive passages is a practical way to confirm indexing
  • Comprehensiveness should be driven by user intent, not crawl assumptions
  • Content usefulness and clarity matter more than document size
  • User satisfaction remains the deciding factor in content performance

Concern over how many megabytes constitute a hard crawl limit for Googlebot reflects uncertainty about whether important content in a long document is being indexed and available to rank in search. Focusing on megabytes shifts attention away from the real issue SEOs should be focusing on: whether the depth of topic coverage best serves a user’s needs.

Mueller’s response reinforces the point that web pages that are too big to be indexed are uncommon, and fixed byte limits are not a constraint that SEOs should be concerned about.

In my opinion, SEOs and publishers will probably see better search coverage by shifting their focus away from optimizing for assumed crawl limits and toward how much content users will actually consume.

But if a publisher or SEO is concerned about whether a passage near the end of a document is indexed, there is an easy way to check: search for an exact match of that passage.

Comprehensive topic coverage is not automatically a ranking problem, and it is not always the best (or worst) approach. HTML size is not really a concern unless it starts impacting page speed. What matters is whether content is clear, relevant, and useful to the intended audience at the precise level of granularity that serves the user’s purposes.

Featured Image by Shutterstock/Krakenimages.com

What 1,000 Businesses Reveal About Growth in 2026 [Webinar] via @sejournal, @hethr_campbell

Learn The Signals Shaping Marketing, Efficiency, and AI Planning

As 2026 rolls on, many teams find themselves adjusting how they approach overall business and marketing growth. 

What is the most efficient use of this year’s tighter budgets? 

Priorities are shifting across industries. Understanding how peers are responding can help teams make better strategic decisions.

Join Jeff Hirz, EVP of Business Development at OuterBox, as he shares early findings from “2025 Performance Insights From 1,000 Businesses Planning for 2026.”

Based on survey data from nearly 1,000 businesses, this session highlights where confidence is rising, where caution remains, and how companies are balancing growth, efficiency, and focus.

What You’ll Learn

  • How businesses like yours will fund marketing, sales, and efficiency initiatives
  • What AI readiness looks like in practice for businesses like yours
  • Where business confidence is increasing, and what teams are prioritizing

Why Attend?

This webinar provides a practical benchmark for evaluating your 2026 plan against peer data. You will leave with clear context and takeaways to help refine growth, efficiency, and AI strategies for the year ahead.

Register now to see what real business data says about planning for 2026.

🛑 Can’t watch live? Register anyway, and we’ll send you the recording.

Google Releases Discover-Focused Core Update via @sejournal, @MattGSouthern
  • Google has launched a core update specifically for Discover, rather than Search more broadly.
  • The February Discover core update began Feb. 5 for English-language users in the U.S., with plans to expand to other countries and languages.
  • Google says the rollout may take up to two weeks.

Google has started a Discover core update. The rollout may take up to two weeks, with expansion to more countries and languages later.