The Download: the LLM will see you now, and a new fusion power deal

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This medical startup uses LLMs to run appointments and make diagnoses

Patients at a small number of clinics in Southern California run by the medical startup Akido Labs are spending relatively little time, or even no time at all, with their doctors. Instead, they see a medical assistant, who can lend a sympathetic ear but has limited clinical training.

The job of formulating diagnoses and devising treatment plans is done by an LLM-based system called ScopeAI that transcribes and analyzes the dialogue between patient and assistant. A doctor then approves or corrects the AI system’s recommendations.

According to Akido’s CEO, this approach allows doctors to see four to five times as many patients as they could previously. But experts aren’t convinced that displacing so much of the cognitive work of medicine onto AI is the right way to remedy the doctor shortage. Read the full story.

—Grace Huckins

An oil and gas giant signed a $1 billion deal with Commonwealth Fusion Systems

Eni, one of the world’s largest oil and gas companies, just agreed to buy $1 billion in electricity from a power plant being built by Commonwealth Fusion Systems. The deal is the latest to illustrate just how much investment Commonwealth and other fusion companies are courting as they attempt to take fusion power from the lab to the power grid.

The agreement will see Eni purchase electricity from Commonwealth’s first commercial fusion power plant, in Virginia. The facility is still in the planning stages but is scheduled to come online in the early 2030s. Read the full story.

—Casey Crownhart

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Trump officials are expected to link Tylenol to autism
They’re also likely to tout a lesser-known drug called leucovorin as a potential treatment. (WP $)
+ They’ll warn women in the early stages of pregnancy to take Tylenol only to treat high fevers. (Politico)
+ But a huge study found no connection last year. (Axios)

2 Trump wants to charge skilled foreign workers $100,000 for H-1B visas
The decision is highly likely to harm US growth, especially in its tech sector. (The Guardian)
+ The visa has been a lifeline for hundreds of thousands of tech workers. (BBC)
+ Indian outsourcing companies are struggling to pivot. (Bloomberg $)
+ Tech firms are sending memos to their workers on the visa. (Insider $)

3 The European Commission wants to ax cookie consent banners
A 2009 law triggered an influx of pesky pop-ups that the EU now wants to get rid of. (Politico)

4 The Murdochs and Michael Dell are among TikTok’s potential buyers
The media mogul family and the Dell founder are interested in taking stakes, Trump says. (CNN)

5 Inside China’s plan to put its data centers to work
A mega-cluster of centers is springing up in the city of Wuhu. (FT $)
+ China built hundreds of AI data centers to catch the AI boom. Now many stand unused. (MIT Technology Review)

6 Seattle’s tech scene is in trouble
When its biggest firms slash their workforces, where does that leave everyone else? (WSJ $)

7 Innocent people are being scammed into scamming
Chinese gangs are imprisoning trafficking victims in compounds on the Myanmar-Thai border. (Reuters)
+ Inside a romance scam compound—and how people get tricked into being there. (MIT Technology Review)

8 Europe’s reusable rocket dream isn’t entirely dead
But progress has been a lot slower than it should be. (Ars Technica)
+ Elon Musk’s utter dominance of space tech is hard to overestimate. (Wired $)
+ Europe is finally getting serious about commercial rockets. (MIT Technology Review)

9 How ChatGPT fares as a financial stock picker
Be prepared to roll the dice. (Fast Company $)

10 Silicon Valley is ditching dating apps
And turning to elite matchmakers instead. (The Information $)

Quote of the day

“I didn’t sleep all night. I kept thinking: What if I get stuck outside the US?”

—Akaash Hazarika, a Salesforce engineer, tells Insider he was forced to cut his vacation to Toronto short and rush back to America after the Trump administration announced changes to the H-1B skilled foreign worker visa.

One more thing

The quest to figure out farming on Mars

Once upon a time, water flowed across the surface of Mars. Waves lapped against shorelines, strong winds gusted and howled, and driving rain fell from thick, cloudy skies. It wasn’t really so different from our own planet 4 billion years ago, except for one crucial detail—its size. Mars is about half the diameter of Earth, and that’s where things went wrong.

The Martian core cooled quickly, soon leaving the planet without a magnetic field. This, in turn, left it vulnerable to the solar wind, which swept away much of its atmosphere. Without a critical shield from the sun’s ultraviolet rays, Mars could not retain its heat. Some of the oceans evaporated, and the subsurface absorbed the rest, with only a bit of water left behind and frozen at its poles. If ever a blade of grass grew on Mars, those days are over.

But could they begin again? And what would it take to grow plants to feed future astronauts on Mars? Read the full story.

—David W. Brown

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ These abandoned blogs are a relic of the bygone internet (bring them back!)
+ How to strengthen your bond with your reluctant cat 😾
+ How Metal Gear Solid inspired the video to one of the greatest hits of the late 90s.
+ If I had to explain British culture to someone, I’d just send them this video.

Recover ChatGPT 404 Traffic with GA4

ChatGPT often links to sources when answering prompts. Traffic from those clicks is typically high-converting in my testing. Unfortunately, ChatGPT frequently hallucinates URLs and sends visitors to nonexistent pages.

A study released this month by Ahrefs found that ChatGPT 5 links to error pages nearly three times as often as Google Search.

To be sure, ChatGPT referrals account for less than 5% of traffic for most sites thus far. But it’s still a good idea to monitor ChatGPT-generated 404 errors and address them. With Google Search traffic declining, “saving” these visits is paramount.

Address the problem in three steps:

  1. Track 404 “page not found” URLs in Google Analytics 4.
  2. Create helpful 404 pages for visitors from hallucinated URLs.
  3. Set up 301 redirects only for broken URLs that generate traffic.
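Step 3 can be scripted once the first two steps surface the hallucinated URLs. The sketch below (the URL mapping, session counts, and traffic threshold are all hypothetical examples, not real data) emits nginx-style 301 rules only for broken URLs that receive meaningful traffic:

```python
# Sketch: generate 301 redirect rules for hallucinated URLs that actually
# receive traffic (step 3). The mapping and threshold below are hypothetical.

MIN_SESSIONS = 10  # only redirect broken URLs that generate traffic

def redirect_rules(broken_urls, sessions, target_map):
    """Emit nginx-style 301 rules for broken URLs above the traffic threshold."""
    rules = []
    for path in broken_urls:
        if sessions.get(path, 0) >= MIN_SESSIONS and path in target_map:
            rules.append(f"rewrite ^{path}$ {target_map[path]} permanent;")
    return rules

# Hypothetical hallucinated URLs reported by GA4, with session counts:
broken = ["/blog/chatgpt-seo-guide", "/old-pricing"]
traffic = {"/blog/chatgpt-seo-guide": 42, "/old-pricing": 3}
targets = {"/blog/chatgpt-seo-guide": "/blog/ai-seo-guide",
           "/old-pricing": "/pricing"}

for rule in redirect_rules(broken, traffic, targets):
    print(rule)
```

Equivalent Apache or platform-level redirects work the same way; the point is to redirect selectively rather than blanket-redirecting every 404.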

I’ll explain the first step in this article.

Track in Google Analytics

Filter Google Analytics reports to URLs with traffic from ChatGPT:

  • Go to “Engagement” > “Pages and screens” to view all pages with traffic for the designated period.
  • Select “Page title and screen class” above the list of pages.
  • Click “Add filter” above the graph.
  • Select “Session source/medium” as the dimension.
  • Select “Contains” and type “ChatGPT.”
  • Click “Apply.”
Screenshot of the Dimension interface in Google Analytics

Filter Google Analytics reports to URLs with traffic from ChatGPT.

Now your list is filtered to pages with traffic from ChatGPT.

Next, narrow the list to error pages:

  • Go to your site and open all the ChatGPT-filtered pages above.
  • Note the title of pages with 404 errors (Ctrl+D on Windows; Command+D on Mac). In my case, the title was “404 Response Error Page.”
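Opening every ChatGPT-filtered page by hand gets tedious as the list grows. A short script can flag which URLs return 404; this is a sketch, and the URLs below are hypothetical:

```python
from urllib.request import urlopen
from urllib.error import HTTPError

def status_of(url):
    """Return the HTTP status code for a URL (404s raise HTTPError)."""
    try:
        return urlopen(url, timeout=10).status
    except HTTPError as err:
        return err.code

def filter_404s(statuses):
    """Keep only the URLs whose recorded status is 404."""
    return [url for url, code in statuses.items() if code == 404]

# Hypothetical results, as if collected with status_of():
checked = {"https://example.com/real-page": 200,
           "https://example.com/hallucinated-page": 404}
print(filter_404s(checked))
```

`status_of()` makes live requests, so run it sparingly; `filter_404s()` just separates the broken URLs from the healthy ones.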

Then return to Google Analytics:

  • Type the error page title in the search bar above the list of pages with traffic from ChatGPT. Add “Page path and screen class” as a secondary dimension to view the hallucinated URLs.
  • Bookmark the URL of this report and check from time to time.

Type the error page title in the search bar, then add “Page path and screen class” as a secondary dimension.

Agentic AI In SEO: AI Agents & The Future Of Content Strategy (Part 3) via @sejournal, @VincentTerrasi

For years, the SEO equation seemed fixed: optimize for Googlebot on one side, create content for human users on the other. That binary vision is now a thing of the past.

In the current business environment, a new generation of actors is causing significant changes to the online visibility landscape. AI agents such as ChatGPT, Perplexity, Claude, and Gemini are no longer merely processing information; they are exploring, synthesizing, choosing sources to cite, and significantly influencing traffic flows.

For those who are skeptical about the impact of AI agents, I would invite you to consider the concept of Zero Moment of Truth (ZMOT), which was developed by Google over 10 years ago. The principle is straightforward: Prior to any purchase, consumers undertake an extensive research phase. They consult customer reviews, compare across different sites, scrutinize social networks, accumulate information sources, and now use their favorite AIs for final validation.

A New Paradigm

We are currently experiencing a fundamental reconfiguration of the digital ecosystem. In the past, we could point to two or three main engines of visibility. Now, a new paradigm is emerging.

Google continues to be a leading search engine, utilizing sophisticated algorithms to index and rank content. Humans act as a virality engine, sharing and amplifying information via their social networks and interactions.

It is becoming increasingly apparent that AI agents are assuming the role of an autonomous traffic engine. These intelligent systems are capable of navigating information independently, establishing their own selection criteria, and directing users to sources they deem relevant.

This transformation necessitates a wholly new approach to content creation. Below, I share concepts and case studies that have been successfully implemented with several major accounts.

Agentic SEO

Quick reminder following my two previous articles on the subject: “Agentic AI In SEO: AI Agents & Workflows For Ideation (Part 1)” and “Agentic AI In SEO: AI Agents & Workflows For Audit (Part 2).”

Agentic SEO involves the creation of structured and dynamic content that is designed to appeal not only to Google, but also to conversational AIs.

The approach to content generation is founded on three key pillars:

1. Data Enrichment: Schema.org data, microformats, and semantic tags are becoming important because, as grounding data, they help language models understand and extract information.

2. Content Modularity: Concise and “chunkable” responses are perfectly suited to Retrieval-Augmented Generation (RAG) ingestion processes utilized by these agents. Content should be designed using autonomous and reusable blocks.

3. Polymorphism: Each page can offer variants adapted according to the type of agent consulting it. It is essential to recognize that the needs of a shopping agent differ from those of a medical agent, and content must adapt accordingly.
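To illustrate the first pillar, a Schema.org Product block in JSON-LD is the kind of grounding data a language model can extract reliably. A minimal sketch (all product names and values here are hypothetical):

```python
import json

# Sketch: a Schema.org Product block as JSON-LD, the kind of grounding
# data pillar 1 describes. All product values here are hypothetical.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Running Shoes",
    "description": "Lightweight trail shoes with a recycled-mesh upper.",
    "offers": {
        "@type": "Offer",
        "price": "89.90",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(product_jsonld, indent=2))
```

Embedding a block like this gives both crawlers and LLM retrieval pipelines a clean, machine-readable summary of the page.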

Image from author, September 2025

If your content isn’t optimized for AI agents, you’re already falling behind strategically.

However, if your site is optimized for SEO, you’ve already taken a significant step forward.

The Foundations: Generative SEO And Edge SEO

To understand this evolution, it is important to consider the concepts that have prepared the ground: generative SEO and Edge SEO.

Generative SEO

Generative SEO facilitates the creation of substantial and insightful content through the utilization of language models. This approach automates the process of creating content while ensuring its relevance and quality.

Generative SEO has always existed in primitive forms, such as content spinning and all derived techniques. In today’s digital landscape, we are witnessing a paradigm shift towards unparalleled quality, as evidenced by the preponderance of AI-generated or co-written content across various social networks, including LinkedIn.

Edge SEO

Edge SEO leverages CDN or proxy-side deployment capabilities to reduce deployment latency and enable large-scale content testing from both content and performance perspectives.

These two approaches are naturally complementary, but they still represent a 1.0 vision of automated SEO: traditional A/B testing, and content that is frozen once generation is complete, limit the project’s potential.

The true revolution lies in the adoption of dynamic and adaptive systems that surpass these limitations.

Agentic Edge SEO

Edge SEO had already challenged the very notion of static content. The system can now modify content in real time according to three variables:

  • Firstly, user intention is detected and used to guide content adaptation. The system is able to analyze behavioral signals in order to adjust the message in real-time.
  • Next, let us consider the impact of SERP seasonality on modifications. When Google prioritizes certain trends on a given query, content automatically adapts to capitalize on these evolutions.
  • Finally, the instant technical optimizations triggered by Core Web Vitals signals ensure that performance is maintained.
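The three variables above can be sketched as a simple dispatch. The signal names and variant labels below are hypothetical illustrations, not the author's implementation:

```python
# Sketch: choose a content variant from the three signals described above.
# Signal names and variant labels are hypothetical illustrations.

def pick_variant(intent, serp_trend, cwv_ok):
    """Rule-based adaptation: user intent, SERP seasonality, Core Web Vitals."""
    if not cwv_ok:
        return "lightweight"   # degraded Core Web Vitals: strip heavy assets first
    if serp_trend == "sustainable":
        return "eco"           # follow the trend Google is prioritizing
    if intent == "price_sensitive":
        return "budget"        # behavioral signal detected for this visitor
    return "default"

print(pick_variant("price_sensitive", None, True))
```

A production system would learn these rules from data rather than hard-coding them, but the dispatch structure is the same.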
Image from author, September 2025

Let us consider a product page as a case study. If Google highlights “sustainable” or “economical” trends for a particular search, this page automatically adapts its titles, metadata, and visuals to align with these market signals.

At Draft&Goal, we have developed connectors with the Fasterize tool to facilitate the deployment of AI workflows. These workflows are compatible with all the most recent proprietary or open-source LLMs.

We anticipate that in the future, the system will continuously test these variants with search engines and users, collecting performance data in near real-time.

The algorithm then selects the most effective version in terms of click-through rate (CTR), positioning, and conversion, and continues to optimize the results.

For example, imagine a “Running Shoes” landing page, existing in seven distinct versions, each oriented towards a specific angle: price, performance, comfort, ecology, style, durability, or innovation. The polymorphic system automatically highlights the most effective variant according to signals sent by Google and user behaviors.
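The continuous testing described here is essentially a multi-armed bandit problem. A minimal epsilon-greedy sketch over the seven variants (the click and view counts are hypothetical):

```python
import random

VARIANTS = ["price", "performance", "comfort", "ecology",
            "style", "durability", "innovation"]

def choose_variant(clicks, views, epsilon=0.1, rng=random):
    """Epsilon-greedy: usually show the variant with the best observed CTR,
    occasionally explore another to keep collecting data."""
    if rng.random() < epsilon:
        return rng.choice(VARIANTS)
    # Smoothed CTR so unseen variants are not stuck at zero.
    return max(VARIANTS, key=lambda v: (clicks[v] + 1) / (views[v] + 2))

# Hypothetical performance data collected so far:
clicks = dict.fromkeys(VARIANTS, 0) | {"comfort": 30, "price": 12}
views = dict.fromkeys(VARIANTS, 100)
print(choose_variant(clicks, views, epsilon=0.0))
```

With epsilon at 0.1, roughly one visit in ten explores a random variant; the rest get the current best performer.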

Three Concrete Applications

These concepts are immediately applicable to several strategic sectors. Allow me to provide three examples of the products currently under active testing.

In ecommerce, product pages are self-evolving. These systems adapt to search trends, available stock, and detected behavioral preferences.

1. To illustrate this point, consider a peer-to-peer car rental platform that manages 20,000 city pages.

Each page automatically adapts according to Google signals and local user patterns. During the summer months, the “Car rental Nice” page automatically prioritizes convertibles and highlights family testimonials. During the winter season, the fleet is transitioned to 4×4 vehicles, with a focus on optimizing the “mountain car rental” service.

2. Another example of technological innovation in the media industry is the ability of major news outlets to deploy “living” articles.

These articles are automatically updated to include the latest breaking news, ensuring that content remains fresh and relevant without constant editorial intervention. Human professionals still lead content creation, with AI playing a supportive role in keeping it current.

3. Finally, a promo-codes website successfully manages 3,000 merchant pages, which adapt in real time to commercial cycles and breaking deals.

Amazon’s Prime Days announcement is met with the automatic enrichment of contextual banners and temporal counters on all related pages. The system is designed to monitor partner APIs in order to detect new offers and instantly generate optimized content. Three weeks before Black Friday, “Zalando promo codes” pages automatically integrate dedicated sections and restructure their keywords.

Toward A New Era Of SEO

The future of SEO lies in publishing dynamic content that can adapt to the ever-changing algorithms of Google’s index. This transformation requires a fundamental paradigm shift, and many SEO agencies we support have already made the switch.

Marketing experts must abandon the “page” logic to adopt that of “adaptive systems.” This transition necessitates the acquisition of new tools and skills, as well as a re-evaluation of our strategic vision.

It is important to note that Agentic SEO is not merely a passing trend; it is the necessary response to an ecosystem undergoing profound mutation. Organizations that master these concepts will gain a significant competitive advantage in tomorrow’s attention economy.

Featured Image: Collagery/Shutterstock

How AI Mode Will Redefine Paid Search Advertising via @sejournal, @brookeosmundson

Search has always been a moving target.

From the days when keyword match types and manual cost-per-click (CPCs) gave advertisers a sense of control, to the rise of Shopping ads, automated bidding, and Performance Max, Google has never stopped reshaping how search works.

Every step has chipped away at some level of control for marketers while making it easier for Google to monetize intent.

But what we’re seeing now with AI Overviews and AI Mode is not just another product update. It is a structural rewrite of how search itself functions, which has some serious implications for paid ads.

Instead of sending people to a list of blue links, Google is using AI to generate answers and guide users through multi-step, conversational journeys. Ads are being pulled directly into these experiences, sometimes above or below AI summaries, other times embedded right inside them.

Google calls this a way to “shorten the path from discovery to decision.” For advertisers, it means budgets are being funneled into surfaces that look and act very different from the SERPs we’ve optimized around for years.

The stakes are clear: If fewer people click through to websites, advertisers face tighter competition for attention, rising CPCs, brand safety concerns, and limited transparency into where money is going.

Marketing leaders can’t afford to treat AI Mode as a side experiment. This is the future of Google search, and your ads will either adapt to it or be left behind.

Google’s AI Search Vision And Ad Strategy

Google has been explicit about where it wants to go. At Google Marketing Live 2025, executives described AI Overviews as “one of the most successful launches in Search in the past decade,” citing increases in commercial queries in markets like the U.S. and India.

AI Mode builds on that success by creating a conversational environment where users can refine, compare, and act without returning to the static list of links that defined Google for 20 years.

The company frames this as a win-win: Users get answers more efficiently, and advertisers get placements where intent is clearer and actions are closer at hand.

Google explains that ads are pulled seamlessly into these surfaces from Search, Shopping, Performance Max, and App campaigns.

For the user, the ad is “just part of the journey.” For the advertiser, there is no opting out, no special campaign type, and no reporting that shows which impressions or clicks came from AI Mode versus traditional search.

This approach is not new. Every major change to Google’s results has tilted the balance toward monetization.

Shopping ads once displaced text ads. Featured Snippets and the Knowledge Graph began answering questions directly, cutting down on organic clicks. Performance Max combined inventory into a single system, obscuring where impressions were served.

AI Mode is the culmination of these shifts: Ads are not just on the page; they are woven into the answers themselves.

Competition is another driver. Microsoft has already integrated ads into Copilot. OpenAI is experimenting with sponsored results in ChatGPT. Perplexity, the AI search upstart, has raised millions while building advertiser interest in native placements.

Google cannot afford to sit back while others monetize AI-first search. Ads inside AI Mode aren’t an experiment; they’re an existential business necessity.

Industry experts see this direction clearly. Cindy Krum of MobileMoxie has argued that Google is merging AI Overviews, Discover, and conversational flows into a single journey-first system. She believes ads will become highly targeted to users within that journey.

Krum further explained her opinion of Google’s intention for Ads in AI Mode:

You’ll have to be logged in to access AI Mode and when you’re logged in, they [Google] can collect all kinds of behavioral data and serve you incredibly personalized ads—ones you’re actually likely to click and convert on. That’s valuable to advertisers. Google can say, “We only show your ads to people who will convert.”

What I find concerning, though, is that advertisers are being asked to play along without the transparency they need to measure value. Seamless for users often means opaque for marketers, and this transition is no exception.

How AI Mode Changes User Behavior And Why It Matters For Ads

It’s easy to assume AI Mode is just another SERP redesign. But the data suggests it is changing how users behave, and those changes have direct implications for paid ads.

According to Pew Research, when an AI Overview appears:

  • Only 8% of visits result in clicks on traditional results, compared to 15% when no overview is present.
  • Only about 1% of visits include clicks on the links embedded inside the AI box.

Similarweb has tracked a sharp rise in zero-click searches, reaching nearly 70% of all queries by mid-2025, up from 56% the year before.

Authoritas found that in news-related queries, traffic to a top-ranking result dropped by almost 80% when an AI Overview appeared above it.

For advertisers, the math is simple:

  • If fewer people leave Google, competition for the remaining clicks intensifies.
  • CPCs rise because ad real estate is scarcer.
  • Campaign budgets have to stretch further just to maintain the same level of visibility.

Organic traffic has always acted as a counterweight to paid spend. If that counterweight shrinks, paid budgets take on more pressure.
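The squeeze can be illustrated with back-of-the-envelope arithmetic (all numbers here are hypothetical): if the same budget chases a click pool that has shrunk 40%, the effective cost per acquired click rises accordingly.

```python
# Sketch: the supply-and-demand squeeze, with hypothetical numbers.
budget = 10_000.0                        # fixed monthly spend in dollars
clicks_before = 5_000                    # clicks available before AI Overviews
clicks_after = int(clicks_before * 0.6)  # ~40% fewer outbound clicks

cpc_before = budget / clicks_before      # $2.00 effective cost per click
cpc_after = budget / clicks_after        # ~$3.33 for the same budget
print(round(cpc_before, 2), round(cpc_after, 2))
```

The real dynamics are messier (auctions, quality scores, vertical differences), but the direction of the pressure is the same.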

The effects differ by vertical. Ecommerce and travel sometimes see AI summaries spark more exploration of products, which can benefit Shopping ads.

Finance and insurance face mixed outcomes. Simplified comparisons may increase clicks in some cases but reduce brand-specific exposure in others.

News, health, and publishers are hit hardest, with traffic losses so steep that paid ads often become the only reliable way to reach audiences at scale.

Industry experts have not been shy about voicing their concerns.

Lily Ray, SEO director at Amsive, expressed her view after click-through rate data came out on AI Overviews:

“It was only a matter of time before new data & studies started to contradict Google’s messaging around the impact of AIOs on traffic.”

Rand Fishkin of SparkToro has been even more blunt:

“Zero click is taking over everything. Google is trying to answer searches without clicks. Facebook is trying to keep people on Facebook. LinkedIn wants to keep people on LinkedIn.”

I share that unease. This is a classic supply-and-demand problem. As free clicks shrink, advertisers will be forced to compete harder and pay more. Google benefits from this compression; advertisers absorb the costs.

Marketing leaders should stop treating this as a temporary adjustment. CPC inflation is becoming a structural reality of AI-powered search.

Ads Inside AI Journeys: Auctions, Costs, And Creative Implications

Google’s marketing spin around AI Mode is that ads are “a logical and natural next action to consumers exploring any topic.” That might be true from a user perspective, but from an advertiser’s perspective, the auction mechanics have changed in ways that deserve scrutiny.

Ads in AI Mode are not a distinct product. They are pulled from Search, Shopping, Performance Max, and App campaigns.

That means the inventory is blended, and advertisers don’t know whether impressions came from a standard SERP or an AI-generated summary.

Larger brands with broad match strategies, comprehensive product feeds, and robust budgets will have the advantage. Smaller or more niche advertisers risk being squeezed out, not because of poor strategy, but because the system is designed to privilege scale.

This dynamic almost guarantees CPC pressure. We saw the same thing when Shopping ads rose to prominence a decade ago.

As more real estate was given to paid placements, the remaining inventory became more competitive, and CPCs rose for the survivors. AI Mode is likely to trigger a similar cycle: fewer outbound clicks, fiercer bidding, higher costs.

Google is also testing outcome-based formats that push this further. In the retail vertical, for example, early experiments let users try items on virtually or track prices without ever leaving the AI journey.

By embedding ads as actions, Google can move from CPC toward cost-per-action pricing.

Fred Vallaeys of Optmyzr stated:

I have no doubt that Google and other ad platforms will find ways to appropriately monetize these advertising opportunities, even if there will be fewer impressions for each consumer journey.

He sees a potential upside for advertisers. I agree, but only if advertisers can prove that the actions driven inside AI Mode are incremental, not cannibalized from existing campaigns.

Creative expectations are also shifting. Conversational journeys demand conversational ads.

A blunt “Sign up today” may feel jarring inside a multi-step dialogue. Phrasing like “Find the right plan for your family” or “See how much you could save in minutes” fits better into the AI-driven flow.

I see opportunity here, but also risk. AI Mode could deliver more relevant matches between ad and intent. But without transparency into where ads appear and how they perform, advertisers are bidding blind. Google will extract more value from each interaction. Whether advertisers see the same value in return is far less certain.

The Transparency And Measurement Gap Of AI Mode

Perhaps the most glaring problem with AI Mode is measurement. Advertisers cannot see how their ads perform specifically in AI Overviews or AI Mode.

There is no column in Google Ads. Search Console offers no separate reporting. All performance is collapsed into existing campaigns.

This is more than a technical gap. For CMOs and CFOs, modeled attribution is not enough. Boards want to know where money is going and what it is producing.

If ad spend is being redirected into AI surfaces but not disclosed separately, how can leaders defend their budgets?

We’ve seen this before. Performance Max launched with almost no reporting visibility. Advertisers pushed back, and Google eventually provided more insights.

Transparency tends to lag product launches, but history suggests it comes only after sustained pressure from advertisers and agencies.

In the meantime, marketers have to fill the gap themselves. Some are building marketing mix models to estimate AI’s contribution. Others are connecting CRM systems more tightly to campaign spend.

Tracking mid-funnel events like demos or downloads is also becoming essential, since these signals often reveal whether AI-driven impressions are assisting conversion paths.

Modeled attribution can provide directional value, but it cannot replace true visibility.

Until Google surfaces AI-specific reporting, marketers should approach performance claims with skepticism and invest in their own measurement frameworks to avoid flying blind.

The Brand Safety And Trust Challenge With AI Overviews

AI Overviews have already produced embarrassing results, suggesting people put glue on pizza or eat rocks.

Google has since upgraded its models, grounding them in Gemini 2.5 and using “query fan-out” to cross-check responses. Accuracy has improved, but hallucinations still occur.

For advertisers, the risk goes beyond bad answers. It’s about adjacency. If your brand’s ad appears alongside a flawed or misleading AI-generated response, the reputational fallout could be significant.

This is a new kind of brand safety risk for search. In Display, adjacency concerns are expected. In search, ads have traditionally been “safe.” AI Mode changes that equation.

Regulators are also paying attention. The FTC and DOJ have already scrutinized Google’s dominance in search advertising.

If AI-driven ads blur the line between editorial and commercial, new antitrust challenges are possible. In Europe, the AI Act may impose stricter standards for how AI-generated content and ads are labeled.

Avoiding AI surfaces altogether isn’t realistic. The opportunity is too large. But brands must prepare frameworks to protect themselves.

That means actively monitoring where ads appear, setting internal thresholds for unacceptable contexts, and establishing escalation paths with Google when placements cross the line.

Trust cannot be outsourced. Advertisers must take responsibility for brand safety in AI environments, even if it means creating new workflows and raising difficult questions with their Google reps.

What Should Marketers Prioritize In The Face Of AI Mode And Overviews?

It’s tempting to wait until reporting improves and best practices become clearer. But hesitation is risky. The brands that adapt early will set the standards others follow.

The most important shift is reframing search around journeys, not keywords.

AI Mode thrives on follow-ups and refinements. Campaigns should be designed with multi-step customer paths in mind.

An insurance company, for example, shouldn’t stop at “compare rates.” It should also anticipate “how to switch providers” or “what coverage works best for families.”

Automation is another reality. Performance Max and broad match are the engines of eligibility for AI surfaces. But these tools need guardrails.

Negative keywords, audience signals, and clean product feeds help prevent waste and maintain some level of control.

Tinuiti has emphasized media accountability and measurement tools to ensure campaigns optimize what works and limit waste.

Agencies like Seer Interactive have published data showing paid click-through rates drop significantly when AI Overviews are present, and recommend careful monitoring and automation guardrails so advertisers don’t get caught by surprise.

Asset quality also matters more than ever. Structured data, schema markup, and entity-rich product feeds aren’t optional. They determine whether ads are eligible to show inside AI responses at all. Poor data means invisibility.

Measurement, too, must evolve. Last-click cost-per-acquisition (CPA) no longer tells the story. Marketing leaders need to evaluate mid-funnel signals like lead quality, sales cycle speed, and assisted revenue.

These key performance indicators (KPIs) reveal whether AI-driven impressions are helping move customers forward.

Creative strategy is another frontier. Ads inside AI journeys need to read like natural next steps, not jarring interruptions.

Early tests in Microsoft Copilot and Perplexity show conversational CTAs, such as “Estimate your monthly cost in seconds,” outperform blunt directives. Marketers should begin experimenting now to build a playbook before these surfaces scale further.

Adaptation is non-negotiable. This isn’t about abandoning SEM fundamentals. It’s about extending them into a search environment where AI defines the journey. CMOs who build strategies around these realities will not just survive the shift; they’ll gain a competitive edge.

The Future Of Paid Search In An AI World

AI search complicates the three pillars paid search has relied on for decades:

  • Transparency.
  • Predictable intent.
  • Measurable outcomes.

Ads are shifting from placements that sit beside results to actions that live inside AI-generated answers.

This isn’t unique to Google. Microsoft has integrated ads into Copilot. OpenAI is piloting sponsored answers in ChatGPT. Amazon and TikTok are testing AI-driven search monetization.

The entire industry is converging on the same model: AI-assisted journeys with ads embedded at critical decision points.

The outlook can be framed in scenarios.

In the best case, AI ads deliver more qualified clicks and higher efficiency, creating a win for advertisers.

In the middle case, some verticals see gains while frustrations over transparency persist.

In the worst case, CPCs inflate significantly, brand safety incidents mount, and ROI weakens, pushing advertisers to question their reliance on Google.

My conclusion is clear: This is not a passing experiment. It’s a structural shift. CMOs should treat AI search as a permanent change to the foundation of paid advertising.

That means reframing PPC as journey management, not keyword bidding. It means doubling down on first-party data and building attribution systems that don’t rely on Google’s word alone. And it means pressing Google for accountability at every step.

Because when ads become the answer, the brands that prepare early will be the ones that still get found.

Featured Image: Masha_art/Shutterstock

Google Answers SEO Question About Keyword Cannibalization via @sejournal, @martinibuster

Google’s John Mueller answered a question about a situation where multiple pages were ranking for the same search queries. Mueller affirmed the importance of reducing unnecessary duplication but also downplayed keyword cannibalization.

What Is Keyword/Content Cannibalization?

There is an idea that web pages will have trouble ranking if multiple pages are competing for the same keyword phrases. This is related to the SEO fear of duplicate content. Keyword cannibalization is just a catchall phrase that is applied to low-ranking pages that are on similar topics.

The problem with calling something keyword cannibalization is that the label does not identify anything specific that is wrong with the content. That is why people keep asking John Mueller about it: it is an ill-defined and unhelpful SEO concept.

SEO Confusion

The SEO was confused about the recent &num=100 change, in which Google began blocking rank trackers from scraping the search results (SERPs) 100 results at a time. Some rank trackers are floating the idea of showing ranking data for only the top 20 results. The change affects rank trackers’ ability to scrape the SERPs; its only effect on Google Search Console is that impression data now appears more accurate.

The SEO was under the mistaken impression that Search Console no longer shows impressions for results beyond the top twenty. That is false; Mueller didn’t address the point directly because it was simply a misunderstanding on the SEO’s part.

Here is the question that was asked:

“If now we are not seeing data from GSC from positions 20 and over, does that mean in fact there are no pages ranking above those places?

If I want to avoid cannibalization, how would I know which pages are being considered for a query, if I can only see URLs in the top 20 or so positions?”

Different Pages Ranking For Same Query

Mueller said that different pages ranking for the same search query is not a problem. I agree: multiple web pages ranking for the same keyword phrases is not a problem; it’s a good thing.

Mueller explained:

“Search Console shows data for when pages were actually shown, it’s not a theoretical measurement. Assuming you’re looking for pages ranking for the same query, you’d see that only if they were actually shown. (IMO it’s not really “cannibalization” if it’s theoretical.)

All that said, I don’t know if this is actually a good use of time. If you have 3 different pages appearing in the same search result, that doesn’t seem problematic to me just because it’s “more than 1”. You need to look at the details, you need to know your site, and your potential users.

Reduce unnecessary duplication and spend your energy on a fantastic page, sure. But pages aren’t duplicates just because they happen to appear in the same search results page. I like cheese, and many pages could appear without being duplicates: shops, recipes, suggestions, knives, pineapple, etc.”

Actual SEO Problems

Multiple pages ranking for the same keyword phrases is a good thing, not a reason for concern. The actual problem is multiple pages failing to rank for keywords.

Here are some real reasons why pages on the same topic may fail to rank:

  • The pages are too long and consequently are unfocused.
  • The pages contain off-topic passages.
  • The pages are insufficiently linked internally.
  • The pages are thin.
  • The pages are virtually duplicates of the other pages in the group.

The above are just a few of the real reasons why multiple pages on the same topic may not be ranking. Pointing at the pages and declaring that they are cannibalizing each other diagnoses nothing; keyword cannibalization is a catchall phrase that masks the actual reasons listed above.

Takeaway

The debate over keyword cannibalization says less about Google’s algorithm and more about how the SEO community is willing to accept ideas without really questioning whether the underlying basis makes sense. The question about keyword cannibalization is frequently discussed, and I think that’s because many SEOs have the intuition that it’s somehow not right.

Maybe the habit of diagnosing ranking issues with convenient labels mirrors the human tendency to prefer simple explanations over complex answers. But, as Mueller reminds us, the real story is not that two or three pages happen to surface for the same query. The real story is whether those pages are useful, well linked, and focused enough to meet a reader’s information needs.

What is diagnosed as “content cannibalization” is more likely something else. So, rather than chasing shadows, it may be better to look at the web pages with the eyes of a user and really dig into what’s wrong with the page or the interlinking patterns of the entire section that is proving problematic. Keyword cannibalization disappears the moment you look closer, and other real reasons become evident.

Featured Image by Shutterstock/Roman Samborskyi

Facial Recognition Bans Spread Globally

A ruling last week in Australia makes using facial recognition to combat fraud almost impossible and is the latest example of global regulators’ growing disapproval of biometric technology in retail environments.

The Office of the Australian Information Commissioner (OAIC) determined that Kmart Australia Limited had violated the country’s Privacy Act 1988 when it used facial recognition to prevent return fraud and theft.

Kmart stores in Australia had used facial recognition technology to catch fraudsters. Image: Wesfarmers.

Kmart and Bunnings

At issue was a Kmart pilot program that had placed facial recognition technology (FRT) in 28 of the company’s retail locations from June 2020 through July 2022.

The company created a face print, if you will, of every shopper entering one of the pilot program stores. When a customer returned an item, Kmart’s system would compare that person’s face print to a list of known thieves and fraudsters.

Kmart argued that the technology aimed to thwart return fraud and protect its employees, whom thieves had frequently threatened. Biometrics, however, represent a special category of privacy protection in Australia.

The case was similar to a November 2024 OAIC determination against Bunnings, a home-improvement retailer, for using FRT to identify criminals. Australian conglomerate Wesfarmers Limited owns Kmart Australia, Bunnings, and other retail chains, including Target Australia.

FRT Challenges

The OAIC stated that its finding is not a ban on FRT, but its conditions make using the technology challenging, if not impossible.

For example, an Australian retailer would need consent before employing FRT, and the thieves stealing items to attempt return fraud would almost certainly refuse.

Kmart had disclosed FRT in a sign at the front of each pilot store, which read, “This store has 24-hour CCTV coverage, which includes facial recognition technology.” But this notice did not establish consent according to the OAIC.

Asking would-be criminals for permission to use facial recognition has the same effect as banning it, given the current state of the technology.

GDPR

The OAIC’s Kmart decision regarding explicit consent aligns with other privacy regulations and rulings.

For example, many privacy experts note that Article 9 of the European Union’s General Data Protection Regulation (GDPR), which covers the processing of special categories of personal data, requires explicit consent for the use of FRT.

FTC vs. Rite Aid

In the United States, there are instances of rulings against FRT and the use of biometric data.

In a 2023 determination, the U.S. Federal Trade Commission prohibited Rite Aid Pharmacy from using FRT and other automated biometric systems for five years.

The agency argued that Rite Aid had not taken sufficient measures to prevent false positives and algorithmic racial profiling.

Illinois BIPA

The Illinois Biometric Information Privacy Act was enacted in 2008 and is, perhaps, the most stringent biometric privacy law in the nation.

The BIPA requires businesses to provide written notification of the use of biometric data and obtain shoppers’ written consent. The law permits individuals to sue for violations, and has resulted in many cases against retailers, such as:

  • A 2022 lawsuit alleges that Walmart’s in-store “cameras and advanced video surveillance systems” secretly collect shoppers’ biometric data without notice or consent.
  • A March 2024 class-action lawsuit against Target alleges the retailer used FRT to identify shoplifters without proper consent.
  • A class-action lawsuit filed in August 2025 alleges that Home Depot is illegally using FRT at its self-checkout kiosks.

M•A•C Cosmetics

From the retail and ecommerce perspective, the most concerning BIPA lawsuit may be Fiza Javid v. M.A.C. Cosmetics Inc. The class-action suit, filed in August 2025, is not concerned with crime fighting but with virtual try-on technology.

The complaint notes that M•A•C’s website asks shoppers to upload a photo or enable live video so it can detect facial structure and skin color. Plaintiff Fiza Javid asserts that the feature requires written consent under BIPA and therefore violates the Illinois law.

M•A•C Cosmetics offers tools for virtual try-on and skin color identification.

M•A•C’s virtual makeup try-on tools enhance the experience for shoppers and almost certainly improve ecommerce conversion rates.

The merits of the case are pending, yet BIPA has already spawned virtual try-on cases, including:

  • Kukovec v. Estée Lauder Companies, Inc. (2022).
  • Theriot v. Louis Vuitton North America, Inc. (2022).
  • Gielow v. Pandora Jewelry LLC (2022).
  • Shores v. Wella Operations US LLC (2022).

Engagement and Enforcement

AI-driven facial recognition and biometric technology are among the most promising trends in retail and ecommerce.

The technology has the potential to reduce fraud, deter theft, and support criminal prosecutions. A 2023 article in the International Security Journal estimated that facial biometrics could reduce retail shoplifting by between 50% and 90% depending on location and use.

Moreover, biometrics can improve online and in-store shopping with virtual try-on tools. Some merchants have reported a 35% increase in sales conversions when virtual shopping is available.

The question is how privacy regulations and rulings, such as last week’s Kmart decision, ultimately impact its use.

The Download: the CDC’s vaccine chaos

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A pivotal meeting on vaccine guidance is underway—and former CDC leaders are alarmed

This week has been an eventful one for America’s public health agency. Two former leaders of the US Centers for Disease Control and Prevention explained in a Senate hearing why they suddenly departed. They also described how CDC employees are being instructed to turn their backs on scientific evidence.

They painted a picture of a health agency in turmoil—and at risk of harming the people it is meant to serve. And, just hours afterwards, a panel of CDC advisers voted to stop recommending the MMRV vaccine for children under four. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

If you’re interested in reading more about US vaccine policy, check out:

+ Read our profile of Jim O’Neill, the deputy health secretary and current acting CDC director.

+ Why US federal health agencies are abandoning mRNA vaccines. Read the full story.

+ Why childhood vaccines are a public health success story. No vaccine is perfect, but these medicines are still saving millions of lives. Read the full story.

+ The FDA plans to limit access to covid vaccines. Here’s why that’s not all bad.

Meet Sneha Goenka: our 2025 Innovator of the Year

Every year, MIT Technology Review selects one individual whose work we admire to recognize as Innovator of the Year. For 2025, we chose Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method.

Thanks to her work, physicians can now sequence a patient’s genome and diagnose a genetic condition in less than eight hours—an achievement that could transform medical care.

Register here to join an exclusive subscriber-only Roundtable conversation with Goenka, Leilani Battle, assistant professor at the University of Washington, and our editor in chief Mat Honan at 1pm ET next Tuesday, September 23.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The CDC voted against giving some children a combined vaccine 
If accepted, the agency will stop recommending the MMRV vaccine for children under 4. (CNN)
+ Its vote on hepatitis B vaccines for newborns is expected today too. (The Atlantic $)
+ RFK Jr.’s allies are closing ranks around him. (Politico)

2 Russia is using Charlie Kirk’s murder to sow division in the US
It’s using the momentum to push pro-Kremlin narratives and divide Americans. (WP $)
+ The complicated phenomenon of political violence. (Vox)
+ We don’t know what being ‘terminally online’ means any more. (Wired $)

3 Nvidia will invest $5 billion in Intel
The partnership allows Intel to develop custom CPUs to work with Nvidia’s chips. (WSJ $)
+ It’s a much-needed financial shot in the arm for Intel. (WP $)
+ It’s also great news for Intel’s Asian suppliers. (Bloomberg $)

4 Medical AI tools downplay symptoms in women and ethnic minorities
Experts fear that LLM-powered tools could lead to worse health outcomes. (FT $)
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review)

5 AI browsers have hit the mainstream
Where’s the off switch? (Wired $)
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)

6 China has entered the global brain interface race
Its ambitious government-backed startups are primed to challenge Neuralink. (Bloomberg $)
+ This patient’s Neuralink brain implant gets a boost from generative AI. (MIT Technology Review)

7 What makes humans unique in the age of AI?
Defining the distinctions between us and machines isn’t as easy as it used to be. (New Yorker $)
+ How AI can help supercharge creativity. (MIT Technology Review)

8 This ship helps to reconnect Africa’s internet
AI needs high-speed internet, which needs undersea cables. (Rest of World)
+ What Africa needs to do to become a major AI player. (MIT Technology Review)

9 Hundreds of people queued in Beijing to buy Apple’s new iPhone
Desire for Apple products in the country appears to be alive and well. (Reuters)

10 San Francisco’s idea of a great night out? A robot cage fight
It’s certainly one way to have a good time. (NYT $)

Quote of the day

“Get off the iPad!”

—An irate air traffic controller tells the pilots of a Spirit Airlines flight to pay attention to avoid potentially colliding with Donald Trump’s Air Force One aircraft, Ars Technica reports.

One more thing

We used to get excited about technology. What happened?

As a philosopher who studies AI and data, Shannon Vallor’s Twitter feed is always filled with the latest tech news. Increasingly, she’s realized that the constant stream of information is no longer inspiring joy, but a sense of resignation.

Joy is missing from our lives, and from our technology. Its absence is feeding a growing unease being voiced by many who work in tech or study it. Fixing it depends on understanding how and why the priorities in our tech ecosystem have changed. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Would you go about your daily business with a soft toy on your shoulder? This intrepid reporter gave it a go.
+ How dying dinosaurs shaped the landscapes around us.
+ I can’t believe I missed Pythagorean Theorem day earlier this week.
+ Inside the rise in popularity of the no-water yard.

How Brands Boost ROI with Smart Data

Ecommerce marketers know the challenge of delivering relevant promotions to prospects without violating privacy rules and norms. Yet many providers now offer solutions that do both — personalize offers and respect privacy — for much greater performance.

Two of those providers are my guests in this week’s episode. Sean Larkin is CEO of Fueled, a customer data platform for merchants. Francesco Gatti is CEO of Opensend, a repository of consumer demographic and behavior data.

The entire audio of our conversation is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Give us an overview of what you do.

Sean Larkin: I’m CEO and founder of Fueled, a customer data platform for ecommerce. We help brands strengthen the data signals sent to advertising and marketing platforms such as Meta to improve tracking and performance. Our team collaborates with companies such as Built Basics, Dr. Squatch, and Oats Overnight, ensuring accurate pixel data and confidence in their marketing metrics.

Francesco Gatti: I’m CEO and co-founder of Opensend. We help brands identify site visitors who haven’t provided contact information. This includes new users who show sufficient engagement for retargeting and returning shoppers browsing on different devices or browsers. Our technology links these sessions, enabling brands and platforms such as Klaviyo and Shopify to distinguish between returning visitors and new ones.

We also offer a persona tool that segments customers by detailed demographics and behavior, enabling personalized marketing. We integrate directly with Klaviyo and other email platforms. By enhancing Klaviyo accounts, we help merchants reach unidentifiable visitors and maximize ad spend. Our re-identification capabilities are critical, as consumers often use multiple devices and frequently replace them, which can disrupt tracking. We work with roughly 1,000 brands, including Oats Overnight, Regent, and Alexander Wang.

Bandholz: How can your tools track a consumer across devices?

Gatti: We see two main use cases. First is cross-device and cross-browser identification. Imagine Joe bought from you last year on his old iPhone. This year, he returns using a new iPhone or his work computer. Typically, you wouldn’t know it’s the same person. Our technology matches signals such as user-agent data against our consumer graph, which holds multiple devices per person, allowing you to recognize Joe regardless of the device or browser.

The second use case involves capturing emails from high-intent visitors. Suppose Joe clicks an Instagram ad, views several product pages for over two minutes, even adds items to his cart, but leaves without buying or subscribing.

Through data partnerships with publishers such as Sports Illustrated and Quizlet, where users provide their email addresses in exchange for free content or promotions, we can match Joe’s anonymous activity to his known email. We then send that email, plus his on-site behavior, to Klaviyo and similar platforms. This triggers an abandonment flow, allowing us to retarget him with personalized messages and increase the chance of conversion.

Bandholz: What are other ways brands use the data?

Gatti: Brands mainly set up automated flows and let them run. Like Fueled, we send data to email platforms and customer data systems, allowing them to trigger personalized actions automatically. The data enables Klaviyo to distinguish between new and returning visitors to show pop-ups only to first-timers.

Larkin: We integrate hashed emails into Meta. Match scores rise 30–50%, and return on ad spend improves because we can prove an ad drove a sale and retarget that shopper.

Gatti: Our identity graph stores multiple data points, including email addresses, phone numbers, postal addresses, IP addresses, and devices. Sharing that with Fueled feeds richer details into Meta’s conversion API, dramatically increasing match rates and targeting accuracy.

Larkin: Privacy rules now limit simple pixel tracking. Since iOS 17, identifiers are stripped, making it harder for ecommerce brands to track visitors and run effective ads. Fueled collects first-party data, and Opensend’s third-party graph restores lost signals. With conversion API integrations, brands send detailed data directly to platforms such as Meta for stronger targeting and email automation.
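
The hashed-email step Larkin mentions can be sketched briefly. Meta’s Conversions API expects customer emails to be normalized (trimmed and lowercased) and SHA-256 hashed before transmission; the function below shows only that normalization-plus-hashing step, not the full API call.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim whitespace, lowercase, then SHA-256 hash an email address,
    matching the normalization Meta expects for hashed identifiers."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# The same person yields the same hash regardless of capitalization or padding.
print(normalize_and_hash("  Joe.Shopper@Example.com "))  # 64-character hex digest
```

Because the hash is deterministic, Meta can match it against hashes of its own users’ emails, so an ad platform can attribute a sale without either party exchanging the raw address.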

Bandholz: When should a brand start using data technologies like yours?

Larkin: It depends on scale. If you’re spending under $20,000 a month on ads, the free Shopify integrations with Meta or Google usually suffice. Fueled is twice the cost of our competitors because we offer hands-on audits, proactive monitoring, and direct Slack support. Our typical clients do $8 million or more in annual revenue, often over $20 million. Some entrepreneurs bring us in from day one for the data advantage. Still, most brands should wait until ad spend grows and minor optimizations have a significant financial impact.

Gatti: For Opensend it’s about traffic, not revenue. We recommend at least 30,000 monthly unique visitors so our filtering can produce quality new emails. Services that identify returning visitors across devices work best for sites with 100,000 monthly visitors, where a 10x ROI is common. Our plans start at $250 per month.

Visitors who never share an email address convert less often, but our filtering narrows the gap. At apparel brand True Classic, for example, we captured 390,000 emails over three years, saw 65% open rates, and delivered a 5x return on investment within three months. As these contacts move through remarketing with holiday offers and seasonal promotions, ROI continues to compound.

Bandholz: When should a company remove an unresponsive subscriber?

Gatti: It varies by brand, average order value, and overall marketing strategy. We work with both high-end luxury companies and billion-dollar tire sellers with very different approaches. In general, if you’ve sent 10 to 15 emails with zero engagement, it’s time to drop those contacts. Continuing to send won’t help.

Larkin: I’d add that many brands, including big ones, don’t plan their retargeting or abandonment flows, especially heading into Black Friday and Cyber Monday. The pressure to discount everything can lead to leaving money on the table. Opensend reveals customer intent, allowing you to adjust offers. Someone who reaches the checkout may not require the same discount as someone who has only added a product to their cart.

We partner with agencies such as Brand.co and New Standard Co that help us build smart strategies. My biggest recommendation for the holidays is to review your flows, decide when a large discount is necessary, and avoid giving away the farm. If you blanket customers with huge discounts, many will disappear once the sale ends.

Bandholz: Where can people follow you, find you, use your services?

Gatti: Our site is Opensend.com. I’m on LinkedIn.

Larkin: We’re Fueled.io. I’m also on LinkedIn.

Internal WordPress Conflict Spills Out Into The Open via @sejournal, @martinibuster

An internal dispute within the WordPress core contributor team spilled into the open, causing major confusion among people outside the organization. The friction began with a post from more than a week ago and culminated in a remarkable outburst, exposing latent tensions within the core contributor community.

Mary Hubbard Announcement Triggers Conflict

The incident seemingly began with a September 15 announcement by Mary Hubbard, the Executive Director of WordPress. She announced a new Core Program Team meant to improve coordination and collaboration among Core contributor teams. But this was just the trigger for a conflict rooted in longer-term friction.

Hubbard explained the role of the new team:

“The goal of this team is to strengthen coordination across Core, improve efficiency, and make contribution easier. It will focus on documenting practices, surfacing roadmaps, and supporting new teams with clear processes.

The Core Program Team will not set product direction. Each Core team remains autonomous. The Program Team’s role is to listen, connect, and reduce friction so contributors can collaborate more smoothly.”

That announcement was met with the following response by a member of the documentation team (Jenni McKinnon), which was eventually removed:

“For the public record: This Core Program Team announcement was published during an active legal and procedural review that directly affects the structural governance of this project.

I am not only subject to this review—I am one of the appointed officials overseeing it under my legal duty as a recognized lead within SSRO (Strategic Social Resilience Operations). This is a formal governance, safety, and accountability protocol—bound by national and international law—not internal opinion.

Effective immediately:
• This post and the program it outlines are to be paused in full.
• No action is to be taken under the name of this Core Program Team until the review concludes and clearance is formally issued.
• Mary Hubbard holds no valid authority in this matter. Any influence, instruction, or decision traced to her is procedurally invalid and is now part of a legal evidentiary record.
• Direction, oversight, and all official governance relating to this matter is held by SSRO, myself, and verified leadership under secured protocol.

This directive exists to protect the integrity of WordPress contributors, prevent governance sabotage, and ensure future decisions are legally and ethically sound.

Further updates will be provided only through secured channels or when review concludes. Thank you for respecting this freeze and honoring the laws and values that underpin open source.”

The post was followed by astonishment and questions in various Slack and Facebook WordPress groups. The roots of the friction begin with events from a week ago centered on documentation team participation.

Documentation Team Participation

A September 10 post by documentation team member Estela Rueda informed the Core contributor community that the WordPress 6.9 release squad is experimenting with a smaller team that excludes documentation leads, with only a temporary “Docs Liaison” in place. Her post explained why this exclusion is a problem, detailed the importance of documentation in the release cycle, and urged that a formal documentation lead role be reinstated in future releases.

Estela Rueda wrote (in the September 10 post):

“The release team does not include representation from the documentation team. Why is this a problem? Because often documentation gets overlooked in release planning and project-wide coordination: Documentation is not a “nice-to-have,” it is a survival requirement. It’s not something we might do if someone has time; it’s something we must do — or the whole thing breaks down at scale. Removing the role from the release squad, we are not just sending the message that documentation is not important, we are showing new contributors that working on docs will never get them to the top of the credits page, therefore showing that we don’t even appreciate contributing to the Docs.”

Jenni McKinnon, who is a member of the docs team, responded with her opinions:

“This approach isn’t in line with genuine open-source values — it’s exclusionary and risks reinforcing harmful, cult-like behaviors.

By removing the Docs Team from the release squad under the guise of “reducing overhead,” this message sends a stark signal: documentation is not essential. That’s not just unfair — it actively erodes the foundations of transparency, contributor morale, and equitable participation.”

She added further comments, culminating in the post below that accused WordPress Executive Director Mary Hubbard of being behind a shift toward “top-down” control:

“While this post may appear collaborative on the surface, it’s important to state for the record — under Chatham House Rule, and in protection of those who have been directly impacted — that this proposal was pushed forward by Mary Hubbard, despite every Docs Team lead, and multiple long-time contributors, expressing concerns about the ethics, sustainability, and power dynamics involved.

Framing this as ‘streamlining’ or ‘experimenting’ is misleading. What’s happening is a shift toward top-down control and exclusion, and it has already resulted in real harm, including abusive behavior behind the scenes.”


Documentation Team Member Asked To Step Away

The latest escalation appears to have been triggered by a post published earlier today announcing that Jenni McKinnon was asked to “step away.”

Milana Cap wrote a post today titled, “The stepping away of a team member” that explained why McKinnon was asked to step away:

“The Documentation team’s leadership has asked Jenni McKinnon to step away from the team.

Recent changes in the structure of the WordPress release squad started a discussion about the role of the Documentation team in documenting the release. While the team was working with the Core team, the release squad, and Mary Hubbard to find a solution for this and future releases, Jenni posted comments that were out of alignment with the team, including calls for broad changes across the project and requests to remove certain members from leadership roles.

This ran counter to the Documentation team’s intentions. Docs leadership reached out privately in an effort to de-escalate the situation and asked Jenni to stop posting such comments, but this behaviour did not stop. As a result, the team has decided to ask her to step away for a period of time to reassess her involvement. We will work with her to explore rejoining the team in the future, if it aligns with the best outcomes for both her and the team.”

And that post may have been what precipitated today’s blow-up in the comments section of Mary Hubbard’s post.

Zooming Out: The Big Picture

What happened today is, on its face, an isolated incident. But some in the WordPress community have privately said that WordPress core’s technical debt has grown, and have expressed concern that the big picture is being ignored. Separately, in comments on her September 10 post (Docs team participation in WordPress releases), Estela Rueda alluded to the issue of burnout among WordPress contributors:

“…the number of contributors increases in waves depending on the releases or any special projects we may have going. The ones that stay longer, we often feel burned out and have to take breaks.”

Taken together, to an outsider, today’s friction contributes to the appearance of cracks starting to show in the WordPress project.