This year’s UN climate talks avoided fossil fuels, again

If we didn’t have pictures and videos, I almost wouldn’t believe the imagery that came out of this year’s UN climate talks.

Over the past few weeks in Belém, Brazil, attendees dealt with oppressive heat and flooding, and at one point a literal fire broke out, delaying negotiations. The symbolism was almost too much to bear.

While many, including the president of Brazil, framed this year’s conference as one of action, the talks ended with a watered-down agreement. The final draft doesn’t even include the phrase “fossil fuels.”

As emissions and global temperatures reach record highs again this year, I’m left wondering: Why is it so hard to formally acknowledge what’s causing the problem?

This is the 30th time that leaders have gathered for the Conference of the Parties, or COP, an annual UN conference focused on climate change. COP30 also marks 10 years since the gathering that produced the Paris Agreement, in which world powers committed to limiting global warming to “well below” 2.0 °C above preindustrial levels, with a goal of staying below the 1.5 °C mark. (That’s 3.6 °F and 2.7 °F, respectively, for my fellow Americans.)

Before the conference kicked off this year, host country Brazil’s president, Luiz Inácio Lula da Silva, cast this as the “implementation COP” and called for negotiators to focus on action, and specifically to deliver a road map for a global transition away from fossil fuels.

The science is clear—burning fossil fuels emits greenhouse gases and drives climate change. Reports have shown that meeting the goal of limiting warming to 1.5 °C would require stopping new fossil-fuel exploration and development.

The problem is, “fossil fuels” might as well be a curse word at global climate negotiations. Two years ago, fights over how to address fossil fuels brought talks at COP28 to a standstill. (It’s worth noting that the conference was hosted in Dubai in the UAE, and its president was literally the head of the country’s national oil company.)

The agreement in Dubai ended up including a line that called on countries to transition away from fossil fuels in energy systems. It was short of what many advocates wanted, which was a more explicit call to phase out fossil fuels entirely. But it was still hailed as a win. As I wrote at the time: “The bar is truly on the floor.”

And yet this year, it seems we’ve dug into the basement.

At one point about 80 countries, a little under half of those present, demanded a concrete plan to move away from fossil fuels.

But oil producers like Saudi Arabia were insistent that fossil fuels not be singled out. Other countries, including some in Africa and Asia, also made a very fair point: Western nations like the US have burned the most fossil fuels and benefited from it economically. This contingent maintains that legacy polluters have a unique responsibility to finance the transition for less wealthy and developing nations rather than simply barring them from taking the same development route. 

The US, by the way, didn’t send a formal delegation to the talks, for the first time in 30 years. But the absence spoke volumes. In a statement to the New York Times that sidestepped the COP talks, White House spokesperson Taylor Rogers said that President Trump had “set a strong example for the rest of the world” by pursuing new fossil-fuel development.

To sum up: Some countries are economically dependent on fossil fuels, some don’t want to stop depending on fossil fuels without incentives from other countries, and the current US administration would rather keep using fossil fuels than switch to other energy sources. 

All those factors combined help explain why, in its final form, COP30’s agreement doesn’t name fossil fuels at all. Instead, there’s a vague line saying leaders should take into account the decisions made in Dubai, and an acknowledgement that the “global transition towards low greenhouse-gas emissions and climate-resilient development is irreversible and the trend of the future.”

Hopefully, that’s true. But it’s concerning that even on the world’s biggest stage, naming what we’re supposed to be transitioning away from and putting together any sort of plan to actually do it seems to be impossible.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The Download: the fossil fuel elephant in the room, and better tests for endometriosis

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This year’s UN climate talks avoided fossil fuels, again

Over the past few weeks in Belém, Brazil, attendees of this year’s UN climate talks dealt with oppressive heat and flooding, and at one point a literal fire broke out, delaying negotiations. The symbolism was almost too much to bear.

While many, including the president of Brazil, framed this year’s conference as one of action, the talks ended with a watered-down agreement. The final draft doesn’t even include the phrase “fossil fuels.”

As emissions and global temperatures reach record highs again this year, I’m left wondering: Why is it so hard to formally acknowledge what’s causing the problem?

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

New noninvasive endometriosis tests are on the rise

Endometriosis inflicts debilitating pain and heavy bleeding on more than 11% of reproductive-age women in the United States. Diagnosis takes nearly 10 years on average, partly because half the cases don’t show up on scans, and surgery is required to obtain tissue samples.

But a new generation of noninvasive tests is emerging that could help accelerate diagnosis and improve management of this poorly understood condition. Read the full story.

—Colleen de Bellefonds

This story is from the last print issue of MIT Technology Review magazine, which is full of fascinating stories about the body. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI claims a teenager circumvented its safety features before ending his life
It says ChatGPT directed Adam Raine to seek help more than 100 times. (TechCrunch)
+ OpenAI is strongly refuting the idea it’s liable for the 16-year-old’s death. (NBC News)
+ The looming crackdown on AI companionship. (MIT Technology Review)

2 The CDC’s new deputy director prefers natural immunity to vaccines
And he wasn’t even the worst choice among those considered for the role. (Ars Technica)
+ Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man. (MIT Technology Review)

3 An MIT study says AI could already replace 12% of the US workforce
Researchers drew that conclusion after simulating a digital twin of the US labor market. (CNBC)
+ Separate research suggests it could replace 3 million jobs in the UK, too. (The Guardian)
+ AI usage looks unlikely to keep climbing. (Economist $)

4 An Italian defense group has created an AI-powered air shield system
It claims the system allows defenders to generate dome-style missile shields. (FT $)
+ Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies. (MIT Technology Review)

5 The EU is considering a ban on social media for under-16s
Following in the footsteps of Australia, whose own ban comes into force next month. (Politico)
+ The European Parliament wants parents to decide on access. (The Guardian)

6 Why do so many astronauts keep getting stuck in space?
America, Russia and now China have had to contend with this situation. (WP $)
+ A rescue craft for three stranded Chinese astronauts has successfully reached them. (The Register)

7 Uploading pictures of your hotel room could help trafficking victims
A new app uses computer vision to determine where pictures of generic-looking rooms were taken. (IEEE Spectrum)

8 This browser tool turns back the clock to a pre-AI slop web
Back to the golden age of pre-November 30, 2022. (404 Media)
+ The White House’s slop posts are shockingly bad. (NY Mag $)
+ Animated neo-Nazi propaganda is freely available on X. (The Atlantic $)

9 Grok’s “epic roasts” are as tragic as you’d expect
Test it out at parties at your own peril. (Wired $)

10 Startup founders dread explaining their jobs at Thanksgiving 🍗
Yes, Grandma, I work with computers. (Insider $)

Quote of the day

“AI cannot ever replace the unique gift that you are to the world.”

—Pope Leo XIV warns students about the dangers of over-relying on AI, New York Magazine reports.

One more thing

Why we should thank pigeons for our AI breakthroughs

People looking for precursors to artificial intelligence often point to science fiction or thought experiments like the Turing test. But an equally important, if surprising and less appreciated, forerunner is American psychologist B.F. Skinner’s research with pigeons in the middle of the 20th century.

Skinner believed that association—learning, through trial and error, to link an action with a punishment or reward—was the building block of every behavior, not just in pigeons but in all living organisms, including human beings.

His “behaviorist” theories fell out of favor in the 1960s but were taken up by computer scientists who eventually provided the foundation for many of the leading AI tools. Read the full story.

—Ben Crair

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I hope you had a happy, err, Green Wednesday if you partook this year.
+ Here’s how to help an endangered species from the comfort of your own home.
+ Polly wants to FaceTime—now! 📱🦜(thanks Alice!)
+ I need Macaulay Culkin’s idea for another Home Alone sequel to get greenlit, stat.

The Alpha Is Not LLM Monitoring via @sejournal, @Kevin_Indig

Adobe just paid $1.9 billion for Semrush. Not for the LLM tracking dashboards. For the platform, the customer relationships, and the distribution.

Contrast: Investors poured $227 million into AI visibility tracking. Most of that went to tracking dashboards. The companies shipping outputs from agentic SEO raised a third of that. Adobe’s acquisition proves dashboards were never the point.

Investors chased LLM monitoring because it looked like easy SaaS, but the durable value sits in agentic SEO tools that actually ship work. Why? Because agentic SEO goes beyond the traditional SEO tooling setup, and offers SEO professionals and agencies a completely new operational capability that can augment (or doom) their business.

Together with Wordlift, Growth Capital, Niccolo Sanarico of Primo Capital, and G2, I analyzed the funding data and the companies behind it. The pattern is clear: Capital chased what sounded innovative. The real opportunity hid in what actually works.

1. AI Visibility Monitoring Looked Like The Future

Image Credit: Kevin Indig

We looked at 80 companies and their collective $1.5 billion in venture funding:

  • Established platforms (five companies) captured $550 million.
  • LLM Monitoring (18 companies) split $227 million.
  • Agentic SEO companies got $86 million.

AI visibility tracking seemed like the obvious problem in 2024 because every CMO asked the same question: “How does my brand show up in ChatGPT?” It’s still not a solved problem: We don’t have real user prompts, and responses vary significantly. But measuring is not defensible. The vast number of startups providing the same product proves it.

Monitoring tools have negative switching costs. Agentic tools have high switching costs.

  • Low pain: If a brand turns off a monitoring dashboard, they lose historical charts.
  • High pain: If a brand turns off an agentic SEO platform, their marketing stops publishing.

Venture capital collectively invested more than $200 million because companies care about how and where they show up on the first new channel since Alphabet, Meta, and TikTok. The AI visibility industry has the potential to be bigger than the SEO industry (~$75 billion) because Brand and Product Marketing departments care about AI visibility as well.

What they missed is how fast that trend becomes infrastructure. Amplitude proved it was commoditizable by offering monitoring for free. When Semrush added it as a checkbox, the category collapsed.

2. The Alpha Is In Outcomes, Not Insights

Outcomes trump insights. In 2025, the value of AI is getting things done. Monitoring is table stakes.

73% of AI visibility tracking companies were founded in 2024 and raised $12 million on average. That check size is typically reserved for scale-stage companies with proven product-market fit.

Image Credit: Kevin Indig

Our analysis reveals a massive maturity gap between where capital flowed and where value lives.

  • Monitoring companies (average age: 1.3 years) raised seed capital at growth valuations.
  • Agentic SEO companies (average age: 5.5 years) have been building infrastructure for years.

Despite being more mature, the agentic layer raised one-third as much capital as the monitoring layer. Why? Because investors missed the moat.

Investors dislike “shipping” tools at the seed stage because they require integration, approval workflows, and “human-in-the-loop” setup. To a VC, this looks like low-margin consulting. Monitoring tools look like perfect SaaS: 90% gross margins, instant onboarding, and zero friction.

Money optimized for ease of adoption and missed ease of cancellation.

  • The Monitoring Trap: You can turn off a dashboard with a click to save budget.
  • The Execution Moat: The “messy” friction of agentic SEO is actually the defensibility. Once an operational workflow is installed, it becomes infrastructure. You cannot turn off an execution engine without halting your revenue.

Capital flowed to the “clean” financials of monitoring, leaving the “messy” but durable execution layer underfunded. That is where the opportunity sits.

Three capabilities separate the winners from the features:

  1. Execution Velocity: Brands need content shipped across Reddit, TikTok, Quora, and traditional search simultaneously. Winners automate the entire workflow from insight to publication.
  2. Grounding in Context: Generic optimization loses to systems that understand your specific business logic and brand voice. (Ontology is the new moat).
  3. Operations at Scale: Content generation without pipeline management is a toy. You need systems enforcing governance across dozens of channels. Point solutions lose; platform plays win.

The difference is simple: one group solves “how do I know?” and the other solves “how do I ship?”

3. The Next 18 Months Will Wipe Out The Weakest Part Of The AI Stack

The market sorts into three tiers based on defensibility:

1. Established platforms win by commoditizing. Semrush and Ahrefs have customer relationships spanning two decades. They’ve already added LLM monitoring as a feature. They now need to move faster on the action layer – the workflow automation that helps marketers create and distribute assets at scale. Their risk isn’t losing relevance. It’s moving too slowly while specialized startups prove out what’s possible.

The challenge: Established platforms are read-optimized; agentic operations require write-access. Semrush and Ahrefs built 20-year moats on indexing the web (Read-Only). Moving to agentic SEO requires them to write back to the customer’s CMS (Write-Access).

2. Agentic SEO platforms scale into the gap. They’re solving real operational constraints with sticky products. AirOps is proving the thesis: $40 million Series B, $225 million valuation. Their product lives in the action layer – content generation, maintenance, rich media automation. Underfunded today, they capture follow-on capital tomorrow.

3. Monitoring tools consolidate or disappear. Standalone AI visibility vendors have 18 months to either build execution layers on top of their dashboards or find an acquirer. The market doesn’t support single-function tracking at venture scale.

Q3/Q4 2026 could be an “Extinction Event.” This is when the 18-month runway from the early 2024 hype cycle runs out. Companies will go to market to raise more money, fail to show the revenue growth required to support their 2024 valuations, and be forced to:

  • Accept a “down-round” (raising money at a lower valuation, crushing employee equity).
  • Sell for parts (acqui-hire).
  • Fold.

Let’s do some basic “Runway Math”:

  • Assumption: The dataset shows the average “Last Funding Date” for this cluster is March 2025. This means the bulk of this $227 million hit bank accounts in Q1 2025.
  • Data Point: The average company raised ~$21 million.
  • The Calculation: A typical seed or Series A round is sized to provide 18 to 24 months of runway. With the last funding in Q1 2025 and 18 months of runway, we arrive at Q3 2026 (see the sketch below).
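
To make that arithmetic concrete, here is a minimal TypeScript sketch of the runway math above. The March 2025 funding date and the 18-month runway are the article’s assumptions; the helper functions are illustrative.

```typescript
// Minimal sketch of the runway math described above.
// Inputs mirror the article's assumptions: last funding in Q1 2025
// (taken here as March 2025) and 18 months of runway.

function addMonths(date: Date, months: number): Date {
  const result = new Date(date);
  result.setMonth(result.getMonth() + months);
  return result;
}

function toQuarter(date: Date): string {
  const quarter = Math.floor(date.getMonth() / 3) + 1;
  return `Q${quarter} ${date.getFullYear()}`;
}

const lastFunding = new Date(2025, 2, 1); // March 2025 (month is 0-indexed)
const runwayMonths = 18;

console.log(toQuarter(addMonths(lastFunding, runwayMonths))); // "Q3 2026"
```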

To raise their next round (Series B) and extend their life, AI visibility companies must justify the high valuation of their previous round. But to justify a Series A valuation (likely $50-$100 million post-money given the AI hype), they need to show roughly 3x-5x ARR growth year-over-year. Because the product is commoditized by free tools like Amplitude and bundled features from Semrush, they might miss that 5x revenue growth target.

Andrea Volpini, Founder and CEO of Wordlift:

After 25 years, the Semantic Web has finally arrived. The idea that agents can reach a shared understanding by exchanging ontologies and even bootstrap new reasoning capabilities is no longer theoretical. It is how the human-centered web is turning into an agentic, reasoning web while most of the industry is caught off guard. When Sir Tim Berners-Lee warns that LLMs may end up consuming the web instead of humans, he is signaling a seismic shift. It is bigger than AI Search. It is reshaping the business model that has powered the web for three decades. This AI Map is meant to show who is laying the foundations of the reasoning web and who is about to be left behind.

4. The Market Thesis: When $166 Billion Meets Behavioral Disruption

From Niccolo Sanarico, writer of The Week in Italian Startups and Partner at Primo Capital:

Let’s leave the funding data for a moment, and shift to the demand side of the market: on the one hand, Google integrating AI search results on its SERP, ChatGPT or Perplexity becoming the entry point for search and discovery, are phenomena that are creating a change in user behavior – and when users change behavior, new giants emerge. On the other hand, SEO has historically been a consulting-like, human-driven, tool-enabled effort, but its components (data monitoring & analysis, content ideation & creation, process automation) are the bread and butter of the current generation of AI, and we believe there is a huge space for emerging AI platforms to chip away at the consulting side of this business. Unsurprisingly, 42% of the companies in our dataset were founded on or after 2020, despite the oldest and greatest players dating back more than 20 years, and the key message they are passing is “let us do the work.”

The numbers validate this thesis at scale. Even though it is not always easy to size it, recent research finds that the SEO market represents a $166 billion opportunity split between tools ($84.94 billion) and services ($81.46 billion), growing at 13%+ annually. But the distribution reveals the disruption opportunity: agencies dominate with 55% market share in services, while 60% of enterprise spend flows to large consulting relationships. This $50+ billion consulting layer – built on manual processes, relationship-dependent expertise, and human-intensive workflows – sits directly in AI’s disruption path.

The workforce data tells the automation story. With >200,000 SEO professionals globally and median salaries in the US of $82,000 (15% above U.S. national average), we’re looking at a knowledge worker category ripe for productivity transformation. The job market shifts already signal this transition: content-focused SEO roles declined 28% in 2024 as AI automation eliminated routine work, while leadership positions grew 50-58% as the focus shifted to strategy and execution oversight. When 90% of new SEO positions come from companies with 250+ employees, and these organizations are simultaneously increasing AI tool budgets from 5% to 15% of total SEO spend, the path forward is clear: AI platforms that can deliver execution velocity will capture the value gap between high-cost consulting and lower-margin monitoring tools.

5. What This Means For You

For Tool Buyers

Stop asking “Is it AI-powered?” Ask instead:

  1. Does this solve an operational constraint or just give me information? (If it’s information, Semrush will have it free in 18 months.)
  2. Does this automate a workflow or create new manual work? (Sticky products are deeply integrated. Point solutions require babysitting.)
  3. Can I get this from my existing platform eventually, or is this defensible? (If an established player can bundle it, they will.)

For Investors

You’re at an inflection point:

  • The narrative layer (monitoring) is collapsing in real-time.
  • The substance layer (execution) is still underfunded.
  • This gap closes fast.

When evaluating opportunities, ask: “What would need to happen for Semrush or Ahrefs to provide this?” If the answer is “not much,” it’s not defensible at venture scale. If they had to rebuild core infrastructure or cannibalize part of their product, you have a moat.

The best signal isn’t which companies are raising capital, but which categories are raising capital despite low defensibility. That’s where you find the upside.

For Builders

Your strategic question isn’t “Which category should I enter?” It’s “How deeply integrated will I be in my customers’ workflows?” If you’re building monitoring tools, you have 18 months. Either build an execution layer on top of your dashboard or optimize for acquisition.

If you’re building execution platforms, defensibility comes from three things:

  1. Depth of integration in daily workflows
  2. Required domain expertise
  3. Operational leverage you provide relative to building in-house

The winning companies are those that solve problems requiring continuous domain expertise and that cannot be easily copied. Automated workflows that understand brand guidelines, customer segments, and channel-specific best practices aren’t easily replicated.

Ask yourself: What operational constraint am I solving that requires judgment calls, not just better AI? If the answer is “I’m just generating better content faster,” you’re building a feature. If the answer is “I’m managing complexity across dozens of channels while enforcing consistency,” you’re building a platform.

Full infographic of our analysis:

Image Credit: Kevin Indig

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!



From Organic Search To AI Answers: How To Redesign SEO Content Workflows via @sejournal, @rio_seo

It’s officially the end of organic search as we know it. A recent survey reveals that 83% of consumers believe AI-powered search tools are more efficient than traditional search engines.

The days of simple search are long gone, and a profound transformation continues to sweep the search engine results pages (SERPs). The rise of AI-powered answer engines, from ChatGPT to Perplexity to Google’s AI Overviews, is rewriting the rules of online visibility.

Instead of returning traditional blue links or images, AI systems are returning immediate results. For marketing leaders, the question is no longer “How do we rank number one?” but rather “How do we become the top answer?”

This shift has eliminated the distance between the search and the solution. No longer do customers need to click through to find the information they’re seeking. And while zero-click searches are more prevalent and old metrics like keyword rankings are fading fast, this shift also creates a massive opportunity for chief marketing officers to redefine SEO as a strategic growth function.

Yes, content remains king, but it must be rooted in a foundation that fuels authority, brand trust, and authenticity to serve the systems shaping what appears when a search is conducted. This isn’t just a new channel; it’s a new way of creating, structuring, and validating content.

In this post, we’ll dissect how to redesign content workflows for generative engines to ensure your content reigns supreme in an AI-first era.

What Generative Engines Changed And Why “Traditional SEO” Won’t Recover

When users ask generative search engines a question, they aren’t presented with a list of websites to click through to learn more; instead, they’re given a quick, synthesized answer. The source of the answer is cited, allowing users to click to learn more if they so choose. These citations are the new “rankings,” and they’re the links most likely to be clicked.

In fact, research shows 60% of consumers click through at least sometimes after seeing an AI-generated overview in Google Search. A separate study found that 91% of frequent AI users turn to popular large language models (LLMs) such as ChatGPT for their searching needs.

While keyword optimization still holds importance in content marketing, generative engines are favoring expertise, brand authority, and structured data. For CMOs, the old metrics no longer necessarily equate to success. Visibility and impressions are no longer tied to website traffic, and success is now contingent upon citations, mentions, and verifiable authority signals.

The AI era signals a serious identity shift, one in which traditional SEO collides with AI-driven search. SEO can no longer be a mechanical, straightforward checklist that sits under demand generation. It must integrate with a broader strategy to manage brand knowledge, ensuring that when AI pulls data to form an answer, your content is what it trusts most out of all the available options.

In this new search era, improving visibility can be measured in three diverse ways:

  • Appearing in results or answers.
  • Being seen as a thought leader in your space by being cited or trusted as a credible source.
  • Driving influence, affinity, or conversions from your digital presence.

Traditional SEO is now only one piece of the content visibility puzzle. Generative SEO demands fluency across all three.

The CMO’s New Dilemma: AI As Both Channel And Competitor

Consumers have questions. Generative engines have the answers. With over half of consumers (56%) trusting gen AI as an educational resource, generative engines are now mediators between your brand and your customers. They can influence purchases or sway customers toward your competition, depending on whether your content earns their trust.

For example, if a customer asks, “What’s the best CRM for enterprise brands?” and an AI engine suggests HubSpot’s content over your brand, the damage isn’t just a lost click but a missed opportunity to garner interest and trust with that motivated searcher. The hard truth is the Gen AI model didn’t see your content as relevant or reliable enough to deliver in its answer.

Generative engines are trained on content that already exists, meaning your competitors’ content, user reviews, forum discussions, and your own material are all fair game. That means AI is both a discovery channel and a competitor for audience attention. CMOs must recognize this duality and invest in structuring, amplifying, and revamping content workflows to match gen AI’s expectations. The goal isn’t to chase algorithms; it’s to shape content in a meaningful way so those algorithms trust and view your content as the single source of truth.

Think of it this way: Traditional SEO practices taught you to optimize content for crawlers. With Generative SEO, you’re optimizing for the model’s memory.

How To Redesign SEO Content Workflows For The Generative Era

To win citations and influence AI-generated answers, it’s time to throw out your old playbooks and overhaul previous workflows. That may mean ditching how you used to plan content and how you measured performance. Out with the old and in with the new (and more successful).

From Keyword Targeting To Knowledge Modeling

Generative models go beyond understanding just keywords. They understand entities and relationships, too. To show up in coveted AI answers and to be the top choice, your content must reflect structured, interconnected knowledge.

Start by building a brand knowledge graph that maps people, products, and topics that define your expertise. Schema markup is also a must to show how these entities connect. Additionally, every piece of content you produce should reinforce your position within that network.

Long-tail keywords may be easier to target and rank for in traditional SEO; however, optimizing for AI search requires a shift in content workflows, one that targets “entity clusters” instead. Here’s what this might look like in practice: A software company wouldn’t only optimize content around the focus keyword phrase “best CRM integrations.” The writer should also define its relationship to the concepts of “CRM,” “workflow automation,” “customer data,” and other related phrases.
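
As a rough illustration, here is a minimal TypeScript sketch of what entity-cluster markup could look like, emitting schema.org JSON-LD that uses the about and mentions properties. The headline, URL, and “ExampleCRM” product are hypothetical placeholders, not a prescribed format.

```typescript
// A minimal sketch of entity-cluster markup using schema.org's
// "about" and "mentions" properties. The headline, URL, and
// "ExampleCRM" are hypothetical placeholders.

const articleSchema = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "Best CRM Integrations for Enterprise Teams",
  url: "https://example.com/blog/best-crm-integrations",
  // The primary entity the page is about...
  about: { "@type": "Thing", name: "CRM" },
  // ...and the related entities that define the cluster.
  mentions: [
    { "@type": "Thing", name: "workflow automation" },
    { "@type": "Thing", name: "customer data" },
    { "@type": "SoftwareApplication", name: "ExampleCRM" },
  ],
};

// Emit a JSON-LD block ready to embed in the page's <head>.
const jsonLd =
  `<script type="application/ld+json">` +
  JSON.stringify(articleSchema, null, 2) +
  `</script>`;
console.log(jsonLd);
```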

From Content Volume To Verifiable Authority

It was once thought that the more content, the better. That’s no longer the case, as AI systems prefer and prioritize content that’s well-sourced, attributable, and authoritative. Content velocity is no longer the end game; producing stronger, more evidence-backed pieces is.

Marketing leaders should create an AI-readiness checklist for their content marketing team to ensure every piece of content is optimized for generative engines. Every article should include author credentials (job title, advanced degrees, and certifications), clear citations (where the statistics or research came from), and verifiable claims. Reference independent studies and owned research where possible; AI models cross-validate multiple sources to determine what’s credible and reliable.

In short: Don’t publish faster. Publish smarter.

From Static Publishing To Dynamic Feedback

If one thing is certain, it’s that generative engines are continuing to evolve, much like traditional search. What ranks well today may change entirely tomorrow. That’s why successful SEO teams are adopting an agile publishing cycle to stay on top of what’s working best. These teams are actively and consistently:

  • Testing which questions their audience asks in generative engines.
  • Tracking whether their content appears in those answers.
  • Refreshing content based on what’s being cited, summarized, or ignored.

Several tools are emerging to help you track your brand’s presence across ChatGPT, Perplexity, AI Overviews, and more, including SE Ranking, Peec AI, Profound, and Conductor. If you choose to forgo tools, you can also run regular AI audits on your own to see how your brand is represented across engines by following the framework above. Treat that data like Search Console metrics and think of it as your new visibility report.
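
For a DIY audit, a minimal sketch along these lines could work. It queries OpenAI’s Chat Completions API (one engine among the several you would want to cover) and flags whether a brand appears in each answer; the brand name, prompts, and model are placeholders.

```typescript
// A minimal sketch of a DIY AI audit: send audience questions to one
// engine and check whether your brand appears in the answers.
// "ExampleCRM", the prompts, and the model name are placeholders.
// Assumes OPENAI_API_KEY is set in the environment (Node 18+).

const BRAND = "ExampleCRM"; // hypothetical brand
const prompts = [
  "What's the best CRM for enterprise brands?",
  "Which CRM has the strongest workflow automation?",
];

async function askEngine(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function runAudit(): Promise<void> {
  for (const prompt of prompts) {
    const answer = await askEngine(prompt);
    const mentioned = answer.toLowerCase().includes(BRAND.toLowerCase());
    console.log(`${mentioned ? "MENTIONED" : "absent"} | ${prompt}`);
  }
}

runAudit();
```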

How To Measure SEO Success In An Answer-Driven World

Measuring SEO success across generative engines looks different than how we used to measure traditional SEO. Traffic will always matter, but it’s no longer the sole proof of impact. For CMOs, understanding how to measure marketing’s impact is essential to demonstrate the value your team delivers to the organization’s mission.

Here’s how progressive CMOs are redefining SEO success:

  • AI Citations: How often your content is referenced within AI-generated responses.
  • Answer Visibility Share: The percentage of relevant queries where your content appears in an AI answer (see the sketch after this list).
  • Zero-Click Exposure: Instances where your brand is visible in AI responses, even if users don’t visit your site.
  • Answer Referral Traffic: The new “clicks”; visits that originate directly from AI-generated links.
  • Semantic Coverage: The breadth of related entities and subtopics your brand consistently appears for.
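
Two of these metrics are simple enough to compute directly from audit logs, such as those produced by the audit sketch earlier in this post. Here is a minimal TypeScript example; the record shape is illustrative, not a standard format.

```typescript
// Computing two metrics from the list above out of audit logs.
// The AuditRecord shape is illustrative, not a standard format.

interface AuditRecord {
  query: string;
  engine: string;      // e.g., "ChatGPT", "Perplexity"
  brandCited: boolean; // did the answer cite or mention your brand?
}

// AI Citations: total number of answers referencing your content.
function aiCitations(records: AuditRecord[]): number {
  return records.filter((r) => r.brandCited).length;
}

// Answer Visibility Share: % of relevant queries where you appear.
function answerVisibilityShare(records: AuditRecord[]): number {
  if (records.length === 0) return 0;
  return (aiCitations(records) / records.length) * 100;
}

const log: AuditRecord[] = [
  { query: "best CRM for enterprise", engine: "ChatGPT", brandCited: true },
  { query: "top CRM integrations", engine: "Perplexity", brandCited: false },
  { query: "CRM with workflow automation", engine: "ChatGPT", brandCited: true },
];

console.log(aiCitations(log));           // 2
console.log(answerVisibilityShare(log)); // 66.66...
```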

These metrics move SEO reporting from vanity numbers to visibility intelligence and are a more accurate representation of brand authority in the machine age.

Future-Proof Your SEO For Generative Search

Generative search is just as volatile as traditional search, but volatility is fertile ground for innovation. Instead of resisting it, CMOs should continue to treat SEO as an experimental function: a sandbox for continuously testing new ways to be discovered and trusted. SEO isn’t a set-it-and-forget-it discipline; it must change with time and testing.

CMOs should encourage their teams to A/B test content formats, schema implementations, and even phrasing to see what appears in AI-generated responses. Cross-pollinate SEO insights with PR, product, and customer experience. When your organization learns how AI represents your brand, that knowledge becomes a feedback loop that strengthens everything from messaging to market positioning.

In the near future, the term “organic search” will become something broader to encompass the fast-growing ecosystem of machine-mediated discovery. The brands that succeed won’t just optimize for keywords. They’ll build long-lasting trust.

The Next Evolution Of Search

The notion that AI is killing SEO is false. AI isn’t eliminating SEO but rather redefining what it means today. What used to be a tactical discipline is shifting to become a more strategic approach that requires understanding how your brand exists within digital knowledge systems. It’s straying from what’s comfortable and moving into largely uncharted territory.

The opportunity for marketing leaders is clear: It’s time to move past the known and venture into the somewhat elusive realm of generative answer engines. After all, Forrester predicts AI-powered search will drive 20% of all organic traffic by the end of 2025. At the end of the day, many of the traditional SEO best practices still apply: create content that’s verifiable, well-structured, and context-rich. The main mindset shift lies in how to measure generative engine success, not by rankings but by relevance in conversation.

In the age of AI answers, your brand doesn’t need to just be searchable; it needs to be knowable.


The AI Hype Index: The people can’t get enough of AI slop

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Last year, the fantasy author Joanna Maciejewska went viral (if such a thing is still possible on X) with a post saying “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” Clearly, it struck a chord with the disaffected masses.

Regrettably, 18 months after Maciejewska’s post, the entertainment industry insists that machines should make art and artists should do laundry. The streaming platform Disney+ has plans to let its users generate their own content from its intellectual property instead of, y’know, paying humans to make some new Star Wars or Marvel movies.

Elsewhere, it seems AI-generated music is resonating with a depressingly large audience, given that the AI band Breaking Rust has topped Billboard’s Country Digital Song Sales chart. If the people demand AI slop, who are we to deny them?

The Download: AI and the economy, and slop for the masses

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI is changing the economy

There’s a lot at stake when it comes to understanding how AI is changing the economy right now. Should we be pessimistic? Optimistic? Or is the situation too nuanced for that?

Hopefully, we can point you towards some answers. Mat Honan, our editor in chief, will hold a special subscriber-only Roundtables conversation with our editor at large David Rotman and Financial Times columnist Richard Waters, exploring what’s happening across different markets. Register here to join us at 1pm ET on Tuesday, December 9.

The event is part of the Financial Times and MIT Technology Review “The State of AI” partnership, exploring the global impact of artificial intelligence. Over the past month, we’ve been running discussions between our journalists—sign up here to receive future editions every Monday.

If you’re interested in how AI is affecting the economy, take a look at: 

+ People are worried that AI will take everyone’s jobs. We’ve been here before.

+ What will AI mean for economic inequality? If we’re not careful, we could see widening gaps within countries and between them. Read the full story.

+ Artificial intelligence could put us on the path to a booming economic future, but getting there will take some serious course corrections. Here’s how to fine-tune AI for prosperity.

The AI Hype Index: The people can’t get enough of AI slop

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here, featuring everything from replacing animal testing with AI to our story on why AGI should be viewed as a conspiracy theory.

MIT Technology Review Narrated: How to fix the internet

We all know the internet (well, social media) is broken. But it has also provided a haven for marginalized groups and a place for support. It offers information at times of crisis. It can connect you with long-lost friends. It can make you laugh.

That makes it worth fighting for. And yet, fixing online discourse is the definition of a hard problem.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How much AI investment is too much AI investment?
Tech companies hope to learn from beleaguered Intel. (WSJ $)
+ HP is pivoting to AI in the hopes of saving $1 billion a year. (The Guardian)
+ The European Central Bank has accused tech investors of FOMO. (FT $)

2 ICE is outsourcing immigrant surveillance to private firms
It’s incentivizing contractors with multimillion-dollar rewards. (Wired $)
+ California residents have been traumatized by recent raids. (The Guardian)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

3 Poland plans to use drones to defend its rail network from attack
It’s blaming Russia for a recent line explosion. (FT $)
+ This giant microwave may change the future of war. (MIT Technology Review)

4 ChatGPT could eventually have as many subscribers as Spotify
According to erm, OpenAI. (The Information $)

5 Here’s how your phone-checking habits could shape your daily life
You’re probably underestimating just how often you pick it up. (WP $)
+ How to log off. (MIT Technology Review)

6 Chinese drugs are coming
China’s drugmakers are on the verge of making more money overseas than at home. (Economist $)

7 Uber is deploying fully driverless robotaxis on an Abu Dhabi island
Roaming 12 square miles of the popular tourist destination. (The Verge)
+ Tesla is hoping to double its robotaxi fleet in Austin next month. (Reuters)

8 Apple is set to become the world’s largest smartphone maker
After more than a decade in Samsung’s shadow. (Bloomberg $)

9 An AI teddy bear that discussed sexual topics is back on sale
But the Teddy Kumma toy is now powered by a different chatbot. (Bloomberg $)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

10 How Stranger Things became the ultimate algorithmic TV show
Its creators mashed a load of pop culture references together and created a streaming phenomenon. (NYT $)

Quote of the day

“AI is a very powerful tool—it’s a hammer and that doesn’t mean everything is a nail.”

—Marketing consultant Ryan Bearden explains to the Wall Street Journal why it pays to be discerning when using AI.

One more thing

Are we ready to hand AI agents the keys?

In recent months, a new class of agents has arrived on the scene: ones built using large language models. Any action that can be captured by text—from playing a video game using written commands to running a social media account—is potentially within the purview of this type of system.

LLM agents don’t have much of a track record yet, but to hear CEOs tell it, they will transform the economy—and soon. Despite that, like chatbot LLMs, agents can be chaotic and unpredictable. Here’s what could happen as we try to integrate them into everything.

—Grace Huckins

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The entries for this year’s Nature inFocus Photography Awards are fantastic.
+ There’s nothing like a good karaoke sesh.
+ Happy heavenly birthday Tina Turner, who would have turned 86 years old today.
+ Stop the presses—the hotly contested list of the world’s top 50 vineyards has officially been announced 🍇

New Ecommerce Tools: November 26, 2025

Every week we publish a handpicked list of new products and services for ecommerce merchants. This installment includes updates on product experience management, agentic commerce, AI-powered payment integration, fulfillment, alternative payments, customer support, website builders, and cross-platform ad campaigns.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

Brandfuel launches AI-native Product Experience Management platform. Brandfuel has announced the availability (out of beta) of its AI-native Product Experience Management platform for ecommerce brands and agencies. According to Brandfuel, the platform can capture a brand’s personas, competitors, and keywords — to guide personalized content creation — as well as automate image analysis, alt tags, and per-product competitor tracking. The platform features product content scoring, multi-language and multichannel support, automated A/B content testing, Klaviyo and Meta integrations, and more.

Home page of Brandfuel

OpenAI introduces shopping research in ChatGPT. OpenAI’s new shopping research feature in ChatGPT helps consumers find the right products. Per OpenAI, the tool asks clarifying questions, reviews quality sources, and builds on ChatGPT’s understanding of a user from past conversations to deliver a personalized buyer’s guide. Shopping research is currently rolling out on mobile and web for logged-in ChatGPT users on Free, Go, Plus, and Pro plans.

Worldpay accelerates agentic commerce with Model Context Protocol. Worldpay, a financial technology and payment processing company, has launched Worldpay Model Context Protocol, a set of server specifications and tools designed to accelerate AI-powered payment integration and agentic commerce. Developers and merchants can download, modify, and deploy the protocol immediately to enable the rapid creation of AI agents and direct payment integrations with Worldpay’s API. Worldpay MCP is available on its Developer Hub and on GitHub.

Perplexity announces free tool to streamline online shopping. Perplexity, in partnership with PayPal, is rolling out a free agentic shopping product for U.S. users, who can purchase items from more than 5,000 merchants through the search engine. Perplexity says the new free product will be better than its paid shopping subscription at detecting shopping intent, resulting in more personalized results.

NIQ and Amazon Marketing Cloud partner on cross-platform ad campaigns in Italy. NIQ, a consumer intelligence company, and Amazon Marketing Cloud have announced a collaboration to study the effectiveness in Italy of cross-platform advertising across linear television and Amazon Ads inventory. Advertisers and agencies will gain actionable insights into the relative performance of ad placements across digital, linear TV, and streaming environments, including how each contributes to incremental reach and influences product purchases on Amazon’s ecommerce platforms. The project is part of Amazon Marketing Cloud’s Global Strategic Initiative.

Home page of NIQ

Ecommerce accelerator Pattern expands fulfillment solutions. Pattern Group, which accelerates brands on global ecommerce marketplaces, has expanded its portfolio of fulfillment and logistics services. Pattern now offers inbound transportation services, leveraging the company’s carrier relationships and transportation infrastructure. Pattern has expanded its reverse logistics capabilities to help businesses recover more value from returns. Pattern has also launched Reimbursements, an automated service that handles filing and tracking marketplace reimbursement claims, particularly on Amazon.

Integrated E.U. payment solution Unzer enables Wero for merchants. Unzer, a payments and software provider serving small and mid-sized businesses across Germany, Austria, Luxembourg, and the Nordics, has gone live with Wero, a new alternative payment solution for Europe-based consumers and merchants. Unzer and the European Payments Initiative, a service backed by 16 European banks and providers, are now inviting merchants to be among the first to adopt the digital payment method through Unzer’s integrated platform, UnzerOne.

Ordoro partners with ShipBob on ecommerce fulfillment. Ordoro, a provider of multichannel ecommerce operations software, has teamed up with ShipBob, a supply chain and fulfillment platform, to help small and mid-market omnichannel merchants find the proper fulfillment setup for their growth stage. According to the companies, merchants using Ordoro benefit from advanced inventory and shipping automation, while brands ready to scale can either outsource to ShipBob’s global fulfillment network or run their own U.S. warehouse using ShipBob’s warehouse management software.

Website builder Jimdo releases AI-powered Companion for small businesses. Jimdo, a Germany-based website builder specializing in solopreneurs, microbusinesses, and small ecommerce ventures, has launched Companion, an AI agent. Built into the Jimdo architecture, Companion provides personalized recommendations that drive visibility and transactions by analyzing each business’s performance history, industry benchmarks, and competitive landscape. Companion is available to Jimdo’s website customers at no extra cost across the U.S., U.K., and Ireland, as well as Germany, Austria, and Switzerland.

Jimdo home page

Fermàt launches AI Search Commerce Engine. Fermàt Commerce, an AI-powered commerce platform for personalized shopping experiences, has launched AI Search Commerce Engine to help measure visibility, generate shoppable content, and drive transactions from answer engines, including ChatGPT, Claude, and Gemini. “Monitor Prompts” identifies high-value AI prompts using search engine data, marketing signals, product catalogs, and customer reviews. “Generate First-Party Content” automatically creates shoppable content optimized for large-language-model indexing. “Measure Visibility” tracks results with citation-level attribution, competitor benchmarking, and prompt expansion.

Znode announces enhanced Commerce Connector for B2B ecommerce. Znode, a B2B ecommerce platform, has announced an update to its Commerce Connector. The new release introduces Data Exchanges, expanding Znode’s native integration capabilities for connecting to enterprise systems. Data Exchanges handles real-time or scheduled data flows for products, pricing, inventory, customers, and orders. The update allows manufacturers and distributors to integrate Znode with ERP, CRM, PIM, and other business systems. Administrators gain visibility through configurable mapping and monitoring tools to reduce integration risk, according to Znode.

OpenAI and Target partner to bring AI-powered experiences across retail. Through its partnership with OpenAI, omnichannel retailer Target has announced that consumers can discover and shop Target products inside ChatGPT as a curated, conversational experience. Target is offering its shopping experience through an app in ChatGPT, allowing users to purchase multiple items in a single transaction, shop for fresh food products, and select drive-up, pickup, or shipping fulfillment options.

HappyFox launches Autopilot agentic AI platform for customer support teams. HappyFox, a customer service software provider, has launched Autopilot, an agentic AI platform that delivers pre-built agents for quick deployment. “Shopify Delivery Dispute Analyzer” investigates ecommerce delivery discrepancies between fulfillment status and customer claims. “Ticket Triage Agent” automatically categorizes and tags tickets. “Churn Risk Detector” analyzes SaaS customer conversations for signals of dissatisfaction. “Duplicate Ticket Notifier” identifies and flags potential duplicate tickets. Users can access outcome-based pricing and pay only when agents complete tasks, per HappyFox.

HappyFox home page

Mueller: Background Video Loading Unlikely To Affect SEO via @sejournal, @MattGSouthern

Google Search Advocate John Mueller says large video files loading in the background are unlikely to have a noticeable SEO impact if page content loads first.

A site owner on Reddit’s r/SEO asked whether a 100MB video would hurt SEO if the page prioritizes loading a hero image and content before the video. The video continues loading in the background while users can already see the page.

Mueller responded:

“I don’t think you’d notice an SEO effect.”

Broader Context

The question addresses a common concern for sites using large hero videos or animated backgrounds.

The site owner described an implementation where content and images load within seconds, displaying a “full visual ready” state. The video then loads asynchronously and replaces the hero image once complete.

This method aligns with Google’s documentation on lazy loading, which recommends deferring non-critical content to improve page performance.

Google’s help documents state that lazy loading is “a common performance and UX best practice” for non-critical or non-visible content. The key requirement is ensuring content loads when visible in the viewport.

Why This Matters

If you’re running hero videos or animated backgrounds on landing pages, this suggests that background loading strategies are unlikely to harm your rankings. The critical factor is ensuring your primary content reaches users quickly.

Google measures page experience through Core Web Vitals metrics like Largest Contentful Paint. In many cases, a video that loads after visible content is ready shouldn’t block these measurements.

Implementation Best Practices

Google’s web.dev documentation recommends using preload="none" on video elements to avoid unnecessary preloading of video data. Adding a poster attribute provides a placeholder image while the video loads.

For videos that autoplay, the documentation suggests using the Intersection Observer API to load video sources only when the element enters the viewport. This lets you maintain visual impact without affecting initial page load performance.
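
Putting those recommendations together, a minimal sketch might look like the following. The class name, data-src attribute, and file paths are illustrative, not part of Google’s documentation.

```typescript
// A minimal sketch of the pattern described above: a hero video with
// preload="none" and a poster image, whose real source is attached
// only once the element scrolls into view. Assumes markup roughly like:
//   <video class="hero-video" preload="none" muted loop playsinline
//          poster="/img/hero-poster.jpg" data-src="/video/hero.mp4">
//   </video>

const video = document.querySelector<HTMLVideoElement>(".hero-video");

if (video) {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const target = entry.target as HTMLVideoElement;
      // Attach the real source only when the video becomes visible.
      target.src = target.dataset.src ?? "";
      target.load();
      target.play().catch(() => {
        // Autoplay may be blocked; the poster image stays visible.
      });
      obs.unobserve(target); // load once, then stop observing
    }
  });
  observer.observe(video);
}
```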

Looking Ahead

Site owners using background video can generally continue doing so without major SEO concerns, provided content loads first. Focus on Core Web Vitals metrics to verify your implementation meets performance thresholds.

Test your setup using Google Search Console’s URL Inspection Tool to confirm video elements appear correctly in rendered HTML.



New Data: Top Factors Influencing ChatGPT Citations via @sejournal, @MattGSouthern

SE Ranking analyzed 129,000 unique domains across 216,524 pages in 20 niches to identify which factors correlate with ChatGPT citations.

The number of referring domains ranked as the single strongest predictor of citation likelihood.

What The Data Says

Backlinks And Trust Signals

Link diversity showed the clearest correlation with citations. Sites with up to 2,500 referring domains averaged 1.6 to 1.8 citations. Those with over 350,000 referring domains averaged 8.4 citations.

The researchers identified a threshold effect at 32,000 referring domains. At that point, citations nearly doubled from 2.9 to 5.6.

Domain Trust scores followed a similar pattern. Sites with Domain Trust below 43 averaged 1.6 citations. The benefits accelerated significantly at the top end: sites scoring 91–96 averaged 6 citations, while those scoring 97–100 averaged 8.4.

Page Trust mattered less than domain-level signals. Any page with a Page Trust score of 28 or above received roughly the same citation rate (8.3 average), suggesting ChatGPT weighs overall domain authority more heavily than individual page metrics.

One notable finding: .gov and .edu domains didn’t automatically outperform commercial sites. Government and educational domains averaged 3.2 citations, compared to 4.0 for sites without trusted zone designations.

The authors wrote:

“What ultimately matters is not the domain name itself, but the quality of the content and the value it provides.”

Traffic & Google Rankings

Domain traffic ranked as the second most important factor, though the correlation only appeared at high traffic levels.

Sites under 190,000 monthly visitors averaged 2 to 2.9 citations regardless of exact traffic volume. A site receiving 20 organic visitors performed similarly to one receiving 20,000.

Only after crossing 190,000 monthly visitors did traffic correlate with increased citations. Domains with over 10 million visitors averaged 8.5 citations.

Homepage traffic specifically mattered. Sites with at least 7,900 organic visitors to their main page showed the highest citation rates.

Average Google ranking position also tracked with ChatGPT citations. Pages ranking between positions 1 and 45 averaged 5 citations. Those ranking 64 to 75 averaged 3.1.

The authors noted:

“While this doesn’t prove that ChatGPT relies on Google’s index, it suggests both systems evaluate authority and content quality similarly.”

Content Depth & Structure

Content length showed consistent correlation. Articles under 800 words averaged 3.2 citations. Those over 2,900 words averaged 5.1.

Structure mattered beyond raw word count. Pages with section lengths of 120 to 180 words between headings performed best, averaging 4.6 citations. Extremely short sections under 50 words averaged 2.7 citations.

Pages with expert quotes averaged 4.1 citations versus 2.4 for those without. Content with 19 or more statistical data points averaged 5.4 citations, compared to 2.8 for pages with minimal data.

Content freshness produced one of the clearer findings. Pages updated within three months averaged 6 citations. Outdated content averaged 3.6.

Surprisingly, the raw data showed that pages with FAQ sections actually received fewer citations (3.8) than those without (4.1). However, the researchers noted that their predictive model viewed the absence of an FAQ section as a negative signal. They suggest this discrepancy exists because FAQs often appear on simpler support pages that naturally earn fewer citations.

The report also found that using question-style headings (e.g., as H1s or H2s) underperformed straightforward headings, earning 3.4 citations versus 4.3. This contradicts standard voice search optimization advice, suggesting AI models may prefer direct topical labeling over question formats.

Social Signals & Review Platforms

Brand mentions on discussion platforms showed strong correlation with citations.

Domains with minimal Quora presence (up to 33 mentions) averaged 1.7 citations. Heavy Quora presence (6.6 million mentions) corresponded to 7.0 citations.

Reddit showed similar patterns. Domains with over 10 million mentions averaged 7 citations, compared to 1.8 for those with minimal activity.

The authors positioned this as particularly relevant for smaller sites:

“For smaller, less-established websites, engaging on Quora and Reddit offers a way to build authority and earn trust from ChatGPT, similar to what larger domains achieve through backlinks and high traffic.”

Presence on review platforms like Trustpilot, G2, Capterra, Sitejabber, and Yelp also correlated with increased citations. Domains listed on multiple review platforms earned 4.6 to 6.3 citations on average. Those absent from such platforms averaged 1.8.

Technical Performance

Page speed metrics correlated with citation likelihood.

Pages with First Contentful Paint under 0.4 seconds averaged 6.7 citations. Slower pages (over 1.13 seconds) averaged 2.1.

Speed Index showed similar patterns. Sites with indices below 1.14 seconds performed reliably well. Those above 2.2 seconds experienced steep decline.

One counterintuitive finding: pages with the fastest Interaction to Next Paint scores (under 0.4 seconds) actually received fewer citations (1.6 average) than those with moderate INP scores (0.8 to 1.0 seconds, averaging 4.5 citations). The researchers suggested extremely simple or static pages may not signal the depth ChatGPT looks for in authoritative sources.
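
One way to test your own pages against these thresholds is Google's PageSpeed Insights API, which returns Lighthouse lab metrics such as First Contentful Paint and Speed Index. A minimal Python sketch follows; the audit IDs match Lighthouse's public naming, but treat the exact response paths as assumptions to verify against the current API documentation.

import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def lab_metrics(url: str) -> dict:
    """Fetch Lighthouse lab metrics for a URL via the PageSpeed Insights API."""
    resp = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": "mobile"}, timeout=60)
    audits = resp.json()["lighthouseResult"]["audits"]
    return {
        # Lighthouse reports these values in milliseconds.
        "first_contentful_paint_s": audits["first-contentful-paint"]["numericValue"] / 1000,
        "speed_index_s": audits["speed-index"]["numericValue"] / 1000,
    }

print(lab_metrics("https://example.com/"))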

URL & Title Optimization

The report found that broad, topic-describing URLs outperformed keyword-optimized ones.

Pages with low semantic relevance between URL and target keyword (0.00 to 0.57 range) averaged 6.4 citations. Those with highest semantic relevance (0.84 to 1.00) averaged only 2.7 citations.

Titles followed the same pattern. Titles with low keyword matching averaged 5.9 citations. Highly keyword-optimized titles averaged 2.8.

The researchers concluded: “ChatGPT prefers URLs that clearly describe the overall topic rather than those strictly optimized for a single keyword.”
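
SE Ranking doesn't disclose how it computed URL-to-keyword semantic relevance, but cosine similarity between text embeddings is the standard way to produce a 0-to-1 score like the ranges above. Here is a hedged Python sketch of that general technique; the embedding model is illustrative, not the study's.

from sentence_transformers import SentenceTransformer, util

# Illustrative model choice; the study's actual scoring method is not disclosed.
model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_relevance(url_slug: str, keyword: str) -> float:
    """Cosine similarity between a URL slug and a target keyword."""
    texts = [url_slug.replace("-", " "), keyword]
    emb = model.encode(texts)
    return float(util.cos_sim(emb[0], emb[1]))

# A broad, topic-describing slug vs. a tightly keyword-matched one.
print(semantic_relevance("industrial-chemicals-guide", "industrial chemical suppliers"))
print(semantic_relevance("industrial-chemical-suppliers", "industrial chemical suppliers"))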

Factors That Underperformed

Several commonly recommended AI optimization tactics showed minimal or negative correlation with citations.

FAQ schema markup underperformed. Pages with FAQ schema averaged 3.6 citations. Pages without averaged 4.2.
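
For context, “FAQ schema” here means FAQPage structured data of the kind sketched below; the question and answer text are placeholders, and the JSON-LD is built in Python only so it is easy to generate and inspect.

import json

# Minimal FAQPage JSON-LD; the question/answer text is a placeholder.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Does FAQ schema improve AI citations?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "In SE Ranking's data, pages with FAQ schema averaged slightly fewer citations.",
        },
    }],
}

print(f'<script type="application/ld+json">{json.dumps(faq_schema, indent=2)}</script>')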

LLMs.txt files showed negligible impact. Outbound links to high-authority sites also showed minimal effect on citation likelihood.

Why This Matters

The findings suggest your existing SEO strategy may already serve AI visibility goals. If you’re building referring domains, earning traffic, maintaining fast pages, and keeping content updated, you’re addressing the factors this report identified as most predictive.

For smaller sites without extensive backlink profiles, the research points to community engagement on Reddit and Quora as a viable path to building authority signals. The data also suggests focusing on content depth over keyword density.

The researchers note that factors are interdependent. Optimizing one signal while ignoring others reduces overall effectiveness.

Looking Ahead

SE Ranking analyzed ChatGPT specifically. Other AI systems may weight factors differently.

SE Ranking doesn’t specify which ChatGPT version or timeframe the data represents, so these patterns should be treated as directional correlations rather than proof of how ChatGPT’s ranking algorithm works.


Featured Image: BongkarnGraphic/Shutterstock

How AI’s Geo-Identification Failures Are Rewriting International SEO via @sejournal, @motokohunt

AI search isn’t just changing what content ranks; it’s quietly redrawing where your brand appears to belong. As large language models (LLMs) synthesize results across languages and markets, they blur the boundaries that once kept content localized. Traditional geographic signals such as hreflang, ccTLDs, and regional schema are being bypassed, misread, or overwritten by global defaults. The result: your English site becomes the “truth” for all markets, while your local teams wonder why their traffic and conversions are vanishing.

This article focuses primarily on search-grounded AI systems such as Google’s AI Overviews and Bing’s generative search, where the problem of geo-identification drift is most visible. Purely conversational AI may behave differently, but the core issue remains: when authority signals and training data skew global, synthesis often loses geographic context.

The New Geography Of Search

In classic search, location was explicit:

  • IP, language, and market-specific domains dictated what users saw.
  • Hreflang told Google which market variant to serve.
  • Local content lived on distinct ccTLDs or subdirectories, supported by region-specific backlinks and metadata.

AI search breaks this deterministic system.

In a recent article on “AI Translation Gaps,” international SEO specialist Blas Giffuni demonstrated the problem by querying “proveedores de químicos industriales” (“industrial chemical suppliers”). Rather than surfacing a local-market list of industrial chemical suppliers in Mexico, the engine presented a translated list from the US, some of which either did not do business in Mexico or did not meet local safety or business requirements. A generative engine doesn’t just retrieve documents; it synthesizes an answer using whatever language or source it finds most complete.

If your local pages are thin, inconsistently marked up, or overshadowed by global English content, the model will simply pull from the worldwide corpus and rewrite the answer in Spanish or French.

On the surface, it looks localized. Underneath, it’s English data wearing a different flag.

Why Geo-Identification Is Breaking

1. Language ≠ Location

AI systems treat language as a proxy for geography. A Spanish query could represent Mexico, Colombia, or Spain. If your signals don’t specify which markets you serve through schema, hreflang, and local citations, the model lumps them together.

When that happens, your strongest instance wins. And nine times out of 10, that’s your main English-language website.

2. Market Aggregation Bias

During training, LLMs learn from corpus distributions that heavily favor English content. When related entities appear across markets (‘GlobalChem Mexico,’ ‘GlobalChem Japan’), the model’s representations are dominated by whichever instance has the most training examples, typically the English global brand. This creates an authority imbalance that persists during inference, causing the model to default to global content even for market-specific queries.

3. Canonical Amplification

Search engines naturally try to consolidate near-identical pages, and hreflang exists to counter that bias by telling them that similar versions are valid alternatives for different markets. When AI systems retrieve from these consolidated indexes, they inherit this hierarchy, treating the canonical version as the primary source of truth. Without explicit geographic signals in the content itself, regional pages become invisible to the synthesis layer, even when they are adequately tagged with hreflang.

This amplifies market-aggregation bias; your regional pages aren’t just overshadowed, they’re conceptually absorbed into the parent entity.
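
The markup-side countermeasure is for each market page to canonicalize to itself while cross-referencing its siblings through hreflang. Below is a minimal Python sketch that generates those head tags; the domain, paths, and market codes are placeholders.

# Each market variant should self-canonicalize and list every sibling via hreflang.
# Domain, paths, and market codes are placeholders.
MARKETS = {
    "en-us": "https://example.com/industrial-chemicals/",
    "es-mx": "https://example.com/mx/quimicos-industriales/",
    "ja-jp": "https://example.com/jp/industrial-chemicals/",
}

def head_tags(market: str) -> str:
    # Self-referencing canonical, not a consolidating pointer at the global page.
    lines = [f'<link rel="canonical" href="{MARKETS[market]}">']
    for lang, url in MARKETS.items():
        lines.append(f'<link rel="alternate" hreflang="{lang}" href="{url}">')
    lines.append(f'<link rel="alternate" hreflang="x-default" href="{MARKETS["en-us"]}">')
    return "\n".join(lines)

print(head_tags("es-mx"))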

Will This Problem Self-Correct?

As LLMs incorporate more diverse training data, some geographic imbalances may diminish. However, structural issues like canonical consolidation and the network effects of English-language authority will persist. Even with perfect training data distribution, your brand’s internal hierarchy and content depth differences across markets will continue to influence which version dominates in synthesis.

The Ripple Effect On Local Search

Global Answers, Local Users

Procurement teams in Mexico or Japan receive AI-generated answers derived from English pages. The contact info, certifications, and shipping policies are wrong, even if localized pages exist.

Local Authority, Global Overshadowing

Even strong local competitors are being displaced because models weigh the English/global corpus more heavily. The result: the local authority doesn’t register.

Brand Trust Erosion

Users perceive this as neglect:

“They don’t serve our market.”
“Their information isn’t relevant here.”

In regulated or B2B industries where compliance, units, and standards matter, this results in lost revenue and reputational risk.

Hreflang In The Age Of AI

Hreflang was a precision instrument in a rules-based world. It told Google which page to serve in which market. But AI engines don’t “serve pages” – they generate responses.

That means:

  • Hreflang becomes advisory, not authoritative.
  • Current evidence suggests LLMs don’t actively interpret hreflang during synthesis, because it describes relationships between documents rather than content the models reason over.
  • If your canonical structure points to global pages, the model inherits that hierarchy, not your hreflang instructions.

In short, hreflang still helps with Google indexing, but it no longer governs interpretation.

AI systems learn from patterns of connectivity, authority, and relevance. If your global content has richer interlinking, higher engagement, and more external citations, it will always dominate the synthesis layer – regardless of hreflang.

Read more: Ask An SEO: What Are The Most Common Hreflang Mistakes & How Do I Audit Them?

How Geo Drift Happens

Let’s look at a real-world pattern observed across markets:

  1. Weak local content (thin copy, missing schema, outdated catalog).
  2. Global canonical consolidates authority under .com.
  3. AI overview or chatbot pulls the English page as source data.
  4. The model generates a response in the user’s language, drawing facts and context from the English source and adding a few local brand names to create the appearance of localization.
  5. User clicks through to a U.S. contact form, gets blocked by shipping restrictions, and leaves frustrated.

Each of these steps seems minor, but together they create a digital sovereignty problem – global data has overwritten your local market’s representation.

Geo-Legibility: The New SEO Imperative

In the era of generative search, the challenge isn’t just to rank in each market – it’s to make your presence geo-legible to machines.

Geo-legibility builds on international SEO fundamentals but addresses a new challenge: making geographic boundaries interpretable during AI synthesis, not just during traditional retrieval and ranking. While hreflang tells Google which page to index for which market, geo-legibility ensures the content itself contains explicit, machine-readable signals that survive the transition from structured index to generative response.

That means encoding geography, compliance, and market boundaries in ways LLMs can understand during both indexing and synthesis.

Key Layers Of Geo-Legibility

Layer | Example Action | Why It Matters
Content | Include explicit market context (e.g., “Distribuimos en México bajo norma NOM-018-STPS,” i.e., “We distribute in Mexico under standard NOM-018-STPS”). | Reinforces relevance to a defined geography.
Structure | Use schema for areaServed, priceCurrency, and addressLocality. | Provides explicit geographic context that may influence retrieval systems and helps future-proof as AI systems evolve to better understand structured data.
Links & Mentions | Secure backlinks from local directories and trade associations. | Builds local authority and entity clustering.
Data Consistency | Align address, phone, and organization names across all sources. | Prevents entity merging and confusion.
Governance | Monitor AI outputs for misattribution or cross-market drift. | Detects early leakage before it becomes entrenched.

Note: While current evidence for schema’s direct impact on AI synthesis is limited, these properties strengthen traditional search signals and position content for future AI systems that may parse structured data more systematically.
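
As a concrete illustration of the Structure layer, here is a minimal Organization JSON-LD sketch carrying the geographic properties named above; the company and address details are placeholders borrowed from this article’s hypothetical GlobalChem example.

import json

# Placeholder company details; the geographic properties mirror the Structure layer.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "GlobalChem Mexico",
    "url": "https://example.com/mx/",
    "areaServed": {"@type": "Country", "name": "Mexico"},
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Monterrey",
        "addressCountry": "MX",
    },
    # priceCurrency belongs on an Offer, not on Organization itself.
    "makesOffer": {
        "@type": "Offer",
        "priceCurrency": "MXN",
        "itemOffered": {"@type": "Product", "name": "Industrial solvent"},
    },
}

print(json.dumps(org, indent=2, ensure_ascii=False))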

Geo-legibility isn’t about speaking the right language; it’s about being understood in the right place.

Diagnostic Workflow: “Where Did My Market Go?”

  1. Run Local Queries in AI Overview or Chat Search. Test your core product and category terms in the local language and record which language, domain, and market each result reflects.
  2. Capture Cited URLs and Market Indicators. If you see English pages cited for non-English queries, that’s a signal your local content lacks authority or visibility.
  3. Cross-Check Search Console Coverage. Confirm that your local URLs are indexed, discoverable, and mapped correctly through hreflang.
  4. Inspect Canonical Hierarchies. Ensure your regional URLs aren’t canonicalized to global pages. AI systems often treat the canonical as “primary truth.” (A minimal audit sketch follows this list.)
  5. Test Structured Geography. For Google and Bing, be sure to add or validate schema properties like areaServed, address, and priceCurrency to help engines map jurisdictional relevance.
  6. Repeat Quarterly. AI search evolves rapidly. Regular testing ensures your geo boundaries remain stable as models retrain.
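
Below is a minimal Python sketch of the canonical check in step 4, using requests and BeautifulSoup; the URL list is a placeholder, and a real audit should also follow redirects and inspect rendered rather than raw HTML.

import requests
from bs4 import BeautifulSoup

# Placeholder regional URLs to audit.
REGIONAL_URLS = [
    "https://example.com/mx/quimicos-industriales/",
    "https://example.com/jp/industrial-chemicals/",
]

for url in REGIONAL_URLS:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    canonical = soup.find("link", rel="canonical")
    hreflangs = soup.find_all("link", hreflang=True)
    target = canonical["href"] if canonical else None
    # Flag regional pages whose canonical points anywhere other than themselves.
    status = "OK (self-referencing)" if target == url else f"WARNING: canonical -> {target}"
    print(f"{url}\n  {status}\n  hreflang variants found: {len(hreflangs)}")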

Remediation Workflow: From Drift To Differentiation

Step | Focus | Impact
1 | Strengthen local data signals (structured geography, certification markup). | Clarifies market authority.
2 | Build localized case studies, regulatory references, and testimonials. | Anchors E-E-A-T locally.
3 | Optimize internal linking from regional subdomains to local entities. | Reinforces market identity.
4 | Secure regional backlinks from industry bodies. | Adds non-linguistic trust.
5 | Adjust canonical logic to favor local markets. | Prevents AI inheritance of global defaults.
6 | Conduct “AI visibility audits” alongside traditional SEO reports. |

Beyond Hreflang: A New Model Of Market Governance

Executives need to see this for what it is: not an SEO bug, but a strategic governance gap.

AI search collapses boundaries between brand, market, and language. Without deliberate reinforcement, your local entities become shadows inside global knowledge graphs.

That loss of differentiation affects:

  • Revenue: You become invisible in the markets where growth depends on discoverability.
  • Compliance: Users act on information intended for another jurisdiction.
  • Equity: Your local authority and link capital are absorbed by the global brand, distorting measurement and accountability.

Why Executives Must Pay Attention

The implications of AI-driven geo drift extend far beyond marketing. When your brand’s digital footprint no longer aligns with its operational reality, it creates measurable business risk. A misrouted customer in the wrong market isn’t just a lost lead; it’s a symptom of organizational misalignment between marketing, IT, compliance, and regional leadership.

Executives must ensure their digital infrastructure reflects how the company actually operates, which markets it serves, which standards it adheres to, and which entities own accountability for performance. Aligning these systems is not optional; it’s the only way to minimize negative impact as AI platforms redefine how brands are recognized, attributed, and trusted globally.

Executive Imperatives

  1. Reevaluate Canonical Strategy. What once improved efficiency may now reduce market visibility. Treat canonicals as control levers, not conveniences.
  2. Expand SEO Governance to AI Search Governance. Traditional hreflang audits must evolve into cross-market AI visibility reviews that track how generative engines interpret your entity graph.
  3. Reinvest in Local Authority. Encourage regional teams to create content with market-first intent – not translated copies of global pages.
  4. Measure Visibility Differently. Rankings alone no longer indicate presence: track citations, sources, and language of origin in AI search outputs.

Final Thought

AI didn’t make geography irrelevant; it just exposed how fragile our digital maps were.

Hreflang, ccTLDs, and translation workflows gave companies the illusion of control.

AI search removed the guardrails, and now the strongest signals win – regardless of borders.

The next evolution of international SEO isn’t about tagging and translating more pages. It’s about governing your digital borders and making sure every market you serve remains visible, distinct, and correctly represented in the age of synthesis.

Because when AI redraws the map, the brands that stay findable aren’t the ones that translate best; they’re the ones who define where they belong.


Featured Image: Roman Samborskyi/Shutterstock