From Organic Search To AI Answers: How To Redesign SEO Content Workflows via @sejournal, @rio_seo

It’s officially the end of organic search as we know it. A recent survey reveals that 83% of consumers believe AI-powered search tools are more efficient than traditional search engines.

The days of simple search are long gone, and a profound transformation continues to sweep the search engine results pages (SERPs). The rise of AI-powered answer engines, from ChatGPT to Perplexity to Google’s AI Overviews, is rewriting the rules of online visibility.

Instead of returning traditional blue links or images, AI systems are returning immediate results. For marketing leaders, the question is no longer “How do we rank number one?” but rather “How do we become the top answer?”

This shift has eliminated the distance between the search and the solution. No longer do customers need to click through to find the information they’re seeking. And while zero-click searches are more prevalent and old metrics like keyword rankings are fading fast, this shift also creates a massive opportunity for chief marketing officers to redefine SEO as a strategic growth function.

Yes, content remains king, but it must be rooted in a foundation that fuels authority, brand trust, and authenticity to serve the systems that are shaping what appears when a search is conducted. This isn’t just a new channel; it’s a new way of creating, structuring, and validating content.

In this post, we’ll dissect how to redesign content workflows for generative engines to ensure your content reigns supreme in an AI-first era.

What Generative Engines Changed And Why “Traditional SEO” Won’t Recover

When users ask generative search engines a question, they aren’t presented with a list of websites to click through to learn more; instead, they’re given a quick, synthesized answer. The source of the answer is cited, allowing users to click through if they so choose. These citations are the new “rankings,” and they’re the links most likely to be clicked.

In fact, research shows 60% of consumers click through at least sometimes after seeing an AI-generated overview in Google Search. A separate study found that 91% of frequent AI users turn to popular large language models (LLMs) such as ChatGPT for their searching needs.

While keyword optimization still holds importance in content marketing, generative engines are favoring expertise, brand authority, and structured data. For CMOs, the old metrics no longer necessarily equate to success. Visibility and impressions are no longer tied to website traffic, and success is now contingent upon citations, mentions, and verifiable authority signals.

The AI era signals a serious identity shift, one in which traditional SEO collides with AI-driven search. SEO can no longer be a mechanical, straightforward checklist that sits under demand generation. It must integrate with a broader strategy to manage brand knowledge, ensuring that when an AI system pulls data to form an answer, your content is what it trusts most.

In this new search era, visibility can be measured in three distinct ways:

  • Appearing in results or answers.
  • Being seen as a thought leader in your space by being cited or trusted as a credible source.
  • Driving influence, affinity, or conversions from your digital presence.

Traditional SEO is now only one piece of the content visibility puzzle. Generative SEO demands fluency across all three.

The CMO’s New Dilemma: AI As Both Channel And Competitor

Consumers have questions. Generative engines have the answers. With over half (56%) of consumers trusting Gen AI as an educational resource, generative engines are now mediators between your brand and your customers. They can influence purchases or sway customers toward your competition, depending on whether your content earns their trust.

For example, if a customer asks, “What’s the best CRM for enterprise brands?” and an AI engine suggests HubSpot’s content over your brand’s, the damage isn’t just a lost click but a missed opportunity to build interest and trust with that motivated searcher. The hard truth is the Gen AI model didn’t see your content as relevant or reliable enough to include in its answer.

Generative engines are trained on content that already exists, meaning your competitors’ content, user reviews, forum discussions, and your own material are all fair game. That means AI is both a discovery channel and a competitor for audience attention. CMOs must recognize this duality and invest in structuring, amplifying, and revamping content workflows to match Gen AI’s expectations. The goal isn’t to chase algorithms; it’s to shape your content in a meaningful way so those algorithms trust it and treat it as the single source of truth.

Think of it this way: Traditional SEO practices taught you to optimize content for crawlers. With Generative SEO, you’re optimizing for the model’s memory.

How To Redesign SEO Content Workflows For The Generative Era

To win citations and influence AI-generated answers, it’s time to throw out your old playbooks and overhaul previous workflows. That means rethinking how you plan content and how you measure performance. Out with the old and in with the new (and more successful).

From Keyword Targeting To Knowledge Modeling

Generative models go beyond understanding just keywords. They understand entities and relationships, too. To show up in coveted AI answers and to be the top choice, your content must reflect structured, interconnected knowledge.

Start by building a brand knowledge graph that maps people, products, and topics that define your expertise. Schema markup is also a must to show how these entities connect. Additionally, every piece of content you produce should reinforce your position within that network.

Long-tail keywords may be easier to target and rank for in traditional SEO; optimizing for AI search, however, requires a shift in content workflows, one that targets “entity clusters” instead. Here’s what this might look like in practice: A software company wouldn’t only optimize content around the focus keyword phrase “best CRM integrations.” The writer should also define its relationship to related concepts such as “CRM,” “workflow automation,” and “customer data.”
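At the markup level, that entity-first approach might look like the JSON-LD sketch below, which uses schema.org’s standard Article, about, and mentions properties. The company name, author, and entity list here are illustrative assumptions, not a prescribed template:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Best CRM Integrations for Enterprise Teams",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Head of CRM Strategy"
  },
  "publisher": {
    "@type": "Organization",
    "name": "ExampleSoft",
    "sameAs": "https://www.example.com"
  },
  "about": [
    { "@type": "Thing", "name": "CRM" },
    { "@type": "Thing", "name": "Workflow automation" },
    { "@type": "Thing", "name": "Customer data" }
  ],
  "mentions": [
    { "@type": "SoftwareApplication", "name": "Example CRM" }
  ]
}
</script>
```

The point is that the page declares not just a topic but its relationships: who wrote it, which organization stands behind it, and which related entities it connects to.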

From Content Volume To Verifiable Authority

It was once thought that the more content, the better. That’s no longer the case: AI systems prefer and prioritize content that’s well-sourced, attributable, and authoritative. The end game is no longer content velocity but producing stronger, more evidence-backed pieces.

Marketing leaders should create an AI-readiness checklist for their content marketing team to ensure every piece of content is optimized for generative engines. Every article should include author credentials (job title, advanced degrees, and certifications), clear citations (where the statistics or research came from), and verifiable claims. Reference independent studies and owned research where possible; AI models cross-validate multiple sources to determine what’s credible and reliable.

In short: Don’t publish faster. Publish smarter.

From Static Publishing To Dynamic Feedback

If one thing is certain, it’s that generative engines continue to evolve, much like traditional search. What ranks well today may change entirely tomorrow. That’s why successful SEO teams are adopting an agile publishing cycle to stay on top of what’s working best. These teams are actively and consistently:

  • Testing which questions their audience asks in generative engines.
  • Tracking whether their content appears in those answers.
  • Refreshing content based on what’s being cited, summarized, or ignored.

Several tools are emerging to help you track your brand’s presence across ChatGPT, Perplexity, AI Overviews, and more, including SE Ranking, Peec AI, Profound, and Conductor. If you choose to forgo tools, you can also run regular AI audits on your own to see how your brand is represented across engines by following the framework above. Treat that data like Search Console metrics and think of it as your new visibility report.

How To Measure SEO Success In An Answer-Driven World

Measuring SEO success across generative engines looks different than how we used to measure traditional SEO. Traffic will always matter, but it’s no longer the sole proof of impact. For CMOs, understanding how to measure marketing’s impact is essential to demonstrate the value your team delivers to the organization’s mission.

Here’s how progressive CMOs are redefining SEO success:

  • AI Citations: How often your content is referenced within AI-generated responses.
  • Answer Visibility Share: The percentage of relevant queries where your content appears in an AI answer.
  • Zero-Click Exposure: Instances where your brand is visible in AI responses, even if users don’t visit your site.
  • Answer Referral Traffic: The new “clicks”; visits that originate directly from AI-generated links.
  • Semantic Coverage: The breadth of related entities and subtopics your brand consistently appears for.
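To make one of these metrics concrete, here is a minimal sketch of how a team might compute Answer Visibility Share from its own AI-audit log. The record shape and field names are illustrative assumptions, not a standard format:

```javascript
// Sketch: Answer Visibility Share = percentage of tracked query/engine
// checks where the brand appeared in the AI-generated answer.
function answerVisibilityShare(auditRecords) {
  if (auditRecords.length === 0) return 0;
  const appearances = auditRecords.filter((r) => r.brandInAnswer).length;
  return (appearances / auditRecords.length) * 100;
}

// Hypothetical audit log from manually checking four query/engine pairs.
const audit = [
  { query: "best CRM for enterprise", engine: "ChatGPT", brandInAnswer: true },
  { query: "best CRM for enterprise", engine: "Perplexity", brandInAnswer: false },
  { query: "CRM workflow automation", engine: "ChatGPT", brandInAnswer: true },
  { query: "customer data platforms", engine: "AI Overviews", brandInAnswer: false },
];

console.log(answerVisibilityShare(audit)); // 50
```

The same log can feed the other metrics: count citation events for AI Citations, or group records by subtopic to estimate Semantic Coverage.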

These metrics move SEO reporting from vanity numbers to visibility intelligence and are a more accurate representation of brand authority in the machine age.

Future-Proof Your SEO For Generative Search

Generative search is just as volatile as traditional search, but volatility is fertile ground for innovation. Instead of resisting it, CMOs should continue to treat SEO as an experimental function: a sandbox for continuously testing new ways to be discovered and trusted. SEO has never been a set-it-and-forget-it discipline, and it must keep evolving through time and testing.

CMOs should encourage their team to A/B test content formats, schema implementations, and even phrasing to see what appears in AI-generated responses. Cross-pollinate SEO insights with PR, product, and customer experience. When your organization learns how AI represents your brand, it becomes a feedback loop that strengthens everything from messaging to market positioning.

In the near future, the term “organic search” will become something broader to encompass the fast-growing ecosystem of machine-mediated discovery. The brands that succeed won’t just optimize for keywords. They’ll build long-lasting trust.

The Next Evolution Of Search

The notion that AI is killing SEO is false. AI isn’t eliminating SEO but rather redefining what it means today. What used to be a tactical discipline is becoming a more strategic approach, one that requires understanding how your brand exists within digital knowledge systems. It means straying from what’s comfortable and moving into largely uncharted territory.

The opportunity for marketing leaders is clear: It’s time to move past the known and venture into the somewhat elusive realm of generative answer engines. After all, Forrester predicts AI-powered search will drive 20% of all organic traffic by the end of 2025. At the end of the day, many of the traditional SEO best practices still apply: create content that’s verifiable, well-structured, and context-rich. The main mindset shift lies in how to measure generative engine success, not by rankings but by relevance in conversation.

In the age of AI answers, your brand doesn’t need to just be searchable; it needs to be knowable.

Featured Image: Roman Samborskyi/Shutterstock

The AI Hype Index: The people can’t get enough of AI slop

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Last year, the fantasy author Joanna Maciejewska went viral (if such a thing is still possible on X) with a post saying “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” Clearly, it struck a chord with the disaffected masses.

Regrettably, 18 months after Maciejewska’s post, the entertainment industry insists that machines should make art and artists should do laundry. The streaming platform Disney+ has plans to let its users generate their own content from its intellectual property instead of, y’know, paying humans to make some new Star Wars or Marvel movies.

Elsewhere, it seems AI-generated music is resonating with a depressingly large audience, given that the AI band Breaking Rust has topped Billboard’s Country Digital Song Sales chart. If the people demand AI slop, who are we to deny them?

The Download: AI and the economy, and slop for the masses

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI is changing the economy

There’s a lot at stake when it comes to understanding how AI is changing the economy right now. Should we be pessimistic? Optimistic? Or is the situation too nuanced for that?

Hopefully, we can point you towards some answers. Mat Honan, our editor in chief, will hold a special subscriber-only Roundtables conversation with our editor at large David Rotman, and Richard Waters, Financial Times columnist, exploring what’s happening across different markets. Register here to join us at 1pm ET on Tuesday December 9.

The event is part of the Financial Times and MIT Technology Review “The State of AI” partnership, exploring the global impact of artificial intelligence. Over the past month, we’ve been running discussions between our journalists—sign up here to receive future editions every Monday.

If you’re interested in how AI is affecting the economy, take a look at: 

+ People are worried that AI will take everyone’s jobs. We’ve been here before.

+ What will AI mean for economic inequality? If we’re not careful, we could see widening gaps within countries and between them. Read the full story.

+ Artificial intelligence could put us on the path to a booming economic future, but getting there will take some serious course corrections. Here’s how to fine-tune AI for prosperity.

The AI Hype Index: The people can’t get enough of AI slop

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here, featuring everything from replacing animal testing with AI to our story on why AGI should be viewed as a conspiracy theory.

MIT Technology Review Narrated: How to fix the internet

We all know the internet (well, social media) is broken. But it has also provided a haven for marginalized groups and a place for support. It offers information at times of crisis. It can connect you with long-lost friends. It can make you laugh.

That makes it worth fighting for. And yet, fixing online discourse is the definition of a hard problem.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How much AI investment is too much AI investment?
Tech companies hope to learn from beleaguered Intel. (WSJ $)
+ HP is pivoting to AI in the hopes of saving $1 billion a year. (The Guardian)
+ The European Central Bank has accused tech investors of FOMO. (FT $)

2 ICE is outsourcing immigrant surveillance to private firms
It’s incentivizing contractors with multi-million dollar rewards. (Wired $)
+ Californian residents have been traumatized by recent raids. (The Guardian)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

3 Poland plans to use drones to defend its rail network from attack
It’s blaming Russia for a recent line explosion. (FT $)
+ This giant microwave may change the future of war. (MIT Technology Review)

4 ChatGPT could eventually have as many subscribers as Spotify
According to erm, OpenAI. (The Information $)

5 Here’s how your phone-checking habits could shape your daily life
You’re probably underestimating just how often you pick it up. (WP $)
+ How to log off. (MIT Technology Review)

6 Chinese drugs are coming
Its drugmakers are on the verge of making more money overseas than at home. (Economist $)

7 Uber is deploying fully driverless robotaxis on an Abu Dhabi island
Roaming 12 square miles of the popular tourist destination. (The Verge)
+ Tesla is hoping to double its robotaxi fleet in Austin next month. (Reuters)

8 Apple is set to become the world’s largest smartphone maker
After more than a decade in Samsung’s shadow. (Bloomberg $)

9 An AI teddy bear that discussed sexual topics is back on sale
But the Teddy Kumma toy is now powered by a different chatbot. (Bloomberg $)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

10 How Stranger Things became the ultimate algorithmic TV show
Its creators mashed a load of pop culture references together and created a streaming phenomenon. (NYT $)

Quote of the day

“AI is a very powerful tool—it’s a hammer and that doesn’t mean everything is a nail.”

—Marketing consultant Ryan Bearden explains to the Wall Street Journal why it pays to be discerning when using AI.

One more thing

Are we ready to hand AI agents the keys?

In recent months, a new class of agents has arrived on the scene: ones built using large language models. Any action that can be captured by text—from playing a video game using written commands to running a social media account—is potentially within the purview of this type of system.

LLM agents don’t have much of a track record yet, but to hear CEOs tell it, they will transform the economy—and soon. Despite that, like chatbot LLMs, agents can be chaotic and unpredictable. Here’s what could happen as we try to integrate them into everything.

—Grace Huckins

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The entries for this year’s Nature inFocus Photography Awards are fantastic.
+ There’s nothing like a good karaoke sesh.
+ Happy heavenly birthday Tina Turner, who would have turned 86 years old today.
+ Stop the presses—the hotly-contested list of the world’s top 50 vineyards has officially been announced 🍇

New Ecommerce Tools: November 26, 2025

Every week we publish a handpicked list of new products and services for ecommerce merchants. This installment includes updates on product experience management, agentic commerce, AI-powered payment integration, fulfillment, alternative payments, customer support, website builders, and cross-platform ad campaigns.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

Brandfuel launches AI-native Product Experience Management platform. Brandfuel has announced the availability (out of beta) of its AI-native Product Experience Management platform for ecommerce brands and agencies. According to Brandfuel, the platform can capture a brand’s personas, competitors, and keywords — to guide personalized content creation — as well as automate image analysis, alt tags, and per-product competitor tracking. The platform features product content scoring, multi-language and multichannel support, automated A/B content testing, Klaviyo and Meta integrations, and more.


OpenAI introduces shopping research in ChatGPT. OpenAI’s new shopping research feature in ChatGPT helps consumers find the right products. Per OpenAI, the tool asks clarifying questions, reviews quality sources, and builds on ChatGPT’s understanding of a user from past conversations to deliver a personalized buyer’s guide. Shopping research is currently rolling out on mobile and web for logged-in ChatGPT users on Free, Go, Plus, and Pro plans.

Worldpay accelerates agentic commerce with Model Context Protocol. Worldpay, a financial technology and payment processing company, has launched Worldpay Model Context Protocol, a set of server specifications and tools designed to accelerate AI-powered payment integration and agentic commerce. Developers and merchants can download, modify, and deploy the protocol immediately to enable the rapid creation of AI agents and direct payment integrations with Worldpay’s API. Worldpay MCP is available on its Developer Hub and on GitHub.

Perplexity announces free tool to streamline online shopping. Perplexity, in partnership with PayPal, is rolling out a free agentic shopping product for U.S. users, who can purchase items from more than 5,000 merchants through the search engine. Perplexity says the new free product will be better than its paid shopping subscription at detecting shopping intent, resulting in more personalized results.

NIQ and Amazon Marketing Cloud partner on cross-platform ad campaigns in Italy. NIQ, a consumer intelligence company, and Amazon Marketing Cloud have announced a collaboration to study the effectiveness in Italy of cross-platform advertising across linear television and Amazon Ads inventory. Advertisers and agencies will gain actionable insights into the relative performance of ad placements across digital, linear TV, and streaming environments, including how each contributes to incremental reach and influences product purchases on Amazon’s ecommerce platforms. The project is part of Amazon Marketing Cloud’s Global Strategic Initiative.


Ecommerce accelerator Pattern expands fulfillment solutions. Pattern Group, which accelerates brands on global ecommerce marketplaces, has expanded its portfolio of fulfillment and logistics services. Pattern now offers inbound transportation services, leveraging the company’s carrier relationships and transportation infrastructure. Pattern has expanded its reverse logistics capabilities to help businesses recover more value from returns. Pattern has also launched Reimbursements, an automated service that handles filing and tracking marketplace reimbursement claims, particularly on Amazon.

Integrated E.U. payment solution Unzer enables Wero for merchants. Unzer, a payments and software provider serving small and mid-sized businesses across Germany, Austria, Luxembourg, and the Nordics, has gone live with Wero, a new alternative payment solution for Europe-based consumers and merchants. Unzer and the European Payments Initiative, a service backed by 16 European banks and providers, are now inviting merchants to be among the first to adopt the digital payment method through Unzer’s integrated platform, UnzerOne.

Ordoro partners with ShipBob on ecommerce fulfillment. Ordoro, a provider of multichannel ecommerce operations software, has teamed up with ShipBob, a supply chain and fulfillment platform, to help small and mid-market omnichannel merchants find the proper fulfillment setup for their growth stage. According to the companies, merchants using Ordoro benefit from advanced inventory and shipping automation, while brands ready to scale can either outsource to ShipBob’s global fulfillment network or run their own U.S. warehouse using ShipBob’s warehouse management software.

Website builder Jimdo releases AI-powered Companion for small businesses. Jimdo, a Germany-based website builder specializing in solopreneurs, microbusinesses, and small ecommerce ventures, has launched Companion, an AI agent. Built into the Jimdo architecture, Companion provides personalized recommendations that drive visibility and transactions by analyzing each business’s performance history, industry benchmarks, and competitive landscape. Companion is available for Jimdo’s website customers at no extra cost across the U.S., U.K., Ireland, as well as Germany, Austria, and Switzerland.


Fermàt launches AI Search Commerce Engine. Fermàt Commerce, an AI-powered commerce platform for personalized shopping experiences, has launched AI Search Commerce Engine to help measure visibility, generate shoppable content, and drive transactions from answer engines, including ChatGPT, Claude, and Gemini. “Monitor Prompts” identifies high-value AI prompts using search engine data, marketing signals, product catalogs, and customer reviews. “Generate First-Party Content” automatically creates shoppable content optimized for large-language-model indexing. “Measure Visibility” tracks results with citation-level attribution, competitor benchmarking, and prompt expansion.

Znode announces enhanced Commerce Connector for B2B ecommerce. Znode, a B2B ecommerce platform, has announced an update to its Commerce Connector. The new release introduces Data Exchanges, expanding Znode’s native integration capabilities for connecting to enterprise systems. Data Exchanges handles real-time or scheduled data flows for products, pricing, inventory, customers, and orders. The update allows manufacturers and distributors to integrate Znode with ERP, CRM, PIM, and other business systems. Administrators gain visibility through configurable mapping and monitoring tools to reduce integration risk, according to Znode.

OpenAI and Target partner to bring AI-powered experiences across retail. Through its partnership with OpenAI, omnichannel retailer Target has announced that consumers can discover and shop Target products inside ChatGPT as a curated, conversational experience. Target is offering its shopping experience through an app in ChatGPT, allowing users to purchase multiple items in a single transaction, shop for fresh food products, and select drive-up, pickup, or shipping fulfillment options.

HappyFox launches Autopilot agentic AI platform for customer support teams. HappyFox, a customer service software provider, has launched Autopilot, an agentic AI platform that delivers pre-built agents for quick deployment. “Shopify Delivery Dispute Analyzer” investigates ecommerce delivery discrepancies between fulfillment status and customer claims. “Ticket Triage Agent” automatically categorizes and tags tickets. “Churn Risk Detector” analyzes SaaS customer conversations for signals of dissatisfaction. “Duplicate Ticket Notifier” identifies and flags potential duplicate tickets. Users can access outcome-based pricing and pay only when agents complete tasks, per HappyFox.


Mueller: Background Video Loading Unlikely To Affect SEO via @sejournal, @MattGSouthern

Google Search Advocate John Mueller says large video files loading in the background are unlikely to have a noticeable SEO impact if page content loads first.

A site owner on Reddit’s r/SEO asked whether a 100MB video would hurt SEO if the page prioritizes loading a hero image and content before the video. The video continues loading in the background while users can already see the page.

Mueller responded:

“I don’t think you’d notice an SEO effect.”

Broader Context

The question addresses a common concern for sites using large hero videos or animated backgrounds.

The site owner described an implementation where content and images load within seconds, displaying a “full visual ready” state. The video then loads asynchronously and replaces the hero image once complete.

This method aligns with Google’s documentation on lazy loading, which recommends deferring non-critical content to improve page performance.

Google’s help documents state that lazy loading is “a common performance and UX best practice” for non-critical or non-visible content. The key requirement is ensuring content loads when visible in the viewport.

Why This Matters

If you’re running hero videos or animated backgrounds on landing pages, this suggests that background loading strategies are unlikely to harm your rankings. The critical factor is ensuring your primary content reaches users quickly.

Google measures page experience through Core Web Vitals metrics like Largest Contentful Paint. In many cases, a video that loads after visible content is ready shouldn’t block these measurements.

Implementation Best Practices

Google’s web.dev documentation recommends using preload="none" on video elements to avoid unnecessary preloading of video data. Adding a poster attribute provides a placeholder image while the video loads.

For videos that autoplay, the documentation suggests using the Intersection Observer API to load video sources only when the element enters the viewport. This lets you maintain visual impact without affecting initial page load performance.
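A minimal sketch combining these recommendations might look like the following: preload="none" plus a poster placeholder on the element, with an Intersection Observer that swaps in the real source only once the video scrolls into view. The class name, file paths, and data-src convention are illustrative assumptions:

```html
<video class="hero-video" preload="none" poster="/img/hero-poster.jpg"
       muted loop playsinline>
  <source data-src="/video/hero.mp4" type="video/mp4">
</video>

<script>
  // Load and play the hero video only when it enters the viewport.
  const video = document.querySelector(".hero-video");
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      // Swap the real URL from data-src into src now that it's visible.
      for (const source of entry.target.querySelectorAll("source")) {
        source.src = source.dataset.src;
      }
      entry.target.load(); // begin fetching the video data
      entry.target.play();
      obs.unobserve(entry.target); // one-time setup; stop watching
    }
  });
  observer.observe(video);
</script>
```

Until the swap happens, the browser fetches nothing but the poster image, which keeps the video out of the initial page load entirely.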

Looking Ahead

Site owners using background video can generally continue doing so without major SEO concerns, provided content loads first. Focus on Core Web Vitals metrics to verify your implementation meets performance thresholds.

Test your setup using Google Search Console’s URL Inspection Tool to confirm video elements appear correctly in rendered HTML.


New Data: Top Factors Influencing ChatGPT Citations via @sejournal, @MattGSouthern

SE Ranking analyzed 129,000 unique domains across 216,524 pages in 20 niches to identify which factors correlate with ChatGPT citations.

The number of referring domains ranked as the single strongest predictor of citation likelihood.

What The Data Says

Backlinks And Trust Signals

Link diversity showed the clearest correlation with citations. Sites with up to 2,500 referring domains averaged 1.6 to 1.8 citations. Those with over 350,000 referring domains averaged 8.4 citations.

The researchers identified a threshold effect at 32,000 referring domains. At that point, citations nearly doubled from 2.9 to 5.6.

Domain Trust scores followed a similar pattern. Sites with Domain Trust below 43 averaged 1.6 citations. The benefits accelerated significantly at the top end: sites scoring 91–96 averaged 6 citations, while those scoring 97–100 averaged 8.4.

Page Trust mattered less than domain-level signals. Any page with a Page Trust score of 28 or above received roughly the same citation rate (8.3 average), suggesting ChatGPT weighs overall domain authority more heavily than individual page metrics.

One notable finding: .gov and .edu domains didn’t automatically outperform commercial sites. Government and educational domains averaged 3.2 citations, compared to 4.0 for sites without trusted zone designations.

The authors wrote:

“What ultimately matters is not the domain name itself, but the quality of the content and the value it provides.”

Traffic & Google Rankings

Domain traffic ranked as the second most important factor, though the correlation only appeared at high traffic levels.

Sites under 190,000 monthly visitors averaged 2 to 2.9 citations regardless of exact traffic volume. A site receiving 20 organic visitors performed similarly to one receiving 20,000.

Only after crossing 190,000 monthly visitors did traffic correlate with increased citations. Domains with over 10 million visitors averaged 8.5 citations.

Homepage traffic specifically mattered. Sites with at least 7,900 organic visitors to their main page showed the highest citation rates.

Average Google ranking position also tracked with ChatGPT citations. Pages ranking between positions 1 and 45 averaged 5 citations. Those ranking 64 to 75 averaged 3.1.

The authors noted:

“While this doesn’t prove that ChatGPT relies on Google’s index, it suggests both systems evaluate authority and content quality similarly.”

Content Depth & Structure

Content length showed consistent correlation. Articles under 800 words averaged 3.2 citations. Those over 2,900 words averaged 5.1.

Structure mattered beyond raw word count. Pages with section lengths of 120 to 180 words between headings performed best, averaging 4.6 citations. Extremely short sections under 50 words averaged 2.7 citations.

Pages with expert quotes averaged 4.1 citations versus 2.4 for those without. Content with 19 or more statistical data points averaged 5.4 citations, compared to 2.8 for pages with minimal data.

Content freshness produced one of the clearer findings. Pages updated within three months averaged 6 citations. Outdated content averaged 3.6.

Surprisingly, the raw data showed that pages with FAQ sections actually received fewer citations (3.8) than those without (4.1). However, the researchers noted that their predictive model viewed the absence of an FAQ section as a negative signal. They suggest this discrepancy exists because FAQs often appear on simpler support pages that naturally earn fewer citations.

The report also found that using question-style headings (e.g., as H1s or H2s) underperformed straightforward headings, earning 3.4 citations versus 4.3. This contradicts standard voice search optimization advice, suggesting AI models may prefer direct topical labeling over question formats.

Social Signals & Review Platforms

Brand mentions on discussion platforms showed strong correlation with citations.

Domains with minimal Quora presence (up to 33 mentions) averaged 1.7 citations. Heavy Quora presence (6.6 million mentions) corresponded to 7.0 citations.

Reddit showed similar patterns. Domains with over 10 million mentions averaged 7 citations, compared to 1.8 for those with minimal activity.

The authors positioned this as particularly relevant for smaller sites:

“For smaller, less-established websites, engaging on Quora and Reddit offers a way to build authority and earn trust from ChatGPT, similar to what larger domains achieve through backlinks and high traffic.”

Presence on review platforms like Trustpilot, G2, Capterra, Sitejabber, and Yelp also correlated with increased citations. Domains listed on multiple review platforms earned 4.6 to 6.3 citations on average. Those absent from such platforms averaged 1.8.

Technical Performance

Page speed metrics correlated with citation likelihood.

Pages with First Contentful Paint under 0.4 seconds averaged 6.7 citations. Slower pages (over 1.13 seconds) averaged 2.1.

Speed Index showed similar patterns. Sites with indices below 1.14 seconds performed reliably well. Those above 2.2 seconds experienced steep decline.

One counterintuitive finding: pages with the fastest Interaction to Next Paint scores (under 0.4 seconds) actually received fewer citations (1.6 average) than those with moderate INP scores (0.8 to 1.0 seconds, averaging 4.5 citations). The researchers suggested extremely simple or static pages may not signal the depth ChatGPT looks for in authoritative sources.

URL & Title Optimization

The report found that broad, topic-describing URLs outperformed keyword-optimized ones.

Pages with low semantic relevance between URL and target keyword (0.00 to 0.57 range) averaged 6.4 citations. Those with highest semantic relevance (0.84 to 1.00) averaged only 2.7 citations.

Titles followed the same pattern. Titles with low keyword matching averaged 5.9 citations. Highly keyword-optimized titles averaged 2.8.

The researchers concluded: “ChatGPT prefers URLs that clearly describe the overall topic rather than those strictly optimized for a single keyword.”

Factors That Underperformed

Several commonly recommended AI optimization tactics showed minimal or negative correlation with citations.

FAQ schema markup underperformed. Pages with FAQ schema averaged 3.6 citations. Pages without averaged 4.2.

LLMs.txt files showed negligible impact. Outbound links to high-authority sites also showed minimal effect on citation likelihood.

Why This Matters

The findings suggest your existing SEO strategy may already serve AI visibility goals. If you’re building referring domains, earning traffic, maintaining fast pages, and keeping content updated, you’re addressing the factors this report identified as most predictive.

For smaller sites without extensive backlink profiles, the research points to community engagement on Reddit and Quora as a viable path to building authority signals. The data also suggests focusing on content depth over keyword density.

The researchers note that factors are interdependent. Optimizing one signal while ignoring others reduces overall effectiveness.

Looking Ahead

SE Ranking analyzed ChatGPT specifically. Other AI systems may weight factors differently.

SE Ranking doesn’t specify which ChatGPT version or timeframe the data represents, so these patterns should be treated as directional correlations rather than proof of how ChatGPT’s ranking algorithm works.


Featured Image: BongkarnGraphic/Shutterstock

How AI’s Geo-Identification Failures Are Rewriting International SEO via @sejournal, @motokohunt

AI search isn’t just changing what content ranks; it’s quietly redrawing where your brand appears to belong. As large language models (LLMs) synthesize results across languages and markets, they blur the boundaries that once kept content localized. Traditional geographic signals such as hreflang, ccTLDs, and regional schema are being bypassed, misread, or overwritten by global defaults. The result: your English site becomes the “truth” for all markets, while your local teams wonder why their traffic and conversions are vanishing.

This article focuses primarily on search-grounded AI systems such as Google’s AI Overviews and Bing’s generative search, where the problem of geo-identification drift is most visible. Purely conversational AI may behave differently, but the core issue remains: when authority signals and training data skew global, synthesis often loses geographic context.

The New Geography Of Search

In classic search, location was explicit:

  • IP, language, and market-specific domains dictated what users saw.
  • Hreflang told Google which market variant to serve.
  • Local content lived on distinct ccTLDs or subdirectories, supported by region-specific backlinks and metadata.

AI search breaks this deterministic system.

In a recent article on “AI Translation Gaps,” international SEO expert Blas Giffuni demonstrated this problem when he typed the phrase “proveedores de químicos industriales.” Rather than presenting the local market website with a list of industrial chemical suppliers in Mexico, it presented a translated list from the US, some of which either did not do business in Mexico or did not meet local safety or business requirements. A generative engine doesn’t just retrieve documents; it synthesizes an answer using whatever language or source it finds most complete.

If your local pages are thin, inconsistently marked up, or overshadowed by global English content, the model will simply pull from the worldwide corpus and rewrite the answer in Spanish or French.

On the surface, it looks localized. Underneath, it’s English data wearing a different flag.

Why Geo-Identification Is Breaking

1. Language ≠ Location

AI systems treat language as a proxy for geography. A Spanish query could represent Mexico, Colombia, or Spain. If your signals don’t specify which markets you serve through schema, hreflang, and local citations, the model lumps them together.

When that happens, your strongest instance wins. And nine times out of 10, that’s your main English language website.

2. Market Aggregation Bias

During training, LLMs learn from corpus distributions that heavily favor English content. When related entities appear across markets (‘GlobalChem Mexico,’ ‘GlobalChem Japan’), the model’s representations are dominated by whichever instance has the most training examples, typically the English global brand. This creates an authority imbalance that persists during inference, causing the model to default to global content even for market-specific queries.

3. Canonical Amplification

Search engines naturally try to consolidate near-identical pages, and hreflang exists to counter that bias by telling them that similar versions are valid alternatives for different markets. When AI systems retrieve from these consolidated indexes, they inherit this hierarchy, treating the canonical version as the primary source of truth. Without explicit geographic signals in the content itself, regional pages become invisible to the synthesis layer, even when they are adequately tagged with hreflang.

This amplifies market-aggregation bias; your regional pages aren’t just overshadowed, they’re conceptually absorbed into the parent entity.

Will This Problem Self-Correct?

As LLMs incorporate more diverse training data, some geographic imbalances may diminish. However, structural issues like canonical consolidation and the network effects of English-language authority will persist. Even with perfect training data distribution, your brand’s internal hierarchy and content depth differences across markets will continue to influence which version dominates in synthesis.

The Ripple Effect On Local Search

Global Answers, Local Users

Procurement teams in Mexico or Japan receive AI-generated answers derived from English pages. The contact info, certifications, and shipping policies are wrong, even if localized pages exist.

Local Authority, Global Overshadowing

Even strong local competitors are being displaced because models weigh the English/global corpus more heavily. The result: the local authority doesn’t register.

Brand Trust Erosion

Users perceive this as neglect:

“They don’t serve our market.”
“Their information isn’t relevant here.”

In regulated or B2B industries where compliance, units, and standards matter, this results in lost revenue and reputational risk.

Hreflang In The Age of AI

Hreflang was a precision instrument in a rules-based world. It told Google which page to serve in which market. But AI engines don’t “serve pages” – they generate responses.

That means:

  • Hreflang becomes advisory, not authoritative.
  • Current evidence suggests LLMs don’t actively interpret hreflang during synthesis because it doesn’t apply to the document-level relationships they use for reasoning.
  • If your canonical structure points to global pages, the model inherits that hierarchy, not your hreflang instructions.

In short, hreflang still helps Google indexing, but it no longer governs interpretation.

AI systems learn from patterns of connectivity, authority, and relevance. If your global content has richer interlinking, higher engagement, and more external citations, it will always dominate the synthesis layer – regardless of hreflang.
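As a refresher, hreflang annotations are declared per page as alternate-link tags. A minimal sketch (all URLs hypothetical) shows how the same product page is mapped to different market variants:

```html
<!-- Hypothetical example: alternate market versions of the same page -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/products/" />
<link rel="alternate" hreflang="es-mx" href="https://www.example.com/mx/productos/" />
<link rel="alternate" hreflang="es-es" href="https://www.example.com/es/productos/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/products/" />
```

These tags tell Google which variant to serve in which market, but as noted above, there is little evidence the synthesis layer reads them; they remain necessary for indexing, not sufficient for AI visibility.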

Read more: Ask An SEO: What Are The Most Common Hreflang Mistakes & How Do I Audit Them?

How Geo Drift Happens

Let’s look at a real-world pattern observed across markets:

  1. Weak local content (thin copy, missing schema, outdated catalog).
  2. Global canonical consolidates authority under .com.
  3. AI overview or chatbot pulls the English page as source data.
  4. The model generates a response in the user’s language, drawing facts and context from the English source while adding a few local brand names to create the appearance of localization.
  5. User clicks through to a U.S. contact form, gets blocked by shipping restrictions, and leaves frustrated.

Each of these steps seems minor, but together they create a digital sovereignty problem – global data has overwritten your local market’s representation.

Geo-Legibility: The New SEO Imperative

In the era of generative search, the challenge isn’t just to rank in each market – it’s to make your presence geo-legible to machines.

Geo-legibility builds on international SEO fundamentals but addresses a new challenge: making geographic boundaries interpretable during AI synthesis, not just during traditional retrieval and ranking. While hreflang tells Google which page to index for which market, geo-legibility ensures the content itself contains explicit, machine-readable signals that survive the transition from structured index to generative response.

That means encoding geography, compliance, and market boundaries in ways LLMs can understand during both indexing and synthesis.

Key Layers Of Geo-Legibility

Layer | Example Action | Why It Matters
Content | Include explicit market context (e.g., “Distribuimos en México bajo norma NOM-018-STPS”) | Reinforces relevance to a defined geography.
Structure | Use schema for areaServed, priceCurrency, and addressLocality | Provides explicit geographic context that may influence retrieval systems and helps future-proof as AI systems evolve to better understand structured data.
Links & Mentions | Secure backlinks from local directories and trade associations | Builds local authority and entity clustering.
Data Consistency | Align address, phone, and organization names across all sources | Prevents entity merging and confusion.
Governance | Monitor AI outputs for misattribution or cross-market drift | Detects early leakage before it becomes entrenched.

Note: While current evidence for schema’s direct impact on AI synthesis is limited, these properties strengthen traditional search signals and position content for future AI systems that may parse structured data more systematically.
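A minimal JSON-LD sketch of those structured-data properties, using schema.org’s Organization type (the brand name, URLs, and locality here are hypothetical):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "GlobalChem México",
  "url": "https://www.example.com/mx/",
  "areaServed": { "@type": "Country", "name": "Mexico" },
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Monterrey",
    "addressCountry": "MX"
  },
  "makesOffer": {
    "@type": "Offer",
    "priceCurrency": "MXN"
  }
}
```

Note that priceCurrency belongs on an Offer and addressLocality on a PostalAddress; nesting them correctly keeps the markup valid for traditional rich-result parsing even if AI systems ignore it today.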

Geo-legibility isn’t about speaking the right language; it’s about being understood in the right place.

Diagnostic Workflow: “Where Did My Market Go?”

  1. Run Local Queries in AI Overview or Chat Search. Test your core product and category terms in the local language and record which language, domain, and market each result reflects.
  2. Capture Cited URLs and Market Indicators. If you see English pages cited for non-English queries, that’s a signal your local content lacks authority or visibility.
  3. Cross-Check Search Console Coverage. Confirm that your local URLs are indexed, discoverable, and mapped correctly through hreflang.
  4. Inspect Canonical Hierarchies. Ensure your regional URLs aren’t canonicalized to global pages. AI systems often treat canonical as “primary truth.”
  5. Test Structured Geography. For Google and Bing, be sure to add or validate schema properties like areaServed, address, and priceCurrency to help engines map jurisdictional relevance.
  6. Repeat Quarterly. AI search evolves rapidly. Regular testing ensures your geo boundaries remain stable as models retrain.
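Steps 3 and 4 of this audit can be partially automated. The sketch below, using only Python’s standard library and a hypothetical sample page, extracts rel=canonical and hreflang annotations from a page’s HTML and flags a regional URL that is canonicalized to a global page:

```python
# Minimal audit sketch: extract canonical and hreflang link tags so you can
# spot regional URLs canonicalized to a global page. Stdlib only; the sample
# HTML and URLs are hypothetical.
from html.parser import HTMLParser

class GeoSignalParser(HTMLParser):
    """Collects rel=canonical and rel=alternate hreflang link tags."""
    def __init__(self):
        super().__init__()
        self.canonical = None
        self.hreflang = {}  # e.g. {"es-mx": "https://..."}

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        rel = (a.get("rel") or "").lower()
        if rel == "canonical":
            self.canonical = a.get("href")
        elif rel == "alternate" and a.get("hreflang"):
            self.hreflang[a["hreflang"].lower()] = a.get("href")

def audit_page(html: str, expected_local_url: str):
    """Flag drift: a page whose canonical points away from its local market URL."""
    parser = GeoSignalParser()
    parser.feed(html)
    drift = parser.canonical is not None and parser.canonical != expected_local_url
    return {"canonical": parser.canonical, "hreflang": parser.hreflang, "drift": drift}

sample = """
<head>
  <link rel="canonical" href="https://www.example.com/products/" />
  <link rel="alternate" hreflang="es-mx" href="https://www.example.com/mx/productos/" />
</head>
"""
result = audit_page(sample, "https://www.example.com/mx/productos/")
print(result["drift"])  # True: the Mexican page is canonicalized to the global .com
```

Run against fetched copies of each market’s key URLs, this turns the quarterly check into a repeatable report rather than a manual page-by-page inspection.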

Remediation Workflow: From Drift To Differentiation

Step | Focus | Impact
1 | Strengthen local data signals (structured geography, certification markup). | Clarifies market authority
2 | Build localized case studies, regulatory references, and testimonials. | Anchors E-E-A-T locally
3 | Optimize internal linking from regional subdomains to local entities. | Reinforces market identity
4 | Secure regional backlinks from industry bodies. | Adds non-linguistic trust
5 | Adjust canonical logic to favor local markets. | Prevents AI inheritance of global defaults
6 | Conduct “AI visibility audits” alongside traditional SEO reports. |

Beyond Hreflang: A New Model Of Market Governance

Executives need to see this for what it is: not an SEO bug, but a strategic governance gap.

AI search collapses boundaries between brand, market, and language. Without deliberate reinforcement, your local entities become shadows inside global knowledge graphs.

That loss of differentiation affects:

  • Revenue: You become invisible in the markets where growth depends on discoverability.
  • Compliance: Users act on information intended for another jurisdiction.

  • Equity: Your local authority and link capital are absorbed by the global brand, distorting measurement and accountability.

Why Executives Must Pay Attention

The implications of AI-driven geo drift extend far beyond marketing. When your brand’s digital footprint no longer aligns with its operational reality, it creates measurable business risk. A misrouted customer in the wrong market isn’t just a lost lead; it’s a symptom of organizational misalignment between marketing, IT, compliance, and regional leadership.

Executives must ensure their digital infrastructure reflects how the company actually operates, which markets it serves, which standards it adheres to, and which entities own accountability for performance. Aligning these systems is not optional; it’s the only way to minimize negative impact as AI platforms redefine how brands are recognized, attributed, and trusted globally.

Executive Imperatives

  1. Reevaluate Canonical Strategy. What once improved efficiency may now reduce market visibility. Treat canonicals as control levers, not conveniences.
  2. Expand SEO Governance to AI Search Governance. Traditional hreflang audits must evolve into cross-market AI visibility reviews that track how generative engines interpret your entity graph.
  3. Reinvest in Local Authority. Encourage regional teams to create content with market-first intent – not translated copies of global pages.
  4. Measure Visibility Differently. Rankings alone no longer indicate presence: track citations, sources, and language of origin in AI search outputs.

Final Thought

AI didn’t make geography irrelevant; it just exposed how fragile our digital maps were.

Hreflang, ccTLDs, and translation workflows gave companies the illusion of control.

AI search removed the guardrails, and now the strongest signals win – regardless of borders.

The next evolution of international SEO isn’t about tagging and translating more pages. It’s about governing your digital borders and making sure every market you serve remains visible, distinct, and correctly represented in the age of synthesis.

Because when AI redraws the map, the brands that stay findable aren’t the ones that translate best; they’re the ones who define where they belong.

Featured Image: Roman Samborskyi/Shutterstock

The 2026 AI Search Benchmark Every SEO Leader Needs [Webinar] via @sejournal, @lorenbaker

See Where Your Brand Stands in the New Search Frontier

AI search has become the new gateway to visibility. As Google’s AI Overviews and Answer Engine Optimization (AEO) reshape discovery, the question is no longer if your brand should adapt, but how fast.

Join Pat Reinhart, VP of Services and Thought Leadership at Conductor, and Shannon Vize, Sr. Content Marketing Manager at Conductor, for an exclusive first look at the 2026 AEO and GEO Benchmarks Report, the industry’s most comprehensive study of AI search performance across 10 key industries.

What You’ll Learn

  • The exclusive 2026 benchmarks for AI referral traffic, AIO visibility, and AEO/GEO performance across industries
  • How to identify where your brand stands against AI market share leaders
  • How AI search and AIO are transforming visibility and referral traffic

Why Attend?

This is your opportunity to see what top-performing brands are doing differently and how to measure your own visibility, referral traffic, and share of voice in AI search. You’ll gain data-backed insights to update your SEO and AEO strategy for 2026 and beyond.

📌 Register now to secure your seat and benchmark your brand’s performance in the new era of AI search.

🛑 Can’t make it live? Register anyway and we’ll send you the full recording after the event.

Aligning VMware migration with business continuity

For decades, business continuity planning meant preparing for anomalous events like hurricanes, floods, tornadoes, or regional power outages. In anticipation of these rare disasters, IT teams built playbooks, ran annual tests, crossed their fingers, and hoped they’d never have to use them.

In recent years, an even more persistent threat has emerged. Cyber incidents, particularly ransomware, are now more common—and often, more damaging—than physical disasters. In a recent survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year. Earlier in 2025, ransomware attack rates on enterprises reached record highs.

Mark Vaughn, senior director of the virtualization practice at Presidio, has witnessed the trend firsthand. “When I speak at conferences, I’ll ask the room, ‘How many people have been impacted?’ For disaster recovery, you usually get a few hands,” he says. “But a little over a year ago, I asked how many people in the room had been hit by ransomware, and easily two-thirds of the hands went up.”

Download the full article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: the future of AlphaFold, and chatbot privacy concerns

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate

In 2017, fresh off a PhD on theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from game-playing AI to a secret project to predict the structures of proteins. He applied for a job.

Just three years later, Jumper and CEO Demis Hassabis had led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching lab-level accuracy, and doing it many times faster—returning results in hours instead of months.

Last year, Jumper and Hassabis shared a Nobel Prize in chemistry. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out. Read the full story.

—Will Douglas Heaven

The State of AI: Chatbot companions and the future of our privacy

—Eileen Guo & Melissa Heikkilä

Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up.

Some state governments are taking notice and starting to regulate companion AI. But tellingly, one area the laws fail to address is user privacy. Read the full story.

This is the fourth edition of The State of AI, our subscriber-only collaboration between the Financial Times and MIT Technology Review. Sign up here to receive future editions every Monday.

While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to the MIT Technology Review are able to read the whole thing on our site.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has signed an executive order to boost AI innovation 
The “Genesis Mission” will try to speed up the rate of scientific breakthroughs. (Politico)
+ The order directs government science agencies to aggressively embrace AI. (Axios)
+ It’s also being touted as a way to lower energy prices. (CNN)

2 Anthropic’s new AI model is designed to be better at coding
We’ll discover just how much better once Claude Opus 4.5 has been properly put through its paces. (Bloomberg $)
+ It reportedly outscored human candidates in an internal engineering test. (VentureBeat)
+ What is vibe coding, exactly? (MIT Technology Review)

3 The AI boom is keeping India hooked on coal
Leaving little chance of cleaning up Mumbai’s famously deadly pollution. (The Guardian)
+ It’s lethal smog season in New Delhi right now. (CNN)
+ The data center boom in the desert. (MIT Technology Review)

4 Teenagers are losing access to their AI companions
Character.AI is limiting the amount of time underage users can spend interacting with its chatbots. (WSJ $)
+ The majority of the company’s users are young and female. (CNBC)
+ One of OpenAI’s key safety leaders is leaving the company. (Wired $)
+ The looming crackdown on AI companionship. (MIT Technology Review)

5 Weight-loss drugs may be riskier during pregnancy 
Recipients are more likely to deliver babies prematurely. (WP $)
+ The pill version of Ozempic failed to halt Alzheimer’s progression in a trial. (The Guardian)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

6 OpenAI is launching a new “shopping research” tool
All the better to track your consumer spending with. (CNBC)
+ It’s designed for price comparisons and compiling buyer’s guides. (The Information $)
+ The company is clearly aiming for a share of Amazon’s e-commerce pie. (Semafor)

7 LA residents displaced by wildfires are moving into prefab housing 🏠
Their new homes are cheap to build and simple to install. (Fast Company $)
+ How AI can help spot wildfires. (MIT Technology Review)

8 Why former Uber drivers are undertaking the world’s toughest driving test
They’re taking the Knowledge—London’s gruelling street test that bypasses GPS. (NYT $)

9 How to spot a fake battery
Great, one more thing to worry about. (IEEE Spectrum)

10 Where is the Trump Mobile?
Almost six months after it was announced, there’s no sign of it. (CNBC)

Quote of the day

“AI is a tsunami that is gonna wipe out everyone. So I’m handing out surfboards.”

—Filmmaker PJ Accetturo, telling Ars Technica why he’s writing a newsletter advising fellow creatives on how to pivot to AI tools.

One more thing

The second wave of AI coding is here

Ask people building generative AI what generative AI is good for right now—what they’re really fired up about—and many will tell you: coding.

Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. This next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it.

But there’s more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re planning a visit to Istanbul here’s hoping you like cats—the city can’t get enough of them.
+ Rest in power reggae icon Jimmy Cliff.
+ Did you know the ancient Egyptians had a pretty accurate way of testing for pregnancy?
+ As our readers in the US start prepping for Thanksgiving, spare a thought for Astoria the lovelorn turkey 🦃