SISTRIX Reports Sharp Drop In ChatGPT Web Searches via @sejournal, @MattGSouthern

SISTRIX reports that ChatGPT is triggering live web searches far less often for people who use the app without logging in.

In daily spot-checks over the last two weeks, the share of answers that called the web fell from above 15% to below 2.5%. SISTRIX does not assign a cause and notes the observation applies to anonymous sessions.

What Changed

SISTRIX says it “analyses numerous ChatGPT responses to a wide variety of prompts” each day and recently “noticed that ChatGPT uses web searches significantly less frequently.”

It adds that, “at least when using the app without an account,” the measured rate of responses completed via a web search declined sharply in the period reviewed.

SISTRIX doesn’t publish a sample size, list of prompts, or detection method in the post.

SISTRIX also writes that ChatGPT has “traditionally” relied on Bing for web lookups and references rumors of Google data being used, but it doesn’t claim a direct link between any specific backend change and the measured decline.

Related Context

Microsoft Bing Search APIs Retirement

Microsoft announced that the Bing Search APIs were retired on August 11.

Some third-party tools have migrated to alternatives. This doesn’t prove a change inside ChatGPT, but it’s a relevant ecosystem shift.

Google’s SERP Access Changes

SISTRIX separately documented that Google no longer supports the “num=100” parameter and now returns 10 results per request, increasing the effort required to collect SERP data at scale.
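The increased collection effort can be sketched with simple arithmetic. In the Python sketch below, the 10,000-keyword figure is a hypothetical illustration, not a SISTRIX number:

```python
# Rough request arithmetic for collecting the top 100 Google results per keyword.
# Before: a single request with num=100 could return up to 100 results.
# After: each request returns 10 results, so 10 paginated requests are needed.

RESULTS_NEEDED = 100

def requests_per_keyword(results_per_request: int) -> int:
    """Number of requests needed to gather RESULTS_NEEDED results."""
    return -(-RESULTS_NEEDED // results_per_request)  # ceiling division

keywords = 10_000  # hypothetical tracking set
before = keywords * requests_per_keyword(100)  # num=100 era: 1 request each
after = keywords * requests_per_keyword(10)    # 10 results per request now

print(before, after)  # 10000 100000
```

For any fixed keyword set, the request volume rises tenfold, which is the "increased effort" SISTRIX describes.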

Again, this is context rather than causation.

Recent ChatGPT Product Notes

OpenAI’s release notes list “improvements to search in ChatGPT” on September 16, without detailing backend sourcing.

That update may be unrelated to the SISTRIX measurement, but is worth noting in the same timeframe.

Why This Matters

If ChatGPT is consulting the web less frequently in anonymous sessions, you might notice fewer answers citing current sources and a greater reliance on the model’s internal knowledge for those users.

This could influence how often recent news is referenced in responses for users who aren’t logged in, although the behavior may differ for Plus or Enterprise accounts.

Looking Ahead

SISTRIX’s observation is limited to a specific time frame and anonymous usage. Currently, there’s no confirmed information from OpenAI about how frequently ChatGPT performs live lookups overall, and SISTRIX hasn’t provided a reason for the recent drop.

The most cautious conclusion is that one independent measurement showed a sharp short-term decline, which deserves further testing.


Featured Image: matakeris.creative/Shutterstock

Google App Adds Search Live For Real-Time Visual Search via @sejournal, @MattGSouthern

Google has rolled out Search Live in English in the United States, bringing real-time, camera-aware conversations to the Google app on Android and iOS.

You can tap the new Live icon under the search bar, or open Google Lens and choose Live to start an interactive voice conversation that can also see what your camera sees.

Rajan Patel, VP of Engineering for Search at Google, highlights the launch in a post on X:

How It Works

Search Live has two entry points. In the Google app, you can start a voice conversation and optionally enable video input.

Look for the icon shown below:

Image Credit: Google

In Lens, camera sharing is on by default so you can immediately ask questions about what is in front of you and get follow-ups with links to dig deeper on the web.

Google highlights practical scenarios such as hands-free trip planning, quick how-to guidance for hobbies, step-by-step troubleshooting for electronics without typing model numbers, support for school projects, and picking a board game by scanning several boxes at once.

See it in action in this launch video:

Why This Matters

Search Live moves queries from typed text to camera and voice, with answers arriving while people are actively engaged in tasks.

You can capture these searchers by prioritizing content that answers specific, in-the-moment questions. Ensure that your visual information is accurate and easily recognizable.

Local businesses should consider keeping storefront photos, product imagery, and key details current since people can now point, ask, and get links in real time.

Looking Ahead

Search Live is only launching in English in the U.S. for now, but Google says more languages and regions are coming.

This launch continues Google’s push to move everyday search beyond the keyboard. Businesses that prepare their content and visuals for that shift will be better positioned when the rollout expands.

Newfold Digital Sells MarkMonitor As Part Of Strategic Refocus via @sejournal, @martinibuster

London-headquartered corporate domain management company Com Laude announced the acquisition of its competitor, MarkMonitor, previously one of the holdings of Newfold Digital.

Newfold Digital Simplifies Portfolio

Newfold Digital owns many top Internet brands like Yoast, Bluehost, Register.com, and Domain.com, all businesses that focus on small and medium-sized companies. This divestiture may signal that Newfold Digital is shifting away from the enterprise market and focusing its portfolio of web services on the SMB end of the market.

The official Newfold Digital press release states:

“The sale is part of Newfold Digital’s strategy to simplify its portfolio and double down on the areas where it can deliver the greatest value to customers – its core brands, Bluehost and Network Solutions.”

Stu Homan, Head of MarkMonitor, commented:

“With this acquisition, Markmonitor has found owners who value our dedicated corporate services as much as our customers do. Com Laude is deeply committed to preserving and building upon our ability to continue to deliver industry-leading customer service while growing to new levels with dedication and investment.

“Our entire team is excited to bring Com Laude’s advanced tools and services to our customers, and to be part of the most exciting development in corporate domain services since Markmonitor invented the white glove service model twenty-six years ago.”

Prior to the acquisition, Com Laude was a competitor of MarkMonitor, offering similar services but with key differences, including technologies like an AI-powered domain management dashboard.

Com Laude is headquartered in London, United Kingdom, and MarkMonitor is in Boise, Idaho, which is not commonly regarded as the center of Internet commerce or technology but is actually a growing regional technology hub.

Benjamin Crawford, CEO of Com Laude, remarked:

“Markmonitor is the best-known name in domain services for corporate customers, having virtually invented the category twenty-six years ago, and since then grown a long list of blue-chip customers with its “white glove” customer service. Com Laude offers market leading advanced tools and bespoke services in domains and online brand protection, developed for the world’s largest companies and most valuable brands. Together we will be uniquely positioned to protect and grow the digital presence of any company that needs assistance with its domain names, internet infrastructure and security, online brand protection, internet policy and compliance, and online strategy.”

Read Com Laude’s announcement:

Com Laude to Acquire Markmonitor in a Landmark Transaction

Featured Image by Shutterstock/thodonal88

From Line Item To Leverage: How Web Performance Impacts Shareholder Value via @sejournal, @billhunt

Despite years of digital transformation talk, too many CEOs and CFOs still treat the corporate website as a necessary marketing expense, a sunk cost with limited upside. I have heard far too many CEOs of billion-dollar companies describe it simply as an expensive interactive brochure, setting the tone for the company and dooming the web to be just that: a brochure without strategic value.

But the modern website is not just a cost center. It’s a capital asset. One that, when strategically managed, generates revenue, lowers acquisition costs, accelerates growth, and protects brand equity.

In my previous articles (“Closing the Digital Performance Gap” and “Who Owns Web Performance?“), I outlined how poor internal ownership and misaligned incentives drag down web effectiveness. Now it’s time to reframe the economic value of performance. Because digital visibility, findability, and functionality aren’t just tactical wins – they affect shareholder value.

Web Execution: Expense Or Asset?

Let’s speak the CFO’s language. If you build a new manufacturing line, you evaluate its contribution to output and margin. If you invest in a retail expansion, you track foot traffic, conversion, and revenue per square foot.

Why don’t we evaluate digital the same way?

Here’s how most companies currently think:

  • SEO: Free traffic driver.
  • Content: Sales and marketing copy.
  • UX: Design polish.
  • Analytics: Reporting tool.

Here’s how performance-minded leaders think:

  • SEO: Organic demand capture engine.
  • Content: Business development asset.
  • UX: Funnel velocity multiplier.
  • Analytics: Optimization flywheel.

When you stop viewing digital as overhead and start seeing it as infrastructure, the return on investment (ROI) math changes completely.

How Underperformance Drains Enterprise Value

If your digital infrastructure is fragmented, under-optimized, or reactive:

  • You spend more on paid channels to make up for poor organic performance.
  • You lose visibility to competitors in AI and search environments.
  • You deliver confusing or outdated experiences that erode brand trust.
  • You waste employee and agency hours chasing after misaligned key performance indicators (KPIs).

None of these are minor problems. They compound.

They show up in:

  • Lower customer lifetime value (CLV).
  • Higher customer acquisition cost (CAC).
  • Missed revenue from unindexed products or inaccessible content.
  • Declines in organic search traffic and authority that paid cannot make up for.

The Invisible ROI Leak: Misalignment

As explored in “Who Owns Web Performance?,” when multiple teams touch the website – but no one owns outcomes – you get:

  • Wasted spend on underperforming campaigns.
  • Lost traffic due to crawlability errors and excessive technical issues.
  • Duplicated content with no central taxonomy.
  • Security or compliance risks from unmanaged pages.

These are not theoretical. They show up on the balance sheet as missed revenue, higher CAC, and lower conversion rates.

The Capital Efficiency Of SEO And Organic Visibility

Capital efficiency is one of the most underappreciated components of shareholder value, but increasingly, it’s a critical factor in CEO evaluations. Boards and investors are looking beyond topline growth to assess how effectively a company turns investment into output to achieve growth. That means efficient, repeatable, high-margin systems like SEO and web performance become strategic levers, not support functions.

SEO is often dismissed as “free traffic,” but that’s misleading. It isn’t free; it has simply been rebranded into MBA-friendly buzzwords like “organic visibility” and “owned media.” Behind those terms is real effort. SEO teams must optimize content that was often created in a vacuum, retrofit pages with structured data, and resolve infrastructure gaps just to make that content accessible to search engines. These are real costs, and costs that wouldn’t exist if SEO were embedded earlier in the workflow. When viewed holistically as a strategic function, SEO becomes a high-efficiency, compounding return channel: one that gets stronger with alignment and investment, and weaker with neglect.
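As one concrete illustration of the structured-data retrofit work mentioned above, here is a minimal Python sketch that wraps existing page metadata in schema.org Article JSON-LD. All field values are hypothetical examples, not taken from any real page:

```python
import json

def article_jsonld(headline: str, author: str, date_published: str) -> str:
    """Build a schema.org Article JSON-LD script tag from existing page metadata."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
    }
    return '<script type="application/ld+json">' + json.dumps(data) + "</script>"

# Hypothetical example values:
snippet = article_jsonld("Example Guide", "Jane Doe", "2024-01-15")
```

Emitting this at template level when content is first published is cheap; retrofitting it across thousands of pages created in a vacuum is the kind of avoidable cost described above.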

Properly funded and governed SEO:

  • Reduces dependency on paid media.
  • Enables customer self-service and support at scale.
  • Increases discoverability across multiple intent stages.
  • Builds durable search equity and authority.
  • Fuels AI citations and rich result presence.

More importantly, it improves capital efficiency, the ability to turn inputs (budget, time, content) into outputs (qualified leads, revenue, brand trust) with minimal waste.

AI Search Just Raised The Stakes

Search is no longer about blue links – it’s about recommendation systems. AI Overviews, summary blocks, and generative results are now front and center. If your content isn’t structured and usable by those systems, then you’re invisible. Or worse – you’re used as a data source without receiving attribution.

As I wrote in “The New Role of SEO in the Age of AI,” platforms now monetize the experience, not just the click. They extract content, retain the user, and collect behavioral data to improve their own models.

“If your content can’t be reused, monetized, or trained against – it’s less likely to be shown.”

Your site is not just competing with others – it’s competing with the platform itself.

Let’s Talk Shareholder Value

When SEO and digital performance are working:

  • You lower CAC.
  • You increase CLV through better segmentation and nurturing.
  • You strengthen brand equity via visibility and trust signals.
  • You improve operational efficiency through centralized platforms and reusable modules, and reduce customer support costs through effective self-service experiences.
  • You protect valuation by owning your digital demand footprint.

When they aren’t working, you erode those same advantages.

Let’s take a real-world example.

I worked with a public company preparing to spin off half its business into a new entity. The leadership’s attention was focused almost entirely on launching the new brand and website, yet there was no plan for preserving or migrating organic search performance. The new entity’s success depended on leveraging an existing client base, maintaining current sales momentum, and hitting aggressive growth targets. But SEO wasn’t even on the radar.

I was brought in to develop the business case for making organic search a strategic pillar of the post-divestiture digital platform. I argued that we would get senior executive buy-in not by forecasting traffic loss, but by reframing SEO’s contribution across the three drivers of shareholder value:

  • Financial: Conservative modeling, based on current performance rates, showed that a poorly managed migration could result in $350 million in lost lead value. In addition, regaining that visibility via paid media would require tens of millions in unplanned ad spend.
  • Operational: The company continued operating in 45 countries across 10 languages. Without localized optimization and scalable global templates, international lead pipelines would suffer dramatically.
  • Strategic: To stand apart from the legacy business and support complex enterprise sales cycles, the new digital platform needed to rapidly establish authority, build trust signals, and differentiate itself not only in search but in ease of use and depth of information.

By speaking the language of shareholder value and showing how SEO impacted financial outcomes, operational continuity, and long-term strategic position, we secured executive alignment. SEO was integrated early into the platform roadmap, ensuring scalability, visibility, and global readiness from day one.

A Call To Action For Senior Leaders

If you’re a CEO, CMO, or CFO reading this, ask yourself:

  • Do we treat the website as a strategic asset or a sunk cost?
  • Is there executive ownership of performance or just distributed responsibility?
  • Are we capturing, measuring, and maximizing organic opportunity – or plugging gaps with paid media?
  • Is our content structured and usable by AI systems, or just accurate but invisible?

This is about mindset and governance, not just tactics.

Final Thought: Web Performance Is A Leverage Point

As digital channels drive more business outcomes, functions once considered tactical (like SEO or load speed optimization) can now contribute meaningfully to operational leverage, customer acquisition, and profitability, turning them into strategic priorities.

Your website is where your brand, product, content, and promise converge. It’s your most visible, scalable, and measurable asset.

Treating it like a brochure is like owning an F1 race car and only polishing the paint.

When you design for performance, staff for cross-functional excellence, and govern for outcomes – you stop leaking value and start building leverage.

Because in today’s market, digital performance isn’t just good marketing. It’s good business.

And good business drives shareholder value.

Featured Image: Master1305/Shutterstock

And The Truth? This Writing Style Screams AI via @sejournal, @cshel

Six months ago, you could spot AI-generated text by its polished grammar, rigid essay structure, suspicious fondness for em dashes – and, of course, the inevitable emoji bullets (🔥🚀✨). The real giveaway, at least to my eye and ear, isn’t the emojis or the punctuation. It’s the cadence.

AI writing has a rhythm problem. The sentences are clipped. Overly dramatic. Split into one-line paragraphs that feel more like infomercials than journalism.

“The truth? This wasn’t SEO causation. It was a stock market correction.”
“They were left behind. They were angry. They weren’t your people.”

On the page, this is nails-on-chalkboard grating. It doesn’t read as conversational. It reads as performative. In my opinion, this is, without a doubt, AI’s most recognizable stylistic fingerprint.

A Brief History Of The AI Cadence

This rhythm predates AI. It has been the language of speechwriters, preachers, and copywriters long before GPT entered the chat. Think Reagan’s addresses, Clinton’s campaign rallies, Obama’s campaign speeches, Churchill’s wartime broadcasts, and Blair’s conference speeches. Each leaned on rhythm and repetition to generate a great deal of emotion out of a speck of substance. Pair that with Captain Kirk’s famously staccato delivery, televangelists’ sermons, or TED Talks built around dramatic pauses, and you see how cadence can make small or mundane ideas feel powerful and deep.

That style used to stay in its lane. Where print valued density and clarity, speech valued brevity and rhythm. Readers could re-read; listeners could not. Editors enforced writing standards and styles, and the economics of print rewarded information density over theatrics. As a result, this cadence lived solely in the spoken word: in speeches and sales copy, not in essays and articles.

AI collapsed those boundaries. Because LLMs cannot (or choose not to) differentiate between a stump speech, a YouTube transcript, and a white paper, they overindex patterns designed to persuade aloud and repurpose them for the written page. Now, we are inundated with technical articles that read like motivational talks.

Why AIs Default To This Cadence

The AI cadence is not an accident – it’s a reflection of what models were most heavily trained on. Large language models have been fed a disproportionate amount of spoken-word material: transcripts of speeches, news reports, debates, interviews, webinars, podcasts, and video scripts. These aren’t “written texts” in the traditional sense; they are spoken performances converted into text.

Why so much spoken-word data? Because it’s cheap and plentiful. Back when I was running my ISP, I loved radio and TV for advertising and news mentions because it was far less expensive than buying or winning space in print. Broadcasters had 24 hours a day to fill, and local stations were always desperate for content. Print, on the other hand, is expensive. Every page of a newspaper, magazine, or book costs money to produce, and publishers limit content to what is necessary or affordable. As a result, far more hours of audio and video have been produced than carefully edited prose — and much of that material ends up transcribed. Those transcripts give the models a vast mountain of “written-down speech” compared to a relatively smaller body of curated, edited text.

The difference is subtle but important: a transcript is in a written medium, but it is not writing in a written style. It preserves the cadence of spoken delivery — short bursts, rhetorical pauses, fragments. Models overindex this rhythm because it dominates the dataset.

Even when prompted to avoid it, the models can’t resist drifting back into this rhythm. They might manage a few sentences of varied prose, but the gravitational pull of the AI cadence always drags them back. It’s now the default groove burned into their training.

The Em Dash Problem

That overindexing also explains a related AI tell: the sudden overuse of em dashes. In polished writing, dashes were historically used sparingly for emphasis or interruption. In speech, however, pauses are constant. Transcripts often mark those pauses with dashes. For a model swimming in transcripts, the dash becomes a default punctuation mark, because it functions as the written equivalent of a spoken pause. The result is copy littered with dashes – not because the ideas require them, but because the training data normalized them.

Punctuation As Breath

Punctuation has always been about more than grammar. Periods, commas, and dashes are signals for how we pause and where we breathe. They are like rests in music, telling the reader when to stop, inhale, and reset before continuing. Well-edited prose balances those pauses so the rhythm feels natural.

The AI cadence breaks this balance. When every thought is chopped into fragments, you’re effectively told to breathe after every line. Reading an article like this feels like hyperventilating: shallow breaths, constant interruptions, no sustained flow. It makes everything sound catastrophic, urgent, or world-shattering, even when the subject matter is mundane. Gentle readers, not every sentence or every idea warrants that level of drama.

Where this leaves us is that when models generate text, they parrot back the structures they’ve seen most often: speech rhythms and speech punctuation, presented as though they were the standard for written communication. They are not. They’re salesmanship with line breaks and pauses dressed up as prose.

Why Readers React To It

This cadence feels powerful at first. It mimics natural speech. It creates rhythm. It feels dramatic without requiring depth. That’s why it pops in feeds.

However, the longer this cadence is stretched out, as in long-form content, or the more a reader is exposed to it over and over again, the more its initial power collapses into disdain. This breathy, short-sentence delivery leads to:

  • Oversimplification that flattens nuance.
  • Repetition that manipulates more than it informs.
  • Every line demanding attention, ensuring none of it earns any.
  • Readers suspecting that style is substituting for substance.

Here is the deeper problem: when everything is delivered as if it were earth-shattering, readers begin to doubt the authenticity of the message itself. It’s Syndrome’s hypothesis in The Incredibles: “When everyone is super, no one is.” If every sentence screams urgency, then nothing actually carries weight.

Historically, this kind of relentless, crisis-driven cadence has also been a manipulation tactic. Political demagogues, televangelists, and snake-oil salesmen leaned on hyperbole precisely because they lacked evidence. When AI reproduces that same rhythm on the page, it inherits the credibility problem too. Readers may not articulate it consciously, but they feel it: if you have to shout every line, maybe you don’t have enough substance to stand on quietly.

Just as keyword stuffing once became a hallmark of low-quality SEO, this cadence is already becoming the hallmark of low-quality AI. Readers recognize the rhythm before they absorb the message. When the medium distracts from the message, trust erodes.

A Tale Of Two Paragraphs

AI cadence in practice:

“The algorithm changed.
Sites lost traffic.
Panic spread.
And the industry?
It declared SEO dead – again.”

Now, the same idea written for readers:

“When the algorithm changed, many sites saw a drop in traffic. The panic was predictable. Within days, familiar headlines declared SEO dead once again. The cycle repeats every few years, and every few years it proves wrong.”

The difference here is obvious: one is an infomercial and the other is writing.

How To Spot It

Editors and readers can train themselves to notice:

  • Long runs of one-sentence paragraphs.
  • Rhetorical questions with no depth (often beginning with conjunctions like And or But).
  • Sentence fragments pretending to be profound.
  • Sermon-like pacing that seems to expect a chorus of ‘amens’ (or applause, if you’re lucky)…

Simply put, once you have seen it, you cannot unsee it: it is the literary equivalent of a laugh track.
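These tells can even be approximated in code. The Python sketch below is a crude illustration of the heuristics above; the regexes and measures are illustrative assumptions, not a validated detector:

```python
import re
import statistics

def cadence_flags(text: str) -> dict:
    """Crude heuristics for the staccato cadence: counts one-sentence
    paragraphs and average sentence length in words."""
    paragraphs = [p.strip() for p in text.split("\n") if p.strip()]
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "one_sentence_paragraphs": sum(
            1 for p in paragraphs if len(re.findall(r"[.!?]", p)) <= 1
        ),
        "avg_sentence_words": statistics.mean(lengths) if lengths else 0,
    }

staccato = "The algorithm changed.\nSites lost traffic.\nPanic spread."
flowing = ("When the algorithm changed, many sites saw a drop in traffic, "
           "and the panic that followed was predictable.")

print(cadence_flags(staccato))  # short sentences, every paragraph a one-liner
print(cadence_flags(flowing))   # longer, varied prose
```

A long run of one-sentence paragraphs with a low average sentence length is not proof of AI authorship, but it is a reasonable signal that a draft deserves a rhythm edit.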

How To Write Like A Human Again

How do we remedy this situation? Short of, I suppose, doing our own writing?

  • Vary sentence length instead of defaulting to extremes.
  • Use rhetorical questions sparingly – only when they genuinely add depth.
  • Group related ideas into paragraphs; readers can handle more than one sentence at a time. Unless you are writing FOR toddlers, do not treat your readers as though they ARE toddlers.
  • Prioritize clarity and voice over performative drama. Note here that the goal isn’t to sound casual at all costs, but to sound intentional, rational, and backed by data.

Why It Matters For SEOs And Marketers

AI writing tools are embedded in nearly every workflow. Left unchecked, they will flood the web with copy that reads like an endless sales pitch. Professionals must edit not just for facts but for voice.

That means:

  • Training teams to recognize and break the AI cadence.
  • Creating style guides that emphasize varied sentence and paragraph structure.
  • Editing AI drafts with rhythm in mind, not just keywords.
  • Writing for humans who read – not just platforms that skim.

Respecting the reader’s time and intelligence is, in the end, the real optimization.

Is There Ever A Place For This Style?

Yes, of course, but like most things, in moderation. Staccato writing is effective for:

  • Ad copy where space is limited.
  • Video scripts where pacing drives attention. (Your LinkedIn vertical videos and IG Reels? Have at it. This is where the staccato AI cadence shines.)
  • The occasional LinkedIn post engineered for scanning.

However, should this become the default writing style for articles, blogs, or essays? Abso-effing-lutely not. It cheapens the content and undermines credibility.

In Closing

AI has introduced more than just new tools. It has also normalized certain stylistic tics that don’t belong in most forms of writing. Among these, the AI cadence problem is the most recognizable and the most damaging when left unchecked.

Writers, editors, and marketers need to treat the presence of AI cadence in their writings the same way we treated keyword stuffing a decade ago: as a major red flag. The difference between human and AI writing isn’t just factual accuracy. It’s rhythm, intent, and voice.

The real divide isn’t human versus machine. It’s generic versus intentional. Intentional writing that is structured for clarity, rooted in substance, and respectful of the reader will always stand out.

Featured Image: N Universe/Shutterstock

Are AI Search Summaries Making Evergreen Articles Obsolete? via @sejournal, @martinibuster

Ahrefs’ Tim Soulo recently posted that AI is making publishing evergreen content obsolete and no longer worth the investment because AI summaries leave fewer clicks for publishers. He posits that it may be more profitable to focus on trending topics, calling it Fast SEO. Is publishing evergreen content no longer a viable content strategy?

The Reason For Evergreen Content

Evergreen content can be a basic topic that generally doesn’t change much from year to year. For example, the answer to how to change a tire will generally always be the same.

The promise of evergreen content was that it represents a steady source of traffic. Once a web page is ranking for evergreen topics, publishers basically just have to make sure that it’s updated if the topic has changed in some way.

Does AI Break The Evergreen Content Promise?

Tim Soulo is suggesting that evergreen content, which can be easy to answer with a summary, is less likely to send a click because AI summarizes the answer and satisfies the user, who may not need to visit a website.

Soulo tweeted:

“The era of “evergreen SEO content” is over. We’re entering the era of “fast SEO.”

There’s little point in writing yet another “Ultimate Guide To ___.” Most evergreen topics have already been covered to death and turned into common knowledge. Google is therefore happy to give an AI answer, and searchers are fine with that.

Instead, the real opportunity lies in spotting and covering new trends — or even setting them yourself.”

Is Fast SEO The Future Of Publishing?

Fast SEO is another way of describing trending topics. Trending topics have always been around; it’s why Google invented the freshness algorithm, to satisfy users with up-to-date content when a “query deserves freshness.”

Soulo’s idea is that trending topics are not the kind of content that AI summarizes. Perplexity is the exception; it has an entire content discovery section called Perplexity Discover that’s dedicated to showing trending news articles.

Fast SEO is about spotting and seizing short-lived content opportunities. These can be new developments, shifts in the industry or perceptions, or cultural moments.

His tweet captures the current feeling within the SEO and publishing communities that AI is the reason for diminishing traffic from Google.

The Evergreen Content Situation Is Worse Than Imagined

A technical issue that Soulo didn’t mention but is relevant here is that it’s challenging to create an “Ultimate Guide To X, Y, Z” or the “Definitive Guide To Bla, Bla, Bla” and expect it to be fresh and different from what is already published.

The barrier to entry for evergreen content is higher now than it’s ever been for several reasons:

  • There are more people publishing content.
  • People are consuming multiple forms of content (text, audio, and video).
  • Search algorithms are focused on quality, which shuts out those who focus harder on SEO than they do on people.
  • User behavior signals are more reliable than traditional link signals, and SEOs still haven’t caught on to this, making it harder to rank.
  • Query Fan-Out is causing a huge disruption in SEO.

Why Query Fan-Out Is A Disruption

Evergreen content is an uphill struggle, compounded by the seeming inevitability that AI will summarize the content and, because of Query Fan-Out, possibly send the click to another website that is cited because it offers the answer to a follow-up question to the initial search query.

Query Fan-Out surfaces answers to the initial query alongside answers to anticipated follow-up questions. If the user is happy with the summary of the initial query, they may become interested in one of the follow-up queries, and a site cited for that follow-up, not for the initial query, will get the click.

This completely changes what it means to target a search query. How does an SEO target a follow-up question? Maybe, instead of targeting the main high-traffic query, it may make sense to target the follow-up queries with evergreen content.

Evergreen Content Publishing Still Has Life

There is another side to this story, and it’s about user demand. Foundational questions stick around for a long time. People will always search “how to tie a bowtie” or “how to set up WordPress.” Many users prefer the stability of an established guide that has been reviewed and updated by a trusted brand. It’s not about being a brand; it’s about being the kind of site that is trusted, well-liked, and recommended.

A strong resource can become the canonical source for a topic, ranking for years and generating the kind of user behavior signals that reinforce its authority and signal the quality of being trusted.

Trend-driven content, by contrast, often delivers only a brief spike before fading. A newsroom model is difficult to maintain because it requires constant work to be first and be the best.

The Third Way: Do It All

The choice between producing evergreen content and trending topics doesn’t have to be binary; there’s a third option where you can do it all. Evergreen and trending topics can complement each other because each side provides opportunities for driving traffic to the other. Fresh, trend-driven content can link back to the evergreen, and this can be reversed to send readers to fresh content from the evergreen.

Trend-driven content sometimes becomes evergreen itself. But in general, creating evergreen content requires deep planning, quality execution, and marketing. Somebody’s going to get the click from evergreen content; it might as well be you.

Featured Image by Shutterstock/Stokkete

From SEO To GEO: How Can Marketers Adapt To The New Era Of Search Visibility? via @sejournal, @Semji_fr

This post was sponsored by Semji. The opinions expressed in this article are the sponsor’s own.

For three decades, SEO has been the cornerstone of digital visibility.

Keywords, backlinks, and technical optimization determined whether your brand appeared at the top of search results.

However, the landscape is shifting, and it’s likely that if you’re reading this article, you already know it.

With generative AI tools like ChatGPT, Google AI Overviews, Gemini, or Perplexity, users no longer rely solely on lists of blue links.

Instead, searchers and researchers receive synthesized, conversational answers that draw content from high-authority sources.

The message is clear: ranking alone is no longer enough.

To be visible in the age of AI, marketers need a complementary discipline: Generative Engine Optimization (GEO).

To do so, you need concrete methods and best practices to add GEO efficiently into your strategy.

What Is Generative Engine Optimization (GEO)?

Generative Engine Optimization (GEO) is the practice of ensuring that your content is selected, understood, and cited by large language models (LLMs) and generative engines.

How Does GEO Differ From Traditional SEO?

Traditional search engines use bots to crawl webpages and rank them.

LLMs synthesize patterns from massive pre-ingested datasets. Rather than indexing pages, LLMs and answer engines draw on those datasets as the raw material for their conversational answers.

What Is A Pre-Ingested Data Set?

Pre-ingested datasets are content that is pulled from websites, reviews, directories, forums, and even brand-owned assets.

This means your visibility no longer depends only on keywords.

What Do I Need To Do To Show Up In AI Overviews & SERPs?

To increase your visibility in LLMs, your content must be structured, trustworthy, and semantically rich.

Put simply: GEO ensures your brand shows up in the answers themselves as well as in the links beneath them.

How To Optimize For LLMs In GEO

Optimizing for LLMs is about aligning with how these systems select and reuse content.

From our analysis, three core principles stand out in consistently GEO-friendly content:

1. Provide Structure & Clarity

Generative models prioritize content that is well-organized and easy to parse. Clear headings, bullet points, tables, and summaries help engines extract information and recompose it into human-like answers.

2. Include Trust & Reliability Signals

LLMs reward factual accuracy, consistency, and transparency. Contradictions between your site, profiles, and third-party sources weaken credibility. Conversely, quoting sources, citing data, and showcasing expertise increase your chances of being cited!

3. Contextual & Semantic Depth Are Key

Engines rely less on keywords and more on contextual signals (as has increasingly been the case with Google in recent years; think BERT). Content enriched with synonyms, related terms, and variations is more flexible and better aligned with diverse queries, which is especially important because AI queries are conversational, not just transactional.

3 Tips For Creating GEO-Friendly Content

The GEO guide shared in this article delivers 15 tips; here are three of the most important ones:

1. Be Comprehensive & Intent-Driven

LLMs favor complete answers.

Cover not just the main query but related terms, variations, and natural follow-ups.

For example, if writing about “content ROI,” anticipate adjacent questions like “How do you measure ROI in SEO?” or “What KPIs prove content ROI?”

By aligning with user intent, not just keywords, you increase the likelihood of your content being surfaced as the “best available answer” for LLMs.


2. Showcase E-E-A-T Signals

GEO is inseparable from trust. Engines look for identifiable signals of credibility:

  • Author bylines with expertise.
  • Real-world examples, roles, or case insights.
  • Transparent sourcing of statistics and references.
  • And many more opportunities to prove your credibility and authority.

Think of it as content that doesn’t just “read well” but feels safe for LLMs to reuse.

3. Optimize Format For Machine & Human Readability

Beyond clarity, formats like FAQs, how-tos, comparisons, and lists make your content both user-friendly and machine-friendly. Many SEO techniques are just as powerful and efficient in GEO:

  • Add alt text for visuals.
  • Include summaries and key takeaways in long-form content.
  • Use structured data and schema where relevant.

This dual optimization increases both discoverability and reusability in AI-generated answers.
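The structured-data tip above can be sketched in code. The snippet below is a minimal, hypothetical example that emits schema.org FAQPage JSON-LD for embedding in a page; the question and answer text are placeholders, not content from this article, and real markup should be validated against Google’s structured-data guidelines.

```python
import json

def faq_schema(pairs):
    """Build a schema.org FAQPage JSON-LD string from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

# Hypothetical placeholder content for illustration only.
markup = faq_schema([
    ("What is GEO?",
     "Generative Engine Optimization is the practice of making content "
     "easy for LLMs to select, understand, and cite."),
])
print(markup)  # paste into a <script type="application/ld+json"> tag
```

The same question-and-answer structure that helps a human skim the page is what makes the markup trivial to generate, which is the point of dual optimization.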

Why It’s Essential To Optimize For LLMs

Skeptical about GEO? Consider this: 74% of problem-solving searches now surface AI-generated responses, and AI Overviews already appear in more than 1 in 10 Google queries in the U.S. AI Overviews, Perplexity summaries, and Gemini snapshots are becoming default behaviors in information-seeking. The line between “search” and “chat” is blurring.

The risk of ignoring GEO is not just lower traffic—it’s invisibility in the answer layer where trust and decisions are increasingly formed.

By contrast, marketers who embrace GEO can:

  • Defend brand presence where AI engines consolidate attention.
  • Create future-forward SEO strategies as search continues to evolve.
  • Maximize ROI by aligning content with both human expectations and machine logic.

In other words, GEO is not a trend: it’s a structural shift in digital visibility, where SEO remains essential but is no longer sufficient. GEO adds the missing layer: being cited, trusted, and reused by the engines that increasingly mediate how users access information.

GEO As A New Competitive Advantage

The age of GEO is here. For marketing and SEO leaders, the opportunity is to adapt faster than competitors—aligning content with the standards of generative search while continuing to refine SEO.

To win visibility in this environment, prioritize:

  • Auditing your current content for GEO readiness.
  • Enhancing clarity, trust signals, and semantic richness.
  • Monitoring your presence in AI Overviews, ChatGPT, and other generative engines.

Those who invest in GEO today will shape how tomorrow’s answers are written.

Want to explore the full framework of GEO?


Image Credits

Featured Image: Image by Semji. Used with permission.

AI models are using material from retracted scientific papers

Some AI chatbots rely on flawed research from retracted scientific papers to answer questions, according to recent studies. The findings, confirmed by MIT Technology Review, raise questions about how reliable AI tools are at evaluating scientific research and could complicate efforts by countries and industries seeking to invest in AI tools for scientists.

AI search tools and chatbots are already known to fabricate links and references. But answers based on the material from actual papers can mislead as well if those papers have been retracted. The chatbot is “using a real paper, real material, to tell you something,” says Weikuan Gu, a medical researcher at the University of Tennessee in Memphis and an author of one of the recent studies. But, he says, if people only look at the content of the answer and do not click through to the paper and see that it’s been retracted, that’s really a problem. 

Gu and his team asked OpenAI’s ChatGPT, running on the GPT-4o model, questions based on information from 21 retracted papers about medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three. While it cited non-retracted papers for other questions, the authors note that it may not have recognized the retraction status of the articles. In a study from August, a different group of researchers used ChatGPT-4o mini to evaluate the quality of 217 retracted and low-quality papers from different scientific fields; they found that none of the chatbot’s responses mentioned retractions or other concerns. (No similar studies have been released on GPT-5, which came out in August.)

The public uses AI chatbots to ask for medical advice and diagnose health conditions. Students and scientists increasingly use science-focused AI tools to review existing scientific literature and summarize papers. That kind of usage is likely to increase. The US National Science Foundation, for instance, invested $75 million in building AI models for science research this August.

“If [a tool is] facing the general public, then using retraction as a kind of quality indicator is very important,” says Yuanxi Fu, an information science researcher at the University of Illinois Urbana-Champaign. There’s “kind of an agreement that retracted papers have been struck off the record of science,” she says, “and the people who are outside of science—they should be warned that these are retracted papers.” OpenAI did not provide a response to a request for comment about the paper results.

The problem is not limited to ChatGPT. In June, MIT Technology Review tested AI tools specifically advertised for research work, such as Elicit, Ai2 ScholarQA (now part of the Allen Institute for Artificial Intelligence’s Asta tool), Perplexity, and Consensus, using questions based on the 21 retracted papers in Gu’s study. Elicit referenced five of the retracted papers in its answers, while Ai2 ScholarQA referenced 17, Perplexity 11, and Consensus 18—all without noting the retractions.

Some companies have since made moves to correct the issue. “Until recently, we didn’t have great retraction data in our search engine,” says Christian Salem, cofounder of Consensus. His company has now started using retraction data from a combination of sources, including publishers and data aggregators, independent web crawling, and Retraction Watch, which manually curates and maintains a database of retractions. In a test of the same papers in August, Consensus cited only five retracted papers. 

Elicit told MIT Technology Review that it removes retracted papers flagged by the scholarly research catalogue OpenAlex from its database and is “still working on aggregating sources of retractions.” Ai2 told us that its tool does not automatically detect or remove retracted papers currently. Perplexity said that it “[does] not ever claim to be 100% accurate.” 

However, relying on retraction databases may not be enough. Ivan Oransky, the cofounder of Retraction Watch, is careful not to describe it as a comprehensive database, saying that creating one would require more resources than anyone has: “The reason it’s resource intensive is because someone has to do it all by hand if you want it to be accurate.”

Further complicating the matter is that publishers don’t share a uniform approach to retraction notices. “Where things are retracted, they can be marked as such in very different ways,” says Caitlin Bakker from University of Regina, Canada, an expert in research and discovery tools. “Correction,” “expression of concern,” “erratum,” and “retracted” are among some labels publishers may add to research papers—and these labels can be added for many reasons, including concerns about the content, methodology, and data or the presence of conflicts of interest. 

Some researchers distribute their papers on preprint servers, paper repositories, and other websites, causing copies to be scattered around the web. Moreover, the data used to train AI models may not be up to date. If a paper is retracted after the model’s training cutoff date, its responses might not instantaneously reflect what’s going on, says Fu. Most academic search engines don’t do a real-time check against retraction data, so you are at the mercy of how accurate their corpus is, says Aaron Tay, a librarian at Singapore Management University.
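The kind of real-time check Tay describes could, in principle, look like the sketch below, which asks the Crossref REST API for update notices attached to a DOI and scans them for a retraction. This is a minimal sketch, not how any of the tools in this article work: the `updates` filter and `update-to` field are Crossref conventions as I understand them, their coverage is limited, and a production tool would also need publisher and Retraction Watch data, for the reasons the article gives.

```python
import json
import urllib.request

CROSSREF = "https://api.crossref.org/works"

def update_notices(doi):
    """Fetch works that declare themselves updates (e.g. retractions) of `doi`."""
    url = f"{CROSSREF}?filter=updates:{doi}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["message"]["items"]

def is_retracted(items):
    """True if any update notice in a Crossref item list is typed as a retraction."""
    return any(
        update.get("type") == "retraction"
        for item in items
        for update in item.get("update-to", [])
    )

# Usage (network call): is_retracted(update_notices("10.1234/example-doi"))
```

Even a check like this only catches retractions the registry knows about, which is exactly the coverage problem Oransky and Bakker describe.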

Oransky and other experts advocate making more context available for models to use when creating a response. This could mean publishing information that already exists, like peer reviews commissioned by journals and critiques from the review site PubPeer, alongside the published paper.  

Many publishers, such as Nature and the BMJ, publish retraction notices as separate articles linked to the paper, outside paywalls. Fu says companies need to effectively make use of such information, as well as any news articles in a model’s training data that mention a paper’s retraction. 

The users and creators of AI tools need to do their due diligence. “We are at the very, very early stages, and essentially you have to be skeptical,” says Tay.

Ananya is a freelance science and technology journalist based in Bengaluru, India.

The Download: AI’s retracted papers problem

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI models are using material from retracted scientific papers

The news: Some AI chatbots rely on flawed research from retracted scientific papers to answer questions, according to recent studies. In one such study, researchers asked OpenAI’s ChatGPT questions based on information from 21 retracted papers on medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three. 

The bigger picture: The findings raise serious questions about how reliable AI tools are at evaluating scientific research, or answering people’s health queries. They could also complicate efforts to invest in AI tools for scientists. And it’s not an easy problem to fix. Read the full story.

—Ananya

Join us at 1pm ET today to meet our Innovator of the Year

Every year, MIT Technology Review awards Innovator of the Year to someone whose work we admire. This year we selected Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method.

Her work could transform medical care by allowing physicians to sequence a patient’s genome and diagnose genetic conditions in less than eight hours.

Register here to join an exclusive subscriber-only Roundtable conversation with Goenka, Leilani Battle, assistant professor at the University of Washington, and our editor in chief Mat Honan at 1pm ET today. 

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 There’s scant evidence Tylenol use during pregnancy causes autism
The biggest cause of autism is genetic—that’s why it often runs in families. (Scientific American $)
+ Anti-vaxxers are furious the White House didn’t link autism to vaccines. (Ars Technica)
+ The company that sells Tylenol is being forced to defend the medicine’s safety. (Axios)

2 Nvidia is investing up to $100 billion in OpenAI
OpenAI is already a major customer, but this will bind the two even more closely together. (Reuters $)
+ America’s top companies keep talking about AI—but they can’t explain its upsides. (FT $)

3 Denmark’s biggest airport was shut down by drones
Its prime minister refused to rule out Russian involvement. (FT $)
+ Poland and Estonia have been speaking up at the UN about Russian incursions into their airspace. (The Guardian)

4 Google is facing another antitrust trial in the US
This one will focus on remedies to its dominance of the advertising tech market. (Ars Technica)
+ The FTC is also taking Amazon to court over accusations the company tricks people into paying for Prime. (NPR)
+ The Supreme Court has ruled to allow Trump’s firing of a Democratic FTC commissioner. (NYT $)

5 Here’s the potential impact of Trump’s H-1B crackdown on tech
It’s likely to push a lot of skilled workers elsewhere. (Rest of World)

6 How TikTok’s deal to stay in the US will work
Oracle will manage its algorithm for US users and oversee security operations. (ABC)
+ It’s a giant prize for Trump’s friend Larry Ellison, Oracle’s cofounder. (NYT $)
+ Trump and his allies are now likely to exert a lot of political influence over TikTok. (WP $)

7 Record labels are escalating their lawsuit against an AI music startup
They claim it knowingly pirated songs from YouTube to train its generative AI models. (The Verge $)
+ AI is coming for music, too. (MIT Technology Review)

8 There’s a big fight in the US over who pays for weight loss drugs
Although they’ll save insurers money long-term, they cost a lot upfront. (WP $)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

9 How a lone vigilante ended up blowing up 5G towers
A little bit of knowledge can be a dangerous thing. (Wired $)

10 The moon is rusting 🌕
And it’s our fault. Awkward! (Nature)

Quote of the day

“At the heart of this is people trying to look for simple answers to complex problems.”

—James Cusack, chief executive of an autism charity called Autistica, tells Nature what he thinks is driving Trump and others to incorrectly link the condition with Tylenol use during pregnancy. 

One more thing

A mobility walker sinking in an hourglass.

SARAH ROGERS / MITTR | PHOTOS GETTY

Maybe you will be able to live past 122

How long can humans live? This is a good time to ask the question. The longevity scene is having a moment, and a few key areas of research suggest that we might be able to push human life spans further, and potentially reverse at least some signs of aging.

Researchers can’t even agree on what the exact mechanisms of aging are and which they should be targeting. Debates continue to rage over how long it’s possible for humans to live—and whether there is a limit at all.

But it looks likely that something will be developed in the coming decades that will help us live longer, in better health. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This website lets you send a letter to your future self. 
+ Here’s what Brian Eno has to say about art.
+ This photographer takes stunning pictures of Greenland. 
+ The Hungarian dish Rakott krumpli isn’t going to win any health plaudits, but it looks very comforting all the same.

Roundtables: Meet the 2025 Innovator of the Year

Every year, MIT Technology Review selects one individual whose work we admire to recognize as Innovator of the Year. For 2025, we chose Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method. Thanks to her work, physicians can now sequence a patient’s genome and diagnose a genetic condition in less than eight hours—an achievement that could transform medical care.

Speakers: Sneha Goenka, Innovator of the Year; Leilani Battle, University of Washington; and Mat Honan, editor in chief

Recorded on September 23, 2025

Related Coverage: