Ahrefs’ Tim Soulo recently posted that AI is making publishing evergreen content obsolete and no longer worth the investment because AI summaries leave fewer clicks for publishers. He posits that it may be more profitable to focus on trending topics, calling it Fast SEO. Is publishing evergreen content no longer a viable content strategy?
The Reason For Evergreen Content
Evergreen content covers basic topics that generally don't change much from year to year. For example, the answer to how to change a tire will generally always be the same.
The promise of evergreen content was that it represents a steady source of traffic. Once a web page ranks for an evergreen topic, publishers mostly just have to keep it updated if anything about the topic changes.
Does AI Break The Evergreen Content Promise?
Tim Soulo is suggesting that evergreen content, which can often be answered with a summary, is less likely to earn a click because AI summarizes the answer and satisfies the user, who may never need to visit a website.
“The era of “evergreen SEO content” is over. We’re entering the era of “fast SEO.”
There’s little point in writing yet another “Ultimate Guide To ___.” Most evergreen topics have already been covered to death and turned into common knowledge. Google is therefore happy to give an AI answer, and searchers are fine with that.
Instead, the real opportunity lies in spotting and covering new trends — or even setting them yourself.”
Is Fast SEO The Future Of Publishing?
Fast SEO is another way of describing trending topics. Trending topics have always been around; it’s why Google invented the freshness algorithm, to satisfy users with up-to-date content when a “query deserves freshness.”
Soulo’s idea is that trending topics are not the kind of content that AI summarizes. Perplexity is the exception; it has an entire content discovery section called Perplexity Discover that’s dedicated to showing trending news articles.
Fast SEO is about spotting and seizing short-lived content opportunities. These can be new developments, shifts in the industry or perceptions, or cultural moments.
His tweet captures the current feeling within the SEO and publishing communities that AI is the reason for diminishing traffic from Google.
The Evergreen Content Situation Is Worse Than Imagined
A technical issue that Soulo didn’t mention but is relevant here is that it’s challenging to create an “Ultimate Guide To X, Y, Z” or the “Definitive Guide To Bla, Bla, Bla” and expect it to be fresh and different from what is already published.
The barrier to entry for evergreen content is higher now than it’s ever been for several reasons:
There are more people publishing content.
People are consuming multiple forms of content (text, audio, and video).
Search algorithms are focused on quality, which shuts out those who focus harder on SEO than they do on people.
User behavior signals are more reliable than traditional link signals, and SEOs still haven’t caught on to this, making it harder to rank.
Query Fan-Out is causing a huge disruption in SEO.
Why Query Fan-Out Is A Disruption
Evergreen content is an uphill struggle, compounded by the seeming inevitability that AI will summarize it and, because of Query Fan-Out, may send the click to another website, one that is cited because it answers a follow-up question to the initial query.
Query Fan-Out displays answers to the initial query and to follow-up questions to it. If the user is satisfied with the summary of the initial answer, they may become interested in one of the follow-up questions, and the site cited for that follow-up gets the click, not the site targeting the initial query.
This completely changes what it means to target a search query. How does an SEO target a follow-up question? Instead of targeting the main high-traffic query, it may make sense to target the follow-up queries with evergreen content.
Evergreen Content Publishing Still Has Life
There is another side to this story, and it’s about user demand. Foundational questions stick around for a long time. People will always search “how to tie a bowtie” or “how to set up WordPress.” Many users prefer the stability of an established guide that has been reviewed and updated by a trusted brand. It’s not about being a brand; it’s about being the kind of site that is trusted, well-liked, and recommended.
A strong resource can become the canonical source for a topic, ranking for years and generating the kind of user behavior signals that reinforce its authority and signal the quality of being trusted.
Trend-driven content, by contrast, often delivers only a brief spike before fading. A newsroom model is difficult to maintain because it requires constant work to be first and be the best.
The Third Way: Do It All
The choice between producing evergreen content and trending topics doesn’t have to be binary; there’s a third option where you can do it all. Evergreen and trending topics can complement each other because each side provides opportunities for driving traffic to the other. Fresh, trend-driven content can link back to the evergreen, and this can be reversed to send readers to fresh content from the evergreen.
Trend-driven content sometimes becomes evergreen itself. But in general, creating evergreen content requires deep planning, quality execution, and marketing. Somebody's going to get the click from evergreen content; it might as well be you.
Traditional search engines use bots to crawl webpages and rank them.
LLMs synthesize patterns from massive pre-ingested datasets. LLMs and answer engines don't index pages the way crawlers do; they draw on that ingested content as material for their conversational answers.
What Is A Pre-Ingested Data Set?
Pre-ingested datasets are content that is pulled from websites, reviews, directories, forums, and even brand-owned assets.
This means your visibility no longer depends only on keywords.
What Do I Need To Do To Show Up In AI Overviews & SERPs?
To increase your visibility in LLMs, your content must be:
Put simply: GEO ensures your brand shows up in the answers themselves as well as in the links beneath them.
How To Optimize For LLMs In GEO
Optimizing for LLMs is about aligning with how these systems select and reuse content.
From our analysis, three core principles stand out in consistently GEO-friendly content:
1. Provide Structure & Clarity
Generative models prioritize content that is well-organized and easy to parse. Clear headings, bullet points, tables, summaries… help engines extract information and recompose it into human-like answers.
2. Include Trust & Reliability Signals
LLMs reward factual accuracy, consistency, and transparency. Contradictions between your site, profiles, and third-party sources weaken credibility. Conversely, quoting sources, citing data, and showcasing expertise increase your chances of being cited!
3. Contextual & Semantic Depth Are Key
Engines rely less on keywords and more on contextual signals (as has increasingly been the case with Google in recent years; hello BERT, haven't heard from you in a while!). Content enriched with synonyms, related terms, and variations is more flexible and better aligned with diverse queries, which is especially important because AI queries are conversational, not just transactional.
3 Tips For Creating GEO-Friendly Content
The GEO guide we're sharing in this article delivers 15 tips; here are three of the most important ones:
1. Cover not just the main query but related terms, variations, and natural follow-ups
For example, if writing about "content ROI," anticipate adjacent questions like "How do you measure ROI in SEO?" or "What KPIs prove content ROI?"
By aligning with user intent, not just keywords, you increase the likelihood of your content being surfaced as the "best available answer" by the LLMs.
2. Prove your credibility and authority
Quote sources, cite data, showcase your expertise, and look for the many other opportunities to prove your credibility and authority. Think of it as content that doesn't just "read well," but feels safe for LLMs to reuse.
3. Optimize format for machine & human readability
Beyond clarity, formats like FAQs, how-tos, comparisons, and lists make your content both user-friendly and machine-friendly. Many SEO techniques are just as powerful and efficient in GEO:
Add alt text for visuals.
Include summaries and key takeaways in long-form content.
Use structured data and schema where relevant (a minimal sketch follows this list).
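As a minimal sketch of that last point, here is one way to generate FAQPage structured data for a page. The questions, answers, and usage are placeholders, and schema.org offers many other types (Product, HowTo, Article) that may fit your content better.

```python
import json

def build_faq_schema(faqs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }

# Placeholder questions and answers; replace with your own content.
faqs = [
    ("What is GEO?", "Generative engine optimization: making content easy for AI engines to cite and reuse."),
    ("How is GEO different from SEO?", "SEO targets rankings; GEO targets being cited in AI-generated answers."),
]

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(build_faq_schema(faqs), indent=2))
```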
This dual optimization increases both discoverability and reusability in AI-generated answers.
The risk of ignoring GEO is not just lower traffic—it’s invisibility in the answer layer where trust and decisions are increasingly formed.
By contrast, marketers who embrace GEO can:
Defend brand presence where AI engines consolidate attention.
Create future-forward SEO strategies as search continues to evolve.
Maximize ROI by aligning content with both human expectations and machine logic.
In other words, GEO is not a trend: it’s a structural shift in digital visibility, where SEO remains essential but is no longer sufficient. GEO adds the missing layer: being cited, trusted, and reused by the engines that increasingly mediate how users access information.
GEO As A New Competitive Advantage
The age of GEO is here. For marketing and SEO leaders, the opportunity is to adapt faster than competitors—aligning content with the standards of generative search while continuing to refine SEO.
To win visibility in this environment, prioritize:
Auditing your current content for GEO readiness.
Enhancing clarity, trust signals, and semantic richness.
Monitoring your presence in AI Overviews, ChatGPT, and other generative engines.
Those who invest in GEO today will shape how tomorrow’s answers are written.
Some AI chatbots rely on flawed research from retracted scientific papers to answer questions, according to recent studies. The findings, confirmed by MIT Technology Review, raise questions about how reliable AI tools are at evaluating scientific research and could complicate efforts by countries and industries seeking to invest in AI tools for scientists.
AI search tools and chatbots are already known to fabricate links and references. But answers based on the material from actual papers can mislead as well if those papers have been retracted. The chatbot is “using a real paper, real material, to tell you something,” says Weikuan Gu, a medical researcher at the University of Tennessee in Memphis and an author of one of the recent studies. But, he says, if people only look at the content of the answer and do not click through to the paper and see that it’s been retracted, that’s really a problem.
Gu and his team asked OpenAI’s ChatGPT, running on the GPT-4o model, questions based on information from 21 retracted papers about medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three. While it cited non-retracted papers for other questions, the authors note that it may not have recognized the retraction status of the articles. In a study from August, a different group of researchers used ChatGPT-4o mini to evaluate the quality of 217 retracted and low-quality papers from different scientific fields; they found that none of the chatbot’s responses mentioned retractions or other concerns. (No similar studies have been released on GPT-5, which came out in August.)
The public uses AI chatbots to ask for medical advice and diagnose health conditions. Students and scientists increasingly use science-focused AI tools to review existing scientific literature and summarize papers. That kind of usage is likely to increase. The US National Science Foundation, for instance, invested $75 million in building AI models for science research this August.
“If [a tool is] facing the general public, then using retraction as a kind of quality indicator is very important,” says Yuanxi Fu, an information science researcher at the University of Illinois Urbana-Champaign. There’s “kind of an agreement that retracted papers have been struck off the record of science,” she says, “and the people who are outside of science—they should be warned that these are retracted papers.” OpenAI did not provide a response to a request for comment about the paper results.
The problem is not limited to ChatGPT. In June, MIT Technology Review tested AI tools specifically advertised for research work, such as Elicit, Ai2 ScholarQA (now part of the Allen Institute for Artificial Intelligence’s Asta tool), Perplexity, and Consensus, using questions based on the 21 retracted papers in Gu’s study. Elicit referenced five of the retracted papers in its answers, while Ai2 ScholarQA referenced 17, Perplexity 11, and Consensus 18—all without noting the retractions.
Some companies have since made moves to correct the issue. “Until recently, we didn’t have great retraction data in our search engine,” says Christian Salem, cofounder of Consensus. His company has now started using retraction data from a combination of sources, including publishers and data aggregators, independent web crawling, and Retraction Watch, which manually curates and maintains a database of retractions. In a test of the same papers in August, Consensus cited only five retracted papers.
Elicit told MIT Technology Review that it removes retracted papers flagged by the scholarly research catalogue OpenAlex from its database and is “still working on aggregating sources of retractions.” Ai2 told us that its tool does not automatically detect or remove retracted papers currently. Perplexity said that it “[does] not ever claim to be 100% accurate.”
However, relying on retraction databases may not be enough. Ivan Oransky, the cofounder of Retraction Watch, is careful not to describe it as a comprehensive database, saying that creating one would require more resources than anyone has: “The reason it’s resource intensive is because someone has to do it all by hand if you want it to be accurate.”
Further complicating the matter is that publishers don’t share a uniform approach to retraction notices. “Where things are retracted, they can be marked as such in very different ways,” says Caitlin Bakker from University of Regina, Canada, an expert in research and discovery tools. “Correction,” “expression of concern,” “erratum,” and “retracted” are among some labels publishers may add to research papers—and these labels can be added for many reasons, including concerns about the content, methodology, and data or the presence of conflicts of interest.
Some researchers distribute their papers on preprint servers, paper repositories, and other websites, causing copies to be scattered around the web. Moreover, the data used to train AI models may not be up to date. If a paper is retracted after the model’s training cutoff date, its responses might not instantaneously reflect what’s going on, says Fu. Most academic search engines don’t do a real-time check against retraction data, so you are at the mercy of how accurate their corpus is, says Aaron Tay, a librarian at Singapore Management University.
Oransky and other experts advocate making more context available for models to use when creating a response. This could mean publishing information that already exists, like peer reviews commissioned by journals and critiques from the review site PubPeer, alongside the published paper.
Many publishers, such as Nature and the BMJ, publish retraction notices as separate articles linked to the paper, outside paywalls. Fu says companies need to effectively make use of such information, as well as any news articles in a model’s training data that mention a paper’s retraction.
The users and creators of AI tools need to do their due diligence. “We are at the very, very early stages, and essentially you have to be skeptical,” says Tay.
Ananya is a freelance science and technology journalist based in Bengaluru, India.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
AI models are using material from retracted scientific papers
The news: Some AI chatbots rely on flawed research from retracted scientific papers to answer questions, according to recent studies. In one such study, researchers asked OpenAI’s ChatGPT questions based on information from 21 retracted papers on medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three.
The bigger picture: The findings raise serious questions about how reliable AI tools are at evaluating scientific research, or answering people’s health queries. They could also complicate efforts to invest in AI tools for scientists. And it’s not an easy problem to fix. Read the full story.
—Ananya
Join us at 1pm ET today to meet our Innovator of the Year
Every year, MIT Technology Review awards Innovator of the Year to someone whose work we admire. This year we selected Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method.
Her work could transform medical care by allowing physicians to sequence a patient’s genome and diagnose genetic conditions in less than eight hours.
Register here to join an exclusive subscriber-only Roundtable conversation with Goenka, Leilani Battle, assistant professor at the University of Washington, and our editor in chief Mat Honan at 1pm ET today.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 There’s scant evidence Tylenol use during pregnancy causes autism
The biggest cause of autism is genetic—that’s why it often runs in families. (Scientific American $)
+ Anti-vaxxers are furious the White House didn’t link autism to vaccines. (Ars Technica)
+ The company that sells Tylenol is being forced to defend the medicine’s safety. (Axios)
2 Nvidia is investing up to $100 billion in OpenAI
OpenAI is already a major customer, but this will bind the two even more closely together. (Reuters $)
+ America’s top companies keep talking about AI—but they can’t explain its upsides. (FT $)
3 Denmark’s biggest airport was shut down by drones
Its prime minister refused to rule out Russian involvement. (FT $)
+ Poland and Estonia have been speaking up at the UN about Russian incursions into their airspace. (The Guardian)
4 Google is facing another antitrust trial in the US
This one will focus on remedies to its dominance of the advertising tech market. (Ars Technica)
+ The FTC is also taking Amazon to court over accusations the company tricks people into paying for Prime. (NPR)
+ The Supreme Court has ruled to allow Trump’s firing of a Democrat FTC commissioner. (NYT $)
5 Here’s the potential impact of Trump’s H-1B crackdown on tech
It’s likely to push a lot of skilled workers elsewhere. (Rest of World)
6 How TikTok’s deal to stay in the US will work
Oracle will manage its algorithm for US users and oversee security operations. (ABC)
+ It’s a giant prize for Trump’s friend Larry Ellison, Oracle’s cofounder. (NYT $)
+ Trump and his allies are now likely to exert a lot of political influence over TikTok. (WP $)
7 Record labels are escalating their lawsuit against an AI music startup
They claim it knowingly pirated songs from YouTube to train its generative AI models. (The Verge $)
+ AI is coming for music, too. (MIT Technology Review)
8 There’s a big fight in the US over who pays for weight loss drugs
Although they’ll save insurers money long-term, they cost a lot upfront. (WP $)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)
9 How a lone vigilante ended up blowing up 5G towers
A little bit of knowledge can be a dangerous thing. (Wired $)
10 The moon is rusting
And it’s our fault. Awkward! (Nature)
Quote of the day
“At the heart of this is people trying to look for simple answers to complex problems.”
—James Cusack, chief executive of an autism charity called Autistica, tells Nature what he thinks is driving Trump and others to incorrectly link the condition with Tylenol use during pregnancy.
One more thing
SARAH ROGERS / MITTR | PHOTOS GETTY
Maybe you will be able to live past 122
How long can humans live? This is a good time to ask the question. The longevity scene is having a moment, and a few key areas of research suggest that we might be able to push human life spans further, and potentially reverse at least some signs of aging.
Researchers can’t even agree on what the exact mechanisms of aging are and which they should be targeting. Debates continue to rage over how long it’s possible for humans to live—and whether there is a limit at all.
But it looks likely that something will be developed in the coming decades that will help us live longer, in better health. Read the full story.
—Jessica Hamzelou
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ This website lets you send a letter to your future self.
+ Here’s what Brian Eno has to say about art.
+ This photographer takes stunning pictures of Greenland.
+ The Hungarian dish Rakott krumpli isn’t going to win any health plaudits, but it looks very comforting all the same.
Every year, MIT Technology Review selects one individual whose work we admire to recognize as Innovator of the Year. For 2025, we chose Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method. Thanks to her work, physicians can now sequence a patient’s genome and diagnose a genetic condition in less than eight hours—an achievement that could transform medical care.
Speakers: Sneha Goenka, Innovator of the Year; Leilani Battle, University of Washington; and Mat Honan, editor in chief
“Ask an Expert” is an occasional feature where we pose questions to seasoned ecommerce pros. For this installment, we’ve turned to Louis Camassa, the director of product at Rithum, a marketplace orchestration platform. He’s also a serial entrepreneur and an occasional contributor to Practical Ecommerce.
He addresses the essentials of generative engine optimization for ecommerce.
Practical Ecommerce: How can merchants optimize product visibility across ChatGPT, Perplexity, Gemini, and other generative AI platforms?
Louis Camassa: There’s no universal guide at present for product integration, but retailers can take proactive steps to prepare.
Louis Camassa
Begin by evaluating current genAI visibility. Merchants should search for their brand names to understand how the platforms present them. Experiment with shopper-like queries to observe how the systems rank and mention offerings in comparison to competitors. Consider searches such as “Find me running shoes with maximum cushioning for marathon training” or “What are the top-rated coffee makers that brew a single cup in under 2 minutes?”
Next, thoroughly review product info. Again, standardized genAI product formats do not yet exist, but companies with existing product feeds have a solid foundation. Ensure your feed contains key details such as dimensions, color options, materials, weight specifications, and intended applications.
ChatGPT, Perplexity, and Gemini have not yet opened the gates to share product data directly, but small-to-midsize merchants can get ahead by preparing now.
Generative AI platforms thrive on structured, accurate, real-time data.
Here are optimization tips:
1. Keep product data clean and consistent
• Unique IDs that never change
• Plain text titles and descriptions
2. Write for people, not just machines
• Short, specific titles (brand + product + key attribute)
• Natural language benefits in descriptions
3. Use structured attributes
• Brand, price, size, color, and material
• Group product variants with a shared ID (e.g., parent-child relationship; see the sketch after this list)
4. Optimize images
• Use a content delivery network
• Extra angles or lifestyle shots help
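To make the parent-child idea concrete, here is a minimal sketch of what a clean feed record might look like. The field names, values, and URLs are illustrative only, since, as noted above, there is no standardized genAI product format yet.

```python
import json

# Illustrative only: field names follow common product-feed conventions,
# not an official ChatGPT/Perplexity/Gemini schema (none exists yet).
parent_product = {
    "id": "SKU-1042",                      # unique ID that never changes
    "title": "Acme Aero running shoe, max cushioning",
    "description": "Lightweight trainer with maximum cushioning for marathon training.",
    "brand": "Acme",
    "material": "engineered mesh",
    "intended_use": "long-distance road running",
    "image_urls": ["https://cdn.example.com/aero/front.jpg",
                   "https://cdn.example.com/aero/side.jpg"],
}

variants = [
    {"id": "SKU-1042-BLK-10", "parent_id": "SKU-1042", "color": "black", "size": "10", "price": "129.00 USD"},
    {"id": "SKU-1042-BLU-10", "parent_id": "SKU-1042", "color": "blue", "size": "10", "price": "129.00 USD"},
]

# Group variants under the parent so an AI platform can treat them as one product.
feed_entry = {**parent_product, "variants": variants}
print(json.dumps(feed_entry, indent=2))
```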
A new Pew Research Center survey reveals a gap between people’s desire to know when AI is used in content and their confidence in being able to identify it.
Seventy-six percent say it’s extremely or very important to know whether pictures, videos, or text were made by AI or by people. Only 12% feel confident they could tell the difference themselves.
“Americans feel strongly that it’s important to be able to tell if pictures, videos or text were made by AI or by humans. Yet many don’t trust their own ability to spot AI-generated content.”
This confidence gap reflects a rising unease with AI.
Half of Americans believe that the increased presence of AI in daily life raises more concerns than excitement, while just 10% are more excited than worried.
What Pew Research Found
People Want More Control
About 60% of Americans want more control over AI in their lives, an increase from 55% last year.
They’re open to AI helping with daily tasks, but still want clarity on where AI ends and human involvement begins.
When People Accept vs. Reject AI
Most support the use of AI in data-intensive tasks, such as weather prediction, financial crime detection, fraud investigation, and drug development.
About two-thirds oppose AI in personal areas such as religious guidance and matchmaking.
Younger Audiences Are More Aware
Awareness of AI is highest among adults under 30, with 62% claiming they’ve heard a lot about it, compared to only 32% of those 65 and older.
But this awareness doesn’t lead to optimism. Younger adults are more likely than seniors to believe that AI will negatively impact creative thinking and the development of meaningful relationships.
Creativity Concerns
More Americans believe AI will negatively impact essential human skills.
Fifty-three percent think it will reduce creative thinking, and 50% feel it will hinder the ability to connect with others, with only a few expecting improvements.
This suggests labeling alone isn’t sufficient. Human input must also be evident in the work.
Why This Matters
People are generally not against AI, but they do want to know when AI is involved. Being open about AI use can help build trust.
Brands that go the transparent route might find themselves at an advantage in creating connections with their audience.
A new analysis from Search Atlas quantifies the interaction between proximity and reviews in local rankings.
Proximity drives visibility overall, while review signals become stronger differentiators in the highest positions.
This study examines 3,269 businesses across the food, health, law, and beauty sectors.
It shows that for positions 1–21, proximity influences 55% of decisions, while review count accounts for 19%. In the top ten, proximity’s influence decreases to 36%, but review count increases to 26%, with review keyword relevance reaching 22%.
Search Atlas writes:
Proximity is the top driver of local visibility.
The study also notes:
Proximity does not always dominate in elite positions.
What It Means
You’ll have a better chance of achieving top results by focusing on earning more reviews and naturally incorporating service-specific terms into reviews, rather than relying on your pin’s location on the map.
The report suggests that Google understands review text semantically. Using service-specific language in reviews can help your rankings for high-value queries.
How To Apply This
Think of proximity as your default setting. It’s fixed, so focus your attention on the inputs you can control.
When crafting your review requests, aim for natural, service-specific language. For instance, “best dentist for whitening” tends to work better than “great service.”
Also, ensure that your GBP name and profile details are aligned. The research shows that matching your business name to the search intent, such as “Downtown Dental Clinic” for someone searching “dentist near me,” can make a positive difference.
Sector Behavior
While the overall pattern remains consistent, shoppers can exhibit different behaviors across categories.
Per the report:
For Law, proximity tends to be the most important factor, with reviews playing a secondary role.
In Beauty, reputation signals are more influential. While proximity is still key, review volume and keywords are also important.
When it comes to Food, review content and profile relevance become especially valuable, particularly in crowded markets.
Health balances proximity with strong reviews and service alignment in reviews.
Looking Ahead
This study quantifies something practitioners have long suspected: proximity earns you a look, but review content helps you secure the top spot in the close contest.
If you can’t change your location, shape the language around it.
For more data on GBP ranking factors, see the full report.
Methods & Limits
The authors applied XGBoost to grid visibility, GBP metadata, website content, and reviews, achieving a global model that explains approximately 92–93% of the variance.
They emphasize that feature importance indicates correlation, not causation. Additionally, they warn that proximity might be overstated due to fixed grid collection and note that their results represent a snapshot in time.
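As a rough illustration of that modeling approach (not the study’s actual pipeline or data), a gradient-boosted model can be fit on local-ranking features and its feature importances inspected. The feature names and synthetic data below are assumptions for demonstration.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 3269  # same order of magnitude as the study's sample

# Synthetic stand-ins for the kinds of signals the study describes.
features = ["proximity_km", "review_count", "review_keyword_relevance", "profile_relevance"]
X = np.column_stack([
    rng.exponential(3.0, n),      # distance from searcher
    rng.poisson(80, n),           # number of reviews
    rng.random(n),                # share of reviews mentioning the service
    rng.random(n),                # GBP name/content match to the query
])
# Fake "visibility" target: closer, better-reviewed businesses score higher.
y = -0.5 * X[:, 0] + 0.02 * X[:, 1] + 2.0 * X[:, 2] + 1.5 * X[:, 3] + rng.normal(0, 0.5, n)

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# Feature importance reflects correlation with the target, not causation.
for name, score in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:28s} {score:.3f}")
```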
Use these insights as guidance, not a strict rulebook.
Here’s what I’m covering this week: How to get the most out of personas in your day-to-day work across SEO, content, and the broader org.
Because in the AI-search era, personas built from organic queries and prompts have value for every touchpoint: ad copy, sales scripts, support docs, product messaging.
They carry the unfiltered language of your audience (their fears, hesitations, and demands) straight into the hands of the teams shaping your funnel.
If you’re not operationalizing search-data-based personas across departments, you’re missing one of the few forms of market intelligence that scale across SEO, marketing, sales, and product.
Personas shouldn’t live stagnantly in a slide deck. I’ll show you how to make them pull their weight across the org.
Image Credit: Kevin Indig
Last week, I showed you how to create search personas based on data you already have available, along with how to use an LLM-ready persona card to extract custom insights.
But the best persona in the world doesn’t help if it collects dust in your Google Drive.
This week, I’m digging into how to make these search persona insights actionable – not only across your SEO processes and production, but also across broader teams that SEO work touches.
However, before we dive in, I want to share a few notable perspectives on search personas that came up in conversation on this LinkedIn thread:
Malte Landwehr, CPO & CMO at Peec AI, gave this visual example in the thread (with additional context) that resonated strongly. From his own research and testing, he shared a visual detailing LLM visibility for various headphones based on prompts for personas and use cases.
The findings? LLMs recommended different brands/products based on different persona-based prompts.
Image Credit: Kevin Indig
And below, David Melamed brings up an interesting and important question.
Image Credit: Kevin Indig
I agree with David: The more personalized search results are, the less you can segment or generalize across a group.
He shares that “more long tail content and citations across more unique niches, scenarios and comparisons should beat out persona driven content” and that “looking at questions, related searches in search console, and Google and Microsoft ads search term reports… [along with] experience and other voice of customer research (listening to calls, analyzing reviews, reddit threads, complaints, etc..)” would be a helpful approach.
And that’s what I tackled last week in Personas are critical for AI search (part 1 on the persona topic): To succeed with user personas for SEO – and make them valuable and usable – the goal is to build custom, unique search personas from your actual in-house data and long-tail Google Search Console queries.
So, David brought up a valid point, one that’s aligned with how we should be building useful search personas for today.
Lastly, Elisa Daniela Montanari sums up how a lot of us feel about the shift toward qualitative research (along with mentioning her goals to upskill as an SEO by diving into user research tactics):
Image Credit: Kevin Indig
And with these conversations in mind…
I’d argue that high-quality, customer-centered SEO research captures unfiltered questions, pain points, and intents at scale, across the entire journey – and that makes it one of the most versatile forms of market intelligence that you can use across your brand as a whole.
So if organic query and prompt research is so valuable and versatile, how do you ensure those insights are actually used?
Because all strategists everywhere have had that stupidly challenging moment: After doing all the labor-intensive data-gathering of building user personas for SEO, it’s time to get your team or clients to use those insights regularly across SEO production.
You need to prep your findings so they’re not left gathering cobwebs in the dark corners of the cloud.
1. Create An Internal Knowledge Hub For Core Search Personas
Not another slide deck or spreadsheet that gathers dust. A simple, easily-accessible hub that is a living, breathing document.
Translate data into the formats your team and stakeholders already use: dashboards, one-page briefs, funnel visualizations.
Think Notion, Airtable, Asana, Google Sheets, Slack Canvas – wherever your team is already working and discussing production.
Key contributors need to have access to fluidly comment and update as organic questions and pain points surface across your audience.
2. Build A Clear Narrative Around How And Why Using These Personas Is Valuable
Position SEO research/persona use as a “horizontal competency” that makes every department smarter.
Kick off persona use with a short session showing:
Real queries from your personas.
How those queries reveal pain points, objections, or jobs-to-be-done.
Where competitors are (or aren’t) meeting those needs.
Inform the team on how users are interacting with AI-based search results (see Trust Still Lives in Blue Links for details on the four AIO intent patterns).
A three-minute Loom video can do wonders.
Use the data you have (Google Search Console, Semrush, Ahrefs, LLM prompt monitoring tools) to back up the importance of use.
At the end of this memo, I have a slide deck template for premium subscribers that will help you build this narrative and guide effective persona implementation across teams.
3. Train Contributors On How Personas Will Be Used Across Production – And Follow Through
Train your SEO/content contributors that personas don’t just shape blog posts – they inform all communication touchpoints in the customer journey.
If you’re also using search personas to inform your sales and customer care team interactions (and you should – more on that below), create examples of how to use personas across all communication channels.
Highlight missed opportunities (e.g., ad copy vs. organic messaging mismatch, customer support docs hidden from search, sales scripts that could benefit).
And although this means extra work for leaders, managers, or editors, this part is crucial: Let your team know that briefs that don’t specify personas will be rejected or sent back for revision. That also goes for drafts that don’t speak directly to defined personas and their search behaviors/needs.
Yes, it’s an added step on an often-already-overloaded plate of a marketer, but this is how you ensure they’re successfully implemented across your work over time.
Image Credit: Kevin Indig
Here’s where your personas stop being a strategy deck or training session and start shaping what users experience.
1. Incorporate Persona Data Into Every Content Brief
Your search persona data is there to help you direct every brief beyond target queries and the product or service features to mention.
Use it to inform your content producers of the following:
Unique, data-backed pain points.
Real customer/lead questions that need answering.
Proof points needed to reduce hesitation.
What authority signals resonate with your target reader.
Behaviors that impact interactions with the page.
Copy on the page.
In every content brief, flag actual language from queries, call transcripts, or reviews that should be used on the page. Create a copy bank that’s tagged into your content briefs that your writers, editors, and LLMs can pull from.
For example, if your persona says “integration headaches,” don’t water it down to “implementation challenges.” Use their words.
2. Use Search Persona Data To Inform Page Structure
Match the flow of the page to how specific personas are likely to consume information.
Some personas need trust-driven validation upfront (editorial quality signals, branded logos, stats, testimonials). Others need efficiency first, then a CTA.
Here’s a practical way to estimate what each of your search personas needs on the page:
Follow guidance (and use the regex) provided in Personas are critical for AI search to extract GSC long-tail queries that can contain indicators of specific search personas (a simplified filter is sketched after these steps).
Select a specific URL or page that comes up for multiple long-tails for a consistent search persona type.
Examine on-page user scrolling and clicking behavior via your heatmap tool.
Look for places users pause, scroll past, or toggle back and forth between information. Strong behavioral patterns (skips, hesitations, long-tread times) point to places to better optimize page structure based on search persona type.
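Here is a minimal sketch of that first extraction step, assuming a CSV export of GSC queries. The file name, column names, and the simple question-word pattern are placeholders, not the regex from the original memo.

```python
import csv
import re

# Placeholder pattern: long-tail, question-style queries that often signal a persona.
# Swap in the regex from the persona memo for real use.
QUESTION_PATTERN = re.compile(r"^(how|what|why|which|can|does|is|should)\b", re.IGNORECASE)

def long_tail_queries(gsc_export_path, min_words=5):
    """Yield (query, clicks, impressions) rows that look like long-tail questions."""
    with open(gsc_export_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            query = row["Query"].strip()
            if len(query.split()) >= min_words and QUESTION_PATTERN.match(query):
                yield query, int(row["Clicks"]), int(row["Impressions"])

# Example usage with a hypothetical export file:
# for query, clicks, impressions in long_tail_queries("gsc_queries.csv"):
#     print(query, clicks, impressions)
```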
Once you’re done gathering information based on user behavioral patterns, audit your on-page modules, formats, and design capabilities to ensure you have all pieces needed to create pages that fulfill those specific needs.
Enlist your product and/or web design team to create what’s needed to serve a better on-page experience.
Then, include direction in each brief of what sort of modules and information structuring is needed based on search persona type.
3. Map To Topic Clusters In The Brief
Specific search personas naturally gravitate toward certain topics or proof points.
A searcher who uses technical language for their queries may cluster around integrations and APIs and need to see clear documentation is available for how to use them, while a user with economic or decision-making intent may cluster around ROI topics.
4. Personas Should Inform Your AI-Assisted Workflows
Use search persona details as inputs to LLM prompts and/or incorporate them into your AI-assisted content generation, like AirOps workflows.
Instead of “write an article about X with the search intent of Y,” frame it as “write for a skeptical buyer evaluating vendors – include comparisons and third-party validation.”
Or better yet? Use your persona cards (see Personas Are Crucial for AI Search for a detailed guide) to help guide additional prompts personas might use in LLMs when attempting to solve queries related to your brand.
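As a small sketch of what that reframing can look like in an AI-assisted workflow, the snippet below turns a persona card into prompt instructions. The card fields and wording are hypothetical, not a prescribed format.

```python
# Hypothetical persona card fields; adapt to whatever your card template captures.
persona_card = {
    "name": "Skeptical buyer evaluating vendors",
    "pain_points": ["integration headaches", "hidden pricing"],
    "proof_needed": ["third-party validation", "side-by-side comparisons"],
    "language": ["integration headaches", "switching costs"],
}

def build_brief_prompt(persona, topic):
    """Frame an LLM content prompt around a persona instead of a bare keyword."""
    return (
        f"Write an article about {topic} for this reader: {persona['name']}.\n"
        f"Address these pain points directly: {', '.join(persona['pain_points'])}.\n"
        f"Include: {', '.join(persona['proof_needed'])}.\n"
        f"Use the reader's own words where natural: {', '.join(persona['language'])}."
    )

print(build_brief_prompt(persona_card, "choosing a business-insurance platform"))
```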
Below, take a look at how this could work in practice, using the four distinct AIO intent patterns from the additional analysis of the UX study of AIOs found in Trust Still Lives in Blue Links:
Efficiency-first validations that reward clean, extractable facts (accepting of AIOs).
Trust-driven validations that convert only with credibility (validate AIOs).
Comparative validations that use AIOs but compare with multiple sources.
Skeptical rejections that automatically distrust AIOs for high-stakes queries.
Let’s say you work for a fintech startup that provides easy-to-use business insurance for small to midsize businesses.
Here’s how you might use personas to inform content production for efficiency-first and trust-driven search behaviors:
Example 1: Junior operations coordinator at a 20-person marketing agency → accepting of AIOs (efficiency-first) → queries “What’s the average cost of business insurance for a 20-person company?” → Likely to validate range via the AIO → Takeaway for your brand: Create content geared to businesses with small teams and/or junior learners that includes straightforward facts and ranges that are easily extractable, so it’s cited in AIOs. Make your pricing explanations scannable and structured. Internally link to other knowledge guides for project managers or operations leads at small to midsize businesses.
Example 2: Small business owner in healthcare services → validate AIOs with second-clicks (trust-driven) → queries “Do I need business insurance for HIPAA compliance?” → Likely to read the AIO but won’t act until they see credible signals → citations from legal/insurance authorities → Takeaway for your brand: Position your content with authoritative references (link to .gov or .org sources) and highlight compliance expertise so your page is validated by trust; include case studies and/or social proof of authority; Internally link to other guides for healthcare service businesses.
How To Know Search Persona Implementation Is Working
Watch for these signals:
Higher engagement time and more downstream actions on the page.
Lower bounce rates on persona-driven pages.
More citations and visibility in AIOs and LLM outputs (your copy matches how users ask questions).
Increased assisted conversions: Pages designed for a specific persona show up more often in multi-touch journeys or are incorporated strategically and/or organically into follow-up communications by sales/customer teams.
Sales/Customer service team feedback loop: Fewer “this didn’t answer my question” moments.
Amanda jumping in here: This past March, I led one of my clients to pivot hard to persona-focused content. Not only have we seen an increase in AIO inclusion, AI Mode citations, and LLM visibility for these niche terms, but we’ve also experienced a boost in visits to our core guides that were geared toward our broader audience. After this pivot, we’re seeing anywhere between a 20-60% month-over-month increase in organic visits from ChatGPT, and a ~40% increase month-over-month in visible AIO inclusion, including our older core content as well. Although some of this growth is likely due to increased overall ChatGPT adoption and Google’s increased use of AIOs across queries, here’s the takeaway (and my hypothesis): As you create niche content for personas, it’s possible you could also see a lift in your core content as it’s served to these specific groups of searchers – based on what these tools know about (1) the end user and (2) who your brand serves best. But only time (and more experiments) will truly tell.
The reality is, no matter how well you implement search personas into your SEO and content production, SEO and growth marketing teams can’t win on their own.
Search personas have the real opportunity to contribute to results when the rest of the org picks them up and runs with them throughout lead and customer touchpoints.
The trick is to make it dead-simple for every team to see why personas matter for their work and how to apply them.
Plus, a big advantage of bringing other teams on board is that SEO-driven personas – built from real search queries, prompts, social chatter, and call transcripts – arm everyone with the exact language customers use.
That means you can reduce hesitations, preemptively answer questions, and build trust across every channel of communication.
Below, here’s a quick list of guidance to help you collaborate with other teams on how to use search persona data.
And in the next section, I’ll jump into how to create intentional feedback loops so your personas stay fresh, useful, and relevant.
Email Marketing
Work with email teams to trigger sequences based on persona signals (query intent by pages visited, topics visited).
Example: If someone hits three pricing-related pages, route them into a nurture path designed for a search-data-informed persona that includes supportive content often visited by those users.
Benefit: Aligns your SEO insights with lifecycle marketing, reducing drop-off between discovery and conversion.
Paid Media And Advertising
Lift search-persona informed language directly into ad copy → track if it increases CTR because you’re speaking the way customers search.
Map objections to creatives: For example, run ads that emphasize compliance and audits if you have search data illustrating a segment of users who have detailed questions about security of your software.
Test messaging by persona to learn faster which angles convert.
Benefit: SEO persona research de-risks your paid spend by validating copy before it goes live.
Social And Community
Translate persona pain points into campaign themes and engagement prompts.
Highlight UGC that shows peers solving the same persona pain point = social proof!
Build Reddit or forum campaigns where you provide helpful answers framed through persona lenses.
Benefit: Social teams stop guessing what will resonate – they get ready-made hooks from organic customer query data and in-house transcript research.
Sales
Use personas to shape sales scripts to reduce organic hesitations, along with your follow-up email templates.
Provide a list of key characteristics or organic phrases discovered in your SEO user persona research for sales to easily pick up on what scripts or content to use.
Equip reps with content “proof kits” (case studies, calculators, benchmarks) that map to persona objections.
Example: Lead comes in from organic content around “integration headaches.” Sales can immediately address hesitations with comparison docs + customer proof.
Benefit: SEO insights close the loop. Your leads feel heard because the same language follows them from organic query to sales call.
Customer Support
Build FAQs, hub pages, and documentation around persona pain points and natural language so customers can self-serve faster.
Train reps on marketing and educational language developed for personas to keep communication consistent across the lifecycle.
Feed recurring support questions back to SEO/content as new opportunities.
Benefit: Less friction for customers, more organic opportunities uncovered for SEO.
Product And/Or Product Marketing
Tie persona insights to feature positioning: “Which persona is this release for?”
Test messaging against persona objections to see what sticks before launch.
Document frameworks: “For Persona A, highlight speed. For Persona B, highlight compliance.”
Benefit: SEO personas become market intelligence, not just marketing intel. This helps product teams ship smarter. Unanswered questions or unsolved organic problems are great opportunities for new features.
One of the biggest pitfalls with doing the work to create search personas is then treating them like static, lifeless relics afterward.
A 2015 B2B study conducted by Cintell found that 71% of companies that exceeded revenue goals had documented personas – and nearly two-thirds of those orgs had updated them within the last six months.
(Listen, I am well aware 2015 is approximately 47 internet years ago – but I’d argue core human decision-making behavior takes much longer to change than a decade.)
No matter the study’s age, the message rings true today: Marketing and user personas win when they’re kept alive.
SEO personas make this easier than traditional personas because they’re rooted in fluid signals, like real search queries, prompts, and customer language that evolve as quickly as the market and trends do.
If you’re closely monitoring GSC data, Semrush, or AIO/LLM interactions, you’ll see shifts in questions and pain points before most competitors.
Image Credit: Kevin Indig
How to operationalize a persona freshness feedback loop across your team:
Employ direct communication channels: Create dedicated Slack channels, a shared CRM note hub, or monthly syncs where Sales, Customers, and Marketing can drop fresh objections, questions, or hesitations they’re hearing. If you’ve got power users or partners who can drop in routine feedback and thoughts, even better.
Develop a regular review cadence: Run a quarterly refresh of persona pain points, objections, and query patterns. Layer in branded search trends, referral data, and AIO/LLM interactions to validate updates.
Create an escalation path: Set up a clear process for when a “new pain point” surfaces. Sales hears it first → SEO/content teams get it next → new content or updates ship fast → implement/inform across marketing channels. How do you make room for organic escalations in your SEO/content production systems?
Do hesitation check-ins: Bi-weekly or monthly cross-team reviews (Support + Sales + SEO) where you identify the top organic customer/lead hesitations and assign assets to resolve them: case studies, how-to videos, tools and calculators, testimonials/reviews, community feedback on social channels.
Hold a regular retro: Tie shipped assets back to KPIs. Which persona-driven pages moved the needle? Which didn’t? Prune or upgrade pages that aren’t solving the problem.
The big takeaway here is search personas are never one-and-done.
They’re a dynamic, qualitative and quantitative data-based operating system for your marketing, sales, and product teams … and if you keep the feedback loop tight, they’ll keep paying dividends.
Featured Image: Paulo Bobita/Search Engine Journal
Google Ads in 2025 looks nothing like it did in 2019. What used to be a hands-on, keyword-driven platform is now powered by AI and machine learning. From bidding strategies and audience targeting to creative testing and budget allocation, automation runs through everything.
Automation brings a lot to the table: efficiency at scale, smarter bidding, faster launches, and less time spent tweaking settings. For busy advertisers or those managing multiple accounts, it is a game-changer.
But left unchecked, automation backfires. Hand over the keys without guardrails and you risk wasted spend, irrelevant placements, or campaigns chasing the wrong metrics. Automation can execute tasks, but it still lacks an understanding of client goals, market nuances, and broader strategy.
In this article, we’ll explore how to balance AI and human oversight. We’ll look at where automation shines, where it falls short, and how to design a hybrid setup that leverages both scale and strategic control.
Measurement First: Feeding The Machine The Right Signals
Automation learns from the conversions you feed it. When tracking is incomplete, Google fills the gaps with modeled conversions. These estimates are useful for directional reporting, but they do not always match the actual numbers in your customer relationship management (CRM).
Chart by author, September 2025
Conversion lag adds another wrinkle. Google attributes conversions to the click date, not the conversion date, which means lead generation accounts often look like they are underperforming mid-week, even though lagged conversions are still coming in and will be credited back to earlier clicks. Adding the “Conversions (by conversion time)” column alongside the standard “Conversions” column reveals that lag. You can also build a custom column to compare actual cost-per-acquisition (CPA) or return on ad spend (ROAS) against your targets. This makes it clear when Smart Bidding is constrained by overly strict settings rather than failing outright.
For CPA, use the formula (Cost / Conversions) – Target CPA. The result tells you how far above or below the goal the campaign is currently running. A positive number means you are running over target, often because Smart Bidding is being choked by strict efficiency settings. Smart Bidding may pull back volume and still fail to reach efficiency, or compromise by bringing in conversions above target. A negative number means you are under target, which suggests automation is performing well and may have room to scale.
For ROAS, use the formula (Conv. Value / Cost) – Target ROAS. A negative result shows Smart Bidding is under-delivering on efficiency and not meeting the target. A positive result means you are beating the target, a signal that the system is thriving.
For example, if your Target CPA is $50 and the custom column shows +12, your campaigns are running $12 above goal, typically because the bidding algorithm is adhering too closely to constraints put in by the advertiser. If it shows -8, you are beating the target by $8, which can mean that the system could scale further.
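Here is a quick sketch of the same arithmetic outside the Google Ads interface, useful for sanity-checking a custom column. The spend and conversion figures are made up.

```python
def cpa_delta(cost, conversions, target_cpa):
    """(Cost / Conversions) - Target CPA: positive = over target, negative = under."""
    return cost / conversions - target_cpa

def roas_delta(conv_value, cost, target_roas):
    """(Conv. Value / Cost) - Target ROAS: negative = under-delivering, positive = beating target."""
    return conv_value / cost - target_roas

# Made-up numbers: $6,200 spend, 100 conversions, $50 target CPA -> +12 (running $12 over goal).
print(round(cpa_delta(6200, 100, 50), 2))

# Made-up numbers: $4,000 conversion value on $1,000 spend vs. a 3.0 target ROAS -> +1.0 (beating target).
print(round(roas_delta(4000, 1000, 3.0), 2))
```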
To get real value from automation, connect it to business outcomes, not just clicks or form fills. Optimize toward revenue, profit margin, customer lifetime value, or qualified opportunities in your CRM. Train automation on shallow signals, and it will chase cheap conversions. Train it on metrics that matter to the business, and it will align more closely with growth goals.
Drawing Lanes For Automation
Automation performs best when campaigns have clear lanes. Mix brand and non-brand queries, or new and returning customers, and the system will almost always chase the easiest wins.
That is why human strategy still matters. Search campaigns should own high-intent queries where control of copy and bidding is critical. Performance Max should focus on prospecting and cross-network reach. Without this separation, the auction can route more impressions to PMax, which often pulls volume away from Search. The scale of overlap is hard to ignore. Optmyzr’s analysis revealed that when PMax cannibalized Search keywords, Search campaigns still performed better 28.37% of the time. In cases where PMax and Search overlapped, Search won outright 32.37% of the time.
The same problem arises with brand traffic. PMax leans heavily toward brand queries because they convert cheaply and inflate reported performance. Even with brand exclusions, impressions slip through. If you’re looking for your brand exclusions to be airtight, add branded negative keywords to your campaigns.
Supervising The Machine
Automation does not announce its mistakes. It drifts quietly, and you have to search for the information and read the signals.
Bid strategy reports show which signals Smart Bidding relied on. Seeing remarketing lists or high-value audiences is reassuring. Seeing random in-market categories that do not reflect your customer base is a warning that your conversion data is too thin or too noisy.
Google now includes Performance Max search terms in the standard Search Terms report, providing visibility into the actual queries driving clicks and conversions. You can view these within Google Ads and even pull them via API for deeper analysis. With this update, you can extract performance metrics such as impressions, clicks, click-through rate (CTR), and conversions, and add negative keywords directly from the report, helping you refine your targeting quickly.
Looking at impression share signals completes the picture. A high Lost IS (budget) means your campaign is simply underfunded. A high Lost IS (rank) paired with a low Absolute Top IS usually means your CPA or ROAS targets are too strict, so the system bids too low to win auctions. This tells us that it’s not automation that is failing; it’s automation following the rules you set. The fix is incremental: loosen targets by 10-15% and reassess after a full learning cycle.
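Put as a simple decision helper (the 10% thresholds are assumptions, not Google guidance), the logic above might look like this:

```python
def diagnose_impression_share(lost_is_budget, lost_is_rank, absolute_top_is):
    """Translate impression-share signals into a next step. Inputs are fractions (0-1)."""
    if lost_is_budget > 0.10:
        return "Underfunded: raise the budget before touching targets."
    if lost_is_rank > 0.10 and absolute_top_is < 0.10:
        return "Targets too strict: loosen CPA/ROAS by 10-15% and reassess after a learning cycle."
    return "No obvious constraint: keep monitoring."

# Example: losing 40% of impressions on rank while rarely showing at absolute top.
print(diagnose_impression_share(lost_is_budget=0.02, lost_is_rank=0.40, absolute_top_is=0.05))
```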
Intervening When Context Changes
Even the best automation struggles when conditions change faster than its learning model can adapt. Smart Bidding optimizes based on historical patterns, so when the context shifts suddenly, the system often misreads the signals.
Take seasonality, for example. During Black Friday, conversion rates spike far above normal, and the algorithm raises bids aggressively to capture that “new normal.” When the sale ends, it can take days or weeks for smart bidding to recalibrate, overvaluing traffic long after the uplift is gone. Or consider tracking errors. If duplicate conversions fire, the system thinks performance has improved and will start to bid more aggressively, spending money on results that don’t even exist.
That is why guardrails, such as seasonality adjustments and data exclusions, exist: they provide the algorithm with a correction in moments when its model would otherwise drift.
Auto Applied Recommendations: Why They Miss The Mark
Auto-applied recommendations are pitched as a way to streamline account management. On paper, they promise efficiency and better hygiene. In practice, they often do more harm than good, broadening match types, adding irrelevant keywords, or switching bid strategies without context.
Google positions them as helpful, but many practitioners disagree. My view is that AARs are not designed to maximize your profitability at the account level. They are designed to keep budgets flowing efficiently across Google’s limited inventory. The safest approach is to turn them off and review recommendations manually. Keep what aligns with your strategy and ignore the rest. My firm belief is that automation should support your work, not overwrite it.
Scripts That Catch What Automation Misses
Scripts remain one of the simplest ways to hold automation accountable.
The official Google Ads Account Anomaly Detector flags when spend, clicks, or conversions swing far outside historical norms, giving you an early warning when automation starts drifting. The updated n-gram script identifies recurring low-quality terms, such as “free” or “jobs,” allowing you to exclude them before Smart Bidding optimizes toward them. And if you want a simple pacing safeguard, Callie Kessler’s custom column shows how daily spend is tracking against your monthly budget, making volatility visible at a glance.
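As a simplified stand-in for the n-gram idea (not the official script, and with made-up search-term data), the sketch below flags recurring words that spend money without converting:

```python
from collections import defaultdict

# Made-up search-term rows: (query, cost, conversions).
search_terms = [
    ("free crm software", 42.10, 0),
    ("crm jobs near me", 18.30, 0),
    ("best crm for small business", 96.50, 4),
    ("crm software pricing", 61.00, 3),
]

stats = defaultdict(lambda: {"cost": 0.0, "conversions": 0})
for query, cost, conversions in search_terms:
    for word in set(query.lower().split()):  # unigrams; extend to bigrams if needed
        stats[word]["cost"] += cost
        stats[word]["conversions"] += conversions

# Flag words that have spent money but never converted: candidates for negatives.
for word, s in sorted(stats.items(), key=lambda kv: -kv[1]["cost"]):
    if s["conversions"] == 0 and s["cost"] > 0:
        print(f"consider negative: '{word}' (cost ${s['cost']:.2f}, 0 conversions)")
```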
Together, these lightweight scripts and columns act as additional guardrails. They don’t replace automation, but they catch blind spots and force a human check before wasted spend piles up.
Where To Let AI Lead And Where To Step In
Automation performs best when it has clean signals, clear lanes, and enough data to learn from. That is when you can lean in with tROAS, Maximize Conversion Value, or new customer goals and let Smart Bidding handle auction-time complexity.
It struggles when data quality is shaky, when intents are mixed in a single campaign, or when efficiency targets are set unrealistically tight. Those are the moments when human oversight matters most: adding negatives, restructuring campaigns, excluding bad data, or easing targets so the system can compete.
Closing Thoughts
Automation is the operating system of Google Ads. The question is not whether it works; it is whether it is working in your favor. Left alone, it will drift toward easy wins and inflated metrics. Supervised properly, it can scale results no human could ever manage.
The balance is recognizing that automation is powerful, but not self-policing. Feed it clean data, define its lanes, and intervene when context shifts. Do that, and you will turn automation from a liability into an edge.