Google Says AI Creative Should Help Brands Differentiate, Not Blend In via @sejournal, @brookeosmundson

One of the more interesting moments in Google’s latest Ads Decoded podcast centered on a growing advertiser concern about AI-generated creative. As more brands gain access to the same AI tools, will advertising eventually start feeling repetitive?
Ginny Marvin, Ads Liaison at Google, raised that question directly during the discussion, asking whether the industry was heading toward a “sea of sameness.”
The response from Charles Boyd, Group Product Manager for Creative at Google, offered a clearer look at how Google is positioning AI creative tools inside Google Ads and where the company believes advertiser differentiation still comes from.
Google Says AI Creative Should Expand Creative Variation
Throughout the episode, Google repeatedly framed AI creative tools as systems designed to expand variation, accelerate testing, and adapt messaging across different audiences and placements.
At the same time, it positioned those tools as dependent on advertiser strategy and direction.
Boyd described the value of generative tools as “the ability to quickly create different creative styles and iterations at scale.”
A large part of the industry conversation around AI advertising has focused on concerns about generic outputs and loss of differentiation.
Google appears to be taking the opposite position.
The company seems to believe advertisers with a strong understanding of their audience, messaging, and brand voice will be able to scale those strengths more efficiently through AI-assisted creative workflows.
Rather than treating AI as a homogenizing force, the company positions it as infrastructure that helps advertisers produce more combinations, more testing opportunities, and more audience-specific variations.
That distinction gives more context to how Google is approaching AI creative tools.
Google Wants Advertisers Steering AI Creative
Another phrase Google returned to multiple times during the episode was “advertiser-in-the-loop.”
The broader point was that automation should still include advertiser guidance and oversight.
Google highlighted several tools designed to give advertisers more control over how AI-generated assets are created:

Text guidelines
Brand guidance
AI briefs
Asset Studio
Video enhancement previews
Text disclaimers
Final URL expansion controls

Boyd explained that advertisers can now provide specific text instructions directly inside campaigns.
For example, a brand could tell Google not to describe products using certain language or positioning:

Google literally will check every asset that gets created against each one of the guidelines that you provide.

According to Google, advertisers can specify up to 40 text guidelines within a campaign.
That is a noticeable shift from earlier automation features, which often felt far more rigid from a brand and messaging perspective.
The addition of text guidelines, AI briefs, and expanded creative controls suggests Google is trying to give advertisers more influence over how AI-generated assets are created and adapted across campaigns.
Google Is Increasingly Focused On Creative Breadth
Another notable takeaway from the episode was how often Google discussed creative diversity and variation.
The conversation repeatedly touched on:

Multiple responsive search ads
Different landing pages
Different aspect ratios
Audience-specific messaging
Diverse asset combinations
Creative tailored to different stages of the customer journey

At one point, Boyd encouraged advertisers to consider having multiple responsive search ads with different landing pages inside the same ad group.
That guidance would have sounded unusual to many PPC practitioners several years ago.
Google’s reasoning is that systems like AI Max can dynamically combine the following to better align messaging with different user journeys:

Headlines
Descriptions
Landing pages
Audience intent signals
Search context
Asset combinations

This feels connected to a larger shift happening across Google Ads.
Campaign optimization increasingly revolves around combinations of signals instead of isolated assets or keywords.
Sarah Hathiramani, Director of Product Management for YouTube Ads, reinforced this idea when discussing Demand Gen and YouTube creative:

There may be different audiences that you’re going after, and those audiences are going to resonate with very different creative messages.

That point becomes more important as Google’s systems increasingly personalize creative combinations dynamically.
Veo Signals Where Google Thinks Creative Production Is Going
The episode also offered another look at how Google sees AI changing creative production itself.
Hathiramani discussed Veo integrations inside Google Ads and Asset Studio.
According to Google, advertisers can upload up to three images and generate multiple short-form video variations automatically.
Google positioned this as a way to reduce production barriers for advertisers that may not have dedicated video resources:

Instead of asking every advertiser to become an in-house video production company, we’re able to use Veo to leverage automation while maintaining transparency and control.

That could be particularly meaningful for smaller advertisers or brands that historically relied heavily on static image creative.
It also reflects a larger trend happening across Google Ads.
The company increasingly wants advertisers participating across more inventory types, placements, formats, and surfaces.
AI-generated creative helps reduce some of the operational burden required to do that.
At the same time, Google repeatedly stressed that advertisers still need strong inputs.
Marvin specifically noted that brands with a clear voice and point of view are likely to benefit most from these tools.
What This Means For Advertisers
One of the more noticeable themes throughout the episode was how often Google emphasized creative breadth.
Multiple landing pages, multiple responsive search ads, audience-specific messaging, different aspect ratios, and structured asset testing all came up repeatedly across Search, Performance Max, Demand Gen, and YouTube.
That guidance reflects how Google’s systems increasingly optimize around combinations of assets, intent signals, placements, and audiences rather than isolated ads or keywords.
For advertisers, that may require a shift away from building a small set of highly controlled assets toward developing broader creative coverage across different audience stages and formats.
Looking Ahead
This episode offered a clearer look at how Google is talking about AI creative internally ahead of Google Marketing Live.
The discussion repeatedly centered around advertiser controls, creative testing, audience-specific messaging, and broader asset variation across campaigns.
That may be one of the more important signals for advertisers paying attention to where Google Ads is heading next.
Google appears to be encouraging advertisers to build more adaptable creative systems rather than relying on a small set of static assets.
Featured image: Google, YouTube


Bing Team Describes How Grounding Differs From Search Indexing via @sejournal, @MattGSouthern

Microsoft’s Bing team published a framework describing how indexing requirements change when the goal is to ground AI answers rather than to rank search results. The post identifies five measurement areas where the company says the two systems diverge. It also names “abstention” as a design choice for AI-powered retrieval.
What Microsoft Described
The post argues that traditional search indexing and grounding indexing share the same foundation but serve different goals.
Traditional search, the team writes, asks “which pages should a user visit?” The grounding layer asks “what information can an AI system responsibly use to construct a response?”
Microsoft identifies five categories where the measurement requirements differ.
On factual fidelity, the team notes that some ranking mismatch is tolerable in traditional search because a user can click through and evaluate. In grounding, the post describes breaking content into retrievable chunks as a process that “can distort page substance in ways that never appear in any ranking signal.”
For source attribution quality, the Bing team calls attribution helpful in traditional search but “a core signal” in grounding. Not all indexed content matters equally as evidence for an AI answer, the team adds.
On freshness, Microsoft notes a clear difference in cost. Stale content in search is a ranking problem. In grounding, the post says, “a stale fact produces a misleading response.”
For coverage of high-value facts, the post explains that a missed document in search is recoverable because alternative results exist. In grounding, the index must ensure “the specific facts and sources that people are likely to ask about are actually available and groundable.”
On contradictions, traditional search can surface one source above another and let the user decide. A grounding system can’t do that. “An AI system that silently arbitrates between contradictory sources is one that may confidently assert the wrong thing,” the team says.
Abstention And Iterative Retrieval
The post also covers two design differences between the systems.
Microsoft calls declining to answer “abstention.” For a grounding system, that’s a valid outcome when support is missing, stale, or conflicting. Traditional search doesn’t need to make this judgment because it presents options for a human to evaluate.
Iterative retrieval is the other difference. Traditional search is typically a single interaction where a query goes in and ranked results come out. Grounding systems may need to ask follow-up questions, refine retrieval based on intermediate results, and combine evidence from multiple sources.
Errors in early retrieval steps “compound through subsequent reasoning steps in ways that no human reviewer would catch in real time,” the post adds.
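To make the abstention idea concrete, here is a minimal sketch of the decision a grounding layer might make before answering. It illustrates the concept as the post describes it, not Microsoft’s implementation; the Evidence shape and the 30-day freshness threshold are invented for the example.
```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Evidence:
    claim: str
    source: str
    fetched: datetime

def ground(evidence: list[Evidence], max_age_days: int = 30) -> str:
    """Decide whether retrieved evidence supports answering at all."""
    if not evidence:
        return "ABSTAIN: no supporting evidence retrieved"
    fresh = [e for e in evidence
             if datetime.now() - e.fetched < timedelta(days=max_age_days)]
    if not fresh:
        return "ABSTAIN: all supporting evidence is stale"
    if len({e.claim for e in fresh}) > 1:
        # Contradictory sources: abstain rather than silently arbitrate.
        return "ABSTAIN: sources conflict"
    return f"ANSWER: {fresh[0].claim} (per {fresh[0].source})"

print(ground([Evidence("X costs $10", "site-a.example", datetime.now()),
              Evidence("X costs $12", "site-b.example", datetime.now())]))
# -> ABSTAIN: sources conflict
```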
Context
This blog post comes after a series of moves by Microsoft to build out its grounding tooling and give publishers visibility into it.
In February, Microsoft launched the AI Performance dashboard in Bing Webmaster Tools, giving sites their first page-level citation data for AI-generated answers. The company rewrote the Bing Webmaster Guidelines in March to include GEO as a named optimization category and added grounding query-to-page mapping to the dashboard the same month. At SEO Week in April, Microsoft’s Madhavan previewed four additional features for the dashboard, including Citation Share and grounding query intent labels.
This post is more conceptual than those prior announcements. It doesn’t introduce new tools or features. Instead, it lays out the engineering principles the company describes as guiding its index evolution.
Why This Matters
This framework clarifies what Microsoft says its systems need from the index for AI answers.
Microsoft states grounding relies on the same crawling, quality, and web understanding as search, but grounded answers require accurate, fresh, attributable, and consistent evidence. Stale facts, weak sources, and contradictions pose risks when content is used for answers.
Looking Ahead
The post offers insight into why some content is easier for AI to cite. If the Citation Share and intent-label features previewed at SEO Week ship, they could help test whether the measurement priorities described here show up in actual publisher data.

Featured Image: TY Lim/Shutterstock


The Whole Point Was The Mess via @sejournal, @pedrodias

Semrush put out an infographic last week. The kind built to be screenshotted into LinkedIn carousels and pasted into webinar decks. Four pillars. The fourth one is called “Technical GEO”: schema, structured data, clean architecture. The line that justifies it: “Ensures AI engines can parse and connect your content.”
Ensures.

See it live on X/Twitter. Image Credit: Pedro Dias
That is the entire piece in one word. The architecture of large language models is, by design, the opposite of ensured. And schema has nothing to do with whether an LLM can parse text. LLMs parse text by reading text.
Semrush is far from alone. Every SaaS vendor with skin in this game is running variations of the same play. SEO-era controllability, repackaged under a new acronym. The same percentages, pillars, and pyramids. All dressed for a system that was built specifically not to work this way.
I have made the strategic version of this case before, in “Your AI Strategy Isn’t a Strategy.” This piece is the technical floor underneath it.
Built To Read Whatever’s There
Language models exist because the web is a mess. Forums, Wikipedia stubs, blog posts written at 2 A.M., scraped product copy, machine-translated junk, code comments, half-formed sentences, typos, contradictions, every register from journal article to subreddit shitpost. Pre-training data is the public web, and the public web has never been structured.
The transformer architecture handles this by treating language as sequences of tokens. There is no parser inside the model looking for tags. There is no preference for FAQ markup. The model reads the words. That is the mechanism.
At inference time, the model generates more tokens conditioned on the input. None of that pipeline is reading microdata.
Schema.org has real jobs. It feeds rich results in classical search. It supports entity disambiguation in the knowledge graph. It helps voice assistants pull structured fields. These are well-defined functions inside specific systems. They are not the mechanism by which an LLM understands a sentence.
So when a vendor claims structured data “ensures AI engines can parse and connect your content,” there is nothing to ensure. The parsing layer they are imagining is not there. The model already parsed your sentence. It did so by reading the sentence.
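The point is trivial to demonstrate in a sketch. The whitespace “tokenizer” below stands in for the learned subword tokenizers (BPE and friends) that real models use, and the strings are invented:
```python
# Prose and JSON-LD enter a language model the same way: as tokens.
prose = "Our guide covers B2B email deliverability and list hygiene."
jsonld = '{"@type": "FAQPage", "name": "B2B deliverability guide"}'

def toy_tokenize(text: str) -> list[str]:
    return text.split()  # stand-in for a learned subword vocabulary

for sample in (prose, jsonld):
    print(toy_tokenize(sample))
# Both print as flat lists of strings. There is no schema-aware branch:
# the markup is just more text for the model to read.
```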
One Trick, Three Brand Colors
Look at the biggest GEO and AEO explainers in the market right now, and you find the same SEO-era playbook with the acronym swapped.
Semrush is already covered. The fourth pillar of its “Technical GEO” presents schema and structured data as ensuring something that the architecture cannot ensure.
AirOps published a graphic titled “15 Ways to Get Cited by ChatGPT, Perplexity, & Google.” It is the most numbers-heavy specimen of the genre I have seen this year. Schema markup increases citation likelihood by 13%. Sequential H2 to H4 tags double your chances. Short paragraphs make content 49% more likely to appear in AI answers. Perplexity cites UGC in 91% of answers, versus Gemini’s 7%. Read the source notes and the methodology trail comes home. The numbers in the graphic trace back to AirOps’s own “2026 State of AI Search Report.” AirOps is citing AirOps on the question of whether AirOps’s prescriptions work.
Peec AI does a more honest job in places. Its complete guide to GEO acknowledges the probabilistic nature of the system and concedes that foundation models are already trained, so optimization focuses on the retrieval layer. Then it lands the same prescriptions: heading hierarchy, bullet lists, FAQ markup, multiple schema types layered on each page, summaries at the top of sections – all built on the chunking claim that long paragraphs lose out because the engine extracts fragments rather than full articles.
Profound, citing Aleyda Solis’s checklist, is the most explicit in its piece: “Optimize for Chunk-Level Retrieval.” Each section, a standalone snippet. Each page, a buffet from which the engine takes what it wants. The engine, in this telling, is a polite guest who only takes what’s been laid out.
Three vendors. Same operating assumption: a controllable, prescriptive technical discipline sits between a publisher and a citation, and it occupies roughly the same shape as classical SEO. Schema, headings, structure, freshness, machine-readable formats. Familiar. Billable. Reportable up to a chief marketing officer.
What Schema Actually Does
Schema is not the target here. Schema has real, well-defined uses. Classical Google search uses it for rich results: prices, ratings, event times, the structured fields that drive search engine results page features. The knowledge graph uses it for entity disambiguation. Voice assistants pull structured fields out of it.
None of that goes away. If you’re responsible for technical SEO, keep implementing schema where it earns its keep.
Schema cannot reach into a transformer and improve its comprehension of your prose. The model isn’t architected to read schema as schema. It receives whatever text the engine fetched and chose to include, and processes that text as language tokens. The entire GEO/AEO marketing layer rests on conflating two distinct claims: that schema is useful in classical search, and that schema feeds the LLM. The first is true. The second is a category error.
Chunking Is Not Yours To Optimize

Image Credit: Pedro Dias
The chunking advice keeps reappearing because it sounds technical, sits neatly inside a flowchart, and gives a content team something concrete to do on Monday morning. It is also incoherent.
Chunking happens at retrieval time. Perplexity, ChatGPT, and Gemini each run a retriever over candidate documents, split them according to their own configurations (length, overlap, embedding model, sometimes semantic boundaries), and feed the top-k chunks into the model’s context. Those configurations belong to the engine. They get tuned differently across systems and retuned on schedules no publisher is privy to. The publisher’s view of the chunker is the publisher’s view of the model: black box, results only.
So when a vendor says “optimize for chunk-level retrieval,” what is actually being recommended is good writing. Short, self-contained paragraphs. Clear definitions near the top of sections. Internal logical structure. These are recognizable disciplines: information architecture, technical writing, readability. They have been recognizable disciplines since long before the transformer was invented. They are not a new technical layer.
A more honest version of the pitch would be: Hire someone competent at writing for the web. That sentence does not fit on a pricing page.
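To put the black-box point in miniature, here is a toy chunker run under two invented engine-side configurations. Neither configuration is anything a publisher controls, and both get retuned on the engine’s schedule, not yours:
```python
def chunk(words: list[str], size: int, overlap: int) -> list[list[str]]:
    """Split a token stream into overlapping windows, engine-style."""
    step = size - overlap
    return [words[i:i + size] for i in range(0, len(words), step)
            if words[i:i + size]]

page = ("Schema feeds rich results in classical search. "
        "It does not reach into a transformer. "
        "The model reads the words.").split()

engine_a = chunk(page, size=8, overlap=2)   # one engine's settings today
engine_b = chunk(page, size=12, overlap=4)  # another's, retuned next month

print(len(engine_a), "chunks vs.", len(engine_b), "chunks from the same page")
# -> 4 chunks vs. 3 chunks from the same page
```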
The Paper They Don’t Read
There is an actual academic paper called “GEO.” Aggarwal and co-authors, KDD 2024. It is the closest thing to a citable source the SaaS layer has when it sells generative engine optimization as a discipline. It is also, as papers go, easy to skim. Nine “optimization methods” are tested on a 10,000-query benchmark, with results.
What did the paper find worked?
Adding citations from credible sources. Adding quotations from relevant sources. Adding statistics. Improving fluency. Making prose easier to understand. The methods that produced the largest visibility lifts were essentially: write content with more evidence in cleaner prose.
What did the paper test and find did not work?
Keyword stuffing, the closest analogue in the paper to the SEO-era playbook the current GEO and AEO vendors have repackaged. Result: below baseline. The paper’s authors note in plain terms that techniques effective in search engines “may not translate to success in this new paradigm.”
Notice what is not in the list of nine methods. Schema. Structured data. FAQ markup. Heading hierarchy. Machine-readable formats. None of these are tested in the paper, because none of them are the optimization surface the paper studies. The paper is studying content-level interventions: what you put in the words, not metadata layered around the words.
The SaaS layer borrowed the acronym. The findings stayed in the paper. “Technical GEO” is the SEO playbook with different stickers on the same boxes, sold against research that points the other way.
The Assumption Smuggled In
The SaaS pitch only makes sense if you smuggle in one assumption: that the system you’re optimizing for has the same shape as the one that’s been billing SEO clients for a quarter-century. Inputs you control. Outputs that respond. A retrievable causal chain between the two.
That model was always a simplification of how search worked. It was close enough to keep the industry running, and close enough to keep the invoices going out.
None of that simplification survives contact with generative systems. The same prompt produces different answers across sessions, users, temperatures, model versions, and days. Observed behavior across the major engines, not a clean property of any single one. The retrieval layer in front of the model also moves: candidate sources shift, ranking shifts, freshness windows shift. No causal chain runs between “I added FAQ schema” and “the model cited my page.” What runs between them is a probability distribution, and the things you control affect that distribution in ways nobody can cleanly attribute. Not even the people who created these systems.
This is the established line on AI visibility tools, repeated here because it applies to the whole prescriptive layer. Statistically unverifiable data drawn from non-deterministic systems. A 13% citation lift, measured how, against what counterfactual, with what reproducibility? The methodological questions aren’t what those numbers are designed to answer. The numbers are the answer. They land in a graphic, get rendered as ROI in a board deck, and the conversation moves on.
Something To Say In The Meeting
Here is the part that the architecture argument and the methodology argument do not, on their own, explain. Why does the entire SaaS layer keep successfully selling this stuff to people who are not stupid?
The honest version of the answer goes something like: We are operating with reduced visibility into a system that does not expose its mechanics, that returns different outputs to different people for the same query, that is changing month by month, and that has folded a substantial chunk of the funnel into a black box. We can keep doing the work that has always been the work: writing well, being useful, building authority, maintaining the site. We can monitor what shows up where. The deterministic dashboard we used to have is not coming back.
That sentence is unsayable in a marketing meeting. It admits the lever is not connected. It tells leadership that the budget line they approved does not have a corresponding action. It gives the team nothing to put in next quarter’s plan.
So the SaaS layer fills the gap. It manufactures levers. Pillars, frameworks, percentage lifts, schema audits, chunking optimization, machine-readable formats. Reportable activity. Defensible expenditure. Something to say in the meeting. None of this gets you visibility. The engine decides that. What is on offer is the appearance of control, sold to people who would rather pay than concede that control left the room.
Once the lever is bought, it has to be operated. Schema audits get scheduled. Chunking checklists get reviewed. Citation likelihoods get tracked, refreshed, and compared. The dashboard the team paid for becomes the dashboard the team optimizes against, and the dashboard quietly replaces the actual problem with the part of the problem it can see. By the time anyone notices, the SaaS layer is writing the brief.
None of this is a moral failure on the buyer’s side. What you are watching is what happens when an industry has been organized for a quarter-century around the premise that you can pull a lever and watch the meter move, and the meter quietly disconnects from the lever. The vendors aren’t running a con. They are filling demand for the only thing the buyer can no longer afford to do without: an answer that fits in a slide.
Rank And Tank, All Over Again
I keep coming back to a phrase that fits this whole moment: dancing to the rank-and-tank tunes (I borrowed it from David McSweeney). The cycle goes: Vendor sells the controllable-discipline frame, agencies adopt it, content teams scale production around the prescriptions, AI-generated articles get pumped out at volume because the prescriptions are easy to template. Some of it ranks for a while. Most of it eventually tanks because the prescriptions were never the mechanism, and the engine adjusts, or the freshness window closes, or the system simply moves on.
The SEO industry has done this before. Spinning. Mass programmatic pages. Doorway content. Each cycle followed the same shape: a controllable input dressed as a discipline, sold at scale, briefly effective, eventually punished by the engine, replaced by the next controllable input dressed as a discipline.
GEO and AEO are the current cycle. The pillars and percentages and pyramids are this cycle’s templates. Underneath them, the strategies bifurcate.
One path is brand presence exploitation. Plant your name where the engines look. Reddit threads, top-X listicles, the same citation surfaces over and over. The cycle feeds itself: engines cite the surfaces, brands work the surfaces, surfaces feed the engines. I have written about this loop before; I called it the Ouroboros pattern. The short version is that the loop is less stable than the strategy assumes.
The other path is content at scale. Produce variations, pump out volume, treat the templated output as content that could earn a citation. I have written about this approach before, in the “Scaling Disappointment” piece. The short version is that uniqueness is not value, and at the pace these prescriptions enable, qualitative review stops being possible. The volume of AI-generated copy produced under this path is this cycle’s externality.
The next cycle will sell the cleanup.
Forget for a second whether your “Technical GEO” is set up correctly. Ask whether the thing you are putting on the page is worth reading. Large language models were designed to read whatever is there. If what is there is good, it will be read. If what is there is templated, low-utility content optimized against a chunking heuristic that does not exist, it will eventually be filtered out: by the engine, by the user, or by the next academic paper showing that retrieval quality is degraded by exactly this kind of slop.
The advantage, when it accrues, will accrue to the people who do not get distracted. Who do not subscribe to the dashboard. Who keep working on product-driven SEO and the foundations that have always connected content to people. There are early signs of this on the timelines I read. Practitioners openly questioning whether optimizing against a non-deterministic surface makes sense at all, and asking whether their attention belongs back on classical search; which, at the end of the chain, is what feeds these systems anyway.
The mess was always the point. The architecture handles it. The industry just needs to stop pretending the mess is the problem.

This post was originally published on The Inference.

Featured Image: Roman Samborskyi/Shutterstock


Google Adds More Links & Link Context To AI Search via @sejournal, @MattGSouthern

Google is rolling out five updates to how links appear in its generative AI Search experiences, including AI Mode and AI Overviews. The changes add subscription labels and inline links within responses, among other features. Here’s an example of how the changes will appear:
Image Credit: Google
Hema Budaraju, VP of Product Management, wrote about the updates in a blog post.
What’s New
The updates cover five areas of link display across Google’s generative AI Search features.
Subscription Highlighting In AI Mode & AI Overviews
Google is now labeling links from users’ news subscriptions in AI Mode and AI Overviews.
Google announced subscription highlighting in December for the Gemini app but didn’t provide a timeline for AI Mode or AI Overviews. Today’s announcement confirms the expansion to both surfaces.
Google said that in early testing, people were “significantly more likely” to click links labeled as their subscriptions. The company didn’t share specific numbers.
Publishers who want to help subscribers connect their subscriptions with Google can find details on Google’s developer website.
Topic Suggestions After AI Responses
Searchers will start to see suggestions for related content at the end of many AI responses. These link to articles or analyses on different aspects of the topic.
Discussion and Social Media Previews
Google’s AI responses will include previews of perspectives from public online discussions, social media, and other firsthand sources.
The company is also adding context to these links, such as creator names and community names.
See a provided example:
Image Credit: Google
More Inline Links Within Responses
Users will start to see more links directly within AI response text, positioned next to the relevant passage. Google didn’t quantify how many more inline links users will see or where the change will appear.
See a provided example:
Image Credit: Google
Link Hover Previews on Desktop
On desktop, hovering over an inline link in Google’s AI experiences will show a preview of the linked website. The preview includes the site name and page title. Google noted that people hesitate to click links when they don’t know where they lead.
See a provided example:
Image Credit: Google
Why This Matters
These updates show Google trying to make links more visible in AI Search at a time when publishers are closely monitoring referral traffic.
More inline links, hover previews, discussion cards, and subscription labels all point in the same direction. Google wants AI responses to feel less like dead ends and more like starting points for deeper exploration of the web.
That matters because the debate around AI Search has centered on whether AI answers reduce the need to click. Google is now adding more ways to click, but it isn’t providing the data publishers need to judge the impact.
For websites, that leaves the update in a familiar place. The link treatment may improve visibility, but the traffic impact will still need to be measured in analytics after the rollout reaches its audience.
Looking Ahead
The next question is how consistently these link treatments appear across AI Search surfaces.
Google didn’t provide rollout details for most of the updates, including geography, language, eligibility, or timing. That makes early testing difficult to interpret until we can see where the features appear and which types of queries trigger them.

Featured Image: Danuta Hyniewska/Shutterstock


WordPress Loses Marketshare. Is Astro Eroding Their User Base? via @sejournal, @martinibuster

A discussion on Twitter about the many people posting that they’ve left WordPress for Astro went modestly viral, with longtime WordPress supporters explaining why they ditched WordPress for Astro. Statistics show that WordPress is losing users while Astro’s weekly downloads have nearly doubled year over year, indicating that the shift toward Astro is more than a passing trend.
WordPress Marketshare Steadily Declining
An underreported statistic about WordPress is that it peaked in mid-2025 with a marketshare of 43.6% and has been on a steady decline ever since, currently sitting at a 42.2% marketshare (according to W3Techs), a drop of 1.4 percentage points. WordPress is losing marketshare; this is a fact.
An argument could be made that the WordPress marketshare percentage is inflated because a significant number of WordPress websites are abandoned or spam. The official WordPress statistics show that 10.56% of WordPress websites haven’t been updated since 2022. That means, if you don’t count abandoned websites, the actual WordPress marketshare is less than 42.2%.
Astro Downloaded 2.5 Million Times Per Week
Astro is a static site generator and web framework. It’s not a content management system; it is software that generates websites from content. Sites created with Astro are called static because they are classic HTML web pages that are not dynamically generated from a database the way that PHP-based sites built with WordPress are. The consequence is that Astro-based sites download faster and are less complicated.
Astro has been steadily gaining popularity since it debuted in 2021. The idea that Astro is not popular and that the hoopla is just a few loud voices is demonstrably false. Astro’s popularity is real. It is currently being downloaded at a rate of 2.5 million downloads per week. That’s nearly an 80% increase from 2025, when it was being downloaded at a rate of 1.4 million weekly downloads.
Screenshot Of The Astro Download Statistics

Joost de Valk, founder of the Yoast SEO plugin, may have identified one of the reasons why so many people are turning to Astro. He recently wrote of an epiphany in which he realized he didn’t need a content management system (CMS); he just needed a website.
He wrote:
“For twenty years, ‘I want a website’ meant ‘I need a CMS.’ WordPress, Joomla, Drupal: the conversation was always about which one. That framing is outdated. People never wanted a CMS. They want a website.”
Some Doubt The Astro Reality
Rayhan Arif, a WordPress businessperson, recently expressed his incredulity over the many tweets and blog posts by people sharing their experience leaving WordPress for Astro.
He tweeted:
“Every ‘leaving WordPress’ post I come across seems to point to Astro. But when I dig a little deeper, I often don’t see any prior conversations or context showing those people were actually using WordPress in the first place.
To me, it starts to feel less organic and more like a coordinated narrative, almost like a well-planned negative campaign against WordPress. It even makes me wonder whether some companies might be incentivizing this kind of messaging for their own business gains.
I could be wrong, but that’s honestly the impression I’ve been getting.”
That tweet seemed to imply that Cloudflare, which acquired Astro at the beginning of 2026, may have been orchestrating a whisper campaign. But nobody in that discussion agreed with him.
Astro’s Momentum Predates Cloudflare’s Acquisition
One of the first responses pushed back against the idea of a whisper campaign. Tommy J. Vedvik argued that the movement toward Astro was already happening before Cloudflare acquired Astro or created EmDash.
Vedvik responded:
“You’re probably wrong if you think it’s Cloudflare who’s behind it. This happened way before Cloudflare acquired Astro or created EmDash”
David V. Kimball made a similar point from his own experience, saying he had been encouraging people to move away from WordPress before Astro became part of a broader public conversation.
Kimball wrote:
“No, I’ve been pushing people away from WordPress to Astro before it was cool, starting about two years ago.”
He added that he had not seen many others doing the same until recently, but described himself as “a living breathing person” who had already helped many people make that move.
The pushback did not prove that every post was organic, but it did challenge the idea that the pattern began with Cloudflare. The replies suggested that Astro had already been gaining ground among some developers before the recent attention around EmDash.
Not everyone who left WordPress rushed toward Astro. Front-end developer Tammy Hart shared that she was a WordPress defector but that she loathed Astro.
Longtime WordPress Users Defecting To Astro
Several respondents made a point of establishing their WordPress credentials. That became one of the thread’s strongest themes: the most detailed criticisms were not coming from people with no history in WordPress, but from people who said they had built careers, businesses, or client work around it.
Daniel Schutzsmith responded by identifying himself as both an Astro user and a long-time WordPress professional.
Schutzsmith wrote:
“Real Astro user here and I think you’ll see I’ve made 100s of sites with WordPress, been WCUS organizer 3 times, WCNYC organizer 2 times, and WCMIA organizer 1 time.”
Keanan Koppenhaver made a similar credibility claim, writing:
“Former VIP-agency dev, WP agency owner, current plugin owner and multi-time WordCamp speaker, here.
I’m using Astro for a lot now! Still WP in some cases, but Astro, especially when you’re working by yourself or with a git-knowledgeable small team, helps you move way faster.”
Mike Sewell acknowledged that he used both WordPress and Astro but that WordPress is no longer his go-to:
“I have been building client sites with WordPress since 2010. I still use it for some jobs, but I have found myself more and more reaching for other tools – nextjs + sanity, 11ty, and experimenting with EmDash. WordPress isn’t going anywhere but it’s no longer my go to.”
Many others shared similar backgrounds with WordPress, suggesting that the shift away from the platform may be coming from within the WordPress ecosystem itself.
The Matt Mullenweg Effect
While most users said their reasons for leaving WordPress were pragmatic, like Astro’s performance and relative simplicity, Schutzsmith, the former WordCamp organizer who had built hundreds of WordPress sites, explained that he moved away because clients expressed skittishness about committing to WordPress after Matt Mullenweg’s actions left hundreds, if not thousands, of WP Engine customers unable to update their WordPress websites.
Schutzsmith shared:
“Matt steered it in a horrible direction and now it’s become very hard to sell WP to enterprise clients that literally see the drama he creates by seeing the articles and videos across publications and influencers about it.
That impact has not gone away. In fact, the monumental expansion of things like Claude Code and OpenAI Codex, make moving to a less dramatic, more stable content management system, a no-brainer.
Selling enterprise on JavaScript based solutions has become much easier than convincing that same buyer that their site won’t be affected if Matt has another meltdown.
The minute he said .org is his personal website to distribute plugins and themes, it made it no longer safe for the enterprise.”
Schutzsmith was speaking from professional experience, later explaining that six of his clients’ websites hosted on WP Engine were disrupted because of actions taken by Mullenweg.
AI Coding Tools Are Making Astro Viable
Lastly, AI-assisted coding was one of the most frequently cited explanations for why Astro is receiving more attention now. Several replies in the discussion suggested that AI tools make code-first site development feel faster and less dependent on traditional CMS interfaces.
David Hamilton described Astro as a strong fit for Claude Code, writing:
“I use astro because it is ridiculously compatible with Claude code.
I haven’t had to open a CMS a figma board or anything.
My latest websites are all made through Claude and astro, I don’t see myself moving back to the traditional website builders anytime soon.”
There were many others who shared the exact same experience.
A Balanced View Of AI And Web Development
Yet there are others, like Kevin Geary, developer of the Etch website builder, who express a more nuanced opinion of using AI for creating websites.
In a separate post from several weeks earlier he wrote:
“A logical, evidence-based conclusion about where AI fits into a quality development workflow is: AI is a great tool for improving productivity but has to be heavily reviewed and steered by someone who actually knows what they’re doing.
If you’re in the 100% anti-AI camp, you’re likely taking a purely emotional position.
And if you’re in the “AI can do it all, run 18 agents at a time, coders are cooked” camp, you’re also taking a purely emotional position.”
Is The WordPress Ecosystem Eroding?
Anecdotal evidence indicates that WordPress veterans are leaving or minimizing their use of WordPress, largely because of the benefits of Astro, not necessarily because WordPress is a poor experience. Some are choosing Astro because it is faster. Others are using it because AI coding tools make code-first workflows easier and faster. Some are choosing it because their sites are mostly static, making WordPress somewhat overkill for their situation.
Then there are some who believe that the WordPress governance drama has become a business risk.
The larger story is not that Astro is replacing WordPress; it’s still too early to make that claim. The more important question is whether the turn toward Astro is a sign that WordPress has become overly complex and that it’s now easier to build with AI.
Ironically, on the other side of that argument, WordPress is on the verge of a major transformational change due to AI. WordPress version 7.0 is set to bring all the benefits of AI into WordPress at a scale that no other CMS or website-building framework can match. The massive community of plugin and theme developers is poised to roll out AI-assisted features that will be hard to compete against.
Featured Image by Shutterstock/yulsiart


AI Just Handed PR Its Best Opportunity In SEO & Most Teams Are Missing It via @sejournal, @gregjarboe

A recent LinkedIn post by Jim Yu flagged that BrightEdge’s AI Catalyst team analyzed citation and brand mention patterns from prompts across Finance, Healthcare, Education, and B2B Tech in five AI search engines: ChatGPT, Perplexity, Gemini, Google AI Mode, and Google AI Overviews. The finding that mattered most was buried in the data. Despite wildly different source preferences, every engine tends to surface the same brands. Source overlap across engine pairs runs from 16% to 59%. Brand overlap lands in a much tighter band, 35% to 55%. The engines wander far on what they cite. They hold fast to who they recommend.
“Review sites, comparison content, trade press, retailer listings, and finance data are the sources AI most frequently reaches for. Investment in PR, trade coverage, review site visibility, and category comparison content translates into visibility across every engine, not just one.”
I sent that takeaway to Katie Delahaye Paine, as I have watched her track the collision points between data and communications longer than most people in this industry have been alive. She sent back a link to a press release that looks like a Yahoo Finance story with one question: “What do you think of this?”
In the link, Zen Media argued that AI tools are giving PR teams measurable citation data for the first time – a genuine breakthrough for a profession that has historically struggled to tie its work to business outcomes. I told her I thought PR had a new opportunity, if there were communications professionals brave enough to seize it. Unfortunately, too many are so service-oriented that they have become servile.
She responded, “Sad, but true.”
The Opportunity Is Real
The data backing this shift is not subtle. According to new Stacker research, earned media distribution can increase AI citations by a median lift of 239%. Brands with review profiles on platforms like Trustpilot, G2, and Capterra are three times more likely to be cited by ChatGPT than brands without them.
Lily Ray, while vice president of SEO Strategy & Research at Amsive, found that digital PR and YouTube optimization have become essential tactics for AI discovery. Amsive’s research showed ChatGPT most frequently cites Wikipedia, Perplexity leans on Reddit and YouTube, and Microsoft Copilot gravitates toward Forbes and Gartner. The implication is that being discussed in credible third-party sources, exactly what good PR has always produced, now feeds directly into the sources AI trusts most.
Research from Muck Rack’s Generative Pulse platform found that earned media still accounts for 25% of all AI citations. Press coverage, authoritative reviews, third-party writeups. The raw material of traditional PR. Being mentioned in a Wirecutter roundup or a TechCrunch feature, their team noted, does more for AI visibility than almost anything a brand publishes on its own site.
PR Has The Raw Material. It Lacks The Ambition
Here is the maddening part. Everything that matters for AI citation (third-party credibility, trade press coverage, review site presence, expert mentions) is work that PR professionals are already positioned to do. They understand how to cultivate relationships with the publications and journalists that AI engines trust. They know how to place stories in the outlets that show up as authoritative sources. What they have lacked, historically, is a measurable link between that activity and business outcomes.
That link now exists. AI engines create a citation trail. Brand visibility in AI responses can be tracked, measured, and attributed. Katie has spent her career making the case that PR’s contribution to business value must be expressed in persuasion, trust, and credibility; all of these, she has argued for decades, are eminently measurable if the profession would simply demand better tools. The tools now exist. The measurement imperative is sharper than ever.
So, why isn’t the initiative to combine SEO and PR coming from PR? Because far too many practitioners remain reactive. They wait to be briefed, execute campaigns, report outputs, and repeat. The organizations most likely to move first on this are the ones where someone outside the PR function, such as an SEO professional who understands earned media, a digital marketer watching their traffic erode from AI Overviews, a content strategist, or an entrepreneur tracking every conversion, recognizes that the citation graph and the PR strategy map are now the same document.
What A Unified Strategy Actually Looks Like
BrightEdge made the point clearly: Build for three source layers, not five LLM playbooks. Every AI engine draws from authoritative sources, commercial and editorial content, and user-generated content. They weigh the mix differently (Perplexity and Gemini lean toward authority, Google AI Overviews leans toward UGC, and ChatGPT and AI Mode lean toward commercial content), but all three layers matter in every engine.
That means the practical work is clear: Earn placement in trade press and analyst reports that are relevant to your category. Generate real customer reviews at scale. Produce comparison and category content that review aggregators and editorial sources want to reference. Get on the podcasts and YouTube channels that AI engines are already pulling from. None of this requires a new discipline. It requires PR and SEO professionals to stop treating their work as separate and start treating the citation graph as shared territory.
The brands that establish citation authority now are building something that compounds. Entity authority is slow to build and slow to decay. Early movers in AI visibility are capturing ground that late movers will find increasingly expensive to reclaim.
AI has handed PR the measurement framework it never had and the strategic mandate it always deserved. The question is whether the profession will recognize the moment, or wait for someone else in the organization to seize it first.

Featured Image: Red Vector/Shutterstock


How To Design URL Structures For AI Retrieval, Not Just Rankings

For years, URL structure was a technical SEO checkbox. Keep it short, use hyphens, include the keyword, done. While that playbook still works, it’s increasingly incomplete. A growing share of the target audience now discovers content through AI assistants and large language models like ChatGPT, Perplexity, Claude, Google’s AI Overviews, and more.
These systems retrieve and synthesize information differently from traditional search crawlers, and if your URL architecture isn’t built with that in mind, you are increasing your chances of not being cited by LLMs.
In the new age of search, we need to extend those SEO fundamentals to also align with AI bots and how they crawl URLs.
Why AI Systems Read URLs Differently
Search engines have spent decades developing sophisticated crawling and indexing infrastructure. They follow redirects, resolve canonicals, parse JavaScript (sometimes…), and can infer context from a page when the URL is a string of random characters.
AI retrieval systems, particularly retrieval-augmented generation (RAG) pipelines and web-connected LLMs, often work differently.
There are three core parts to how RAG works:

The input prompt is converted into a vector embedding.
Relevant passages are then retrieved from indexed URLs, documents, and knowledge graphs surfaced by traditional search engines like Google and Bing.
An LLM like ChatGPT or similar then processes this information and generates a refined response.

A developer-built RAG system will essentially use data sources from URLs to extract content – they will crawl the URL, convert the web content into searchable “chunks” and store them as numerical vectors for later retrieval.
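As a rough, self-contained illustration of that ingest-and-retrieve loop, here is a minimal sketch. The URL, page text, and bag-of-words “embedding” are stand-ins; a production pipeline would use a dense embedding model and a vector store:
```python
import re
from collections import Counter
from math import sqrt

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split page text into overlapping word windows, as an ingester would."""
    words, step = text.split(), size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Stand-in 'embedding': term counts instead of a dense vector."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Ingest: store (url, chunk, vector) rows for later retrieval.
page_url = "/resources/email-marketing/b2b-deliverability-guide"
page_text = "B2B email deliverability depends on authentication and list hygiene."
index = [(page_url, c, embed(c)) for c in chunk(page_text)]

# Query time: embed the prompt and return the top-k chunks by similarity.
query = embed("how do I improve b2b email deliverability?")
top_k = sorted(index, key=lambda row: cosine(query, row[2]), reverse=True)[:3]
for url, text, _ in top_k:
    print(url, "->", text)
```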
This is now also evolving into URL context grounding, which is specific to Gemini. The aim is to help Gemini (and presumably AI Overviews / AI Mode) better understand and answer questions about the content and data in individual URLs without performing traditional RAG processing.
The model can then pull direct information from multiple URLs, analyze multiple reports, and combine information from several sources to generate more accurate summaries. This should, in theory, help to improve AI factual accuracy and reduce hallucinations.
Then there’s zero-shot classification – a technique that enables models to categorize the purpose of a webpage without any task-specific training data.
Rather than relying on labeled examples, the model analyzes semantic cues such as URL structures (treated as plain text strings) and maps them to predefined categories using methods like cosine similarity or prompt-based reasoning.
This works by drawing on the model’s pre-trained language knowledge to infer a page’s likely function, while also detecting distinct patterns in the words and phrasing that signal what type of content the page contains.
This has been particularly useful in identifying phishing links and other malicious links based solely on their URL patterns but also indicates how LLMs could begin to leverage zero-shot classification to rely solely on URLs to infer semantic relevance.
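Here is a toy version of that idea: the URL string, treated as plain text, is scored against a few candidate category labels, and the closest label wins. The labels and the bag-of-words similarity are stand-ins for the dense embeddings and prompt-based reasoning a real system would use:
```python
import re
from collections import Counter
from math import sqrt

def bow(text: str) -> Counter:
    """Bag-of-words stand-in for embedding a short string."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Candidate labels are illustrative stand-ins for predefined categories.
labels = ["email marketing guide", "product pricing page", "account login"]
url = "/resources/email-marketing/b2b-deliverability-guide"

scores = {label: cosine(bow(url), bow(label)) for label in labels}
print(max(scores, key=scores.get))  # -> email marketing guide
```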
A URL that communicates nothing forces the model to work harder and introduces ambiguity in how the content gets categorized.
More practically, when an AI system cites a source in a response, it often surfaces the URL alongside the excerpt. That URL becomes visible to real users, in the same way it does in a search result, and they’re going to make real decisions about whether or not to click.
A clean, descriptive path builds trust in a way that something like /p?id=4821 never will.
The Core Principle Of URLs As Semantic Signals
Think of your URL structure as a secondary content layer – one that communicates hierarchy, topic, and specificity independently of the page title, H1, and other metadata.
A URL like /resources/seo/url-structure-ai-retrieval/ tells a retrieval system several things at once: This lives under a resources hub, it’s within an SEO category, and it covers a specific subtopic at a granular level.
That’s a useful signal. It maps to how AI systems try to understand content provenance and relevance before surfacing it in a response.
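A short sketch shows how much a retrieval system could read off a descriptive path before fetching the page at all. The helper and its output fields are illustrative, not any engine’s actual schema:
```python
from urllib.parse import urlparse

def url_signals(url: str) -> dict:
    """Read the hierarchy a descriptive path communicates, top-down."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    return {
        "hub": segments[0] if segments else None,
        "category": segments[1] if len(segments) > 1 else None,
        "topic": segments[-1].replace("-", " ") if segments else None,
        "depth": len(segments),
    }

print(url_signals("https://example.com/resources/seo/url-structure-ai-retrieval/"))
# {'hub': 'resources', 'category': 'seo',
#  'topic': 'url structure ai retrieval', 'depth': 3}
```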
This matters especially for:

Long-tail and question-based queries, where AI systems are looking for precise matches to specific information needs.
Topical authority, where your URL hierarchy can reinforce that your domain owns a subject area.
Citation quality, where a descriptive URL increases the likelihood an AI agent references your content over a competitor’s near-identical page.

Practical Architecture Principles
There are a number of practical architecture principles that you should consider for both traditional search as well as AI search.
Use A Logical, Shallow Hierarchy
Deep nesting (e.g., /blog/category/subcategory/year/month/post-title/) creates noise, and your content is multiple steps away from the homepage. A structure three levels deep is almost always sufficient, i.e., domain > category > specific page. There are some CMS setups, like Shopify, where you are forced into four or five levels, depending on your theme (e.g., domain/blog/name-of-blog/blog-post-title/), but as long as you’re adding meaningful context and not administrative clutter, your structure will be aligned with the principle.
Make Every Segment Human-Readable And Descriptive
Avoid abbreviations, internal jargon, or ID numbers in public-facing URLs. A URL like /ai-search-optimization communicates the topic directly, whereas a URL like /aso-v2 communicates nothing without prior knowledge.
Align URL Slugs With The Actual Search Intent, Not Just The Keyword
There’s a big difference between /email-marketing and /email-marketing-best-practices-b2b. The second one signals specificity. It’s more likely to surface when an AI system is generating a response to a precise question, because the URL itself narrows the relevance scope before the content is even parsed.
Be Consistent With Category Naming Across Your Site
If your content strategy uses /guides/ for long-form education content and /blog/ for shorter commentary, maintain that consistently. It’s likely that AI retrieval systems build a model of your site structure over time. Inconsistency blurs the signal about what type of content lives where.
Avoid Keyword Stuffing In URLs
This is old SEO advice, but it also applies here. A URL crammed with keywords looks spammy to human users who see it cited in an AI response, which undermines the trust benefit you’re trying to build. One primary keyword or phrase per segment is the right call.
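These principles are easy to turn into an automated check. The sketch below encodes a few of them as heuristics; the depth limit, ID pattern, and stuffing threshold are assumptions you would tune for your own site:
```python
import re

MAX_DEPTH = 3  # domain > category > specific page, per the guidance above

def check_path(path: str) -> list[str]:
    """Flag URL paths that break the principles above. Heuristics, not law."""
    problems = []
    segments = [s for s in path.strip("/").split("/") if s]
    if len(segments) > MAX_DEPTH:
        problems.append(f"too deep ({len(segments)} levels)")
    for seg in segments:
        if re.search(r"\d{3,}", seg):
            problems.append(f"looks like an ID or date: {seg!r}")
        if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", seg):
            problems.append(f"not a clean hyphenated slug: {seg!r}")
        if seg.count("-") > 7:
            problems.append(f"possible keyword stuffing: {seg!r}")
    return problems

print(check_path("/blog/2024/03/email-tips-part-4"))
# -> ['too deep (4 levels)', "looks like an ID or date: '2024'"]
print(check_path("/resources/email-marketing/b2b-deliverability-guide"))
# -> []
```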
What Does This Look Like In Practice
If two different marketers are writing about the same topic, the URL structure could be key for RAG systems to better understand the context of the page as part of content retrieval.
An example:
Marketer A publishes /blog/2024/03/email-tips-part-4.
Marketer B publishes /resources/email-marketing/b2b-deliverability-guide.
Marketer B’s URL structure properly communicates hierarchy (resources hub), category (email marketing), and a specific focus (B2B deliverability) before a single word of body copy is processed.
Users are also more likely to benefit from this URL being cited because they can make sense of it immediately.
This type of clarity and specificity can compound: Your URL structure and your site’s information architecture dictate the topical structure of the entire site, helping to communicate both expertise and relevance.
The Redirect & Consolidation Problem
This is more relevant to enterprise sites that have accumulated URL debt like redirects, duplicate paths, and inconsistent slugs due to historical content management system migrations.
This could create a specific problem for AI retrieval if there are redirect chains and duplicate paths, as crawlers may not consistently land on the canonical version of a page, and different retrieval systems handle redirect resolution differently.
A practical fix is to prioritize your website’s URLs. Audit your highest traffic and highest value pages, and confirm that their canonical URLs are clean, accessible, and structured in line with your current taxonomy.
Then work backward.
You don’t need to restructure the entire site for the chance of being cited in AI responses, but especially for your highest value pages, you should ensure that you’re offering the best possible URL signals.
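One way to run that audit is to follow each priority URL and log any redirect hops, as in this sketch. It assumes the third-party requests package, and the URL list is a placeholder:
```python
import requests

# Replace with your own audited, high-value URLs.
high_value_urls = [
    "https://example.com/old-guide",
    "https://example.com/resources/email-marketing/b2b-deliverability-guide",
]

for url in high_value_urls:
    r = requests.get(url, allow_redirects=True, timeout=10)
    hops = [resp.url for resp in r.history]  # each intermediate redirect
    if hops:
        print(f"{url} -> {r.url} via {len(hops)} hop(s): {hops}")
    else:
        print(f"{url} resolves directly ({r.status_code})")
```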
What You Should Avoid Changing
It’s important not to always chase the big and shiny, so don’t completely restructure your entire site’s URL architecture just for marginal AI retrieval gains.
URL restructuring carries real SEO risk and time to recover link equity if 301 redirects are put in place – and there have been many web migration horror stories that can attest to what can happen when they’re not implemented correctly.
The goal is to apply these principles to new content and flag structural problems in existing high-value pages where the case to remediate these issues is clear and lower risk.
If your current URL structure already follows clean, descriptive, hierarchical conventions (which is all a standard part of SEO best practice), then congratulations! You’ve been optimizing for AI retrieval without even knowing.
In Summary
URL structure has always been a relatively small signal, but as AI assistants become a more meaningful discovery channel, your URLs have the potential to be cited in more places than just Google and Bing.
They can help you to appear in AI-generated answers, they can shape citation quality, and they can contribute to how retrieval systems will categorize your content before anything else.
Simply build URLs that tell the story of your content clearly, before the user clicks on it.

Featured Image: Vitya_M/Shutterstock


Is Your Small Business Showing Up in Local Search? Here’s How To Find Out [Webinar] via @sejournal, @lorenbaker

Most small business owners have a Google Business Profile. Few have optimized it for how customers are actually searching today. Local search has split across multiple surfaces.
Customers are using Google Maps, asking voice assistants like Siri and Alexa, checking Yelp and Facebook reviews, and getting answers straight from AI tools like ChatGPT, often before they ever visit a website. If your small business isn’t showing up across those touchpoints, you’re losing customers to competitors who are.
Why Local Search Visibility Is Harder Than It Used to Be
Ranking on Google used to be the whole game. Now, local SEO means making sure your business information is accurate, consistent, and optimized across every platform a nearby customer might use to find you. That includes AI-generated search results, which pull from a different set of signals than traditional rankings, and most small business owners haven’t had time to figure out what those signals are.
What You’ll Learn in This Free SEO Webinar

About The Speakers
Thryv’s small business trainers work directly with owners every day, which means their advice is grounded in what actually works for businesses with small teams and limited time. Their last SEJ webinar drew over 1,000 registrants, and this session goes even deeper on the local search and AI visibility questions small business owners are asking right now.
