GEO Platform Shutdown Sparks Industry Debate Over AI Search via @sejournal, @MattGSouthern

Benjamin Houy shut down Lorelight, a generative engine optimization (GEO) platform designed to track brand visibility in ChatGPT, Claude, and Perplexity, after concluding most brands don’t need a specialized tool for AI search visibility.

Houy writes that, after reviewing hundreds of AI answers, the brands mentioned most often share familiar traits: quality content, mentions in authoritative publications, strong reputation, and genuine expertise.

He claims:

“There’s no such thing as ‘GEO strategy’ or ‘AI optimization’ separate from brand building… The AI models are trained on the same content that builds your brand everywhere else.”

Houy explains in a blog post that customers liked Lorelight’s insights but often churned because the data didn’t change their tactics. In his view, users pursued the same fundamentals with or without GEO dashboards.

He argues GEO tracking makes more sense as one signal inside broader SEO suites rather than as a standalone product. He points to examples of traditional SEO platforms incorporating AI-style visibility signals into existing toolsets rather than creating a separate category.

Debate Snapshot: Voices On Both Sides

Reactions show a genuine split in how marketers see “AI search.”

Some SEO professionals applauded the back-to-basics message. Others countered with cases where assistant referrals appear meaningful.

Here are some of the responses published so far:

  • Lily Ray: “Thank you for being honest and for sharing this publicly. The industry needs to hear this loud and clear.”
  • Randall Choh: “I beg to differ. It’s a growing metric… LLM searches usually have better search intents that lead to higher conversions.”
  • Karl McCarthy: “You’re right that quality content + authoritative mentions + reputation is what works… That’s not a tool. It’s a network.”
  • Nikki Pilkington raised consumer-fairness questions about shuttering a product and whether prior GEO-promotional content should be updated or removed.

These perspectives capture the industry tension. Some see AI search as a new performance channel worth measuring. Others see the same brand signals driving outcomes across SEO, PR, and now AI assistants.

How “AI Search Visibility” Is Being Measured

Because assistants work differently from web search, measurement is still uneven.

Assistants surface brands in two main ways: by citing and linking sources directly in answers, and by guiding people into familiar web results.

Referral tracking can come through direct links, copy-and-paste, or branded search follow-ups.

Attribution is messy because not all assistants pass clear referrers. Teams often combine UTM tagging on shared links with branded-search lift, direct-traffic spikes, and assisted-conversion reports to triangulate “LLM influence.”

That patchwork makes case studies persuasive but hard to generalize.
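
For teams that still want a rough signal, the triangulation described above can be approximated in an analytics pipeline. The sketch below is illustrative only: the assistant hostnames and utm_source values are assumptions, not a standard, and many assistants strip referrers entirely.

from urllib.parse import urlparse, parse_qs

# Hypothetical assistant referrer hostnames; real referrer strings vary and are often absent.
ASSISTANT_HOSTS = {"chatgpt.com", "chat.openai.com", "perplexity.ai", "claude.ai"}
LLM_UTM_SOURCES = {"chatgpt", "perplexity", "claude", "llm"}

def classify_session(landing_url: str, referrer: str | None) -> str:
    """Label a session as 'llm-tagged', 'llm-referred', or 'other'."""
    params = parse_qs(urlparse(landing_url).query)
    # Links you share into assistants can carry a utm_source value you control.
    if params.get("utm_source", [""])[0].lower() in LLM_UTM_SOURCES:
        return "llm-tagged"
    if referrer:
        host = (urlparse(referrer).hostname or "").lower()
        if any(host == h or host.endswith("." + h) for h in ASSISTANT_HOSTS):
            return "llm-referred"
    return "other"

print(classify_session("https://example.com/?utm_source=chatgpt", None))        # llm-tagged
print(classify_session("https://example.com/pricing", "https://chatgpt.com/"))  # llm-referred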

Why This Matters

The main question is whether AI search needs its own optimization framework or if it primarily benefits from the same brand signals.

If Houy is correct, standalone GEO tools might only produce engaging dashboards that seldom influence strategy.

On the other hand, if the advocates are correct, overlooking assistant visibility could mean missing out on profitable opportunities between traditional search and LLM-referred traffic.

What’s Next

It’s likely that SEO platforms will continue to fold “AI visibility” into existing analytics rather than creating a separate category.

The safest path for businesses is to continue doing the brand-building work that assistants already reward, while testing assistant-specific measurements where they are most likely to pay off.


Featured Image: Roman Samborskyi/Shutterstock

Can You Use AI To Write For YMYL Sites? (Read The Evidence Before You Do) via @sejournal, @MattGSouthern

Your Money or Your Life (YMYL) covers topics that affect people’s health, financial stability, safety, or general welfare, and Google rightly applies measurably stricter algorithmic standards to these topics.

AI writing tools might promise to scale content production, but writing for YMYL requires more care and author credibility than other content. Can an LLM write content that is acceptable for this niche?

The bottom line is that AI systems fail at YMYL content, offering bland sameness where unique expertise and authority matter most. In the studies covered below, AI produced at least one unsupported medical claim in nearly half of responses and hallucinated court holdings at least 75% of the time.

This article examines how Google enforces YMYL standards, reviews the evidence of where AI fails, and explains why publishers relying on genuine expertise are positioning themselves for long-term success.

Google Treats YMYL Content With Algorithmic Scrutiny

Google’s Search Quality Rater Guidelines state that “for pages about clear YMYL topics, we have very high Page Quality rating standards” and these pages “require the most scrutiny.” The guidelines define YMYL as topics that “could significantly impact the health, financial stability, or safety of people.”

The algorithmic weight difference is documented. Google’s guidance states that for YMYL queries, the search engine gives “more weight in our ranking systems to factors like our understanding of the authoritativeness, expertise, or trustworthiness of the pages.”

The March 2024 core update demonstrated this differential treatment. Google announced expectations for a 40% reduction in low-quality content. YMYL websites in finance and healthcare were among the hardest hit.

The Quality Rater Guidelines create a two-tier system. Regular content can achieve “medium quality” with everyday expertise. YMYL content requires “extremely high” E-E-A-T levels. Content with inadequate E-E-A-T receives the “Lowest” designation, Google’s most severe quality judgment.

Given these heightened standards, AI-generated content faces a challenge in meeting them.

It might be an industry joke that early AI hallucinations advised people to eat stones, but it highlights a very serious issue. Users depend on the quality of the results they read online, and not everyone can separate fact from fiction.

AI Error Rates Make It Unsuitable For YMYL Topics

A Stanford HAI study from February 2024 tested GPT-4 with Retrieval-Augmented Generation (RAG).

Results: 30% of individual statements were unsupported, and nearly 50% of responses contained at least one unsupported statement. Google’s Gemini Pro produced fully supported responses only 10% of the time.

These aren’t minor discrepancies. GPT-4 RAG gave treatment instructions for the wrong type of medical equipment. That kind of error could harm patients during emergencies.

Money.com tested ChatGPT Search on 100 financial questions in November 2024. Only 65% of answers were fully correct; 29% were incomplete or misleading, and 6% were wrong.

The system sourced answers from less-reliable personal blogs, failed to mention rule changes, and didn’t discourage “timing the market.”

Stanford’s RegLab study testing over 200,000 legal queries found hallucination rates ranging from 69% to 88% for state-of-the-art models.

Models hallucinate at least 75% of the time on court holdings. The AI Hallucination Cases Database tracks 439 legal decisions where AI produced hallucinated content in court filings.

Men’s Journal published its first AI-generated health article in February 2023. Dr. Bradley Anawalt of the University of Washington Medical Center identified 18 specific errors.

He described “persistent factual mistakes and mischaracterizations of medical science,” including equating different medical terms, claiming unsupported links between diet and symptoms, and providing unfounded health warnings.

The article was “flagrantly wrong about basic medical topics” while having “enough proximity to scientific evidence to have the ring of truth.” That combination is dangerous. People can’t spot the errors because they sound plausible.

But even when AI gets the facts right, it fails in a different way.

Google Prioritizes What AI Can’t Provide

In December 2022, Google added “Experience” as the first pillar of its evaluation framework, expanding E-A-T to E-E-A-T.

Google’s guidance now asks whether content can “clearly demonstrate first-hand expertise and a depth of knowledge (for example, expertise that comes from having used a product or service, or visiting a place).”

This question directly targets AI’s limitations. AI can produce technically accurate content that reads like a medical textbook or legal reference. What it can’t produce is practitioner insight. The kind that comes from treating patients daily or representing defendants in court.

The difference shows in the content. AI might be able to give you a definition of temporomandibular joint disorder (TMJ). A specialist who treats TMJ patients can demonstrate expertise by answering real questions people ask.

What does recovery look like? What mistakes do patients commonly make? When should you see a specialist versus your general dentist? That’s the “Experience” in E-E-A-T, a demonstrated understanding of real-world scenarios and patient needs.

Google’s content quality questions explicitly reward this. The company encourages you to ask “Does the content provide original information, reporting, research, or analysis?” and “Does the content provide insightful analysis or interesting information that is beyond the obvious?”

The search company warns against “mainly summarizing what others have to say without adding much value.” That’s precisely how large language models function.

This lack of originality creates another problem. When everyone uses the same tools, content becomes indistinguishable.

AI’s Design Guarantees Content Homogenization

UCLA research documents what researchers term a “death spiral of homogenization.” AI systems default toward population-scale mean preferences because LLMs predict the most statistically probable next word.

Oxford and Cambridge researchers demonstrated this in the journal Nature. When they repeatedly trained an AI model on its own generated output, using images of dog breeds as the example, the system increasingly produced only common breeds, eventually resulting in “Model Collapse.”

A Science Advances study found that “generative AI enhances individual creativity but reduces the collective diversity of novel content.” Writers are individually better off, but collectively produce a narrower scope of content.

For YMYL topics where differentiation and unique expertise provide competitive advantage, this convergence is damaging. If three financial advisors use ChatGPT to generate investment guidance on the same topic, their content will be remarkably similar. That offers no reason for Google or users to prefer one over another.

Google’s March 2024 update focused on “scaled content abuse” and “generic/undifferentiated content” that repeats widely available information without new insights.

So, how does Google determine whether content truly comes from the expert whose name appears on it?

How Google Verifies Author Expertise

Google doesn’t just look at content in isolation. The search engine builds connections in its knowledge graph to verify that authors have the expertise they claim.

For established experts, this verification is robust. Medical professionals with publications on Google Scholar, attorneys with bar registrations, financial advisors with FINRA records all have verifiable digital footprints. Google can connect an author’s name to their credentials, publications, speaking engagements, and professional affiliations.

This creates patterns Google can recognize. Your writing style, terminology choices, sentence structure, and topic focus form a signature. When content published under your name deviates from that pattern, it raises questions about authenticity.

Building genuine authority requires consistency, so it helps to reference past work and demonstrate ongoing engagement with your field. Link author bylines to detailed bio pages. Include credentials, jurisdictions, areas of specialization, and links to verifiable professional profiles (state medical boards, bar associations, academic institutions).
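
One concrete way to make those credentials machine-readable is schema.org Person markup on the bio page. The sketch below is hypothetical; every name, URL, and identifier is a placeholder to adapt, not real data.

import json

# Hypothetical author profile; all values are placeholders.
author_jsonld = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Dr. Jane Example",
    "jobTitle": "Board-Certified Endocrinologist",
    "url": "https://example.com/authors/jane-example",
    "sameAs": [
        "https://scholar.google.com/citations?user=EXAMPLE",
        "https://www.examplestatemedicalboard.gov/license/0000",
        "https://www.linkedin.com/in/jane-example",
    ],
    "affiliation": {"@type": "Organization", "name": "Example University Medical Center"},
    "knowsAbout": ["thyroid disorders", "hormone therapy"],
}

# Embed the output in a <script type="application/ld+json"> tag on the bio page
# and point every byline at that page.
print(json.dumps(author_jsonld, indent=2))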

Most importantly, have experts write or thoroughly review content published under their names. Not just fact-checking, but ensuring the voice, perspective, and insights reflect their expertise.

The reason these verification systems matter goes beyond rankings.

The Real-World Stakes Of YMYL Misinformation

A 2019 University of Baltimore study calculated that misinformation costs the global economy $78 billion annually. Deepfake financial fraud affected 50% of businesses in 2024, with an average loss of $450,000 per incident.

The stakes differ from other content types. Non-YMYL errors cause user inconvenience. YMYL errors cause injury, financial mistakes, and erosion of institutional trust.

U.S. federal law prescribes up to 5 years in prison for spreading false information that causes harm, 20 years if someone suffers severe bodily injury, and life imprisonment if someone dies as a result. Between 2011 and 2022, 78 countries passed misinformation laws.

Validation matters more for YMYL because consequences cascade and compound.

Medical decisions delayed by misinformation can worsen conditions beyond recovery. Poor investment choices create lasting economic hardship. Wrong legal advice can result in loss of rights. These outcomes are irreversible.

Understanding these stakes helps explain what readers are looking for when they search YMYL topics.

What Readers Want From YMYL Content

People don’t open YMYL content to read textbook definitions they could find on Wikipedia. They want to connect with practitioners who understand their situation.

They want to know what questions other patients ask. What typically works. What to expect during treatment. What red flags to watch for. These insights come from years of practice, not from training data.

Readers can tell when content comes from genuine experience versus when it’s been assembled from other articles. When a doctor says “the most common mistake I see patients make is…” that carries weight AI-generated advice can’t match.

The authenticity matters for trust. In YMYL topics where people make decisions affecting their health, finances, or legal standing, they need confidence that guidance comes from someone who has navigated these situations before.

This understanding of what readers want should inform your strategy.

The Strategic Choice

Organizations producing YMYL content face a decision. Invest in genuine expertise and unique perspectives, or risk algorithmic penalties and reputational damage.

The addition of “Experience” to E-A-T in 2022 targeted AI’s inability to have first-hand experience. The Helpful Content Update penalized “summarizing what others have to say without adding much value,” an exact description of LLM functionality.

When Google enforces stricter YMYL standards and documented AI error rates run from roughly 30% to 88% across medical, financial, and legal tests, the risks outweigh the benefits.

Experts don’t need AI to write their content. They need help organizing their knowledge, structuring their insights, and making their expertise accessible. That’s a different role than generating content itself.

Looking Ahead

The value in YMYL content comes from knowledge that can’t be scraped from existing sources.

It comes from the surgeon who knows what questions patients ask before every procedure. The financial advisor who has guided clients through recessions. The attorney who has seen which arguments work in front of which judges.

The publishers who treat YMYL content as a volume game, whether through AI or human content farms, are facing a difficult path. The ones who treat it as a credibility signal have a sustainable model.

You can use AI as a tool in your process. You can’t use it as a replacement for human expertise.


Featured Image: Roman Samborskyi/Shutterstock

Google Discusses Digital PR Impact On AI Recommendations via @sejournal, @martinibuster

Google’s VP of Product for Google Search confirmed that PR activities may be helpful for ranking better in certain contexts and offered an explanation of how AI search works and what content creators should focus on to stay relevant to users.

PR Helps Sites Get Recommended By AI

One interesting point made in the podcast was that being mentioned by other sites can be beneficial if you want your site to be recommended by AI. Robby Stein didn’t say that this is a ranking factor. He said it in the context of showing how AI search works, noting that the behavior of AI is similar to how a human might research a question.

The context of Robby Stein’s answer was about what businesses should focus on to rank better in AI chat.

Stein’s answer implies the query fan-out technique, where, to answer a question, the AI performs multiple Google searches (the “questions it issues”).

Here’s his answer:

“Yeah, interestingly, the AI thinks a lot like a person would in terms of the kinds of questions it issues. And so if you’re a business and you’re mentioned in top business lists or from a public article that lots of people end up finding, those kinds of things become useful for the AI to find.”

The podcast host, Marina Mogilko, interrupted his answer to remark that this is about investing in PR. And Robby Stein agreed.

He continued:

“So it’s not really different from what you would do in that regard. I think ultimately, how else are you going to decide what business to go to? Well, you’d want to understand that.”

So the point he’s making is that, to decide whether a business should be recommended, the AI, like a human, searches Google to see which businesses other sites recommend. The podcast host connected that statement to PR, and Stein agreed. This aligns with anecdotal experience: not only Google’s AI but also ChatGPT answers recommendation-type queries with links to sites that recommend businesses. As the podcast host suggested and Stein seemed to agree, this raises the importance of PR work, that is, getting sites to mention your business.

Mogilko then noted that her friends might not have seen the articles that were published as a result of PR activities but that she notices that the AI does see those mentions and that the AI uses them in answers.

Robby agreed with her, affirming her observation, saying:

“That’s actually a good way of thinking about it because the way I mentioned before how our AI models work, they’re issuing these Google searches as a tool.”

Content Best Practices Are Key To Ranking In AI

Stein continued his answer, shifting the topic to what kind of content ranks well in an AI model. He said that the same best practices for making helpful, clear content also apply to ranking in AI.

Stein continued his answer:

“And so in the same way that you would optimize your website and think about how I make helpful, clear information for people? People search for a certain topic, my website’s really helpful for that. Think of an AI doing that search now. And then knowing for that query, here are the best websites given that question.

That’s now… will come into the context window of the model. And so when it renders a response and provides all of these links for you to go deeper, that website’s more likely to show up.

And so it’s a lot of that standard best practices around building great content really do apply in the AI age for sure.”

The takeaway here is that helpful and clear content is important for standard search, AI answers, and people.

The podcast host next asked Robby about reviews, candidly remarking that some people pay for reviews and asking how that would “affect the system.” Stein didn’t address the question about how paid reviews would affect AI answers, but he did circle back to affirming that AI behaves like a human might, implying that if you’re going to think about how the AI system approaches answering a question, think of it in terms of how a human could go about it.

Stein answered:

“It’s hard. I mean, the reviews, I think, again, it’s kind of like a person where like imagine something is scanning for information and trying to find things that are helpful. So it’s possible that if you have reviews that are helpful, it could come up.

But I think it’s tricky to say to pinpoint any one thing like that. I think ultimately it’s about these general best practices where you want is reliable. Kind of like if you were to Google something, what pages would show up at the top of that query? It’s still a good way of thinking about it.”

AI Visibility Overlaps With SEO

At this point, the host responded to Stein’s answer by asking if optimizing for AI is “basically the same as SEO?”

Stein answered that there’s an overlap with SEO, but that the questions are different between regular organic search and AI. The implication is that organic search tends to have keyword-based queries, and AI is conversational.

Here’s Stein’s answer:

“I think there’s a lot of overlap. I think maybe one added nuance is that the kinds of questions that people ask AI are increasingly complicated and they tend to be in different spaces.

…And so if you think about what people use AI for, a lot of it is how to for complicated things or for purchase decisions or for advice about life things.

So people who are creating content in those areas, like if I were them, I would be a student of understanding the use cases of AI and what are growing in those use cases.

And there’s been some studies that have done around how people use these products in AI.

Those are really interesting to understand.”

Stein advised content creators to study how people are using AI to find answers to specific questions. He seemed to put some emphasis on this, so it appears to be something important to pay attention to.

Understand How People Use AI

This next part changes direction to emphasize that search is transforming beyond simple text search and going multimodal. A modality, in computer science terms, is a type of information such as text, images, speech, or video. This circles back to studying how users interact with AI, in this case expanding to include the modality of the information.

The podcast host asked the natural follow-up question to what Stein previously said about the overlap with SEO, asking how business owners can understand what people are looking for and whether Google Trends is useful for this.

Stein affirmed that Google Trends is useful for this purpose.

He responded:

“Google Trends is a really useful thing. I actually think people really underutilize that. Like we have real-time information around exactly what’s trending. You can see keyword values.

I think also, you know, the ads has a really fantastic estimation too. Like as you’re booking ads, you can see kind of traffic estimates for various things. So there’s Google has a lot of tools across ads, across the search console and search trends to get information about what people are searching for.

And I think that’s going to increasingly be more interesting as, a lot more of people’s time and attention goes towards not just the way people use search too, but in these areas that are growing quickly, particularly these long specific questions people ask and multimodal, where they’re asking with images or they’re using voice to have live conversation.”

Stein’s response suggests that SEOs and businesses may want to go beyond keyword-based research toward understanding intent across the multiple ways users interact with AI. We’re in a moment of volatility where recognizing the context and purpose behind how people search is becoming important.

The two takeaways that I think are important are:

  1. Long and specific questions
  2. Multimodal contexts

What makes that important is that Stein confirmed these kinds of searches are growing quickly. Businesses and SEOs should therefore be asking: will my business or client show up if a person searches by voice using a lot of specific details? Will they show up if people use images to search? Image SEO may become increasingly important as more people transition to finding things using AI.

Google Wants To Provide More Information

The host followed up by asking if Google would be providing more information about how users are searching, and Stein confirmed that in the future that’s something they want to do, not just for advertisers but for everyone who is impacted by AI search.

He answered:

“I think down the road we want to get, provide a glimpse into what people are searching for broadly. Yeah. Not just advertisers too. Yeah, it could be forever for anyone.

But ultimately, I think more and more people are searching in these new ways and so the systems need to better reflect those over time.”

Watch the interview at about the 13:30 minute mark:

Featured Image by Shutterstock/Krot_Studio

Discounted ChatGPT Go Is Now Available In 98 Countries via @sejournal, @martinibuster

ChatGPT Go, OpenAI’s heavily discounted version of ChatGPT, is now available in 98 countries, including eight European countries and five Latin American countries.

ChatGPT Go offers everything included in the Free plan, and more: expanded access to GPT-5, image generation, extended file upload capabilities, a larger context window, and collaboration features. ChatGPT Go is available on both Android and Apple mobile apps and on macOS and Windows desktops.

The eight new European countries where ChatGPT Go is now available are:

  1. Austria
  2. Czech Republic
  3. Denmark
  4. Norway
  5. Poland
  6. Portugal
  7. Spain
  8. Sweden

The five Latin American countries are:

  1. Bolivia
  2. Brazil
  3. El Salvador
  4. Honduras
  5. Nicaragua

The full ChatGPT availability list is here. Note: The official list doesn’t list Sweden, but Sweden appears in the official changelog.

Featured Image by Shutterstock/Nithid

How Agentic Browsers Will Change Digital Marketing via @sejournal, @DuaneForrester

The footprint of large language models keeps expanding. You see it in productivity suites, CRM, ERP, and now in the browser itself. When the browser thinks and acts, the surface you optimize for changes. That has consequences for how people find, decide, and buy.

Microsoft shows how quickly this footprint can spread across daily work. Microsoft says nearly 70% of the Fortune 500 now use Microsoft 365 Copilot. The company also reports momentum through 2025 customer stories and events. These numbers do not represent unique daily users across every product; rather, they signal reach into large enterprises where Microsoft already has distribution.

Google is pushing Gemini across Search, Workspace, and Cloud. Google highlights Gemini inside Search’s AI Mode and AI Overviews, and claims billions of monthly AI assists across Workspace. Google also points to customers putting Gemini to work across industries and reports average time savings in Workspace studies. In education, Google says Gemini for Education now reaches more than 10 million U.S. college students.

Salesforce and SAP are bringing agents into core enterprise flows. Salesforce announced Agentforce and the Agentic Enterprise, with updates in 2025 that focus on visibility and control for scaled agent deployments. SAP positioned Joule as its AI copilot and added collaborative AI agents across business processes at TechEd 2024, with ongoing releases in 2025.

And with all of that as the backdrop, should we be surprised that the browser is the next layer?

Agentic Browsers (Image Credit: Duane Forrester)

What Is An Agentic Browser?

A traditional browser shows you pages and links. An agentic browser interprets the page, carries context, and can act on your behalf. It can read, synthesize, click, fill forms, and complete tasks. You ask for an outcome. It gets you there.

Perplexity’s Comet positions itself as an AI-first browser that works for you. Reuters covered its launch and the pitch to challenge Chrome’s dominance, and The Verge reports that Comet is now available to everyone for free, after a staged rollout.

Security has already surfaced as a real issue for agentic browsers. Brave’s research describes indirect prompt injection in Comet, Guardio has published similar findings, and coverage in the trade press highlights the risk of agent-led flows being manipulated.

Now OpenAI has launched ChatGPT Atlas, a browser with ChatGPT at the core and an Agent Mode for task execution.

Why This Matters To Marketing

If the browser acts, people click less and complete more tasks in place. That compresses discovery and decision steps. It raises the bar for how your content gets selected, summarized, and executed against. Martech’s analysis points to a redefined search and discovery experience when browsers bring agentic and conversational layers to the fore.

You should expect four big shifts.

Search And Discovery

Agentic flows reduce list-based searching. The agent decides which sources to read, how to synthesize, and what to do with the result. Your goal shifts from ranking to getting selected by an agent that is optimizing for the user’s preferences and constraints. That may lower raw click volumes and raise the value of being the canonical source for a clear, task-oriented answer.

Content And Experience

Content needs to be agent-friendly. That means clear structure, strong headings, accurate metadata, concise summaries, and explicit steps. You are writing for two audiences. The human who skims. The agent that must parse, validate, and act. You also need task artifacts. Checklists. How-to flows. Short-form answers that are safe to act on. If your page is the long version, your agent-friendly artifact is the short version. Both matter.

CRM And First-Party Data

Agents may mediate more of the journey. You need earlier value exchanges to earn consent. You need clean APIs and structured data so agents can hand off context, initiate sessions, and trigger next best actions. You will also need to model events differently when some actions never hit your pages.

Attribution And Measurement

If an agent fills the cart or completes a form from the browser, you will not see traditional click paths. Define agent-mediated events. Track handoffs between browser agent and brand systems. Update your models so agent exposure and agent action can be credited. This is the same lesson marketers learned with assistants and chat surfaces. The browser now brings that dynamic to the mainstream.
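
A starting point is to give agent-mediated activity its own event schema so it can be reported alongside click-based channels. The sketch below is a minimal, hypothetical model; the event names, agent labels, and logging destination are assumptions to adapt to your own stack.

from dataclasses import dataclass, asdict, field
import json
import time

# Hypothetical event model for an "agent-mediated" channel; field and event names are illustrative.
@dataclass
class AgentEvent:
    event: str        # e.g., "agent_impression", "agent_handoff", "agent_conversion"
    agent: str        # which assistant or agentic browser was involved, if known
    session_id: str
    value: float = 0.0
    ts: float = field(default_factory=time.time)

def log_event(evt: AgentEvent) -> None:
    # In practice this would write to your analytics warehouse; printing keeps the sketch self-contained.
    print(json.dumps(asdict(evt)))

# An agent completes a form through your API: no pageview fires, but the handoff is still credited.
log_event(AgentEvent(event="agent_handoff", agent="comet", session_id="abc123"))
log_event(AgentEvent(event="agent_conversion", agent="atlas", session_id="abc123", value=49.0))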

What To Do Now

Start With Content

Audit your top 10 discovery and consideration assets. Tighten structure. Add short summaries and task snippets that an agent can lift safely. Add schema markup where it makes sense. Make dates and facts explicit. Your goal is clarity that a machine can parse and that a person can trust. The Martech analysis cited above explains why this matters.

Build Better Machine Signals

Use schema.org where it helps understanding. Ensure feeds, sitemaps, Open Graph, and product data are complete and current. If you have APIs that expose inventory, pricing, appointments, or availability, document them clearly and make developer access straightforward.
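
A simple completeness check catches the most common gaps before an agent ever hits the page. The sketch below uses only the Python standard library; the required-tag set and the commented-out URL are assumptions, and a production check would also cover product data and sitemaps.

import urllib.request
from html.parser import HTMLParser

REQUIRED_OG = {"og:title", "og:description", "og:image", "og:url"}

class MetaCollector(HTMLParser):
    """Collects <meta property=... content=...> pairs from a page."""
    def __init__(self):
        super().__init__()
        self.found = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr = dict(attrs)
            prop = attr.get("property") or attr.get("name")
            if prop:
                self.found[prop] = attr.get("content", "")

def missing_open_graph(url: str) -> set[str]:
    """Return the required Open Graph properties that the page does not declare."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = MetaCollector()
    parser.feed(html)
    return REQUIRED_OG - set(parser.found)

# missing = missing_open_graph("https://example.com/products/ets-042")  # placeholder URL
# print(missing or "All required Open Graph tags present")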

Map Agent-First Journeys

Draft a simple flow for how your category works when the browser is the assistant. Query. Synthesis. Selection. Action. Handoff. Conversion. Then decide where you can add value. This is not only about SEO. It is about being callable by an agent to help someone finish a task with less friction.

Rethink Metrics

Define what counts as an agent impression and an agent conversion for your brand. Tag flows where the agent initiates the session. Set targets for assisted conversions that originate in agent environments. Treat this as a separate channel for planning.

Run Small Tests

Try optimizing one or two pages for agent selection and summarizability. Instrument the flows. If there are early integrations or pilots available with agent browsers, get on the list and learn fast. For competitive context, it is useful to watch how quickly Atlas and Comet gain traction relative to incumbent browsers. Sources on current market share are below.

Why Timing Matters

We have seen how fast browsers can grow when they meet a new need. Google launched Chrome in 2008. Within a year, it was already climbing the charts. Ars Technica covered Chrome’s 1.0 release on December 11, 2008. StatCounter Press said Chrome exceeded 20% worldwide in June 2011, up from 2.8% in June 2009. By May 2012, StatCounter reported Chrome overtook Internet Explorer for the first full month. Annual StatCounter data for 2012 shows Chrome at 31.42%, Internet Explorer at 26.47%, and Firefox at 18.88%.

Firefox had its own rapid start earlier in the 2000s. Mozilla announced 50 million Firefox downloads in April 2005 and 100 million by October 2005, less than a year after 1.0. Contemporary reporting placed Firefox at roughly 9 to 10% market share by late 2005 and 18% by mid-2008.

Microsoft Edge entered later. Edge originally shipped in 2015, then relaunched on Chromium in January 2020. Edge has fluctuated. Recent coverage says Edge lost share over the summer of 2025 on desktop, citing StatCounter.

For an executive snapshot of the current landscape, StatCounter’s September 2025 worldwide totals show Chrome at about 71.8%, Safari at about 13.9%, Edge at about 4.7%, Firefox at about 2.2%, Samsung Internet at about 1.9%, and Opera at about 1.7%.

What This History Tells Us

Each major browser shift came with a clear promise. Netscape made the web accessible. Internet Explorer bundled it with the operating system. Firefox made it safer and more private. Chrome made it faster and more reliable. Every breakthrough paired capability with trust. That pattern will repeat here.

Agentic browsers can only scale if they prove both utility and safety. They must handle tasks faster and more accurately than people, without introducing new risks. Security research around Comet shows what happens when that balance tips the wrong way. If users see agentic browsing as unpredictable or unsafe, adoption slows. If it saves them time and feels dependable, adoption accelerates. History shows that trust, not novelty, drives the curves that turn experiments into standards.

For marketers, that means your work will increasingly live inside systems where trust and clarity are prerequisites. Agents will need unambiguous facts, consistent markup, and licensing that spells out how your content can be reused. Brands that make that easy will be indexed, quoted, and recommended. Brands that make it hard will vanish from the new surface before they even know it exists.

How To Position Your Brand For Agentic Browsing

Keep your approach simple and disciplined. Make your best content easy to select, summarize, and act on. Structure it tightly, keep data fresh, and ensure everything you publish can stand on its own when pulled out of context. Give agents clean, accurate snippets they can carry forward without risk of misrepresentation.

Expose the data and signals that let agents work with you. APIs, feeds, and machine-readable product information reduce guesswork. If agents can confirm availability, pricing, or location from a trusted feed, your brand becomes a reliable component in the user’s automated flow. Pair that with clear permissions on how your data can be displayed or executed, so platforms have a reason to include you without fear of legal exposure.
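
For product businesses, the lowest-friction version of that trusted feed is structured data the agent can parse directly. The sketch below shows hypothetical schema.org Product markup with an Offer carrying price and availability; every value is a placeholder.

import json

# Hypothetical product record; names, SKUs, prices, and URLs are placeholders.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Shoe",
    "sku": "ETS-042",
    "brand": {"@type": "Brand", "name": "Example Brand"},
    "offers": {
        "@type": "Offer",
        "price": "119.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://example.com/products/ets-042",
        "priceValidUntil": "2026-01-31",
    },
}

# Keep this in sync with the same inventory source that feeds your site and APIs.
print(json.dumps(product_jsonld, indent=2))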

Treat agent-mediated activity as its own marketing channel. Name it. Measure it. Fund it. You are early, so your metrics will change as you learn, but the act of measuring will force better questions about what visibility and conversion mean when browsers complete tasks for users. The first teams to formalize this channel will understand its economics long before competitors notice the traffic shift.

Finally, stay close to the platform evolution. Watch every release of OpenAI’s Atlas and Perplexity’s Comet. Track Google’s response as it blends Gemini deeper into Chrome and Search. The pace will feel familiar (like the late 2000s browser race), but the consequences will be larger. When the browser becomes an agent, it doesn’t just display the web; it intermediates it. Every business that relies on discovery, trust, or conversion will feel that change.

The Takeaway

Agentic browsers will not replace marketing, but they will reshape how attention, trust, and action flow online. The winners will be brands that think like system integrators (clear data, structured content, and dependable facts) because those are the materials agents build with. This is the early moment before the inflection point, the time to experiment while risk is low and visibility is still yours to claim.

History shows that when browsers evolve, the web follows. This time, the web won’t just render pages. It will think, decide, and act. Your job is to make sure that when it does, it acts in your favor.

Looking ahead, even a modest 10 to 15% adoption rate for agentic browsers within three years would represent one of the fastest paradigm shifts since Chrome’s launch. For marketers, that scale means the agent layer will become a measurable channel, and every optimization choice made now – how your data is structured, how your content is summarized, how trust is signaled – will compound its impact later.


This post was originally published on Duane Forrester Decodes.


Featured Image: Roman Samborskyi/Shutterstock

Anthropic Research Shows How LLMs Perceive Text via @sejournal, @martinibuster

Researchers from Anthropic investigated Claude 3.5 Haiku’s ability to decide when to break a line of text within a fixed width, a task that requires the model to track its position as it writes. The study yielded the surprising result that language models form internal patterns resembling the spatial awareness that humans use to track location in physical space.

Andreas Volpini tweeted about this paper and made an analogy to chunking content for AI consumption. In a broader sense, his comment works as a metaphor for how both writers and models navigate structure, finding coherence at the boundaries where one segment ends and another begins.

This research paper, however, is not about reading content but about generating text and identifying where to insert a line break in order to fit the text into an arbitrary fixed width. The purpose of doing that was to better understand what’s going on inside an LLM as it keeps track of text position, word choice, and line break boundaries while writing.

The researchers created an experimental task of generating text with a line break at a specific width. The purpose was to understand how Claude 3.5 Haiku decides on words to fit within a specified width and when to insert a line break, which required the model to track the current position within the line of text it is generating.

The experiment demonstrates how language models learn structure from patterns in text without explicit programming or supervision.

The Linebreaking Challenge

The linebreaking task requires the model to decide whether the next word will fit on the current line or if it must start a new one. To succeed, the model must learn the line width constraint (the rule that limits how many characters can fit on a line, like in physical space on a sheet of paper). To do this the LLM must track the number of characters written, compute how many remain, and decide whether the next word fits. The task demands reasoning, memory, and planning. The researchers used attribution graphs to visualize how the model coordinates these calculations, showing distinct internal features for the character count, the next word, and the moment a line break is required.
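
The task itself is easy to state in ordinary code. The toy sketch below reproduces the greedy decision the model has to make token by token; the research question is how the model represents this bookkeeping internally, not the algorithm itself.

def wrap(words: list[str], width: int) -> list[str]:
    """Greedy line wrapping: track the running character count and break when the next word won't fit."""
    lines, current, count = [], [], 0
    for word in words:
        needed = len(word) + (1 if current else 0)  # +1 for the space before the word on a non-empty line
        if current and count + needed > width:      # next word won't fit: insert a line break
            lines.append(" ".join(current))
            current, count = [word], len(word)
        else:
            current.append(word)
            count += needed
    if current:
        lines.append(" ".join(current))
    return lines

text = "Researchers tested whether the model tracks its position while generating text".split()
print("\n".join(wrap(text, width=24)))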

Continuous Counting

The researchers observed that Claude 3.5 Haiku represents line character counts not as counting step by step, but as a smooth geometric structure that behaves like a continuously curved surface, allowing the model to track position fluidly (on the fly) rather than counting symbol by symbol.

Something else that’s interesting is that they discovered the LLM had developed a boundary head (an “attention head”) that is responsible for detecting the line boundary. An attention mechanism weighs the importance of what is being considered (tokens). An attention head is a specialized component of the attention mechanism of an LLM. The boundary head, which is an attention head, specializes in the narrow task of detecting the end of line boundary.

The research paper states:

“One essential feature of the representation of line character counts is that the “boundary head” twists the representation, enabling each count to pair with a count slightly larger, indicating that the boundary is close. That is, there is a linear map QK which slides the character count curve along itself. Such an action is not admitted by generic high-curvature embeddings of the circle or the interval like the ones in the physical model we constructed. But it is present in both the manifold we observe in Haiku and, as we now show, in the Fourier construction. “

How Boundary Sensing Works

The researchers found that Claude 3.5 Haiku knows when a line of text is almost reaching the end by comparing two internal signals:

  1. How many characters it has already generated, and
  2. How long the line is supposed to be.

The aforementioned boundary attention heads decide which parts of the text to focus on. Some of these heads specialize in spotting when the line is about to reach its limit. They do this by slightly rotating or lining up the two internal signals (the character count and the maximum line width) so that when they nearly match, the model’s attention shifts toward inserting a line break.

The researchers explain:

“To detect an approaching line boundary, the model must compare two quantities: the current character count and the line width. We find attention heads whose QK matrix rotates one counting manifold to align it with the other at a specific offset, creating a large inner product when the difference of the counts falls within a target range. Multiple heads with different offsets work together to precisely estimate the characters remaining. “

Final Stage

At this stage of the experiment, the model has already determined how close it is to the line’s boundary and how long the next word will be. The last step is to use that information.

Here’s how it’s explained:

“The final step of the linebreak task is to combine the estimate of the line boundary with the prediction of the next word to determine whether the next word will fit on the line, or if the line should be broken.”

The researchers found that certain internal features in the model activate when the next word would cause the line to exceed its limit, effectively serving as boundary detectors. When that happens, the model raises the chance of predicting a newline symbol and lowers the chance of predicting another word. Other features do the opposite: they activate when the word still fits, lowering the chance of inserting a line break.

Together, these two forces, one pushing for a line break and one holding it back, balance out to make the decision.

Can Models Have Visual Illusions?

The next part of the research is kind of incredible because they endeavored to test whether the model could be susceptible to visual illusions that would trip it up. They started from the way humans can be tricked by visual illusions that present a false perspective, making lines of the same length appear to be different lengths, one shorter than the other.

Screenshot of a visual illusion: two lines of equal length with arrowheads at each end, one pair pointing inward and the other outward, giving the illusion that one line is longer than the other.

The researchers inserted artificial tokens, such as “@@,” to see how they disrupted the model’s sense of position. These tests caused misalignments in the internal patterns the model uses to keep track of position, similar to visual illusions that trick human perception. The model’s sense of line boundaries shifted, showing that its perception of structure depends on context and learned patterns. Even though LLMs don’t see, disrupting the relevant attention heads distorts their internal organization in a way that parallels how humans misjudge what they see.

They explained:

“We find that it does modulate the predicted next token, disrupting the newline prediction! As predicted, the relevant heads get distracted: whereas with the original prompt, the heads attend from newline to newline, in the altered prompt, the heads also attend to the @@.”

They wondered whether there was something special about the @@ characters or whether any other random characters would disrupt the model’s ability to successfully complete the task. So they ran a test with 180 different sequences and found that most of them did not disrupt the model’s ability to predict the line break point. Only a small group of code-related characters were able to distract the relevant attention heads and disrupt the counting process.

LLMs Have Visual-Like Perception For Text

The study shows how text-based features evolve into smooth geometric systems inside a language model. It also shows that models don’t only process symbols, they create perception-based maps from them. This part, about perception, is to me what’s really interesting about the research. They keep circling back to analogies related to human perception and how those analogies keep fitting into what they see going on inside the LLM.

They write:

“Although we sometimes describe the early layers of language models as responsible for “detokenizing” the input, it is perhaps more evocative to think of this as perception. The beginning of the model is really responsible for seeing the input, and much of the early circuitry is in service of sensing or perceiving the text similar to how early layers in vision models implement low level perception.”

Then a little later they write:

“The geometric and algorithmic patterns we observe have suggestive parallels to perception in biological neural systems. …These features exhibit dilation—representing increasingly large character counts activating over increasingly large ranges—mirroring the dilation of number representations in biological brains. Moreover, the organization of the features on a low dimensional manifold is an instance of a common motif in biological cognition. While the analogies are not perfect, we suspect that there is still fruitful conceptual overlap from increased collaboration between neuroscience and interpretability.”

Implications For SEO?

Arthur C. Clarke wrote that advanced technology is indistinguishable from magic. I think that once you understand a technology, it becomes more relatable and less like magic. Not all knowledge has a utilitarian use, and I think understanding how an LLM perceives content is useful to the extent that it’s no longer magical. Will this research make you a better SEO? It deepens our understanding of how language models organize and interpret content structure, making them more understandable and less like magic.

Read about the research here:

When Models Manipulate Manifolds: The Geometry of a Counting Task

Featured Image by Shutterstock/Krot_Studio

Google Labs & DeepMind Launch Pomelli AI Marketing Tool via @sejournal, @MattGSouthern

Pomelli, a Google Labs & DeepMind AI experiment, builds a “Business DNA” from your site and generates editable branded campaign assets for small businesses.

  • Pomelli scans your website to create a “Business DNA” profile.
  • It uses the created profile to keep content consistent across channels.
  • It suggests campaign ideas and generates editable marketing assets.

Why The Build Process Of Custom GPTs Matters More Than The Technology Itself

When Google introduced the transformer architecture in its 2017 paper “Attention Is All You Need,” few realized how much it would help transform digital work. Transformer architecture laid the foundations for today’s GPTs, which are now part of our daily work in SEO and digital marketing.

Search engines have used machine learning for decades, but it was the rise of generative AI that made many of us actively explore AI. AI platforms and tools like custom GPTs are already influencing how we research keywords, generate content ideas, and analyze data.

The real value, however, is not in using these tools to cut corners. It lies in designing them intentionally, aligning them with business goals, and ensuring they serve users’ needs.

This article is not a tutorial on how to build GPTs. I share why the build process itself matters, what I have learned so far, and how SEOs can use this product mindset to think more strategically in the age of AI.

From Barriers To Democratization

Not long ago, building tools without coding experience meant relying on developers, dealing with long lead times, and waiting for vendors to release new features. That has changed slightly. The democratization of technology has lowered the entry barriers, making it possible for anyone with curiosity to experiment with building tools like custom GPTs. At the same time, expectations have necessarily risen, as we expect tools to be intuitive, efficient, and genuinely useful.

This is a reason why technical skills still matter. But they’re not enough on their own. What matters more, in my opinion, is how we apply them. Are we solving a real problem? Are we creating workflows that align with business needs?

The strategic questions SEOs should be asking are no longer just “Can I build this?,” but:

  • Should I build this?
  • What problem am I solving, and for whom?
  • What’s the ultimate goal?

Why The Build Process Matters

Building a custom GPT is straightforward. Anyone can add a few instructions and click “save.” What really matters is what happens before and after: defining the audience, identifying the problem, scoping the work realistically, testing and refining outputs, and aligning them with business objectives.

In many ways, this is what good marketing has always been about: understanding the audience, defining their needs, and designing solutions that meet them.

As an international SEO, I’ve often seen cultural relevance and digital accessibility treated as afterthoughts. OpenAI’s custom GPTs offered me a way to explore whether AI could help address these challenges, especially since the tool is accessible to those of us without any coding expertise.

What began as a single project to improve cultural relevance in global SEO soon evolved into two separate GPTs when I realized the scope was larger than I could manage at the time.

That change wasn’t a failure, but a part of the process that led me toward a better solution.

Case Study: 2 GPTs, 1 Lesson

The Initial Idea

My initial idea was to build a custom GPT that could generate content ideas tailored to the UK, US, Canada, and Australia, taking both linguistic and cultural nuances into account.

As an international SEO, I know it is hard to engage global audiences who expect personalized experiences. Translation alone is not enough. Content must be linguistically accurate and contextually relevant.

This mirrors the wider shift in search itself. Users now expect personalized, context-driven results, and search engines are moving in that same direction.

A Change In Direction

As I began building, I quickly realized that the scope was bigger than expected. Capturing cultural nuance across four different markets while also learning how to build and refine GPTs required more time than I could commit at that moment.

Rather than leaving the project, I reframed it as a minimum viable product. I revisited the scope and shifted focus to another important challenge, but with a more consistent requirement – digital accessibility.

The accessibility GPT was designed to flag issues, suggest inclusive phrasing, and support internal advocacy. It adapted outputs to different roles, so SEOs, marketers, and project managers could each use it in relevant ways in their day-to-day work.

This wasn’t giving up on the content project. It was a deliberate choice to learn from one use case and apply those lessons to the next.

The Outcome

Working on the accessibility GPT first helped me think more carefully about scope and validation, which paid off.

As accessibility requirements are more consistent than cultural nuance, it was easier to refine prompts and test role-specific outputs, ensuring an inclusive, non-judgmental tone.

I shared the prototype with other SEOs and accessibility advocates. Their feedback was invaluable. Although their feedback was generally positive, they pointed out inconsistencies I hadn’t seen, including how I described the prompt in the GPT store.

After all, accessibility is not just about alt text or color contrast. It’s about how information is presented.

Once the accessibility GPT was running, I went back to the cultural content GPT, better prepared, with clearer expectations and a stronger process.

The key takeaway here is that the value lies not only in the finished product, but in the process of building, testing, and refining.

Risks And Challenges Along The Way

Not every risk became an issue, but the process brought its share of challenges.

The biggest was underestimating time and scope, which I solved by revisiting the plan and starting smaller. There were also platform limitations – ongoing model development, AI fatigue, and hallucinations. OpenAI itself has admitted that hallucinations are mathematically unavoidable. The best response is to be precise with prompts, keep instructions detailed, and always maintain a human-in-the-loop approach. GPTs should be seen as assistants, not replacements.

Collaboration added another layer of complexity. Feedback loops depended on colleagues’ availability, so I had to stay flexible and allow extra time. Their input, however, was crucial – I couldn’t have made progress without them. As none of these factors are under my control, I could only keep on top of developments and do my best to handle them.

These challenges reinforced an important truth: Building strategically isn’t about chasing perfection, but about learning, adapting, and improving with each iteration.

Applying Product Thinking

The process I followed was similar to how product managers approach new products. SEOs can adopt the same mindset to design workflows that are both practical and strategic.

Validate The Problem

Not every issue needs AI – and not every issue needs solving. Identify and prioritize what really matters at that time and confirm whether a custom GPT, or any other tool, is the right way to address it.

Define The Use Case

Who will use the GPT, and how? A wide reach may sound appealing, but value comes from meeting specific needs. Otherwise, success can quickly fade away.

My GPTs are designed to support SEOs, marketers, and project managers in different scenarios of their daily work.

Prototype And Test

There is real value in starting small. With GPTs, I needed to write clear, specific instructions, then review the outputs and refine.

For instance, instead of asking the accessibility GPT for general ideas on making a form accessible, I instructed it to act as an SEO briefing developers on fixes or as a project manager assigning tasks.

For the content GPT, I instructed it to act as a UK/U.S. content strategist, developing inclusive, culturally relevant ideas for specific publications in British English or Standard American English.
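
Custom GPTs are configured through written instructions rather than code, but the same role-framing can be sketched with the OpenAI API. Everything below is illustrative: the model name is a placeholder, and the prompts only gesture at the much longer instructions a real GPT would carry.

from openai import OpenAI  # assumes the openai SDK is installed and OPENAI_API_KEY is set

client = OpenAI()

# Illustrative role-specific instruction, in the spirit of the accessibility GPT described above.
ACCESSIBILITY_REVIEWER = (
    "Act as an SEO briefing developers on accessibility fixes. "
    "Flag issues, suggest inclusive phrasing, and keep a non-judgmental tone. "
    "Format the output as a prioritized task list a project manager could assign."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": ACCESSIBILITY_REVIEWER},
        {"role": "user", "content": "Review this form label: 'Click here to submit your info.'"},
    ],
)
print(response.choices[0].message.content)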

Iterate With Feedback

Bring colleagues and subject-matter experts into the process early. Their insights challenge assumptions, highlight inconsistencies, and make outputs more robust.

Keep On Top Of Developments

AI platforms evolve quickly, and processes also need to adapt to different scenarios. Product thinking means staying agile, adapting to change, and reassessing whether the tools we build still serve their purpose.

The failed roll-out of GPT-5 reminded me how volatile the landscape can be.

Practical Applications For SEOs

Why build GPTs when there are already so many excellent SEO tools available? For me, it was partly curiosity and partly a way to test what I could achieve with my existing skills before suggesting a collaboration for a different product.

Custom GPTs can add real value in specific situations, especially with a human-in-the-loop approach. Some of the most useful applications I have found include:

  • Analyzing campaign data to support decision-making.
  • Assisting with competitor analysis across global markets.
  • Supporting content ideation for international audiences.
  • Clustering keywords or highlighting internal linking opportunities.
  • Drafting documentation or briefs.

The point is not to replace established tools or human expertise, but to use GPTs as assistants within structured workflows. They can free up time for deeper thinking, while still requiring careful direction and review.

How SEOs Can Apply Product Thinking

Even if you never build a GPT, you can apply the same mindset in your day-to-day work. Here are a few suggestions:

  • Frame challenges strategically: Ask who the end user is, what they need, and what is broken in their experience. Don’t start with tactics without context.
  • Design repeatable processes: Build workflows that scale and evolve over time, instead of one-off fixes.
  • Test and learn: Treat tactics like prototypes. Run experiments and refine based on results. If A/B testing isn’t possible, as is often the case, at least stay open to adjusting course as the evidence comes in.
  • Collaborate across teams: SEO does not exist in isolation. Work with UX, development, and content teams early. The key is to find ways to add value to their work.
  • Redefine success metrics: Qualified traffic, conversions, and internal process improvements matter in the AI era. Success should reflect actual business impact.
  • Use AI strategically: Quick wins are tempting, but GPTs and other tools are best used to support structured workflows and highlight blind spots. Keep a human-in-the-loop approach to ensure outputs are accurate and relevant to your business needs.

Final Thought

The real innovation is not in the technology itself, but in how we choose to apply it.

We are now in the fifth industrial revolution, a time when humans and machines collaborate more closely than ever.

For SEOs, the opportunity is to move beyond tactical execution and start thinking like product strategists. That means asking sharper questions, testing hypotheses, designing smarter workflows, and creating solutions that adapt to real-world constraints.

It is about providing solutions, not just executing tasks.

Featured Image: SvetaZi/Shutterstock

The AI Search Visibility Audit: 15 Questions Every CMO Should Ask

This post was sponsored by IQRush. The opinions expressed in this article are the sponsor’s own.

Your traditional SEO is winning. Your AI visibility is failing. Here’s how to fix it.

Your brand dominates page one of Google. Domain authority crushes competitors. Organic traffic trends upward quarter after quarter. Yet when customers ask ChatGPT, Perplexity, or others about your industry, your brand is nowhere to be found.

This is the AI visibility gap, which causes missed opportunities in awareness and sales.

“SEO ranking on page one doesn’t guarantee visibility in AI search. The rules of ranking have shifted from optimization to verification.”

Raj Sapru, Chief Strategy Officer, Netrush

Recent analysis of AI-powered search patterns reveals a troubling reality: commercial brands with excellent traditional SEO performance often achieve minimal visibility in AI-generated responses. Meanwhile, educational institutions, industry publications, and comparison platforms consistently capture citations for product-related queries.

The problem isn’t your content quality. It’s that AI engines prioritize entirely different ranking factors than traditional search: semantic query matching over keyword density, verifiable authority markers over marketing claims, and machine-readable structure over persuasive copy.

This audit exposes 15 questions that separate AI-invisible brands from citation leaders.

We’re sharing the first 7 critical questions below, covering visibility assessment, authority verification, and measurement fundamentals. These questions will reveal your most urgent gaps and provide immediate action steps.

Question 1: Are We Visible in AI-Powered Search Results?

Why This Matters: Commercial brands with strong traditional SEO often achieve minimal AI citation visibility in their categories. A recent IQRush field audit found fewer than one in ten AI-generated answers included the brand, showing how limited visibility remains, even for strong SEO performers. Educational institutions, industry publications, and comparison sites dominate AI responses for product queries—even when commercial sites have superior content depth. In regulated industries, this gap widens further as compliance constraints limit commercial messaging while educational content flows freely into AI training data.

How to Audit:

  • Test core product or service queries through multiple AI platforms (ChatGPT, Perplexity, Claude)
  • Document which sources AI engines cite: educational sites, industry publications, comparison platforms, or adjacent content providers
  • Calculate your visibility rate: queries where your brand appears vs. total queries tested
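
As a rough illustration of the visibility-rate calculation above, here is a minimal Python sketch that works from a manually documented test log. The queries, platform names, and cited brands are invented placeholders; the AI answers themselves still have to be collected by hand or through each platform’s interface.

```python
# Minimal sketch: compute AI visibility rate per platform from a manual test log.
# Each record notes which brands a given platform cited for one test query.
# All data below is illustrative placeholder content.

test_log = [
    {"query": "best running shoes for flat feet", "platform": "ChatGPT",
     "cited_brands": ["Brand A", "Runner's World"]},
    {"query": "best running shoes for flat feet", "platform": "Perplexity",
     "cited_brands": ["YourBrand", "Brand B"]},
    {"query": "how to choose trail running shoes", "platform": "Claude",
     "cited_brands": ["Wirecutter"]},
]

def visibility_rate(log, brand):
    """Share of tested answers that mention the brand, plus a per-platform breakdown."""
    overall = sum(brand in r["cited_brands"] for r in log) / len(log)
    by_platform = {}
    for r in log:
        hits, total = by_platform.get(r["platform"], (0, 0))
        by_platform[r["platform"]] = (hits + (brand in r["cited_brands"]), total + 1)
    return overall, {p: h / t for p, (h, t) in by_platform.items()}

overall, per_platform = visibility_rate(test_log, "YourBrand")
print(f"Overall visibility rate: {overall:.0%}")
for platform, rate in per_platform.items():
    print(f"{platform}: {rate:.0%}")
```

A spreadsheet works just as well; the point is to keep the denominator (total queries tested) explicit so the rate stays comparable from one audit to the next.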

Action: If educational/institutional sources dominate, implement their citation-driving elements:

  • Add research references and authoritative citations to product content
  • Create FAQ-formatted content with an explicit question-answer structure
  • Deploy structured data markup (Product, FAQ, Organization schemas)
  • Make commercial content as machine-readable as educational sources

IQRush tracks citation frequency across AI platforms. Competitive analysis shows which schema implementations, content formats, and authority signals your competitors use to capture citations you’re losing.

Question 2: Are Our Expertise Claims Actually Verifiable?

Why This Matters: Machine-readable validation drives AI citation decisions: research references, technical standards, certifications, and regulatory documentation. Marketing claims like “industry-leading” or “trusted by thousands” carry zero weight. In one IQRush client analysis, more than four out of five brand mentions were supported by citations—evidence that structured, verifiable content is far more likely to earn visibility. Companies frequently score high on human appeal—compelling copy, strong brand messaging—but lack the structured authority signals AI engines require. This mismatch explains why brands with excellent traditional marketing achieve limited citation visibility.

How to Audit:

  • Review your priority pages and identify every factual claim made (performance stats, quality standards, methodology descriptions)
  • For each claim, check whether it links to or cites an authoritative source (research, standards body, certification authority)
  • Calculate verification ratio: claims with authoritative backing vs. total factual claims made
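
One way to keep this audit repeatable is to maintain a simple claims inventory and compute the ratio from it. The sketch below is a minimal illustration only; the pages, claims, and source URLs are invented placeholders.

```python
# Minimal sketch: verification ratio from a claims inventory.
# Each entry pairs a factual claim on a priority page with its authoritative
# source, or None if the claim is currently unbacked. Placeholder data.

claims = [
    {"page": "/product", "claim": "Meets recognized security controls", "source": "https://example.org/standard"},
    {"page": "/product", "claim": "Industry-leading uptime", "source": None},
    {"page": "/about",   "claim": "Certified to an industry standard", "source": "https://example.org/certification"},
]

verified = [c for c in claims if c["source"]]
unverified = [c for c in claims if not c["source"]]

print(f"Verification ratio: {len(verified)}/{len(claims)} "
      f"({len(verified) / len(claims):.0%})")
for c in unverified:
    print(f"Needs backing or removal: {c['page']} -> {c['claim']}")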

Action: For each unverified claim, either add authoritative backing or remove the statement:

  • Add specific citations to key claims (research databases, technical standards, industry reports)
  • Link technical specifications to recognized standards bodies
  • Include certification or compliance verification details where applicable
  • Remove marketing claims that can’t be substantiated with machine-verifiable sources

IQRush’s authority analysis identifies which claims need verification and recommends appropriate authoritative sources for your industry, eliminating research time while ensuring proper citation implementation.

Question 3: Does Our Content Match How People Query AI Engines?

Why This Matters: Semantic alignment matters more than keyword density. Pages optimized for traditional keyword targeting often fail in AI responses because they don’t match conversational query patterns. A page targeting “best project management software” may rank well in Google but miss AI citations if it doesn’t address how users actually ask: “What project management tool should I use for a remote team of 10?” In recent IQRush client audits, AI visibility clustered differently across verticals—consumer brands surfaced more frequently for transactional queries, while financial clients appeared mainly for informational intent. Intent mapping—informational, consideration, or transactional—determines whether AI engines surface your content or skip it.

How to Audit:

  • Test sample queries customers would use in AI engines for your product category
  • Evaluate whether your content is structured for the intent type (informational vs. transactional)
  • Assess if content uses conversational language patterns vs. traditional keyword optimization

Action: Align content with natural question patterns and semantic intent:

  • Restructure content to directly address how customers phrase questions
  • Create content for each intent stage: informational (education), consideration (comparison), transactional (specifications)
  • Use conversational language patterns that match AI engine interactions
  • Ensure semantic relevance beyond just keyword matching

IQRush maps your content against natural query patterns customers use in AI platforms, showing where keyword-optimized pages miss conversational intent.

Question 4: Is Our Product Information Structured for AI Recommendations?

Why This Matters: Product recommendations require structured data. AI engines extract and compare specifications, pricing, availability, and features from schema markup—not from marketing copy. Products with a comprehensive Product schema capture more AI citations in comparison queries than products buried in unstructured text. Bottom-funnel transactional queries (“best X for Y,” product comparisons) depend almost entirely on machine-readable product data.

How to Audit:

  • Check whether product pages include Product schema markup with complete specifications
  • Review if technical details (dimensions, materials, certifications, compatibility) are machine-readable
  • Test transactional queries (product comparisons, “best X for Y”) to see if your products appear
  • Assess whether pricing, availability, and purchase information is structured

Action: Implement comprehensive product data structure:

  • Deploy Product schema with complete technical specifications
  • Structure comparison information (tables, lists) that AI can easily parse
  • Include precise measurements, certifications, and compatibility details
  • Add FAQ schema addressing common product selection questions
  • Ensure pricing and availability data is machine-readable
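
To make the markup side concrete, here is a hedged sketch of what a combined Product and FAQPage JSON-LD block might look like, generated with Python so the values live in one place. The product details are invented; the property names follow the public schema.org vocabulary, and you should validate your own markup with a tool such as Google’s Rich Results Test before relying on it.

```python
import json

# Illustrative sketch: emit Product + FAQPage JSON-LD for a product page.
# All values are placeholders; property names come from schema.org.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Standing Desk",
    "sku": "DESK-001",
    "brand": {"@type": "Brand", "name": "YourBrand"},
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "Width", "value": "120 cm"},
        {"@type": "PropertyValue", "name": "Max load", "value": "80 kg"},
    ],
    "offers": {
        "@type": "Offer",
        "price": "499.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What desk height range suits a remote team of mixed heights?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "A 60 to 125 cm range covers most seated and standing users.",
        },
    }],
}

for block in (product_jsonld, faq_jsonld):
    print('<script type="application/ld+json">')
    print(json.dumps(block, indent=2))
    print("</script>")
```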

IQRush’s ecommerce audit scans product pages for missing schema fields—price, availability, specifications, reviews—and prioritizes implementations based on query volume in your category.

Question 5: Is Our “Fresh” Content Actually Fresh to AI Engines?

Why This Matters: Recency signals matter, but timestamp manipulation doesn’t work. Pages with recent publication dates but outdated information underperform older pages with substantive updates: new research citations, current industry data, or refreshed technical specifications. Genuine content updates outweigh simple republishing with changed dates.

How to Audit:

  • Review when your priority pages were last substantively updated (not just timestamp changes)
  • Check whether content references recent research, current industry data, or updated standards
  • Assess if “evergreen” content has been refreshed with current examples and information
  • Compare your content recency to competitors appearing in AI responses
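
One way to operationalize the audit steps above is to snapshot priority pages and compare a content fingerprint against the declared dateModified over time: if the date moves but the fingerprint doesn’t, the “update” was cosmetic. This is a rough sketch under stated assumptions; the regexes are naive, the requests library is assumed to be installed, and a production version would parse the JSON-LD properly.

```python
import hashlib
import re
import requests  # assumed available (pip install requests)

def snapshot(url: str) -> dict:
    """Capture the declared dateModified plus a date-insensitive content hash."""
    html = requests.get(url, timeout=30).text
    date = re.search(r'"dateModified"\s*:\s*"([^"]+)"', html)
    # Strip obvious ISO dates so timestamp-only edits don't change the hash.
    body = re.sub(r"\d{4}-\d{2}-\d{2}", "", html)
    return {
        "date_modified": date.group(1) if date else None,
        "content_hash": hashlib.sha256(body.encode("utf-8")).hexdigest(),
    }

def freshness_verdict(previous: dict, current: dict) -> str:
    """Compare this audit's snapshot with the one saved at the last audit."""
    if (current["date_modified"] != previous["date_modified"]
            and current["content_hash"] == previous["content_hash"]):
        return "timestamp changed, content did not: not a substantive update"
    if current["content_hash"] != previous["content_hash"]:
        return "substantive update detected"
    return "no change since last audit"

# Usage: verdict = freshness_verdict(saved_snapshot, snapshot("https://example.com/guide"))
```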

Action: Establish genuine content freshness practices:

  • Update high-priority pages with current research, data, and examples
  • Add recent case studies, industry developments, or regulatory changes
  • Refresh citations to include latest research or technical standards
  • Implement clear “last updated” dates that reflect substantive changes
  • Create update schedules for key content categories

IQRush compares your content recency against competitors capturing citations in your category, flagging pages that need substantive updates (new research, current data) versus pages where timestamp optimization alone would help.

Question 6: How Do We Measure What’s Actually Working?

Why This Matters: Traditional SEO metrics—rankings, traffic, CTR—miss the consideration impact of AI citations. Brand mentions in AI responses influence purchase decisions without generating click-through attribution, functioning more like brand awareness channels than direct response. CMOs operating without AI visibility measurement can’t quantify ROI, allocate budgets effectively, or report business impact to executives.

How to Audit:

  • Review your executive dashboards: Are AI visibility metrics present alongside SEO metrics?
  • Examine your analytics capabilities: Can you track how citation frequency changes month-over-month?
  • Assess competitive intelligence: Do you know your citation share relative to competitors?
  • Evaluate coverage: Which query categories are you blind to?

Action: Establish AI citation measurement:

  • Track citation frequency for core queries across AI platforms
  • Monitor competitive citation share and positioning changes
  • Measure sentiment and accuracy of brand mentions
  • Add AI visibility metrics to executive dashboards
  • Correlate AI visibility with consideration and conversion metrics
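
If you prefer to start without a dedicated platform, a flat CSV log is enough to see month-over-month movement. The sketch below assumes a hand-collected file with one row per observation; the file name and column names are invented for illustration.

```python
import csv
from collections import defaultdict

# Illustrative sketch: month-over-month citation frequency from a manual log.
# Assumed CSV columns: month, query, platform, cited_brand (may be empty).
def monthly_citation_rate(path: str, brand: str) -> dict:
    hits = defaultdict(int)
    totals = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row["month"]] += 1
            if row["cited_brand"].strip().lower() == brand.lower():
                hits[row["month"]] += 1
    return {month: hits[month] / totals[month] for month in sorted(totals)}

# Print the trend so it can sit next to SEO metrics on an executive dashboard.
for month, rate in monthly_citation_rate("ai_citation_log.csv", "YourBrand").items():
    print(f"{month}: {rate:.0%} of tested answers cited the brand")
```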

IQRush tracks citation frequency, competitive share, and month-over-month trends across AI platforms. No manual testing or custom analytics development is required.

Question 7: Where Are Our Biggest Visibility Gaps?

Why This Matters: Brands typically achieve citation visibility for a small percentage of relevant queries, with dramatic variation by funnel stage and product category. IQRush analysis showed the same imbalance: consumer brands often surfaced in purchase-intent queries, while service firms appeared mostly in educational prompts. Most discovery moments generate zero brand visibility. Closing these gaps expands reach at stages where competitors currently dominate.

How to Audit:

  • List queries customers would ask about your products/services across different funnel stages
  • Group them by funnel stage (informational, consideration, transactional)
  • Test each query in AI platforms and document: Does your brand appear?
  • Calculate what percentage of queries produce brand mentions in each funnel stage
  • Identify patterns in the queries where you’re absent
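
The percentage calculation in the audit steps above can live in a small script so the same query set is re-tested each quarter. Everything below is placeholder data; the funnel labels mirror the stages listed in this article, and whether the brand appeared is recorded manually during testing.

```python
# Illustrative sketch: brand-visibility gaps by funnel stage.
# Each test query is labeled with a stage and whether the brand appeared
# in the AI answer.

results = [
    {"query": "what is zero trust security", "stage": "informational", "brand_appeared": False},
    {"query": "zero trust vs vpn comparison", "stage": "consideration", "brand_appeared": True},
    {"query": "best zero trust platform for a 200-person company", "stage": "transactional", "brand_appeared": False},
]

def gap_by_stage(rows):
    """Share of tested queries per stage where the brand was absent."""
    stages = {}
    for r in rows:
        hits, total = stages.get(r["stage"], (0, 0))
        stages[r["stage"]] = (hits + r["brand_appeared"], total + 1)
    return {stage: 1 - hits / total for stage, (hits, total) in stages.items()}

# Largest gap first, matching the "target the weakest stage" guidance below.
for stage, gap in sorted(gap_by_stage(results).items(), key=lambda kv: -kv[1]):
    print(f"{stage}: absent from {gap:.0%} of tested queries")
```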

Action: Target the funnel stages with lowest visibility first:

  • If weak at informational stage: Build educational content that answers “what is” and “how does” queries
  • If weak at consideration stage: Create comparison content structured as tables or side-by-side frameworks
  • If weak at transactional stage: Add comprehensive product specs with schema markup
  • Focus resources on stages where small improvements yield largest reach gains

IQRush’s funnel analysis quantifies gap size by stage and estimates impact, showing which content investments will close the most visibility gaps fastest.

The Compounding Advantage of Early Action

The first seven questions and actions highlight the differences between traditional SEO performance and AI search visibility. Together, they explain why brands with strong organic rankings often have zero citations in AI answers.

The remaining 8 questions in the comprehensive audit help you take your marketing further. They focus on technical aspects: the structure of your content, the backbone of your technical infrastructure, and the semantic strategies that signal true authority to AI. 

“Visibility in AI search compounds, making it harder for your competition to break through. The brands that make themselves machine-readable today will own the conversation tomorrow.”
Raj Sapru, Chief Strategy Officer, Netrush

IQRush data shows the same thing across industries: brands that adopt an AI answer engine optimization strategy early quickly lock in positions of trust that competitors can’t easily displace. Once your brand becomes the reliable answer source, AI engines start to default to you for related queries, and the advantage snowballs.

The window to be an early adopter and claim AI visibility for your brand will not stay open forever. As more brands invest, the race is heating up.

Download the Complete AI Search Visibility Audit with detailed assessment frameworks, implementation checklists, and the 8 strategic questions covering content architecture, technical infrastructure, and linguistic optimization. Each question includes specific audit steps and immediate action items to close your visibility gaps and establish authoritative positioning before your market becomes saturated with AI-optimized competitors.

Image Credits

Featured Image: Image by IQRush. Used with permission.

In-Post Images: Image by IQRush. Used with permission.

Trust In AI Shopping Is Limited As Shoppers Verify On Websites via @sejournal, @MattGSouthern

A new IAB and Talk Shoppe study finds AI is accelerating discovery and comparisons, but it’s not the last stop.

Here are the key points before we get into the details:

  • AI pushes people to verify details on retailer sites, search, reviews, and forums rather than replacing those steps.
  • Only about half fully trust AI recommendations, which creates predictable detours when links are broken or specs and pricing don’t match.
  • Retailer traffic rises after AI, with one in three shoppers clicking through directly from an assistant.

About The Report

This report combines more than 450 screen-recorded AI shopping sessions with a U.S. survey of 600 consumers, giving you observed behavior and stated attitudes in one place.

It tracks where AI helps, where trust breaks, and what people do next.

Key Findings

AI speeds up research and makes it more focused, especially for comparing options, but it increases the number of steps as shoppers validate details elsewhere.

In the sessions, people averaged 1.6 steps before AI and 3.8 afterward, and 95% took extra steps to feel confident before ending a session.

Retailer and marketplace sites are the primary destination for validation. Seventy-eight percent of shoppers visited a retailer or marketplace during the journey, and 32% clicked directly from an AI tool.

The share that visited retailer sites rose from 20% before AI to 50% after AI. On those pages, people most often checked prices and deals, variants, reviews, and availability.

Low Trust In AI Recommendations

Trust is a constraint. Only 46% fully trusted AI shopping recommendations.

Common friction points where people lost trust were:

  • Missing links or sources
  • Mismatched specs or pricing
  • Outdated availability
  • Recommendations that didn’t fit budget or compatibility needs

These friction points sent people back to search, retailers, reviews, and forums.

Why This Matters

AI chatbots now shape mid-journey research.

If your product data, comparison content, and reviews are inconsistent with retailer listings, shoppers will notice when they verify elsewhere.

This reinforces the need to align details across channels to retain customer trust.

What To Do With This Info

Here are concrete steps you can take based on the report’s information:

  • Keep specs, pricing, availability, and variants in sync with retailer feeds.
  • Build comparison and “alternatives” pages around the attributes people prompt for.
  • Expand structured data for specs, variants, availability, and reviews.
  • Create content to answer common objections surfaced in forums and comment threads.
  • Monitor the queries and communities where shoppers validate information to close recurring gaps.
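
As a hedged illustration of the first and third items above, the sketch below pulls the Product JSON-LD from one of your product pages and compares price and availability against a row exported from a retailer feed. The URL, feed columns, and matching rules are assumptions for this example; real feeds vary, so treat it as a starting point rather than a drop-in check.

```python
import json
import re
import requests  # assumed available

def product_jsonld(url: str) -> dict:
    """Naively extract the first Product JSON-LD block from a page."""
    html = requests.get(url, timeout=30).text
    for match in re.findall(
            r'<script type="application/ld\+json">(.*?)</script>', html, re.S):
        data = json.loads(match)
        if data.get("@type") == "Product":
            return data
    return {}

def sync_issues(page: dict, feed_row: dict) -> list:
    """Compare on-site markup with a retailer feed row (assumed columns)."""
    issues = []
    offer = page.get("offers", {})
    if str(offer.get("price")) != str(feed_row["price"]):
        issues.append(f"price mismatch: site {offer.get('price')} vs feed {feed_row['price']}")
    in_stock_on_site = offer.get("availability", "").endswith("InStock")
    if in_stock_on_site != (feed_row["availability"] == "in_stock"):
        issues.append("availability mismatch between site markup and feed")
    return issues

# Example usage with a hypothetical feed export row.
row = {"sku": "DESK-001", "price": "499.00", "availability": "in_stock"}
print(sync_issues(product_jsonld("https://example.com/products/desk-001"), row))
```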

Looking Ahead

Respondents said AI made research feel easier, but confidence still depends on clear sources and verified reviews.

Expect assistants to keep influencing discovery while retailer and brand pages confirm the details that matter.

For more insight into how AI influences the shopping journey, see the full report.


Featured Image: Andrey_Popov/Shutterstock