Discounted ChatGPT Go Is Now Available In 98 Countries via @sejournal, @martinibuster

ChatGPT Go, OpenAI’s heavily discounted version of ChatGPT, is now available in 98 countries, including eight European countries and five Latin American countries.

ChatGPT Go includes everything in the Free plan, plus more: expanded access to GPT-5, image generation, extended file upload capabilities, a larger context window, and collaboration features. ChatGPT Go is available on both the Android and Apple mobile apps, as well as on the macOS and Windows desktop apps.

The eight new European countries where ChatGPT Go is now available are:

  1. Austria
  2. Czech Republic
  3. Denmark
  4. Norway
  5. Poland
  6. Portugal
  7. Spain
  8. Sweden

The five Latin American countries are:

  1. Bolivia
  2. Brazil
  3. El Salvador
  4. Honduras
  5. Nicaragua

The full ChatGPT availability list is published by OpenAI. Note: Sweden doesn’t appear on the official list, but it does appear in the official changelog.

Featured Image by Shutterstock/Nithid

How Agentic Browsers Will Change Digital Marketing via @sejournal, @DuaneForrester

The footprint of large language models keeps expanding. You see it in productivity suites, CRM, ERP, and now in the browser itself. When the browser thinks and acts, the surface you optimize for changes. That has consequences for how people find, decide, and buy.

Microsoft shows how quickly this footprint can spread across daily work. Microsoft says nearly 70% of the Fortune 500 now use Microsoft 365 Copilot. The company also reports momentum through 2025 customer stories and events. These numbers do not represent unique daily users across every product; rather, they signal reach into large enterprises where Microsoft already has distribution.

Google is pushing Gemini across Search, Workspace, and Cloud. Google highlights Gemini inside Search’s AI Mode and AI Overviews, and claims billions of monthly AI assists across Workspace. Google also points to customers putting Gemini to work across industries and reports average time savings in Workspace studies. In education, Google says Gemini for Education now reaches more than 10 million U.S. college students.

Salesforce and SAP are bringing agents into core enterprise flows. Salesforce announced Agentforce and the Agentic Enterprise, with updates in 2025 that focus on visibility and control for scaled agent deployments. SAP positioned Joule as its AI copilot and added collaborative AI agents across business processes at TechEd 2024, with ongoing releases in 2025.

And with all of that as the backdrop, should we be surprised that the browser is the next layer?

Agentic Browsers (Image Credit: Duane Forrester)

What Is An Agentic Browser?

A traditional browser shows you pages and links. An agentic browser interprets the page, carries context, and can act on your behalf. It can read, synthesize, click, fill forms, and complete tasks. You ask for an outcome. It gets you there.

Perplexity’s Comet positions itself as an AI-first browser that works for you. Reuters covered its launch and the pitch to challenge Chrome’s dominance, and The Verge reports that Comet is now available to everyone for free, after a staged rollout.

Security has already surfaced as a real issue for agentic browsers. Brave’s research describes indirect prompt injection in Comet, and Guardio’s work, along with coverage in the trade press, highlights the risks of agent-led flows being manipulated.

Now OpenAI has launched ChatGPT Atlas, a browser with ChatGPT at the core and an Agent Mode for task execution.

Why This Matters To Marketing

If the browser acts, people click less and complete more tasks in place. That compresses discovery and decision steps. It raises the bar for how your content gets selected, summarized, and executed against. Martech’s analysis points to a redefined search and discovery experience when browsers bring agentic and conversational layers to the fore.

You should expect four big shifts.

Search And Discovery

Agentic flows reduce list-based searching. The agent decides which sources to read, how to synthesize, and what to do with the result. Your goal shifts from ranking to getting selected by an agent that is optimizing for the user’s preferences and constraints. That may lower raw click volumes and raise the value of being the canonical source for a clear, task-oriented answer.

Content And Experience

Content needs to be agent-friendly. That means clear structure, strong headings, accurate metadata, concise summaries, and explicit steps. You are writing for two audiences. The human who skims. The agent that must parse, validate, and act. You also need task artifacts. Checklists. How-to flows. Short-form answers that are safe to act on. If your page is the long version, your agent-friendly artifact is the short version. Both matter.

CRM And First-Party Data

Agents may mediate more of the journey. You need earlier value exchanges to earn consent. You need clean APIs and structured data so agents can hand off context, initiate sessions, and trigger next best actions. You will also need to model events differently when some actions never hit your pages.

Attribution And Measurement

If an agent fills the cart or completes a form from the browser, you will not see traditional click paths. Define agent-mediated events. Track handoffs between browser agent and brand systems. Update your models so agent exposure and agent action can be credited. This is the same lesson marketers learned with assistants and chat surfaces. The browser now brings that dynamic to the mainstream.

What To Do Now

Start With Content

Audit your top 10 discovery and consideration assets. Tighten structure. Add short summaries and task snippets that an agent can lift safely. Add schema markup where it makes sense. Make dates and facts explicit. Your goal is clarity that a machine can parse and that a person can trust. The Martech analysis referenced above explains why this matters.

Build Better Machine Signals

Use schema.org where it helps understanding. Ensure feeds, sitemaps, Open Graph, and product data are complete and current. If you have APIs that expose inventory, pricing, appointments, or availability, document them clearly and make developer access straightforward.

Map Agent-First Journeys

Draft a simple flow for how your category works when the browser is the assistant. Query. Synthesis. Selection. Action. Handoff. Conversion. Then decide where you can add value. This is not only about SEO. It is about being callable by an agent to help someone finish a task with less friction.

Rethink Metrics

Define what counts as an agent impression and an agent conversion for your brand. Tag flows where the agent initiates the session. Set targets for assisted conversions that originate in agent environments. Treat this as a separate channel for planning.

Run Small Tests

Try optimizing one or two pages for agent selection and summarizability. Instrument the flows. If there are early integrations or pilots available with agent browsers, get on the list and learn fast. For competitive context, it is useful to watch how quickly Atlas and Comet gain traction relative to incumbent browsers. Sources on current market share are below.

Why Timing Matters

We have seen how fast browsers can grow when they meet a new need. Google launched Chrome in 2008. Within a year, it was already climbing the charts. Ars Technica covered Chrome’s 1.0 release on December 11, 2008. StatCounter Press said Chrome exceeded 20% worldwide in June 2011, up from 2.8% in June 2009. By May 2012, StatCounter reported Chrome overtook Internet Explorer for the first full month. Annual StatCounter data for 2012 shows Chrome at 31.42%, Internet Explorer at 26.47%, and Firefox at 18.88%.

Firefox had its own rapid start earlier in the 2000s. Mozilla announced 50 million Firefox downloads in April 2005 and 100 million by October 2005, less than a year after 1.0. Contemporary reporting placed Firefox at roughly 9 to 10% market share by late 2005 and 18% by mid-2008.

Microsoft Edge entered later. Edge originally shipped in 2015, then relaunched on Chromium in January 2020. Edge has fluctuated. Recent coverage says Edge lost share over the summer of 2025 on desktop, citing StatCounter.

For an executive snapshot of the current landscape, StatCounter’s September 2025 worldwide totals show Chrome at about 71.8%, Safari at about 13.9%, Edge at about 4.7%, Firefox at about 2.2%, Samsung Internet at about 1.9%, and Opera at about 1.7%.

What This History Tells Us

Each major browser shift came with a clear promise. Netscape made the web accessible. Internet Explorer bundled it with the operating system. Firefox made it safer and more private. Chrome made it faster and more reliable. Every breakthrough paired capability with trust. That pattern will repeat here.

Agentic browsers can only scale if they prove both utility and safety. They must handle tasks faster and more accurately than people, without introducing new risks. Security research around Comet shows what happens when that balance tips the wrong way. If users see agentic browsing as unpredictable or unsafe, adoption slows. If it saves them time and feels dependable, adoption accelerates. History shows that trust, not novelty, drives the curves that turn experiments into standards.

For marketers, that means your work will increasingly live inside systems where trust and clarity are prerequisites. Agents will need unambiguous facts, consistent markup, and licensing that spells out how your content can be reused. Brands that make that easy will be indexed, quoted, and recommended. Brands that make it hard will vanish from the new surface before they even know it exists.

How To Position Your Brand For Agentic Browsing

Keep your approach simple and disciplined. Make your best content easy to select, summarize, and act on. Structure it tightly, keep data fresh, and ensure everything you publish can stand on its own when pulled out of context. Give agents clean, accurate snippets they can carry forward without risk of misrepresentation.

Expose the data and signals that let agents work with you. APIs, feeds, and machine-readable product information reduce guesswork. If agents can confirm availability, pricing, or location from a trusted feed, your brand becomes a reliable component in the user’s automated flow. Pair that with clear permissions on how your data can be displayed or executed, so platforms have a reason to include you without fear of legal exposure.

Treat agent-mediated activity as its own marketing channel. Name it. Measure it. Fund it. You are early, so your metrics will change as you learn, but the act of measuring will force better questions about what visibility and conversion mean when browsers complete tasks for users. The first teams to formalize this channel will understand its economics long before competitors notice the traffic shift.

Finally, stay close to the platform evolution. Watch every release of OpenAI’s Atlas and Perplexity’s Comet. Track Google’s response as it blends Gemini deeper into Chrome and Search. The pace will feel familiar (like the late 2000s browser race), but the consequences will be larger. When the browser becomes an agent, it doesn’t just display the web; it intermediates it. Every business that relies on discovery, trust, or conversion will feel that change.

The Takeaway

Agentic browsers will not replace marketing, but they will reshape how attention, trust, and action flow online. The winners will be brands that think like system integrators (clear data, structured content, and dependable facts) because those are the materials agents build with. This is the early moment before the inflection point, the time to experiment while risk is low and visibility is still yours to claim.

History shows that when browsers evolve, the web follows. This time, the web won’t just render pages. It will think, decide, and act. Your job is to make sure that when it does, it acts in your favor.

Looking ahead, even a modest 10 to 15% adoption rate for agentic browsers within three years would represent one of the fastest paradigm shifts since Chrome’s launch. For marketers, that scale means the agent layer will become a measurable channel, and every optimization choice made now – how your data is structured, how your content is summarized, how trust is signaled – will compound its impact later.



This post was originally published on Duane Forrester Decodes.


Featured Image: Roman Samborskyi/Shutterstock

Anthropic Research Shows How LLMs Perceive Text via @sejournal, @martinibuster

Researchers from Anthropic investigated Claude 3.5 Haiku’s ability to decide when to break a line of text within a fixed width, a task that requires the model to track its position as it writes. The study yielded the surprising result that language models form internal patterns resembling the spatial awareness that humans use to track location in physical space.

Andreas Volpini tweeted about this paper and made an analogy to chunking content for AI consumption. In a broader sense, his comment works as a metaphor for how both writers and models navigate structure, finding coherence at the boundaries where one segment ends and another begins.

This research paper, however, is not about reading content but about generating text and identifying where to insert a line break in order to fit the text into an arbitrary fixed width. The purpose of doing that was to better understand what’s going on inside an LLM as it keeps track of text position, word choice, and line break boundaries while writing.

The researchers created an experimental task of generating text with a line break at a specific width. The purpose was to understand how Claude 3.5 Haiku decides on words to fit within a specified width and when to insert a line break, which required the model to track the current position within the line of text it is generating.

The experiment demonstrates how language models learn structure from patterns in text without explicit programming or supervision.

The Linebreaking Challenge

The linebreaking task requires the model to decide whether the next word will fit on the current line or if it must start a new one. To succeed, the model must learn the line width constraint (the rule that limits how many characters can fit on a line, like in physical space on a sheet of paper). To do this the LLM must track the number of characters written, compute how many remain, and decide whether the next word fits. The task demands reasoning, memory, and planning. The researchers used attribution graphs to visualize how the model coordinates these calculations, showing distinct internal features for the character count, the next word, and the moment a line break is required.
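
To make the task concrete, here is a minimal sketch, in plain Python, of the decision rule the model has to learn implicitly. This is only an analogue of the task described above, not a claim about how Claude computes it internally.

```python
def wrap_words(words, line_width):
    """Greedy line-breaking: decide, word by word, whether the next word
    still fits on the current line or whether a line break is required."""
    lines, current = [], ""
    for word in words:
        # +1 accounts for the space that precedes the word on a non-empty line
        needed = len(word) if not current else len(current) + 1 + len(word)
        if needed <= line_width:
            current = word if not current else current + " " + word
        else:
            # The next word would exceed the width, so break the line here.
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

print("\n".join(wrap_words("the quick brown fox jumps over the lazy dog".split(), 15)))
```

An LLM generating wrapped text has no explicit counter like this; the point of the study is that Claude 3.5 Haiku learns an internal equivalent of these quantities purely from patterns in text.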

Continuous Counting

The researchers observed that Claude 3.5 Haiku represents line character counts not as counting step by step, but as a smooth geometric structure that behaves like a continuously curved surface, allowing the model to track position fluidly (on the fly) rather than counting symbol by symbol.

Something else that’s interesting is that they discovered the LLM had developed a “boundary head,” an attention head responsible for detecting the line boundary. An attention mechanism weighs the importance of what is being considered (tokens), and an attention head is a specialized component of that mechanism. The boundary head specializes in the narrow task of detecting the end-of-line boundary.

The research paper states:

“One essential feature of the representation of line character counts is that the “boundary head” twists the representation, enabling each count to pair with a count slightly larger, indicating that the boundary is close. That is, there is a linear map QK which slides the character count curve along itself. Such an action is not admitted by generic high-curvature embeddings of the circle or the interval like the ones in the physical model we constructed. But it is present in both the manifold we observe in Haiku and, as we now show, in the Fourier construction. “

How Boundary Sensing Works

The researchers found that Claude 3.5 Haiku knows when a line of text is almost reaching the end by comparing two internal signals:

  1. How many characters it has already generated, and
  2. How long the line is supposed to be.

The aforementioned boundary attention heads decide which parts of the text to focus on. Some of these heads specialize in spotting when the line is about to reach its limit. They do this by slightly rotating or lining up the two internal signals (the character count and the maximum line width) so that when they nearly match, the model’s attention shifts toward inserting a line break.

The researchers explain:

“To detect an approaching line boundary, the model must compare two quantities: the current character count and the line width. We find attention heads whose QK matrix rotates one counting manifold to align it with the other at a specific offset, creating a large inner product when the difference of the counts falls within a target range. Multiple heads with different offsets work together to precisely estimate the characters remaining. “
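
As a rough illustration of the mechanism in that quote, the toy sketch below places character counts on a low-dimensional Fourier-style “counting manifold” and shows how the inner product between the current count and an offset-shifted copy of the line width grows as the boundary approaches. The embedding, the offset, and the numbers are invented for illustration; they are not the representations Anthropic extracted from Haiku.

```python
import numpy as np

def count_embedding(n, period=100, dim=16):
    # Toy Fourier-style embedding of a character count on a smooth, low-dimensional curve.
    freqs = np.arange(1, dim // 2 + 1)
    angle = 2 * np.pi * n / period
    return np.concatenate([np.cos(freqs * angle), np.sin(freqs * angle)])

def boundary_score(chars_written, line_width, offset=5):
    # A toy "boundary head": compare the current count with the line width shifted
    # back by `offset` characters. The inner product is largest when roughly
    # `offset` characters remain before the boundary.
    return float(count_embedding(chars_written) @ count_embedding(line_width - offset))

for chars in (60, 70, 78, 80, 82):
    print(chars, round(boundary_score(chars, line_width=85), 2))
```

In this toy version, the score peaks as the count approaches the width minus the offset; the paper describes multiple heads with different offsets working together to estimate how many characters remain.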

Final Stage

At this stage of the experiment, the model has already determined how close it is to the line’s boundary and how long the next word will be. The last step is to use that information.

Here’s how it’s explained:

“The final step of the linebreak task is to combine the estimate of the line boundary with the prediction of the next word to determine whether the next word will fit on the line, or if the line should be broken.”

The researchers found that certain internal features in the model activate when the next word would cause the line to exceed its limit, effectively serving as boundary detectors. When that happens, the model raises the chance of predicting a newline symbol and lowers the chance of predicting another word. Other features do the opposite: they activate when the word still fits, lowering the chance of inserting a line break.

Together, these two forces, one pushing for a line break and one holding it back, balance out to make the decision.

Can Models Have Visual Illusions?

The next part of the research is kind of incredible because they endeavored to test whether the model could be susceptible to visual illusions that would trip it up. They started with the idea of how humans can be tricked by visual illusions that present a false perspective, making lines of the same length appear to be different lengths, one shorter than the other.

Screenshot of a visual illusion: two lines with arrowheads on each end pointing in different directions for each line, one inward and the other outward, giving the illusion that one line is longer than the other.

The researchers inserted artificial tokens, such as “@@,” to see how they disrupted the model’s sense of position. These tests caused misalignments in the internal patterns the model uses to keep track of position, similar to visual illusions that trick human perception. The model’s sense of line boundaries shifted, showing that its perception of structure depends on context and learned patterns. Even though LLMs don’t see, disrupting the relevant attention heads distorts their internal organization in a way that parallels how visual illusions cause humans to misjudge what they see.

They explained:

“We find that it does modulate the predicted next token, disrupting the newline prediction! As predicted, the relevant heads get distracted: whereas with the original prompt, the heads attend from newline to newline, in the altered prompt, the heads also attend to the @@.”

They wondered if there was something special about the @@ characters, or whether any other random characters would disrupt the model’s ability to successfully complete the task. So they ran a test with 180 different sequences and found that most of them did not disrupt the model’s ability to predict the line break point. Only a small group of code-related characters were able to distract the relevant attention heads and disrupt the counting process.

LLMs Have Visual-Like Perception For Text

The study shows how text-based features evolve into smooth geometric systems inside a language model. It also shows that models don’t only process symbols, they create perception-based maps from them. This part, about perception, is to me what’s really interesting about the research. They keep circling back to analogies related to human perception and how those analogies keep fitting into what they see going on inside the LLM.

They write:

“Although we sometimes describe the early layers of language models as responsible for “detokenizing” the input, it is perhaps more evocative to think of this as perception. The beginning of the model is really responsible for seeing the input, and much of the early circuitry is in service of sensing or perceiving the text similar to how early layers in vision models implement low level perception.”

Then a little later they write:

“The geometric and algorithmic patterns we observe have suggestive parallels to perception in biological neural systems. …These features exhibit dilation—representing increasingly large character counts activating over increasingly large ranges—mirroring the dilation of number representations in biological brains. Moreover, the organization of the features on a low dimensional manifold is an instance of a common motif in biological cognition. While the analogies are not perfect, we suspect that there is still fruitful conceptual overlap from increased collaboration between neuroscience and interpretability.”

Implications For SEO?

Arthur C. Clarke wrote that advanced technology is indistinguishable from magic. I think that once you understand a technology, it becomes more relatable and less like magic. Not all knowledge has a utilitarian use, and I think understanding how an LLM perceives content is useful to the extent that it’s no longer magical. Will this research make you a better SEO? It deepens our understanding of how language models organize and interpret content structure, making them more understandable and less like magic.

Read about the research here:

When Models Manipulate Manifolds: The Geometry of a Counting Task

Featured Image by Shutterstock/Krot_Studio

Google Labs & DeepMind Launch Pomelli AI Marketing Tool via @sejournal, @MattGSouthern

Pomelli, a Google Labs & DeepMind AI experiment, builds a “Business DNA” from your site and generates editable branded campaign assets for small businesses.

  • Pomelli scans your website to create a “Business DNA” profile.
  • It uses the created profile to keep content consistent across channels.
  • It suggests campaign ideas and generates editable marketing assets.

Why The Build Process Of Custom GPTs Matters More Than The Technology Itself

When Google introduced the transformer architecture in its 2017 paper “Attention Is All You Need,” few realized how much it would help transform digital work. Transformer architecture laid the foundations for today’s GPTs, which are now part of our daily work in SEO and digital marketing.

Search engines have used machine learning for decades, but it was the rise of generative AI that made many of us actively explore AI. AI platforms and tools like custom GPTs are already influencing how we research keywords, generate content ideas, and analyze data.

The real value, however, is not in using these tools to cut corners. It lies in designing them intentionally, aligning them with business goals, and ensuring they serve users’ needs.

This article is not a tutorial on how to build GPTs. I share why the build process itself matters, what I have learned so far, and how SEOs can use this product mindset to think more strategically in the age of AI.

From Barriers To Democratization

Not long ago, building tools without coding experience meant relying on developers, dealing with long lead times, and waiting for vendors to release new features. That has changed slightly. The democratization of technology has lowered the entry barriers, making it possible for anyone with curiosity to experiment with building tools like custom GPTs. At the same time, expectations have necessarily risen, as we expect tools to be intuitive, efficient, and genuinely useful.

This is a reason why technical skills still matter. But they’re not enough on their own. What matters more, in my opinion, is how we apply them. Are we solving a real problem? Are we creating workflows that align with business needs?

The strategic questions SEOs should be asking are no longer just “Can I build this?,” but:

  • Should I build this?
  • What problem am I solving, and for whom?
  • What’s the ultimate goal?

Why The Build Process Matters

Building a custom GPT is straightforward. Anyone can add a few instructions and click “save.” What really matters is what happens before and after: defining the audience, identifying the problem, scoping the work realistically, testing and refining outputs, and aligning them with business objectives.

In many ways, this is what good marketing has always been about: understanding the audience, defining their needs, and designing solutions that meet them.

As an international SEO, I’ve often seen cultural relevance and digital accessibility treated as afterthoughts. OpenAI’s custom GPT builder offered me a way to explore whether AI could help address these challenges, especially since it is accessible to those of us without any coding expertise.

What began as a single project to improve cultural relevance in global SEO soon evolved into two separate GPTs when I realized the scope was larger than I could manage at the time.

That change wasn’t a failure, but a part of the process that led me toward a better solution.

Case Study: 2 GPTs, 1 Lesson

The Initial Idea

My initial idea was to build a custom GPT that could generate content ideas tailored to the UK, US, Canada, and Australia, taking both linguistic and cultural nuances into account.

As an international SEO, I know it is hard to engage global audiences who expect personalized experiences. Translation alone is not enough. Content must be linguistically accurate and contextually relevant.

This mirrors the wider shift in search itself. Users now expect personalized, context-driven results, and search engines are moving in that same direction.

A Change In Direction

As I began building, I quickly realized that the scope was bigger than expected. Capturing cultural nuance across four different markets while also learning how to build and refine GPTs required more time than I could commit at that moment.

Rather than abandoning the project, I reframed it as a minimum viable product. I revisited the scope and shifted focus to another important challenge, but with a more consistent requirement – digital accessibility.

The accessibility GPT was designed to flag issues, suggest inclusive phrasing, and support internal advocacy. It adapted outputs to different roles, so SEOs, marketers, and project managers could each use it in relevant ways in their day-to-day work.

This wasn’t giving up on the content project. It was a deliberate choice to learn from one use case and apply those lessons to the next.

The Outcome

Working on the accessibility GPT first helped me think more carefully about scope and validation, which paid off.

As accessibility requirements are more consistent than cultural nuance, it was easier to refine prompts and test role-specific outputs, ensuring an inclusive, non-judgmental tone.

I shared the prototype with other SEOs and accessibility advocates, and their feedback was invaluable. Although it was generally positive, they pointed out inconsistencies I hadn’t seen, including in how I described the prompt in the GPT store.

After all, accessibility is not just about alt text or color contrast. It’s about how information is presented.

Once the accessibility GPT was running, I went back to the cultural content GPT, better prepared, with clearer expectations and a stronger process.

The key takeaway here is that the value lies not only in the finished product, but in the process of building, testing, and refining.

Risks And Challenges Along The Way

Not every risk became an issue, but the process brought its share of challenges.

The biggest was underestimating time and scope, which I solved by revisiting the plan and starting smaller. There were also platform limitations – ongoing model development, AI fatigue, and hallucinations. OpenAI itself has admitted that hallucinations are mathematically unavoidable. The best response is to be precise with prompts, keep instructions detailed, and always maintain a human-in-the-loop approach. GPTs should be seen as assistants, not replacements.

Collaboration added another layer of complexity. Feedback loops depended on colleagues’ availability, so I had to stay flexible and allow extra time. Their input, however, was crucial – I couldn’t have made progress without them. As none of these factors were under my control, I could only keep on top of developments and do my best to handle them.

These challenges reinforced an important truth: Building strategically isn’t about chasing perfection, but about learning, adapting, and improving with each iteration.

Applying Product Thinking

The process I followed was similar to how product managers approach new products. SEOs can adopt the same mindset to design workflows that are both practical and strategic.

Validate The Problem

Not every issue needs AI – and not every issue needs solving. Identify and prioritize what really matters at that time and confirm whether a custom GPT, or any other tool, is the right way to address it.

Define The Use Case

Who will use the GPT, and how? A wide reach may sound appealing, but value comes from meeting specific needs. Otherwise, success can quickly fade away.

My GPTs are designed to support SEOs, marketers, and project managers in different scenarios of their daily work.

Prototype And Test

There is real value in starting small. With GPTs, I needed to write clear, specific instructions, then review the outputs and refine.

For instance, instead of asking the accessibility GPT for general ideas on making a form accessible, I instructed it to act as an SEO briefing developers on fixes or as a project manager assigning tasks.

For the content GPT, I instructed it to act as a UK/U.S. content strategist, developing inclusive, culturally relevant ideas for specific publications in British English or Standard American English.

Iterate With Feedback

Bring colleagues and subject-matter experts into the process early. Their insights challenge assumptions, highlight inconsistencies, and make outputs more robust.

Keep On Top Of Developments

AI platforms evolve quickly, and processes also need to adapt to different scenarios. Product thinking means staying agile, adapting to change, and reassessing whether the tools we build still serve their purpose.

The failed roll-out of GPT-5 reminded me how volatile the landscape can be.

Practical Applications For SEOs

Why build GPTs when there are already so many excellent SEO tools available? For me, it was partly curiosity and partly a way to test what I could achieve with my existing skills before suggesting a collaboration for a different product.

Custom GPTs can add real value in specific situations, especially with a human-in-the-loop approach. Some of the most useful applications I have found include:

  • Analyzing campaign data to support decision-making.
  • Assisting with competitor analysis across global markets.
  • Supporting content ideation for international audiences.
  • Clustering keywords or highlighting internal linking opportunities.
  • Drafting documentation or briefs.

The point is not to replace established tools or human expertise, but to use them as assistants within structured workflows. They can free up time for deeper thinking, while still requiring careful direction and review.

How SEOs Can Apply Product Thinking

Even if you never build a GPT, you can apply the same mindset in your day-to-day work. Here are a few suggestions:

  • Frame challenges strategically: Ask who the end user is, what they need, and what is broken in their experience. Don’t start with tactics without context.
  • Design repeatable processes: Build workflows that scale and evolve over time, instead of one-off fixes.
  • Test and learn: Treat tactics like prototypes. Run experiments, refine based on results. If A/B testing isn’t possible, as is often the case, at least be open to making adjustments where necessary.
  • Collaborate across teams: SEO does not exist in isolation. Work with UX, development, and content teams early. The key is to find ways to add value to their work.
  • Redefine success metrics: Qualified traffic, conversions, and internal process improvements matter in the AI era. Success should reflect actual business impact.
  • Use AI strategically: Quick wins are tempting, but GPTs and other tools are best used to support structured workflows and highlight blind spots. Keep a human-in-the-loop approach to ensure outputs are accurate and relevant to your business needs.

Final Thought

The real innovation is not in the technology itself, but in how we choose to apply it.

We are now in the fifth industrial revolution, a time when humans and machines collaborate more closely than ever.

For SEOs, the opportunity is to move beyond tactical execution and start thinking like product strategists. That means asking sharper questions, testing hypotheses, designing smarter workflows, and creating solutions that adapt to real-world constraints.

It is about providing solutions, not just executing tasks.



Featured Image: SvetaZi/Shutterstock

The AI Search Visibility Audit: 15 Questions Every CMO Should Ask

This post was sponsored by IQRush. The opinions expressed in this article are the sponsor’s own.

Your traditional SEO is winning. Your AI visibility is failing. Here’s how to fix it.

Your brand dominates page one of Google. Domain authority crushes competitors. Organic traffic trends upward quarter after quarter. Yet when customers ask ChatGPT, Perplexity, or others about your industry, your brand is nowhere to be found.

This is the AI visibility gap, which causes missed opportunities in awareness and sales.

“SEO ranking on page one doesn’t guarantee visibility in AI search. The rules of ranking have shifted from optimization to verification.”

Raj Sapru, Netrush, Chief Strategy Officer

Recent analysis of AI-powered search patterns reveals a troubling reality: commercial brands with excellent traditional SEO performance often achieve minimal visibility in AI-generated responses. Meanwhile, educational institutions, industry publications, and comparison platforms consistently capture citations for product-related queries.

The problem isn’t your content quality. It’s that AI engines prioritize entirely different ranking factors than traditional search: semantic query matching over keyword density, verifiable authority markers over marketing claims, and machine-readable structure over persuasive copy.

This audit exposes 15 questions that separate AI-invisible brands from citation leaders.

We’re sharing the first 7 critical questions below, covering visibility assessment, authority verification, and measurement fundamentals. These questions will reveal your most urgent gaps and provide immediate action steps.

Question 1: Are We Visible in AI-Powered Search Results?

Why This Matters: Commercial brands with strong traditional SEO often achieve minimal AI citation visibility in their categories. A recent IQRush field audit found fewer than one in ten AI-generated answers included the brand, showing how limited visibility remains, even for strong SEO performers. Educational institutions, industry publications, and comparison sites dominate AI responses for product queries—even when commercial sites have superior content depth. In regulated industries, this gap widens further as compliance constraints limit commercial messaging while educational content flows freely into AI training data.

How to Audit:

  • Test core product or service queries through multiple AI platforms (ChatGPT, Perplexity, Claude)
  • Document which sources AI engines cite: educational sites, industry publications, comparison platforms, or adjacent content providers
  • Calculate your visibility rate: queries where your brand appears vs. total queries tested (see the sketch below)
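
A minimal sketch of that calculation, assuming you have already collected answers from the AI platforms you tested; the queries, answers, and brand name here are placeholders.

```python
def visibility_rate(responses, brand):
    """responses: dict mapping each test query to the AI-generated answer text."""
    hits = sum(1 for answer in responses.values() if brand.lower() in answer.lower())
    return hits / len(responses) if responses else 0.0

# Hypothetical results from a small test run
responses = {
    "best project management tool for a remote team of 10": "…Asana, Trello, and ExampleBrand…",
    "what project management software should a startup use": "…Asana, Monday.com, and Notion…",
    "project management tools with free tiers": "…Trello, ClickUp, and Jira…",
}
print(f"Visibility rate: {visibility_rate(responses, 'ExampleBrand'):.0%}")  # 33% for this sample
```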

Action: If educational/institutional sources dominate, implement their citation-driving elements:

  • Add research references and authoritative citations to product content
  • Create FAQ-formatted content with an explicit question-answer structure
  • Deploy structured data markup (Product, FAQ, Organization schemas)
  • Make commercial content as machine-readable as educational sources

IQRush tracks citation frequency across AI platforms. Competitive analysis shows which schema implementations, content formats, and authority signals your competitors use to capture citations you’re losing.

Question 2: Are Our Expertise Claims Actually Verifiable?

Why This Matters: Machine-readable validation drives AI citation decisions: research references, technical standards, certifications, and regulatory documentation. Marketing claims like “industry-leading” or “trusted by thousands” carry zero weight. In one IQRush client analysis, more than four out of five brand mentions were supported by citations—evidence that structured, verifiable content is far more likely to earn visibility. Companies frequently score high on human appeal—compelling copy, strong brand messaging—but lack the structured authority signals AI engines require. This mismatch explains why brands with excellent traditional marketing achieve limited citation visibility.

How to Audit:

  • Review your priority pages and identify every factual claim made (performance stats, quality standards, methodology descriptions)
  • For each claim, check whether it links to or cites an authoritative source (research, standards body, certification authority)
  • Calculate verification ratio: claims with authoritative backing vs. total factual claims made

Action: For each unverified claim, either add authoritative backing or remove the statement:

  • Add specific citations to key claims (research databases, technical standards, industry reports)
  • Link technical specifications to recognized standards bodies
  • Include certification or compliance verification details where applicable
  • Remove marketing claims that can’t be substantiated with machine-verifiable sources

IQRush’s authority analysis identifies which claims need verification and recommends appropriate authoritative sources for your industry, eliminating research time while ensuring proper citation implementation.

Question 3: Does Our Content Match How People Query AI Engines?

Why This Matters: Semantic alignment matters more than keyword density. Pages optimized for traditional keyword targeting often fail in AI responses because they don’t match conversational query patterns. A page targeting “best project management software” may rank well in Google but miss AI citations if it doesn’t address how users actually ask: “What project management tool should I use for a remote team of 10?” In recent IQRush client audits, AI visibility clustered differently across verticals—consumer brands surfaced more frequently for transactional queries, while financial clients appeared mainly for informational intent. Intent mapping—informational, consideration, or transactional—determines whether AI engines surface your content or skip it.

How to Audit:

  • Test sample queries customers would use in AI engines for your product category
  • Evaluate whether your content is structured for the intent type (informational vs. transactional)
  • Assess if content uses conversational language patterns vs. traditional keyword optimization

Action: Align content with natural question patterns and semantic intent:

  • Restructure content to directly address how customers phrase questions
  • Create content for each intent stage: informational (education), consideration (comparison), transactional (specifications)
  • Use conversational language patterns that match AI engine interactions
  • Ensure semantic relevance beyond just keyword matching

IQRush maps your content against natural query patterns customers use in AI platforms, showing where keyword-optimized pages miss conversational intent.

Question 4: Is Our Product Information Structured for AI Recommendations?

Why This Matters: Product recommendations require structured data. AI engines extract and compare specifications, pricing, availability, and features from schema markup—not from marketing copy. Products with a comprehensive Product schema capture more AI citations in comparison queries than products buried in unstructured text. Bottom-funnel transactional queries (“best X for Y,” product comparisons) depend almost entirely on machine-readable product data.
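
As a hedged illustration of what that structured layer can look like, the sketch below builds a minimal schema.org Product markup block with an embedded Offer; the product details are placeholders, and any markup you deploy should mirror what is actually shown on the page.

```python
import json

# Placeholder product data for illustration only.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Running Shoe",
    "sku": "EX-TRAIL-42",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "description": "Lightweight trail running shoe with a 6 mm drop and a recycled mesh upper.",
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Paste the output into a <script type="application/ld+json"> tag on the product page.
print(json.dumps(product_jsonld, indent=2))
```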

How to Audit:

  • Check whether product pages include Product schema markup with complete specifications
  • Review if technical details (dimensions, materials, certifications, compatibility) are machine-readable
  • Test transactional queries (product comparisons, “best X for Y”) to see if your products appear
  • Assess whether pricing, availability, and purchase information is structured

Action: Implement comprehensive product data structure:

  • Deploy Product schema with complete technical specifications
  • Structure comparison information (tables, lists) that AI can easily parse
  • Include precise measurements, certifications, and compatibility details
  • Add FAQ schema addressing common product selection questions
  • Ensure pricing and availability data is machine-readable

IQRush’s ecommerce audit scans product pages for missing schema fields—price, availability, specifications, reviews—and prioritizes implementations based on query volume in your category.

Question 5: Is Our “Fresh” Content Actually Fresh to AI Engines?

Why This Matters: Recency signals matter, but timestamp manipulation doesn’t work. Pages with recent publication dates but outdated information underperform older pages with substantive updates: new research citations, current industry data, or refreshed technical specifications. Genuine content updates outweigh simple republishing with changed dates.

How to Audit:

  • Review when your priority pages were last substantively updated (not just timestamp changes)
  • Check whether content references recent research, current industry data, or updated standards
  • Assess if “evergreen” content has been refreshed with current examples and information
  • Compare your content recency to competitors appearing in AI responses

Action: Establish genuine content freshness practices:

  • Update high-priority pages with current research, data, and examples
  • Add recent case studies, industry developments, or regulatory changes
  • Refresh citations to include latest research or technical standards
  • Implement clear “last updated” dates that reflect substantive changes
  • Create update schedules for key content categories

IQRush compares your content recency against competitors capturing citations in your category, flagging pages that need substantive updates (new research, current data) versus pages where timestamp optimization alone would help.

Question 6: How Do We Measure What’s Actually Working?

Why This Matters: Traditional SEO metrics—rankings, traffic, CTR—miss the consideration impact of AI citations. Brand mentions in AI responses influence purchase decisions without generating click-through attribution, functioning more like brand awareness channels than direct response. CMOs operating without AI visibility measurement can’t quantify ROI, allocate budgets effectively, or report business impact to executives.

How to Audit:

  • Review your executive dashboards: Are AI visibility metrics present alongside SEO metrics?
  • Examine your analytics capabilities: Can you track how citation frequency changes month-over-month?
  • Assess competitive intelligence: Do you know your citation share relative to competitors?
  • Evaluate coverage: Which query categories are you blind to?

Action: Establish AI citation measurement:

  • Track citation frequency for core queries across AI platforms
  • Monitor competitive citation share and positioning changes
  • Measure sentiment and accuracy of brand mentions
  • Add AI visibility metrics to executive dashboards
  • Correlate AI visibility with consideration and conversion metrics

IQRush tracks citation frequency, competitive share, and month-over-month trends across AI platforms. No manual testing or custom analytics development is required.

Question 7: Where Are Our Biggest Visibility Gaps?

Why This Matters: Brands typically achieve citation visibility for a small percentage of relevant queries, with dramatic variation by funnel stage and product category. IQRush analysis showed the same imbalance: consumer brands often surfaced in purchase-intent queries, while service firms appeared mostly in educational prompts. Most discovery moments generate zero brand visibility. Closing these gaps expands reach at stages where competitors currently dominate.

How to Audit:

  • List queries customers would ask about your products/services across different funnel stages
  • Group them by funnel stage (informational, consideration, transactional)
  • Test each query in AI platforms and document: Does your brand appear?
  • Calculate what percentage of queries produce brand mentions in each funnel stage
  • Identify patterns in the queries where you’re absent

Action: Target the funnel stages with lowest visibility first:

  • If weak at informational stage: Build educational content that answers “what is” and “how does” queries
  • If weak at consideration stage: Create comparison content structured as tables or side-by-side frameworks
  • If weak at transactional stage: Add comprehensive product specs with schema markup
  • Focus resources on stages where small improvements yield largest reach gains

IQRush’s funnel analysis quantifies gap size by stage and estimates impact, showing which content investments will close the most visibility gaps fastest.

The Compounding Advantage of Early Action

The first seven questions and actions highlight the differences between traditional SEO performance and AI search visibility. Together, they explain why brands with strong organic rankings often have zero citations in AI answers.

The remaining 8 questions in the comprehensive audit help you take your marketing further. They focus on technical aspects: the structure of your content, the backbone of your technical infrastructure, and the semantic strategies that signal true authority to AI. 

“Visibility in AI search compounds, making it harder for your competition to break through. The brands that make themselves machine-readable today will own the conversation tomorrow.”
Raj Sapru, Netrush, Chief Strategy Officer

IQRush data shows the same thing across industries: early brands that adopt a new AI answer engine optimization strategy quickly start to lock in positions of trust that competitors can’t easily replace. Once your brand becomes the reliable answer source, AI engines will start to default to you for related queries, and the advantage snowballs.

The window to be an early adopter and claim AI visibility for your brand will not stay open forever. As more brands invest in AI visibility, the race is heating up.

Download the Complete AI Search Visibility Audit with detailed assessment frameworks, implementation checklists, and the 8 strategic questions covering content architecture, technical infrastructure, and linguistic optimization. Each question includes specific audit steps and immediate action items to close your visibility gaps and establish authoritative positioning before your market becomes saturated with AI-optimized competitors.

Image Credits

Featured Image: Image by IQRush. Used with permission.

In-Post Images: Image by IQRush. Used with permission.

Trust In AI Shopping Is Limited As Shoppers Verify On Websites via @sejournal, @MattGSouthern

A new IAB and Talk Shoppe study finds AI is accelerating discovery and comparisons, but it’s not the last stop.

Here are the key points before we get into the details:

  • AI pushes people to verify details on retailer sites, search, reviews, and forums rather than replacing those steps.
  • Only about half fully trust AI recommendations, which creates predictable detours when links are broken or specs and pricing don’t match.
  • Retailer traffic rises after AI, with one in three shoppers clicking through directly from an assistant.

About The Report

This report combines more than 450 screen-recorded AI shopping sessions with a U.S. survey of 600 consumers, giving you observed behavior and stated attitudes in one place.

It tracks where AI helps, where trust breaks, and what people do next.

Key Findings

AI speeds up research and makes it more focused, especially for comparing options, but it increases the number of steps as shoppers validate details elsewhere.

In the sessions, people averaged 1.6 steps before AI and 3.8 afterward, and 95% took extra steps to feel confident before ending a session.

Retailer and marketplace sites are the primary destination for validation. Seventy-eight percent of shoppers visited a retailer or marketplace during the journey, and 32% clicked directly from an AI tool.

The share that visited retailer sites rose from 20% before AI to 50% after AI. On those pages, people most often checked prices and deals, variants, reviews, and availability.

Low Trust In AI Recommendations

Trust is a constraint. Only 46% fully trusted AI shopping recommendations.

Common friction points where people lost trust were:

  • Missing links or sources
  • Mismatched specs or pricing
  • Outdated availability
  • Recommendations that didn’t fit budget or compatibility needs

These friction points sent people back to search, retailers, reviews, and forums.

Why This Matters

AI chatbots now shape mid-journey research.

If your product data, comparison content, and reviews are inconsistent with retailer listings, shoppers will notice when they verify elsewhere.

This reinforces the need to align details across channels to retain customer trust.

What To Do With This Info

Here are concrete steps you can take based on the report’s information:

  • Keep specs, pricing, availability, and variants in sync with retailer feeds.
  • Build comparison and “alternatives” pages around the attributes people prompt for.
  • Expand structured data for specs, variants, availability, and reviews.
  • Create content to answer common objections surfaced in forums and comment threads.
  • Monitor the queries and communities where shoppers validate information to close recurring gaps.

Looking Ahead

Respondents said AI made research feel easier, but confidence still depends on clear sources and verified reviews.

Expect assistants to keep influencing discovery while retailer and brand pages confirm the details that matter.

For more insight into how AI influences the shopping journey, see the full report.


Featured Image: Andrey_Popov/Shutterstock

AI Is Breaking The Economics Of Content via @sejournal, @Kevin_Indig

What does it say about the economics of content when the most visible site on the web loses significant traffic?

A status report by Wikipedia shows a significant decline in human page views over the last few months as a result of generative AI, “especially with search engines providing answers directly to searchers” [1].

Image Credit: Kevin Indig


  • Evergreen content = Educational content covering established, timeless topics.
  • Additive content = Content that provides net-new takes, insights, and conversations.

Wikipedia is an evergreen site. Even though it’s a user-generated content (UGC) platform like Reddit or YouTube, its primary purpose is to serve comprehensive definitions on established topics. Reddit, YouTube, and LinkedIn & Co. are about additive topics and insights.

AI destroys the value of one while raising it for the other.

Wikipedia’s human traffic has dipped 5% YoY, while scraper traffic grew by 10.5% and bot traffic by 162.4% [2]. The fact that scrapers and bots together make up almost as much traffic as humans is symbolic of the eroding value of answering questions.

Even though Wikipedia’s direct traffic is up ~23% and ChatGPT referrals are up 3.5x YoY, Google referrals are down 35% because AI Overviews make it redundant for users to click through.

Image Credit: Kevin Indig

Over the same time that Wikipedia lost ~90 million visits, Google started showing a lot more AI Overviews that answer user questions directly – often based on Wikipedia’s content.

Image Credit: Kevin Indig

Almost 50% of Wikipedia’s queries display a large AIO at the top of the search results. That’s no outlier: Reddit is at 46% and YouTube at 38%.

Google and ChatGPT reward additive content.

YouTube’s citation rate jumped from 37% to 54% (up 17 percentage points) at the same time as Wikipedia dropped from 58% to 42% (down 16 percentage points). Video is replacing text as Google’s primary source for answers.

Image Credit: Kevin Indig

ChatGPT cites Wikipedia 3x more often than it mentions the site, while Reddit is at one-to-one and YouTube at ~250%! Since users don’t click citations, mentions are much more valuable. [3]

Pre-AI, the economics of evergreen content were net-positive because it attracted clicks from Google, some of which converted into customers. LLMs like ChatGPT, AI Overviews, or AI Mode are not incentivized to send out traffic but to give the best answer, which makes the experience more similar to TikTok than Search.

LLMs use web content like Wikipedia for training, but offer invisible citations instead of mentions. The net return is negative. Wikipedia has to convince donors that it’s still worth giving money, while its content is used as a utility for LLMs.

Over the last 12 months, sites offering additive UGC have gained LLM visibility [4]:

  • Reddit.
  • LinkedIn.
  • YouTube.
  • Quora.
  • Yelp.
  • Tripadvisor.
  • Etc.

At the same time, content sites offering evergreen content lost significant amounts of organic traffic (and value):

  • Stack Overflow.
  • Chegg.
  • Britannica.
  • Wiktionary.
  • History.com.
  • eHow.
  • Etc.

With fewer and eventually maybe zero clicks arriving [5], the value of creating evergreen content is questionable – not just for Wikipedia.

The fix is to shift focus from evergreen topics to net-new insights:

  1. Invest more in additive content: data stories, research, customer success stories, thought leadership, etc. Oura, Ramp, Okta, and others are already making the shift and hiring economists, journalists, and researchers. [6][7][8]
  2. Lower your investment in evergreen content in favor of additive content. We don’t know the right mix, but 50/50 or even 70/30 seems better than 80/20.
  3. When to keep evergreen content: For user experience (critical to understand a topic), Topical Authority, or when you can automate + enrich with unique data.
  4. When creating evergreen content, focus on hyperlong-tail topics aligned with your audience personas and positioning that no one else is visible for.

Evaluate additive content against influenced pipeline, LLM citations/mentions/Share of Voice, and publisher links/coverage.


Featured Image: Paulo Bobita/Search Engine Journal

OpenAI Flags Emotional Reliance On ChatGPT As A Safety Risk via @sejournal, @MattGSouthern

OpenAI is telling companies that “relationship building” with AI has limits. Emotional dependence on ChatGPT is considered a safety risk, with new guardrails in place.

  • OpenAI says it has added “emotional reliance on AI” as a safety risk.
  • The new system is trained to discourage exclusive attachment to ChatGPT.
  • Clinicians helped define what “unhealthy attachment” looks like and how ChatGPT should respond.