Sam Altman Explains OpenAI’s Bet On Profitability

In an interview with the Big Technology Podcast, Sam Altman seemed to struggle answering the tough questions about OpenAI’s path to profitability.

At about the 36-minute mark, the interviewer asked the big question about revenues and spending. Sam Altman said OpenAI’s losses are tied to continued increases in training costs while revenue is growing. He said the company would be profitable much earlier if it were not continuing to grow its training spend so aggressively.

Altman said concern about OpenAI’s spending would be reasonable only if the company reached a point where it had large amounts of computing it could not monetize profitably.

The interviewer asked:

“Let’s, let’s talk about numbers since you brought it up. Revenue’s growing, compute spend is growing, but compute spend still outpaces revenue growth. I think the numbers that have been reported are OpenAI is supposed to lose something like 120 billion between now and 2028, 29, where you’re going to become profitable.

So talk a little bit about like, how does that change? Where does the turn happen?”

Sam Altman responded:

“I mean, as revenue grows and as inference becomes a larger and larger part of the fleet, it eventually subsumes the training expense. So that’s the plan. Spend a lot of money training, but make more and more.

If we weren’t continuing to grow our training costs by so much, we would be profitable way, way earlier. But the bet we’re making is to invest very aggressively in training these big models.”

At this point the interviewer pressed Altman harder about the path to profitability, this time mentioning the spending commitment of $1.4 trillion versus the $20 billion in revenue. This was not a softball question.

The interviewer pushed back:

“I think it would be great just to lay it out for everyone once and for all how those numbers are gonna work.”

Sam Altman’s first attempt at an answer stumbled into word salad:

“It’s very hard to like really, I find that one thing I certainly can’t do it and very few people I’ve ever met can do it.

You know, you can like, you have good intuition for a lot of mathematical things in your head, but exponential growth is usually very hard for people to do a good quick mental framework on.

Like for whatever reason, there were a lot of things that evolution needed us to be able to do well with math in our heads. Modeling exponential growth doesn’t seem to be one of them.”

Altman then regained his footing with a more coherent answer:

“The thing we believe is that we can stay on a very steep growth curve of revenue for quite a while. And everything we see right now continues to indicate that we cannot do it if we don’t have the compute.

Again, we’re so compute constrained, and it hits the revenue line so hard that I think if we get to a point where we have like a lot of compute sitting around that we can’t monetize on a profitable per unit of compute basis, it’d be very reasonable to say, okay, this is like a little, how’s this all going to work?

But we’ve penciled this out a bunch of ways. We will of course also get more efficient on like a flops per dollar basis, as you know, all of the work we’ve been doing to make compute cheaper comes to pass.

But we see this consumer growth, we see this enterprise growth. There’s a whole bunch of new kinds of businesses that, that we haven’t even launched yet, but will. But compute is really the lifeblood that enables all of this.

We have always been in a compute deficit. It has always constrained what we’re able to do.

I unfortunately think that will always be the case, but I wish it were less the case, and I’d like to get it to be less of the case over time, because I think there’s so many great products and services that we can deliver, and it’ll be a great business.”

The interviewer then sought to clarify the answer, asking:

“And then your expectation is through things like this enterprise push, through things like people being willing to pay for ChatGPT through the API, OpenAI will be able to grow revenue enough to pay for it with revenue.”

Sam Altman responded:

“Yeah, that is the plan.”

Altman’s comments define a specific threshold for evaluating whether OpenAI’s spending is a problem. He points to unused or unmonetizable computing power as the point at which concern would be justified, rather than current losses or large capital commitments.

In his explanation, the limiting factor is not willingness to pay, but how much computing capacity OpenAI can bring online and use. The follow-up question makes that explicit, and Altman’s confirmation makes clear that the company is relying on revenue growth from consumer use, enterprise adoption, and additional products to cover its costs over time.

Altman’s path to profitability rests on a simple bet: that OpenAI can keep finding buyers for its computing as fast as it can build it. Eventually, that bet either keeps winning or the chips run out.

Watch the interview starting at about the 36-minute mark:


Google’s Robby Stein Names 5 SEO Factors For AI Mode

Robby Stein, Vice President of Product for Google Search, recently sat down for an interview where he answered questions about how Google’s AI Mode handles quality, how Google evaluates helpfulness, and how it leverages its experience with search, including metrics like clicks, to identify which content is helpful. He also outlined five quality-related SEO factors used for AI Mode.

How Google Controls Hallucinations

Stein answered a question about hallucinations, where an AI presents false information as if it were true. He said that the quality systems within AI Mode are built on everything Google has learned about quality from 25 years of experience with classic search. The systems that determine which links to show and whether content is good are encoded within the model.

The interviewer asked:

“These models are non-deterministic and they hallucinate occasionally… how do you protect against that? How do you make sure the core experience of searching on Google remains consistent and high quality?”

Robby Stein answered:

“Yeah, I mean, the good news is this is not new. While AI and generative AI in this way is frontier, thinking about quality systems for information is something that’s been happening for 20, 25 years.

And so all of these AI systems are built on top of those. There’s an incredibly rigorous approach to understanding, for a given question, is this good information? Are these the right links? Are these the right things that a user would value?

What’s all the signals and information that are available to know what the best things are to show someone. That’s all encoded in the model and how the model’s reasoning and using Google search as a tool to find you information.

So it’s building on that history. It’s not starting from scratch because it’s able to say, oh, okay, Robbie wants to go on this trip and is looking up cool restaurants in some neighborhood.

What are the things that people who are doing that have been relying on on Google for all these years? We kind of know what those resources are we can show you right there. And so I think that helps a lot.

And then obviously the models, now that you release the constraint on layout, obviously the models over time have also become just better at instruction following as well. And so you can actually just define, hey, here are my primitives, here are my design guidelines. Don’t do this, do this.

And of course it makes mistakes at times, but I think just the quality of the model has gotten so strong that those are much less likely to happen now.”

Stein’s explanation makes clear that AI Mode is encoded with everything learned from Google’s classic search systems rather than a rebuild from scratch or a break from them. The risk of hallucinations is managed by grounding AI answers in the same relevance, trust, and usefulness signals that have underpinned classic search for decades. Those signals continue to determine which sources are considered reliable and which information users have historically found valuable. Accuracy in AI search follows from that continuity, with model reasoning guided by longstanding search quality signals rather than operating independently of them.

How Google Evaluates Helpfulness In AI Mode

The next question was about the quality signals that Google uses within AI Mode. Robby Stein’s answer explains that the way AI Mode determines quality is very much the same as with classic search.

The interviewer asked:

“And Robbie, as search is evolving, as the results are changing and really, again, becoming dynamic, what signals are you looking at to know that the user is not only getting what they want, but that is the best experience possible for their search?”

Stein answered:

“Yeah, there’s a whole battery of things. I mean, we look at, like we really study helpfulness and if people find information helpful.

And you do that through evaluating the content kind of offline with real people. You do that online by looking at the actual responses themselves.

And are people giving us thumbs up and thumbs downs?

Are they appreciating the information that’s coming?

And then you kind of like, you know, are they using it more? Are they coming back? Are they voting with their feet because it’s valuable to you.

And so I think you kind of triangulate, any one of those things can lead you astray.

There’s lots of ways that, interestingly, in many products, if the product’s not working, you may also cause you to use it more.

In search, it’s an interesting thing.

We have a very specific metric that manages people trying to use it again and again for the same thing.

We know that’s a bad thing because it means that they can’t find it.

You got to be really careful.

I think that’s how we’re building on what we’ve learned in search, that we really feel good that the things that we’re shipping are being found useful by people.”

Stein’s answer shows that AI Mode evaluates success using the same core signals used for search quality, even as the interface becomes more dynamic. Usefulness is not inferred from a single engagement signal but from a combination of human evaluation, explicit feedback, and behavioral patterns over time.

Importantly, Stein notes that heavy usage, presumably within a single session, is not treated as success on its own, since repeated attempts to answer the same query indicate failure rather than satisfaction. The takeaway is that AI Mode’s success is judged by whether users are satisfied, and that it uses quality signals designed to detect friction and confusion as much as positive engagement. This carries over continuity from classic search rather than redefining what usefulness means.

Five Quality Signals For AI Search

Lastly, Stein answered a question about how content ranks in AI-generated results and whether SEO best practices still help. His answer includes five factors used to determine whether a website meets Google’s quality and helpfulness standards.

Stein answered:

“The core mechanic is the model takes your question and reasons about it, tries to understand what you’re trying to get out of this.

It then generates a fan-out of potentially dozens of queries that are being Googled under the hood. That’s approximating what information people have found helpful for those questions.

There’s a very strong association to the quality work we’ve done over 25 years.

Is this piece of content about this topic?

Has someone found it helpful for the given question?

That allows us to surface a broader diversity of content than traditional Search, because it’s doing research for you under the hood.

The short of it is the same things apply.

  1. Is your content directly answering the user’s question?
  2. Is it high quality?
  3. Does it load quickly?
  4. Is it original?
  5. Does it cite sources?

If people click on it, value it, and come back to it, that content will rank for a given question and it will rank in the AI world as well.”

Watch the interview starting at about the one-hour, twenty-three-minute mark:

Google Says What To Tell Clients Who Want SEO For AI

Google’s Danny Sullivan offered advice to SEOs whose clients are asking what they’re going to do for AI SEO. He acknowledged that this advice is easier to give than to deliver to a client, but he also said that advances in content management systems have pushed technical SEO into the background, freeing SEOs and publishers to focus on the content.

What To Tell Clients

Danny Sullivan acknowledged that SEOs are in a tough spot with clients. He didn’t suggest specifics for how to rank better in AI search at this point (although later in the podcast he did offer suggestions).

But he did offer suggestions for what to tell clients.

Danny explained:

“And the other thing is, and I’ve seen a number of people remark on this, is this concern that, well, I’ve been doing SEO, but now I’m getting clients or people saying to me, but I need the new stuff. I need the new stuff. And I can’t just tell them it’s the same old stuff.

So I don’t know if you feel like you need to dress it up a bit more, but I think the way you dress it up is to say, These are continuing to be the things that are going to make you successful in the long-term. I get you want the fancy new type of thing, but the history is that the fancy new type of thing doesn’t always stick around if we go off and do these particular types of things…

I’m keeping an eye on it, but right now, the best advice I can tell you when it comes to how we’re going to be successful with our AEO is that we continue on doing the stuff that we’ve been doing because that is what it’s built on.

Which is easy for me to say ’cause I don’t got someone banging on the door to say, Well, actually we do. And so we are doing that.

So that’s why, as part of the podcast, it’s just to kind of reassure that, look, just because the formats are changing didn’t mean you have to change everything that you had to do and that everything you had to shift around.”

Downside Of Prioritizing AEO/GEO For AI Search Visibility

Many in the SEO community are suggesting fairly spammy tactics for ranking better in AI chatbots like ChatGPT, such as creating listicles that recommend themselves as the best at whatever they sell. Others are tweaking keyword phrases, the kind of thing SEOs stopped doing by 2005 or 2006.

The problem with making dramatic changes to content in order to rank better in chatbots is that the search traffic share of ChatGPT, Perplexity, and Anthropic Claude is a fraction of a percent for each of them, with Claude close to zero and ChatGPT estimated at 0.2%–0.5%.

So it makes zero sense to prioritize AEO/GEO over Google and Bing search at this point, because the return on investment is close to zero. It’s a different story when it comes to Google AI Overviews and AI Mode, but the underlying ranking systems for both AI interfaces remain Google’s classic search.

Danny shared that focusing on things that are specific to AI risks complicating what should be simple.

Google’s Danny Sullivan shared:

“And in fact, that the more that you dramatically shift things around, and start doing something completely different, or the more that you start thinking I need to do two different things, the more that you may be making things far more complicated, not necessarily successful in the long term as you think they are.”

Technical SEO Is Needed Less?

John Mueller followed up by mentioning that the advanced state of content management systems today means SEOs and publishers no longer have to spend as much time on technical SEO issues, because most CMSs handle the basics of SEO virtually out of the box. Danny Sullivan said that this frees up SEOs and creators to focus on their content, which he insisted will be helpful for ranking in AI search surfaces.

John Mueller commented:

“I think that makes a lot of sense. I think one of the things that perhaps throws SEOs off a little bit is that in the early days, there was a lot of almost like a technical transition where people initially had to do a lot of technical specific things to make their site even kind of accessible in search. And at some point nowadays, I think if you’re using a popular CMS like WordPress or Wix or any of them, basically you don’t have to worry about any of those technical details.

So it’s almost like that technical side of things is a lot less in the foreground now, and you can really focus on the content, and that’s really what users are looking for. So it’s like that, almost like a transition from technical to content side with regards to SEO.”

This echoed a previous statement from earlier in the podcast where Danny remarked on how some people have begun worrying less about SEO and focusing on content.

Danny said:

“But we really just want you to focus on your content and not really worry about this. If your content is on the web and generally accessible as most people’s content is, that’s it.

I’ve actually been heartened that I’ve seen a number of people saying things like: I don’t even want to think about this SEO stuff anymore. I’m just getting back into the joy of writing blogs.

I’m like, yes, great. That’s what we want you to do. That’s where we think you’re going to find your most success.”

Listen to Danny Sullivan’s remarks at about the 8-minute mark:

Featured Image by Shutterstock/Just dance

How Will AI Mode Impact Local SEO?

In organic search, disruption has always been the norm, but the integration of AI into Google Search – with AI Overviews and now AI Mode – is not an incremental change; it is a fundamental restructuring. For marketers overseeing single or multi-location SEO strategies, the transition from the traditional blue-link environment to a conversational, synthesized search experience raises the stakes considerably.

The first manifestation of this shift, the AI Overview (AIO), which claims the premium “Position 0” real estate on a search engine results page (SERP), provided the initial shockwave. However, the long-term competitive reality is defined by AI Mode, a full conversational ecosystem where users can engage in multi-stage dialogue with AI. This interactive mode anticipates a user’s entire “information journey” by mapping out potential subsequent inquiries, known as latent questions or query fan-out, negating the need for users to click through for additional information.

The implications for local SEO are profound. Data confirms that when an AIO is present and a business’s content is not cited, organic click-through rates (CTR) can plummet by as much as 61%.

The priority for local marketing has irrevocably shifted: Success is no longer defined by securing Position 1 in the traditional organic listings, but by achieving inclusion and citation within the Position 0 AI Overview and the expanded AI Mode. Some believe Google could switch entirely to AI Mode at any moment.

This blueprint outlines eight strategic imperatives for marketers to ensure resilient local visibility and drive high-intent conversions in the AI Mode era to come.

The Paradigm Shift: From Blue Links To Entity Authority

The mechanics of AI Mode fundamentally alter local search competition. For high-intent, local or transactional queries (e.g., “best walking tour in Chicago”), the AI often replaces the traditional Google 3-Pack with an expanded, enhanced local AI Mode display including Google Business Profile (GBP) cards.

AI Mode GBP Cards screenshot
Screenshot from Google search for [best walking tours in New Orleans], November 2025

A limited study conducted in May 2025 found AI Overviews (now typically accompanied by AI Mode) appeared for local search queries 57% of the time and were particularly dominant for informational, as opposed to local/commercial, intent queries.

A more recent behavioral study of travel booking in AI Mode found Google Business Profiles to be among the most highly displayed and engaged content for searchers booking local accommodations and experiences. This is likely the case for any locally oriented search. This creates new opportunities, but demands a strategic overhaul to ensure top-tier visibility.

The AI’s choice of businesses for this enhanced local pack leans heavily on Entity Authority. LLMs synthesize business summaries and attributes by drawing information from diverse, omni-channel sources. This reliance on verified, consistent facts across the entire web makes the digital ecosystem, rather than just the website’s content or backlink profile, the primary ranking vector.

In this new environment, traditional SEO and link acquisition strategies must be rebalanced with unique fact provision and entity authority strategies.

8 Local SEO Recommendations For Visibility In AI Mode

To command a dominant position in the conversational search environment, local marketers must execute a comprehensive strategy focusing on local authority, data integrity, technical compliance, and an answer-first content structure.

1. Fortify Your Google Business Profile (GBP) As The Verified Core

GBP has been identified as generative AI’s most critical source of verified local data. Full optimization and consistent verification are non-negotiable gatekeepers for inclusion and visibility within AI Mode.

Non-Negotiable GBP Optimization:

Primary And Secondary Category Selection
Choose the most relevant and appropriate primary category for the business, along with limited additional secondary categories. Do not select generic or non-relevant categories as a means of being included or found via AI search. Far too many businesses make the mistake of choosing as many categories as they think are even tangentially related to the services they offer, often diluting their primary area of expertise.

Comprehensive Service Listings
Ensure accurate and comprehensive listings of all services offered, aligning them perfectly with the services listed on the website and within schema markup. Here again, do not over-extend into generic or non-relevant service offerings.

Verified Hours and Attributes
Maintain current, verified hours of operation, paying special attention to temporary or seasonal closures. A newly important factor in organic and AI search visibility is whether or not a business is physically open when a search is being conducted.

Fill out all relevant business attributes, including payment types accepted, amenities (e.g., parking) available, and anything else which may set the business apart.

Active Engagement Signals
Behavioral signals, such as in-store visits tracked by Google Maps, and engagement signals on the GBP are increasing in importance, suggesting the AI weights profiles demonstrating real-world activity. Responding promptly to reviews and questions posed via GBP is critical, as is regularly posting photos, offers, updates, and other helpful content for your target audience.

Recommendation: The GBP must be treated as a live, mission-critical data feed, not a static listing. Any change to a service, hour, or attribute must be propagated across the GBP first, then the website, and finally any other third-party local or industry-specific directories.

2. Mandate Technical Precision With Schema

Structured data can support AI search visibility. Large Language Models (LLMs), in part, use schema markup to categorize, verify, and ingest factual information directly. Failure to comply with stringent technical specifications may render an entity ineligible for expanded, visually-rich AI results.

Required Technical Specifications:

LocalBusiness Schema And Service Schema
These must be implemented meticulously, defining the business type (e.g., Dentist, Vacation Rental Operator) and precisely describing the services offered using the Service and makesOffer properties.

Geographical Precision
The geo property (latitude and longitude) must be included in the LocalBusiness schema to satisfy the AI’s need for hyper-local accuracy in “near me” and navigational queries.

Visual Asset Compliance
To qualify for visually enhanced AI results, websites must provide multiple relevant service, product, and location-specific images. All images require relevant descriptive filenames and alt text, which must include pertinent keywords, where applicable.

Recommendation: Implement all schema using JSON-LD for simplified maintenance and validation via Google’s Rich Results Test and Schema.org markup validator, keeping the technical markup separate from page design.
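As a hypothetical sketch (the business name, address, coordinates, and service are placeholder values, not from any real listing), a minimal LocalBusiness block in JSON-LD combining the geo and makesOffer properties described above might look like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Dentist",
  "name": "Example Dental Care",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Chicago",
    "addressRegion": "IL",
    "postalCode": "60601"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 41.8781,
    "longitude": -87.6298
  },
  "telephone": "+1-312-555-0100",
  "openingHours": "Mo-Fr 08:00-17:00",
  "makesOffer": {
    "@type": "Offer",
    "itemOffered": {
      "@type": "Service",
      "name": "Teeth Whitening"
    }
  }
}
```

A block like this would be embedded in the page inside a `<script type="application/ld+json">` tag and checked against Google’s Rich Results Test before deployment. Note that the business type (here, Dentist) should be the most specific LocalBusiness subtype that applies, mirroring the primary-category advice for GBP above.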

3. Achieve Omnichannel Entity Consistency (NAP Harmony)

Generative AI systems rely on consistency and verifiability of a business’s factual data across multiple sources. Any conflict in Name, Address, and Phone (NAP) details, or service descriptions, across primary and third-party sources introduces ambiguity. AI models, like organic search algorithms preceding them, are programmed to reject or hesitate to cite conflicting data points, significantly degrading a business’s trustworthiness.

The Data Harmonization Mandate:

GBP Vs. Website
If a business lists four specific services on its website, but six on its Google Business Profile (GBP), the AI may not be able to provide a definitive, confident summary of service offerings.

Comprehensive Auditing
Invest in robust, real-time auditing and monitoring tools to ensure 100% NAP consistency across the corporate website, all individual location pages, GBPs, and major third-party directories (e.g., Yelp, Tripadvisor).

Recommendation: Treat your structured data and GBP as the single source of truth, and enforce a technical and content compliance mandate across all third-party listings and local data aggregators to eliminate signal dilution. Local authority is now synonymous with holistic entity management.

4. Harness The Power Of Authentic Review Sentiment (E-E-A-T)

Within AI-search, Google continues to emphasize the E-E-A-T framework (Experience, Expertise, Authoritativeness, and Trustworthiness). For local entities, this can in part be demonstrated through verifiable user interactions, authentic customer feedback, and structured review data. The AI synthesizes customer reviews into concise, attribute-level summaries serving as the user’s immediate decision cue.

Shifting Review Strategy To Influence The AI Summary:

Attribute-Level Prompting
The strategy must shift from merely gathering high star ratings to encouraging customers to mention desirable operational attributes (e.g., “fast service,” “knowledgeable staff,” “great atmosphere”). This provides the AI with positive attributes to feature prominently in the generated summary, which acts as a primary conversion trigger.

Review Schema Implementation
Implementing Review and AggregateRating schema is critical for providing the AI model with a structured roadmap to quickly identify recurring sentiment themes.

Proactive Management
Active, prompt management and response to both positive and negative reviews, focusing on service attributes, further establishes the ‘A’ authority and ‘T’ trust in E-E-A-T.
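As an illustrative sketch (all names, ratings, and review text are invented placeholders), AggregateRating and Review markup that surfaces attribute-level sentiment might look like this:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Dental Care",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "132"
  },
  "review": {
    "@type": "Review",
    "author": { "@type": "Person", "name": "Jane D." },
    "reviewRating": { "@type": "Rating", "ratingValue": "5" },
    "reviewBody": "Fast service and knowledgeable staff."
  }
}
```

The reviewBody here deliberately echoes the operational attributes (“fast service,” “knowledgeable staff”) that the strategy above aims to encourage, giving the AI structured sentiment it can lift into a generated summary.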

5. Adopt Answer Engine Optimization (AEO) And Query Fan-Out Mapping

Content strategy must transition from traditional keyword SEO to Answer Engine Optimization (AEO). AI Mode prioritizes highly informative, concise content specifically structured to answer user queries directly. Query fan-out refers to the process of not only answering the first query submitted, but also anticipating and providing answers to a range of subsequent related questions users have.

Content Strategy For Conversational Search

Map Latent Questions
Since complex queries often trigger AI Overviews, and AI Mode builds on the same multi-step reasoning systems, Google’s LLMs attempt to map the user’s broader information journey by predicting the follow-up questions they are likely to ask. Content therefore needs to address not only the initial ‘head query’ but also the latent questions that make up the next steps in that journey.

Structure For Extraction
Content inclusion is assessed partly by structure. Utilize clear formatting elements easy for the AI to extract and cite:

  • Hierarchical Headings: Implement a clean, tiered heading structure to guide LLMs through content based on its hierarchical importance.
  • Answer First Content: Incorporate semantically related questions and answers tied to perceived user intent naturally into body content.
  • FAQs/Q&A Formatting: Use structured Q&A formats along with FAQPage schema.
  • Ordered Lists: Present verifiable facts in easily digestible formats like bulleted and numbered lists.
  • Short, Concise Paragraphs: Ensure maximum readability and extraction suitability for the LLM.
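To make the FAQPage schema point concrete, here is a hedged sketch (the questions and answers are invented for a hypothetical local business) of the structured Q&A format referenced above:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do you offer emergency plumbing on weekends?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Our emergency line is staffed 24/7, including weekends and holidays."
      }
    },
    {
      "@type": "Question",
      "name": "Which neighborhoods do you serve?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "We serve the greater metro area, including downtown and the surrounding suburbs."
      }
    }
  ]
}
```

Each Question/acceptedAnswer pair gives the LLM a self-contained, extractable unit, which is exactly the answer-first structure the formatting guidelines above call for.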

Implement A Dual Content Strategy

  • Tier 1 (Informational/AEO): Unique, helpful, experience-backed content optimized for AIO citation (FAQs, guides) to establish E-E-A-T and secure brand visibility.
  • Tier 2 (Transactional/CRO): Core service pages and hyper-local pages focused on high-intent, bottom-of-the-funnel queries (“emergency plumber near me”), prioritizing clear calls-to-action and conversion architecture.

6. Diversify Entity Authority: Chase Branded Web Mentions

The AI’s holistic approach to entity authority means links are less important than they once were, while branded mentions are experiencing a resurgence. Research indicates a strong correlation between brands cited in AI Overviews/AI Mode and the frequency of their mention across the broader web, including social media, blogs, and forums like Reddit. In AI SEO, brand mentions (linked or not) are the new link.

Strategy For Earning “The AI Vote”:

Omnichannel Entity Acquisition
Proactively pursue high-quality, non-linked citations from authoritative local news sources, industry blogs, and high-quality directories. The goal is to maximize the sheer volume of high-quality, reinforcing brand mentions AI can reference.

Social & Video Integration
Leverage social media platforms and, critically, YouTube content. LLMs scrape video and social channels for entity information and context, making these verifiable sources of service and brand attribute data.

Recommendation: Shift resources from low-value link-building activities toward Digital PR and Content Distribution campaigns designed to earn non-linked brand mentions and reinforce local expertise across third-party industry and media sites.

7. Optimize For High-Velocity Conversions (CRO)

The inevitable decline in raw organic traffic is accompanied by an efficiency challenge. The traffic that successfully navigates from AI Mode to the website should be more qualified and higher-intent, as the AI has already satisfied low-intent informational needs. What remains is the commercially valuable “bottom-of-the-funnel” user.

The Conversion Imperative:

CRO Over Traffic Generation
Resources should be strategically reallocated away from mass traffic generation toward maximizing the conversion potential of the qualified users who land on the website.

One interesting finding from the aforementioned AI Mode behavioral study was the number of users who expected to simply be able to complete their transaction once they left AI Mode, i.e., just click Book Now and pay. While this may be coming in the form of future Google integrations, the current transactional workflow requires users to start their booking from the beginning.

While the percentage of traffic from AI search may initially be less than 1%, the potential volume – with 1% of a trillion searches equating to 10 billion opportunities – justifies a dedicated focus on conversion for this high-value segment.

Perfecting Conversion Architecture
The final click from AI Mode to the website must lead to a seamless, high-velocity user experience. This involves:

  • Above-the-Fold CTAs: Ensuring clear, single-focus calls-to-action (CTAs) are immediately visible on landing pages.
  • Minimal Friction: Reducing form fields and providing one-click access to the most high-intent action (e.g., “Request a Quote,” “Book Now,” “Call Us”).
  • KPI Recalibration: Focus key performance indicators (KPIs) on high-value, direct actions tracked through Google Business Insights and Search Console, emphasizing direct calls, requests for driving directions, and specific booking actions, rather than low-intent clicks. Visibility in AI Mode becomes a more meaningful success metric than a singular keyword rank.

8. Future-Proofing: Un-hide Content And Prioritize Accessibility

A foundational requirement for AI Mode visibility is ensuring technical accessibility of content for the LLM’s consumption.

Accessibility As A Generative Requirement:

Un-hide Critical Content
Content crucial to establishing entity authority (e.g., licenses, certifications, key service attributes, location details) must not be hidden within toggles, tabs, accordions, or JavaScript requiring a user click to reveal.

Plain Text And HTML
While visuals are important, the core factual assertions must be rendered in clean, accessible HTML any machine can easily read and interpret.

Proactive Monitoring
Use LLM analysis tools (or reverse question-answering prompts) to regularly audit which questions your site is answering and which critical facts are not being found by the AI, ensuring your core message is what is actually being crawled and indexed.
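The un-hiding and monitoring advice above can be sketched as a simple raw-HTML audit: check whether your critical facts appear in the server-rendered source, which approximates what a crawler sees without executing JavaScript or expanding toggles. The facts list and sample HTML below are hypothetical examples, not part of any real tool.

```python
import html

# Hypothetical critical facts a local business would want machine-readable.
CRITICAL_FACTS = [
    "Licensed & insured",
    "Serving Austin since 2005",
    "TX License #12345",
]

def audit_facts(raw_html: str, facts: list[str]) -> dict[str, bool]:
    """Check which facts appear in the server-rendered HTML, i.e., what a
    crawler can read without executing JavaScript or clicking a toggle."""
    text = html.unescape(raw_html).lower()
    return {fact: fact.lower() in text for fact in facts}

# Hypothetical page source; in practice you would fetch your own page.
sample_html = """
<main>
  <h1>Smith Plumbing</h1>
  <p>Licensed &amp; insured. Serving Austin since 2005.</p>
</main>
"""
results = audit_facts(sample_html, CRITICAL_FACTS)
missing = [fact for fact, found in results.items() if not found]
```

Anything landing in `missing` is a fact the page claims to have but that a non-JavaScript crawler cannot see, which is exactly the gap this section warns about.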

The Generative Mandate For Local SEO In The AI Era

Google AI Mode represents the definitive passing of the torch from traditional link-based SEO to a sophisticated strategy centered on fact provision and entity validation. For marketers, the shift is not one to debate, but one to embrace immediately.

The future of local search visibility is a high-stakes competition for the top-tier real estate of the AI Overview and AI Mode. The required investment is a mandate across the entire digital portfolio:

  1. Technical Compliance: Adhering to strict schema and content specifications to gain eligibility.
  2. Data Integrity: Enforcing omnichannel consistency to build undeniable entity trust.
  3. Content Refinement: Adopting Answer Engine Optimization to answer the full spectrum of user queries.
  4. Linked Or Unlinked Branded Mentions: Earn and establish visibility in high-authority local and industry-relevant places.

This strategic pivot – away from mass-traffic keyword pursuits and toward precise entity authority management – is the only way to mitigate the risk of CTR collapse and capitalize on the high-quality, high-intent traffic AI Mode will deliver. Your business must now be structured as an impeccable source of verified, structured facts for AI to cite. The time for strategic adaptation is now.

More Resources:


Featured Image: Koupei Studio/Shutterstock

Google Explains How To Rank In AI Search via @sejournal, @martinibuster

Google’s John Mueller and Danny Sullivan discussed their thoughts on AI search and what SEOs and creators should be doing to make sure their content is surfaced. Danny showed some concern for folks who were relying on commodity content that is widely available.

What Creators Should Focus On For AI

John Mueller asked Danny Sullivan what publishers should be focusing on right now that’s specific to AI. Danny answered by explaining what kind of content creators should not focus on and what they should be focusing on instead.

He explained that the kind of content that creators should not focus on is commodity content. Commodity content is web content that consists of information that’s widely available and offers no unique value, no perspective, and requires no expertise. It is the kind of content that’s virtually interchangeable with any other site’s content because they are all essentially generic.

While Danny Sullivan did not mention recipe sites, his discussion about commodity content immediately brought recipe sites to mind because those kinds of sites seemingly go out of their way to present themselves as generically as possible, from the way the sites look to the “I’m just a mom of two kids” bio to the recipes they provide. In my opinion, what Danny Sullivan said should make creators consider what they bring to the web that makes them notable.

To explain what he meant by commodity content, Danny used the example of publishers who used to optimize a web page for the time that the Super Bowl game began. His description of the long preamble they wrote before giving the generic answer of what time the Super Bowl starts reminded me again of recipe sites.

At about the twelve minute mark John Mueller asked Danny:

“So what would you say web creators should focus on nowadays with all of the AI?”

Danny answered:

“A key thing is to really focus on is the original aspect. Not a new thing.

These are not new things beyond search, but if you’re really trying to reframe your mind about what’s important, I think that on one hand, there’s a lot of content that is just kind of commodity content, factual information, and I think that the… LLM, AI systems are doing a good job of presenting that sort of stuff.

And it’s not originating from any type of thing.

So the classic example, as you know, will make people laugh, …but every year we have this little American football thing called the Super Bowl, which is our big event.

…But no one ever can seem to remember what time it’s on.

…Multiple places would then all write their “what time does the Super Bowl start in 2011?” post. And then they would write these giant long things.

…So, you know, and then at some point, we could see enough information and we have data feeds and everything else that we just kind of said, you do a search and …the Super Bowl is going to be at 3:30.

…I think the vast majority of people say, that’s a good thing. Thank you for just telling me the time of the Super Bowl.

It wasn’t super original information.”

Commodity Content Is Not Your Strength

Next, Danny considered some of the content people are publishing today, encouraging them to think about the generic nature of their content and how they can share something more original and unique.

Danny continued his answer:

“I think that is a thing people need to understand, is that more of this sort of commodity stuff, it isn’t going to necessarily be your strength.

And I do worry that some people, even with traditional SEO, focus on it too much.

There are a number of sites I know from the research and things that I’ve done that get a huge amount of traffic for the answer to various popular online word-solving games.

It’s just every day I’m going to give you the answer to it. …and that is great. Until the system shifts or whatever, and it’s common enough, or we’re pulling it from a feed or whatever, and now it’s like, here’s the answer.”

Bring Your Expertise To AI

Danny next suggested that people who are concerned about showing up in AI should start exploring how to express their authentic experience or expertise. He said this advice applies not just to text content but to video and podcast content as well.

He continued:

“Your original voice is that thing that only you can provide. It’s your particular take.

And so that’s what we think was our number one thing when we’re telling people is like, this is what we think your strength is going to be.

As we go into this new world, is already what you should be doing, but this is what your strength that you should be doing is focus on that original content.

I think related to that is this idea that people are also seeking original content that’s, …authentic to them, which typically means it’s a video, it’s a podcast…

…And you’ve seen that in the search we’ve already done, where we brought in more social, more experiential content.  Not to take away from the expert takes, it’s just that people want that.

Sometimes you’re just wanting to know someone’s firsthand experience alongside some expert take on it as well.

But if you are providing those expert takes, you’re doing reviews or whatever, and you’ve done that in the written form, you still have the opportunity to be doing those in videos and podcasts and so on.  Those are other opportunities.

So those are things that, again, it’s not unique to the AI formats, but they just may be, as you’re thinking about, how do I reevaluate what I’m doing overall in this era, that these are things you may want to be considering with it from there.”

John Mueller agreed that it makes sense to bring your unique voice to content in order to make it stand out. Danny’s point treats visibility in AI-driven search as a matter of differentiation rather than optimization. The emphasis is not on adapting content to a new format, but on creating a recognizable voice and perspective. Given that AI Search is still classic search under the hood, it makes sense to stand out from competitors with unique content that people will recognize and recommend.

Listen to the passage at around the twelve minute mark:

Featured Image by Shutterstock/Asier Romero

Google’s AI Mode Personal Context Features “Still To Come” via @sejournal, @MattGSouthern

Seven months after Google teased “personal context” for AI Mode at Google I/O, Nick Fox, Google’s SVP of Knowledge and Information, says the feature still is not ready for a public rollout.

In an interview with the AI Inside podcast, Fox framed the delay as a product and permissions issue rather than a model-capability issue. As he put it: “It’s still to come.”

What Google Promised At I/O

At Google I/O, Google said AI Mode would “soon” incorporate a user’s past searches to improve responses. It also said you would be able to opt in to connect other Google apps, starting with Gmail, with controls to manage those connections.

The idea was that you wouldn’t need to restate context in every query if you wanted Google to use relevant details already sitting in your account.

On timing, Fox said some internal testing is underway, but he did not share a public rollout date:

“Some of us are testing this internally and working through it, but you know, still to come in terms of the in terms of the public roll out.”

You can hear the question and Fox’s response in the video below starting around the 37-minute mark:

AI Mode Growth Continues Without Personal Context

Even without that personalization layer, Fox pointed to rapid adoption, describing AI Mode as having “grown to 75 million daily active users worldwide.”

The bigger change may be in how people phrase queries. Fox described questions that are “two to three times as long,” with more explicit first-person context.

Instead of relying on AI Mode to infer intent, people are writing the context into the prompt, Fox says:

“People are trying to put put the right context into the query”

That matters because the “personal context” feature was designed to reduce that manual effort.

Geographic Patterns In Adoption

Adoption also appears uneven by market, with the strongest traction in regions that received AI Mode first. Fox described the U.S. as the most “mature” market because the product has had more time to become part of people’s routines.

He also pointed to strong adoption in markets where the web is less developed in certain languages or regions, naming India, Brazil, and Indonesia. The argument there is that AI Mode can stitch together information across languages and borders in ways traditional search results may not have for those markets.

Younger users, he added, are adopting the experience faster across regions.

Publisher Relationship Updates

The interview also included updates tied to how AI Mode connects people back to publisher content.

Preferred Sources is one of them. The feature lets you choose specific publications you want to see more prominently in Google’s Top Stories unit, and Google describes it as available worldwide in English.

Fox also described ongoing work on links in AI experiences, including increasing the number of links shown and adding more context around them:

“We’re actually improving the links within our within our AI experience, increasing the number of them…”

On the commercial side, he noted Google has partnerships with “over 3,000 organizations” across “50 plus countries.”

Technical Updates

Fox talked through product and infrastructure changes now powering AI Mode and related experiences.

One was shipping Gemini 3 Pro in Search on day one, which he described as the first time Google has shipped a “frontier model” in Search on launch day.

He also described “generative layouts,” where the model can generate UI code on the fly for certain queries.

To keep the experience fast, he emphasized model routing, where simpler queries go to smaller, faster models and heavier work is reserved for more complex prompts.

Why This Matters

A version of AI Mode that personalizes answers using opt-in Gmail context is still not available and doesn’t have a public timeline.

In the meantime, people appear to be compensating by typing more context into their queries. If that becomes the norm, it may push publishers toward satisfying longer, more situation-specific questions.

Looking Ahead

While AI Mode is still in its early stages, the 75 million daily active users figure suggests it’s large enough to monitor for visibility.


Featured Image: Jackpress/Shutterstock

Google Gemini 3 Flash Becomes Default In Gemini App & AI Mode via @sejournal, @MattGSouthern

Google released Gemini 3 Flash, expanding its Gemini 3 model family with a faster model that’s now the default in the Gemini app.

Gemini 3 Flash is also rolling out globally as the default model for AI Mode in Search.

The release builds on Google’s recent Gemini 3 rollout, which introduced Gemini 3 Pro in preview and also announced Gemini 3 Deep Think as an enhanced reasoning mode.

What’s New

Gemini 3 Flash replaces Gemini 2.5 Flash as the default model in the Gemini app globally, which means free users get the Gemini 3 experience by default.

In Search, Gemini 3 Flash is rolling out globally as AI Mode’s default model starting today.

For developers, Gemini 3 Flash is available in preview via the Gemini API, including access through Google AI Studio, Google Antigravity, Vertex AI, Gemini Enterprise, plus tools such as Gemini CLI and Android Studio.

Pricing

Gemini 3 Flash pricing is listed at $0.50 per million input tokens and $3.00 per million output tokens on Google’s Gemini API pricing documentation.

On the same pricing page, Gemini 2.5 Flash is listed at $0.30 per million input tokens and $2.50 per million output tokens.

Google says Gemini 3 Flash uses 30% fewer tokens on average than Gemini 2.5 Pro for typical tasks, and cites third-party benchmarking for a “3x faster” comparison versus 2.5 Pro.
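To see what those listed rates mean in practice, here is a small cost sketch using the per-million-token prices above. The monthly workload numbers are hypothetical, chosen only to make the comparison concrete.

```python
# Per-million-token rates as listed on Google's Gemini API pricing page
# (per the article); keyed by hypothetical short model names.
PRICES = {
    "gemini-3-flash": {"input": 0.50, "output": 3.00},
    "gemini-2.5-flash": {"input": 0.30, "output": 2.50},
}

def monthly_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a workload at the listed per-million-token rates."""
    rate = PRICES[model]
    return (input_tokens / 1e6) * rate["input"] + (output_tokens / 1e6) * rate["output"]

# Hypothetical workload: 200M input tokens and 40M output tokens per month.
flash_3_cost = monthly_cost("gemini-3-flash", 200_000_000, 40_000_000)
flash_25_cost = monthly_cost("gemini-2.5-flash", 200_000_000, 40_000_000)
```

At these rates the example workload costs $220 on Gemini 3 Flash versus $160 on Gemini 2.5 Flash, before accounting for any per-task token-efficiency differences between the models.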

Why This Matters

The default language model in the Gemini app has changed, and users have access at no extra cost.

If you build on Gemini, Gemini 3 Flash offers a new option for high-volume workflows, priced well below Pro-tier rates.

Looking Ahead

Gemini 3 Flash is rolling out now. In Search, Gemini 3 Pro is also available in the U.S. via the AI Mode model menu.

Google AI Mode & AI Overviews Cite Different URLs, Per Ahrefs Report via @sejournal, @MattGSouthern

Google’s AI Mode and AI Overviews can produce answers with similar meaning while citing different sources, according to new data from Ahrefs.

The report, published on the Ahrefs blog, analyzed September 2025 U.S. data from Ahrefs’ Brand Radar tool and compared AI Mode and AI Overview responses for the same queries.

The authors looked at 730,000 query pairs for content similarity and 540,000 query pairs for citation and URL analysis.

What The Study Found

Ahrefs reports that AI Mode and AI Overviews cited the same URLs only 13% of the time. When comparing only the top three citations in each response, overlap increased to 16%.

The language in the responses also varied. Ahrefs reports 16% overlap in unique words and states that AI Mode and AI Overviews share the exact same first sentence only 2.5% of the time.

Ahrefs reported strong semantic alignment, with an average semantic similarity score of 86%, and 89% of response pairs scoring above 0.8 on a scale where 1.0 indicates identical meaning.
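Ahrefs does not publish its exact formula, but one plausible way to compute citation overlap for a query pair is the Jaccard index over cited URLs: shared URLs divided by all distinct URLs either response cites. This is an assumption for illustration, not necessarily Ahrefs’ method, and the URL lists below are hypothetical.

```python
def citation_overlap(ai_mode_urls: list[str], aio_urls: list[str]) -> float:
    """Jaccard overlap between two citation lists: URLs cited by both
    responses, divided by all distinct URLs cited by either response."""
    a, b = set(ai_mode_urls), set(aio_urls)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical citations for one query in AI Mode vs. AI Overviews.
overlap = citation_overlap(
    ["example.com/guide", "example.com/faq", "wikipedia.org/wiki/Topic"],
    ["wikipedia.org/wiki/Topic", "youtube.com/watch?v=abc123"],
)
```

Here one of four distinct URLs is shared, giving 25% overlap for this pair; averaging such scores across hundreds of thousands of query pairs would yield aggregate figures like the 13% reported.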

Despina Gavoyannis, Senior SEO Specialist at Ahrefs, writes:

“Put simply: 9 out of 10 times, AI Mode and AI Overview agreed on what to say. They just said it differently and cited different sources.”

Different Source Preferences

Ahrefs reports differences in which websites and content types each feature tends to cite.

For example, Wikipedia appears in 28.9% of AI Mode citations compared to 18.1% in AI Overviews. The data also finds that AI Mode cited Quora 3.5x more often and cited health sites at roughly double the rate of AI Overviews.

AI Overviews, by contrast, leaned more heavily on video content. YouTube was the most frequently cited source for AI Overviews, whereas Reddit was cited at similar rates in both AI Mode and AI Overviews.

Ahrefs also reports that AI Overviews cited videos and core pages (such as homepages) nearly twice as often as AI Mode. At the same time, both features showed a strong preference for article-format pages overall.

Entity And Brand Mentions

Ahrefs found AI Mode responses were about four times longer than AI Overviews on average and included more entities.

In the dataset, AI Mode averaged 3.3 entity mentions per response compared to 1.3 for AI Overviews. Approximately 61% of the time, AI Mode included all entities mentioned in the AI Overview response and then added additional entities.

Many responses didn’t include brands or entities. Ahrefs reports that 59.41% of AI Overview responses and 34.66% of AI Mode responses contained no mentions of persons or brands, which the authors associate with informational queries in which named entities are not typically part of the answer.

Citation Gaps

The data finds that AI Mode was more likely to include citations than AI Overviews.

Only 3% of AI Mode responses lacked sources, compared to 11% of AI Overviews. Ahrefs reports that missing citations typically occur in cases such as calculations, sensitive queries, help center redirects, or unsupported languages.

Why This Matters

This report suggests that AI Mode and AI Overviews can differ in the sources they credit, even when they reach similar conclusions for the same query.

For monitoring purposes, this can affect how you interpret “visibility” across experiences. A citation (or a mention) in AI Overviews does not necessarily imply you will be cited in AI Mode for the same query, and AI Mode’s longer responses may include additional entities and competitors compared to the shorter AI Overview format.

Google’s documentation states that both AI Overviews and AI Mode may use “query fan-out,” which issues multiple related searches across subtopics and data sources while a response is being generated.

Google also notes that AI Mode and AI Overviews may use different models and techniques, so the responses and links they display will vary.

Looking Ahead

Ahrefs notes this analysis compares single generations of AI Mode and AI Overview responses. In related research, Ahrefs reported that 45.5% of AI Overview citations change when AI Overviews update, suggesting that overlap can appear different across repeated runs.

Even with that caveat, the low overlap observed in this dataset indicates that AI Mode and AI Overviews frequently select different URLs as supporting sources for the same query.


Featured Image: hafakot/Shutterstock

Why Your AI Agent Keeps ‘Hallucinating’ (Hint: It’s Your Data, Not The AI) via @sejournal, @purnavirji

If it looks like an AI hallucination problem, and sounds like an AI hallucination problem, it’s probably a data hygiene problem.

I’ve sat through dozens of demos this year where marketing leaders show me their shiny new AI agent, ask it a basic question, and watch it confidently spit out information that’s either outdated, conflicting, or flat-out wrong.

The immediate reaction is to blame the AI: “Oh, sorry the AI hallucinated. Let’s try something different.”

But was it really the AI hallucinating?

Don’t shoot the messenger, as the saying goes. While the AI is the messenger bringing you what looks like inaccurate data or hallucination, it’s really sending a deeper message: Your data is a mess.

The AI is simply reflecting that mess back to you at scale.

The Data Crisis Hiding Behind “AI Hallucinations”

An Adverity study found that 45% of marketing data is inaccurate.

Almost half of the data feeding your AI systems, your reporting dashboards, and your strategic decisions is wrong. And we wonder why AI agents give vague answers, contradict themselves, or pull messaging that no one’s used since 2022.

Here’s what I see in nearly every enterprise:

  • Three teams operating with three different definitions of ideal customer profile (ICP).
  • Marketing defines “conversion” one way, sales defines it another.
  • Buyer data scattered across six systems that barely acknowledge each other’s existence.
  • A battlecard last updated in 2019 still floating around, treated like gospel by your AI agent.

When your foundational data argues with itself, AI doesn’t know which version to believe. So it picks one. Sometimes correctly. Often not.

Why Clean Data Matters More Than Smart AI

AI isn’t magic. It reflects whatever you feed it: the good, the bad, and the three-years-outdated.

Everyone wants the “build an agent” sexy moment. The product demo that has everyone applauding. The efficiency gains that guarantee a great review, heck, maybe even a raise.

But the thing that makes AI useful is the boring, unsexy, foundational work of data discipline.

I’ve watched companies spend six figures on AI infrastructure while their product catalog still has duplicate entries from a 2021 migration. I’ve seen sales teams adopt AI coaching tools while their CRM defines “qualified lead” three different ways depending on which region you ask.

The AI works exactly as designed. The problem is what it’s designed to work with.

If your system is messy, AI can’t clean it up (at least, not yet). It amplifies the mess at scale, across every interaction. As much as we would like for it to, even the sexiest AI model in the world won’t save you if your data foundation is broken.

The Real Cost Of Bad Data Hygiene

When your data is inaccurate, inconsistent, or outdated, mistakes are inevitable. These can get risky quickly, especially if they negatively impact customer experience or revenue.

Here’s what that looks like in practice:

Your sales agent gives prospects pricing that changed six months ago because nobody updated the product sheet it’s trained on.

Your content generation tool pulls brand messaging from 2020 because the 2026 messaging framework lives in a deck on someone’s desktop.

Your lead scoring AI uses ICP criteria that marketing and sales never agreed on, so you’re nurturing the wrong prospects while ignoring the right ones.

Your sales enablement agent recommends a case study for a product you discontinued last quarter because nobody archived the old collateral.

This is happening every single week in enterprises that have invested millions in AI transformation. And most teams don’t even realize it until a customer or prospect points it out.

Where To Start: 5 Steps To Fix Your Data Foundation

The good news: You don’t need a massive transformation initiative to fix this. You need discipline and ownership.

1. Audit What Your AI Can Actually See

Before you can fix your data problem, you need to understand its scope.

Pull every document, spreadsheet, presentation, and database your AI systems have access to. Don’t assume. Actually look.

You’ll more than likely find:

  • Conflicting ICP definitions across departments.
  • Outdated pricing from previous years.
  • Messaging from three rebrand cycles ago.
  • Competitive intel that no longer reflects market reality.
  • Case studies for products you no longer sell.

Retire what’s wrong. Update what’s salvageable. Be ruthless about what stays and what goes.

2. Create One Source Of Truth

This is non-negotiable. Pick one system for every definition that matters to your business:

  • ICP criteria.
  • Conversion stage definitions.
  • Territory assignments.
  • Product positioning.
  • Competitive differentiators.

Everyone pulls from it. No exceptions. No “but our team does it differently.”

When marketing and sales use different definitions, your AI can’t arbitrate. It picks one randomly. Sometimes it picks both and contradicts itself across interactions.

One source of truth eliminates that chaos.

3. Set Expiration Dates For Everything

Every asset your AI can access should have a “valid until” date.

Battlecards. Case studies. Competitive intelligence. Messaging frameworks. Product specs.

When it expires, it automatically disappears from AI access. No manual cleanup required. No hoping someone remembers to archive old content.

Stale data is worse than no data. At least with no data, your AI admits it doesn’t know. With stale data, it confidently delivers wrong information.
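The expiration rule above can be sketched as a filter applied before any asset reaches the AI: anything past its “valid until” date simply never appears. The asset records and dates below are hypothetical; a real system would pull them from a CMS or asset manager.

```python
from datetime import date

# Hypothetical asset registry with explicit expiration dates.
ASSETS = [
    {"name": "2026 messaging framework", "valid_until": date(2026, 12, 31)},
    {"name": "2019 battlecard", "valid_until": date(2020, 1, 1)},
]

def ai_visible(assets: list[dict], today: date) -> list[str]:
    """Expose only assets whose 'valid until' date has not passed, so
    expired content disappears from AI access with no manual cleanup."""
    return [a["name"] for a in assets if a["valid_until"] >= today]

current = ai_visible(ASSETS, today=date(2026, 1, 15))
```

The stale 2019 battlecard is excluded automatically; no one has to remember to archive it.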

4. Test What Your AI Actually Knows

Don’t assume your AI is working correctly. Test it.

Ask basic questions:

  • “What’s our ICP?”
  • “How do we define a qualified lead?”
  • “What’s our current pricing for [product]?”
  • “What differentiates us from [competitor]?”

If the answers conflict with what you know is true, you just found your data hygiene problem.

Run these tests monthly. Your business changes. Your data should change with it.
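A monthly audit like this can be automated as a small regression test: keep a ground-truth answer per question and flag any AI response that no longer contains it. Everything below is a hypothetical stub for illustration; in practice `ask_ai()` would call your actual agent.

```python
# Hypothetical ground truth maintained by the source-of-truth owner.
GROUND_TRUTH = {
    "What's our ICP?": "B2B SaaS companies with 50-500 employees",
    "How do we define a qualified lead?": "Demo requested and budget confirmed",
}

def ask_ai(question: str) -> str:
    """Stand-in for a real call to your AI agent; canned answers only."""
    canned = {
        "What's our ICP?": "Our ICP is B2B SaaS companies with 50-500 employees.",
        "How do we define a qualified lead?": "Any form fill.",  # stale answer
    }
    return canned[question]

def run_audit(ground_truth: dict[str, str]) -> list[str]:
    """Return the questions whose AI answer no longer contains the truth."""
    return [
        question
        for question, expected in ground_truth.items()
        if expected.lower() not in ask_ai(question).lower()
    ]

failures = run_audit(GROUND_TRUTH)
```

Each question that lands in `failures` points at a stale or conflicting document in whatever the AI is reading, which is the data hygiene problem this step is designed to surface.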

5. Assign Someone To Own It

Data discipline without ownership is a Slack thread that goes nowhere.

One person needs to be explicitly responsible for maintaining your source of truth. Not as an “additional responsibility.” As a core part of their role.

This person:

  • Reviews and approves all updates to the source of truth.
  • Sets and enforces expiration dates for assets.
  • Runs monthly audits of what AI can access.
  • Coordinates with teams to retire outdated content.
  • Reports on data quality metrics.

Without ownership, your data hygiene initiative dies in three months when everyone gets busy with other priorities.

The Bottom Line: Foundation Before Flash

If you don’t fix the mess, AI will scale the mess.

Deploying powerful AI on top of chaotic data is at best inefficient; at worst, it can actively damage your brand, your customer relationships, and your competitive position.

You can have the most sophisticated AI model in the world. The best prompts. The most expensive infrastructure. None of it matters if you’re feeding it garbage. It takes a disciplined foundation to make it work.

It’s like seeing someone with perfectly white teeth and thinking they just got lucky. What you don’t see is the daily flossing, the regular dental cleanings, the discipline of avoiding sugar and brushing twice a day for years.

Or watching an Olympic athlete make a performance look effortless. You’re not seeing the 5 a.m. training sessions, the strict diet, the thousands of hours of practice that nobody applauds.

The same applies to AI.

To get real value and ROI from AI, start with setting it up for success with the right data foundation. Yes, it might not be the most glamorous or exciting work. But it is what makes the glamorous and exciting possible.

Remember, your AI isn’t hallucinating. It’s telling you exactly what your data looks like.

The question is: Are you ready to fix it?

More Resources:


Featured Image: BestForBest/Shutterstock

WooCommerce Is Integrating Agentic AI Capabilities via @sejournal, @martinibuster

WooCommerce announced that it will roll out integration with Stripe’s Agentic Commerce Suite, which will enable AI shopping assistants to conduct transactions.

Agentic AI Shopping

Agentic AI may seem a long way off, but OpenAI already supports end-to-end shopping, from the discovery and comparison stages through to completing purchases. With the rollout in WooCommerce, the infrastructure will be in place to enable over four million stores to accept product browsing and payments through AI agents.

Stripe Agentic Commerce Suite

Stripe’s Agentic Commerce Suite uses the Agentic Commerce Protocol (ACP), an open source protocol jointly created by Stripe and OpenAI. ACP is model agnostic and does not lock in users to any particular payment provider.

ACP is compatible with the Model Context Protocol (MCP) which was created by Anthropic initially for connecting AI models to external data. The significance is that MCP enables models to call APIs, retrieve data, and perform actions.

According to the official WooCommerce announcement:

“WooCommerce is proud to be a launch partner. Woo merchants will be among the first to benefit when Agentic Commerce Suite rolls out in the coming months.

This is a significant moment for WooCommerce merchants. Instead of building custom integrations for every new AI shopping assistant or platform, you’ll be able to connect your product catalog once and reach customers shopping through whichever AI agent they prefer. Stripe handles discovery, checkout, payments, and fraud protection, while you continue using your existing WooCommerce + Stripe stack.”

This represents a step toward putting the necessary infrastructure in place to enable consumers to interact with AI as part of a new shopping experience. The very near future may see a dramatic change in shopping habits, something SEOs and merchants will have to consider.

Featured Image by Shutterstock/TarikVision