Google’s AI Mode Personal Context Features “Still To Come” via @sejournal, @MattGSouthern

Seven months after Google teased “personal context” for AI Mode at Google I/O, Nick Fox, Google’s SVP of Knowledge and Information, says the feature still is not ready for a public rollout.

In an interview with the AI Inside podcast, Fox framed the delay as a product and permissions issue rather than a model-capability issue. As he put it: “It’s still to come.”

What Google Promised At I/O

At Google I/O, Google said AI Mode would “soon” incorporate a user’s past searches to improve responses. It also said you would be able to opt in to connect other Google apps, starting with Gmail, with controls to manage those connections.

The idea was that you wouldn’t need to restate context in every query if you wanted Google to use relevant details already sitting in your account.

On timing, Fox said some internal testing is underway, but he did not share a public rollout date:

“Some of us are testing this internally and working through it, but, you know, still to come in terms of the public rollout.”

You can hear the question and Fox’s response in the podcast video, starting around the 37-minute mark.

AI Mode Growth Continues Without Personal Context

Even without that personalization layer, Fox pointed to rapid adoption, describing AI Mode as having “grown to 75 million daily active users worldwide.”

The bigger change may be in how people phrase queries. Fox described questions that are “two to three times as long,” with more explicit first-person context.

Instead of relying on AI Mode to infer intent, people are writing the context into the prompt, Fox says:

“People are trying to put the right context into the query.”

That matters because the “personal context” feature was designed to reduce that manual effort.

Geographic Patterns In Adoption

Adoption also appears uneven by market, with the strongest traction in regions that received AI Mode first. Fox described the U.S. as the most “mature” market because the product has had more time to become part of people’s routines.

He also pointed to strong adoption in markets where the web is less developed in certain languages or regions, naming India, Brazil, and Indonesia. The argument there is that AI Mode can stitch together information across languages and borders in ways traditional search results may not have for those markets.

Younger users, he added, are adopting the experience faster across regions.

Publisher Relationship Updates

The interview also included updates tied to how AI Mode connects people back to publisher content.

Preferred Sources is one of them. The feature lets you choose specific publications you want to see more prominently in Google’s Top Stories unit, and Google describes it as available worldwide in English.

Fox also described ongoing work on links in AI experiences, including increasing the number of links shown and adding more context around them:

“We’re actually improving the links within our AI experience, increasing the number of them…”

On the commercial side, he noted Google has partnerships with “over 3,000 organizations” across “50 plus countries.”

Technical Updates

Fox talked through product and infrastructure changes now powering AI Mode and related experiences.

One was shipping Gemini 3 Pro in Search on day one, which he described as the first time Google has shipped a “frontier model” in Search on launch day.

He also described “generative layouts,” where the model can generate UI code on the fly for certain queries.

To keep the experience fast, he emphasized model routing, where simpler queries go to smaller, faster models and heavier work is reserved for more complex prompts.
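Google hasn’t published how its routing works, but the general pattern Fox describes is a cheap check in front of two models. A minimal illustrative sketch (the heuristic and model names are hypothetical, not Google’s actual logic):

```python
# Toy model router: send short, simple queries to a fast model and
# longer, multi-step prompts to a stronger one. The word-count threshold,
# marker list, and model names are purely illustrative.
FAST_MODEL, HEAVY_MODEL = "small-fast-model", "frontier-model"

def route(query: str) -> str:
    complex_markers = ("compare", "plan", "step by step", "why")
    is_complex = (
        len(query.split()) > 25
        or any(m in query.lower() for m in complex_markers)
    )
    return HEAVY_MODEL if is_complex else FAST_MODEL

print(route("weather today"))                       # small-fast-model
print(route("Compare these three laptops for me"))  # frontier-model
```

Real routers typically use a learned classifier rather than keywords, but the cost structure is the same: most traffic takes the cheap path, and only complex prompts pay for the heavier model.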

Why This Matters

A version of AI Mode that personalizes answers using opt-in Gmail context is still not available and doesn’t have a public timeline.

In the meantime, people appear to be compensating by typing more context into their queries. If that becomes the norm, it may push publishers toward satisfying longer, more situation-specific questions.

Looking Ahead

While AI Mode is still in its early stages, the 75 million daily active users figure suggests it’s large enough to warrant monitoring for visibility.


Featured Image: Jackpress/Shutterstock

Google Gemini 3 Flash Becomes Default In Gemini App & AI Mode via @sejournal, @MattGSouthern

Google released Gemini 3 Flash, expanding its Gemini 3 model family with a faster model that’s now the default in the Gemini app.

Gemini 3 Flash is also rolling out globally as the default model for AI Mode in Search.

The release builds on Google’s recent Gemini 3 rollout, which introduced Gemini 3 Pro in preview and also announced Gemini 3 Deep Think as an enhanced reasoning mode.

What’s New

Gemini 3 Flash replaces Gemini 2.5 Flash as the default model in the Gemini app globally, which means free users get the Gemini 3 experience by default.

In Search, Gemini 3 Flash is rolling out globally as AI Mode’s default model starting today.

For developers, Gemini 3 Flash is available in preview via the Gemini API, including access through Google AI Studio, Google Antigravity, Vertex AI, Gemini Enterprise, plus tools such as Gemini CLI and Android Studio.

Pricing

Gemini 3 Flash pricing is listed at $0.50 per million input tokens and $3.00 per million output tokens on Google’s Gemini API pricing documentation.

On the same pricing page, Gemini 2.5 Flash is listed at $0.30 per million input tokens and $2.50 per million output tokens.

Google says Gemini 3 Flash uses 30% fewer tokens on average than Gemini 2.5 Pro for typical tasks, and cites third-party benchmarking for a “3x faster” comparison versus 2.5 Pro.
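Using the listed rates, the per-workload cost difference is straightforward to estimate. A minimal sketch (rates hardcoded from the figures above; the token counts are hypothetical):

```python
# Estimate API cost from token counts, using the per-million-token
# (input, output) rates quoted above. Example token volumes are made up.
RATES = {
    "gemini-3-flash": (0.50, 3.00),
    "gemini-2.5-flash": (0.30, 2.50),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# Example workload: 10M input + 2M output tokens per month.
print(cost_usd("gemini-3-flash", 10_000_000, 2_000_000))    # 11.0
print(cost_usd("gemini-2.5-flash", 10_000_000, 2_000_000))  # 8.0
```

Note the listed rates make 3 Flash more expensive per token than 2.5 Flash; Google’s efficiency claim (30% fewer tokens) is relative to 2.5 Pro, not 2.5 Flash, so actual cost comparisons depend on your workload.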

Why This Matters

The default language model in the Gemini app has changed, and users have access at no extra cost.

If you build on Gemini, Gemini 3 Flash offers a new option for high-volume workflows, priced well below Pro-tier rates.

Looking Ahead

Gemini 3 Flash is rolling out now. In Search, Gemini 3 Pro is also available in the U.S. via the AI Mode model menu.

Google AI Mode & AI Overviews Cite Different URLs, Per Ahrefs Report via @sejournal, @MattGSouthern

Google’s AI Mode and AI Overviews can produce answers with similar meaning while citing different sources, according to new data from Ahrefs.

The report, published on the Ahrefs blog, analyzed September 2025 U.S. data from Ahrefs’ Brand Radar tool and compared AI Mode and AI Overview responses for the same queries.

The authors looked at 730,000 query pairs for content similarity and 540,000 query pairs for citation and URL analysis.

What The Study Found

Ahrefs reports that AI Mode and AI Overviews cited the same URLs only 13% of the time. When comparing only the top three citations in each response, overlap increased to 16%.

The language in the responses also varied. Ahrefs reports 16% overlap in unique words and states that AI Mode and AI Overviews share the exact same first sentence only 2.5% of the time.

Ahrefs reported strong semantic alignment, with an average semantic similarity score of 86%, and 89% of response pairs scoring above 0.8 on a scale where 1.0 indicates identical meaning.
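Ahrefs doesn’t publish its exact methodology, but a citation-overlap figure like the 13% above can be approximated with a set comparison per query. An illustrative sketch (the metric choice and example URLs are assumptions, not Ahrefs’ actual method):

```python
# Rough sketch of a citation-overlap metric for one query: the share of
# URLs cited by both features, out of all URLs cited by either (Jaccard).
def citation_overlap(ai_mode_urls: list[str], overview_urls: list[str]) -> float:
    a, b = set(ai_mode_urls), set(overview_urls)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

pair = citation_overlap(
    ["site1.com/a", "site2.com/b", "site3.com/c"],
    ["site1.com/a", "site4.com/d"],
)
print(round(pair, 2))  # 0.25  (1 shared URL out of 4 unique)
```

Averaging this score over hundreds of thousands of query pairs is how a study-level overlap percentage would be produced.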

Despina Gavoyannis, Senior SEO Specialist at Ahrefs, writes:

“Put simply: 9 out of 10 times, AI Mode and AI Overview agreed on what to say. They just said it differently and cited different sources.”

Different Source Preferences

Ahrefs reports differences in which websites and content types each feature tends to cite.

For example, Wikipedia appears in 28.9% of AI Mode citations compared to 18.1% in AI Overviews. The data also finds that AI Mode cited Quora 3.5x more often and cited health sites at roughly double the rate of AI Overviews.

AI Overviews, by contrast, leaned more heavily on video content. YouTube was the most frequently cited source for AI Overviews, whereas Reddit was cited at similar rates in both AI Mode and AI Overviews.

Ahrefs also reports that AI Overviews cited videos and core pages (such as homepages) nearly twice as often as AI Mode. At the same time, both features showed a strong preference for article-format pages overall.

Entity And Brand Mentions

Ahrefs found AI Mode responses were about four times longer than AI Overviews on average and included more entities.

In the dataset, AI Mode averaged 3.3 entity mentions per response compared to 1.3 for AI Overviews. Approximately 61% of the time, AI Mode included all entities mentioned in the AI Overview response and then added additional entities.

Many responses didn’t include brands or entities. Ahrefs reports that 59.41% of AI Overview responses and 34.66% of AI Mode responses contained no mentions of persons or brands, which the authors associate with informational queries in which named entities are not typically part of the answer.

Citation Gaps

The data finds that AI Mode was more likely to include citations than AI Overviews.

Only 3% of AI Mode responses lacked sources, compared to 11% of AI Overviews. Ahrefs reports that missing citations typically occur in cases such as calculations, sensitive queries, help center redirects, or unsupported languages.

Why This Matters

This report suggests that AI Mode and AI Overviews can differ in the sources they credit, even when they reach similar conclusions for the same query.

For monitoring purposes, this can affect how you interpret “visibility” across experiences. A citation (or a mention) in AI Overviews does not necessarily imply you will be cited in AI Mode for the same query, and AI Mode’s longer responses may include additional entities and competitors compared to the shorter AI Overview format.

Google’s documentation states that both AI Overviews and AI Mode may use “query fan-out,” which issues multiple related searches across subtopics and data sources while a response is being generated.
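Conceptually, query fan-out means expanding one query into several related subqueries, gathering results for each, and merging them. An illustrative sketch (all functions and the expansion list are hypothetical, not Google’s implementation):

```python
# Illustrative "query fan-out": expand one query into related subqueries,
# run each, and merge results with de-duplication across subqueries.
def fan_out(query: str, search_fn, expansions: list[str]) -> list[str]:
    results: list[str] = []
    seen: set[str] = set()
    for sub in [query, *[f"{query} {e}" for e in expansions]]:
        for url in search_fn(sub):
            if url not in seen:  # keep each URL once across all subqueries
                seen.add(url)
                results.append(url)
    return results

# Toy search function returning one canned URL per subquery.
fake_search = lambda q: ["example.com/" + q.replace(" ", "-")]
merged = fan_out("best running shoes", fake_search, ["for beginners", "reviews"])
print(len(merged))  # 3
```

Because each feature may expand and merge differently, two systems running fan-out over the same query can easily surface different supporting URLs, which is consistent with the low overlap Ahrefs observed.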

Google also notes that AI Mode and AI Overviews may use different models and techniques, so the responses and links they display will vary.

Looking Ahead

Ahrefs notes this analysis compares single generations of AI Mode and AI Overview responses. In related research, Ahrefs reported that 45.5% of AI Overview citations change when AI Overviews update, suggesting that overlap can appear different across repeated runs.

Even with that caveat, the low overlap observed in this dataset indicates that AI Mode and AI Overviews frequently select different URLs as supporting sources for the same query.


Featured Image: hafakot/Shutterstock

Why Your AI Agent Keeps ‘Hallucinating’ (Hint: It’s Your Data, Not The AI) via @sejournal, @purnavirji

If it looks like an AI hallucination problem, and sounds like an AI hallucination problem, it’s probably a data hygiene problem.

I’ve sat through dozens of demos this year where marketing leaders show me their shiny new AI agent, ask it a basic question, and watch it confidently spit out information that’s either outdated, conflicting, or flat-out wrong.

The immediate reaction is to blame the AI: “Oh, sorry the AI hallucinated. Let’s try something different.”

But was it really the AI hallucinating?

Don’t shoot the messenger, as the saying goes. While the AI is the messenger bringing you what looks like inaccurate data or hallucination, it’s really sending a deeper message: Your data is a mess.

The AI is simply reflecting that mess back to you at scale.

The Data Crisis Hiding Behind “AI Hallucinations”

An Adverity study found that 45% of marketing data is inaccurate.

Almost half of the data feeding your AI systems, your reporting dashboards, and your strategic decisions is wrong. And we wonder why AI agents give vague answers, contradict themselves, or pull messaging that no one’s used since 2022.

Here’s what I see in nearly every enterprise:

  • Three teams operating with three different definitions of ideal customer profile (ICP).
  • Marketing defines “conversion” one way, sales defines it another.
  • Buyer data scattered across six systems that barely acknowledge each other’s existence.
  • A battlecard last updated in 2019 still floating around, treated like gospel by your AI agent.

When your foundational data argues with itself, AI doesn’t know which version to believe. So it picks one. Sometimes correctly. Often not.

Why Clean Data Matters More Than Smart AI

AI isn’t magic. It reflects whatever you feed it: the good, the bad, and the three-years-outdated.

Everyone wants the “build an agent” sexy moment. The product demo that has everyone applauding. The efficiency gains that guarantee a great review, heck, maybe even a raise.

But the thing that makes AI useful is the boring, unsexy, foundational work of data discipline.

I’ve watched companies spend six figures on AI infrastructure while their product catalog still has duplicate entries from a 2021 migration. I’ve seen sales teams adopt AI coaching tools while their CRM defines “qualified lead” three different ways depending on which region you ask.

The AI works exactly as designed. The problem is what it’s designed to work with.

If your system is messy, AI can’t clean it up (at least, not yet). It amplifies the mess at scale, across every interaction. As much as we would like for it to, even the sexiest AI model in the world won’t save you if your data foundation is broken.

The Real Cost Of Bad Data Hygiene

When your data is inaccurate, inconsistent, or outdated, mistakes are inevitable. These can get risky quickly, especially if they negatively impact customer experience or revenue.

Here’s what that looks like in practice:

Your sales agent gives prospects pricing that changed six months ago because nobody updated the product sheet it’s trained on.

Your content generation tool pulls brand messaging from 2020 because the 2026 messaging framework lives in a deck on someone’s desktop.

Your lead scoring AI uses ICP criteria that marketing and sales never agreed on, so you’re nurturing the wrong prospects while ignoring the right ones.

Your sales enablement agent recommends a case study for a product you discontinued last quarter because nobody archived the old collateral.

This is happening every single week in enterprises that have invested millions in AI transformation. And most teams don’t even realize it until a customer or prospect points it out.

Where To Start: 5 Steps To Fix Your Data Foundation

The good news: You don’t need a massive transformation initiative to fix this. You need discipline and ownership.

1. Audit What Your AI Can Actually See

Before you can fix your data problem, you need to understand its scope.

Pull every document, spreadsheet, presentation, and database your AI systems have access to. Don’t assume. Actually look.

You’ll more than likely find:

  • Conflicting ICP definitions across departments.
  • Outdated pricing from previous years.
  • Messaging from three rebrand cycles ago.
  • Competitive intel that no longer reflects market reality.
  • Case studies for products you no longer sell.

Retire what’s wrong. Update what’s salvageable. Be ruthless about what stays and what goes.

2. Create One Source Of Truth

This is non-negotiable. Pick one system for every definition that matters to your business:

  • ICP criteria.
  • Conversion stage definitions.
  • Territory assignments.
  • Product positioning.
  • Competitive differentiators.

Everyone pulls from it. No exceptions. No “but our team does it differently.”

When marketing and sales use different definitions, your AI can’t arbitrate. It picks one randomly. Sometimes it picks both and contradicts itself across interactions.

One source of truth eliminates that chaos.

3. Set Expiration Dates For Everything

Every asset your AI can access should have a “valid until” date.

Battlecards. Case studies. Competitive intelligence. Messaging frameworks. Product specs.

When it expires, it automatically disappears from AI access. No manual cleanup required. No hoping someone remembers to archive old content.

Stale data is worse than no data. With no data, your AI at least admits it doesn’t know. With stale data, it confidently delivers wrong information.

4. Test What Your AI Actually Knows

Don’t assume your AI is working correctly. Test it.

Ask basic questions:

  • “What’s our ICP?”
  • “How do we define a qualified lead?”
  • “What’s our current pricing for [product]?”
  • “What differentiates us from [competitor]?”

If the answers conflict with what you know is true, you just found your data hygiene problem.

Run these tests monthly. Your business changes. Your data should change with it.

5. Assign Someone To Own It

Data discipline without ownership is a Slack thread that goes nowhere.

One person needs to be explicitly responsible for maintaining your source of truth. Not as an “additional responsibility.” As a core part of their role.

This person:

  • Reviews and approves all updates to the source of truth.
  • Sets and enforces expiration dates for assets.
  • Runs monthly audits of what AI can access.
  • Coordinates with teams to retire outdated content.
  • Reports on data quality metrics.

Without ownership, your data hygiene initiative dies in three months when everyone gets busy with other priorities.

The Bottom Line: Foundation Before Flash

If you don’t fix the mess, AI will scale the mess.

Deploying powerful AI on top of chaotic data is, at best, inefficient; at worst, it can actively damage your brand, your customer relationships, and your competitive position.

You can have the most sophisticated AI model in the world. The best prompts. The most expensive infrastructure. None of it matters if you’re feeding it garbage. It takes a disciplined foundation to make it work.

It’s like seeing someone with perfectly white teeth and thinking they just got lucky. What you don’t see is the daily flossing, the regular dental cleanings, the discipline of avoiding sugar and brushing twice a day for years.

Or watching an Olympic athlete make a performance look effortless. You’re not seeing the 5 a.m. training sessions, the strict diet, the thousands of hours of practice that nobody applauds.

The same applies to AI.

To get real value and ROI from AI, start with setting it up for success with the right data foundation. Yes, it might not be the most glamorous or exciting work. But it is what makes the glamorous and exciting possible.

Remember, your AI isn’t hallucinating. It’s telling you exactly what your data looks like.

The question is: Are you ready to fix it?



Featured Image: BestForBest/Shutterstock

WooCommerce Is Integrating Agentic AI Capabilities via @sejournal, @martinibuster

WooCommerce announced that it will roll out integration with Stripe’s Agentic Commerce Suite, which will enable AI shopping assistants to conduct transactions.

Agentic AI Shopping

Agentic AI may seem a long way off, but OpenAI already supports end-to-end shopping, from the discovery and comparison stages to completing purchases. With the rollout in WooCommerce, the infrastructure will be in place to enable over four million stores to accept product browsing and payments through AI agents.

Stripe Agentic Commerce Suite

Stripe’s Agentic Commerce Suite uses the Agentic Commerce Protocol (ACP), an open source protocol jointly created by Stripe and OpenAI. ACP is model agnostic and does not lock in users to any particular payment provider.

ACP is compatible with the Model Context Protocol (MCP), which Anthropic created initially for connecting AI models to external data. The significance is that MCP enables models to call APIs, retrieve data, and perform actions.

According to the official WooCommerce announcement:

“WooCommerce is proud to be a launch partner. Woo merchants will be among the first to benefit when Agentic Commerce Suite rolls out in the coming months.

This is a significant moment for WooCommerce merchants. Instead of building custom integrations for every new AI shopping assistant or platform, you’ll be able to connect your product catalog once and reach customers shopping through whichever AI agent they prefer. Stripe handles discovery, checkout, payments, and fraud protection, while you continue using your existing WooCommerce + Stripe stack.”

This represents a step toward putting the necessary infrastructure in place to enable consumers to interact with AI as part of a new shopping experience. The very near future may see a dramatic change in shopping habits, something SEOs and merchants will have to consider.

Featured Image by Shutterstock/TarikVision

Google Updates Search Live With Gemini Model Upgrade via @sejournal, @martinibuster

Google has updated Search Live with Gemini 2.5 Flash Native Audio, upgrading how voice functions inside Search while also extending the model’s use across translation and live voice agents. The update introduces more natural spoken responses in Search Live and reflects Google’s effort to treat voice as a core interface: users can get everything they can get from regular search, ask questions about the physical world around them, and receive immediate voice translations between two people speaking different languages.

The updated voice capabilities, rolling out this week in the United States, will make Google’s voice responses sound more natural, and responses can even be slowed down for instructional content.

According to Google:

“When you go Live with Search, you can have a back-and-forth voice conversation in AI Mode to get real-time help and quickly find relevant sites across the web. And now, thanks to our latest Gemini model for native audio, the responses on Search Live will be more fluid and expressive than ever before.”

Broader Gemini Native Audio Rollout

This Search upgrade is part of a broader update to Gemini 2.5 Flash Native Audio rolling out across Google’s ecosystem, including Gemini Live (in the Gemini app), Google AI Studio, and Vertex AI. The model processes spoken audio in real time and produces fluid spoken responses, reducing friction in live interactions. Although Google’s announcement didn’t say whether the model is a speech-to-speech model (as opposed to speech-to-text followed by text-to-speech), this update follows Google’s October announcement of Speech-to-Retrieval (S2R), described as “a neural network-based machine-learning model trained on large datasets of paired audio queries.”

These changes show Google treating native audio as a core capability across consumer-facing products, making it easier for users to ask and receive information about the physical world around them in a natural manner that wasn’t previously possible.

Improvements For Voice-Based Systems

For developers and enterprises building voice-based systems, Google says the updated model improves reliability in several areas. Gemini 2.5 Flash Native Audio more consistently triggers external functions during conversations, follows complex instructions, and maintains context across multiple turns. These improvements make live voice agents more dependable in real-world workflows, where misinterpreted instructions or broken conversational flow reduce usability.

Smooth Conversational Translation

Beyond Search and voice agents, the update introduces native support for “live speech-to-speech translation.” Gemini translates spoken language in real time, either by continuously translating ambient speech into a target language or by handling conversations between speakers of different languages in both directions. The system preserves vocal characteristics such as speech rhythm and emphasis, supporting translation that sounds smoother and conversational.

Google highlights several capabilities supporting this translation feature, including broad language coverage, automatic language detection, multilingual input handling, and noise filtering for everyday environments. These features reduce setup friction and allow translation to occur passively during conversation rather than through manual controls. The result is a translation experience that behaves much like an actual person in the middle translating between two people.

Voice Search Realizing Google’s Aspirations

The update reflects Google’s continued iteration of voice search toward an ideal that was originally inspired by the science fiction voice interactions between humans and computers in the popular Star Trek television and movie series.

Read More:

  • Google Announces A New Era For Voice Search
  • You can now have more fluid and expressive conversations when you go Live with Search
  • Improved Gemini audio models for powerful voice interactions
  • Gemini Live
  • 5 ways to get real-time help by going Live with Search

Featured Image by Shutterstock/Jackbin

How People Use Copilot Depends On Device, Microsoft Says via @sejournal, @MattGSouthern

How people use Microsoft Copilot depends on whether they’re at a desk or on their phone.

That is the core theme in the company’s analysis of 37.5 million Copilot conversations sampled between January and September.

The research examines consumer Copilot usage patterns across device types and time of day. The authors say they used machine-based classifiers to categorize conversations by topic and intent without any human review of the messages.

What The Report Says

On mobile, Health and Fitness is the most common topic throughout the day.

The authors summarize the split this way:

“On mobile, health is the dominant topic, which is consistent across every hour and every month we observed, with users seeking not just information but also advice.”

Desktop usage follows a different rhythm. Technology leads as the top topic overall, but the researchers report that work-related conversations rise during business hours.

They describe “three distinct modes of interaction: the workday, the constant personal companion, and the introspective night.”

During the workday, the paper says:

  • Between 8 a.m. and 5 p.m., “Work and Career” overtakes “Technology” as the top topic on desktop.
  • Education and science topics rise during business hours compared to nighttime.

Outside business hours, the paper describes a shift toward more personal and reflective topics. For example, it reports that “Religion and Philosophy” rises in rank during late-night hours through dawn.

Programming conversations are more common on weekdays, while gaming rises on weekends. They also note a spike in relationship conversations on Valentine’s Day.

Methodology Caveats

A few limitations are worth keeping in mind.

This is a preprint, so it hasn’t been peer reviewed. It also focuses on consumer Copilot usage and excludes enterprise-authenticated traffic, so it doesn’t describe how Copilot is used inside Microsoft 365 at work.

Finally, the topic and intent labels come from automated classifiers, which means the results reflect how Microsoft’s system groups conversations, not a human-coded review.

Why This Matters

This paper suggests that the use of AI chatbots varies with context. The researchers describe mobile behavior as consistently health-oriented, while desktop behavior is more tied to the workday.

The researchers connect the mobile health pattern to how people use their phones. They write:

“This suggests a device-specific usage pattern where the phone serves as a constant confidant for physical well-being, regardless of the user’s schedule.”

The big takeaway is that “Copilot usage” is not one uniform behavior. Device and time of day appear to shape what people ask for, and how they ask it.

Looking Ahead

Enterprise usage patterns may look different, especially inside Microsoft 365. Any follow-up research that includes workplace contexts, or that validates these patterns outside Microsoft’s own tooling and taxonomy, would help clarify how broadly these findings apply.

Well-Known SEO Explains Why AI Agents Are Coming For You & What To Do Now via @sejournal, @theshelleywalsh

I’m carefully watching the development of agentic SEO because I believe that, as capabilities improve over the next few years, agents will have a significant impact on the industry. I’m not suggesting this will be a seamless replacement of talent with a highly capable machine intelligence. There is going to be a lot of trial and error, but I do think we are going to see radical shifts in how the online space operates, not unlike how automation transformed manufacturing.

Marie Haynes has long been a well-known expert in the industry who shared her learnings on E-E-A-T and Google’s algorithm through her popular Search News You Can Use newsletter.

A few years ago, Marie made the decision to retire her SEO agency and went all in on learning AI systems, as she believes we’re at the beginning of a profound transformation.

Marie wrote a recent article, “Hype or not, should you be investing in AI agents?” about what SEOs need to understand about this rapidly developing space. So, I invited her to IMHO to dive more into this topic.

Marie believes AI will radically change our world for the better, and she believes every business will have AI agents.

You can watch the full interview with Marie on the IMHO recording at the end, or continue reading the article summary.

“The idea that we optimize for appearing as one of the 10 blue links on Google is already gone.”

Experimenting With Gemini Gems

Marie’s practical advice for anyone wanting to understand agents is to start with Gems:

“If you take one thing from this conversation, it’s to try to create some Gemini Gems,” Marie emphasized. “Eventually I’m fairly certain that these gems will morph into agentic workflows.”

To illustrate, she shared a process she called her “originality Gem,” which contains a 500+ word prompt that captures how she evaluates content, along with examples of truly original content in its knowledge base.

“We’re not far from the day where all of my processes that I do for SEO can be handled by agentic workflows that occasionally pull on me for some advice,” Marie said.

The Power Of Chaining Agents

The next progression and real potential come from chaining agents together to create agentic workflows.

The opportunity this creates is that we can use our knowledge and experience to teach AI, like a team of assistants, to do the work that can be automated.

We would then orchestrate the process and, like a conductor, sit and guide the agents to perform the work as we become the human-in-the-loop to review the output.

Once we have downloaded our knowledge to the agents, and the systems work, we can scale ourselves to handle exponentially more clients.

“Instead of me handling just a small handful of clients, all of a sudden I could have a hundred clients and do the same work because it’s all going through my workflow,” Marie said.

The challenge here is the skill in prompting the agents and constructing them to achieve the desired output.

“The future of our industry is not about optimizing for an engine, but about acting as the interface between businesses and technology, and we will be the human experts who teach, guide, and implement AI agents.”

Why Gemini Over ChatGPT

I asked Marie why she focuses on Gemini over ChatGPT, and her response was based on futureproofing: “The main reason why I use Gemini is not to accomplish things today, but to grow my skills in what’s coming tomorrow.”

Marie went on to explain that “Google’s got a whole ecosystem that you can see it coming together like right now,” and she believes that Google will be the winner in the AI race.

“I think that Google is going to win the game. I think it’s always been their game to win. So I make it a point to use Gemini as much as I can.”

Transformations Will Follow The Money

Marie’s prediction for the next few years is for workflows to become embedded. “Sundar Pichai, CEO of Google, said this way back in March, that, in two to four years, every agentic workflow will be deeply embedded into our day-to-day work.”

However, she thinks the real transformations will come when businesses start making money from agentic workflows.

“It’s wild how many trillions of dollars are being spent on developing AI, yet there’s not a whole lot of financial output at this point,” Marie noted, referencing a McKinsey study showing 95% of businesses using AI aren’t making money from it yet [Editor’s note: McKinsey was 80%; MIT said 95%].

“It’s very similar to SEO. There was a day where there were just a small handful of people who figured out how to improve on Google. Once people started making good money from understanding SEO, there was a lot of attention. Tools were created and a whole industry popped up. I think that’s going to happen again. Will it be within the next 12 months? I don’t know. I feel like it might be a little bit longer.”

What SEOs Should Do Now

Overwhelm is a real issue to be aware of. With developments moving so quickly, there is a huge learning curve to essentially retrain, even for those working on this full-time.

Marie made a commitment when she went all in on AI research. “I made it my full-time job to stay on top of what’s happening, and even I get overwhelmed with all the stuff that’s happening with AI,” she explained.

Marie’s advice is to keep learning, keep trying things, and experiment with writing prompts.

“The next time you go to do a task, try to create an agent that would do this for you,” she suggested. Even if you don’t finish, you’ll learn skills for the next attempt.

Also, persevere instead of giving up at the first failure. “Try to figure out what they can do, instead of just telling everybody, ‘Oh, it can’t do this.’ Find ways you can use it.”

For development teams, she recommends vibe coding with tools like Google’s Antigravity or AI Studio. “You can deploy a whole website without even knowing any HTML,” Marie said.

She also advocates for deep research reports using either Gemini or ChatGPT to analyze how competitors are using AI, providing immediate value to clients while building skills.

The Future Of SEO

Marie referenced Sundar Pichai calling AI technology more profound than fire or electricity in its impact on society. Despite acknowledging her bias after investing significant time in understanding AI, she maintains there’s going to be societal disruption.

“Being able to understand what’s happening in the world and distill it down to what’s important to your clients will be a superpower,” she said. She does admit, though, that there is still a lot of learning, and many grey areas to move through, as we navigate the edge of technology.

“If you’re feeling lost, you’re not alone because imagine right now we’re sort of at the forefront of all of these changes happening.”

For those who do persevere, there will be significant rewards. Eventually, business owners will be clamoring for people who can explain AI and implement it. The professionals who develop these skills now will be extremely valuable in the future.

“The people who know how to use AI, know how to create agents, and know how to make money from AI are going to be extremely valuable in the future.”

Watch the full video interview with Marie Haynes here:

Thank you to Marie Haynes for offering her insights and being my guest on IMHO.

Featured Image: Shelley Walsh/Search Engine Journal

AI Overviews Changed Everything: How To Choose Link Building Services For 2026 via @sejournal, @EditorialLink

This post was sponsored by Editorial.Link. The opinions expressed in this article are the sponsor’s own.

“How do you find link-building services? You don’t, they find you,” goes the industry joke. Merely thinking about backlinks seems to be enough for dozens of pitches to hit your inbox.

However, most of them offer spammy links with little long-term value. Link farms, PBNs, the lot.

This type of saturated market makes it hard to find a reputable link building agency that can navigate the current AI-influenced search landscape.

That’s why we’ve put together this guide.

We’ll share a set of steps that will help you vet link providers so you can find a reliable partner that will set you up for success in organic and AI search.

1. Understand How AI-Driven Search Changes Link Building

Before you can vet an agency, you must understand how the “AI-influenced” landscape is different. Many agencies are still stuck in the old playbook, which includes chasing guest posts, Domain Rating (DR), and raw link volume.

Traditional Backlinks Remain Fundamental

A recent Ahrefs study found that 76.10% of pages cited in AI Overviews also rank in Google’s top 10 results, and 73% of participants in an Editorial.Link survey believe backlinks affect visibility in AI search.

However, the signals of authority are evolving:

When vetting a service for AI-driven search, your criteria must shift from “How many links can you get?” to “Can you build discoverable authority that earns citations?”

This means looking for agencies that build your niche authority through tactics like original data studies, digital PR, and expert quotes, not just paid posts.

2. Verify Their Expertise and AI-Search Readiness

The first test is simple: do they practice what they preach?

Check Their Own AI & Search Visibility

Check the agency’s rankings in organic and AI search for major keywords in their sector.

Let’s say you want to vet Editorial.Link. If you search for “best link building services,” you will find it is one of the link providers listed in the AI Overviews.

Screenshot of Google’s AI Overviews, November 2025

An agency isn’t necessarily a poor fit just because it doesn’t rank high; some services thrive on referrals and don’t focus on their own SEO.

However, if they do rank, that’s a major green flag. SEO is a highly competitive niche; ranking their own website demonstrates the expertise to deliver similar results for you.

Ensure Their Tactics Build Citation-Worthy Authority

A modern agency’s strategy should focus on earning citations.

Ask them these questions to see whether they’ve adapted:

  • Do they talk about AI visibility, citation tracking, or brand mentions?
  • Do they build links through original data studies, digital PR, and expert quotes?
  • Can they show examples of clients featured in AI Overviews, ChatGPT, or Perplexity answers?
  • Can they help you get a link from top listicles in your niche? Ahrefs’ data shows “Best X” list posts dominated the field, making up 43.8% of all pages referenced in the responses, with a huge gap between them and every other format. You can find relevant listicles in your niche using free services, like listicle.com.

Screenshot of Listicle, November 2025

3. Scrutinize Their Track Record Via Reviews, Case Studies & Link Samples

Past performance is a strong indicator of future results.

Analyze Third-Party Reviews

Reviews on independent platforms like Clutch, Trustpilot, or G2 reveal genuine client sentiment better than hand-picked testimonials on a website.

When studying reviews, look for:

  • Mentions of real campaigns or outcomes.
  • Verified client names or company profiles.
  • Recent activity (new reviews indicate a steady flow of new business).
  • The total number of reviews (the more there are, the more representative they become).
  • Patterns in negative reviews and how the agency responds to them.

Screenshot of Editorial.Link’s profile on Clutch, November 2025

Dig Into Their Case Studies

Case studies and customer stories offer proof of concept and provide insights into their processes, strategies, and industry fit.

While case studies with named clients are ideal, some top-tier agencies are bound by client NDAs for competitive reasons. Be wary if all their examples are anonymous and vague, but don’t dismiss a vendor just for protecting client confidentiality.

If the clients’ names are provided, don’t take any figures at face value.

Use an SEO tool to examine their link profiles. If you know the campaign’s timeframe, zero in on that period to see how many links they acquired, their quality, and their relevance.

Screenshot of Thrive Internet Marketing, November 2025
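To make the timeframe check concrete, here is a minimal sketch of filtering a backlink export down to a campaign window. The field names (`url`, `first_seen`, `dr`) are illustrative assumptions, not any particular SEO tool’s actual export schema.

```python
from datetime import date

# Hypothetical backlink export rows; a real tool's CSV export would carry
# similar fields, but these names and values are invented for illustration.
backlinks = [
    {"url": "https://example-blog.com/post", "first_seen": date(2025, 3, 12), "dr": 72},
    {"url": "https://example-news.com/story", "first_seen": date(2025, 7, 4), "dr": 55},
    {"url": "https://example-dir.com/listing", "first_seen": date(2024, 11, 2), "dr": 18},
]

def links_in_window(rows, start, end):
    """Keep only links first seen during the campaign window."""
    return [r for r in rows if start <= r["first_seen"] <= end]

# Zero in on the claimed campaign period to see what was actually acquired.
campaign = links_in_window(backlinks, date(2025, 1, 1), date(2025, 12, 31))
print(len(campaign))  # → 2
```

From there, you would eyeball the surviving rows for quality and relevance rather than relying on the raw count alone.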

Audit Their Link Quality

Inspecting link quality is the ultimate litmus test.

An agency’s theoretical strategy doesn’t matter if its final product is spam. Ask for 3–5 examples of links they have built for recent clients.

Once you have the samples, don’t just look at the linking site’s DR. Audit them with this checklist:

  • Editorial relevance: Is the linking page topically relevant to the target page?
  • Site authority & traffic: Does the linking website have real, organic traffic?
  • Placement & context: Is the link placed editorially within the body of an article?
  • AI-citation worthiness: Is this an authoritative site Google AI Overview, ChatGPT, or Perplexity would cite (e.g., a reputable industry publication or a data-driven report)?
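The checklist above can be turned into a simple pass/fail audit. This is a sketch under stated assumptions: the record fields and the thresholds (e.g., 1,000 monthly organic visits) are illustrative choices, not industry standards.

```python
# Each check mirrors one checklist item; thresholds are illustrative assumptions.
CHECKS = {
    "editorial_relevance": lambda s: s["topically_relevant"],
    "site_traffic": lambda s: s["monthly_organic_traffic"] >= 1000,
    "editorial_placement": lambda s: s["placement"] == "in-body",
    "citation_worthy": lambda s: s["site_type"] in {"industry_publication", "data_report"},
}

def audit(sample):
    """Return which checklist items a link sample passes."""
    return {name: check(sample) for name, check in CHECKS.items()}

# Hypothetical audit record for one sample link an agency provided.
sample = {
    "topically_relevant": True,
    "monthly_organic_traffic": 4500,
    "placement": "in-body",
    "site_type": "industry_publication",
}
results = audit(sample)
print(all(results.values()))  # → True: the sample passes every check
```

Running every sample link through the same checks makes it easy to spot patterns, such as an agency whose links consistently fail the placement or relevance criteria.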

4. Evaluate Their Process, Pricing & Guarantees

A reliable link-building service is fully transparent about its process and what you’re paying for.

Look For A Transparent Process

Can you see what you’re paying for? A reliable service will outline its process or share a list of potential prospects before starting outreach.

Ask them for a sample report. Does it include anchor texts, website GEO, URLs, target pages, and publication dates? A vague “built 20 links” report doesn’t cut it.

Finally, check if they offer consulting services.

For example, can they help you choose target pages that will benefit from a link boost most?

Or are they just a link-placing service? The latter signals a lack of strategic expertise.

Analyze Their Pricing Model

Price is a direct indicator of quality.

Links offered at $100–$200 a pop are typically from PBNs or bulk guest posts, and frequently disappear within months.

Valuable backlinks from trusted sites cost significantly more: $508.95 on average, according to the Editorial.Link report.

Prospecting, outreach, content creation, and communication require substantial time and effort.

Reputable agencies work on one of two models:

  • Retainer model: A fixed monthly fee for a consistent flow of links.
  • Custom outreach: Tailored campaigns with flexible volume and pricing.

Scrutinize Their “Guarantees” For Red Flags

This is where unrealistic promises expose low-quality vendors.

A reputable digital PR agency, for example, won’t guarantee the number of earned links. The final result depends on how well a story resonates with journalists.

The same applies to “guaranteed DR or DA.” These metrics don’t directly affect rankings, and it’s impossible to guarantee which websites will pick up a story.

Choosing A Link Building Partner For The AI Search Era

Not all link-building services have the necessary expertise to help you build visibility in the age of AI search.

When choosing your link-building partner, look for a proven track record, transparency, and adaptability.

A service with a strong search presence, demonstrable results, and a focus on AI visibility is a safer bet than one making unsubstantiated claims.

Image Credits

Featured Image: Image by Editorial.Link. Used & modified with permission.

In-Post Images: Image by Editorial.Link. Used with permission.

Google Hit By EU Probe Into Unfair Use Of Online Content via @sejournal, @martinibuster

The European Commission has launched an antitrust inquiry into Google to determine whether the company has violated EU competition rules, partly focusing on whether Google has used creator and publisher content in ways that leave publishers unable to refuse such use without risking their search traffic. It is also looking into whether Google is granting itself privileged access to YouTube content for AI in a way that leaves competitors at a disadvantage.

How Google’s Terms May Pressure Publishers and Creators

The Commission is focusing on how publisher content is used by AI Overviews and AI Mode to generate answers, without compensation for publishers and without a way for them to opt out of having their content used to generate summaries.

They write:

“The Commission will investigate to what extent the generation of AI Overviews and AI Mode by Google is based on web publishers’ content without appropriate compensation for that, and without the possibility for publishers to refuse without losing access to Google Search. Indeed, many publishers depend on Google Search for user traffic, and they do not want to risk losing access to it.”

This raises concerns that Google may be using publisher content in its AI products without offering a workable opt-out, leaving publishers who rely on Search traffic with little choice but to accept this use.

Use of YouTube Content to Train Google’s AI Models

The Commission is also examining Google’s use of YouTube videos and other creator content for training its generative AI models. According to the announcement, creators “have an obligation to grant Google permission to use their data for different purposes, including for training generative AI models,” and cannot upload content while withholding that permission. Google provides no payment for this use while blocking rival AI developers from training on YouTube content under YouTube’s policies.

This mix of mandatory access for Google, limits on competitors, and no payment for creators underpins the Commission’s concern that Google may be giving itself preferred access to YouTube content in a way that may harm the wider AI market.

The Commission has notified Google that it has opened an investigation into whether the company has breached EU competition rules prohibiting the abuse of a dominant position.

Featured Image by Shutterstock/Mo Arbid