New Yahoo Scout AI Search Delivers The Classic Search Flavor People Miss via @sejournal, @martinibuster

Yahoo has announced Yahoo Scout, a new AI-powered answer engine now available in beta to users in the United States, providing a clean Classic Search experience with the power of personalized AI. The launch also includes the Yahoo Scout Intelligence Platform, which brings AI features across Yahoo’s core products, including Mail, News, Finance, and Sports.

Screenshot Of Yahoo Scout

Yahoo’s Existing Products and User Reach

Yahoo’s announcement states that it operates some of the most popular websites and services in the United States, reaching what the company says is 90% of all U.S. internet users (based on Comscore data) through its email, news, finance, and sports properties. The company says that Yahoo Scout builds on the foundation of decades of search behavior and user interaction data.

How Yahoo Scout Generates Answers

Yahoo has partnered with Anthropic to use the Claude model as the primary AI system behind Yahoo Scout. Yahoo’s announcement said it selected Claude for speed, clarity, judgment, and safety, which it described as essential qualities for a consumer-facing answer engine. Yahoo also continues its partnership with Microsoft by using Microsoft Bing’s grounding API, which connects AI-generated answers to information from across the open web. Yahoo said this approach ensures that answers are informed by authoritative sources rather than unsupported text generation.

According to Yahoo, Scout relies on a combination of traditional web search and generative AI to produce answers that are grounded using Microsoft Bing’s grounding API and informed by sources from across the open web.

According to Yahoo:

“It’s informed by 500 million user profiles, a knowledge graph spanning more than 1 billion entities, and 18 trillion consumer events that occur annually across Yahoo, which allow Yahoo Scout to provide effective and personalized answers and suggested actions.”

Yahoo’s announcement says that this data, its use of Claude, and its reliance on Bing for grounding work together to provide answers that are personalized and helpful for researching and making decisions in the “moments that matter” to people.

They explain:

“Yahoo Scout continues Yahoo’s focus on the moments that matter to people’s daily lives, such as understanding upcoming weather patterns before a vacation, getting details about an important game, tracking stock price movements after earnings, comparing products before buying, or fact-checking a news story.”

Where Yahoo Scout Appears Inside Yahoo Products

The Yahoo Scout Intelligence Platform embeds these AI capabilities directly into Yahoo’s existing services.

For example:

  • In Yahoo Mail, Scout supports AI-generated message summaries.
  • In Yahoo Sports, it produces game breakdowns.
  • In Yahoo News, it surfaces key takeaways.
  • In Yahoo Finance, Scout adds interactive tools for analysis that allow readers to explore market news and stock performance context through AI-powered questions.

According to Eric Feng, Senior Vice President and General Manager of Yahoo Research Group:

“Yahoo’s deep knowledge base, 30 years in the making, allows us to deliver guidance that our users can trust and easily understand, and will become even more personalized over the coming months. Yahoo Scout now powers a new generation of intelligence experiences across Yahoo, seamlessly integrated into the products people use every day.”

What Yahoo Says Comes Next

Yahoo said Scout will continue to develop over the coming months. Planned updates include deeper personalization, expanded capabilities within specific verticals, and new formats for search advertising designed to work in generative AI search. The company did not provide a timeline for when the beta period will end or when additional features will move beyond testing.

Yahoo explained:

“Yahoo Scout will continue to evolve in the months ahead, expanding to power new products across Yahoo. In particular, the new answer engine will become more personalized, will add new capabilities focused on deeper experiences within key verticals, and will introduce new, improved opportunities for search advertisers to effectively cross the chasm to generative AI search advertising.”

Yahoo’s Search Experience

Something that’s notable about Yahoo’s AI answer engine experience is how clean and straightforward it is. It’s like a throwback to classic search but with the sophistication of AI answers.

For example, I asked it where I could buy an esoteric version of a Levi’s trucker jacket in a specific color (Midnight Harvest), and it presented a clean summary of where to get it, along with a table of retailers ordered from the lowest price.

Screenshot Of Yahoo Scout

Notice that there are no product images? It just gives me the prices. I don’t know if that’s because there’s no product feed, but I already know what the jacket looks like in the color I specified, so images aren’t really necessary. This is what I mean when I say that Yahoo Scout offers that Classic Search flavor without the busy, overly fussy search experience Google has been providing lately.

With Yahoo Scout, the company is applying AI systems to tasks its users perform when they search for, read, or compare information online. Rather than positioning AI as a replacement for search or content platforms, Yahoo is using it as a tool that organizes, summarizes, and explains information in a clean, easy-to-read format.

Yahoo Scout is easy to like because it delivers the clean and uncluttered search experience that many people miss.

Check out Yahoo Scout at scout.yahoo.com.

The Yahoo Scout app is available for Android and Apple devices.

Google AI Overviews Now Powered By Gemini 3 via @sejournal, @MattGSouthern

Google is making Gemini 3 the default model for AI Overviews in markets where the feature is available and adding a direct path into AI Mode conversations.

The updates, shared in a Google blog post, bring Gemini 3’s reasoning capabilities to AI Overviews. Google says the feature now reaches over one billion users.

What’s New

Gemini 3 For AI Overviews

The Gemini 3 upgrade brings the same reasoning capabilities to AI Overviews that previously powered AI Mode.

Robby Stein, VP of Product for Google Search, wrote:

“We’re rolling out Gemini 3 as the default model for AI Overviews globally, so even more people will be able to access best-in-class AI responses, directly in the results page for questions where it’s helpful.”

Gemini 3 launched in November, and Google shipped it to AI Mode on release day. This expands Gemini 3 from AI Mode into AI Overviews as the default.

AI Overview To AI Mode Transition

You can now ask a follow-up question right from an AI Overview and continue into AI Mode. The context from the original response carries into the conversation, so you don’t start over.

Stein described the thinking behind the change:

“People come to Search for an incredibly wide range of questions – sometimes to find information quickly, like a sports score or the weather, where a simple result is all you need. But for complex questions or tasks where you need to explore a topic deeply, you should be able to seamlessly tap into a powerful conversational AI experience.”

He called the result “one fluid experience with prominent links to continue exploring.”

An earlier test of this flow ran globally on mobile back in December.

In testing, Google found people prefer this kind of natural flow into conversation. The company also found that keeping AI Overview context in follow-ups makes Search more helpful.

Why This Matters

The pattern has held since AI Overviews launched. Each update makes it easier to stay within AI-powered responses.

When Gemini 3 arrived in AI Mode, it brought deeper query fan-out and dynamic response layouts. AI Overviews running on the same model could produce different citation patterns.

That makes today’s update an important one to monitor. Model changes can affect which pages get cited and how responses are structured.

Looking Ahead

Google says the updates are rolling out starting today, though availability may vary by market.

Google previously indicated plans to add automatic model selection that routes complex questions to Gemini 3 while using faster models for simpler tasks. Whether that affects AI Overviews beyond today’s default model change isn’t specified.


Featured Image: Darshika Maduranga/Shutterstock

Sam Altman Says OpenAI “Screwed Up” GPT-5.2 Writing Quality via @sejournal, @MattGSouthern

Sam Altman said OpenAI “screwed up” GPT-5.2’s writing quality during a developer town hall Monday evening.

When asked about user feedback that GPT-5.2 produces writing that’s “unwieldy” and “hard to read” compared to GPT-4.5, Altman was blunt.

He said:

“I think we just screwed that up. We will make future versions of GPT 5.x hopefully much better at writing than 4.5 was.”

Altman explained that OpenAI made a deliberate choice to focus GPT-5.2’s development on technical capabilities:

“We did decide, and I think for good reason, to put most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing. And we have limited bandwidth here, and sometimes we focus on one thing and neglect another.”

How OpenAI Positioned Each Model

The contrast between GPT-4.5 and GPT-5.2 shows where OpenAI focused its resources.

When OpenAI introduced GPT-4.5 in February 2025, the company emphasized natural interaction and writing. OpenAI said interacting with GPT-4.5 “feels more natural” and called it “useful for tasks like improving writing.”

GPT-5.2’s announcement took a different direction. OpenAI positioned it as the most capable model series yet for professional knowledge work, with improvements in creating spreadsheets, building presentations, writing code, and handling complex, multi-step projects.

The release post spotlights spreadsheets, presentations, tool use, and coding. Writing appears more briefly, with technical writing noted as an improvement for GPT-5.2 Instant. But Altman’s comments suggest the overall writing experience still fell short for users comparing it to GPT-4.5.

Why This Matters

We’ve covered the iterative changes to ChatGPT since GPT-5 launched in August, including updates to warmth and tone and the GPT-5.1 instruction-following improvements. OpenAI regularly adjusts model behavior based on user feedback, and regressions in one area while improving another aren’t new.

What’s unusual is hearing Altman acknowledge a tradeoff this directly. For anyone using ChatGPT output in client-facing work, drafts, or polished writing, this explains why outputs may have changed. Model upgrades don’t guarantee improvement across every capability.

If you rely on ChatGPT for writing, treat model updates like any other dependency change. Re-test your prompts when defaults change, and keep a fallback if output quality matters for your workflow.

Looking Ahead

Altman said he believes “the future is mostly going to be about very good general purpose models” and that even coding-focused models should “write well, too.”

No timeline was given for when GPT-5.x writing improvements will ship. OpenAI typically iterates on model behavior through point releases, so changes could arrive gradually rather than in a single update.

Hear Altman’s full statement in the video below:


Featured Image: FotoField/Shutterstock

Why Google Gemini Has No Ads Yet: ‘Trust In Your Assistant’ via @sejournal, @MattGSouthern

Google DeepMind CEO Demis Hassabis said Google doesn’t have any current plans to introduce advertising into its Gemini AI assistant, citing unresolved questions about user trust.

Speaking at the World Economic Forum in Davos, Hassabis said AI assistants represent a different product than search. He believes Gemini should be built for users first.

“In the realm of assistants, if you think of the chatbot as an assistant that’s meant to be helpful and ideally in my mind, as they become more powerful, the kind of technology that works for you as the individual,” Hassabis said in an interview with Axios. “That’s what I’d like to see with these systems.”

He said no one in the industry has figured out how advertising fits into that model.

“There is a question about how does ads fit into that model, where you want to have trust in your assistant,” Hassabis said. “I think no one’s really got a full answer to that yet.”

When asked directly about Google’s plans, Hassabis said: “We don’t have any current plans to do it ourselves.”

What Hassabis Said About OpenAI

The comments came days after OpenAI said it plans to begin testing ads in ChatGPT in the coming weeks for logged-in adults in the U.S. on free and Go tiers.

Hassabis said he was “a little bit surprised they’ve moved so early into that.”

He acknowledged advertising has funded much of the consumer internet and can be useful to users when done well. But he warned that poor execution in AI assistants could damage user relationships.

“I think it can be done right, but it can also be done in a way that’s not good,” Hassabis said. “In the end, what we want to do is be the most useful we can be to our users.”

Search Is Different

Hassabis drew a line between AI assistants and search when discussing advertising.

When asked whether his comments applied to Google Search, where the company already shows ads in AI Overviews, he said the two products work differently.

“But there it’s completely different use case because you’ve already just like how it’s always worked with search, you’ve already, you know, we know what your intent is basically and so we can be helpful there,” Hassabis said. “That’s a very different construct.”

Google began rolling out ads in AI Overviews in October 2024 and has continued expanding them since. The company says AI Overviews monetize at a rate comparable to traditional search results.

Why This Matters

This is the second time in two months that a Google executive has said Gemini ads aren’t currently planned.

In December, Google Ads VP Dan Taylor disputed an Adweek report claiming the company had told advertisers to expect Gemini ads in 2026. Taylor called that report “inaccurate” and said Google has “no current plans” to monetize the Gemini app.

Hassabis’s comments reinforce that position but go further by explaining the reasoning. His “technology that works for you” framing suggests Google sees a tension between advertising and the assistant relationship it wants Gemini to build.

Looking Ahead

Google is comfortable expanding ads where user intent is explicit, like search queries triggering AI Overviews. The company is holding back where intent is less defined and the relationship is more personal.

How long Google maintains its current position depends in part on how users respond to advertising in rival assistants.


Featured Image: Screenshot from: youtube.com/@axios, January 2026. 

Why CFOs Are Cutting AI Budgets (And The 3 Metrics That Save Them) via @sejournal, @purnavirji

Every AI vendor pitch follows the same script: “Our tool saves your team 40% of their time on X task.”

The demo looks impressive. The return on investment (ROI) calculator backs it up, showing millions in labor cost savings. You get budget approval. You deploy.

Six months later, your CFO asks: “Where’s the 40% productivity gain in our revenue?”

You realize the saved time went to email and meetings, not strategic work that moves the business forward.

This is the AI measurement crisis playing out in enterprises right now.

According to Fortune’s December 2025 report, 61% of CEOs report increasing pressure to show returns on AI investments. Yet most organizations are measuring the wrong things.

There’s a problem with how we’ve been tracking AI’s value.

Why ‘Time Saved’ Is A Vanity Metric

Time saved sounds compelling in a business case. It’s concrete, measurable, and easy to calculate.

But time saved doesn’t equal value created.

Anthropic’s November 2025 research analyzing 100,000 real AI conversations found that AI reduces task completion time by approximately 80%. Sounds transformative, right?

What that stat doesn’t capture is the Jevons Paradox of AI.

In economics, the Jevons Paradox occurs when technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource rises rather than falls.

In the corporate world, this is the Reallocation Fallacy. Just because AI completes a task faster doesn’t mean your team is producing more value. It means they’re producing the same output in less time, but then filling that saved time with lower-value work. Think more meetings, longer email threads, and administrative drift.

Google Cloud’s 2025 ROI of AI report, surveying 3,466 business leaders, found that 74% report seeing ROI within the first year.

But dig into what they’re measuring, and it’s primarily efficiency gains, not outcome improvements.

CFOs understand this intuitively. That’s why “time saved” metrics don’t convince finance teams to increase AI budgets.

What does convince them is measuring what AI enables you to do that you couldn’t do before.

The Three Types Of AI Value Nobody’s Measuring

Recent research from Anthropic, OpenAI, and Google reveals a pattern: The organizations seeing real AI ROI are measuring expansion.

Three types of value actually matter:

Type 1: Quality Lift

AI doesn’t just make work faster; it makes good work better.

A marketing team using AI for email campaigns can send emails quicker. And they also have time to A/B test multiple subject lines, personalize content by segment, and analyze results to improve the next campaign.

The metric isn’t “time saved writing emails.” The metric is “15% higher email conversion rate.”

OpenAI’s State of Enterprise AI report, based on 9,000 workers across almost 100 enterprises, found that 85% of marketing and product users report faster campaign execution. But the real value shows up in campaign performance, not campaign speed.

How to measure quality lift:

  • Conversion rate improvements (not just task completion speed).
  • Customer satisfaction scores (not just response time).
  • Error reduction rates (not just throughput).
  • Revenue per campaign (not just campaigns launched).
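To make the distinction concrete, quality lift is a relative change in an outcome metric, not a time delta. A minimal Python sketch, using hypothetical campaign figures:

```python
# Quality lift is the relative change in the outcome metric
# (e.g., conversion rate), not time saved on the task.
def relative_lift(before: float, after: float) -> float:
    """Return the fractional improvement of `after` over `before`."""
    return (after - before) / before

# Hypothetical email conversion rates: 2.0% before AI assistance, 2.3% after.
conversion_lift = relative_lift(0.020, 0.023)
print(f"Conversion lift: {conversion_lift:.0%}")  # → Conversion lift: 15%
```

The same function works for any before/after outcome pair: error rates, revenue per campaign, or customer satisfaction scores.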

One B2B SaaS company I talked to deployed AI for content creation.

  • Their old metric was “blog posts published per month.”
  • Their new metric became “organic traffic from AI-assisted content vs. human-only content.”

The AI-assisted content drove 23% more organic traffic because the team had time to optimize for search intent, not just word count.

That’s quality lift.

Type 2: Scope Expansion (The Shadow IT Advantage)

This is the metric most organizations completely miss.

Anthropic’s research on how their own engineers use Claude found that 27% of AI-assisted work wouldn’t have been done otherwise.

More than a quarter of the value AI creates isn’t from doing existing work faster; it’s from doing work that was previously impossible within time and budget constraints.

What does scope expansion look like? It often looks like positive Shadow IT.

The “papercuts” phenomenon: Small bugs that never got prioritized finally get fixed. Technical debt gets addressed. Internal tools that were “someday” projects actually get built because a non-engineer could scaffold them with AI.

The capability unlock: Marketing teams doing data analysis they couldn’t do before. Sales teams creating custom materials for each prospect instead of using generic decks. Customer success teams proactively reaching out instead of waiting for problems.

Google Cloud’s data shows 70% of leaders report productivity gains, with 39% seeing ROI specifically from AI enabling work that wasn’t part of the original scope.

How to measure scope expansion:

  • Track projects completed that weren’t in the original roadmap.
  • Track the ratio of backlog items cleared by non-engineers.
  • Measure customer requests fulfilled that would have been declined due to resource constraints.
  • Document internal tools built that were previously “someday” projects.

One enterprise software company used this metric to justify its AI investment. It tracked:

  • 47 customer feature requests implemented that would have been declined.
  • 12 internal process improvements that had been on the backlog for over a year.
  • 8 competitive vulnerabilities addressed that were previously “known issues.”

None of that shows up in “time saved” calculations. But it showed up clearly in customer retention rates and competitive win rates.

Type 3: Capability Unlock (The Full-Stack Employee)

We used to hire for deep specialization. AI is ushering in the era of the “Generalist-Specialist.”

Anthropic’s internal research found that security teams are building data visualizations. Alignment researchers are shipping frontend code. Engineers are creating marketing materials.

AI lowers the barrier to entry for hard skills.

A marketing manager doesn’t need to know SQL to query a database anymore; she just needs to know what question to ask the AI. This goes well beyond speed or time saved to removing the dependency bottleneck.

When a marketer can run their own analysis without waiting three weeks for the Data Science team, the velocity of the entire organization accelerates. The marketing generalist is now a front-end developer, a data analyst, and a copywriter all at once.

OpenAI’s enterprise data shows 75% of users report being able to complete new tasks they previously couldn’t perform. Coding-related messages increased 36% for workers outside of technical functions.

How to measure capability unlock:

  • Skills accessed (not skills owned).
  • Cross-functional work completed without handoffs.
  • Speed to execute on ideas that would have required hiring or outsourcing.
  • Projects launched without expanding headcount.

A marketing leader at a mid-market B2B company told me her team can now handle routine reporting and standard analyses with AI support, work that previously required weeks on the analytics team’s queue.

Their campaign optimization cycle accelerated 4x, leading to 31% higher campaign performance.

The “time saved” metric would say: “AI saves two hours per analysis.”

The capability unlock metric says: “We can now run 4x more tests per quarter, and our analytics team tackles deeper strategic work.”

Building A Finance-Friendly AI ROI Framework

CFOs care about three questions:

  • Is this increasing revenue? (Not just reducing cost.)
  • Is this creating competitive advantage? (Not just matching competitors.)
  • Is this sustainable? (Not just a short-term productivity bump.)

How to build an AI measurement framework that actually answers those questions:

Step 1: Baseline Your “Before AI” State

Don’t skip this step; without it, proving AI impact later will be impossible. Before deploying AI, document current throughput, quality metrics, and scope limitations.

Step 2: Define Leading Vs. Lagging Indicators

You need to track both efficiency and expansion, but you need to frame them correctly to Finance.

  • Leading Indicator (Efficiency): Time saved on existing tasks. This predicts potential capacity.
  • Lagging Indicator (Expansion): New work enabled and revenue impact. This proves the value was realized.
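The leading/lagging split can be sketched in a few lines of Python. All figures below are hypothetical; substitute your own baseline and post-deployment numbers:

```python
# Leading indicator: potential capacity created by faster task completion.
def capacity_freed_hours(tasks: int, hours_before: float, hours_after: float) -> float:
    return tasks * (hours_before - hours_after)

# Lagging indicator: value actually realized from that capacity.
def realized_revenue_lift(new_revenue: float, baseline_revenue: float) -> float:
    return new_revenue - baseline_revenue

# 200 analyses per quarter dropping from 3 hours to 1 hour frees 400 hours...
freed = capacity_freed_hours(200, 3.0, 1.0)
# ...but it only counts as ROI once a lagging metric moves.
realized = realized_revenue_lift(1_150_000, 1_000_000)
print(f"Capacity freed: {freed:.0f}h; revenue lift realized: ${realized:,.0f}")
```

Reporting the first number alone is the vanity-metric trap; pairing it with the second is what answers the CFO’s question.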

Step 3: Track AI Impact On Revenue, Not Just Cost

Connect AI metrics directly to business outcomes:

  • If AI helps customer success teams → Track retention rate changes.
  • If AI helps sales teams → Track win rate and deal velocity changes.
  • If AI helps marketing teams → Track pipeline contribution and conversion rate changes.
  • If AI helps product teams → Track feature adoption and customer satisfaction changes.

Step 4: Measure The “Frontier” Gap

OpenAI’s enterprise research revealed a widening gap between “frontier” workers and median workers. Frontier firms send 2x more messages per seat.

Tracking this gap means identifying which teams are extracting real value versus which are just experimenting.

Step 5: Build The Measurement Infrastructure First

PwC’s 2026 AI predictions warn that measuring iterations instead of outcomes falls short when AI handles complex workflows.

As PwC notes: “If an outcome that once took five days and two iterations now takes fifteen iterations but only two days, you’re ahead.”

The infrastructure you need before you deploy AI involves baseline metrics, clear attribution models, and executive sponsorship to act on insights.

The Measurement Paradox

The organizations best positioned to measure AI ROI are the ones who already had good measurement infrastructure.

According to Kyndryl’s 2025 Readiness Report, most firms aren’t positioned to prove AI ROI because they lack the foundational data discipline.

Sound familiar? This connects directly to the data hygiene challenge I’ve written about previously. You can’t measure AI’s impact if your data is messy, conflicting, or siloed.

The Bottom Line

The AI productivity revolution is well underway. According to Anthropic’s research, current-generation AI could increase U.S. labor productivity growth by 1.8% annually over the next decade, roughly doubling recent rates.

But capturing that value requires measuring the right things.

Forget asking: “How much time does this save?”

Instead, focus on:

  • “What quality improvements are we seeing in output?”
  • “What work is now possible that wasn’t before?”
  • “What capabilities can we access without expanding headcount?”

These are the metrics that convince CFOs to increase AI budgets. These are the metrics that reveal whether AI is actually transforming your business or just making you busy faster.

Time saved is a vanity metric. Expansion enabled is the real ROI.

Measure accordingly.



Featured Image: SvetaZi/Shutterstock

Google Launches Personal Intelligence In AI Mode via @sejournal, @MattGSouthern

Google is rolling out Personal Intelligence, a feature that connects Gmail and Google Photos to AI Mode in Search, delivering personalized responses based on users’ own data.

The feature, announced in a blog post by Robby Stein, VP of Product at Google Search, is available to Google AI Pro and AI Ultra subscribers who opt in.

What’s New

Personal Intelligence lets AI Mode reference information from a user’s Gmail and Google Photos to tailor search responses. Google describes it as connecting the dots across Google apps to unlock search results that fit individual context.

The feature rolls out as a Labs experiment for eligible subscribers in the U.S. in English. It is available for personal Google accounts only, not for Workspace business, enterprise, or education users.

To enable Personal Intelligence, users can:

  1. Open Search and tap their profile
  2. Click on Search personalization
  3. Select Connected Content Apps
  4. Connect Gmail and Google Photos

In the settings menu, the Gmail connection appears under “Workspace,” though the feature itself is not available to Workspace business, enterprise, or education accounts.

Subscribers may also see an invitation to try the feature directly in AI Mode as the rollout progresses over the next few days.

How It Works

Personal Intelligence uses Gemini 3 to process queries alongside connected account data. When enabled, AI Mode may reference email confirmations, travel bookings, and photo memories to inform responses.

Stein offered examples in the announcement. A user searching for trip activities could receive recommendations based on hotel bookings in Gmail and past travel photos. Someone shopping for a coat could get suggestions that account for preferred brands, upcoming travel destinations from flight confirmations, and expected weather conditions.

Stein wrote:

“With Personal Intelligence, recommendations don’t just match your interests — they fit seamlessly into your life. You don’t have to constantly explain your preferences or existing plans, it selects recommendations just for you, right from the start.”

See an example in the screenshots below:

Screenshot from: blog.google/products-and-platforms/products/search/personal-intelligence-ai-mode-search/, January 2026.
Screenshot from: blog.google/products-and-platforms/products/search/personal-intelligence-ai-mode-search/, January 2026.

Privacy Controls

Google emphasizes that connecting Gmail and Google Photos is opt-in. Users choose whether to enable the connections and can turn them off at any time.

Google says AI Mode does not train directly on users’ Gmail inbox or Google Photos library. The company says training is limited to specific prompts in AI Mode and the model’s responses, used to improve functionality over time.

Google acknowledges that Personal Intelligence may make mistakes, including incorrectly connecting unrelated topics or misunderstanding context. Users can correct errors through follow-up responses or by providing feedback with the thumbs down button.

Why This Matters

This is the personal context feature Google teased at I/O in May 2025. Seven months later, in December, Google SVP Nick Fox confirmed in an interview that the feature was still in internal testing with no public timeline. Today’s rollout delivers what was delayed.

For the 75 million daily active users Fox reported in AI Mode in that December interview, this could reduce how much context you need to type in order to get tailored responses.

For publishers, the implications depend on how personalization affects which content surfaces in AI Mode responses. If the system prioritizes user-specific context over general search results, some informational queries may resolve without a click to external sites. Google has not shared data on how Personal Intelligence affects citation patterns or traffic flow.

The feature is currently limited to paid subscribers on personal accounts. Whether Google expands it to free users or Workspace accounts would change its reach.

Looking Ahead

Personal Intelligence is rolling out as a Labs feature over the next few days. Google says eligible AI Pro and AI Ultra subscribers in the U.S. will automatically have access as it becomes available.

Watch for whether Google provides analytics or attribution tools that let publishers track how personalized AI Mode responses affect visibility and traffic patterns.

A Breakdown Of Microsoft’s Guide To AEO & GEO via @sejournal, @martinibuster

Microsoft published a sixteen-page explainer guide about optimizing for AI search and chat. While many of the suggestions can be classified as SEO, other tips relate exclusively to AI search surfaces. Here are the most helpful takeaways.

What AEO and GEO Are And Why They Matter

Microsoft explains that AI search surfaces have created an evolution from “ranking for clicks” to “being understood and recommended by AI.” Traditional SEO still provides a foundation for being cited in AI, but AEO and GEO determine whether content gets surfaced inside AI-driven experiences.

Here is how Microsoft distinguishes AEO and GEO. The first thing to notice is that they define AEO as Agentic Engine Optimization. That’s different from Answer Engine Optimization, which is how AEO is commonly understood.

  • AEO (Answer/Agentic Engine Optimization) focuses on making content and product information easy for AI assistants and agents to retrieve, interpret, and present as direct answers.
  • GEO (Generative Engine Optimization) focuses on making your content discoverable and persuasive inside generative AI systems by increasing clarity, trustworthiness, and authoritativeness.

Microsoft views AEO and GEO not as limited to marketing but as spanning multiple teams within an organization.

The guide says:

“This shift impacts every part of the organization. Marketing teams must rethink brand differentiation, growth teams need to adapt to AI-driven journeys, ecommerce teams must measure success differently, data teams must surface richer signals, and engineering teams must ensure systems are AI-readable and reliable.”

AI shopping is not one channel; it’s really a set of overlapping systems.

Microsoft describes AI shopping as three overlapping consumer touchpoints:

  1. AI browsers that interpret what’s on a page and surface context while users browse.
  2. AI assistants that answer questions and guide decisions in conversation.
  3. AI agents that can take actions, like navigating, selecting options, and completing purchases.

The AI touchpoint matters less than whether the system can access accurate, structured, and trustworthy product information.

SEO Still Plays A Role

Microsoft’s guide says that with AEO and GEO, the competition shifts from discovery to influence. SEO is still important, but it is no longer the whole game.

The new competition is about influencing the AI recommendation layer, not just showing up in rankings.

Microsoft describes it like this:

  • SEO helps the product get found.
  • AEO helps the AI explain it clearly.
  • GEO helps the AI trust it and recommend it.

Microsoft explains:

“Competition is shifting from discovery to influence (SEO to AEO/GEO).

If SEO focused on driving clicks, AEO is focused on driving clarity with enriched, real-time data, while GEO focuses on building credibility and trust so AI systems can confidently recommend your products.

SEO remains foundational, but winning in AI-powered shopping experiences requires helping AI systems understand not just what your product is, but why it should be chosen.”

How AI Systems Decide What To Recommend

Microsoft explains how an AI assistant, in this case Copilot, handles a user’s request. When a user asks for a recommendation, the AI assistant goes into a reasoning phase where the query is broken down using a combination of web and product feed data.

The web data provides:

  • “General knowledge
  • Category understanding
  • Your brand positioning”

Feed data provides:

  • “Current prices
  • Availability
  • Key specs”

The AI assistant may, based on the feed data, choose to surface the product with the lowest price that is also in stock. When the user clicks through to the website, the AI assistant scans the page for information that provides context.

Microsoft lists these as examples of context:

  • Detailed reviews
  • Videos that explain the product
  • Current promotions
  • Delivery estimates

The agent aggregates this information and provides guidance based on what it discovered about the product’s context (delivery times, etc.).

Microsoft brings it all together like this:

First, there’s crawled data:
The information AI systems learned during training and retrieve from indexed web pages, which shapes your brand’s baseline perception and provides grounding for AI responses, including your product categories, reputation, and market position.

Second, there’s product feeds and APIs:
The structured data you actively push to AI platforms, giving you control over how your products are represented in comparisons and recommendations. Feeds provide accuracy, details and consistency.

Third, there’s live website data:
The real-time information AI agents see when they visit your actual site, from rich media and user reviews to dynamic pricing and transaction capabilities. Each data source plays a distinct role in the shopping journey — traditional SEO remains essential because AI systems perform real-time web searches frequently throughout the shopping journey, not just at purchase time, and your site must rank well to be discovered, evaluated, and recommended.

Microsoft Recommends A Three-Part Action Plan

Strategy 1: Technical Foundations

The core idea for this strategy is that your product catalog must be machine-readable, consistent everywhere, and up to date.

Key actions:

  • Use structured data (schema) for products, offers, reviews, lists, FAQs, and brand.
  • Include dynamic fields like pricing and availability.
  • Keep feed data and on-page structured data aligned with what users actually see.
  • Avoid mismatches between visible content and what is served to crawlers.
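To make the first two actions concrete, here is a minimal sketch of schema.org Product structured data in JSON-LD, the markup format the guide’s “schema” recommendation commonly refers to. The product name, URL, and values below are hypothetical placeholders, and the price and availability fields are the kind of dynamic data Microsoft says must stay aligned with the feed and the visible page:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Running Shoe",
  "description": "Lightweight trail shoe with a grippy outsole.",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "213"
  },
  "offers": {
    "@type": "Offer",
    "url": "https://www.example.com/products/trail-shoe",
    "priceCurrency": "USD",
    "price": "89.99",
    "availability": "https://schema.org/InStock"
  }
}
```

The alignment actions above are the reason the offer block matters: the price and availability stated here should match both the product feed and what a visitor actually sees on the page.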

Strategy 2: Optimize Content For Intent And Clarity

This strategy is about optimizing product content so that it answers typical user questions and is easy for AI to reuse.

Key actions:

  • Write product descriptions that start with benefits and real use-case value.
  • Use headings and phrasing that match how people ask questions.

Add modular content blocks:

  • FAQs
  • specs
  • key features
  • comparisons

Add Contextual Information

  • Support multi-modal interpretation (good alt text, transcripts for video content, structured image metadata).
  • Add complementary product context (pairings, bundles, “goes well with”).

Strategy 3: Trust Signals (Authority And Credibility)

The takeaway for this strategy is that AI assistants and agents prioritize content that looks verified and reputable.

Key actions:

  • Strengthen review credibility (verified reviews, strong volumes, clear sentiment).
  • Reinforce brand authority through real-world signals (press, certifications, partnerships).
  • Keep claims grounded and consistent to avoid trust degradation.
  • Use structured data to clarify legitimacy and identity.

Microsoft explains it like this:

“AI assistants prioritize content from sources they can trust. Signals such as verified reviews, review volume, and clear sentiment help establish credibility and influence recommendations.

Brand authority is reinforced through consistent identity, real-world validation such as press coverage, certifications, and partnerships, and the use of structured data to clearly define brand entities.

Claims should be factual, consistent, and verifiable, as exaggerated or misleading information can reduce trust and limit visibility in AI-powered experiences”

Takeaways

AI search changes the goal from winning rankings to earning recommendations. SEO still matters, but AEO and GEO determine how well content is interpreted, explained, and chosen inside AI assistants and agents.

AI shopping is not a single channel but an ecosystem of assistants, browsers, and agents that rely on authoritative signals across crawled content, structured feeds, and live site experiences. The brands that win are the ones with consistent, machine-readable data, and clear content that contains useful contextual information that can be easily summarized.

Microsoft published a blog post that is accompanied by a link to the downloadable explainer guide: From Discovery to Influence: A Guide to AEO and GEO.

Featured Image by Shutterstock/Kues

56% Of CEOs Report No Revenue Gains From AI: PwC Survey via @sejournal, @MattGSouthern

Most companies haven’t yet seen financial returns from their AI investments, according to PwC’s 29th Global CEO Survey.

The survey of 4,454 chief executives across 95 countries found that 56% report neither increased revenue nor lower costs from AI over the past 12 months.

What The Survey Found

About 30% of CEOs said their company saw increased revenue from AI in the last year. On costs, 26% reported decreases while 22% said costs went up. PwC defined “increase” and “decrease” as changes of 2% or more.

Only 12% of companies achieved both revenue gains and cost reductions. PwC called this group the “vanguard” and noted they had stronger AI foundations in place, including defined roadmaps and technology environments built for integration.

For marketing specifically, the numbers suggest early-stage adoption. Just 22% of CEOs said their organization applies AI to demand generation to a large or very large extent. Applying AI to the company’s products, services, and experiences showed a similar figure, at 19%.

Separate from AI, CEO confidence in near-term growth has declined. Only 30% said they were very or extremely confident about revenue growth over the next 12 months. That’s down from 38% last year and a peak of 56% in 2022.

Why This Matters

The survey adds data to a pattern I’ve tracked over the past year. A LinkedIn report found 72% of B2B marketers felt overwhelmed by AI’s pace of change. A Gartner survey showed 73% of marketing teams were using AI, but 87% of CMOs had experienced campaign performance problems.

The 22% demand generation figure gives marketers a rough benchmark for how their AI adoption compares to the broader executive population. It’s self-reported CEO perception rather than measured deployment, but it suggests most organizations are still in early stages of applying AI to customer acquisition at scale.

PwC’s framing is direct:

“Isolated, tactical AI projects often don’t deliver measurable value.”

The report adds that tangible returns come from enterprise-scale deployment consistent with company business strategy.

Looking Ahead

PwC recommends companies focus on building AI foundations before expecting returns. That includes defined roadmaps, technology environments that enable integration, and formalized responsible AI processes.

For marketing teams evaluating their own AI investments, this survey suggests most organizations are still working through the same questions.


Featured Image: Blackday/Shutterstock

More Sites Blocking LLM Crawling – Could That Backfire On GEO? via @sejournal, @martinibuster

Hostinger released an analysis showing that businesses are blocking the AI crawlers used to train large language models while allowing AI assistant crawlers to read and summarize their websites. The company examined 66.7 billion bot interactions across 5 million websites and found that the AI assistant crawlers used by tools such as ChatGPT now reach more sites even as companies restrict other forms of AI access.

Hostinger Analysis

Hostinger is a web host and also a no-code, AI agent-driven platform for building online businesses. The company said it analyzed anonymized website logs to measure how verified crawlers access sites at scale, allowing it to compare changes in how search engines and AI systems retrieve online content.

The analysis they published shows that AI assistant crawlers expanded their reach across websites during a five-month period. Data was collected during three six-day windows in June, August, and November 2025.

OpenAI’s SearchBot increased coverage from 52 percent to 68 percent of sites, while Applebot (which indexes content to power Apple’s search features) doubled from 17 percent to 34 percent. During the same period, traditional search crawlers remained essentially constant. The data indicates that AI assistants are adding a new layer to how information reaches users rather than replacing search engines outright.

At the same time, the data shows that companies sharply reduced access for AI training crawlers. OpenAI’s GPTBot dropped from access on 84 percent of websites in August to 12 percent by November. Meta’s Meta-ExternalAgent crawler dropped from 60 percent website coverage to 41 percent. These crawlers collect data over time to improve AI models and update their parametric knowledge, but many businesses are blocking them, either to limit data use or out of fear of copyright infringement issues.

Parametric Knowledge

Parametric Knowledge, also known as Parametric Memory, is the information that is “hard-coded” into the model during training. It is called “parametric” because the knowledge is stored in the model’s parameters (the weights). Parametric Knowledge is long-term memory about entities, for example, people, things, and companies.

When a person asks an LLM a question, the LLM may recognize an entity like a business and then retrieve the associated vectors (facts) that it learned during training. So, when a business blocks a training bot from its website, it keeps the LLM from knowing anything about it, which might not be the best thing for an organization that’s concerned about AI visibility.

Allowing an AI training bot to crawl a company website enables that company to exercise some control over what the LLM knows about it, including what it does, branding, whatever is in the About Us, and enables the LLM to know about the products or services offered. An informational site may benefit from being cited for answers.

Businesses Are Opting Out Of Parametric Knowledge

Hostinger’s analysis shows that businesses are “aggressively” blocking AI training crawlers. While Hostinger’s research doesn’t mention this, the effect of blocking AI training bots is that businesses are essentially opting out of LLMs’ parametric knowledge: the LLM is prevented from learning directly from first-party content during training, which removes the site’s ability to tell its own story and forces the LLM to rely on third-party data or knowledge graphs.

Hostinger’s research shows:

“Based on tracking 66.7 billion bot interactions across 5 million websites, Hostinger uncovered a significant paradox:

Companies are aggressively blocking AI training bots, the systems that scrape content to build AI models. OpenAI’s GPTBot dropped from 84% to 12% of websites in three months.

However, AI assistant crawlers, the technology that ChatGPT, Apple, etc. use to answer customer questions, are expanding rapidly. OpenAI’s SearchBot grew from 52% to 68% of sites; Applebot doubled to 34%.”

A recent post on Reddit shows how blocking LLM access to content has become normalized, understood as a way to protect intellectual property (IP).

The post starts with an initial question asking how to block AIs:

“I want to make sure my site is continued to be indexed in Google Search, but do not want Gemini, ChatGPT, or others to scrape and use my content.

What’s the best way to do this?”

Screenshot Of A Reddit Conversation

Later in that thread, someone asked whether they were blocking LLMs to protect their intellectual property, and the original poster confirmed that was the reason, responding:

“We publish unique content that doesn’t really exist elsewhere. LLMs often learn about things in this tiny niche from us. So we need Google traffic but not LLMs.”

That may be a valid reason. A site that publishes unique instructional information about a software product, information that does not exist elsewhere, may want to block an LLM from ingesting its content because otherwise the LLM could answer those questions directly, removing the need to visit the site.
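For sites that decide to draw that line, the usual mechanism is robots.txt. As a hedged sketch, the user-agent tokens below are the ones the vendors currently document (OpenAI’s GPTBot for training versus OAI-SearchBot for search answers, Google-Extended to opt out of Gemini training while Googlebot keeps the site in Search, and Meta’s Meta-ExternalAgent, mentioned in Hostinger’s data). These directives are advisory and the tokens change over time, so each vendor’s documentation should be checked before relying on them:

```
# Block crawlers that collect AI training data
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: meta-externalagent
Disallow: /

# Keep classic search and AI-assistant retrieval open
User-agent: OAI-SearchBot
Allow: /

User-agent: Googlebot
Allow: /
```

This is roughly the configuration the Reddit poster was asking for: still indexed in Google Search, but opted out of model training.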

But for other sites with less unique content, like a product review and comparison site or an ecommerce site, it might not be the best strategy to block LLMs from adding information about those sites into their parametric memory.

Brand Messaging Is Lost To LLMs

As AI assistants answer questions directly, users may receive information without needing to visit a website. This can reduce direct traffic and limit the reach of a business’s pricing details, product context, and brand messaging. It’s possible that the customer journey ends inside the AI interface, and businesses that block LLMs from acquiring knowledge about their companies and offerings are essentially relying on the search crawler and search index to fill that gap (and maybe that works?).

The increasing use of AI assistants affects marketing and extends into revenue forecasting. When AI systems summarize offers and recommendations, companies that block LLMs have less control over how pricing and value appear. Advertising efforts lose visibility earlier in the decision process, and ecommerce attribution becomes harder when purchases follow AI-generated answers rather than direct site visits.

According to Hostinger, some organizations are becoming more selective about which content is available to AI, especially AI assistants.

Tomas Rasymas, Head of AI at Hostinger, commented:

“With AI assistants increasingly answering questions directly, the web is shifting from a click-driven model to an agent-mediated one. The real risk for businesses isn’t AI access itself, but losing control over how pricing, positioning, and value are presented when decisions are made.”

Takeaway

Blocking LLMs from using website data for training shouldn’t be the default position, even though many people feel real anger and annoyance at the idea of an LLM training on their content. It may be more useful to weigh the benefits against the disadvantages, and to consider whether those disadvantages are real or perceived.

Featured Image by Shutterstock/Lightspring

A Little Clarity On SEO, GEO, And AEO via @sejournal, @martinibuster

The debate about AEO/GEO centers on whether it’s a subset of SEO, a standalone discipline, or just standard SEO. Deciding where to plant a flag is difficult because every argument makes a solid case. There’s no doubt that change is underway, and it may be time to find where all the competing ideas intersect and work from there.

The Case Against AEO/GEO

Many SEOs argue that AEO/GEO doesn’t differentiate itself enough to justify being anything other than a subset of SEO, one that shares computers in the same office.

Harpreet Singh Chatha (X profile) of Harps Digital recently tweeted about AEO / GEO myths to leave behind in 2025.

Some of what he listed:

  • “LLMs.txt
  • Paying a GEO expert to do “chunk optimization.” Chunking content is just making your content readable.
  • Thinking AEO / GEO have nothing in common with SEO. Ask your favourite GEO expert for 25 things that are unique to AI search and don’t overlap with SEO. They will block you.
  • Saying SEO is dead. “

The legendary Greg Boser (LinkedIn profile), one of the original SEOs, practicing since 1996, tweeted this:

“At the end of the day, the core foundation of what we do always has been and always will be about understanding how humans use technology to gain knowledge.

We don’t need to come up with a bunch of new acronyms to continue to do what we do. All that needs to happen is we all agree to change the “E” in SEO from “Engine” to “Experience”.

Then everyone can stop wasting time writing all the ridiculous SEO/GEO/AEO posts, and get back to work.”

Inability To Articulate AEO/GEO

What contributes to the perception that AEO/GEO is not a real thing is that many of its proponents fail to differentiate it from standard SEO. We’ve all seen it: someone tweets their new tactic and the SEO peanut gallery chimes in with “nah, that’s SEO.”

Back in October, Microsoft published a blog post about optimizing content for AI in which they asserted:

“While there’s no secret strategy for being selected by AI systems, success starts with content that is fresh, authoritative, structured, and semantically clear.”

The post goes on to affirm the importance of SEO fundamentals such as “Crawlability, metadata, internal linking, and backlinks” but then states that these are just starting points. Microsoft points out that AI search provides answers, not a ranked list of pages. That’s correct, and it changes a lot.

Microsoft says that now it’s about which pieces of content are being ranked:

“In AI search, ranking still happens, but it’s less about ordering entire pages and more about which pieces of content earn a place in the final answer.”

That kind of echoes what Jesse Dwyer of Perplexity AI recently said about AI Search and SEO:

“As for the index technology, the biggest difference in AI search right now comes down to whole-document vs. “sub-document” processing.

…The AI-first approach is known as “sub-document processing.” Instead of indexing whole pages, the engine indexes specific, granular snippets (not to be confused with what SEO’s know as “featured snippets”).”

Microsoft recently published an explainer called “From Discovery to Influence: A Guide to AEO and GEO” that is tellingly focused mostly on shopping, which is notable because there’s a growing awareness that ecommerce stands to gain a lot from AI search.

No such luck for informational sites, because it’s also gradually becoming understood that agentic AI is poised to strip informational sites of all branding and value-add, treating them as mere sources of data.

Common SEO Practices That Pass As GEO

Some of what some champion as GEO and AEO are actually longstanding SEO practices:

  • Crafting content in the form of answers
    Good SEOs have been doing this since Featured Snippets came out in 2014.
  • Chunking content
    Crafting content in tight paragraphs looks good on mobile devices, and it’s something good SEOs and thoughtful content creators have been doing for well over a decade.
  • Structured Content
    Headings and other elements that strongly disambiguate the content are also SEO.
  • Structured Data
    Shut your mouth. This is SEO.

The Customer Is Always Right

Some in the GEO Is Real camp tend to regard themselves as evolving with the times, but they also acknowledge they’re just offering what clients are demanding. SEO practitioners are in a hard spot: what are you going to do? Plant your flag on traditional SEO and turn your back on what potential clients are begging for?

Googlers Insist It’s Still SEO

There are Googlers such as Robby Stein (VP of Product), Danny Sullivan, and John Mueller who say that SEO is 100% still relevant because, under the hood, AI is just firing off Google searches for top-ranked sites to backfill into synthesized answers and links (Read: Google Downplays GEO – But Let’s Talk About Garbage AI SERPs). OpenAI was recently hiring a content strategist who can lean into SEO (not GEO), which some say demonstrates that even OpenAI is focused on traditional SEO.

Optimization Is No Longer Just Google

Manick Bhan (LinkedIn profile), founder of the Search Atlas SEO suite, offered an interesting take on why we may be transitioning to a divided SEO and GEO path.

Manick shared:

“SEO has always meant ‘search engine optimization,’ but in practice it has historically meant ‘Google optimization.’ Google defined the interface, the ranking paradigm, the incentives, and the entire mental model the industry used.

The challenge with calling GEO a ‘sub-discipline’ of SEO is that the LLM ecosystem is not one ecosystem, and Google’s AI Mode is becoming a generative surface itself.”

Manick asserts that there is no one “GEO” because each of the AI search and answer engines uses different methodologies. He observed that the underlying tactics remain the same, but “the interface, the retrieval model, and the answer surface” are all radically changed from anything that’s come before.

Manick believes that GEO is not SEO, offering the following insights:

“My position is clear: GEO is not just SEO with a fresh coat of paint, and reducing it to that misses the fundamental shift in how modern answer engines actually retrieve, rank, and assemble information.

Yes, the tactics still live in the same universe of on-page and off-page signals. Those fundamentals haven’t changed. But the machines we’re optimizing for have.

Today’s answer engines:

  • Retrieve differently,
  • Fuse and weight sources differently,
  • Handle recency differently,
  • Assign trust and authority differently,
  • Fan out queries differently,
  • And incorporate user behavior into their RAG corpora differently.

Even seemingly small mechanics — like logit calibration and temperature — produce practically different retrieval outputs, which is why identical prompts across engines show measurable semantic drift and citation divergence.

This is why we’re seeing quantifiable, repeatable differences in:

  • Retrieved sources,
  • Answer structures,
  • Citation patterns,
  • Semantic frames,
  • And ranking behavior across LLMs, AI Mode surfaces, and classical Google results.

In this landscape, humility and experimentation matter more than dogma. Treating all of this as ‘just SEO’ ignores how different these systems already are, and how quickly they’re evolving.”

It’s Clear We Are In Transition

Maybe one of the reasons for the anti-GEO backlash is that there is a loud contingent of agencies and individuals who have very little experience with SEO, some fresh out of college with none at all. And it’s not their lack of experience that gets some SEOs into ranting mode. It’s the things they purport are GEO/AEO that are clearly just SEO.

Yet, as Manick of Search Atlas pointed out, AI search and chat surfaces are wildly different from classic search, and it’s closing one’s eyes to the obvious to deny that things are different and in transition.

Featured Image by Shutterstock/Natsmith1