AI As Your Marketing Co-Pilot: How To Effectively Leverage LLMs In SEO & Content via @sejournal, @cshel

I’ve been seeing those “God is my co-pilot” bumper stickers since I was old enough to read them.

I was a precocious little agnostic, so they always struck me as weird. God can’t be your co-pilot because God isn’t a physical manifestation of someone who can help you drive a car.

I eventually figured out that “God is my co-pilot” was less a literal statement and more a declaration of faith that there is an omniscient presence available to help you navigate life’s construction zones (if you believe, anyway).

So, fast forward to 2025, and marketers have a new omniscient presence that they can put their faith in. Something that seems equally all-knowing but perhaps a little more … unpredictable.

AI.

Large language models (LLMs) – like ChatGPT, Claude, Gemini – feel delightfully divine when you first try them. They answer instantly, confidently, and often with an authority that makes you wonder if they do know everything.

But, spend enough time with these tools, and you discover something unsettling: AI isn’t just your god-like guide. It can also act like the devil, gleefully granting your wishes exactly as asked – and letting you suffer the consequences.

This is why the healthiest way to think of AI in your SEO and content workflows is as a co-pilot. Not God. Not Lucifer. But, a powerful partner that can elevate your work, if you exercise your free will (and make good choices).

The God-Like Qualities Of AI

There’s a reason AI feels god-like in a marketing context:

  • It seems omnipresent, embedded in your search results, your content management system (CMS), your analytics.
  • It delivers answers instantly, with confidence and authority.
  • It processes far more data than any human ever could, instantly finding patterns we mere mortals miss on the first (or third) pass.

Ask it to draft a content brief, summarize competitive search engine results pages (SERPs), generate topic clusters, or even shape a brand narrative – and it performs in seconds what would have taken you hours.

That kind of power can feel miraculous.

But, just as theologians remind us that God’s will is mysterious and not always aligned with ours, LLMs work on their own unknowable internal logic.

The outputs may not match your intent. The answer may not come in the form you wanted. And you may not even fully grasp why it chose the answer it did.

The Devilish Side Of AI

On the flip side, AI can also be a trickster: seductive, transactional, and literal. It will grant you exactly what you wish for – and sometimes that’s the worst thing possible.

When you prompt an LLM poorly, you’re effectively making a deal with the devil. The model will fulfill your request to the letter, even if what you asked was misguided, incomplete, or poorly articulated.

The result? Content that’s technically correct but off-brand, off-tone, or even factually wrong – yet delivered with such confidence it lulls you into publishing it.

The moral: Be careful what you ask for. The clarity of your prompt determines the quality of your output.

What AI Is Good At

When treated as a co-pilot, not as a god, AI can supercharge your workflow:

Research & Insights

  • Competitive landscape analyses.
  • SERP gap identification.
  • Tracking how competitors frame their unique value propositions.
  • Summarizing multiple opinion pieces or reviews into one clear insight.
  • Identifying overlooked audience segments based on forums and social media discussions.

Content Ideation & Briefing

  • Generating alternative angles on stale topics: e.g., turning “best practices” into “common mistakes” or “myths to avoid.”
  • Rewriting existing briefs to prioritize experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) signals.
  • Drafting Q&A content by scanning customer service transcripts or Reddit threads.
  • Suggesting specific examples or metaphors to make dry topics more engaging.

Narrative Shaping & Messaging

  • Reworking messaging for different formats: a LinkedIn post, an email subject line, and a webinar title – all aligned.
  • Auditing your current messaging to highlight jargon and suggest plain-language alternatives.
  • Helping articulate your brand’s point of view in ways that differentiate it from competitors.
  • Stress-testing your messaging by generating “devil’s advocate” objections you can preemptively address.

Workflow Enhancements

  • Drafting a competitive heat map: strengths, weaknesses, opportunities, threats – with citations.
  • Organizing customer testimonials into themed categories and crafting pull quotes.
  • Generating follow-up email sequences based on webinar transcripts or meeting notes.
  • Converting white papers into tweet threads, infographic outlines, and video scripts.

It’s like an intern with infinite energy and decent taste – incredibly helpful, but still in need of supervision.

What AI Is Not Good At

Don’t confuse the fluency of AI with wisdom. Here’s where it stumbles:

Judgment & Nuance

It doesn’t understand your brand’s unique sensibility, your audience’s emotional context, or when not to say something. You have to give it that context and direction. You cannot assume it will figure it out.

Accuracy & Truth

It is still prone to “hallucinations” – confidently wrong statements presented as fact.

We have limited understanding of why this happens, but it is so frequent that you almost have to assume there are at least a few hallucinations in the output somewhere.

Accountability

It cannot make decisions, nor does it bear the consequences of your choices. That’s on you.

In short, AI lacks your free will. And free will is what allows you to question, interpret, and choose what to do with its suggestions.

The Co-Pilot Mindset: Free Will Wins

To work effectively with your AI co-pilot, you need to strike the right balance between trust and control.

Here’s how:

Stay In The Pilot’s Seat

Never hand over full control. You’re still ultimately responsible for the vehicle.

Treat AI as a partner – or maybe not even a full partner, more like an exceptionally bright and quick research assistant – but never a replacement for you in any equation.

Be Precise In Your Prompts

Don’t assume it “knows what you mean.” Giving the AI instructions is like giving instructions to a particularly clever child who enjoys maliciously complying with your orders, except the AI doesn’t actually experience the joy.

You need to articulate your expectations clearly: format, tone, audience, and purpose. Add as much context and as many constraints as you can. The more data points and context you can provide, the better the outputs will be.
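One way to enforce that discipline is a reusable prompt skeleton that forces you to fill in every field before you hit send. This is an illustrative sketch only; the template and field names are our own invention, not a standard:

```python
# An illustrative prompt skeleton covering format, tone, audience,
# purpose, and constraints. Nothing here is a standard; adapt freely.
PROMPT_TEMPLATE = """You are writing for {audience}.
Purpose: {purpose}
Tone: {tone}
Format: {fmt}
Constraints:
{constraints}

Task: {task}"""

def build_prompt(task, audience, purpose, tone, fmt, constraints):
    """Assemble a fully specified prompt instead of a bare request."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return PROMPT_TEMPLATE.format(task=task, audience=audience,
                                  purpose=purpose, tone=tone, fmt=fmt,
                                  constraints=constraint_lines)

prompt = build_prompt(
    task="Draft a content brief on schema markup for local businesses.",
    audience="marketing managers at small retail chains",
    purpose="drive newsletter signups",
    tone="plain-spoken, no jargon",
    fmt="H2/H3 outline with suggested word counts",
    constraints=["cite at least two data sources",
                 "keep it under 1,200 words"],
)
print(prompt)
```

The point isn’t this particular template; it’s that every prompt you send answers the same questions about format, tone, audience, and purpose before the model ever sees it.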

Use It To Accelerate, Not Replace

AI can speed up research, help shape narratives, and generate ideas, but it can’t replace your expertise or final judgment.

Review & Revise

Never, never, never, never publish output unedited. Always apply your brand’s perspective, always fact-check, and always ensure alignment with your goals.

Read everything you’re about to publish carefully. It’s okay to trust, but always verify.

Here’s an example of how that looks in practice:

I recently took a client’s complete keyword ranking report – not just the terms they were tracking, but every single ranking URL and query – and filtered out any URL already on page 1.

Then, I narrowed the data to just rankings in positions 11-20 (to keep it manageable) and fed that into an LLM.

I asked it to estimate the potential lift in organic traffic if each term improved to position 1 and to rank the list by estimated lift, highest to lowest.

But, I also gave the LLM context about the client’s business, explaining what kinds of customers and services were most valuable to them.

Then, I asked the model to highlight the keywords that made the most business sense for this client, because not every keyword you rank for is one you actually want to rank for.

With that context, the LLM was able to match keyword intent to the client’s goals and call out the terms that aligned with their business priorities.

In just minutes, I had a prioritized roadmap of high-impact, high-fit opportunities – something that would have taken hours to produce manually.
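The lift-estimation part of that workflow can be sketched in a few lines of Python. Everything here is illustrative: the click-through rates by position are rough placeholder averages, and the keywords and volumes are invented, not the client’s data.

```python
# Illustrative click-through rates by organic position; real curves vary
# widely by query type, so treat these as placeholder averages.
CTR_BY_POSITION = {1: 0.28, 11: 0.010, 12: 0.009, 13: 0.008, 14: 0.007,
                   15: 0.006, 16: 0.005, 17: 0.005, 18: 0.004,
                   19: 0.004, 20: 0.003}

def estimated_lift(search_volume, current_position):
    """Monthly clicks gained if this keyword moved to position 1."""
    current_ctr = CTR_BY_POSITION.get(current_position, 0.0)
    return search_volume * (CTR_BY_POSITION[1] - current_ctr)

# Rankings already filtered to positions 11-20 (hypothetical data).
rankings = [
    {"keyword": "emergency plumber", "volume": 5400, "position": 12},
    {"keyword": "pipe fitting sizes", "volume": 9900, "position": 18},
    {"keyword": "water heater repair", "volume": 2900, "position": 11},
]

# Rank the list by estimated lift, highest to lowest.
ranked = sorted(rankings,
                key=lambda r: estimated_lift(r["volume"], r["position"]),
                reverse=True)

for r in ranked:
    print(r["keyword"], round(estimated_lift(r["volume"], r["position"])))
# pipe fitting sizes 2732
# emergency plumber 1463
# water heater repair 783
```

The business-fit filtering in the example above is the judgment layer on top of this ranking, and that’s exactly where the client context you feed the LLM earns its keep.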

Practical Ways To Work With AI

Here are some more actionable ways you can incorporate AI into your workflow effectively:

Research Smarter And Faster

  • Create a competitive matrix with links and pros/cons.
  • Summarize customer sentiment across reviews, highlighting recurring pain points.
  • Surface conflicting expert opinions to inform balanced thought leadership pieces.
  • Forecast upcoming trends based on chatter in niche forums and early adopters.

Build Better Briefs

  • Include competitive positioning suggestions in briefs, not just keywords.
  • Add tone-of-voice examples aligned to audience segments.
  • Incorporate real data sources and reference points to help writers anchor their copy.
  • Generate sample social captions to support a campaign.

Strengthen Your Messaging

  • Stress-test a headline by generating objections and counterpoints.
  • Rewrite complex product descriptions into benefit-driven language for different audiences.
  • Propose alternate positioning statements for product launches or rebrands.
  • Audit your FAQ section to make it more conversational and AI-friendly.

Repurpose And Expand Content

  • Turn webinar transcripts into ebooks, blog series, and email drips.
  • Extract key insights from research reports to create shareable social graphics.
  • Draft SEO-friendly meta descriptions and titles for old content.
  • Identify missed opportunities in evergreen content for updates or expansion.

AI can do so much more than just “help you ideate.” It can help you uncover blind spots, repurpose assets, and deepen your strategic thinking, but only when you stay in the driver’s seat to guide and refine the outputs.

Final Thought: You, And Only You, Are The Pilot

I think we tend to treat our collective relationship with AI the same way we look at religion – you’re either a believer or an atheist.

Some have complete faith and trust it without question, while others reject it entirely and are convinced there is nothing there to believe in. The truth is somewhere in the middle (as it often is).

AI can be a powerful, tireless, but imperfect partner. It can help carry and manage heavy mental loads, work with you to map out routes and decide on destinations, but it can not take responsibility for driving the car. That’s got to be on you.

Your free will – your ability to keep your hands on the wheel – is what ensures the journey ends where you intended. If you actually let go, you’re certainly going to crash. You’re asking for assistance, not a magical autopilot.

So, go ahead: Let AI ride shotgun, and keep your hands at ten and two, where they belong.

Featured Image: Rawpixel.com/Shutterstock

Why OpenAI’s Open Source Models Are A Big Deal via @sejournal, @martinibuster

OpenAI has released two new open-weight language models under the permissive Apache 2.0 license. These models are designed to deliver strong real-world performance while running on consumer hardware, including a model that can run on a high-end laptop with only 16 GB of GPU memory.

Real-World Performance at Lower Hardware Cost

The two models are:

  • gpt-oss-120b (117 billion parameters)
  • gpt-oss-20b (21 billion parameters)

The larger gpt-oss-120b model matches OpenAI’s o4-mini on reasoning benchmarks while requiring only a single 80 GB GPU. The smaller gpt-oss-20b model performs similarly to o3-mini and runs efficiently on devices with just 16 GB of GPU memory. This enables developers to run the models on consumer machines, making it easier to deploy without expensive infrastructure.

Advanced Reasoning, Tool Use, and Chain-of-Thought

OpenAI explains that the models outperform other open source models of similar sizes on reasoning tasks and tool use.

According to OpenAI:

“These models are compatible with our Responses API⁠(opens in a new window) and are designed to be used within agentic workflows with exceptional instruction following, tool use like web search or Python code execution, and reasoning capabilities—including the ability to adjust the reasoning effort for tasks that don’t require complex reasoning and/or target very low latency final outputs. They are entirely customizable, provide full chain-of-thought (CoT), and support Structured Outputs⁠(opens in a new window).”

Designed for Developer Flexibility and Integration

OpenAI has released developer guides to support integration with platforms like Hugging Face, GitHub, vLLM, Ollama, and llama.cpp. The models are compatible with OpenAI’s Responses API and support advanced instruction-following and reasoning behaviors. Developers can fine-tune the models and implement safety guardrails for custom applications.

Safety In Open-Weight AI Models

OpenAI approached their open-weight models with the goal of ensuring safety throughout both training and release. Testing confirmed that even under purposely malicious fine-tuning, gpt-oss-120b did not reach a dangerous level of capability in areas of biological, chemical, or cyber risk.

Chain of Thought Unfiltered

OpenAI is intentionally leaving chains of thought (CoTs) unfiltered during training to preserve their usefulness for monitoring, based on the concern that optimization could cause models to hide their real reasoning. This, however, can result in hallucinations.

According to their model card (PDF version):

“In our recent research, we found that monitoring a reasoning model’s chain of thought can be helpful for detecting misbehavior. We further found that models could learn to hide their thinking while still misbehaving if their CoTs were directly pressured against having ‘bad thoughts.’

More recently, we joined a position paper with a number of other labs arguing that frontier developers should ‘consider the impact of development decisions on CoT monitorability.’

In accord with these concerns, we decided not to put any direct optimization pressure on the CoT for either of our two open-weight models. We hope that this gives developers the opportunity to implement CoT monitoring systems in their projects and enables the research community to further study CoT monitorability.”

Impact On Hallucinations

The OpenAI documentation states that the decision not to restrict the chain of thought results in higher hallucination scores.

The PDF version of the model card explains why this happens:

“Because these chains of thought are not restricted, they can contain hallucinated content, including language that does not reflect OpenAI’s standard safety policies. Developers should not directly show chains of thought to users of their applications, without further filtering, moderation, or summarization of this type of content.”

Benchmarking showed that the two open-weight models underperformed OpenAI’s o4-mini on hallucination benchmarks. The model card PDF explains that this was to be expected because the new models are smaller, and notes that the models are expected to hallucinate less in agentic settings, where they can look up information on the web (as in RAG) or extract it from a database.

OpenAI OSS Hallucination Benchmarking Scores

Benchmarking scores showing that the open source models score lower than OpenAI o4-mini.

Takeaways

  • Open-Weight Release
    OpenAI released two open-weight models under the permissive Apache 2.0 license.
  • Performance Vs. Hardware Cost
    Models deliver strong reasoning performance while running on real-world affordable hardware, making them widely accessible.
  • Model Specs And Capabilities
    gpt-oss-120b matches o4-mini on reasoning and runs on 80GB GPU; gpt-oss-20b performs similarly to o3-mini on reasoning benchmarks and runs efficiently on 16GB GPU.
  • Agentic Workflow
    Both models support structured outputs, tool use (like Python and web search), and can scale their reasoning effort based on task complexity.
  • Customization and Integration
    The models are built to fit into agentic workflows and can be fully tailored to specific use cases. Their support for structured outputs makes them adaptable to complex software systems.
  • Tool Use and Function Calling
    The models can perform function calls and tool use with few-shot prompting, making them effective for automation tasks that require reasoning and adaptability.
  • Collaboration with Real-World Users
    OpenAI collaborated with partners such as AI Sweden, Orange, and Snowflake to explore practical uses of the models, including secure on-site deployment and custom fine-tuning on specialized datasets.
  • Inference Optimization
    The models use Mixture-of-Experts (MoE) to reduce compute load and grouped multi-query attention for inference and memory efficiency, making them easier to run at lower cost.
  • Safety
    OpenAI’s open source models maintain safety even under malicious fine-tuning; Chain of Thoughts (CoTs) are left unfiltered for transparency and monitorability.
  • CoT Transparency Tradeoff
    No optimization pressure applied to CoTs to prevent masking harmful reasoning; may result in hallucinations.
  • Hallucinations Benchmarks and Real-World Performance
    The models underperform o4-mini on hallucination benchmarks, which OpenAI attributes to their smaller size. However, in real-world applications where the models can look up information from the web or query external datasets, hallucinations are expected to be less frequent.

Featured Image by Shutterstock/Good dreams – Studio

The Great Reversal: Why Agencies Are Replacing PPC With Predictable SEO via @sejournal, @mktbrew

This post was sponsored by Market Brew. The opinions expressed in this article are the sponsor’s own.

What if your client’s PPC budget could fund long-term organic growth instead?

Why do organic results dominate user clicks, but get sidelined in budget discussions?

Organic Drives 5x More Traffic Than PPC. Can We Prove It?

The Short Answer: Yes!

Over the past decade, digital marketers have witnessed a dramatic shift in how search budgets are allocated.

Companies once funded SEO teams alongside PPC teams. Since then, a PPC-first approach has come to dominate the inbound marketing space.

Where Have SEO Budgets Gone?

Today, more than $150 billion is spent annually on paid search in the United States alone, while only $50 billion is invested in SEO.

That’s a 3-to-1 ratio, even though 90% of search clicks go to organic results, and only 10% to ads.

It’s not because paid search is more effective. Paid search is just easier to measure.

But that’s changing with the return of attribution within predictive SEO.

What Is Attribution?

Attribution in marketing is the process of identifying which touchpoints or channels contributed to a conversion or sale.

It helps us understand the customer journey so we can allocate budget more effectively and optimize campaigns for higher ROI.

As Google’s algorithms evolved, the cause-and-effect between SEO efforts and business outcomes became harder to prove.

Ranking fluctuations seemed random. Timelines stretched.

Clients became impatient.

Trackable Digital Marketing Has Destroyed SEO

With Google Ads, every dollar has a direct, reportable outcome:

  • Impressions.
  • Clicks.
  • Conversions.

SEO, by contrast, has long been:

  • A black box.

As a result, agencies and the clients that hire them followed the money, even when SEO delivered better returns.

PPC’s Direct Attribution Makes PPC Look More Important, But SEO Still Dominates

Hard facts:

  • SEO drives 5x more traffic than PPC.
  • Companies spend 3x more on PPC than SEO.
Image created by MarketBrew, August 2025

You Can Now Trace ROI Back To SEO

As a result, many SEO professionals and agencies want a way back to organic. Now, there is one, and it’s powered by attribution.

Attribution Is the Key to Measurable SEO Performance

Instead of sitting on the edge of the search engine’s black box, guessing what might happen, we can now go inside the SEO black box, to simulate how the algorithms behave, factor by factor, and observe exactly how rankings react to each change.

This is SEO with attribution.

Image created by MarketBrew, August 2025

With this model in place, you are no longer stuck saying “trust us.”

You can say, “Here’s what we changed. Here’s how rankings moved. Here’s the value of that movement.” Whether the change was a new internal link structure or a content improvement, it’s now visible, measurable, and attributable.

For the first time, SEO teams have a way to communicate performance in terms executives understand: cause, effect, and value.

This transparency is changing the way agencies operate. It turns SEO into a predictable system, not a gamble. And it arms client-facing teams with the evidence they need to justify the budget, or win it back.

How Agencies Are Replacing PPC With Measurable Organic SEO

For agencies, attribution opens the door to something much bigger than better reporting; it enables a completely new kind of offering: performance-based SEO.

Traditionally, SEO services have been sold as retainers or hourly engagements. Clients pay for effort, not outcomes. With attribution, agencies can now flip that model and say: You only pay when results happen.

Enter Market Brew’s AdShift feature, which models this value and success as shown here:

Screenshot from a video by MarketBrew, August 2025

The AdShift tool starts with a keyword and discovers the competitive set for that keyword’s top clustered similarities: your own website plus up to four top-ranking competitors.

Screenshot of PPC vs. MarketBrew comparison dashboard by Marketbrew, August 2025

AdShift averages CPC and search volume across all keywords and URLs, giving you a reliable market-wide estimate of the monthly PPC investment your brand would need to rank #1.

Screenshot of a dashboard by Marketbrew, August 2025

AdShift then calculates the percentage of your PPC spend that can be replaced to fund SEO.

This allows you to model your own Performance Plan, with variable discounts on Market Brew license fees, always priced at less than 50% of the PPC fee for the clicks replaced by new SEO traffic.

Screenshot of a dashboard by Marketbrew, August 2025

AdShift then simulates the selected PPC replacement plan option, based on the site’s keyword footprint, so you can instantly see the savings from the associated Performance Plans.

That’s the heart of the PPC replacement plan: a strategy you can use to gradually shift a client’s paid search budget into measurable, performance-based SEO.

What Is A PPC Replacement Plan? Trackable SEO.

A PPC replacement plan is a strategy in which agencies gradually shift their clients’ paid search budgets into organic investments, with measurable outcomes and shared performance incentives.

Here’s how it works:

  1. Benchmark Paid Spend: Identify the current Google Ads budget, e.g., $10,000 per month or $120,000 per year.
  2. Forecast Organic Value: Use search engine modeling to predict the lift in organic traffic from specific SEO tasks.
  3. Execute & Attribute: Complete tasks and monitor real-time changes in rankings and traffic.
  4. Charge on Impact: Instead of billing for time, bill for results, often at a fraction of the client’s former ad spend.
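The arithmetic behind steps 1 and 4 can be sketched as follows. The numbers, the 50% fee ratio, and the function itself are illustrative assumptions drawn from the article’s framing, not Market Brew’s actual pricing:

```python
def performance_fee(monthly_ppc_spend, replaced_click_share, fee_ratio=0.5):
    """Bill for results at a fraction of the ad spend the SEO clicks replace.

    A fee_ratio below 1.0 keeps the plan cheaper for the client than the
    PPC spend it displaces; 0.5 mirrors the "less than 50% of PPC fee"
    framing described above.
    """
    if not 0.0 <= replaced_click_share <= 1.0:
        raise ValueError("replaced_click_share must be between 0 and 1")
    replaced_spend = monthly_ppc_spend * replaced_click_share
    fee = replaced_spend * fee_ratio
    client_savings = replaced_spend - fee
    return fee, client_savings

# Step 1 benchmark: $10,000/month in Google Ads. Suppose the SEO work
# ends up replacing 40% of paid clicks with organic traffic.
fee, savings = performance_fee(10_000, 0.40)
print(f"Monthly fee: ${fee:,.0f}, client savings: ${savings:,.0f}")
```

In this hypothetical, the agency bills $2,000 for traffic that previously cost the client $4,000 in ad spend, which is the shared-incentive structure the model depends on.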

This is not about replacing all paid spend.

Branded queries and some high-value targets may remain in PPC. But for the large, expensive middle of the keyword funnel, agencies can now offer a smarter path: predictable, attributable organic results, at a lower cost-per-click, with better margins.

And most importantly, instead of lining Google’s pockets with PPC revenue, your investments begin to fuel both organic and LLM searches!

Real-World Proof That SEO Attribution Works

Agencies exploring this new attribution-powered model aren’t just intrigued … they’re energized. For many, it’s the first time in years that SEO feels like a strategic growth engine, not just a checklist of deliverables.

“We’ve pitched performance SEO to three clients this month alone. The ability to tie ranking improvements to specific tasks changed the entire conversation.”

Sean Myers, CEO, ThreeTech

Another partner shared,

“Instead of walking into meetings looking to justify an SEO retainer, we enter with a blueprint representing an SEO/GEO/AEO search engine ‘digital twin,’ with AI-driven tasks that show exactly what needs to change and the rankings it produces. Clients don’t question the value … they ask what’s next.”

Stephen Heitz, Chief Innovation Officer, LAVIDGE

Several agencies report that new business wins are increasing simply because they offer something different. While competitors stick to vague SEO promises or expensive PPC management, partners leveraging attribution offer clarity, accountability, and control.

And when the client sees that they’re paying less and getting more, it’s not a hard sell, it’s a long-term relationship.

A Smarter, More Profitable Model for Agencies and SEOs

The traditional agency model in search has become a maze of expectations.

Managing paid search may deliver short-term wins, but it comes down to a bidding war in which only those with the biggest budgets win. SEO, meanwhile, has often felt like a thankless task … necessary but underappreciated, valuable but difficult to prove.

Attribution changes that.

For agencies, this is a path back to profitability and positioning. With attribution, you’re not just selling effort … you’re selling outcomes. And because the work is modeled and measured in advance, you can confidently offer performance plans that are both client-friendly and agency-profitable.

For SEOs, this is about getting the credit they deserve. Attribution allows practitioners to demonstrate their impact in concrete terms. Rankings don’t just move … they move because of you. Traffic increases aren’t vague … they’re connected to your specific strategies.

Now, you can show this.

Most importantly, this approach rebuilds trust.

Clients no longer have to guess what’s working. They see it. In dashboards, in forecasts, in side-by-side comparisons of where they were and where they are now. It restores SEO to a place of clarity and control where value is obvious, and investment is earned.

The industry has been waiting for this. And now, it’s here.

From PPC Dependence to Organic Dominance — Now Backed by Data

Search budgets have long been upside down, pouring billions into paid clicks that capture a mere fraction of user attention, while underfunding the organic channel that delivers lasting value.

Why? Because SEO lacked attribution.

That’s no longer the case.

Today, agencies and SEO professionals have the tools to prove what works, forecast what’s next, and get paid for the real value they deliver. It’s a shift that empowers agencies to move beyond bidding-war PPC management and into lower-cost, higher-ROAS, performance-based SEO.

This isn’t just a new service model; it’s a rebalancing of power in search.

Organic is back. It’s measurable. It’s profitable. And it’s ready to take center stage again.

The only question is: Will you be the agency or brand that leads the shift, or will you watch as others do it first?

Image Credits

Featured Image: Image by Market Brew. Used with permission.

In-Post Image: Images by Market Brew. Used with permission.

A glimpse into OpenAI’s largest ambitions

OpenAI has given itself a dual mandate. On the one hand, it’s a tech giant rooted in products, including of course ChatGPT, which people around the world reportedly send 2.5 billion requests to each day. But its original mission is to serve as a research lab that will not only create “artificial general intelligence” but ensure that it benefits all of humanity. 

My colleague Will Douglas Heaven recently sat down for an exclusive conversation with the two figures at OpenAI most responsible for pursuing the latter ambitions: chief research officer Mark Chen and chief scientist Jakub Pachocki. If you haven’t already, you must read his piece.

It provides a rare glimpse into how the company thinks beyond marginal improvements to chatbots and contemplates the biggest unknowns in AI: whether it could someday reason like a human, whether it should, and how tech companies conceptualize the societal implications. 

The whole story is worth reading for all it reveals—about how OpenAI thinks about the safety of its products, what AGI actually means, and more—but here’s one thing that stood out to me. 

As Will points out, there were two recent wins for OpenAI in its efforts to build AI that outcompetes humans. Its models took second place at a top-level coding competition and—alongside those from Google DeepMind—achieved gold-medal-level results in the 2025 International Math Olympiad.

People who believe that AI doesn’t pose genuine competition to human-level intelligence might actually take some comfort in that. AI is good at the mathematical and analytical, which are on full display in olympiads and coding competitions. That doesn’t mean it’s any good at grappling with the messiness of human emotions, making hard decisions, or creating art that resonates with anyone.

But that distinction—between machine-like reasoning and the ability to think creatively—is not one OpenAI’s heads of research are inclined to make. 

“We’re talking about programming and math here,” said Pachocki. “But it’s really about creativity, coming up with novel ideas, connecting ideas from different places.”

That’s why, the researchers say, these testing grounds for AI will produce models that have an increasing ability to reason like a person, one of the most important goals OpenAI is working toward. Reasoning models break problems down into more discrete steps, but even the best have limited ability to chain pieces of information together and approach problems logically. 

OpenAI is throwing a massive amount of money and talent at that problem not because its researchers think it will result in higher scores at math contests, but because they believe it will allow their AI models to come closer to human intelligence. 

As Will recalls in the piece, he said he thought maybe it’s fine for AI to excel at math and coding, but the idea of having an AI acquire people skills and replace politicians is perhaps not. Chen pulled a face and looked up at the ceiling: “Why not?”

Read the full story from Will Douglas Heaven.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The Download: AI agent infrastructure, and OpenAI’s ambitions

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

These protocols will help AI agents navigate our messy lives

A growing number of companies are launching AI agents that can do things on your behalf—actions like sending an email, making a document, or editing a database. Initial reviews for these agents have been mixed at best, though, because they struggle to interact with all the different components of our digital lives.

Anthropic and Google are among the companies and groups working to fix that. Over the past year, they have both introduced protocols that try to define how AI agents should interact with each other and the world around them. If they work as planned, they could give us a crucial part of the infrastructure we need for agents to be useful. Read our story to learn more.

—Peter Hall

A glimpse into OpenAI’s largest ambitions

—James O’Donnell

OpenAI has given itself a dual mandate: on the one hand, it’s a tech giant rooted in products, including of course ChatGPT, which people around the world reportedly send 2.5 billion messages to each day. But its original mission is as a research lab that will not only create “artificial general intelligence” but ensure that it benefits all of humanity. 

My colleague Will Douglas Heaven recently sat down for an exclusive conversation with the two figures at OpenAI most responsible for the latter ambitions. The whole story is worth reading for all it reveals—about how OpenAI thinks about the safety of its products, what AGI actually means, and more—but here’s one thing that stood out to me.

This story is from The Algorithm, our weekly newsletter all about the latest goings-on in AI. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is adding mental health guardrails to ChatGPT
It’s set to give less direct advice, and encourage users to take breaks from lengthy chats. (NBC)
+ What happens when doctors fail to spot AI’s mistakes? (The Verge)
+ OpenAI has released its first research into how using ChatGPT affects people’s emotional well-being. (MIT Technology Review)

2 The US wants to build a nuclear reactor on the moon
And it hopes to do that before Russia and China, who are planning to do exactly the same. (Politico)
+ NASA’s latest mission to the moon just failed. (Engadget)
+ Nokia is putting the first cellular network on the moon. (MIT Technology Review)

3 How to live forever (or at least get rich trying) 👴🤑
Love them or hate them, the people behind the explosion in longevity research are a fascinating bunch. (New Yorker $)
+ Longevity clinics around the world are selling unproven treatments. (MIT Technology Review)

4 Welcome to Silicon Valley’s ‘hard tech’ era
Goodbye, consumer software. Hello, massive military contracts. (NYT $)
+ Phase two of military AI has arrived. (MIT Technology Review)

5 There’s a big problem with the Gulf’s trillion-dollar AI dream
Building data centers in a region that already has water scarcity issues seems…unwise. (Rest of World)
+ There’s a data center boom in the US desert too. (MIT Technology Review)
+ Google has promised to scale back its energy usage during certain times to reduce stress on the grid. (Quartz $)

6 Tesla’s board awarded about $30 billion of shares to Elon Musk
“Retaining Elon is more important than ever before,” they wrote in a letter to shareholders yesterday. (FT $)
+ Tech CEOs’ pay packets are reaching stratospheric new records. (WSJ $)

7 What happens if you respond to those scam job texts?
You get exploited, obviously—but you’d be surprised just how weird it can get along the way. (Slate)

8 Why there’s so much uproar over Vogue’s AI-generated ad
It’s the latest flashpoint in the war over when AI should (and shouldn’t) be used. (TechCrunch)

9 Earth’s core seems to be leaking up through Earth’s surface 🌋
It’s a finding that’s forcing geoscientists to rethink some long-held assumptions. (Quanta $)
+ How a volcanic eruption turned a human brain into glass. (MIT Technology Review)

10 Could lasers help us see inside people’s heads?
It seems possible, but big hurdles remain to this new method being adopted in clinical settings. (IEEE Spectrum)

Quote of the day

 “Hate it! Don’t want anything to do with it.”

—Weezy Simes, a 27-year-old florist, sums up her feelings about AI to Business Insider.

One more thing


ANDREA D’AQUINO

What happened to the microfinance organization Kiva?

Since it was founded in 2005, the San Francisco-based nonprofit Kiva has helped everyday people make microloans to borrowers around the world. It connects lenders in richer communities to fund all sorts of entrepreneurs, from bakers in Mexico to farmers in Albania. Its overarching aim is helping poor people help themselves.

But back in August 2021, Kiva lenders started to notice that information that felt essential in deciding whom to lend to was suddenly harder to find. Lenders now worry that the organization seems more focused on how to make money than how to create change. Read the full story.

—Mara Kardas-Nelson

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I want this guy to draw my portrait. 
+ Highly recommend making these lemongrass chicken lettuce wraps. So tasty and easy!
+ This encyclopedia teaches you about ancient gods and forgotten deities from around the world.
+ Some of the architecture in Iran looks breathtakingly beautiful.

OpenAI has finally released open-weight language models

OpenAI has finally released its first open-weight large language models since 2019’s GPT-2. These new “gpt-oss” models are available in two different sizes and score similarly to the company’s o3-mini and o4-mini models on several benchmarks. Unlike the models available through OpenAI’s web interface, these new open models can be freely downloaded, run, and even modified on laptops and other local devices.

In the company’s many years without an open LLM release, some users have taken to referring to it with the pejorative “ClosedAI.” That sense of frustration had escalated in the past few months as these long-awaited models were delayed twice—first in June and then in July. With their release, however, OpenAI is reestablishing itself as a presence for users of open models.

That’s particularly notable at a time when Meta, which had previously dominated the American open-model landscape with its Llama models, may be reorienting toward closed releases—and when Chinese open models, such as DeepSeek’s offerings, Kimi K2, and Alibaba’s Qwen series, are becoming more popular than their American competitors.

“The vast majority of our [enterprise and startup] customers are already using a lot of open models,” said Casey Dvorak, a research program manager at OpenAI, in a media briefing about the model release. “Because there is no [competitive] open model from OpenAI, we wanted to plug that gap and actually allow them to use our technology across the board.”

The new models come in two different sizes, the smaller of which can theoretically run on 16 GB of RAM—the minimum amount that Apple currently offers on its computers. The larger model requires a high-end laptop or specialized hardware.

Open models have a few key use cases. Some organizations may want to customize models for their own purposes or save money by running models on their own equipment, though that equipment comes at a substantial upfront cost. Others—such as hospitals, law firms, and governments—might need models that they can run locally for data security reasons.

OpenAI has facilitated such activity by releasing its open models under a permissive Apache 2.0 license, which allows the models to be used for commercial purposes. Nathan Lambert, post-training lead at the Allen Institute for AI, says that this choice is commendable: Such licenses are typical for Chinese open-model releases, but Meta released its Llama models under a bespoke, more restrictive license. “It’s a very good thing for the open community,” he says.

Researchers who study how LLMs work also need open models, so that they can examine and manipulate those models in detail. “In part, this is about reasserting OpenAI’s dominance in the research ecosystem,” says Peter Henderson, an assistant professor at Princeton University who has worked extensively with open models. If researchers do adopt gpt-oss as new workhorses, OpenAI could see some concrete benefits, Henderson says—it might adopt innovations discovered by other researchers into its own model ecosystem.

More broadly, Lambert says, releasing an open model now could help OpenAI reestablish its status in an increasingly crowded AI environment. “It kind of goes back to years ago, where they were seen as the AI company,” he says. Users who want to use open models will now have the option to meet all their needs with OpenAI products, rather than turning to Meta’s Llama or Alibaba’s Qwen when they need to run something locally.

The rise of Chinese open models like Qwen over the past year may have been a particularly salient factor in OpenAI’s calculus. An employee from OpenAI emphasized at the media briefing that the company doesn’t see these open models as a response to actions taken by any other AI company, but OpenAI is clearly attuned to the geopolitical implications of China’s open-model dominance. “Broad access to these capable open-weights models created in the US helps expand democratic AI rails,” the company wrote in a blog post announcing the models’ release.

Since DeepSeek exploded onto the AI scene at the start of 2025, observers have noted that Chinese models often refuse to speak about topics that the Chinese Communist Party has deemed verboten, such as Tiananmen Square. Such observations—as well as longer-term risks, like the possibility that agentic models could purposefully write vulnerable code—have made some AI experts concerned about the growing adoption of Chinese models. “Open models are a form of soft power,” Henderson says.

Lambert released a report on Monday documenting how Chinese models are overtaking American offerings like Llama and advocating for a renewed commitment to domestic open models. Several prominent AI researchers and entrepreneurs, such as Hugging Face CEO Clément Delangue, Stanford’s Percy Liang, and former OpenAI researcher Miles Brundage, have signed on.

The Trump administration, too, has emphasized development of open models in its AI Action Plan. With both this model release and previous statements, OpenAI is aligning itself with that stance. “In their filings about the action plan, [OpenAI] pretty clearly indicated that they see US–China as a key issue and want to position themselves as very important to the US system,” says Rishi Bommasani, a senior research scholar at the Stanford Institute for Human-Centered Artificial Intelligence. 

And OpenAI may see concrete political advantages from aligning with the administration’s AI priorities, Lambert says. As the company continues to build out its extensive computational infrastructure, it will need political support and approvals, and sympathetic leadership could go a long way.

AI Crawler Optimization Tips

Generative AI platforms such as ChatGPT, Perplexity, and Claude now run live web searches for many prompts. Ensuring a site is crawlable by AI bots is therefore essential for earning mentions and citations on those platforms.

Here’s how to optimize a website for AI crawlers.

Disable JavaScript

Make sure your pages are readable with JavaScript disabled.

Unlike Google’s crawler, AI bots are relatively immature. Tests by industry practitioners confirm that AI crawlers cannot always render JavaScript.

Most publishers and businesses no longer worry about JavaScript crawlability because Google has rendered JavaScript-driven pages for years. As a result, there is a huge number of JavaScript-heavy sites.

The Chrome browser can display a site with JavaScript disabled. To set this up:

  • Go to your site using Chrome.
  • Open Web Developer tools at View > Developer > Developer Tools.
  • Click Settings (behind the gear icon) on the right side of the panel.
  • Scroll down and check the option “Disable JavaScript” under “Debugger.”

Disable JavaScript in Chrome’s Developer Tools panel.

Now browse your site, making sure:

  • All essential content is visible, especially behind tabs and drop-down menus.
  • The navigation menu and other links are clickable.
  • For video embeds, there’s an option to click to the original video, access a transcript, or both.
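For sites with many templates, this spot check can be partly automated. The sketch below is illustrative, not a specific tool's API: it fetches the server-delivered HTML (no JavaScript is executed, mimicking what a non-rendering AI crawler sees) and reports which must-have phrases are missing. The URL, page markup, and phrases are placeholders.

```python
# Spot-check whether essential content appears in raw server HTML,
# i.e., without any JavaScript execution (as a non-rendering crawler sees it).
import urllib.request

def content_visible_without_js(html: str, required_phrases: list[str]) -> list[str]:
    """Return the phrases missing from the raw HTML (empty list = all visible)."""
    return [p for p in required_phrases if p not in html]

def fetch_raw_html(url: str) -> str:
    """Fetch server-delivered HTML; no JavaScript runs in this request."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Example with inline HTML standing in for a fetched page:
page = "<html><body><h1>Kayak buying guide</h1><p>Pricing: $499</p></body></html>"
missing = content_visible_without_js(page, ["Kayak buying guide", "Pricing", "Reviews"])
# "Reviews" is only injected client-side on this hypothetical page, so it is reported missing.
```

In real use you would call `fetch_raw_html` on each key template URL and alert on any non-empty result.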

You can use Aiso, an AI optimization platform, to ensure AI bots can access and crawl your site. With a free trial, the platform allows a few free checks. Go to the “Website crawlability” section and enter your URL.

The tool will conduct a thorough review with suggestions on improving access for AI crawlers and even show the appearance of pages with JavaScript disabled.

Aiso can review a site’s use of JavaScript and suggest improvements for AI bot access.

Ensure AI Access

Make sure your site allows access for AI bots. Some content management platforms and plugins disallow AI access by default — site owners are often unaware.

To check, review your robots.txt file at [yoursite.com]/robots.txt.

The AI platforms themselves can interpret the file to ensure it allows access. Paste your robots.txt URL into a ChatGPT prompt, for example, and request an analysis.
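For reference, here is a minimal robots.txt that explicitly allows several widely used AI crawlers. The user-agent tokens below (GPTBot, ClaudeBot, PerplexityBot) are the names those vendors publish, but verify them against each vendor's current documentation before deploying:

```
# Allow common AI crawlers (user-agent names as published by the vendors)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rule for all other bots
User-agent: *
Allow: /
```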

Structured Data

Structured data markup, such as from Schema.org, can also help ensure visibility.

Schema markup makes it easier for AI bots to extract essential information from a page (or bypass a block) without crawling it in full.

For example, many website FAQ sections have collapsible elements that AI bots cannot access. Schema’s FAQPage type replicates all questions and answers, making them visible to bots.

Similarly, Schema’s Article Type can communicate context and authorship of content.
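As an illustration, here is a minimal FAQPage JSON-LD block of the kind Schema.org defines; the question and answer text are placeholders, and a real page would list one `Question` entry per FAQ item:

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Do AI crawlers render JavaScript?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Not reliably. Keep essential content in the server-delivered HTML."
    }
  }]
}
</script>
```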

Claude Opus 4.1 Improves Coding & Agent Capabilities via @sejournal, @MattGSouthern

Anthropic has released Claude Opus 4.1, an upgrade to its flagship model that’s said to deliver better performance in coding, reasoning, and autonomous task handling.

The new model is available now to Claude Pro users, Claude Code subscribers, and developers using the API, Amazon Bedrock, or Google Cloud’s Vertex AI.

Performance Gains

Claude Opus 4.1 scores 74.5% on SWE-bench Verified, a benchmark for real-world coding problems, and is positioned as a drop-in replacement for Opus 4.

The model shows notable improvements in multi-file code refactoring and debugging, particularly in large codebases. According to GitHub and enterprise feedback cited by Anthropic, it outperforms Opus 4 in most coding tasks.

Rakuten’s engineering team reports that Claude 4.1 precisely identifies code fixes without introducing unnecessary changes. Windsurf, a developer platform, measured a one standard deviation performance gain compared to Opus 4, comparable to the leap from Claude Sonnet 3.7 to Sonnet 4.

Expanded Use Cases

Anthropic describes Claude 4.1 as a hybrid reasoning model designed to handle both instant outputs and extended thinking. Developers can fine-tune “thinking budgets” via the API to balance cost and performance.

Key use cases include:

  • AI Agents: Strong results on TAU-bench and long-horizon tasks make the model suitable for autonomous workflows and enterprise automation.
  • Advanced Coding: With support for 32,000 output tokens, Claude 4.1 handles complex refactoring and multi-step generation while adapting to coding style and context.
  • Data Analysis: The model can synthesize insights from large volumes of structured and unstructured data, such as patent filings and research papers.
  • Content Generation: Claude 4.1 generates more natural writing and richer prose than previous versions, with better structure and tone.
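As a minimal sketch of what tuning a thinking budget might look like, the helper below assembles request parameters in the shape of Anthropic's Messages API extended-thinking option. The model identifier, token numbers, and helper name are illustrative; check Anthropic's current API documentation for the exact parameter shape.

```python
# Hypothetical helper that assembles Messages API kwargs with an
# extended-thinking budget. Model id and token numbers are illustrative.
def build_message_request(prompt: str, budget_tokens: int) -> dict:
    return {
        "model": "claude-opus-4-1",  # illustrative model identifier
        "max_tokens": 4096,
        # Extended thinking: cap how many tokens the model may spend reasoning
        # before it produces its final answer (cost vs. quality trade-off).
        "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_message_request("Refactor this function for readability.", 8000)
# In real use, these kwargs would be passed to client.messages.create(**req)
# via the anthropic SDK.
```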

Safety Improvements

Claude 4.1 continues to operate under Anthropic’s AI Safety Level 3 standard. Although the upgrade is considered incremental, the company voluntarily ran safety evaluations to ensure performance stayed within acceptable risk boundaries.

  • Harmlessness: The model refused policy-violating requests 98.76% of the time, up from 97.27% with Opus 4.
  • Over-refusal: On benign requests, the refusal rate remains low at 0.08%.
  • Bias and Child Safety: Evaluations found no significant regression in political bias, discriminatory behavior, or child safety responses.

Anthropic also tested the model’s resistance to prompt injection and agent misuse. Results showed comparable or improved behavior over Opus 4, with additional training and safeguards in place to mitigate edge cases.

Looking Ahead

Anthropic says larger upgrades are on the horizon, with Claude 4.1 positioned as a stability-focused release ahead of future leaps.

For teams already using Claude Opus 4, the upgrade path is seamless, with no changes to API structure or pricing.


Featured Image: Ahyan Stock Studios/Shutterstock

The Future Of Search: 5 Key Findings On What Buyers Really Want via @sejournal, @MattGSouthern

Search is changing, and not just because of Google updates.

Buyers are changing how they find, evaluate, and decide. They are researching in AI summaries, asking questions out loud to their phones, and converting through conversations that happen outside of what most analytics can track.

Our latest ebook, “The Future Of Search: 16 Actionable Pivots That Improve Visibility & Conversions,” explores how marketers are responding to this shift.

It offers a closer look at what it means to optimize for visibility, engagement, and results in a fragmented, AI-influenced search landscape.

Here are five key takeaways.

1. Ranking Well Doesn’t Guarantee Visibility

Getting to the top of search results used to be enough. Today, that’s no longer the case.

AI summaries, voice assistants, and platform-native answers often intercept the buyer before they reach your website.

Even high-ranking content can go unseen if it’s not structured in a way that’s easily digestible by large language models.

For example, research shows AI-generated summaries often prioritize single-sentence answers and structured formats like tables and lists.

Only a small fraction of AI citations rely on exact-match keywords, reinforcing that clarity and context are now more important than repetition.

To stay visible, businesses need to consider how their content is interpreted across multiple AI systems, not just traditional SERPs.

2. Many Conversions Happen Offscreen

Clicks and page views only tell part of the story.

High-intent actions like phone calls, text messages, and offline conversations are often left out of attribution models, yet they play a critical role in decision-making.

These touchpoints are especially common in service-based industries and B2B scenarios where buyers want real interaction.

One case study reveals that a company discovered nearly 90% of its Yelp conversions came through phone calls it wasn’t tracking. Another saw appointment bookings spike after attributing organic search traffic to calls rather than clicks.

Our ebook refers to this as the insight gap, and highlights how conversation tracking helps marketers close it.

3. Listening Is More Effective Than Guessing

Marketers have access to more customer input than ever, but much of it goes unused.

Call transcripts, support calls, and chat logs contain the language buyers actually use.

Teams that analyze these conversations are gaining an edge, using real voice-of-customer insights to refine messaging, improve landing pages, and inform campaign strategy.

In one example, a marketing agency increased qualified leads by 67% simply by identifying the specific terminology customers used when asking about their services.

The shift from assumptions to evidence is helping brands prioritize what matters most, and it’s making their campaigns more effective.

4. Paid Search Works Better When It Aligns With Everything Else

Search behavior is not linear, and neither is the buyer journey.

Users often move between organic results, paid ads, and AI-generated suggestions in the same session. The strongest-performing campaigns tend to be the ones that echo the same language and value props across all these touchpoints.

That includes aligning ad copy with real customer concerns, drawing from call transcripts, and building landing pages that reflect the buyer’s stage in the decision process.

It also means rethinking what happens after the click.

5. Attribution Models Are Out Of Step With Reality

Most attribution still assumes that conversions happen on a single screen. That’s rarely true.

A manager might discover your brand in an AI-generated search snippet on a desktop, send the link to themselves in Slack, and later call your sales team from their iPhone after revisiting the content on mobile.

Marketers relying only on last-click attribution may be optimizing based on incomplete or misleading data.

The report makes the case for models that include multi-touch, cross-device, and offline activity to give a fuller picture of what drives conversions.

This isn’t about tracking more for the sake of it. It’s about making smarter decisions with the signals that matter.

Rethinking Search Starts With Rethinking Buyers

The ebook, written in collaboration with CallRail, offers more than strategy updates. It is a reminder that behind every metric is a person making a decision.

Marketers who succeed in this new environment aren’t just optimizing for rankings or clicks. They are optimizing for how people think, search, and take action.

Download the full report to explore how buyer behavior is reshaping search strategy.



Featured Image: innni/Shutterstock

Google Ecommerce SERP Features 2025 Vs. 2024 via @sejournal, @Kevin_Indig

In 2024, Google turned the SERP into a storefront.

In 2025, it turned it into a marketplace with an AI-based mind of its own.

Over the past 12 months, Google has layered AI into nearly every inch of the shopping search experience by merging organic results with product listings, rolling out AI Overviews that replace traditional product grids, and introducing a full-screen “AI Mode.”

Meanwhile, ChatGPT is inching closer to becoming a personalized shopping assistant, but for now, the most dramatic shifts for SEOs are still happening inside Google.

To understand the impact, I revisited a set of 35,000+ U.S. shopping queries I first analyzed in July 2024.

In today’s Memo, I’m breaking down the state of Google Shopping SERPs in 2025. A year later, the landscape looks … different:

  • AI Overviews have started to displace classic ecommerce SERP features.
  • Image packs dominate the page.
  • Discussion forums are on the decline.

Plus, an exclusive comparison of 2024 vs. 2025 ecommerce SERP features and a full, detailed checklist of optimizations for the SERP features that matter most today (available to premium subscribers; I show you exactly how I do this).

This memo breaks down exactly what’s changed in Google’s shopping SERPs over the past year. Let’s goooooo.

Boost your skills with Growth Memo’s weekly expert insights. Subscribe for free!

In the last 12 months, Google hasn’t just transformed itself into a publisher that serves up content to answer queries right in the SERP (via AI Overviews and AI Mode). It’s also built out an extensive marketplace for shopping queries.

However, Google now provides a whole slew of SERP features and AI features for ecommerce queries that are at least as impactful as AIOs and AI Mode.

Meanwhile, ChatGPT & Co. are starting to include product recommendations with links, reviews, buy buttons, and recommendations directly in the chat. (But this analysis focuses on Google results only.)

To better understand the key trends for Google shopping queries, in July 2024 I used seoClarity to analyze 35,305 keywords across U.S. product categories like fashion, beds, plants, and automotive over the preceding five months.

We’re revisiting that data today, examining those same keywords and categories for July 2025.

The results:

  1. AI Overviews have started to replace product grids.
  2. Ecommerce SERPs are increasingly visual.
  3. There are more question-related SERP features (like People Also Ask), less UGC.
  4. Fewer videos are appearing across the SERPs for product-related searches.

About the data:

  • This data specifically covers Google search results and features. It doesn’t include ChatGPT, Perplexity, etc. However, we’ll touch on this briefly below.
  • Over 35,000 search queries were analyzed, and the same group was examined in both July 2024 and July 2025.
  • The search queries analyzed include product-related queries across a broad spectrum, from brand terms (like Walmart) to individual products (iPads) and categories (e-bikes).
  • If you’re curious about the exact list of Google shopping SERP features included in this analysis, they’re included at the bottom of this memo.

Before we dig into the findings…

In Google’s shift from search engine to ecommerce marketplace (and from search engine to publisher), Google has merged as much as possible into the SERP page.

Web results and the shopping tab for shopping searches were combined as a response to Amazon’s long-standing dominance.

The shopping tab still exists, sure.

But for product-related searches, the main search page and the Google shopping experience look incredibly similar, with the Shopping tab streamlined to a product-grid experience only.

In June 2024, I reported in Critical SERP Features of Google’s shopping marketplace:

  • Google has fully transitioned into a shopping marketplace by adding product filters to search result pages and implementing a direct checkout option.
  • These new features create an ecommerce search experience within Google Search and may significantly impact the organic traffic merchants and retailers rely on.
  • Google has quietly introduced a direct checkout feature that allows merchants to link free listings directly to their checkout pages.
  • Google’s move to a shopping marketplace was likely driven by the need to compete with Amazon’s successful advertising business.
  • Google faces the challenge of balancing its role as a search engine with the need to generate revenue through its shopping marketplace, especially considering its dependence on partners for logistics.

And now?

Google’s layered AI and personalized SERP features into the shopping experience as well.

Below are the Google SERP features I’ll be examining in this year-over-year (YoY) analysis, specifically, with a quick synopsis if you’re not familiar.

  • Images: A horizontal carousel of image results related to the query pulled from product pages or image-rich content; usually appear at the top or mid-page and link to Google Images or directly to source pages.
  • Products: Displays a visual grid or carousel of products with titles, images, prices, reviews, and merchants. This includes free product listings (organic) and Product Listing Ads (PLAs) (paid).
  • People Also Ask (PAA): Related questions users frequently ask. Clicking a question reveals a source link. (These often inform Google’s understanding of search intent and user curiosity.)
  • Things To Know: An AI-driven feature that breaks a topic into subtopics and frequently misunderstood concepts. Found mostly on broad, educational, or commercial-intent queries, this is Google’s way of guiding users deeper into a topic and understanding deeper search intent.
  • Discussion and Forums: Highlights relevant threads from platforms like Reddit, Quora, and niche forums. Answers are often community-generated and authentic. Replaced some traditional “People Also Ask” real estate for shopping or reviews queries.
  • Knowledge Graph: Displays structured facts about a person, brand, product, or topic-sourced from trusted databases. Appears in a right-hand sidebar or embedded box.
  • Buying Guide: A feature that explains what to consider when shopping for a product, e.g., “What to look for in a DSLR camera.” Usually placed mid-page for commerce-intent queries. It mimics a human assistant or product expert’s advice. Contains snippets and links to sources.
  • Local Listing: Shows local business listings with map, ratings, hours, and quick call/location links. Prominent in searches with local intent like “shoe store near me” or “coffee shops in Detroit.”
  • AI Overview: Generative AI summary at the top of the SERP that answers the query using information synthesized from multiple sources. For shopping queries, it often includes product summaries.
  • Video: A carousel or block of video content, mostly from YouTube, but also from other video-hosting platforms. May include timestamps, captions, or “key moments” for long videos.
  • Answer Box (a.k.a. Featured Snippet): A direct answer to a query extracted from a single web page, shown at the top of the SERP in a stylized box. Often used for factual or how-to queries. Includes the source link.
  • Free Product Listings: Organic product results submitted via Google Merchant Center feeds. These listings show in the Shopping tab and occasionally in the main SERP product grid (distinct from paid Shopping ads).
  • From sources across the web: A content block showing opinions or quotes on a product or topic from a variety of sites. Often used in AI Overviews or product reviews to surface aggregated user sentiment or editorial input.
  • FAQ: An expandable schema-driven block showing common questions and answers sourced from a specific page. Typically appears under a site’s organic result when FAQ schema is properly implemented.
  • PPC: Sponsored links shown at the top or bottom of the SERP, marked “Sponsored” or “Ad.” These can show up as text, product images/grids, etc.

In addition to the standard SERP features tracked in this analysis via the above list, here’s a look at the current Google shopping marketplace SERP features and/or elements (like toggle filters) that we’re dealing with at the halfway point of 2025.

  • AI Mode (Full-Screen): Interactive, immersive full-page AI shopping experience with filters and buy links.
  • Shopping filters inline: Dynamic filters (brand, color, price) within AI Mode and Shopping grids.
  • Virtual try-on: This feature was recently released. It’s a generative AI module showing clothes on diverse body types (expanding by category).
  • Price tracking/alerts: Users can track price drops and get alerts via Gmail or Chrome. Honestly, a pretty great tool.
  • Popular stores/top stores: Scrollable carousel of prominent retailers for the product category.
  • Product sites (EU market): Organic feature that shows prominent ecommerce domains (due to regulatory changes in the EU).
  • Trending products/popular products: Highlights products rising in popularity based on recent search activity.
  • Merchant star ratings: Display review scores and counts in summaries or tiles.
  • Free shipping/returns labels: Highlighted callouts in product tiles.
  • “Verified by Google” merchant badges: Google-trusted seller icon in some listings.
  • Quick comparison panels: Side-by-side spec or feature comparisons (this is an early-stage rollout, similar to Amazon’s product comparison panel or module).

To illustrate with an example, let’s say you are looking for kayaks (summertime!).

On desktop (logged-in), Google will now show you product filters on the left sidebar and “Popular products” carousels in the middle on top of classic organic results, but under ads, of course.

Image Credit: Kevin Indig

Directly under the shopping product grids, you have traditional organic results along with an on-SERP Buying Guide, similar to People Also Ask questions (which is also included further down the page).

Both the Buying Guide and People Also Ask features deliver answers with links to original content.

Image Credit: Kevin Indig

On mobile, you get product filters at the top, ads above organic results, and product carousels in the form of Popular products or “Products for you.”

Image Credit: Kevin Indig

This experience doesn’t look very different from Amazon … which is the whole point.

Image Credit: Kevin Indig

Google’s shopping experience lets users explore products on a variety of marketplaces, like Amazon, Walmart, eBay, Etsy, & Co.

From an SEO perspective, the prominent position of product grid (listings) and filters likely significantly impacts CTR, organic visibility, and ultimately, revenue.

But let’s take a look at the same search via AI Mode.

Below is the desktop experience via Chrome.

I’ve zoomed out here so you get the whole view, but it takes the user two to three scrolls to get to the product grid when in a standard view.

Image Credit: Kevin Indig

Here on mobile, getting to product recommendations takes several scrolls. In one instance, I received a result that included a list of places near me in my city where I could get a kayak.

Image Credit: Kevin Indig

Keeping the current Google shopping SERP experience in mind, here’s what the data shows.

This is the most noteworthy shift found in the data, as you can probably guess.

Since March 2025, when Google began rolling out AI Overviews more aggressively, those AI Overviews have also started replacing (organic) product grids.

Image Credit: Kevin Indig

The graph above might look like it represents minimal changes when you examine it in a timeline view, but you can see the trend even better when moving AIOs to a second y-axis (below).

Image Credit: Kevin Indig

I expect AI Overviews to still show the product grids searchers have become accustomed to, although they might take a different form.

When searching for [which camera tripod should I buy?], for example, we find an AI Overview at the top with specific product recommendations.

Image Credit: Kevin Indig

Of course, AI Mode takes that a step further with richer product recommendations and buying guides.

(Shoutout to The New York Times and the other five sources for this AI Mode answer … which now see neither an ad impression nor an affiliate click.)

Image Credit: Kevin Indig

As a result of this shift, which I predict will only increase over time, tracking your brand mentions and product links in AI Overviews becomes critical. Skip this at your own risk.
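
If you want to operationalize that tracking, a minimal sketch might look like the following. The snapshot format, brand name, and domain below are all hypothetical – map them to whatever your rank tracker or SERP API actually exports:

```python
# Minimal sketch: flag which tracked queries mention your brand, or cite
# your domain, in their AI Overview. The record format is hypothetical --
# adapt it to your rank tracker's or SERP API's export.

BRAND = "acme kayaks"            # hypothetical brand
DOMAIN = "acmekayaks.example"    # hypothetical domain

# Hypothetical export: one record per tracked query, with the AI Overview
# text and the cited source URLs.
snapshots = [
    {
        "query": "best kayaks for beginners",
        "aio_text": "Acme Kayaks and other brands offer stable sit-on-top models...",
        "aio_sources": ["https://acmekayaks.example/guides/beginner-kayaks"],
    },
    {
        "query": "which camera tripod should i buy",
        "aio_text": "Look for aluminum or carbon fiber legs and a sturdy head...",
        "aio_sources": ["https://www.nytimes.com/wirecutter/reviews/best-tripod/"],
    },
]

def audit(snapshots, brand, domain):
    """Return per-query flags: brand mentioned in AIO text, domain cited as a source."""
    results = []
    for snap in snapshots:
        mentioned = brand.lower() in snap["aio_text"].lower()
        cited = any(domain in url for url in snap["aio_sources"])
        results.append({"query": snap["query"], "mentioned": mentioned, "cited": cited})
    return results

for row in audit(snapshots, BRAND, DOMAIN):
    print(row)
```

Run this against each snapshot date and you get a simple time series of how often your brand shows up in (or gets cited by) AI Overviews.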

Here, you’ll see the increase in image packs over time, with a big shift in March 2025.

Image packs for ecommerce-related queries grew from ~60% in 2024 to a new baseline of over 90% of keywords in 2025.

Image Credit: Kevin Indig

Also, notice how Google systematically tests SERP layouts between core updates (e.g., the dip in the graph above happens between the March and June 2025 Core Updates).

Having strong product images, which are properly optimized, continues to be crucial for ecommerce search.

Since January 2025, Google has shown more People Also Ask (PAA) features at the cost of Discussions & Forums.

Even though Reddit is the second most visible site on the web, I’m surprised to see more PAA – two years after Google removed FAQ rich snippets from the SERPs.

Image Credit: Kevin Indig

This is something you want to consider tracking for queries that are directly related to your products, if you’re not doing so already. (You can do this in classic SEO tools like Semrush or Ahrefs, for example.)
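
To put that kind of tracking into practice, here’s a minimal sketch that computes what share of your tracked queries trigger a given SERP feature (PAA, image packs, AIOs) per snapshot date. The record format is hypothetical – adapt it to the export of your SEO tool of choice:

```python
# Minimal sketch: share of tracked queries showing a given SERP feature,
# keyed by snapshot date. The record format is hypothetical -- map it to
# your SEO tool's keyword/feature export.
from collections import defaultdict

records = [
    {"date": "2025-01", "query": "best kayaks for beginners", "features": ["paa", "image_pack"]},
    {"date": "2025-01", "query": "kayak paddle length", "features": ["image_pack"]},
    {"date": "2025-03", "query": "best kayaks for beginners", "features": ["paa", "image_pack", "aio"]},
    {"date": "2025-03", "query": "kayak paddle length", "features": ["paa"]},
]

def feature_share(records, feature):
    """Fraction of tracked queries showing `feature`, per snapshot date."""
    totals, hits = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["date"]] += 1
        if feature in rec["features"]:
            hits[rec["date"]] += 1
    return {date: hits[date] / totals[date] for date in totals}

print(feature_share(records, "paa"))  # {'2025-01': 0.5, '2025-03': 1.0}
```

Charting these shares month over month gives you your own version of the trend lines discussed in this analysis, scoped to your keyword set.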

Since August 2024, Google has systematically reduced the number of videos in the ecommerce search results.

Image Credit: Kevin Indig

It seems that images have taken over a lot of the real estate that videos used to own.

Image Credit: Kevin Indig

As a result, videos are less important in ecommerce search, while images are increasingly important.

If you’ve been creating and optimizing videos and haven’t seen the SEO results you wanted for your products/site, this could be your signal to invest in other types of content.

While this analysis covers Google SERP data specifically, it would be remiss not to discuss the new shopping features in ChatGPT.

However, we don’t yet have enough data on LLM-based conversational product recommendations to draw clear conclusions, so I anticipate more analysis ahead as time passes.

ChatGPT’s shopping experience is starting to look a lot like Google’s – but with a twist: instead of lists of blue links or multiple product grids, it curates a conversational shortlist with minimal product listings included.

No affiliate links and no paid ads (yet).

Image Credit: Kevin Indig

OpenAI integrates real-time product data from tools like Klarna and Shopify, allowing ChatGPT to surface up-to-date prices, availability, reviews, and product details in a shoppable card-style format.

ChatGPT also offers a “Why you might like this” and “What people are saying” generative summary when a specific product is clicked.

Image Credit: Kevin Indig

OpenAI offers the following guidance about how these products are selected [source]:

A product appears in the visual carousel when ChatGPT perceives it’s relevant to the user’s intent. ChatGPT assesses intent based on the user’s query and other available context, such as memories or custom instructions….

When determining which products to surface, ChatGPT considers:

• Structured metadata from third-party providers (e.g., price, product description) and other third-party content (e.g., reviews).

• Model responses generated by ChatGPT before it considers any new search results.

• OpenAI safety standards.

Depending on the user’s needs, some of these factors will be more relevant than others. For example, if the user specifies a budget of $30, ChatGPT will focus more on price, whereas if price isn’t important, it may focus on other aspects instead.

OpenAI also explains how merchants are selected for products [source]:

When a user clicks on a product, we may show a list of merchants offering it. This list is generated based on merchant and product metadata we receive from third-party providers. Currently, the order in which we display merchants is predominantly determined by these providers….

To that end, we’re exploring ways for merchants to provide us their product feeds directly, which will help ensure more accurate and current listings. If you’re interested in participating, complete the interest form here, and we’ll notify you once submissions open.

That being said, it takes some trial and error to trigger product recommendations directly in the chat.

For instance, the prompt [can you help me find the best kayaks for beginners] results in an output that includes product recommendations, while the query [what are the best kayaks for beginners] results in a list without shopping results, features, or links.

Prompts with action-oriented language like “can you help me” and “will you find” may have a higher likelihood of offering shopping results directly in the chat, while queries like “what is the best” and “what are the best” and “compare the features of” may result in a variety of recommendations.

Image Credit: Kevin Indig

Featured Image: Paulo Bobita/Search Engine Journal