Google Launches Personal Intelligence In AI Mode via @sejournal, @MattGSouthern

Google is rolling out Personal Intelligence, a feature that connects Gmail and Google Photos to AI Mode in Search, delivering personalized responses based on users’ own data.

The feature, announced in a blog post by Robby Stein, VP of Product at Google Search, is available to Google AI Pro and AI Ultra subscribers who opt in.

What’s New

Personal Intelligence lets AI Mode reference information from a user’s Gmail and Google Photos to tailor search responses. Google describes it as connecting the dots across Google apps to unlock search results that fit individual context.

The feature rolls out as a Labs experiment for eligible subscribers in the U.S. in English. It is available for personal Google accounts only, not for Workspace business, enterprise, or education users.

To enable Personal Intelligence, users can:

  1. Open Search and tap their profile
  2. Click on Search personalization
  3. Select Connected Content Apps
  4. Connect Gmail and Google Photos

In the settings menu, the Gmail connection appears under “Workspace,” though the feature itself is not available to Workspace business, enterprise, or education accounts.

Subscribers may also see an invitation to try the feature directly in AI Mode as the rollout progresses over the next few days.

How It Works

Personal Intelligence uses Gemini 3 to process queries alongside connected account data. When enabled, AI Mode may reference email confirmations, travel bookings, and photo memories to inform responses.

Stein offered examples in the announcement. A user searching for trip activities could receive recommendations based on hotel bookings in Gmail and past travel photos. Someone shopping for a coat could get suggestions that account for preferred brands, upcoming travel destinations from flight confirmations, and expected weather conditions.

Stein wrote:

“With Personal Intelligence, recommendations don’t just match your interests — they fit seamlessly into your life. You don’t have to constantly explain your preferences or existing plans, it selects recommendations just for you, right from the start.”

See an example in the screenshots below:

Screenshots from: blog.google/products-and-platforms/products/search/personal-intelligence-ai-mode-search/, January 2026.

Privacy Controls

Google emphasizes that connecting Gmail and Google Photos is opt-in. Users choose whether to enable the connections and can turn them off at any time.

Google says AI Mode does not train directly on users’ Gmail inbox or Google Photos library. The company says training is limited to specific prompts in AI Mode and the model’s responses, used to improve functionality over time.

Google acknowledges that Personal Intelligence may make mistakes, including incorrectly connecting unrelated topics or misunderstanding context. Users can correct errors through follow-up responses or by providing feedback with the thumbs down button.

Why This Matters

This is the personal context feature Google teased at I/O in May 2025. Seven months later, in December, Google SVP Nick Fox confirmed in an interview that the feature was still in internal testing with no public timeline. Today’s rollout delivers what was delayed.

For the 75 million daily active users Fox reported for AI Mode in that December interview, the feature could reduce how much context they need to type to get tailored responses.

For publishers, the implications depend on how personalization affects which content surfaces in AI Mode responses. If the system prioritizes user-specific context over general search results, some informational queries may resolve without a click to external sites. Google has not shared data on how Personal Intelligence affects citation patterns or traffic flow.

The feature is currently limited to paid subscribers on personal accounts. Expanding it to free users or Workspace accounts would change its reach.

Looking Ahead

Personal Intelligence is rolling out as a Labs feature over the next few days. Google says eligible AI Pro and AI Ultra subscribers in the U.S. will automatically have access as it becomes available.

Watch for whether Google provides analytics or attribution tools that let publishers track how personalized AI Mode responses affect visibility and traffic patterns.

A Breakdown Of Microsoft’s Guide To AEO & GEO via @sejournal, @martinibuster

Microsoft published a sixteen-page explainer guide about optimizing for AI search and chat. While many of the suggestions can be classified as SEO, others relate exclusively to AI search surfaces. Here are the most helpful takeaways.

What AEO and GEO Are And Why They Matter

Microsoft explains that AI search surfaces have created an evolution from “ranking for clicks” to “being understood and recommended by AI.” Traditional SEO still provides a foundation for being cited in AI, but AEO and GEO determine whether content gets surfaced inside AI-driven experiences.

Here is how Microsoft distinguishes AEO and GEO. The first thing to notice is that they define AEO as Agentic Engine Optimization. That’s different from Answer Engine Optimization, which is how AEO is commonly understood.

  • AEO (Answer/Agentic Engine Optimization) focuses on making content and product information easy for AI assistants and agents to retrieve, interpret, and present as direct answers.
  • GEO (Generative Engine Optimization) focuses on making your content discoverable and persuasive inside generative AI systems by increasing clarity, trustworthiness, and authoritativeness.

Microsoft views AEO and GEO as a responsibility that is not limited to marketing but spans multiple teams within an organization.

The guide says:

“This shift impacts every part of the organization. Marketing teams must rethink brand differentiation, growth teams need to adapt to AI-driven journeys, ecommerce teams must measure success differently, data teams must surface richer signals, and engineering teams must ensure systems are AI-readable and reliable.”

AI shopping is not one channel; it's really a set of overlapping systems.

Microsoft describes AI shopping as three overlapping consumer touchpoints:

  1. AI browsers that interpret what’s on a page and surface context while users browse.
  2. AI assistants that answer questions and guide decisions in conversation.
  3. AI agents that can take actions, like navigating, selecting options, and completing purchases.

The AI touchpoint matters less than whether the system can access accurate, structured, and trustworthy product information.

SEO Still Plays A Role

Microsoft’s guide says that with AEO and GEO, the competition shifts from discovery to influence. SEO is still important, but it is no longer the whole game.

The new competition is about influencing the AI recommendation layer, not just showing up in rankings.

Microsoft describes it like this:

  • SEO helps the product get found.
  • AEO helps the AI explain it clearly.
  • GEO helps the AI trust it and recommend it.

Microsoft explains:

“Competition is shifting from discovery to influence (SEO to AEO/GEO).

If SEO focused on driving clicks, AEO is focused on driving clarity with enriched, real-time data, while GEO focuses on building credibility and trust so AI systems can confidently recommend your products.

SEO remains foundational, but winning in AI-powered shopping experiences requires helping AI systems understand not just what your product is, but why it should be chosen.”

How AI Systems Decide What To Recommend

Microsoft explains how an AI assistant, in this case Copilot, handles a user’s request. When a user asks for a recommendation, the AI assistant goes into a reasoning phase where the query is broken down using a combination of web and product feed data.

The web data provides:

  • “General knowledge
  • Category understanding
  • Your brand positioning”

Feed data provides:

  • “Current prices
  • Availability
  • Key specs”

The AI assistant may, based on the feed data, choose to surface the product with the lowest price that is also in stock. When the user clicks through to the website, the AI assistant scans the page for information that provides context.
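To make that selection step concrete, here is a minimal sketch of the kind of filter-and-sort pass an assistant could run over feed data (the feed entries, field names, and logic below are illustrative, not Microsoft’s actual implementation):

```python
# Illustrative product feed entries; a real feed carries many more fields,
# but price and availability are what drive this particular step.
feed = [
    {"id": "sku-101", "title": "Trail Jacket", "price": 129.00, "availability": "in_stock"},
    {"id": "sku-102", "title": "Trail Jacket Pro", "price": 99.00, "availability": "out_of_stock"},
    {"id": "sku-103", "title": "Trail Jacket Lite", "price": 89.00, "availability": "in_stock"},
]

# Keep only items that are in stock, then surface the lowest-priced one.
in_stock = [item for item in feed if item["availability"] == "in_stock"]
best = min(in_stock, key=lambda item: item["price"])

print(best["id"], best["title"], best["price"])  # sku-103 Trail Jacket Lite 89.0
```

If the feed’s price or availability fields are stale, this is the step where the wrong product gets surfaced, which is why the guide stresses keeping feed data current.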

Microsoft lists these as examples of context:

  • Detailed reviews
  • Videos that explain the product
  • Current promotions
  • Delivery estimates

The agent aggregates this information and provides guidance based on what it discovered about the product’s context (delivery times, etc.).

Microsoft brings it all together like this:

First, there’s crawled data:
The information AI systems learned during training and retrieve from indexed web pages, which shapes your brand’s baseline perception and provides grounding for AI responses, including your product categories, reputation, and market position.

Second, there’s product feeds and APIs:
The structured data you actively push to AI platforms, giving you control over how your products are represented in comparisons and recommendations. Feeds provide accuracy, details and consistency.

Third, there’s live website data:
The real-time information AI agents see when they visit your actual site, from rich media and user reviews to dynamic pricing and transaction capabilities. Each data source plays a distinct role in the shopping journey — traditional SEO remains essential because AI systems perform real-time web searches frequently throughout the shopping journey, not just at purchase time, and your site must rank well to be discovered, evaluated, and recommended.

Microsoft Recommends A Three-Part Action Plan

Strategy 1: Technical Foundations

The core idea for this strategy is that your product catalog must be machine-readable, consistent everywhere, and up to date.

Key actions:

  • Use structured data (schema) for products, offers, reviews, lists, FAQs, and brand.
  • Include dynamic fields like pricing and availability.
  • Keep feed data and on-page structured data aligned with what users actually see.
  • Avoid mismatches between visible content and what is served to crawlers.
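To illustrate the first two actions above, here is a minimal sketch of Product markup with dynamic offer fields, built and serialized in Python (the product, prices, and URLs are invented; the property names come from schema.org’s Product and Offer types):

```python
import json

# Minimal schema.org Product markup with an Offer carrying the dynamic
# fields (price, availability) that should stay in sync with the feed
# and with what users actually see on the page.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Jacket Lite",
    "description": "Lightweight waterproof jacket for three-season hiking.",
    "brand": {"@type": "Brand", "name": "ExampleOutdoors"},
    "aggregateRating": {"@type": "AggregateRating", "ratingValue": "4.6", "reviewCount": "212"},
    "offers": {
        "@type": "Offer",
        "url": "https://www.example.com/products/trail-jacket-lite",
        "priceCurrency": "USD",
        "price": "89.00",
        "availability": "https://schema.org/InStock",
    },
}

# Emit the <script type="application/ld+json"> payload for the page template.
print(json.dumps(product_jsonld, indent=2))
```

The same values should appear in the product feed and in the visible page content, which is what the alignment and mismatch points above are about.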

Strategy 2: Optimize Content For Intent And Clarity

This strategy is about optimizing product content so that it answers typical user questions and is easy for AI to reuse.

Key actions:

  • Write product descriptions that start with benefits and real use-case value.
  • Use headings and phrasing that match how people ask questions.

Add modular content blocks:

  • FAQs
  • specs
  • key features
  • comparisons

Add Contextual Information

  • Support multi-modal interpretation (good alt text, transcripts for video content, structured image metadata).
  • Add complementary product context (pairings, bundles, “goes well with”).
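One way to make those modular blocks machine-readable is FAQPage markup that mirrors the on-page questions and answers. A minimal sketch, with invented questions and product details, might look like this:

```python
import json

# FAQPage markup mirroring on-page Q&A blocks phrased the way people ask.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is the Trail Jacket Lite fully waterproof?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes, it uses a 10k waterproof membrane with taped seams.",
            },
        },
        {
            "@type": "Question",
            "name": "What does it pair well with?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "It layers over the Trail Fleece and packs into its own pocket.",
            },
        },
    ],
}

print(json.dumps(faq_jsonld, indent=2))
```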

Strategy 3: Trust Signals (Authority And Credibility)

The takeaway for this strategy is that AI assistants and agents prioritize content that looks verified and reputable.

Key actions:

  • Strengthen review credibility (verified reviews, strong volumes, clear sentiment).
  • Reinforce brand authority through real-world signals (press, certifications, partnerships).
  • Keep claims grounded and consistent to avoid trust degradation.
  • Use structured data to clarify legitimacy and identity.

Microsoft explains it like this:

“AI assistants prioritize content from sources they can trust. Signals such as verified reviews, review volume, and clear sentiment help establish credibility and influence recommendations.

Brand authority is reinforced through consistent identity, real-world validation such as press coverage, certifications, and partnerships, and the use of structured data to clearly define brand entities.

Claims should be factual, consistent, and verifiable, as exaggerated or misleading information can reduce trust and limit visibility in AI-powered experiences”
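One common way to implement the brand-entity point in that passage is Organization markup with sameAs links to verifiable profiles and real-world signals. Here is a minimal sketch (all names and URLs are placeholders):

```python
import json

# Organization markup that defines the brand entity and points to
# verifiable signals (official profiles, press, certifications, awards).
org_jsonld = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleOutdoors",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "sameAs": [
        "https://www.linkedin.com/company/exampleoutdoors",
        "https://en.wikipedia.org/wiki/ExampleOutdoors",
    ],
    "award": "2025 Outdoor Retailer Innovation Award",
}

print(json.dumps(org_jsonld, indent=2))
```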

Takeaways

AI search changes the goal from winning rankings to earning recommendations. SEO still matters, but AEO and GEO determine how well content is interpreted, explained, and chosen inside AI assistants and agents.

AI shopping is not a single channel but an ecosystem of assistants, browsers, and agents that rely on authoritative signals across crawled content, structured feeds, and live site experiences. The brands that win are the ones with consistent, machine-readable data, and clear content that contains useful contextual information that can be easily summarized.

Microsoft published a blog post that is accompanied by a link to the downloadable explainer guide: From Discovery to Influence: A Guide to AEO and GEO.

Featured Image by Shutterstock/Kues

56% Of CEOs Report No Revenue Gains From AI: PwC Survey via @sejournal, @MattGSouthern

Most companies haven’t yet seen financial returns from their AI investments, according to PwC’s 29th Global CEO Survey.

The survey of 4,454 chief executives across 95 countries found that 56% report neither increased revenue nor lower costs from AI over the past 12 months.

What The Survey Found

About 30% of CEOs said their company saw increased revenue from AI in the last year. On costs, 26% reported decreases while 22% said costs went up. PwC defined “increase” and “decrease” as changes of 2% or more.

Only 12% of companies achieved both revenue gains and cost reductions. PwC called this group the “vanguard” and noted they had stronger AI foundations in place, including defined roadmaps and technology environments built for integration.

For marketing specifically, the numbers suggest early-stage adoption. Just 22% of CEOs said their organization applies AI to demand generation to a large or very large extent. Products, services, and experiences showed a similar figure at 19%.

Separate from AI, CEO confidence in near-term growth has declined. Only 30% said they were very or extremely confident about revenue growth over the next 12 months. That’s down from 38% last year and a peak of 56% in 2022.

Why This Matters

The survey adds data to a pattern I’ve tracked over the past year. A LinkedIn report found 72% of B2B marketers felt overwhelmed by AI’s pace of change. A Gartner survey showed 73% of marketing teams were using AI, but 87% of CMOs had experienced campaign performance problems.

The 22% demand generation figure gives marketers a rough benchmark for how their AI adoption compares to the broader executive population. It’s self-reported CEO perception rather than measured deployment, but it suggests most organizations are still in early stages of applying AI to customer acquisition at scale.

PwC’s framing is direct:

“Isolated, tactical AI projects often don’t deliver measurable value.”

The report adds that tangible returns come from enterprise-scale deployment consistent with company business strategy.

Looking Ahead

PwC recommends companies focus on building AI foundations before expecting returns. That includes defined roadmaps, technology environments that enable integration, and formalized responsible AI processes.

For marketing teams evaluating their own AI investments, this survey suggests most organizations are still working through the same questions.


Featured Image: Blackday/Shutterstock

Five Things To Do That Will Increase Authoritativeness And Earn Links via @sejournal, @martinibuster

The following are five things that anyone can do to establish authoritativeness and trustworthiness, qualities that can be communicated quickly and contribute to earning more links. The trick to this technique is that you have to put some time into these tactics first, but the rewards after you are done are links, lots of them.

The idea behind this tactic is to convince a web publisher to give you a free link, or to give you the opportunity to publish an article (with or without a customary byline and link).

In order to cut through the noise of all the other emails the web publisher receives, it is necessary to establish your authority in order to inspire trust. And you need to do it quickly. These are some touchstones I crafted, through trial and error, in order to accomplish a higher success level in link building campaigns.

I call this method Establishing Your Bona Fides. It works by creating trust with one to two sentences. Whether you place it at the beginning, middle, or end of the outreach email is up to you, but I’ve enjoyed a good response rate by placing it near the beginning.

Here are the shortcuts to establishing bona fides:

  1. Awards
  2. Media appearances and mentions
  3. List of authoritative organizations that have published your work
  4. List of peers that have published your work
  5. Authority of your website’s authors

As you can see, this isn’t really something you can fake your way through. But if you take the time to first establish your bona fides (what makes you legitimate and authoritative), you will see a higher percentage of positive response rates. People will take your emails more seriously.

There is no need to be annoying and badger people over and over the way some marketing agencies do. The success rate improvement from this method will cut the need for such aggressive pestering, something that I have never approved of.

The first two bona fides are self-explanatory, but I will explain them quickly.

Awards
It’s always useful to obtain recognition in whatever field you are in (if that’s a thing), even if it’s recognition for volunteering for an organization and doing charitable work. Other kinds of awards are the kind that local news might give out, like being named the best in your category in the town your company is based in.

Media Appearances And Mentions
Appearing on television news or being cited in respected news outlets or online magazines are ways to establish signals of authoritativeness. These signals aren’t just ranking signals; they are also the kinds of things that humans respond to.

Organizations And Associations
The third bona fide relates to associations and organizations that your company is allied or partnered with, and any publications related to those organizations, both online and offline. Some organizations are always on the lookout for people to profile or for contributors to their association publications. This kind of publishing is a great way to establish authoritativeness and trustworthiness. It’s truly earning recognition for your expertise.

Publishing articles in offline publications is a bonanza. While you likely won’t get a link, you will be one of the rare online organizations contributing guest articles to those publications. Most companies and marketing agencies aren’t doing this because there is no link associated with it. This will be your advantage because, as you’ll see, it will help to increase your link building success rate. When you publish an article in an authoritative space, even if it’s offline, it gives you the ability to rightfully say in your outreach email that you’ve been published in so-and-so magazine or newsletter. Associating your brand with the authoritative brand in this way instantly makes your brand authoritative to the person you’re communicating with. This is especially powerful if the person you’re communicating with is also a member of whatever association or organization you have published an article with.

The reason this approach works is that it enables you to establish yourself as authoritative with a single sentence. With only a few words in your outreach email, you can quickly profile your site as not a spammer, and a legit organization that’s ultimately worthy of getting a link. In my experience this has worked exceedingly well for consistently earning instant trust from whoever you’re outreaching to.

You can get to number four (list of peers that have published your work) without doing number three (list of organizations that have published your work). But you’ll have greater success if you put a good amount of number three projects behind you. Even if you don’t use all the projects in your initial outreach email, you may have to deploy them in follow-up emails to doubting recipients who need more convincing. And you can add all of these to your About Us page.

Authority Of Website Authors
Point number five (authority of your website’s authors) is more or less self-explanatory. It helps if the person authoring your articles is someone who the outreach recipient can identify with, can think of as “one of us” when you list their credentials. For example, I once did an outreach in the educational space citing the writing talents of a math teacher who was also an education technology blogger. This person’s credentials and authority opened doors for my link building outreach and helped my client receive links from some truly prestigious education related websites.

Obviously, the success of this approach requires doing some work ahead of time to get appearances on blogs, podcasts, and video interviews, and to publish in association and organization publications, both online and offline. Even taking a photo with someone who is well known and authoritative and putting that on your About Us page can be helpful. People who are considering giving you a link will go to your website’s About Us page to verify who the company is and whether it’s as above board and authoritative as you say.

Using the above pre-campaign tactics will improve your trustworthiness and authoritativeness and have a positive impact on link building success rates.

Featured Image by Shutterstock/Krakenimages.com

When Platforms Say ‘Don’t Optimize,’ Smart Teams Run Experiments via @sejournal, @DuaneForrester

A quick note up front, so we start on the right foot.

The research I’m about to reference is not mine. I did not run these experiments. I’m not affiliated with the authors. I’m not here to “endorse” a camp, pick a side, or crown a winner. What I am going to endorse, loudly and without apology, is measurement. Replication. Real-world experiments. The kind of work that teaches us in real time, in real life, what changes when an LLM sits between customers and content. We need more tested data, and this is one of those starting points.

If you do nothing else with this article, do this: Read the paper, then run your own test. Whether your results agree or disagree, publish them. We need more receipts and fewer hot takes.

Now, the reason I’m writing this.

Over the last year, the industry has been pushed toward a neat, comforting story: GEO is just SEO. Nothing new to learn. No need to change how you work. Just keep doing the fundamentals, and everything will be fine.

I don’t buy that.

Not because SEO fundamentals stopped mattering. They still matter, and they remain necessary. But because “necessary” is not the same as “sufficient,” and because the incentives behind platform messaging do not always align with the operational realities businesses are walking into and dealing with.

Image Credit: Duane Forrester

The Narrative And The Incentives

If you’ve paid attention to public guidance coming from the leading search platforms lately, you’ve probably heard a version of: Don’t focus on chunking. Don’t create “bite-sized chunks.” Don’t optimize for how the machine works. Focus on good content.

That’s been echoed and amplified across industry coverage, though I want to be precise about my position here. I’m not claiming a conspiracy, and I’m not saying anyone is being intentionally misleading. I’m not doing that.

I am saying something much simpler. It’s my opinion, and it happens to be based on actual experience: when messaging repeats across multiple spokespeople in a tight window, it signals an internal alignment effort.

That’s not an insult nor is it a moral judgment. That’s how large organizations operate when they want the market to hear one clear message. I was part of exactly that type of environment for well over a decade in my career.

And the message itself, on its face, is not wrong. You can absolutely hurt yourself by over-optimizing for the wrong proxy. You can absolutely create brittle content by trying to game a system you do not fully understand. In many cases, “write clearly for humans” is solid baseline guidance.

The problem is what happens when that baseline guidance becomes a blanket dismissal of how the machine layer works today, even if it’s unintentional. Because we are not in a “10 blue links” world anymore.

We are in a world where answer surfaces are expanding, search journeys are compressing, and the unit of competition is shifting from “the page” to “the selected portion of the page,” assembled into an answer the user never clicks past.

And that is where “GEO is just SEO” starts to break in my mind.

The Wrong Question: “Is Google Still The Biggest Traffic Driver?”

Executives love comforting statements: “Google still dominates search. Traditional SEO still drives the most traffic. Therefore, this LLM stuff is overblown.”

The first half is true, but the conclusion is where companies get hurt.

The biggest risk here is asking the wrong question. “Where does traffic come from today?” is a dashboard question, and it’s backward-looking. It tells you what has been true.

The more important questions are forward-looking:

  • What happens to your business when discovery shifts from clicks to answers?
  • What happens when the customer’s journey ends on the results page, inside an AI Overview, inside an AI Mode experience, or inside an assistant interface?
  • What happens when the platform keeps the user, monetizes the answer surface, and your content becomes a source input rather than a destination?

If you want the behavior trendline in plain terms, start here, with the 2024 SparkToro study, then take a look at what Danny Goodwin wrote in 2024, and as a follow-up in 2025 (spoiler – zero click instances increased Y-o-Y). And while some sources are a couple of years old, you can easily find newer data showing the trend growing.

I’m not using these sources to claim “the sky is falling.” I’m using them to reinforce a simple operational reality: If the click declines, “ranking” is no longer the end goal. Being selected into the answer becomes the end goal.

That requires additional thinking beyond classic SEO. Not instead of it. On top of it.

The Platform Footprint Is Changing, And The Business Model Is Following

If you want to understand why the public messaging is conservative, you have to look at the platform’s strategic direction.

Google, for example, has been expanding AI answer surfaces, and it’s not subtle. Both AI Overviews and AI Mode saw announcements of large expansions during 2025.

Again, notice what this implies at the operating level. When AI Overviews and AI Mode expand, you’re not just dealing with “ranking signals.” You’re dealing with an experience layer that can answer, summarize, recommend, and route a user without a click.

Then comes the part everyone pretends not to see until it’s unavoidable: Monetization follows attention.

This is no longer hypothetical. Search Engine Journal covered Google’s official rollout of ads in AI Overviews, which matters because it signals this answer layer is being treated as a durable interface surface, not a temporary experiment.

Google’s own Ads documentation reinforces the same point: This isn’t just “something people noticed,” it’s a supported placement pattern with real operational guidance behind it. And Google noted mid-last-year that AI Overviews monetize at a similar rate to traditional search, which is a quiet signal that this isn’t a side feature.

You do not need to be cynical to read this clearly. If the answer surface becomes the primary surface, the ad surface will evolve there too. That’s not a scandal so much as just the reality of where the model is evolving to.

Now connect the dots back to “don’t focus on chunking”-style guidance.

A platform that is actively expanding answer surfaces has multiple legitimate reasons to discourage the market from “engineering for the answer layer,” including quality control, spam prevention, and ecosystem stability.

Businesses, however, do not have the luxury of optimizing for ecosystem stability. Businesses must optimize for business outcomes. Their own outcomes.

That’s the tension.

This isn’t about blaming anyone. It’s about understanding misaligned objectives, so you don’t make decisions that feel safe but cost you later.

Discovery Is Fragmenting Beyond Google, And Early Signals Matter

I’m on record that traditional search is still an important driver, and that optimizing in this new world is additive, not an overnight replacement story. But “additive” still changes the workflow.

AI assistants are becoming measurable referrers. Not dominant, not decisive on their own, but meaningful enough to track as an early indicator. Two examples that capture this trend.

TechCrunch noted that while it’s not enough to offset the loss of traffic from search declines, news sites are seeing growth in ChatGPT referrals. And Digiday has data showing traffic from ChatGPT doubled from 2024 to 2025.

Why do I include these?

Because this is how platform shifts look in the early stages. They start small, then they become normal, then they become default. If you wait for the “big numbers,” you’re late in building competence and taking action. (Remember “directories”? Yeah, Search ate their lunch.)

And competence, in this new environment, is not “how do I rank a page.” It’s “how do I get selected, cited, and trusted when the interface is an LLM.”

This is where the “GEO is just SEO” framing stops being a helpful simplification and starts becoming operationally dangerous.

Now, The Receipts: A Paper That Tests GEO Tactics And Shows Measurable Differences

Let’s talk about the research. The paper I’m referencing here is publicly available, and I’m going to summarize it in plain English, because most practitioners do not have time to parse academic structure during the week.

At a high level, the paper (“E-GEO: A Testbed for Generative Engine Optimization in E-Commerce”) tests whether common human-written rewrite heuristics actually improve performance in an LLM-mediated product selection environment, then compares that to a more systematic optimization approach. It uses ecommerce as the proving ground, which is smart for one reason: Outcomes can be measured in ways that map to money. Product rank and selection are economically meaningful.

This is important because the GEO conversation often gets stuck in “vibes.” In contrast, this work is trying to quantify outcomes.

Here’s the key punchline, simplified:

A lot of common “rewrite advice” does not help in this environment. Some of it can be neutral. Some of it can be negative. But when they apply a meta-optimization process, prompts improve consistently, and the optimized patterns converge on repeatable features.

That convergence is the part that should make every practitioner sit up. Because convergence suggests there are stable signals the system responds to. Not mystical. Not magical. Not purely random.

Stable signals.

And this is where I come back to my earlier point: If GEO were truly “just SEO,” then you would expect classic human rewrite heuristics to translate cleanly. You would expect the winning playbook to be familiar.

This paper suggests the reality is messier. Not because SEO stopped mattering, but because the unit of success changed.

  • From page ranking to answer selection.
  • From persuasion copy to decision copy.
  • From “read the whole page” to “retrieve the best segment.”
  • From “the user clicks” to “the machine chooses.”

What The Optimizer Keeps Finding, And Why That Matters

I want to be careful here, as I’m not telling you to treat this paper like doctrine. You should not accept it on face value and suddenly adopt this as gospel. You should treat it as a public experiment that deserves replication.

Now, the most valuable output isn’t the exact numbers in their environment, but rather, it’s the shape of the solution the optimizer keeps converging on. (The name of their system/process is optimizer.)

The optimized patterns repeatedly emphasize clarity, explicitness, and decision-support structure. They reduce ambiguity. They surface constraints. They define what the product is and is not. They make comparisons easier. They encode “selection-ready” information in a form that is easier for retrieval and ranking layers to use.

That is a different goal than classic marketing copy, which often leans on narrative, brand feel, and emotional persuasion.

Those things still have a place. But if you want to be selected by an LLM acting as an intermediary, the content needs to do a second job: become machine-usable decision support.

That’s not “anti-human.” It’s pro-clarity, and it’s the kind of detail that will come to define what “good content” means in the future, I think.

The Universal LLM-Optimization Rewrite Recipe, Framed As A Reusable Template

What follows is not me inventing a process out of thin air. This is me reverse-engineering what their optimization process converged toward, and turning it into a repeatable template you can apply to product descriptions and other decision-heavy content.

Treat it as a starting point, then test it. Revise it, create your own version, whatever.

Step 1: State the product’s purpose in one sentence, with explicit context.
Not “premium quality.” Not “best in class.” Purpose.

Example pattern:
This is a [product] designed for [specific use case] in [specific constraints], for people who need [core outcome].

Step 2: Declare the selection criteria you satisfy, plainly.
This is where you stop writing like a brochure and start writing like a spec sheet with a human voice.

Include what the buyer cares about most in that category. If the category is knives, it’s steel type, edge retention, maintenance, balance, handle material. If it’s software, it’s integration, security posture, learning curve, time-to-value.

Make it explicit.

Step 3: Surface constraints and qualifiers early, not buried.
Most marketing copy hides the “buts” until the end. Machines do not reward that ambiguity.

Examples of qualifiers that matter:
Not ideal for [X]. Works best when [Y]. Requires [Z]. Compatible with [A], not [B]. This matters if you [C].

Step 4: State what it is, and what it is not.
This is one of the simplest ways to reduce ambiguity for both the user and the model.

Pattern:
This is for [audience]. It is not for [audience].
This is optimized for [scenario]. It is not intended for [scenario].

Step 5: Convert benefits into testable claims.
Instead of “durable,” say what durable means in practice. Instead of “fast,” define what “fast” looks like in a workflow.

Do not fabricate. Do not inflate. This is not about hype. It’s about clarity.

Step 6: Provide structured comparison hooks.
LLMs often behave like comparison engines because users ask comparative questions.

Give the model clean hooks:
Compared to [common alternative], this offers [difference] because [reason].
If you’re choosing between [A] and [B], pick this when [condition].

Step 7: Add evidence anchors that improve trust.
This can be certifications, materials, warranty terms, return policies, documented specs, and other verifiable signals.

This is not about adding fluff. It’s about making your claims attributable and your product legible.

Step 8: Close with a decision shortcut.
Make the “if you are X, do Y” moment explicit.

Pattern:
Choose this if you need [top 2–3 criteria]. If your priority is [other criteria], consider [alternative type].

That’s the template*.

Notice what it does. It turns a product description into structured decision support, which is not how most product copy is written today. And it is an example of why “GEO is just SEO” fails as a blanket statement.
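To make that concrete, here is a minimal sketch of the eight steps applied to a hypothetical product, expressed as structured fields a template or CMS could render (the product, specs, and numbers are invented for illustration and are not from the paper):

```python
# Hypothetical chef's knife description restructured as decision support,
# one field per step in the template above.
product_copy = {
    "purpose": (
        "This is an 8-inch chef's knife designed for daily home cooking "
        "in small kitchens, for people who need one versatile blade."
    ),
    "selection_criteria": {
        "steel": "VG-10 stainless",
        "edge_retention": "holds a working edge for 2-3 months of home use",
        "maintenance": "hand wash, hone weekly",
        "handle": "pakkawood, full tang",
    },
    "constraints": [
        "Not ideal for cleaving bones or frozen food.",
        "Requires a honing rod; not dishwasher safe.",
    ],
    "is_for": "Home cooks upgrading from a starter block set.",
    "is_not_for": "Line cooks who need an abuse-tolerant workhorse blade.",
    "testable_claims": [
        "Weighs 210 g; blade hardness 60 HRC per the manufacturer spec sheet."
    ],
    "comparison_hooks": [
        "Compared to a German-style 8-inch knife, it is lighter and harder, "
        "so it stays sharper longer but chips more easily if abused."
    ],
    "evidence": ["2-year warranty", "30-day returns", "4.7/5 from 1,200 verified reviews"],
    "decision_shortcut": (
        "Choose this if you want sharpness and low weight. If your priority is "
        "durability under heavy use, consider a softer German-style blade."
    ),
}

for field, value in product_copy.items():
    print(field, "->", value)
```

The data structure itself is not the point; the point is that every field answers a question a selection layer is likely to ask.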

SEO fundamentals help you get crawled, indexed, and discovered. This helps you get selected when discovery is mediated by an LLM.

Different layer. Different job.

Saying GEO = SEO and SEO = GEO is an oversimplification that will become normalized and lead to people missing the fact that the details matter. The differences, even small ones, matter. And they can have impacts and repercussions.

*A much deeper-dive pdf version of this process is available for my Substack subscribers for free via my resources page.

What To Do Next: Read The Paper, Then Replicate It In Your Environment

Here’s the part I want to be explicit about. This paper is interesting because it’s measurable, and because it suggests the system responds to repeatable features.

But you should treat it as a starting point, not a law of physics. Results like this are sensitive to context: industry, brand authority, page type, and even the model and retrieval stack sitting between the user and your content.

That’s why replication matters. The only way we learn what holds, what breaks, and what variables actually matter is by running controlled tests in our own environments and publishing what we find. If you work in SEO, content, product marketing, or growth, here is the invitation.

Read the paper here.

Then run a controlled test on a small, meaningful slice of your site.

Keep it practical:

  • Pick 10 to 20 pages with similar intent.
  • Split them into two groups.
  • Leave one group untouched.
  • Rewrite the other group using a consistent template, like the one above.
  • Document the changes so you can reverse them if needed.
  • Measure over a defined window.
  • Track outcomes that matter in your business context, not just vanity metrics.

And if you can, track whether these pages are being surfaced, cited, paraphrased, or selected in the AI answer interfaces your customers are increasingly using.
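If it helps, here is a minimal sketch of the setup step, assuming you have a list of comparable URLs and want a reproducible, documented control/variant split (the URLs are placeholders; the measurement and any AI-surface tracking still happen in your analytics stack):

```python
import csv
import random

# Comparable pages with similar intent; placeholder URLs for illustration.
pages = [f"https://www.example.com/products/item-{i}" for i in range(1, 21)]

# Reproducible random split so the assignment can be audited and reversed.
random.seed(42)
random.shuffle(pages)
midpoint = len(pages) // 2
assignments = [
    {"url": url, "group": "variant" if i < midpoint else "control"}
    for i, url in enumerate(pages)
]

# Document the assignment before any rewrites go live.
with open("geo_test_assignments.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "group"])
    writer.writeheader()
    writer.writerows(assignments)

print(sum(a["group"] == "variant" for a in assignments), "pages to rewrite,",
      sum(a["group"] == "control" for a in assignments), "left untouched")
```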

You are not trying to win a science fair. You are trying to reduce uncertainty with a controlled test. If your results disagree with the paper, that’s not failure. That’s signal.

Publish what you find, even if it’s messy. Even if it’s partial. Even if the conclusion is “it depends.” Because that is exactly how a new discipline becomes real. Not through repeating platform talking points. Not through tribal arguments. Through measurement.

One Final Level-Set, For The Executives Reading This

Platform guidance is one input, not your operating system. Your operating system is your measurement program. SEO is still necessary. If you can’t get crawled, you can’t get chosen.

But GEO, meaning optimizing for selection inside LLM-mediated discovery, is an additional competence layer. Not a replacement. A layer. If you decide to ignore that layer because a platform said “don’t optimize,” you’re outsourcing your business risk to someone else’s incentive structure.

And that’s not a strategy. The strategy is simple: learn the layer by testing the layer.

We need more people doing exactly that.


This post was originally published on Duane Forrester Decodes.


Featured Image: Rawpixel.com/Shutterstock

Shopify Shares More Details On Universal Commerce Protocol (UCP) via @sejournal, @martinibuster

Harley Finkelstein, the president of Shopify, was recently interviewed about the company’s open source Universal Commerce Protocol (UCP), which enables agentic AI shopping. Co-developed with Google, UCP, he explains, enables brands to be discovered by customers based on personalized recommendations, as opposed to advertising and classic search paradigms that are less personalized.

Finkelstein said that the Universal Commerce Protocol (UCP) is designed to enable AI agents to surface products in a manner that merchants can control, show consumers personalized recommendations based on users’ preferences, and deliver a shopping experience that’s as good as any ecommerce store platform.

Shopify is also opening agentic commerce access to brands that are not Shopify customers through its Agentic plan, which he briefly mentions. The plan lets enterprise brands and merchants that do not use Shopify upload their product data to Shopify’s infrastructure so it can be discovered and purchased directly by AI agents.

This positions Shopify as infrastructure for agentic commerce, not just a hosted commerce platform. This makes it easier for brands to gain immediate access to agentic shopping channels without having to migrate platforms.

Finkelstein also points out that agentic commerce only works if consumers can access all brands, not just those on Shopify.

Shopify’s Finkelstein said that UCP will enable merchants to more effectively control how their products are shown. He also discussed their strategy of bringing agentic shopping to all brands, regardless of whether they are on Shopify or not.

He explained:

“We created this protocol called Universal Commerce Protocol which effectively is this universal language is open sourced so that all merchants can speak directly to every single one of the agents.

And the best way to explain it is up until now, it was really just about like a single transaction.

So I can buy something on ChatGPT or Gemini or Microsoft. there’s no concept of loyalty or subscription or bundling or, you know, if it’s furniture, for example, please don’t ship it to me on Thursday. I’m not home Thursday. Send it Friday.

So this idea of creating this universal protocol that we co-developed with Google means that now merchants can actually tell these agents exactly how to show their products on these agentic tools. And it should be as good as it is on the online store. So that was a really, really big one.

The second thing we announced also with Google is that now we’re actually expanding. You can sell everywhere commerce is happening from an agentic perspective.

So we’re going beyond the agentic storefronts of just ChatGPT, which is what we said, you know, in Q3. Now it’s also, we’re going to be working with Gemini, with AI mode in Google Search, and also with copilot.

And maybe the last one is that we’re actually bringing agentic commerce to every brand, whether or not they’re on Shopify.

So if you’re not on Shopify, but you want to have your product syndicated and indexed, you can do so with our agentic plan.”

Access To Many Brands Is Key

Finkelstein stressed that the key to the success of agentic AI is to be able to show the widest possible selection of brands. He said it’s a big opportunity.

He explained:

“I think if Agentic is going to do what a lot of us think it’s going to do from a commerce perspective, you have to give consumers all the brands.

We obviously want them all on Shopify, but there’s some brands that want to participate now, but it may take some time for them to migrate over.

So this idea of opening up to anyone, we think is a big opportunity.”

Who Will Be The Early Adopters?

Finkelstein was asked about who the early adopters will be. His answer was cautious, seemingly acknowledging that it’s likely not going to immediately be a big crush of people turning to AI to buy things.

He answered:

“I think it’ll likely be something that like most people use some of the time and some people use most of the time. I don’t think it’s going to cross the threshold of most most, the way e-commerce does now. It’s just going to take time. It’s going to take some time.”

AI Chat Reduces Friction

Finkelstein said that Universal Commerce Protocol (UCP) enables better shopping experiences, reducing the “friction” that AI shopping may have produced. He believes that once people start having good experiences shopping with an agent, they will start to get into the habit of using it for other kinds of shopping and begin relying on it.

Finkelstein explained:

“Once you have a good experience, I think the actual friction reduces. You’ll keep having it over and over again.

But the thing that we felt was missing, and this is the reason why I think this UCP protocol is so important, is it was very difficult to do merchandising inside of these applications.

And this protocol allows you to do a lot more… Well, up until UCP happened, you couldn’t actually do subscriptions. Now you can.

Or this idea of bundling, you know, for Gymshark, it’s a huge part of their business is if you buy these, you’ll also buy these as well. You can do that as well.

So I think all of these things are sort of in line with creating a much more delightful experience in the chat.”

Merit Based Shopping Versus SEO?

Finkelstein brought up the topic of merit-based shopping, where products are recommended to a user because they are what the user is looking for. He used the phrase “merit-based shopping” as a contrast to today’s online advertising ecosystems, which prioritize products that pay to be shown as a recommendation. The main point is that shopping recommendations are made based on personalization.

Finkelstein explained:

“And I think ultimately what it leads to is like, this will be merit-based shopping, which will be different than I think some of the traditional retailers who were kind of leaning on their balance sheets to spend money on ads. You can’t really game the system in that that way.

You actually have to be, from a context perspective, the right product for the right consumer.”

What Happens To Creative Assets And SEO

One of the podcast hosts asked about what happens to creative assets like photos, saying that he noticed that shopping AI uses images. He asked how that was going to evolve. Finkelstein’s answer touched on SEO in the context of how agentic AI shopping is about showing products based on user preferences, a tighter form of relevance than in the advertising and classic search ecosystems.

Finkelstein explained:

“I think …the idea of SEO won’t exist in Agentic because again, it’s merit-based and it’s mostly based on the context history you’ve had.

But I do think though, you’re going to have… these brands are going to have people at their companies who are thinking a lot about like consistent updates to UCP, consistent updates to the catalog.

So they may pull something off the catalog and say, we don’t want to sell it anymore this way. So I think there’s going to be, I don’t know if they’re going to be actual jobs, but there’s going to be people inside of the company, potentially in the merchandising department, who say, actually, the way that we want to sell all this, the way we want to describe this to these agents is a particular way.

And then because of UCP and because of Shopify catalog, it gets easily disseminated across every single one of these agentic applications. So the experience just gets better and better.

I think you have to be a little bit of a techno optimist… as I am, to believe that even if the experience is not incredible right now, it’s likely just going to get better at this ridiculous pace.”

Cutting Out Incentivized Recommendations

When asked what’s the most exciting thing about Agentic AI, he returned to the concept of merit-based shopping, where LLMs have the ability to personalize responses by learning user preferences and therefore recommend a product that fits within that person’s requirements. He contrasted that with what happens in the real world, where a salesperson’s recommendations are influenced by commissions.

So what he is excited about is the idea of the playing field being leveled. He mentioned the possibility of lesser-known brands, like True Classic Tees, being surfaced in AI shopping because that kind of brand is a match for a specific consumer.

He responded:

“Most of the excitement is actually around this idea of like, is there a potential for this to level the playing field? Meaning, you know, if I’ve done a bunch of research historically on an agentic application …about the stuff that I love, the brands that I love. …It probably should not show me a generic pair of boots.

So the excitement actually is around like, is this going to introduce more brands that otherwise are unknown to more people or, you know, True Classic Tee, for example, which, you know, if you’re looking for a black t-shirt, I suspect on a search engine, you’re not going to see True Classic Tee come up that much, but it’s an incredible product and ultimately it can be found on these agentic tools in a way that it probably couldn’t historically.”

Agentic AI Will Accelerate Online Shopping

The other thing that Finkelstein is excited about is that he believes Agentic AI shopping will accelerate the amount of shopping that is done online. He compared using Agentic AI to the COVID moment, where people changed their work and shopping behavior in a major way that became permanent.

He then circled back to the idea that Agentic AI is less biased:

“I think it’s actually a better version of that because it’s an unbiased discussion, an unbiased conversation.”

Watch the video podcast interview a few minutes after the three-hour mark.

Featured Image by Shutterstock/Julien Tromeur

Ask A PPC: What Is The PPC Manager’s Role In The AI Era? via @sejournal, @navahf

Every few months, someone asks a version of the same question: “What happens to PPC managers now that AI runs the platforms?” The question usually comes wrapped in anxiety, sometimes in frustration, and often in the hope that there is still a lever left to pull.

At this point, the answer has become clearer. PPC did not lose its human role. It shed the parts of the job that never required human judgment in the first place. The real shift is not about replacement. It is about responsibility.

Automation exposed where strategy was missing.

What Still Matters In PPC

PPC still lives and dies by business context. AI does not understand your margins, your inventory constraints, or which customers actually grow the business over time. It also does not know when a message feels off-brand, misaligned, or risky.

The fundamentals still belong to humans.

Business strategy sets direction. Creativity determines how a brand earns attention. Human insight defines personas, priorities, and tradeoffs. AI can optimize toward an outcome, but it cannot decide which outcome matters most.

Teams that struggle in the AI era rarely struggle because machines outperform them. They struggle because they never clearly defined what success meant beyond short-term efficiency.

How PPC Tasks Are Changing

The day-to-day work of PPC has changed significantly. Account management no longer rewards micromanagement. Data relationships matter more than granular keyword sculpting. Message mapping must account for systems that assemble ads dynamically rather than follow static instructions.

Automation now handles execution better than humans ever could. Machines win at real-time bidding, predictive logic, and pattern recognition across massive datasets. Humans still own the decisions that shape those systems.

This shift creates discomfort for practitioners who built careers on control. It creates opportunity for those willing to trade knobs for judgment.

Account Structure In An Automated World

Modern PPC account structure follows one rule above all others. Consolidation wins.

Platforms need data density to learn. Fragmented accounts starve algorithms and produce misleading conclusions. In my experience, campaigns that fail to reach roughly 30 conversions within 30 days rarely generate stable performance signals. Manual bidding collapses under the weight of sparse data, especially when layered with audiences, match types, and device modifiers.

Consolidation means fewer campaigns with clearer goals. Consolidating also makes it easier to deploy sufficient budget to exit learning phases.

Google supports this through close variants, dynamic search ads, and increasingly flexible matching. Microsoft and Meta allow precise targeting at the ad group or ad set level while still benefiting from broader delivery.

While segmentation might be comfortable because “it’s how we’ve always managed campaigns,” it makes it very challenging to ensure budgets are deployed correctly.
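As a rough illustration of that consolidation check, here is a minimal sketch that flags campaigns below the 30-conversions-in-30-days rule of thumb as candidates to merge (the campaign names and numbers are invented; real counts would come from your platform’s reporting export):

```python
# Last-30-day conversion counts per campaign (illustrative numbers).
campaigns = {
    "Brand - Exact": 112,
    "Generic - Boots": 18,
    "Generic - Hiking Gear": 9,
    "Remarketing - All Visitors": 41,
}

THRESHOLD = 30  # rough floor for stable performance signals

consolidation_candidates = [
    name for name, conversions in campaigns.items() if conversions < THRESHOLD
]

print("Candidates to merge into broader campaigns:", consolidation_candidates)
```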

Data Cleanliness Becomes The Real Bottleneck

First-party data determines how well algorithms can marry your business goals with potential placements. If the data isn’t accurate, you face ad platforms over-indexing on the wrong “wins.”

CRM integrations break accounts when lifecycle stages drift from reality. Micro-conversions can be helpful, but they need to be paired with realistic return on ad spend (ROAS) goals.

Google now allows secondary conversions to inform bidding decisions. That flexibility helps advertisers who think carefully about value. It punishes those who inflate metrics to make reports look better.

Imperfect data produces imperfect performance. AI does not fix broken inputs. It accelerates their consequences.

Rethinking KPIs And Reporting

Performance media and brand media no longer live in separate lanes. AI blends them by design. Metrics like click-through rate, conversion rate, ROAS, and CPA now reflect mixed intent rather than pure demand capture.

Teams must set goals that acknowledge blended influence, including brand lift and assisted conversions. Budgets must support top-of-funnel exposure for users who do not yet know what they need. Reporting must evolve past the illusion of isolation.

Blended metrics represent the new standard. Advertisers who demand perfect attribution often measure familiarity rather than impact.

AI Beyond The Account Interface

Some of the biggest shifts in PPC sit outside practitioner control. AI-powered surfaces introduce new questions about where ads belong and when they help.

Most AI queries lack transactional intent. They function more like brand interactions than shopping moments. Platforms generally restrict ads to situations where purchase intent exists, which protects both advertisers and users.

Top 5 topics and intents from the Microsoft Copilot usage study (Screenshot by author, January 2026)

Serving ads in non-transactional AI environments risks irritating prospects rather than advancing consideration. Restraint often performs better than presence.

Practitioners now play the role of translator. Clients need help understanding how AI determines readiness and relevance. Ads shown within AI systems tend to carry higher relevancy because the system has already qualified the user’s intent.

Chasing every placement rarely pays off. Knowing when not to show up has become a competitive advantage.

Privacy, Content, And Creative Reality

Perfect data rarely exists. The same applies to websites and creative assets.

Auto-generated creative reflects the source material it pulls from. When advertisers dislike the output, the issue usually lives upstream. If the seed website/landing page doesn’t result in ideal content, that could indicate deeper issues crawling the site and ingesting the content for AI.

PPC teams benefit from closer collaboration with SEO and content teams. Improving site clarity improves both paid performance and AI-driven visibility. Creative quality no longer lives in isolation.

The Human Role Going Forward

Humans still make the decisions that matter most.

They decide how to allocate budget across objectives. They prioritize which business lines deserve scale. They choose which personas to pursue and which messages carry risk. They determine what data enters the system and how honestly it reflects reality.

Automation handles bidding, pacing, and formatting. Humans handle meaning.

Manual bid adjustments and creative micromanagement no longer define excellence. Strategic clarity does. Clean data does. Sound judgment does.

The AI era did not erase the human role in PPC. It stripped away the noise and left the work that actually requires expertise.


Featured Image: Paulo Bobita/Search Engine Journal

More Sites Blocking LLM Crawling – Could That Backfire On GEO? via @sejournal, @martinibuster

Hostinger released an analysis showing that businesses are blocking AI systems used to train large language models while allowing AI assistants to continue to read and summarize more websites. The company examined 66.7 billion bot interactions across 5 million websites and found that AI assistant crawlers used by tools such as ChatGPT now reach more sites even as companies restrict other forms of AI access.

Hostinger Analysis

Hostinger is a web host and also a no-code, AI agent-driven platform for building online businesses. The company said it analyzed anonymized website logs to measure how verified crawlers access sites at scale, allowing it to compare changes in how search engines and AI systems retrieve online content.

The analysis they published shows that AI assistant crawlers expanded their reach across websites during a five-month period. Data was collected during three six-day windows in June, August, and November 2025.

OpenAI’s SearchBot increased coverage from 52 percent to 68 percent of sites, while Applebot (which indexes content for powering Apple’s search features) doubled from 17 percent to 34 percent. During the same period, traditional search crawlers essentially remained constant. The data indicates that AI assistants are adding a new layer to how information reaches users rather than replacing search engines outright.

At the same time, the data shows that companies sharply reduced access for AI training crawlers. OpenAI’s GPTBot dropped from access on 84 percent of websites in August to 12 percent by November. Meta’s ExternalAgent dropped from 60 percent to 41 percent of websites. These crawlers collect data over time to improve AI models and update their Parametric Knowledge, but many businesses are blocking them, either to limit how their data is used or out of concern about copyright infringement.
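In practice, the split Hostinger describes is usually expressed in robots.txt: disallow the training crawlers while leaving search and assistant crawlers alone. Here is a minimal sketch, assuming the user-agent tokens the vendors currently document (GPTBot and OAI-SearchBot for OpenAI, meta-externalagent for Meta, Applebot for Apple); token names should be checked against each company’s crawler documentation before relying on them:

# Block crawlers that collect model training data
User-agent: GPTBot
Disallow: /

User-agent: meta-externalagent
Disallow: /

# Let search and assistant crawlers keep reading the site
User-agent: OAI-SearchBot
Allow: /

User-agent: Applebot
Allow: /

Keep in mind that robots.txt is advisory: compliant crawlers honor it, but it is not an access control.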

Parametric Knowledge

Parametric Knowledge, also known as Parametric Memory, is the information that is “hard-coded” into the model during training. It is called “parametric” because the knowledge is stored in the model’s parameters (the weights). Parametric Knowledge is long-term memory about entities, for example, people, things, and companies.

When a person asks an LLM a question, the LLM may recognize an entity, such as a business, and then retrieve the associated facts (stored as vectors) that it learned during training. So, when a business blocks a training bot from its website, it keeps the LLM from knowing anything about it, which might not be the best thing for an organization that’s concerned about AI visibility.

Allowing an AI training bot to crawl a company website gives that company some control over what the LLM learns about it: what the company does, its branding, whatever is on the About Us page, and the products or services it offers. An informational site may benefit from being cited for answers.

Businesses Are Opting Out Of Parametric Knowledge

Hostinger’s analysis shows that businesses are “aggressively” blocking AI training crawlers. While Hostinger’s research doesn’t mention this, the effect of blocking AI training bots is that businesses are essentially opting out of an LLM’s parametric knowledge: the model is prevented from learning directly from first-party content during training, which removes the site’s ability to tell its own story and forces the LLM to rely on third-party data or knowledge graphs.

Hostinger’s research shows:

“Based on tracking 66.7 billion bot interactions across 5 million websites, Hostinger uncovered a significant paradox:

Companies are aggressively blocking AI training bots, the systems that scrape content to build AI models. OpenAI’s GPTBot dropped from 84% to 12% of websites in three months.

However, AI assistant crawlers, the technology that ChatGPT, Apple, etc. use to answer customer questions, are expanding rapidly. OpenAI’s SearchBot grew from 52% to 68% of sites; Applebot doubled to 34%.”

A recent post on Reddit shows how blocking LLM access to content has become normalized as a way to protect intellectual property (IP).

The post starts with a question about how to block AI systems:

“I want to make sure my site is continued to be indexed in Google Search, but do not want Gemini, ChatGPT, or others to scrape and use my content.

What’s the best way to do this?”

Screenshot Of A Reddit Conversation

Later in the thread, someone asked whether the goal of blocking LLMs was to protect intellectual property, and the original poster confirmed that it was:

“We publish unique content that doesn’t really exist elsewhere. LLMs often learn about things in this tiny niche from us. So we need Google traffic but not LLMs.”

That may be a valid reason. A site that publishes unique instructional information about a software product, information that doesn’t really exist elsewhere, may want to block LLMs from training on its content; otherwise the LLM can answer those questions itself and remove the need to visit the site.

But for other sites with less unique content, such as a product review and comparison site or an ecommerce site, blocking LLMs from adding information about the site to their parametric memory might not be the best strategy.
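For the specific situation raised in the Reddit thread, staying indexed in Google Search while opting out of Gemini and ChatGPT training, the commonly documented route is also robots.txt. A minimal sketch, assuming Google’s Google-Extended token (a control that affects use of content for Gemini, not Search indexing) and OpenAI’s GPTBot; as with the earlier example, verify the tokens against current vendor documentation:

# Keep normal Google Search crawling and indexing
User-agent: Googlebot
Allow: /

# Opt out of content being used for Gemini model training
User-agent: Google-Extended
Disallow: /

# Opt out of OpenAI model training
User-agent: GPTBot
Disallow: /

Google documents Google-Extended as not affecting how Googlebot crawls or ranks a site, which is why this arrangement fits the “Google traffic but not LLMs” requirement described above.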

Brand Messaging Is Lost To LLMs

As AI assistants answer questions directly, users may receive information without needing to visit a website. This can reduce direct traffic and limit the reach of a business’s pricing details, product context, and brand messaging. It’s possible that the customer journey ends inside the AI interface, and businesses that block LLMs from learning about their companies and offerings are essentially relying on the search crawler and search index to fill that gap (and maybe that works).

The increasing use of AI assistants affects marketing and extends into revenue forecasting. When AI systems summarize offers and recommendations, companies that block LLMs have less control over how pricing and value appear. Advertising efforts lose visibility earlier in the decision process, and ecommerce attribution becomes harder when purchases follow AI-generated answers rather than direct site visits.

According to Hostinger, some organizations are becoming more selective about which content is available to AI, especially AI assistants.

Tomas Rasymas, Head of AI at Hostinger, commented:

“With AI assistants increasingly answering questions directly, the web is shifting from a click-driven model to an agent-mediated one. The real risk for businesses isn’t AI access itself, but losing control over how pricing, positioning, and value are presented when decisions are made.”

Takeaway

Blocking LLMs from training on website data shouldn’t be the automatic default position, even though many people feel real anger and annoyance at the idea of an LLM training on their content. A more considered response weighs the benefits against the disadvantages and asks whether those disadvantages are real or merely perceived.

Featured Image by Shutterstock/Lightspring

All anyone wants to talk about at Davos is AI and Donald Trump

This story first appeared in The Debrief, our subscriber-only newsletter about the biggest news in tech by Mat Honan, Editor in Chief. Subscribe to read the next edition as soon as it lands.

Hello from the World Economic Forum annual meeting in Davos, Switzerland. I’ve been here for two days now, attending meetings, speaking on panels, and basically trying to talk to anyone I can. And as far as I can tell, the only things anyone wants to talk about are AI and Trump. 

Davos is physically defined by the Congress Center, where the official WEF sessions take place, and the Promenade, a street running through the center of the town lined with various “houses”—mostly retailers that are temporarily converted into meeting hubs for various corporate or national sponsors. So there is a Ukraine House, a Brazil House, Saudi House, and yes, a USA House (more on that tomorrow). There are a handful of media houses from the likes of CNBC and the Wall Street Journal. Some houses are devoted to specific topics; for example, there’s one for science and another for AI. 

But like everything else in 2026, the Promenade is dominated by tech companies. At one point I realized that literally everything I could see, in a spot where the road bends a bit, was a tech company house. Palantir, Workday, Infosys, Cloudflare, C3.ai. Maybe this should go without saying, but their presence, both in the houses and on the various stages and parties and platforms here at the World Economic Forum, really drove home to me how utterly and completely tech has captured the global economy. 

While the houses host events and serve as networking hubs, the big show is inside the Congress Center. On Tuesday morning, I kicked off my official Davos experience there by moderating a panel with the CEOs of Accenture, Aramco, Royal Philips, and Visa. The topic was scaling up AI within organizations. All of these leaders represented companies that have gone from pilot projects to large internal implementations. It was, for me, a fascinating conversation. You can watch the whole thing here, but my takeaway was that while there are plenty of stories about AI being overhyped (including from us), it is certainly having substantive effects at large companies.  

Aramco CEO Amin Nasser, for example, described how that company has found $3 billion to $5 billion in cost savings by improving the efficiency of its operations. Royal Philips CEO Roy Jakobs described how it was allowing health-care practitioners to spend more time with patients by doing things such as automated note-taking. (This really resonated with me, as my wife is a pediatrics nurse, and for decades now I’ve heard her talk about how much of her time is devoted to charting.) And Visa CEO Ryan McInerney talked about his company’s push into agentic commerce and the way that will play out for consumers, small businesses, and the global payments industry. 

To elaborate a little on that point, McInerney painted a picture of commerce where agents won’t just shop for things you ask them to, which will be basically step one, but will eventually be able to shop for things based on your preferences and previous spending patterns. This could be your regular grocery shopping, or even a vacation getaway. That’s going to require a lot of trust and authentication to protect both merchants and consumers, but it is clear that the steps into agentic commerce we saw in 2025 were just baby ones. There are much bigger ones coming for 2026. (Coincidentally, I had a discussion with a senior executive from Mastercard on Monday, who made several of the same points.) 

But the thing that really resonated with me from the panel was a comment from Accenture CEO Julie Sweet, who has a view not only of her own large org but across a spectrum of companies: “It’s hard to trust something until you understand it.” 

I felt that neatly summed up where we are as a society with AI. 

Clearly, other people feel the same. Before the official start of the conference I was at AI House for a panel. The place was packed. There was a consistent, massive line to get in, and once inside, I literally had to muscle my way through the crowd. Everyone wanted to get in. Everyone wanted to talk about AI. 

(A quick aside on what I was doing there: I sat on a panel called “Creativity and Identity in the Age of Memes and Deepfakes,” led by Atlantic CEO Nicholas Thompson; it featured the artist Emi Kusano, who works with AI, and Duncan Crabtree-Ireland, the chief negotiator for SAG-AFTRA, who has been at the center of a lot of the debates about AI in the film and gaming industries. I’m not going to spend much time describing it because I’m already running long, but it was a rip-roarer of a panel. Check it out.)

And, okay. Sigh. Donald Trump. 

The president is due here Wednesday, amid threats of seizing Greenland and fears that he’s about to permanently fracture the NATO alliance. While AI is all over the stages, Trump is dominating all the side conversations. There are lots of little jokes. Nervous laughter. Outright anger. Fear in the eyes. It’s wild. 

These conversations are also starting to spill out into the public. Just after my panel on Tuesday, I headed to a pavilion outside the main hall in the Congress Center. I saw someone coming down the stairs with a small entourage, who was suddenly mobbed by cameras and phones. 

Moments earlier in the same spot, the press had been surrounding David Beckham, shouting questions at him. So I was primed for it to be another celebrity—after all, captains of industry were everywhere you looked. I mean, I had just bumped into Eric Schmidt, who was literally standing in line in front of me at the coffee bar. Davos is weird. 

But in fact, it was Gavin Newsom, the governor of California, who is increasingly seen as the leading voice of the Democratic opposition to President Trump, and a likely contender, or even front-runner, in the race to replace him. Because I live in San Francisco I’ve encountered Newsom many times, dating back to his early days as a city supervisor before he was even mayor. I’ve rarely, rarely, seen him quite so worked up as he was on Tuesday. 

Among other things, he called Trump a narcissist who follows “the law of the jungle, the rule of Don” and compared him to a T-Rex, saying, “You mate with him or he devours you.” And he was just as harsh on the world leaders, many of whom are gathered in Davos, calling them “pathetic” and saying he should have brought knee pads for them. 

Yikes.

There was more of this sentiment, if in more measured tones, from Canadian prime minister Mark Carney during his address at Davos. While I missed his remarks, they had people talking. “If we’re not at the table, we’re on the menu,” he argued. 

The Download: Trump at Davos, and AI scientists

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

All anyone wants to talk about at Davos is AI and Donald Trump

—Mat Honan, MIT Technology Review’s editor in chief 

At Davos this year Trump is dominating all the side conversations. There are lots of little jokes. Nervous laughter. Outright anger. Fear in the eyes. It’s wild. The US president is due to speak here today, amid threats of seizing Greenland and fears that he’s about to permanently fracture the NATO alliance.

But Trump isn’t the only game in town—everyone’s also talking about AI. Read Mat’s story to find out more.

This subscriber-only story appeared first in The Debrief, Mat’s weekly newsletter about the biggest stories in tech. Sign up here to get the next one in your inbox, and subscribe if you haven’t already!

The UK government is backing AI that can run its own lab experiments

A number of startups and university teams that are building “AI scientists” to design and run experiments in the lab, including robot biologists and chemists, have just won extra funding from the UK government agency that funds moonshot R&D.  

The competition, set up by ARIA (the Advanced Research and Invention Agency), gives a clear sense of how fast this technology is moving: The agency received 245 proposals from research teams that are already building tools capable of automating increasing amounts of lab work. Read the full story to learn more. 

—Will Douglas Heaven 

Everyone wants AI sovereignty. No one can truly have it.

—Cathy Li is head of the Centre for AI Excellence at the World Economic Forum

Governments plan to pour $1.3 trillion into AI infrastructure by 2030 to invest in “sovereign AI,” on the premise that countries should control their own AI capabilities. The funds include financing for domestic data centers, locally trained models, independent supply chains, and national talent pipelines.

This is a response to real shocks: covid-era supply chain breakdowns, rising geopolitical tensions, and the war in Ukraine. But the pursuit of absolute autonomy is running into reality: AI supply chains are irreducibly global. If sovereignty is to remain meaningful, it must shift from defensive self-reliance to a vision that balances national autonomy with strategic partnership. Read the full story.

Here’s how extinct DNA could help us in the present—and the future

Thanks to genetic science, gene editing, and techniques like cloning, it’s now possible to move DNA through time, studying genetic information in ancient remains and then re-creating it in the bodies of modern beings. And that, scientists say, offers new ways to try to help endangered species, engineer new plants that resist climate change, or even create new human medicines.  

Read more about why genetic resurrection is one of our 10 Breakthrough Technologies this year, and check out the rest of the list.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The White House wants Americans to embrace AI
It faces an uphill battle—the US public is mostly pretty gloomy about AI’s impact. (WP $) 
+ What’s next for AI in 2026. (MIT Technology Review)

2 The UN says we’re entering an “era of water bankruptcy” 
And it’s set to affect the vast majority of us on the planet. (Reuters $)
+ Water shortages are fueling the protests in Iran. (Undark)
+ This Nobel Prize–winning chemist dreams of making water from thin air. (MIT Technology Review)

3 How is US science faring after a year of Trump?
Not that well, after proposed budget cuts amounting to $32 billion. (Nature $)
+ The foundations of America’s prosperity are being dismantled. (MIT Technology Review)

4 We need to talk about the early career AI jobs crisis 
Young people are graduating and finding there simply aren’t any roles for them to do. (NY Mag $)
+ AI companies are fighting to win over teachers. (Axios $)
+ Chinese universities want students to use more AI, not less. (MIT Technology Review)

5 The AI boyfriend business is booming in China
And it’s mostly geared towards Gen Z women. (Wired $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)

6 Snap has settled a social media addiction lawsuit ahead of a trial 
However, the other defendants, including Meta, TikTok, and YouTube, are still fighting it. (BBC)
+ A new study is going to examine the effects of restricting social media for children. (The Guardian)

7 Here are some of the best ideas of this century so far
From smartphones to HIV drugs, the pace of progress has been dizzying. (New Scientist $)

8 Robots may be on the cusp of becoming very capable
Until now, their role in the world of work has been limited. AI could radically change that. (FT $)
+ Why the humanoid workforce is running late. (MIT Technology Review)

9 Scientists are racing to put a radio telescope on the moon 
If they succeed, it will be able to ‘hear’ all the way back to over 13 billion years ago, just 380,000 years after the big bang. (IEEE Spectrum)
+ Inside the quest to map the universe with mysterious bursts of radio energy. (MIT Technology Review)

10 It turns out cows can use tools
What will we discover next? Flying pigs?! (Futurism)

Quote of the day

“We’re still staggering along, but I don’t know for how much longer. I don’t have the energy any more.”

—A researcher at the National Oceanic and Atmospheric Administration tells Nature they and their colleagues are exhausted by the Trump administration’s attacks on science.  

One more thing

Palmer Luckey on the Pentagon’s future of mixed reality

Palmer Luckey has, in some ways, come full circle.  

His first experience with virtual-reality headsets was as a teenage lab technician at a defense research center in Southern California, studying their potential to curb PTSD symptoms in veterans. He then built Oculus, sold it to Facebook for $2 billion, left Facebook after a highly public ousting, and founded Anduril, which focuses on drones, cruise missiles, and other AI-enhanced technologies for the US Department of Defense. The company is now valued at $14 billion.

Now Luckey is redirecting his energy again, to headsets for the military. He spoke to MIT Technology Review about his plans. Read the full interview.

—James O’Donnell

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ I want to skip around every single one of these beautiful gardens.
+ Your friends help you live longer. Isn’t that nice of them?!
+ Brb, just buying a pharaoh headdress for my cat.
+ Consider this your annual reminder that you don’t need a gym membership or fancy equipment to get fitter.