MCP, A2A, NLWeb, And AGENTS.md: The Standards Powering The Agentic Web

This is Part 3 in a five-part series on optimizing websites for the agentic web. Part 1 covered the evolution from SEO to AAIO. Part 2 explored how to get your content cited in AI responses. This article goes deeper: the protocols forming the infrastructure layer that make everything else possible.

The early web needed HTTP to transport data, HTML to structure content, and the W3C to keep everyone building on the same foundation. Without those shared standards, we’d have ended up with a fragmented collection of incompatible networks instead of a single web.

The agentic web is at that same inflection point. AI agents need standardized ways to connect to tools, talk to each other, query websites, and understand codebases. Without shared protocols, every AI vendor builds proprietary integrations, and the result is the same fragmentation the early web narrowly avoided.

Four protocols are emerging as the foundational layer. This article covers what each one does, who’s behind it, and what it means for your business. Throughout this series, we draw exclusively from official documentation, research papers, and announcements from the companies building this infrastructure.

Why Standards Matter

Consider how the original web came together. In the early 1990s, competing browser vendors and incompatible standards were fragmenting what should have been a unified network. The W3C brought order by establishing shared protocols. HTTP handled transport. HTML handled structure. Everyone agreed on the rules, and the web took off.

AI is at a similar crossroads. Right now, every major AI company is building agents that need to interact with external tools, data sources, other agents, and websites. Without standards, connecting your business systems to AI means building separate integrations for Claude, ChatGPT, Gemini, Copilot, and whatever comes next. That’s the M x N problem: M different AI models times N different tools equals an unsustainable number of custom connections.

What makes this moment remarkable is who’s building the solution together. On Dec. 9, 2025, the Linux Foundation announced the Agentic AI Foundation (AAIF), a vendor-neutral governance body for agentic AI standards. Eight platinum members anchor it: AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI.

OpenAI, Anthropic, Google, and Microsoft. Competing on AI products, collaborating on AI infrastructure. As Linux Foundation Executive Director Jim Zemlin put it: “We are seeing AI enter a new phase, as conversational systems shift to autonomous agents that can work together.”

This is a bigger deal than most people realize. Competitors are building shared infrastructure because they all recognize that proprietary standards would hold back the entire ecosystem, themselves included.

MCP: The Universal Adapter

What it is: The Model Context Protocol (MCP) is an open standard for connecting AI applications to external tools, data sources, and workflows.

The official analogy is apt:

“Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect electronic devices, MCP provides a standardized way to connect AI applications to external systems.”

Before MCP, if you wanted your database, CRM, or internal tools accessible to an AI assistant, you had to build a custom integration for each AI platform. MCP replaces that with a single standard interface. Build one MCP server for your data, and every MCP-compatible AI system can connect to it.

The numbers are striking. MCP launched as an open-source project from Anthropic on Nov. 25, 2024. In just over a year, it reached 97 million monthly SDK downloads across Python and TypeScript, with over 10,000 public MCP servers built by the community.

The adoption timeline tells the story. Anthropic’s Claude had native MCP support from day one. In March 2025, OpenAI CEO Sam Altman announced support across OpenAI’s products, stating: “People love MCP and we are excited to add support across our products.” Google followed in April, confirming MCP support in Gemini. Microsoft joined the MCP steering committee at Build 2025 in May, with MCP support in VS Code reaching general availability in July 2025.

From internal experiment to industry standard in 12 months. That pace of adoption signals something real.

What this means for your business: If your data, tools, or services are MCP-accessible, every major AI platform can use them. That’s not a theoretical benefit. It means an AI assistant helping your customer can pull real-time product availability from your inventory system, check order status from your CRM, or retrieve pricing from your database, all through one standardized connection rather than platform-specific integrations.
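
To make that concrete, here is a minimal sketch of an MCP server built with the FastMCP helper from the official Python SDK. The inventory dictionary and the check_stock tool are hypothetical stand-ins for a real backend; treat this as an illustration of the pattern, not a production integration.

```python
# Minimal MCP server sketch (assumes the official Python SDK: pip install mcp).
# STOCK and check_stock are hypothetical stand-ins for a real inventory system.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("store-inventory")

STOCK = {"SKU-1001": 42, "SKU-1002": 0}  # fake catalog for illustration

@mcp.tool()
def check_stock(sku: str) -> str:
    """Return the current stock level for a product SKU."""
    count = STOCK.get(sku)
    if count is None:
        return f"Unknown SKU: {sku}"
    return f"{sku}: {count} units in stock"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to any MCP-compatible client
```

Once a server like this is registered with an MCP-compatible assistant, the model can call check_stock on a customer’s behalf without any platform-specific integration work.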

A2A: How Agents Talk To Each Other

What it is: The Agent2Agent protocol (A2A) enables AI agents from different vendors to discover each other’s capabilities and collaborate on tasks.

If MCP is how agents connect to tools, A2A is how agents connect to each other. The distinction matters. In a world where businesses use AI agents from Salesforce for CRM, ServiceNow for IT, and an internal agent for billing, these agents need a way to discover what each other can do, delegate tasks, and coordinate responses. A2A provides that.

Google launched A2A on April 9, 2025, with over 50 technology partners. In June, Google donated the protocol to the Linux Foundation. By July, version 0.3 shipped with over 150 supporting organizations, including Salesforce, SAP, ServiceNow, PayPal, Atlassian, Microsoft, and AWS.

The core concept is the Agent Card: a JSON metadata document that serves as a digital business card for agents. Each A2A-compatible agent publishes an Agent Card at a standard web address (/.well-known/agent-card.json) describing its identity, capabilities, skills, and authentication requirements. When one agent needs help with a task, it reads another agent’s card to understand what that agent can do, then communicates through A2A to request collaboration.
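
Here is what an Agent Card might look like, abridged and with hypothetical values; the full set of required fields is defined by the A2A specification, so consult it before relying on this exact shape.

```json
{
  "name": "Billing Agent",
  "description": "Calculates refunds and answers billing questions",
  "url": "https://billing.example.com/a2a",
  "version": "1.0.0",
  "capabilities": { "streaming": true },
  "defaultInputModes": ["text/plain"],
  "defaultOutputModes": ["application/json"],
  "skills": [
    {
      "id": "calculate-refund",
      "name": "Calculate refund",
      "description": "Computes the refund owed for a given order",
      "tags": ["billing", "refunds"]
    }
  ]
}
```

A customer service agent that fetches this card from /.well-known/agent-card.json learns that this agent can compute refunds and how to reach it, which is all it needs to delegate that step of a task.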

Google’s own framing of how these pieces fit together is useful: “Build with ADK, equip with MCP, communicate with A2A.” ADK (Agent Development Kit) is Google’s framework for building agents, MCP gives them access to tools, and A2A lets them talk to other agents.

Here’s a practical example. A customer contacts your company with a billing question that requires a refund. Your customer service agent (built on one platform) identifies the issue, passes the context to your billing agent (built on another platform) via A2A, which calculates the refund amount and hands off to your payments agent (yet another platform) to process it. The customer sees one seamless interaction. Behind the scenes, three agents from different vendors collaborated through a shared protocol.

The enterprise adoption signal is strong. When Salesforce, SAP, ServiceNow, and every major consultancy sign on to a protocol within months, it’s because their enterprise clients are already running into the multi-vendor agent coordination problem that A2A solves.

NLWeb: Making Websites Conversational

What it is: NLWeb (Natural Language Web) is an open project from Microsoft that turns any website into a natural language interface, queryable by both humans and AI agents.

Of the four protocols covered here, NLWeb is the most directly relevant to this series’ audience. MCP, A2A, and AGENTS.md are primarily developer concerns. NLWeb is about your website.

NLWeb was introduced at Microsoft Build 2025 on May 19, 2025. It was conceived and developed by R.V. Guha, who joined Microsoft as CVP and Technical Fellow. If that name sounds familiar, it should: Guha is the creator of RSS, RDF, and Schema.org, three standards that fundamentally shaped how the web organizes and syndicates information. When the person behind Schema.org builds a new web protocol, it’s worth paying attention.

The key insight behind NLWeb is that websites already publish structured data. Schema.org markup, RSS feeds, product catalogs, recipe databases. NLWeb leverages these existing formats, combining them with AI to let users and agents query a website’s content using natural language instead of clicking through pages.

Microsoft’s framing is deliberate: “NLWeb can play a similar role to HTML in the emerging agentic web.” The NLWeb README puts it even more directly: “NLWeb is to MCP/A2A what HTML is to HTTP.”

Every NLWeb instance is automatically an MCP server. That means any website running NLWeb immediately becomes accessible to the entire ecosystem of MCP-compatible AI assistants and agents. Your website’s content doesn’t just sit there waiting for visitors. It becomes actively queryable by any AI system that speaks MCP.

Early adopters include Eventbrite, Shopify, Tripadvisor, O’Reilly Media, Common Sense Media, and Hearst. These are content-rich websites that already invest heavily in structured data. NLWeb builds directly on that investment.

Here’s what this looks like in practice. Instead of a user navigating Tripadvisor’s search filters to find family-friendly restaurants in Barcelona with outdoor seating, an AI agent could query Tripadvisor’s NLWeb endpoint: “Find family-friendly restaurants in Barcelona with outdoor seating and good reviews.” The response comes back as structured Schema.org JSON, ready for the agent to present to the user or act on.
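
As a rough sketch, that interaction could look like the snippet below. It assumes a site running NLWeb with its natural language ask endpoint exposed; the host, parameter names, and response fields shown here are illustrative assumptions and may differ across NLWeb versions, so check the project’s documentation for the current interface.

```python
# Hypothetical query against an NLWeb-enabled site's ask endpoint.
# Host, parameters, and response fields are illustrative assumptions.
import requests

resp = requests.get(
    "https://travel-site.example.com/ask",
    params={"query": "family-friendly restaurants in Barcelona with outdoor seating"},
    timeout=30,
)
resp.raise_for_status()

# NLWeb responses carry Schema.org-shaped items the agent can act on.
for item in resp.json().get("results", []):
    print(item.get("name"), "-", item.get("url"))
```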

If your business has already invested in Schema.org markup (and Part 2 of this series explained why you should), you’re closer to NLWeb readiness than you might think.

AGENTS.md: Instructions For AI Coders

What it is: AGENTS.md is a standardized Markdown file that provides AI coding agents with project-specific guidance, essentially a README written for machines instead of humans.

This protocol is less directly relevant to the marketers and strategists reading this series, but it’s an important piece of the complete picture, especially if your organization has development teams using AI coding tools.

AGENTS.md emerged from a collaboration between OpenAI Codex, Google Jules, Cursor, Amp, and Factory. The problem they were solving: AI coding agents need to understand project conventions, build steps, testing requirements, and architectural decisions before they can contribute useful code. Without explicit guidance, agents make assumptions that lead to inconsistent, buggy output.

Since its release in August 2025, AGENTS.md has been adopted by over 60,000 open-source projects and is supported by tools including GitHub Copilot, Claude Code, Cursor, Gemini CLI, VS Code, and many others. It’s now governed by the Agentic AI Foundation, alongside MCP.

The file itself is simple. Plain Markdown, typically under 150 lines, covering build commands, architectural overview, coding conventions, and testing requirements. Agents read it before making any changes, getting the same tribal knowledge that senior engineers carry in their heads.
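
A condensed, hypothetical AGENTS.md might look like the following; real files are tailored to each project’s stack and conventions.

```markdown
# AGENTS.md

## Build and test
- Install dependencies: `npm install`
- Run the full test suite before committing: `npm test`

## Conventions
- TypeScript strict mode; avoid `any`.
- Match the existing folder structure under `src/`.

## Boundaries
- Never edit files under `generated/`.
- Database changes go through migrations in `db/migrations/`.
```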

GitHub reports that Copilot now generates 46% of code for its users. When nearly half of code is AI-generated, having a standard way to ensure agents follow your conventions, security practices, and architectural patterns isn’t optional. It’s quality control.

Why this matters for your business: If your development teams use AI coding tools (and most do), AGENTS.md ensures those tools produce code that matches your standards. It reduces agent-generated bugs, cuts onboarding time for AI tools on new projects, and provides consistency across teams.

How They Fit Together

These four protocols aren’t competing. They’re complementary layers in the same stack.

Protocol | Created By | Purpose | Web Analogy
MCP | Anthropic | Connect agents to tools and data | USB ports
A2A | Google | Agent-to-agent communication | Email/messaging
NLWeb | Microsoft | Make websites queryable by agents | HTML
AGENTS.md | OpenAI + collaborators | Guide AI coding agents | README files
AAIF | Linux Foundation | Governance and standards body | W3C

The stack works like this: MCP provides the plumbing for agents to access tools and data. A2A enables agents to coordinate with each other. NLWeb makes website content accessible to the entire ecosystem. AGENTS.md ensures AI coding agents build correctly. And the Agentic AI Foundation provides the governance layer, ensuring these protocols remain open, vendor-neutral, and interoperable.

The parallel to the original web is impossible to ignore:

  • HTTP (transport) maps to MCP (tool access) and A2A (agent communication).
  • HTML (content structure) maps to NLWeb (website content for agents).
  • W3C (governance) maps to AAIF (governance).

What’s different this time is the speed. HTTP took years to gain broad adoption. MCP went from launch to universal platform support in 12 months. A2A grew from 50 to 150+ partner organizations in three months. NLWeb shipped with major publisher adoption at launch. AGENTS.md reached 60,000 projects within its first few months.

The infrastructure is being built at internet speed, not standards-committee speed. That’s partly because the companies involved are the same ones building the agents that need these protocols. They’re motivated.

And these four aren’t the only protocols emerging. Commerce-specific standards are building the transaction layer: Shopify and Google co-developed the Universal Commerce Protocol (UCP), launched in January 2026 with support from Etsy, Target, Walmart, and Wayfair. OpenAI and Stripe co-developed the Agentic Commerce Protocol (ACP), which powers Instant Checkout in ChatGPT. CopilotKit’s AG-UI protocol addresses agent-to-frontend communication, with integrations from LangGraph, CrewAI, and Google ADK. We’ll cover the commerce protocols in depth in Part 5.

What This Means For Your Business

You don’t need to implement all four protocols tomorrow. But you need to understand what’s being built, because it shapes what your website, tools, and teams should be ready for.

If you’ve already invested in Schema.org markup, NLWeb is your closest on-ramp. It builds directly on the structured data you already maintain. As NLWeb adoption grows, your Schema.org investment becomes the foundation for making your website conversationally accessible to AI agents. Keep your structured data current and comprehensive.

If you have APIs or internal tools, consider MCP accessibility. Making your services available through MCP means any AI platform can interact with them. For ecommerce, that could mean product catalogs, inventory systems, and order tracking becoming accessible to AI shopping assistants across ChatGPT, Claude, Gemini, and whatever comes next.

If you’re evaluating multi-vendor agent workflows, A2A is the protocol to watch. Enterprise organizations running agents from multiple vendors (Salesforce, ServiceNow, internal tools) will increasingly need these agents to coordinate. A2A is the emerging standard for that coordination.

If your development teams use AI coding tools, adopt AGENTS.md now. It’s the simplest protocol to implement (it’s a single Markdown file) and the one with the most immediate, tangible benefit: fewer bugs, more consistent output, faster onboarding for AI tools on your codebase.

The underlying message across all four protocols is the same: the agentic web is being built on open standards, not proprietary ones. The companies that understand these standards early will be better positioned as AI agents become a primary way users interact with businesses.

These aren’t things you need to implement today. But they are things you need to understand, because Part 4 of this series gets into the technical specifics of making your website agent-ready.

Key Takeaways

  • Four protocols form the agentic web’s infrastructure. MCP (tools), A2A (agent communication), NLWeb (website content), and AGENTS.md (code guidance) are complementary layers, not competitors.
  • The speed of adoption signals real urgency. MCP reached 97 million monthly SDK downloads and universal platform support in 12 months. A2A grew from 50 to 150+ partner organizations in three months. These are not experiments.
  • Competitors are collaborating on infrastructure. OpenAI, Anthropic, Google, and Microsoft are all building shared protocols under the Agentic AI Foundation. This mirrors the W3C moment that unified the early web.
  • NLWeb is potentially the most relevant protocol for website owners. Built by the creator of Schema.org, it turns your existing structured data into a conversational interface for AI agents. Every NLWeb instance is automatically an MCP server.
  • MCP is the universal adapter. Build one MCP connection to your data, and every major AI platform (Claude, ChatGPT, Gemini, Copilot) can access it. No more building separate integrations for each platform.
  • Start with what you have. Schema.org markup readies you for NLWeb. Existing APIs can become MCP servers. AGENTS.md is a single file your dev team can create today. You don’t need to start from scratch.

The original web succeeded because competitors agreed on shared standards. The agentic web is following the same playbook, just faster. The protocols are being established now. The governance is in place. The agents are already using them.

Up next in Part 4: the hands-on technical guide for making your website ready for autonomous AI agents, from semantic HTML to accessibility standards to testing with real agent tools.

This post was originally published on No Hacks.



The 5-Pillar Framework For AI Content That Audiences Actually Trust

When I started updating an online course I’m teaching, I kept returning to the same uncomfortable observation: The content marketing profession has gotten remarkably good at producing content nobody wants to read.

That’s not a knock on the people doing the work. It’s a structural problem created by an industry that optimized for volume at precisely the moment audiences were becoming more discerning. AI turbo-charged the volume side of that equation, and now we’re living with the consequences. Production cycles that once took weeks compress into minutes. A single core message can spin out into thousands of personalized variants for specific micro-segments before lunch. We have the technical ability to create more content faster than ever before.

And yet consumer trust keeps falling. The gap between what we can produce and what actually connects with real people is widening, and most digital marketers are standing on the wrong side of it. More output is simply not the answer.

The argument I make in the course and the one I want to make here is this: AI changes how we work, not why audiences engage. The fundamentals of storytelling still apply. The difference is that mistakes now get amplified faster, and audiences have grown sophisticated enough to detect soulless content almost instantly.

Here’s how you can use AI strategically without sacrificing the human authenticity and cultural integrity your audiences actually respond to.

Understanding The Trust Gap Before You Touch Any Tool

Before getting into frameworks and tactics, it’s worth sitting with the problem for a moment, because the instinct in marketing is always to jump to solutions. Three distinct forces are eroding trust right now, and they’re operating simultaneously.

The first is algorithmic gatekeeping. The platforms have built increasingly sophisticated AI-driven filters, and those filters are getting better at detecting and suppressing low-quality, inauthentic content. The very tools that made it easier to produce content at scale are now being used by platform algorithms to identify and downrank that content.

The second force is what I’d call the authenticity crisis. As content volume has exploded since 2022, audience skepticism has risen in direct proportion. Consumers in 2026 can detect generic AI-generated output – what some researchers have started calling “slop.” If your content looks like an ad and reads like a press release, it gets filtered before it’s even consciously processed.

The third is plain audience sophistication. Your readers have now seen tens of thousands of pieces of AI-generated content. They know what it feels like, even if they can’t articulate exactly why. The brain is a prediction machine, and it ignores what it can easily predict.

The Framework: Five Pillars, One Sustainable Ecosystem

The approach I’ve developed in my online course organizes the challenge into five interconnected areas: AI-powered content strategy, visceral storytelling, multimodal optimization, audience psychology and analytics, and ethics and authenticity. Each pillar builds on the previous one. Getting the strategy wrong makes everything else harder. Getting the ethics wrong undermines everything else you’ve built.

Here’s how each one works in practice.

Pillar 1: Strategy First, Automation Second

Most marketers use AI reactively. They open a chat window when they need a first draft, get something plausible-sounding back, clean it up a little, and ship it. That approach treats AI as a shortcut rather than infrastructure, and it produces exactly the kind of generic, undifferentiated content that’s making the trust problem worse.

The shift I’m advocating is moving from random generation to what I call an architectural framework. The idea is that you build the strategy first – deeply, carefully, the way you always should have – and then use AI to execute it at scale. Strategy acts as the guardrail against the amplified mistakes that come with AI-accelerated production.

One analogy that’s changed how I talk about this in the course: Prompting AI is the same as briefing a junior writer. If you wouldn’t hand a new hire a one-line brief and expect a polished deliverable, you shouldn’t do it with AI either. A vague brief produces generic fluff. A structured brief with clear context, defined constraints, and specific tone guidelines produces something you can actually work with.

What belongs in a good AI brief? The specific audience segment and the pain point they’re experiencing right now. The emotional response you’re trying to trigger. The single action you want the reader to take. Brand voice guidelines with concrete examples of what “on-brand” actually sounds like. And critically, explicit guardrails about what the AI should not do – topics to avoid, phrases that feel off, cultural considerations that require human judgment.
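
Put together, a brief along these lines might look like the sketch below; every value is illustrative, and the headings should be adapted to your own brand and workflow.

```markdown
## AI Content Brief (illustrative example)
- Audience: Ops managers at mid-size retailers drowning in manual reporting
- Pain point: Hours lost each week reconciling spreadsheets by hand
- Emotional response: Relief: "someone finally gets this problem"
- Single action: Register for the automation walkthrough
- Voice: Direct, practical, lightly irreverent (on-brand: "Stop babysitting spreadsheets.")
- Guardrails: No ROI guarantees, no competitor names, avoid the phrase "game-changing"
```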

The workflow itself matters just as much as the brief. The most effective AI content process isn’t linear; it loops. A human sets the strategy. A hybrid prompting phase generates raw material. Then – and this is the step most teams skip – a human evaluates that output against strategic goals before anything else happens. Editing comes next to inject brand voice and emotional depth. Then publishing, then learning from the data, then feeding those insights back into the next strategy cycle. Evaluation is the most overlooked stage in AI content workflows. Without a dedicated checkpoint to assess output before it moves forward, the whole process becomes a loop of mediocrity.

Pillar 2: Visceral Storytelling And Why Safe Content Is Invisible Content

When production is fully commoditized – when anyone can generate a competent first draft in 30 seconds – storytelling becomes the only real differentiator. The problem is that most organizations have spent years training themselves out of good storytelling.

Corporate content defaults toward safety, and safe content is invisible. There are three failure modes I see constantly. The first is being too rational: leading with features and specs rather than the human experience of using something. The second is being too generic: following best practices so faithfully that the brand blends into the noise of every competitor doing the same thing. The third is being too brand-centric: talking about the company rather than the customer’s identity and aspirations.

One useful model for thinking about attention is how it moves through three phases. The limbic system reacts first, almost instantaneously: “Do I care about this? Is this interesting?” Logic only engages in phase two, after emotion has granted permission. Memory encoding happens in phase three, and only for content that cleared both previous gates. You cannot argue your way into memory. Logic justifies attention that emotion has already seized.

Visceral storytelling is content that’s felt before it’s understood. It bypasses the analytical filter to create an immediate physical or emotional response. Content that achieves this shares four qualities: It’s anchored in feelings rather than facts, it evokes sensory details (sight, sound, texture), it mirrors lived reality rather than corporate ideals, and it delivers the hook immediately rather than building toward it.

Four narrative formats do this reliably. Before-and-after structures work because they visualize transformation with high satisfaction and instant comprehension. There’s a reason the format has been used in advertising for over a century. Behind-the-scenes content demystifies the process in a way that builds genuine trust, particularly with B2B audiences trying to evaluate whether a vendor actually knows what they’re doing. First-person perspective removes the brand-voice filter entirely and creates direct human-to-human connection, which is why founder stories and employee perspectives consistently outperform official announcements. And micro-stories – a complete narrative arc compressed into a short format – work because they respect the audience’s time while still providing the emotional arc that drives engagement.

Here’s a concrete example of the transformation I’m describing. A coffee shop writes this about itself: “Our coffee shop is open 24 hours and uses high-quality beans sourced globally.” That’s accurate, inoffensive, and completely forgettable. Now consider this version: “For the late-night grinders and the early risers: fuel that traveled 4,000 miles to keep you going. We’re awake when you are.” The second version identifies the customer, creates a scene, and speaks to an emotional need. It doesn’t state facts. It describes the reality of someone experiencing those facts.

Pillar 3: Multimodal Optimization And The Repurposing Fallacy

Content needs to be optimized not just for text search anymore, but for voice, visual, and video ingestion by AI agents. That’s a significant expansion of the surface area content teams are responsible for. The instinctive response is to produce more content, but that’s the wrong answer. The right answer is smarter reuse of a single asset.

One of the most common mistakes I see in content marketing is copy-pasting the same asset across channels and calling it a distribution strategy. This fails for several reasons. TikTok’s interest graph operates completely differently from LinkedIn’s social graph, so content engineered for one will typically underperform on the other. A polished corporate video feels alienating in a raw TikTok feed. And audiences have become intuitively good at detecting content that doesn’t belong on the platform they’re using – they scroll past it without really knowing why.

The strategic shift is adapting the story’s core to each platform’s native dialect, rather than syndicating the same asset everywhere. Different platforms carry different emotional intentions for users, and successful content matches the narrative to the mindset. On Instagram, users are curating identity, so content needs to be visually aspirational. On TikTok, users seek raw entertainment, and polish is actively punished while personality is rewarded. On LinkedIn, the mode is professional development – users want peer validation and actionable insight. On YouTube, users have actively chosen to spend time, making it the natural home for long-form narrative depth.

The framework I use in the course assigns every format a distinct role in the conversion funnel. Short-form video and interactive content belong at the top, grabbing attention with high velocity. Audio and long-form text sit in the middle, building the intimacy and context that move people from awareness toward consideration. Deep interactive tools and long-form video belong at the bottom, providing the detailed utility that supports a decision.

A travel campaign called “The Hyperbolist” illustrates this well. Directed by Oscar winner Tom Hooper, the campaign targets North American long-haul travelers seeking substance over spectacle.

The campaign is built on a single narrative theme, the luxury travel experience, featuring a playful husband-and-wife dynamic: the “Hyperbolist” husband describes Dubai in sweeping, mythical terms, while the wife offers a warmer, more grounded emotional perspective. The throughline is a clever tension: the campaign acknowledges that the destination sounds like an exaggeration while insisting the reality lives up to it.

However, the campaign expresses itself entirely differently across platforms. TikTok and Reels handle discovery through fast-paced visual content. YouTube delivers planning utility through detailed itinerary guides. Instagram Carousel provides the inspirational aesthetic content that helps potential visitors imagine themselves there. The user encounters the same destination three times without experiencing the repetition fatigue that comes from seeing the same asset recycled.

Pillar 4: Measuring What Actually Matters

The most dangerous thing in content marketing right now is optimizing for the wrong metrics. Likes, impressions, and follower counts feel like success. They’re visible, they’re easy to report, and they create a satisfying sense of momentum. But they rarely guide strategic decisions because they represent visibility rather than intent.

Watch time tells you whether a narrative actually resonated. Did the audience stay for the message, or bail after five seconds? Scroll depth tells you whether the hook was efficient enough to pull people through the full piece. Repeat exposure tells you whether there’s genuine brand affinity being built or whether people are bouncing and never coming back. A user who watches 90% of a video without liking it is more valuable, behaviorally, than a user who taps the heart and scrolls on in two seconds.

SEO has largely shifted from keyword-based search intent to behavior-based retention signals. Engagement velocity (how quickly users interact after posting), completion rates, and saves and shares are the signals that trigger algorithmic amplification. High performance in behavioral metrics unlocks reach.

Translating these signals into language that resonates with leadership and clients matters too. “We got 5,000 likes” is a social media metric. “We validated brand alignment with a core demographic” is a business outcome. “The video had high watch time” is a platform stat. “We retained audience attention on a complex policy message” is a communication result. Content needs to be positioned as a business driver, not a marketing output, and that requires defining outcomes before hitting publish rather than retrofitting meaning to whatever the dashboard shows afterward.

Pillar 5: Ethics, Authenticity, And Why Trust Has Become Competitive Advantage

In an era of infinite AI-generated content, ethical transparency has shifted from a compliance question to a genuine competitive differentiator.

Three hidden costs of over-automation tend to compound each other. The first is misinformation: AI hallucinates confidently, and factual errors that get published undermine authority in ways that take a long time to repair. The second is the uncanny valley effect: Content that’s technically competent but emotionally hollow, generating disengagement because something just feels “off” about it. The third is brand erosion: When efficiency consistently overrides empathy, the brand voice gradually becomes generic and interchangeable. No single moment of damage, just a slow drift toward invisibility.

Hiding the use of AI reads as weakness to increasingly sophisticated audiences. Disclosing it clearly, with non-intrusive labeling like “AI-Assisted” or “Synthetically Generated” where appropriate, reads as strategic competence and respect for the audience’s intelligence. Transparency strengthens credibility rather than weakening it.

The governance principle I come back to most often is what I call the Human-in-the-Loop requirement. Every AI content workflow needs a human filter providing editorial oversight (fact and tone review) and cultural review (norms, values, sensitivity assessment). AI cannot be responsible for content. Only a human can take ownership of a message, and that ownership matters most precisely when something goes wrong.

A Case Study Worth Studying: The $1 Million Film

In January 2026, the 1 Billion Followers Summit Challenge, run in collaboration with Google, concluded with 3,500 global entries competing for a $1 million prize. The rules required submitted films to be made with at least 70% generative AI tools from Google. The winner was Zoubeir ElJlassi of Tunisia, with a short film called “Lily.”

The premise is deceptively simple. A lonely archivist discovers a doll at a hit-and-run scene. The doll gradually becomes a silent witness to a haunted conscience, and the weight of it forces a confession. The story is elemental: guilt, isolation, the impossibility of outrunning what you’ve done.

ElJlassi used Google’s Veo to generate the signature gloomy aesthetic and maintain visual consistency across the film. Google’s AI filmmaking tool Flow handled fine-tuning of individual scenes to ensure the characters moved and emoted with genuine nuance. Gemini served as a creative co-pilot for storyboarding and defining the look and feel from the start.

The judges called it a seamless blend of raw emotion and high-tech execution. What I find instructive about this outcome is what it tells us about what the tools actually did. None of them invented the story. None of them understood why a doll at a crime scene becomes unbearable to look at, or why confession is both the worst and the only option. The human brought the emotional core. The AI brought the execution capacity. That division of labor – human meaning, machine scale – is the model worth studying.

What To Do Starting Tomorrow

Four things are worth doing before you get to any of the more sophisticated changes.

Start by auditing your existing workflows to map exactly where AI is currently used and identify where there is no human checkpoint before content goes live. Most teams, when they do this exercise honestly, find gaps they didn’t realize existed.

Then add AI to your process intentionally rather than expansively. Pick the high-impact, low-risk areas first – idea generation, headline testing, first drafts for internal review – rather than deploying it across every content type simultaneously.

Implement a mandatory cultural review step for all external-facing AI content. This means a human with contextual judgment reviewing for tone, accuracy, and sensitivity before anything publishes. For teams operating across multiple markets or cultural contexts, this step is not optional.

Finally, shift your key performance indicators away from volume and reach toward sentiment and trust signals. Watch time, scroll depth, saves, and repeat visits tell a more honest story about whether content is actually working than follower counts and like rates ever did.

The Fundamental Argument

The future belongs to organizations that merge the scale of machines with the judgment of people. Not one or the other. Both, in deliberate proportion.

The technology will keep changing. The core truth won’t: meaning cannot be automated. Stories outperform statements. Specific outperforms generic. Authentic outperforms polished. By placing the human back at the center of the workflow – not as an obstacle to efficiency, but as the source of everything that makes content worth reading – you transform AI from a risk into something genuinely sustainable.


Four things we’d need to put data centers in space


In January, Elon Musk’s SpaceX filed an application with the US Federal Communications Commission to launch up to one million data centers into Earth’s orbit. The goal? To fully unleash the potential of AI without triggering an environmental crisis on Earth. But could it work?

SpaceX is the latest in a string of high-tech companies extolling the potential of orbital computing infrastructure. Last year, Amazon founder Jeff Bezos said that the tech industry will move toward large-scale computing in space. Google has plans to loft data-crunching satellites, aiming to launch a test constellation of 80 as early as next year. And last November Starcloud, a startup based in Washington State, launched a satellite fitted with a high-performance Nvidia H100 GPU, marking the first orbital test of an advanced AI chip. The company envisions orbiting data centers as large as those on Earth by 2030.

Proponents believe that putting data centers in space makes sense. The current AI boom is straining energy grids and adding to the demand for water, which is needed to cool the computers. Communities in the vicinity of large-scale data centers worry about increasing prices for those resources as a result of the growing demand, among other issues.

In space, advocates say, the water and energy problems would be solved. In constantly illuminated sun-synchronous orbits, space-borne data centers would have uninterrupted access to solar power. At the same time, the excess heat they produce would be easily expelled into the cold vacuum of space. And with the cost of space launches decreasing, and mega-rockets such as SpaceX’s Starship promising to push prices even lower, there could be a point at which moving the world’s data centers into space makes sound business sense. Detractors, on the other hand, tell a different story, pointing to a variety of technological hurdles, though some concede these may be surmountable in the not-so-distant future. Here are four of the must-haves we’d need to make space-based data centers a reality.

A way to carry away heat 

AI data centers produce a lot of heat. Space might seem like a great place to dispel that heat without using up massive amounts of water. But it’s not so simple. To get the power needed to run 24/7, a space-based data center would have to be in a constantly illuminated orbit, circling the planet from pole to pole and never passing into Earth’s shadow. And in that orbit, the temperature of the equipment would never drop below 80 °C, which is way too hot for electronics to operate safely in the long term.

Getting the heat out of such a system is surprisingly challenging. “Thermal management and cooling in space is generally a huge problem,” says Lilly Eichinger, CEO of the Austrian space tech startup Satellives.

On Earth, heat dissipates mostly through the natural process of convection, which relies on the movement of gases and liquids like air and water. In the vacuum of space, heat has to be removed through the far less efficient process of radiation. Safely removing the heat produced by the computers, as well as what’s absorbed from the sun, requires large radiative surfaces. The bulkier the satellite, the harder it is to send all the heat inside it out into space.

But Yves Durand, former director of technology at the European aerospace giant Thales Alenia Space, says that technology already exists to tackle the problem.

The company previously developed a system for large telecommunications satellites that can pipe refrigerant fluid through a network of tubing using a mechanical pump, ultimately transferring heat from within a spacecraft to radiators on the exterior. Durand led a 2024 feasibility study on space-based data centers, which found that although challenges exist, it should be possible for Europe to put gigawatt-scale data centers (on par with the largest Earthbound facilities) into orbit before 2050. These would be considerably larger than those envisioned by SpaceX, featuring solar arrays hundreds of meters in size—larger than the International Space Station.

Computer chips that can withstand a radiation onslaught

The space around Earth is constantly battered by cosmic particles and lashed by solar radiation. On Earth’s surface, humans and their electronic devices are protected from this corrosive soup of charged particles by the planet’s atmosphere and magnetosphere. But the farther away from Earth you venture, the weaker that protection becomes. Studies show that aircraft crews have a higher risk of developing cancer because of their frequent exposure to high radiation at cruising altitude, where the atmosphere is thin and less protective.

Electronics in space are at risk of three types of problems caused by high radiation levels, says Ken Mai, a principal systems scientist in electrical and computer engineering at Carnegie Mellon University. Phenomena known as single-event upsets can cause bit flips and corrupt stored data when charged particles hit chips and memory devices. Over time, electronics in space accumulate damage from ionizing radiation that degrades their performance. And sometimes a charged particle can strike the component in a way that physically displaces atoms on the chip, creating permanent damage, Mai explains.

Traditionally, computers launched to space had to undergo years of testing and were specifically designed to withstand the intense radiation present in Earth’s orbit. These space-hardened electronics are much more expensive, though, and their performance is also years behind the state-of-the-art devices for Earth-based computing. Launching conventional chips is a gamble. But Durand says cutting-edge computer chips use technologies that are by default more resistant to radiation than past systems. And in mid-March, Nvidia touted hardware, including a new GPU, that is “bringing AI compute to orbital data centers.” 

Nvidia’s head of edge AI marketing, Chen Su, told MIT Technology Review that “Nvidia systems are inherently commercial off the shelf, with radiation resilience achieved at the system level rather than through radiation‑hardened silicon alone.” He added that satellite makers increase the chips’ resiliency with the help of shielding, advanced software for error detection, and architectures that combine the consumer-grade devices with bespoke, hardened technologies.

Still, Mai says that the data-crunching chips are only one issue. The data centers would also need memory and storage devices, both of which are vulnerable to damage by excessive radiation. And operators would need the ability to swap things out or adapt when issues arise. The feasibility and affordability of using robots or astronaut missions for maintenance is a major question mark hanging over the idea of large-scale orbiting data centers.

“You not only need to throw up a data center to space that meets your current needs; you need redundancy, extra parts, and reconfigurability, so when stuff breaks, you can just change your configuration and continue working,” says Mai. “It’s a very challenging problem because on one hand you have free energy and power in space, but there are a lot of disadvantages. It’s quite possible that those problems will outweigh the advantages that you get from putting a data center into space.”

In addition to the need for regular maintenance, there’s also the potential for catastrophic loss. During periods of intense space weather, satellites can be flooded with enough radiation to kill all their electronics. The sun has just passed the most active phase of its 11-year cycle with relatively little impact on satellites. Still, experts warn that since the space age began, the planet has not experienced the worst the sun is capable of. Many doubt whether the low-cost new space systems that dominate Earth’s orbits today are prepared for that.

A plan to dodge space debris

Both large-scale orbiting data centers such as those envisioned by Thales Alenia Space and the mega-constellations of smaller satellites as proposed by SpaceX give a headache to space sustainability experts. The space around Earth is already quite crowded with satellites. Starlink satellites alone perform hundreds of thousands of collision avoidance maneuvers every year to dodge debris and other spacecraft. The more stuff in space, the higher the likelihood of a devastating collision that would clutter the orbit with thousands of dangerous fragments.

Large structures with hundreds of square meters of solar arrays would quickly suffer damage from small pieces of space debris and meteorites, which would over time degrade the performance of their solar panels and create more debris in orbit. Operating one million satellites in low Earth orbit, the region of space at the altitude of up to 2,000 kilometers, might be impossible to do safely unless all satellites in that area are part of the same network so they can communicate effectively to maneuver around each other, Greg Vialle, the founder of the orbital recycling startup Lunexus Space, told MIT Technology Review.

“You can fit roughly four to five thousand satellites in one orbital shell,” Vialle says. “If you count all the shells in low Earth orbit, you get to a number of around 240,000 satellites maximum.”

And spacecraft must be able to pass each other at a safe distance to avoid collisions, he says. 

“You also need to be able to get stuff up to higher orbits and back down to de-orbit,” he adds. “So you need to have gaps of at least 10 kilometers between the satellites to do that safely. Mega-constellations like Starlink can be packed more tightly because the satellites communicate with each other. But you can’t have one million satellites around Earth unless it’s a monopoly.”

On top of that, Starlink would likely want to regularly upgrade its orbiting data centers with more modern technology. Replacing a million satellites perhaps every five years would mean even more orbital traffic—and it could increase the rate of debris reentry into Earth’s atmosphere from around three or four pieces of junk a day to about one every three minutes, according to a group of astronomers who filed objections against SpaceX’s FCC application. Some scientists are concerned that reentering debris could damage the ozone layer and alter Earth’s thermal balance.

Economical launch and assembly

The longer hardware survives in orbit, the better the return on investment. But for orbital data centers to make economic sense, companies will have to find a relatively cheap way to get that hardware in orbit. SpaceX is betting on its upcoming Starship mega-rocket, which will be able to carry up to six times as much payload as the current workhorse, Falcon 9. The Thales Alenia Space study concluded that if Europe were to build its own orbital data centers, it would have to develop a similarly potent launcher. 

But launch is only part of the equation. A large-scale orbital data center won’t fit in a rocket—even a mega-rocket. It will need to be assembled in orbit. And that will likely require advanced robotic systems that do not exist yet. Various companies have conducted Earth-based tests with precursors of such systems, but they are still far from real-world use.

Durand says that in the short term, smaller-scale data centers are likely to establish themselves as an integral part of the orbital infrastructure, by processing images from Earth-observing satellites directly in space without having to send them to Earth. That would be a huge help for companies selling insights from space, as many of these data sets are extremely large, and competition for opportunities to downlink them to Earth for processing via ground stations is growing.

“The good thing with orbital data centers is that you can start with small servers and gradually increase and build up larger data centers,” says Durand. “You can use modularity. You can learn little by little and gradually develop industrial capacity in space. We have all the technology, and the demand for space-based data processing infrastructure is huge, so it makes sense to think about it.”

Smaller facilities probably won’t do much to offset the strain that terrestrial data centers are placing on the planet’s water and electricity, though. That vision of the future might take decades to come to fruition, some critics think—if it even gets off the ground at all. 

My Ideal Second Business

Molson Hart is the founder of Viahart, a D2C toy brand, and Edison, a legal technology company. He says every entrepreneur should own two businesses, where one offers more opportunities to scale, is more profitable, or diversifies risk.

I’m all in on Beardbrand, my own D2C brand launched in 2014. Molson is a two-time guest on the podcast. His comment got me thinking about an attractive second company, one that would enhance my life without creating stress and headaches.

So in this episode I’ll depart from my typical guest interviews and, instead, describe my ideal business.

My entire audio narrative is embedded below. The transcript is edited for length and clarity.

My optimal business is an ecommerce brand that sells easy-to-ship products. The items are likely small and, importantly, consumable. Once acquired, a customer would buy two or three times. The products would emphasize both value and prestige, with gross margins that at least cover acquisition ads on Meta.

Lastly, the products appeal to a large enough market to differentiate, niche down, and target the right audience.

So what are those?

Sean Frank is CEO of Ridge, the D2C wallet provider, and a veteran of this podcast. One could argue Ridge’s wallets are consumable: Release new versions, and they become fashion items, enticing repeat buyers.

Yet to me, consumables are what go in or on my body, what I eat or apply every day, such as food, supplements, and personal care goods. For ideas, I would walk into a grocery store, a Walmart, or a Target and just look around. What are folks buying? Which brands are old and stuffy, ripe for disruption?

Examples

Native has moved beyond deodorant, its original product. Moiz Ali, a former guest on this show, launched the deodorant-only brand in 2015 and reached $100 million in annual revenue within a couple of years. Native now sells multiple consumables: skincare, hand soap, toothpaste, and hair products.

Harry’s launched in 2012 as a D2C shaving goods provider, an affordable alternative to dominant players such as Gillette and Schick. The company was wildly successful.

Native and Harry’s focused on staples that consumers use daily.

Seven Sundays launched in 2011 at a Minneapolis farmers’ market. The founders, having realized that most cereal manufacturers used glyphosate-treated wheat and high-fructose corn syrup, offered a cleaner, healthier granola at a higher price point. It’s now a Certified B Corporation and an ecommerce powerhouse.

Goodles sells a product every parent can appreciate: healthy macaroni and cheese for kids. The brand launched in 2020 with nutritious selections in bright, colorful packaging and fun product names, such as Shella Good and Twist My Parm. It’s another upstart challenging a dominant brand (Kraft) in a big market.

Opportunities

So the opportunities for me lie in creating new products in sizeable markets dominated by stale, out-of-touch providers.

I would differentiate those products in one of three ways.

First is better quality — superior ingredients or components. Parents who prioritize nutrition are unlikely to buy Kraft Mac and Cheese, but they would consider Goodles, even at a higher price. That’s one way to distinguish.

Another way is innovative packaging. Many entrepreneurs overlook this opportunity. Go again to Walmart, Target, and even trade shows. How are products presented and packaged? I’ve seen incredible packaging designs over the years. I once saw packaging for a cosmetic cream where users twisted a bottle cap and pumped the cream into a built-in bowl at the top, to then mix it before applying to their face.

The third way is branding. It’s often easier to launch a brand named after the products it sells or the audience it targets. But doing that can restrict the company later, when the market shifts. Vacation.inc avoids that trap. Founded in 2021, the brand sells sunscreen but can easily pivot to other products and services should the market evolve.

Cool Products

It rarely makes sense to exactly replicate what another entrepreneur has started. Don’t listen to a successful owner on a podcast like this and think, “That guest is killing it. I’m going to do the exact same thing.”

Often the owner has not proven the business over the long term, and regardless, copying her merely carves up that audience. Instead, learn from successful brands such as Vacation.inc, and apply their tactics to an entirely different market.

Have some fun. Make your own cool products.

Why Agentic AI Shopping Feels Unnatural And May Not Threaten SEO

Google, OpenAI, and Shopify insist that the next revolution in AI is agentic AI shopping agents. Shopping is a lucrative area for AI to burrow into. The thing that I keep thinking is that shopping is a deeply important activity to humans; it’s literally a part of our DNA. Is surrendering the shopping experience something the general public is willing to do?

Agentic AI shopping is like a personal assistant that you tell what you want and maybe why you need it, plus some features and a price range. The AI will go out and do the research and comparison and even make the purchase.

There’s no human performing a search in that scenario. That’s not necessarily good for SEO unless you’re optimizing shopping sites for agentic AI shoppers.

Shopping Is A Part Of Human Biology

Scientists say that shopping is literally a part of our DNA. Our desire to hunt, to gather, and to flaunt our ability to be successful is a part of the evolutionary competition we participate in (whether we know it or not).

A Wikipedia page on the subject explains:

“Richard Dawkins outlines in The Selfish Gene (1976) that humans are machines made of genes, and genes are the grounding for everything people do.

…Therefore, everything that people do relates to thriving in their environment above competition, including the way people consume as a form of survival in their environment when simply purchasing the basic physiological needs of food, water and warmth. People also consume to thrive above others, for example in conspicuous consumption where a luxury car represents money and high social status…”

What that means is that whether we know it or not, our drive to shop is a part of evolutionary competition with each other. Part of it is to signal our status and attractiveness for reproduction. So when we go shopping for clothes or toilet paper, it’s part of our genetic programming to feel good about it.

Shopping And The Brain’s Chemical Cocktail

And when it comes to feeling good, some of that is triggered by chemicals like dopamine, endorphins, and serotonin firing off to reward you for finding a good deal.

Even scoring a deal on toilet paper can trigger reward signals in the brain.

Another Wikipedia page about the biology of our reward system explains:

“Reward is the attractive and motivational property of a stimulus that induces appetitive behavior, also known as approach behavior, and consummatory behavior. A rewarding stimulus has been described as “any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it.”

A sale sign in a store can act as a reward cue because it signals a lower price or added value, which can drive someone to approach and buy it. The sign itself is just information, but when a person recognizes the discount or deal as beneficial, it can trigger motivation to act. That’s a deeply embedded behavior that we carry with us.

We are like machines that are programmed in our genes to shop.

So that raises the question: Why would anyone delegate that deeply rewarding activity to an AI agent? It’s like delegating the enjoyment of chocolate to a robot.

I suspect that most of you reading this know which supermarkets sell the best produce at the cheapest price, which ones have the yummiest bread, and which markets have the best spices. That’s our programming; it’s biological. It does not make sense to delegate the rewards inherent in discovery or acquisition to an AI shopping agent.

Serendipity And Shopping

Serendipity is when things happen by chance, unplanned, yet provide a happy outcome or benefit. One of the joys of shopping is stumbling onto something that’s a good deal, or beautiful, or valuable in some other way. Employing an AI agent will cause humans to miss out on the serendipitous joy of discovering something they hadn’t been looking for: something not just desirable, but something they hadn’t known they needed.

For example, I purchased a birthday gift for my wife. I walked into a gift shop run by a charming new age hippie. We talked about music as I browsed the gifts for sale. I found something, two things, that I hadn’t planned on buying. The two things had a semantic connection to each other that I found to be poetic and therefore extra nice as a gift. The shop owner put the two items into two boxes, then placed the boxes in a lovely mesh gift bag with a ribbon.

That’s serendipity in action. I walked out of the store into the sunshine with a fresh cocktail of dopamine, endorphins, and serotonin flooding my brain, carrying a gift I was certain my wife would enjoy. It was a delightful moment.

Agentic AI Shopping Is Unnatural

My question is, why does Silicon Valley think it can automate the many things that make us human?

It’s as if Silicon Valley is trying to turn us back into teenagers by doing for us the things adults normally do for themselves.

Now they want to take shopping away from us?

I think that the only way agentic AI shopping has a chance of working is if they build a sense of serendipity and discovery into the system. I’ve been a part of the technology scene for over 25 years; I lived in San Francisco, the world capital of the Internet, and even worked for a time at a leading technology magazine.

So it’s not that I’m a Luddite about technology. AI integrated into a shopping site makes a lot of sense. It can make recommendations and answer questions. That’s great. There is still a human clicking around and discovering things for themselves in a way that satisfies our natural urge to shop and consume. That’s good for SEO because it means a store still needs to be optimized for search.

AI agents doing the shopping for humans makes less sense because it’s unnatural; it goes against our biology.

Featured Image by Shutterstock/Prostock-studio

Google Core Update, Crawl Limits & Gemini Traffic Data – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affecting how Google ranks content, how its crawlers handle page size, and where AI referral traffic is heading. Here’s what matters for you and your work.

Google Rolls Out The March 2026 Core Update

Google began rolling out the March core update this week. This is the first broad core update of the year.

Key facts: The rollout may take up to two weeks. Google described it as a regular update designed to surface more relevant, satisfying content from all types of sites. It arrives two days after the March spam update completed in under 20 hours.

Why This Matters

The December core update was the most recent broad core update, finishing on December 29. That’s a three-month gap. The February 2026 update only affected Discover, so Search rankings haven’t been recalibrated since late December.

Ranking changes could appear throughout early April. Google recommends waiting at least a full week after the rollout finishes before analyzing Search Console performance. Compare against a baseline period before March 27.

What SEO Professionals Are Saying

John Mueller, a member of Google’s Search Relations team, wrote on Bluesky when asked whether the two updates overlap:

One is about spam, one is not about spam. If with some experience, you’re not sure whether your site is spam or not, it’s unfortunately probably spam.

Mueller later explained that core updates don’t follow a single deployment mechanism. Different teams and systems contribute changes, and those components can require step-by-step rollouts rather than a single release. That’s why rollouts take weeks and why ranking volatility often appears in waves rather than all at once.

Roger Montti, writing for Search Engine Journal, noted the proximity to the spam update may not be a coincidence. Spam fighting is logically part of the broader quality reassessment in a core update.

Read our full coverage: Google Begins Rolling Out March 2026 Core Update

Read Roger Montti’s coverage: Google Answers Why Core Updates Can Roll Out In Stages

Illyes Explains Googlebot’s Crawling Architecture And Byte Limits

Google’s Gary Illyes, an analyst on Google’s Search team, published a blog post explaining how Googlebot works within Google’s broader crawling systems. The post adds new technical details to the 2 MB crawl limit Google published earlier this year.

Key facts: Illyes described Googlebot as one client of a centralized crawling platform. Google Shopping, AdSense, and other products all route requests through the same system under different crawler names. HTTP request headers count toward the 2 MB limit. External resources like CSS and JavaScript get their own separate byte counters.

Why This Matters

When Googlebot hits 2 MB, it doesn’t reject the page. It stops fetching and passes the truncated content to indexing as if it were the complete file. Anything past 2 MB is never indexed. That matters for pages with large inline base64 images, heavy inline CSS or JavaScript, or oversized navigation menus.

The centralized platform detail also explains why different Google crawlers behave differently in server logs. Each client sets its own configuration, including byte limits. Googlebot’s 2 MB is a Search-specific override of the platform’s 15 MB default.
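As a conceptual illustration of that override model, here is a minimal Python sketch. This is an assumption-laden sketch only: Google has not published its internal configuration, and every name here beyond the two published limits (15 MB default, 2 MB for Search) is illustrative.

    # Conceptual sketch of per-client overrides on a shared crawling platform.
    # Only the 15 MB default and the 2 MB Search override come from the article;
    # the client names and lookup shape are hypothetical.
    PLATFORM_DEFAULT_LIMIT = 15 * 1024 * 1024  # 15 MB platform-wide default

    CLIENT_LIMITS = {
        "Googlebot": 2 * 1024 * 1024,  # Search-specific override described above
        # Other clients (Shopping, AdSense, ...) fall back to the platform default.
    }

    def byte_limit_for(client: str) -> int:
        """Return the fetch byte limit a given crawler client would apply."""
        return CLIENT_LIMITS.get(client, PLATFORM_DEFAULT_LIMIT)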

Google has now covered these limits in documentation updates, a podcast episode, and this blog post within two months. Illyes noted the 2 MB limit is not permanent and may change as the web evolves.

What SEO Professionals Are Saying

Cyrus Shepard, founder of Zyppy SEO, wrote on LinkedIn:

That said, as SEOs we often deal with extreme situations. If you notice certain content not getting indexed on VERY LARGE PAGES, you probably want to check your size.

Read our full coverage: Google Explains Googlebot Byte Limits And Crawling Architecture

Google’s Illyes And Splitt: Pages Are Getting Larger, And It Still Matters

Gary Illyes and Martin Splitt, Developer Advocate at Google, discussed page weight growth and crawling on a recent Search Off the Record podcast episode.

Key facts: Web pages have grown nearly 3x over the past decade. The 15 MB default applies across Google’s broader crawling systems, with individual clients like Googlebot for Search overriding it downward to 2 MB. Illyes raised whether structured data that Google asks websites to add is contributing to page bloat.

Why This Matters

The 2025 Web Almanac reports a median mobile homepage size of 2,362 KB, a figure that already sits above 2 MB. That median measures total page weight rather than any single fetch, but it shows how little headroom modern pages leave. Illyes’s question about structured data contributing to bloat is also worth monitoring: Google encourages sites to add schema markup for rich results, and that markup increases the weight of every page.

Splitt said he plans to address specific techniques for reducing page size in a future episode. Pages with heavy inline content should verify their critical elements load within the first 2 MB of the response.
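One way to act on that advice is to truncate a page the way the crawler described above would, then check what survives. Below is a minimal Python sketch, not an official Google tool: the URL and the elements it looks for are illustrative assumptions, and because the real limit also counts HTTP headers, a body-only check like this is slightly generous.

    # Fetch a page, keep only the first 2 MB of the body, and check whether
    # elements an indexer would need appear before the cutoff.
    import requests

    BYTE_LIMIT = 2 * 1024 * 1024  # the Search-specific 2 MB limit described above

    def first_two_mb(url: str) -> bytes:
        """Stream the response and stop reading at BYTE_LIMIT, mimicking a
        crawler that truncates the page rather than rejecting it."""
        data = b""
        with requests.get(url, stream=True, timeout=30) as resp:
            resp.raise_for_status()
            for chunk in resp.iter_content(chunk_size=65536):
                data += chunk
                if len(data) >= BYTE_LIMIT:
                    break
        return data[:BYTE_LIMIT]

    def check_critical_elements(html: bytes) -> dict:
        """Report whether a few indexing-relevant elements survive truncation."""
        return {
            "title_closed": b"</title>" in html,
            "json_ld_present": b"application/ld+json" in html,
            "document_complete": b"</html>" in html,  # False suggests a cut-off page
        }

    if __name__ == "__main__":
        body = first_two_mb("https://example.com/")  # hypothetical URL
        print(f"Fetched {len(body):,} bytes")
        print(check_critical_elements(body))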

Read our full coverage: Google: Pages Are Getting Larger & It Still Matters

Gemini Referral Traffic More Than Doubles, Overtakes Perplexity

Google Gemini more than doubled its referral traffic to websites between November 2025 and January 2026. The data comes from SE Ranking’s analysis of more than 101,000 sites with Google Analytics installed.

Key facts: SE Ranking measured a 115% combined increase over two months, with the jump starting around the time Google rolled out Gemini 3. In January, Gemini sent 29% more referral traffic than Perplexity globally and 41% more in the U.S. ChatGPT still generates about 80% of all AI referral traffic. For transparency, SE Ranking sells AI visibility tracking tools.

Why This Matters

In August 2025, Perplexity was sending about 2.9x more referral traffic than Gemini. Gemini’s December-January surge reversed that by January 2026. ChatGPT’s lead over Gemini also narrowed, from roughly 22x in October to about 8x in January.

All AI platforms combined still account for about 0.24% of global internet traffic, up from 0.15% in 2025. That’s measurable growth, but it’s still a small share compared to organic search. Two months of Gemini growth correlates with a known product launch, but it’s too early to call it a sustained pattern.

Gemini is now worth watching alongside ChatGPT and Perplexity in your referral reports.

Read our full coverage: Google Gemini Sends More Traffic To Sites Than Perplexity: Report


Theme Of The Week: Google Is Explaining Its Own Systems

Three of this week’s four stories are Google telling you how its systems work. Illyes published a blog post detailing Googlebot’s architecture. The same week, the Search Off the Record podcast covered page weight and crawl thresholds. Mueller explained why core updates roll out in waves rather than all at once. Each one fills a gap that documentation alone left open.

The Gemini traffic data provides the counterpoint. Google is being open about how its crawlers and ranking systems operate, but the traffic passing through its AI services is growing fast enough to show up in third-party data, and Google isn’t explaining that part.


ChatGPT Ads: New Acquisition Channel Or Just Another Brand Tax? via @sejournal, @brookeosmundson

A lot of PPC managers are going to get asked about ChatGPT Ads over the next few months.

That was probably inevitable the moment OpenAI moved beyond testing ads and started building a real monetization story around them. The initial pilot was easy enough for most advertisers to ignore. It was invite-only, expensive, and limited enough that it felt more like a premium media test than something the average paid media team needed to factor into a media plan.

It’s going to be harder for PPC pros to ignore with the newest announcement from OpenAI.

OpenAI is reportedly preparing to launch self-serve advertiser capabilities in April while also expanding its ads pilot into additional countries. That does not automatically make ChatGPT Ads a serious channel for every advertiser. It does, however, make this the first point where more paid media teams may actually have to form a view on it.

And that view should probably be more skeptical than enthusiastic.

Because while the headlines around ChatGPT Ads are easy to frame as momentum, that is not the same thing as proving this is already a channel worth real budget.

For a lot of advertisers, the more useful question is not whether OpenAI can sell ads. It clearly can. The better question is whether this becomes a meaningful new acquisition channel or just another place brands feel pressure to pay for visibility before the economics are fully there.

That is the part worth taking seriously.

What OpenAI’s First Ads Pilot Told Us

The first version of ChatGPT Ads was never built for broad advertiser adoption.

OpenAI said in January that it would begin testing ads in the U.S. for logged-in adult users on Free and Go plans, while keeping Plus, Pro, Business, Enterprise, and Education ad-free. It also made a point of saying ads would not influence answers, would remain clearly separated from responses, and would not involve selling user conversations to advertisers.

That setup was important, because OpenAI was clearly trying to introduce monetization without damaging trust in the product. In practical terms, though, it also meant the pilot looked much closer to a controlled brand environment than a normal PPC channel.

The early economics reinforced that. Reuters reported in March that Criteo had been pitching advertiser commitments in the $50,000 to $100,000 range as OpenAI expanded the U.S. pilot, while other early reporting around the first wave of access pointed to premium CPMs and high barriers to entry.

That is not how platforms behave when they are trying to onboard the average mid-market advertiser. That is how they behave when they are trying to keep the test small, high-value, and manageable.

Some advertisers reported CTRs for ads in ChatGPT as low as 0.91%, compared to an average benchmark of 6.4% on Google Search. That is a metric marketers will want to watch closely when working out how ChatGPT fits into their marketing strategy and setting realistic expectations.

The context of those details matters, because some of the current reaction to ChatGPT Ads skips too quickly past what the pilot actually was. It was not broad proof of market fit.

At the same time, it would be too dismissive to treat the pilot as nothing more than a PR-friendly experiment.

OpenAI has a massive user base, a product people are already using in research and discovery behaviors, and enough advertiser demand to justify moving beyond the first phase. That does not prove long-term channel value, but it does suggest there is more here than novelty.

What About The Reported $100 Million Annualized Revenue From The Pilot?

The most repeated number in the current conversation is Reuters’ report that OpenAI’s U.S. ads pilot exceeded $100 million in annualized revenue within six weeks. That is a strong headline, and on its face, it suggests there is real advertiser demand. Reuters also reported that the pilot has expanded to more than 600 advertisers, with nearly 80% of small and medium-sized businesses signaling interest.

For a limited pilot, that seems to be a meaningful revenue pace. Even allowing for premium pricing and controlled access, it tells you this is not a fringe experiment with a handful of novelty buyers. Advertisers are interested, and OpenAI has clearly found enough demand to justify building this out further.

It also suggests there may be real commercial value in conversational inventory if the platform can maintain trust while expanding scale.

But, let’s take a deeper look into what the claim of annualized revenue means.

What Does Annualized Revenue Mean?

“Annualized revenue” is not the same thing as saying OpenAI booked $100 million in actual revenue in six weeks. It means the current pace of revenue, if sustained over a year, would exceed that number.

That is still notable, especially for a limited pilot. But it is also one of the easiest ways to make an early-stage business line sound bigger and more mature than it may actually be.
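To make that concrete, here is the back-of-the-envelope arithmetic as a short Python sketch. The $100 million figure and the six-week window come from the Reuters report; the rest is simple division.

    # What a "$100M annualized" pace implies in revenue actually booked.
    weeks_observed = 6                  # pilot window reported by Reuters
    annualized_revenue = 100_000_000    # USD, the reported run rate

    weekly_pace = annualized_revenue / 52           # ~$1.9M per week
    booked_so_far = weekly_pace * weeks_observed    # ~$11.5M actually booked
    print(f"Implied revenue booked during the pilot: ${booked_so_far:,.0f}")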

There are a few reasons to be careful about what it does and does not prove.

For one, premium pilot economics can make early revenue look healthier than a scaled platform would actually be. If access is limited, inventory is scarce, and pricing is high, you can build a very attractive short-term revenue story without proving that the platform is broadly investable for normal advertisers.

Second, Reuters reported that while about 85% of users are currently eligible to see ads, fewer than 20% are shown ads daily. That gives OpenAI room to increase monetization, but it also means the current revenue run rate is still being generated in a fairly controlled environment.

Third, the $100 million figure tells us very little about advertiser outcomes. It tells us advertisers are willing to buy in.

It does not tell us yet whether those advertisers are seeing meaningful incremental conversions, efficient customer acquisition, or strong downstream value relative to other channels.

So, while the revenue number is worth paying attention to, it shouldn’t be treated as proof that ChatGPT Ads are already a mature or “must-test” channel for most advertisers.

How Will The Self-Serve Ads Platform Change The Conversation?

In its newest development, OpenAI is reportedly preparing to open self-serve advertiser access in April.

That changes the conversation because self-serve is what turns a tightly controlled pilot into something more PPC managers may be expected to evaluate, budget for, or at least have an opinion on. Reuters also reported that OpenAI plans to expand the pilot beyond the U.S. into Canada, Australia, and New Zealand, which further signals that this is moving out of “contained experiment” territory.

A premium pilot mostly tells you whether a company can sell scarce inventory to selected advertisers. A self-serve platform is the first stage where advertisers can start evaluating whether the product behaves like a usable media channel at all.

That’s where the real learning begins again.

There’s a legitimate case for why some advertisers will want to pay close attention. If ChatGPT continues to become a place where people compare products, explore options, and work through buying decisions, then ad placements in that environment could eventually matter in a way that does not map cleanly to either search or paid social.

That possibility is real, it just has not been fully proven yet.

Why ChatGPT Ads Could Become A Meaningful Channel

If ChatGPT Ads are going to matter, the case for why is not hard to understand.

People are already using AI tools for research, planning, troubleshooting, product comparisons, and early-stage decision-making. That behavior is commercially important because it sits in a part of the journey that many advertisers care about but do not always capture especially well.

  • Search often captures explicit demand.
  • Paid social often creates or interrupts demand.
  • ChatGPT (or other AI platforms down the road) may end up sitting somewhere in-between.

A user in ChatGPT is often not just typing a keyword. They are explaining a situation, asking for options, and narrowing a decision. That creates a different kind of commercial context.

In theory, that should be valuable to advertisers, especially in categories where buyers need more information, more confidence, or more help evaluating tradeoffs before they convert.

If OpenAI can build an ad product that fits that behavior without damaging trust, there is a reasonable case that this becomes a genuinely useful environment rather than just another place to buy impressions.

Could The Hype Of ChatGPT Ads Be Overrated?

AI platforms have gotten a lot of hype over the past few years, and they all seem to be racing toward the top.

Now that ads are being placed into ChatGPT, the market anticipation may get ahead of what the platform has actually proven.

That tends to happen whenever a platform has three things at once:

  • Cultural momentum
  • Advertiser curiosity
  • Enough scale to make marketers nervous about being absent

That combination can create pressure to show up before the underlying economics are fully understood.

And that is where the “brand tax” concern comes in.

A brand tax shows up when advertisers feel compelled to buy visibility because the platform is becoming too important to ignore, even if the measurement is still fuzzy and the performance case is still incomplete.

That does not mean the spend is automatically wasteful. But, the motivation behind the spend can shift from strategic fit to defensive presence if not clearly thought through.

This is why I think the right posture for most advertisers is curiosity, not urgency.

What Types Of Advertisers Could Benefit First?

If ChatGPT Ads are going to work well, they are most likely to work first for businesses that already benefit from longer, more thoughtful buying journeys.

That includes categories where users are naturally looking for help evaluating options, understanding tradeoffs, or narrowing a set of choices.

Think along the lines of:

  • B2B software
  • Education
  • Travel
  • Home improvement
  • Higher-consideration e-commerce categories (like furniture)
  • Services where buyers need more confidence before converting

These are the kinds of businesses where the user journey is not always driven by a clean keyword and an immediate click. Often, the person is still trying to figure out what they need, what the differences are, or what is worth paying for.

That is where a conversational interface could eventually become commercially valuable.

If your ideal buyer tends to ask detailed, open-ended questions before making a decision, ChatGPT is a much more natural fit than it would be for a business relying on urgency, impulse, or low-friction conversion volume.

Why Many Mid-Market Advertisers Should Probably Wait

This is the part that will probably matter most to a lot of teams.

Most mid-market advertisers do not need to rush into ChatGPT Ads the moment self-serve opens.

That is not because the platform is irrelevant, but because most mid-market advertisers still have far more obvious growth opportunities in channels they already understand better.

If your search account structure is still messy, your paid social creative testing is inconsistent, your landing pages are underperforming, or your measurement setup is still weak, ChatGPT Ads are probably not the next smartest dollar.

That is especially true for advertisers that depend on:

  • Short purchase windows
  • Lower-ticket conversion volume
  • Aggressive CPA efficiency
  • Highly predictable scale

Those businesses may eventually find a role for ChatGPT Ads. But in the near term, it is hard to make the case that they should prioritize it over more proven opportunities.

That is where a lot of marketers get into trouble with new platforms. They confuse early visibility with early fit.

And those are not the same thing.

What Should PPC Teams Do Right Now?

For most PPC managers, the smartest move is not to force a test. It is to build a more useful framework for evaluating whether ChatGPT Ads deserve one later.

That starts with a few practical questions.

First, is your category one where conversational research behavior is likely to influence purchase decisions in a meaningful way?

Second, if you were to test this, what would success actually look like? Not in vague terms, but in measurable ones.

Would you be looking for qualified traffic? Stronger engagement? Assisted conversion value? Branded search lift? Lead quality? Or net-new customer acquisition?

If you cannot answer that before testing, then the test is probably not ready.

Third, do you have the measurement maturity to evaluate a channel that may sit somewhere between search, content discovery, and assisted decision support?

Because that is likely where ChatGPT Ads will live if they work at all.

A lot of teams will either under-credit this type of channel or over-excuse it. Neither is especially useful.

What Should PPC Managers Take From This?

ChatGPT Ads are worth paying attention to, even if your brand isn’t ready to test them yet.

It is not yet clear whether they become a durable acquisition channel, a useful upper- to mid-funnel complement, or simply another place where advertisers feel pressure to buy visibility before the performance case is fully established.

Right now, there is evidence for more than one possible outcome.

There is enough here to justify serious interest. OpenAI has the user scale, advertiser demand, and product usage patterns to make this more than a passing media story.

There is also enough uncertainty here to justify restraint. The platform still has a lot to prove around advertiser outcomes, economics, and where it truly fits in the paid media mix.

That is why the smartest response is probably not to rush in or write it off.

Watch the rollout carefully and pay attention to where category-specific fit starts to emerge. Then, be honest about whether your business has a reason to test beyond the fact that the platform is new.

That is a much better standard than hype, and a much better one than reflexive skepticism too.



Featured Image: Saeedreza/Shutterstock

Mullenweg To Cloudflare: Keep WordPress Out Of Your Mouth via @sejournal, @martinibuster

WordPress co-founder Matt Mullenweg responded to Cloudflare’s announcement of EmDash as the spiritual successor to WordPress by invoking Will Smith’s Oscars slap. Cloudflare’s CEO responded by doing exactly what Mullenweg told him not to do.

Spiritual Successor To WordPress

Mullenweg’s first criticism of the new EmDash targeted its claim to be the spiritual successor to WordPress. He made the point that WordPress can be installed and used on virtually any device and platform, saying that this is part of their mission to democratize publishing by making it easy to deploy on almost any kind of infrastructure.

Although he didn’t say it out loud, the implication is clear: WordPress can be deployed everywhere, and EmDash is not as flexible.

Matt aimed his next comment straight at Cloudflare:

“You can come after our users, but please don’t claim to be our spiritual successor without understanding our spirit.”

The Compliment Sandwich

Back in the early 2000s, Googlers were famous for their friendliness and smiles. I don’t think it was a calculated thing; the smiles were not a persona, they were genuine. I believe that many of the Googlers who interacted with the SEO community were genuinely friendly and truly wanted to help people with their SEO issues. When I lived in San Francisco, I visited Google many times and had nothing but positive experiences.

Matt affects that same kind of persona, speaking with a smile. But he does it while being critical of things, which is dissonant to witness. His response to Cloudflare is the written equivalent of that approach.

It follows the compliment sandwich pattern:

  • Positive statement
  • Criticism or negative point
  • Another positive statement

Done correctly, with tact and genuine empathy, it can soften criticism. It’s a valid approach to providing critical but helpful feedback.

Matt accused Cloudflare of using EmDash as a way to promote their infrastructure, but he did it with a smile.

He criticized:

“I think EmDash was created to sell more Cloudflare services.”

Then he switched over to the positive statement:

“And that’s okay! It can kinda run on Netlify or Vercel, but good stuff works best on Cloudflare. This is where I’m going to stop and say, I really like Cloudflare! I think they’re one of the top engineering organizations on the planet; they run incredible infrastructure, and their public stock is one of the few I own. And I love that this is open source! That’s more important than anything. I will never belittle a fellow open source CMS; I only hate the proprietary ones.”

Then he criticized Cloudflare again:

“If you want to adopt a CMS that will work seamlessly with Cloudflare and make it hard for you to ever switch vendors, EmDash is an incredible choice.”

That last part is a backhanded and sarcastic compliment, implying that EmDash is a way to trap users within Cloudflare’s infrastructure. Mullenweg offered a bullet-point list of additional criticism mixed with compliments.

Keep WordPress Out Of Your Mouth

Mullenweg ended his blog post with a conciliatory-sounding paragraph that ends abruptly with a phrase that invoked Will Smith’s Oscars slap:

“Some day, there may be a spiritual successor to WordPress that is even more open. When that happens, I hope we learn from it and grow together. Until then, please keep the WordPress name out of your mouth.”

Mullenweg is doing something between the lines there. Whether he did it intentionally or not, he’s invoking Will Smith’s infamous moment at the Oscars, when he slapped Chris Rock across the face and told him to keep his wife’s name out of his mouth. That phrase subtly invokes a violent image, with Mullenweg playing the role of Will Smith slapping Cloudflare across the face.

By using that specific phrase, Mullenweg compared Cloudflare’s use of the “WordPress” name to an insulting personal attack.

Understated Irony

After being told to keep WordPress out of his mouth, Cloudflare co-founder and CEO Matthew Prince responded on X by calling the criticism fair and then immediately putting WordPress in his mouth. Prince tweeted:

“Think this is a fair critique from @photomatt of EmDash.

I remain hopeful it’ll bring a broader set of developers into the WordPress ecosystem.”

What Prince did there was politely defy Mullenweg by tweeting the word “WordPress” in his response after being told to keep it out of his mouth while simultaneously adopting the persona of someone trying to “help” the person who just slapped him. In the context of the Oscar reference, it’s as if Chris Rock had responded to the slap by calmly saying, “I hope this incident brings more viewers to your next movie.”

Was that meant as understated irony? If so, it’s a master class.

Featured Image by Shutterstock/Prostock-studio

Google Answers Why Some SEOs Split Their Sitemap Into Multiple Files via @sejournal, @martinibuster

Google’s John Mueller answered a question about why some websites use multiple XML sitemaps instead of a single file. His answer suggests that what looks like unnecessary complexity may come from reasons that are not immediately obvious.

The question came from an SEO trying to understand why managing multiple sitemap files would be preferred over keeping everything in one place.

Question About Using Multiple Sitemaps

The SEO framed the issue as a matter of efficiency, questioning why anyone would choose to increase the number of files they have to manage.

They asked:

“Can I ask a silly question, what’s the advantage of multiple site maps? It seems like your going from 1 file to manage to X files to manage.

Why the extra work? Why not just have 1 file?”

It’s a good question; avoiding extra work is always a good idea in SEO. If someone has a relatively small website, it makes sense to have just one sitemap. But as Google’s Mueller explains, there may be good reasons to split a sitemap into multiple files.

Mueller Explains Why Multiple Sitemaps Are Used

Mueller responded by listing several reasons why multiple sitemap files are used, including both practical and less intentional causes.

He responded:

“Some reasons I’ve seen:

  • want to track different kinds of urls in groups (“product detail page sitemap” vs “product category sitemap” — which you can kinda do with the page indexing report)
  • split by freshness (evergreen content in a separate sitemap file – theoretically a search engine might not need to check the “old” sitemap as often; I don’t know if this actually happens though)
  • proactively split (so that you don’t get to 50k and have to urgently figure out how to change your setup)
  • hreflang sitemaps (can take a ton of space, so the 50k URLs could make the files too large)
  • my computer did it, I don’t know why”

Mueller’s answer shows that sitemaps can be used in creative ways that serve a purpose. Something I’ve heard from enterprise-level SEOs is that keeping each sitemap well under the 50k-URL limit ensures better indexing.
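To make the “proactively split” option concrete, here is a minimal Python sketch. The index and file layout follow the sitemaps.org protocol, but the 10,000-URL cap per file, the file names, and the example domain are illustrative assumptions.

    # Split a URL list into numbered sitemap files capped well below the
    # 50,000-URL limit, plus a sitemap index that points at them.
    from xml.sax.saxutils import escape

    SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    URLS_PER_FILE = 10_000  # stay far under 50k so growth never forces an urgent rework

    def write_sitemaps(urls: list[str], base_url: str) -> None:
        chunks = [urls[i:i + URLS_PER_FILE] for i in range(0, len(urls), URLS_PER_FILE)]
        for n, chunk in enumerate(chunks, start=1):
            entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in chunk)
            with open(f"sitemap-{n}.xml", "w", encoding="utf-8") as f:
                f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                        f'<urlset xmlns="{SITEMAP_NS}">\n{entries}\n</urlset>\n')
        # The index file is the one you reference in robots.txt or Search Console.
        index = "\n".join(
            f"  <sitemap><loc>{base_url}/sitemap-{n}.xml</loc></sitemap>"
            for n in range(1, len(chunks) + 1)
        )
        with open("sitemap-index.xml", "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                    f'<sitemapindex xmlns="{SITEMAP_NS}">\n{index}\n</sitemapindex>\n')

Submitting only the index file keeps the Search Console side simple while the underlying files stay small and grouped however you like.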

Takeaways

Mueller’s answer shows that keeping things “simple” isn’t always the best strategy. What appears to be unnecessary complexity is often the result of practical constraints, evolving site structures, or automated systems rather than deliberate optimization, and it can make sense to apply that kind of organization to your own sitemaps.

  • Multiple sitemaps can be used to group different types of content
  • They help avoid hitting technical limits like the 50,000 URL cap
  • Some implementations are based on theory rather than confirmed behavior
  • Not all sitemap structures are intentional or strategically planned

Featured Image by Shutterstock/Rachchanont Hemmawong

Fuel prices are soaring. Plastic could be next.

As the war in Iran continues to engulf the Middle East and the Strait of Hormuz stays closed, one of the most visible global economic ripple effects has been fossil-fuel prices. In particular, you can’t get away from news about the price of gasoline, which just topped an average of $4 a gallon in the US, its highest level since 2022.

But looking ahead, further consequences for the global economy could be looming in plastics. Plastics are made using petrochemicals, and the supply chain impacts of the oil bottleneck near Iran are starting to build up. 

Plastic production accounts for roughly 5% of global carbon dioxide emissions today. And our current moment shows just how embedded oil and gas products are in our lives. It goes far beyond their use for energy. 

As I write this, I’m wearing clothes that contain plastic fibers, typing on a plastic keyboard, and looking through the plastic lenses of my glasses. It’s hard to imagine what our world looks like without plastic. And in some ways, moving away from fossil-derived plastic could prove even more complicated than decarbonizing our energy system. 

Crude oil prices have been on a roller coaster in recent weeks and have recently topped $100 a barrel.

Crude oil contains a huge range of hydrocarbons, and it’s typically refined by putting it through a distillation unit that separates the raw material into different fractions according to their boiling point. Those fractions then go on to be further processed into everything from jet fuel to asphalt binder. We’ve already seen the price spikes for some materials pulled out of crude oil, like gasoline and jet fuel.

Let’s zoom in on another component, naphtha. It can be added to gasoline and jet fuel to improve performance. It can also be used as a solvent or as a raw material to make plastics.

The Middle East currently accounts for about 20% of global naphtha production and supplies about 40% of the market in Asia, where prices are already up by 50% over the last month.

We’re starting to see these effects trickle down already. The price of polypropylene (which is made from naphtha and used for food containers, bottle caps, and even automotive parts) is climbing, especially in Asia.  

Typically, manufacturers have a bit of stock built up, but that’ll be exhausted soon, likely in the coming weeks. The largest supplier of water bottles in India recently announced that it would raise prices by 11% after its packaging costs went up by over 70%, according to reporting from Reuters. Toys could be more expensive this holiday season as manufacturers grapple with supply chain concerns.

Americans will likely feel these ripples especially hard if disruptions continue. The average US resident used over 250 kilograms of new plastics in 2019, according to a 2022 report from the Organization for Economic Cooperation and Development. That’s an absolutely massive number—the global average is just 60 kilograms.

The effects of higher prices for both fuels and feedstocks could compound and multiply, and alternatives aren’t widely available. Bio-based plastics made with materials like plant sugars exist, but they still make up a vanishingly tiny portion of the market. As of 2025, global plastics production totaled over 431 million metric tons per year. Bio-based and bio-degradable plastics made up about 0.5% of that, a share that could reach 1% by 2030.

Bio-based plastics are much more expensive than their fossil-derived counterparts. And many are made using agricultural raw materials, so scaling them up too much could be harmful for the environment and might compete with other industries like food production.

Recycling isn’t the easy answer either. Mechanical recycling is the current standard method for materials like the plastics that make up water bottles and disposable coffee cups. But it degrades the materials over time, so they can’t be recycled indefinitely. Chemical recycling has its own host of issues: the facilities that do it can be highly polluting, and today most of the plastic that goes into advanced recycling plants doesn’t actually end up in new plastics.

There’s been a lot of talk in recent weeks about how this energy crisis is going to push the world more toward renewable energy. Solar panels, electric vehicles, and batteries could suddenly become more attractive as we face the drastic consequences of a disruption in the global fossil-fuel supply.

But when it comes to plastic, the future looks far more complicated. Even though the plastics industry is facing much the same disruptions as the energy sector, there aren’t the same obvious alternatives available for a transition. Our lives are tied up in plastic, with uses ranging from the essential (like medical equipment) to the mundane (my to-go coffee cup). Soon, our economy could feel the effects of just how much we rely on fossil-derived plastics, and how hard it’s going to be to replace them. 

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here