MCP Shifts AI from Chat to Work

Beehiiv, an emerging email newsletter platform, this month became the latest ecommerce-adjacent software company to announce an MCP integration for artificial intelligence.

On its own, Beehiiv’s announcement might not seem significant. Yet the integration could point to a larger trend.

Increasingly, merchants’ software tools have direct, even native, AI connections. Examples include Shopify, WooCommerce, Yottaa, and Shippo.

What Is an MCP?

Anthropic, the company behind the popular Claude LLM, introduced the Model Context Protocol in 2024, releasing it as “a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. It aims to help frontier models produce better, more relevant responses.”

Essentially, MCP and competing protocols establish secure, two-way connections between data sources and AI-powered tools or agents.

Anthropic’s diagram shows MCP as a central hub connecting AI applications (chat interfaces, IDEs, and other AI apps) with data sources and tools (databases, development tools, and productivity apps) via bidirectional data flow.

Connecting data sources and services with AI-enabled agents and tools, the MCP describes one way AI will integrate into business operations.

Instead of building one-off API integrations for every service, a business can expose its full set of tools and data to AI systems. An AI model can then query those systems and take action.

Business leaders can think of MCP as AI infrastructure. It sits between the AI and the systems that run the business. And when software supports MCP, integrating AI for analysis, generation, and automation becomes markedly easier.
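To make the infrastructure framing concrete, here is a minimal sketch of the JSON-RPC 2.0 message shape MCP is built on. The tool name `get_inventory` and its arguments are hypothetical illustrations, and a real server adds capability negotiation, tool schemas, and transport details.

```python
import json

# Hypothetical request an AI client might send to an MCP server to
# invoke a tool. MCP messages follow JSON-RPC 2.0; the tool name
# "get_inventory" and its arguments are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_inventory",
        "arguments": {"sku": "SHOE-42"},
    },
}

# The server's response carries the result back to the model.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "12 units in stock"}]},
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```

The point of the standard shape is that any compliant client can talk to any compliant server without a bespoke integration.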

Operational Shift

Many AI tools today summarize reports, draft emails, or answer questions. With MCP-style integrations, those tools can act. An AI assistant could check inventory, compare shipping rates, and evaluate campaign performance before making adjustments, perhaps in real time.

Consider a simple case. An AI system detects rising delivery costs on a group of orders. With access to shipping tools, it compares carrier rates and selects a cheaper option. The same system updates the order and notifies the customer. This can happen even as someone in the warehouse is picking items.

That type of loop is what MCP and similar protocols are trying to enable.
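The delivery-cost loop above can be sketched in a few lines. Every function here is a hypothetical stub standing in for capabilities a shipping platform might expose as MCP tools; the carriers, rates, and threshold are invented.

```python
# A toy version of the delivery-cost loop described above. The rates,
# threshold, and order data are all stand-ins for illustration.

def get_carrier_rates(order):
    # Stub: pretend to query two carriers for this order.
    return {"CarrierA": 12.50, "CarrierB": 9.75}

def detect_cost_spike(order, threshold=10.0):
    return order["current_rate"] > threshold

def reroute(order):
    # Pick the cheapest carrier; update the order and flag the
    # customer for notification if it beats the current rate.
    carrier, rate = min(get_carrier_rates(order).items(), key=lambda kv: kv[1])
    if rate < order["current_rate"]:
        order["carrier"], order["current_rate"] = carrier, rate
        order["customer_notified"] = True
    return order

order = {"id": "1001", "carrier": "CarrierA", "current_rate": 12.50,
         "customer_notified": False}
if detect_cost_spike(order):
    order = reroute(order)

print(order)
```

In an MCP setup, the model itself would decide when to call each of these capabilities; the guardrails live in which tools the business chooses to expose.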

Here are a few examples.

Shopify’s Hydrogen update introduced AI support for Storefront MCP.

The integration allows AI agents to browse products, manage carts, and assist with checkout. In effect, the storefront becomes a structured environment that an AI can navigate. An AI could have done this before, but the MCP provides rules that make it more successful.

Shopify is one example of an ecommerce-related MCP implementation.

Shippo’s MCP server exposes shipping workflows to AI systems. An AI assistant can create shipments, compare carrier rates, generate labels, track packages, and validate addresses. These are tasks that typically require manual steps or custom integrations.

An AI system identifies a cluster of delayed shipments. It checks alternative carriers, updates fulfillment rules, and flags affected customers. The shopper experience is better, and the agent acted without direct supervision, albeit within set guidelines.

Beehiiv’s MCP integration links newsletter accounts to AI tools such as ChatGPT and Claude.

The current version focuses on analysis. AI can evaluate subject lines, subscriber growth, churn, and engagement trends. That insight can guide content and monetization decisions. It might even help close the loop, so to speak, on how email marketing contributes to ecommerce sales.

APIs Remain

MCP does not replace application programming interfaces; it complements them.

APIs are precise and stable. They are suited for core integrations such as order processing or payments. MCP is flexible. The protocol allows AI systems to move across tools without rigid workflows.

In practice, an ecommerce stack will likely combine APIs for reliability and MCP-style interfaces for adaptability.
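The complementary roles might be sketched like this: a fixed API contract for a core operation alongside runtime tool discovery in the MCP style. The endpoint, tool list, and matching logic are all illustrative assumptions, not any vendor's actual interface.

```python
# Two integration styles, sketched with stand-in data structures.

# API style: the integration is hard-coded against one known contract.
# Precise and stable, suited to core operations like order processing.
def create_order_via_api(payload):
    assert {"sku", "qty"} <= payload.keys()  # fixed, known schema
    return {"status": "created", **payload}

# MCP style: the client discovers what tools exist at runtime and the
# model decides which one fits the task. The tool list is hypothetical.
available_tools = [
    {"name": "create_order", "description": "Create a sales order"},
    {"name": "compare_rates", "description": "Compare carrier rates"},
]

def pick_tool(task, tools):
    # Crude stand-in for the model's reasoning: match task words
    # against tool descriptions.
    for tool in tools:
        if any(w in tool["description"].lower() for w in task.lower().split()):
            return tool["name"]
    return None

print(create_order_via_api({"sku": "SHOE-42", "qty": 2})["status"])
print(pick_tool("compare carrier rates", available_tools))
```

The API path breaks loudly if the schema changes; the discovery path adapts, at the cost of less predictable behavior. That trade-off is why a stack will likely use both.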

Other Protocols

The MCP is part of a broader shift toward agentic applications and commerce. Other protocols are emerging.

OpenAI’s Agentic Commerce Protocol, for example, aims to enable product discovery and transactions within AI environments such as ChatGPT. Google is developing a similar approach for its AI interfaces.

These protocols define how shopping happens inside AI-driven surfaces. MCP focuses on how AI systems access business operations behind the scenes.

For merchants, the distinction matters. One set of standards governs how consumers find and buy products. Another governs how the business fulfills and manages those transactions. Each case illustrates the evolution of businesses and their software tools with AI.

Implementation

The most important takeaway may be that the MCP signals a shift from AI as a chat tool to AI as an operator in the business.

Ecommerce leaders should focus less on the protocol itself and more on being ready to adapt to AI use and integration. Having clean, organized data and clear workflows is more important than being the first to adopt new tools.

Expect a stack where APIs offer reliability and MCP-like layers enable flexibility. And monitor where AI-driven shopping happens, as protocols from platforms such as OpenAI or Google may shape demand as much as backend operations.

The Death Of The Static GBP: Why Dynamic Profiles Are The New Local Ranking Factor via @sejournal, @AdamHeitzman

You probably set up your Google Business Profile a while back, filled in your address, picked your categories, maybe chased down a few reviews, and then called it done. Totally understandable. That was enough, once.

But here’s what’s changed: If you haven’t meaningfully touched that profile in months, you’re losing visibility to competitors who figured out something you haven’t yet. Google transformed GBP from a directory listing into a live engagement surface, and businesses that treat it like the former are quietly bleeding map pack rankings they don’t even know they’ve lost.

This applies to every local business. Retailers, yes, but also law firms, dental practices, restaurants, gyms, plumbers, and salons. If your GBP isn’t actively signaling to Google that you’re open for business and earning it every day, you’re leaving real visibility on the table.

Let’s talk about what killed the static profile, what Google built in its place, and exactly what you need to do about it.

When “Set It And Forget It” Actually Worked

Cast your mind back to the directory era. You filled out your name, address, and phone number (NAP), chose a category, uploaded a logo, and crossed your fingers. Google treated these profiles as reference points, fixed coordinates in the physical world. The algorithm cared about NAP consistency across directories more than anything else. Match your citations across 50 listing sites? You were golden.

It worked because that’s genuinely all Google needed. The platform was confirming you existed at a given address. Nothing more.

The New Table Stakes (And Why They’re Not Enough)

Those fundamentals haven’t disappeared; they’ve just become the entry fee. According to the 2026 Local Search Ranking Factors report, the primary GBP category is still the No. 1 factor for local pack visibility, followed by proximity to the searcher and keywords in the business title. These matter enormously. But when every serious competitor has them dialed in, they stop being differentiators.

Screenshot from Whitespark, March 2026

The report also makes clear that behavioral and engagement signals (posts, photos, clicks, calls, direction requests, and review cadence) are climbing fast in importance. Google is actively rewarding businesses that “look alive.”

There’s also a finding worth pausing on: Being open when users search is now the No. 5 local pack ranking factor. Your hours aren’t just informational; they’re a ranking signal. This was first noted by Joy Hawkins of Sterling Sky and subsequently confirmed by a BrightLocal study of 50 businesses across 10 categories, which found that rankings tended to drop when a business is listed as closed. Don’t treat your hours as a set-and-forget field. Audit them quarterly, set special hours for holidays before the holiday arrives (not after), and consider whether your current hours are costing you visibility during high-intent search windows.

A static profile with perfect NAP and a 4.8-star rating is like showing up to a job interview in a great suit but refusing to speak. You look the part, but you’re not convincing anyone you’re the right choice.

Google’s Shift: From Listings To Live Engagement

Google didn’t randomly decide to make GBP harder to manage. They followed user behavior. People aren’t browsing businesses anymore; they’re searching with immediate intent. “Who can help me with this right now?” isn’t a research question; it’s a decision waiting to happen.

So Google built GBP into an active engagement surface. For retailers, that meant integrating Merchant Center so real-time product inventory could surface directly in search results and Maps. For service businesses, it means appointment booking, Q&A, and post-activity are all live signals. For restaurants, it’s menus, wait times, and reservation links. The platform expects ongoing input, and it rewards the businesses that provide it.

The core principle is the same whether you sell hiking boots or handle divorces: Google favors profiles that continuously demonstrate relevance and activity. The mechanism differs by business type. The outcome doesn’t.

The Signals That Actually Move The Needle

Review Velocity, Not Just Review Volume

Reviews have always mattered, but the 2026 Local Search Ranking Factors report adds important nuance. Fresh reviews don’t just help you rank; they help people pick you over a competitor with the same star rating. Research further confirms that review signals are gaining influence across local rankings, with proximity earning you the look, but review content helping secure the top spot.

Do this: Make review requests part of your operational workflow. Send the ask within 24 hours of a completed service or transaction while the experience is fresh. Respond to every review, positive and negative, within 48 hours. Owner responses are an engagement signal, not just a reputation management courtesy.

Not that: Don’t batch review requests monthly or rely on a generic follow-up email. Don’t respond to positive reviews with a copy-paste “Thanks for your feedback!” Google and potential customers can both tell.

A law firm that earns 12 reviews over three years and one that earns 12 reviews over three months are sending very different signals to the algorithm, even with identical star ratings.

GBP Posts: The Most Underused Freshness Signal

Most businesses either never post to GBP or publish one post in January and forget it exists. That’s a significant missed opportunity. Posts, whether offers, updates, events, or business news, are a direct freshness signal that tells Google your profile is actively managed.

Do this: Post at least once a week. Tie posts to things that are actually happening: a seasonal promotion, a recently completed project, a staff milestone, or a local event you’re involved in. Use the “Offer” post type when you have something time-sensitive; the expiry date creates urgency and signals recency.

Not that: Don’t recycle the same “Welcome to our business!” post every few months. Don’t post only when you remember to; build it into a recurring task, same as you would any other content channel. And don’t ignore the post types Google gives you; Events and Offers get more real estate in the profile than standard Updates.

Photos: Recency Matters As Much As Quality

According to Birdeye’s State of Google Business Profile 2025 report, verified profiles with photos consistently receive more website visits, direction requests, and calls, and listings with recent photos and video see measurably higher engagement than those with stale or infrequently updated imagery. That “recently updated” part is key. A profile with 80 photos, all uploaded three years ago, isn’t sending the same freshness signal as one with steady uploads over recent months.

Do this: Set a recurring reminder to upload new photos at least twice a month. Show real things: recent work, your current team, your updated space, seasonal inventory. For service businesses, job-site photos and before/after shots are gold; they’re authentic, specific, and far more compelling than stock imagery.

Not that: Don’t upload a batch of 50 photos once a year and call it done. Don’t use obviously staged or stock photos as your primary images; research on competitor GBP analysis shows that photo quality and authenticity are increasingly factored into how profiles are perceived. And don’t ignore customer-uploaded photos; respond to them or flag inappropriate ones rather than leaving them unattended.

Booking And Messaging: Closing The Loop Inside Google

Google increasingly wants to keep searchers inside its own ecosystem. For local businesses, that means enabling every feature your business type supports: “Book Online” links, appointment URLs, and the Q&A section. These aren’t just convenience features; they’re engagement signals. When a user books directly through your GBP, that interaction tells Google your profile is functional and driving real-world action.

Do this: If your business supports appointments, connect a booking link (Google supports integrations with platforms like Booksy, Vagaro, OpenTable, and others). Seed your Q&A section with the three to five questions customers actually ask most, and answer them yourself before strangers do it for you.

Not that: Don’t leave your Q&A section empty or unmonitored; unanswered questions (or worse, inaccurate answers from random users) erode trust and represent a missed engagement opportunity.

For Retailers: Real-Time Inventory Is Its Own Category

If you sell physical products, everything above applies, but you have an additional lever that service businesses don’t: real-time inventory.

Google integrated Merchant Center with GBP specifically to surface what’s on your shelves in search results and Maps.

Do this: Prioritize your top 50 highest-intent, most-searched products first. Get those live and accurate before trying to sync your entire catalog. Add product schema markup to your website’s product pages so your feed and your site are telling Google the same thing.

Not that: Don’t upload a feed manually once a week and assume that’s close enough to real-time. Don’t skip the Merchant Center diagnostics step; a feed with errors will silently underperform, and you won’t know why until you check. And don’t assume inventory feeds only matter for paid ads; enabling free local listings through Merchant Center unlocks organic product visibility in search, Maps, and your GBP profile at no additional cost.
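As an illustration of the schema markup step, here is a minimal schema.org Product snippet built as a Python dict and emitted as the JSON-LD you would embed in a `<script type="application/ld+json">` tag on the product page. All product details are placeholders; in practice, the values should mirror your Merchant Center feed exactly.

```python
import json

# A minimal schema.org Product with a nested Offer. The name, SKU,
# and price are placeholder values for illustration only.
product_ld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trailhead Hiking Boot",
    "sku": "BOOT-7",
    "offers": {
        "@type": "Offer",
        "price": "129.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product_ld, indent=2))
```

Keeping this markup and the Merchant Center feed in sync is the whole point: two channels telling Google the same thing about the same shelf.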

The AI Layer: Why This All Matters More Than Ever

Here’s the dimension that makes everything above more urgent: GBP signals are now feeding directly into AI-driven local results, not just the traditional map pack.

Google’s AI Mode pulls from the same signals discussed in this article: review recency and sentiment, photo freshness, post activity, accurate hours, and service completeness. The Whitespark 2026 report introduced an entirely new AI Search Visibility category for the first time, with three of the top five AI visibility factors being citation and entity-based signals. Businesses that keep their GBP current and consistent are the ones being surfaced in AI-generated answers. Businesses with stale profiles aren’t just losing map pack spots; they’re becoming invisible to AI-driven discovery entirely.

Treat every update you make to your GBP not just as a ranking tactic for the traditional local pack, but as a data signal for AI systems that are increasingly acting as the front door to local search. Accurate hours, fresh photos, recent reviews, and complete service descriptions aren’t just best practices; they’re the inputs AI needs to confidently recommend your business.

What To Measure

Once you’re actively managing your profile, track what’s actually moving:

Profile interactions: calls, direction requests, website clicks, and (where applicable) booking clicks tell you which features are actually driving action. 

Review velocity: not just your total count, but how many you’re earning per month and how quickly you’re responding. 

Post engagement: views and clicks on GBP posts help you understand which content types your local audience actually responds to. For retailers, add product impressions and store visit conversions to this list.

The Compounding Effect

Here’s what makes dynamic GBP management so powerful over time: the signals compound. Consistent posting builds freshness and authority. Steady review velocity builds trust signals. Updated photos drive higher engagement. Higher engagement improves rankings. Better rankings bring more profile views, more reviews, and more interactions, which further improve rankings. And now, all of those same signals are feeding AI systems that are reshaping how local businesses get discovered in the first place.

Local visibility is increasingly built on engagement, credibility, and connection, not just keyword optimization. Static profiles erode authority over time. Dynamic profiles compound it.

The businesses treating GBP like a compliance checkbox are the ones watching competitors steal map pack spots they used to own. The ones showing up consistently, posting, earning reviews, updating photos, keeping information current, and (for retailers) feeding Google live inventory, are building durable local visibility that’s genuinely hard to disrupt, whether the search happens in the traditional map pack or in an AI-generated answer.

That’s the gap. The only question is which side of it you want to be on.


A woman’s uterus has been kept alive outside the body for the first time


“Think of this as a human body,” says Javier González.

In front of me is essentially a metal box on wheels. Standing at around a meter in height, it reminds me of a stainless-steel counter in a restaurant kitchen. It is covered in flexible plastic tubing—which acts as veins and arteries—connecting a series of transparent containers, the organs of this machine.

What makes it extra special is the role of the cream-colored tub that sits on its surface. Ten months ago, González, a biomedical scientist who developed the device with his colleagues at the Carlos Simon Foundation, carefully placed a freshly donated human uterus in the tub. The team connected it to the device’s tubes and pumped in modified human blood.

The device kept the uterus alive for a day—a new feat that could represent the first step to the long-term maintenance of uteruses outside the human body. The work has not yet been published. 

The team members want to keep donated human uteruses alive long enough to see a full menstrual cycle. They hope this will help them study diseases of the uterus and learn more about how embryos burrow their way into the organ’s lining at the start of a pregnancy. They also hope that future iterations of their device might one day sustain the full gestation of a human fetus.

The machine is technically called PUPER, which stands for “preservation of the uterus in perfusion.” But González’s colleague Xavier Santamaria says the team has adopted a nickname for it: “We call it ‘Mother.’”

The organ in the machine

González and Santamaria, medical vice president of the Carlos Simon Foundation, demonstrated how the device might work when I visited the foundation in Valencia, Spain, earlier this month (although it held no organs on that day). 

Both are interested in learning more about implantation, the moment at which an embryo attaches itself to the lining of a uterus—essentially, the very first moment of pregnancy.

The foundation’s founder and director, Carlos Simon, believes it’s a sticking point in IVF: Scientists have made many improvements to the technology over the years, but the failure of embryos to implant underlies plenty of unsuccessful IVF cycles, he says. Being able to carefully study how the process works in a real, living organ might give the team a better idea of how to prevent those failures.

Photos: Jess Hamzelou; Javier González/Carlos Simon Foundation

Javier González demonstrates the perfusion machine. A previous iteration of the device kept a sheep’s uterus (right) alive for a day.

The team took inspiration from advances in technologies designed to maintain donated organs for transplantation. In recent years, researchers around the world have created devices that deliver nutrients and filter waste so that organs can survive longer after being removed from donors’ bodies.

The main goal here is to buy time. A human organ might last only a matter of hours outside the body, so a transplant may require frantic preparation for the recipient, sometimes in the middle of the night. With a little more time, doctors could find better donor-patient matches and potentially test the quality of donated organs.

This approach is called normothermic or machine perfusion, and it is already being used clinically for some liver, kidney, and heart transplants.

The team at the Carlos Simon Foundation built a similar machine for uteruses. A blood bag hangs on one side. From there, blood is ferried via plastic tubing to a pump, which functions as the heart. The pump shunts the blood through an oxygenator, which adds oxygen and removes carbon dioxide as the lungs would in a human body.

The blood is warmed and passed through sensors that monitor the levels of glucose and oxygen, along with other factors. It passes through a “kidney” to remove waste. And finally the blood reaches the uterus, hooked up to its own plastic “arteries” and “veins.” The organ itself sits at a tilt, just as in the body, and is kept in a humid environment to stay moist.

Mother’s first uterus

The team first began testing an early prototype of the device with sheep uteruses around four years ago. That meant carting the machine to an animal research center in Zaragoza, around 200 miles away. Over the course of the preliminary study, veterinary surgeons removed the uteruses of six sheep and hooked them up to the machine. They kept each uterus alive for a day, using blood from the same animals.

After the sheep experiments, the researchers carted their machine back to Valencia and modified it to achieve its current incarnation, “Mother.” They started working with a local hospital that performed hysterectomies. And in May last year, they were offered their first human uterus.

The team needed to be quick. “You need to put [the uterus in the machine] within a couple of hours, maximum, of the extraction,” says Santamaria. He and his colleagues also needed to connect the uterus’s blood vessels to the tubing delicately, taking care to avoid any blockages (clotting is a major challenge in organ perfusion). The organ was hooked up to human blood obtained from a blood bank.

It seemed to work—at least temporarily. “We kept it alive for one day,” says Santamaria.

“As a proof of concept, it is impressive,” says Keren Ladin, a bioethicist who has focused on organ transplantation and perfusion at Tufts University. “These are early days.”

It might not sound like much, but 24 hours is a long time for an organ to be out of the body. Maintaining a donated uterus for that long could expand the options for uterus transplant, a fairly new procedure offered to some people who want to be pregnant but don’t have a functional uterus, says Gerald Brandacher, professor of experimental and translational transplant surgery at the Medical University of Innsbruck in Austria.

“It is better than what we currently have, because we have only a couple of hours,” he says. So far, most uterus transplants have been planned operations involving organs from living donors. A technology like this could allow for the use of more organs from deceased donors, he says.

That work is “not in the immediate pipeline” for the team in Spain, says Santamaria. “We are working on other problems.”

Pregnancy in the lab?

Santamaria, González, and their colleagues are more interested in using sustained human uteruses for research. 

They’ve mounted a camera to a wall in the corner of the room, pointed at their machine. It allows the team to monitor “Mother” remotely, and to check if any valves disconnect. (That happened once before—a spike in pressure caused the blood bag to come loose, spilling a liter of blood on the floor, Santamaria says.)

They’d like to be able to keep their uteruses alive for around 28 days to study the menstrual cycle and disorders that affect the uterus, like endometriosis and fibroids.

It won’t be easy to maintain a uterus for that long, cautions Brandacher. As far as he knows, no one has been able to maintain a liver for more than seven days. “No studies out there … have shown 30-day survival in a machine perfusion circuit,” he says.

But it’s worth the effort. The team’s main interest is learning more about how embryos implant in the uterine lining at the start of a pregnancy. They hope to be able to test the process in their outside-the-body uteruses.

They won’t be allowed to use human embryos for this, says González—that would cross an ethical boundary. Instead, they plan to use embryo-like structures made from stem cells. The structures closely resemble human embryos but are created in a lab without sperm or eggs.

Simon himself has grander ambitions.

He sees a future in which a machine like “Mother” will be able to fully gestate a human, all the way from embryo to newborn. It could offer a new path to parenthood for people who don’t have a uterus, for example, or who are not able to get pregnant for other reasons.

He appreciates that it sounds futuristic, to say the least. “I don’t know if we will end up having pregnancies inside of the uterus outside of the body, but at least we are ready to understand all the steps to do that,” he says. “You have to start somewhere.”

Answer Engine Optimization: How To Get Your Content Into AI Responses via @sejournal, @slobodanmanic

This is Part 2 in a five-part series on optimizing websites for the agentic web. Part 1 covered the evolution from SEO to AAIO and why the shift matters. This article gets practical: how AI systems actually select content, and what you can do about it.

AI Doesn’t Rank Pages. It Selects Fragments.

Traditional search ranks whole pages. AI search does something fundamentally different.

Microsoft’s Krishna Madhavan, principal product manager on the Bing team, described the shift in October 2025: AI assistants “break content down, a process called parsing, into smaller, structured pieces that can be evaluated for authority and relevance. Those pieces are then assembled into answers, often drawing from multiple sources to create a single, coherent response.”

This is the core insight. AI doesn’t pick the best page and show it. It picks the best fragments from many pages and weaves them together. Your page might rank No. 1 on Google and still not get cited in an AI response if its content isn’t structured in fragments that AI can extract.
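The parsing idea can be made concrete with a toy splitter that breaks a page into heading-anchored fragments, the rough unit an answer engine evaluates. Real systems are far more sophisticated; this sketch only illustrates the concept, using Python's standard-library HTML parser.

```python
from html.parser import HTMLParser

# Toy "parsing" pass: split a page into (heading, text) fragments,
# one per H2 section, mimicking how AI systems evaluate pieces of a
# page rather than the whole page.
class FragmentSplitter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fragments = []   # list of (heading, section text)
        self._heading = None
        self._in_h2 = False
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            # A new H2 closes the previous fragment.
            if self._heading is not None:
                self.fragments.append((self._heading, " ".join(self._buf)))
            self._in_h2, self._buf = True, []

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        if self._in_h2:
            self._heading = data.strip()
        elif data.strip():
            self._buf.append(data.strip())

    def close(self):
        super().close()
        if self._heading is not None:
            self.fragments.append((self._heading, " ".join(self._buf)))

page = ("<h2>What is parsing?</h2><p>Breaking a page into pieces.</p>"
        "<h2>Why it matters</h2><p>Fragments get cited, not pages.</p>")

splitter = FragmentSplitter()
splitter.feed(page)
splitter.close()
for heading, text in splitter.fragments:
    print(heading, "->", text)
```

Notice what the splitter rewards: descriptive headings and self-contained sections. A page whose key claim straddles two sections produces fragments that make no sense in isolation.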

The numbers show the shift is real. According to the Conductor AEO/GEO Benchmarks Report (January 2026; 13,770 domains, 17 million AI responses), AI traffic now accounts for 1.08% of all website sessions, growing roughly 1% month over month. Microsoft reported that AI referrals to top websites spiked 357% year-over-year in June 2025, reaching 1.13 billion visits. Small numbers today, compounding fast.

One in four Google searches now triggers an AI Overview. In healthcare, it’s nearly one in two. The surface area is growing, and the content that fills these answers has to come from somewhere. The question is whether it comes from you.

The Research: What Actually Gets Cited

The academic research on what makes content citable in AI responses has matured rapidly. The foundational paper, “GEO: Generative Engine Optimization” (Princeton, IIT Delhi, Georgia Tech, published at KDD 2024), tested nine optimization strategies and found that GEO techniques could boost visibility by up to 40% in AI responses. The most effective single technique was citing credible sources, which produced a 115.1% visibility increase for websites that weren’t already ranking in the top positions.

A counterintuitive finding: Writing in an authoritative or persuasive tone did not improve AI visibility. AI systems don’t respond to rhetorical style. They respond to verifiable information.

Since then, 2025 brought a wave of follow-up research that tested these ideas on real production AI engines rather than simulated ones.

The University of Toronto study (September 2025) was the first large-scale analysis across ChatGPT, Perplexity, Gemini, and Claude. Their most striking finding: AI search overwhelmingly favors earned media. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time, compared to Google’s 54.1%. Automotive showed a similar pattern at 81.9% versus 45.1%. In other words, it’s not just how you write content, but whose domain it appears on. Press coverage, product reviews on independent websites, and mentions on industry publications carry far more weight in AI responses than your own website.

Carnegie Mellon’s AutoGEO study (October 2025) used automated methods to discover what generative engines actually prefer. The results showed up to 50.99% improvement over the best baseline, with universal preferences emerging across engines: comprehensive topic coverage, factual accuracy with citations, clear logical structure with headings and lists, and direct answers to queries.

The GEO-16 framework (September 2025) analyzed 1,702 real citations from Brave, Google AI Overviews, and Perplexity. It identified 16 on-page quality factors that predict citation likelihood. The top three: metadata and freshness, semantic HTML, and structured data. Technical on-page factors matter as much as the quality of the writing itself.

And a reality check from Columbia and MIT’s ecommerce study (November 2025): of 15 common content rewriting heuristics, 10 produced negligible or negative results. The optimization strategies that did work converged toward truthfulness, user intent alignment, and competitive differentiation. Not tricks. Substance.

The overall pattern across all of this research: AI systems reward clarity, factual accuracy, and structure. They don’t reward marketing language, persuasion tactics, or keyword density.

Content Structure That Earns Citations

Based on the research and official guidance from Microsoft and Google, here’s what structurally makes content citable.

Heading hierarchy matters more than ever. Use descriptive H2 and H3 headings that each cover one specific idea. Microsoft lists strong headings as “signals that help AI know where a complete idea starts and ends.” Vague headings like “Learn More” or “Overview” give AI nothing to work with. A heading like “How AI parses content differently than search engines” tells the system exactly what the section contains.

Q&A format is native to AI. Write questions as headings with direct answers below them. Microsoft notes that “assistants can often lift these pairs word for word into AI-generated responses.” If your content answers the question someone asks an AI, and it’s structured as a clear question-and-answer pair, you’ve made the AI’s job easy.

Make content snippable. Bulleted and numbered lists, comparison tables, step-by-step instructions. These formats give AI clean, extractable fragments. A paragraph buried in a wall of text is harder for AI to isolate than the same information presented as a three-item list.

Front-load the answer. Start sections with the key information, then provide context. If someone asks, “What temperature should I bake bread at?” and your content opens with a two-paragraph history of bread making before mentioning 375°F, you’ll lose the citation to a competitor who leads with the answer.

Keep sections self-contained. Each section should make sense on its own, without requiring the reader to have read the previous section. AI extracts fragments. If your fragment only makes sense in the context of the whole page, it won’t be selected.

An important technical note from Microsoft: “Don’t hide important answers in tabs or expandable menus: AI systems may not render hidden content, so key details can be skipped.” FAQ answers collapsed inside an expandable menu, product specs hidden behind tabs, content that requires interaction to reveal: it may all be invisible to AI. If information is important, it needs to be in the visible HTML.
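A minimal HTML sketch of these structural principles, reusing the bread example from earlier (the headings, copy, and temperatures here are invented for illustration, not a template from any provider):

```html
<!-- Descriptive headings, a Q&A pair with a front-loaded answer,
     and a snippable list, all in visible HTML rather than tabs. -->
<article>
  <h2>How AI parses content differently than search engines</h2>
  <p>AI assistants extract self-contained fragments, so each
     section below answers one question completely on its own.</p>

  <h3>What temperature should I bake bread at?</h3>
  <!-- Key fact first, context after -->
  <p>Bake most sandwich loaves at 375&deg;F (190&deg;C).
     Enriched doughs usually need a slightly lower temperature.</p>

  <h3>Steps to audit your own pages</h3>
  <ol>
    <li>Give every section a heading that states its one idea.</li>
    <li>Put the direct answer in the first sentence.</li>
    <li>Keep key details out of tabs and expandable menus.</li>
  </ol>
</article>
```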

Authority Signals For AI

E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) isn’t just a Google concept anymore. It’s what AI systems look for across the board, even if they don’t use the term.

Microsoft’s October 2025 guidance describes the baseline: success starts with content that is “fresh, authoritative, structured, and semantically clear.” On the clarity side, they’re specific: “avoid vague language. Terms like innovative or eco mean little without specifics. Instead, anchor claims in measurable facts.” Saying something is “next-gen” or “cutting-edge” without context leaves AI unsure how to classify it.

The research backs this up. The original GEO paper found that writing in a persuasive or authoritative tone did not improve AI visibility. Facts and cited sources did. Marketing language doesn’t impress algorithms.

This connects to the University of Toronto’s finding about earned media dominance. AI systems trust third-party validation more than self-promotion. In consumer electronics, AI cited third-party authoritative sources 92.1% of the time compared to Google’s 54.1%. The implication: getting your expertise published on industry websites, earning press coverage, and building a presence on authoritative platforms matters more for AI visibility than perfecting the copy on your own site.

Freshness is a signal, not a bonus. Stale content rarely gets cited. Krishna Madhavan said at Pubcon Cyber Week: “Stale or missing content will constrain the amount of retrieval we can do and push agents toward alternative sources.”

Schema Markup: From Text To Knowledge

Microsoft’s October 2025 post devotes an entire section to schema. They describe it as code that “turns plain text into structured data that machines can interpret with confidence.” Schema can label your content as a product, review, FAQ, or event, giving AI systems explicit context instead of forcing them to guess. Krishna Madhavan reinforced this at Pubcon: “Schemas are super useful. They help the system discern exactly what your information is without us having to guess.”

The GEO-16 framework confirms this from the academic side. Structured data was one of the top three factors predicting AI citation likelihood, alongside metadata/freshness and semantic HTML.

The schema types that matter most for AI visibility:

  • FAQPage for question-and-answer content (directly maps to how AI formats responses).
  • HowTo for step-by-step instructions.
  • Product with Offer, AggregateRating, and Review for ecommerce.
  • Article/BlogPosting for content with clear authorship and dates.
  • Organization for business identity.
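As a sketch of the first item, FAQPage markup is embedded as a JSON-LD script block; the question and answer text below are placeholders:

```html
<!-- FAQPage structured data; the Q&A content is illustrative -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What temperature should I bake bread at?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Bake most sandwich loaves at 375°F (190°C)."
    }
  }]
}
</script>
```

The same pattern extends to HowTo, Product, Article, and Organization by swapping the `@type` and its required properties.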

Pair structured data with IndexNow for freshness. As the Bing Webmaster Blog put it: “IndexNow tells search engines that something has changed, while structured data tells them what has changed. Together, they improve both speed and accuracy in indexing.”
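An IndexNow ping itself is just an HTTP GET with the changed URL and your site key as query parameters. A minimal Python sketch (the page URL and key below are placeholders; the key must also be hosted as a text file at your site root):

```python
from urllib.parse import urlencode

# Hypothetical values: substitute your own page URL and the IndexNow
# key you serve at your site root (e.g. https://example.com/<key>.txt).
ENDPOINT = "https://api.indexnow.org/indexnow"
params = {
    "url": "https://example.com/blog/updated-post",
    "key": "0123456789abcdef0123456789abcdef",
}

# Build the ping URL; participating engines share submissions
# made to the generic api.indexnow.org endpoint.
ping = f"{ENDPOINT}?{urlencode(params)}"
print(ping)
# In production, issue a GET request to this URL; a 200 or 202
# response indicates the submission was accepted.
```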

Crawler Permissions: Who Gets In

AI search engines use distinct crawlers, and most let you control training and search access separately. Here’s who to allow.

| Bot | Platform | Purpose | Robots.txt Token |
| --- | --- | --- | --- |
| OAI-SearchBot | ChatGPT | Search index | OAI-SearchBot |
| GPTBot | OpenAI | Model training | GPTBot |
| ChatGPT-User | ChatGPT | On-demand browsing | ChatGPT-User |
| Bingbot | Microsoft Copilot | Search + AI | Bingbot |
| Googlebot | Google AI Overviews | Search + AI | Googlebot |
| Google-Extended | Google | Gemini training | Google-Extended |
| PerplexityBot | Perplexity | Search + index | PerplexityBot |
| Perplexity-User | Perplexity | On-demand browsing | Perplexity-User |
| ClaudeBot | Anthropic | Training + retrieval | ClaudeBot |

A sensible robots.txt configuration might allow search crawlers while blocking training:

User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

OpenAI provides the cleanest bot separation. You can allow OAI-SearchBot (so your content appears in ChatGPT search) while blocking GPTBot (so it’s not used for model training). Google’s controls are less granular: blocking Google-Extended prevents Gemini training but has no effect on AI Overviews, which use Googlebot.

OpenAI also offers the most specific technical recommendation of any AI search provider. For their Atlas browser (which uses a standard Chrome user agent, not a bot identifier), they recommend following WAI-ARIA best practices: “Add descriptive roles, labels, and states to interactive elements like buttons, menus, and forms. This helps ChatGPT recognize what each element does and interact with your site more accurately.” Accessibility and AI agent compatibility are the same work.
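A sketch of what “descriptive roles, labels, and states” can look like in practice (the product name, field IDs, and copy are hypothetical, not from OpenAI’s documentation):

```html
<!-- Descriptive labels and explicit state on interactive elements
     help both screen readers and AI agents identify what each
     control does. Names and IDs here are illustrative. -->
<button type="button" aria-label="Add Blue Widget, size M, to cart">
  Add to cart
</button>

<label for="email">Email address for order updates</label>
<input id="email" name="email" type="email" autocomplete="email">
```

The same attributes that make this form usable with assistive technology give an agent an unambiguous handle on each element.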

A caveat on Perplexity: while their documentation states they respect robots.txt, Cloudflare documented in August 2025 that Perplexity uses undeclared crawlers with rotating IPs and spoofed browser user agents to bypass no-crawl directives. This is a contested claim, but it’s worth knowing.

For revenue, Perplexity is the only platform currently offering publisher compensation. Their Comet Plus program provides an 80/20 revenue split (publishers keep 80%) across direct visits, search citations, and agent actions.

Google Vs. Microsoft: Two Philosophies

The contrast between Google and Microsoft on AEO is striking enough to be its own story.

Google says: just do good SEO. Their official documentation is deliberately minimalist: “There are no additional requirements to appear in AI Overviews or AI Mode, nor other special optimizations necessary.” They add that you “don’t need to create new machine readable files, AI text files, or markup to appear in these features.”

Google recommends helpful, reliable, people-first content demonstrating E-E-A-T. Standard structured data. Good page experience. Technical basics. Nothing AI-specific.

Microsoft says: here’s the playbook. Their October 2025 blog post and January 2026 guide provide detailed, actionable guidance. Specific heading structures. Schema recommendations. Content formatting rules. Concrete examples (an AEO product description vs. a GEO product description). Warnings about content hidden in tabs and expandable menus. A framework for thinking about crawled data, product feeds, and live website data as three distinct layers.

What explains the difference? Partly market position. Google dominates search and has less incentive to help publishers optimize for AI features that might reduce clicks to their websites. Microsoft, with Bing’s roughly 8% market share, benefits from providing publishers with reasons to optimize specifically for their ecosystem.

But there’s a practical takeaway: Microsoft’s guidance isn’t Bing-specific. The principles of structured content, clear headings, snippable formats, schema markup, and expert authority are universal. Following Microsoft’s playbook improves your content for every AI system, including Google’s. Google just won’t tell you that.

Measuring AI Visibility

This is the hard part. Traditional SEO has Google Search Console. AI visibility is still fragmented.

Ahrefs analyzed 1.9 million citations from 1 million AI Overviews and found that 76% of citations come from pages already ranking in Google’s top 10. The median ranking for the most-cited URLs was position 2. Traditional ranking still matters for AI citation, but being No. 1 is “a coin flip at best” for getting cited.

The traffic impact is significant. Ahrefs found that AI Overviews correlate with 58% lower click-through rates for the No. 1 position. Seer Interactive reported a 61% organic CTR drop for queries with AI Overviews. But being cited within the AI Overview gives 35% more organic clicks compared to not being cited. Citation is the new ranking.

For tracking, the tool landscape is emerging:

| Tool | What It Tracks | Starting Price |
| --- | --- | --- |
| Profound | Citations across ChatGPT, Perplexity, Copilot, Google AIOs | From $99/mo |
| Peec.ai | Brand mentions across ChatGPT, Gemini, Claude, Perplexity | From ~$95/mo |
| Advanced Web Ranking | AIO presence tracking in Google | Included in plans |
| Bing Webmaster Tools | AI Performance Report for Copilot | Free |

Bing Webmaster Tools is the easiest starting point. It’s free, and the new AI Performance Report shows how your content performs in Copilot citations. For ChatGPT specifically, track utm_source=chatgpt.com in your analytics. OpenAI automatically appends this to referral URLs.

Conductor’s January 2026 report found that 87.4% of AI referral traffic comes from ChatGPT. That’s one platform dominating the space, which makes tracking it particularly important.

Key Takeaways

  • AI selects fragments, not pages. Structure your content in self-contained, extractable sections with descriptive headings that signal where each idea starts and ends.
  • Clarity beats persuasion. Factual accuracy, cited sources, and direct answers outperform authoritative tone and marketing language. The research consistently shows this.
  • Earned media dominates brand content in AI citations. Press coverage, third-party reviews, and authoritative mentions on other websites carry more weight than your own pages. Build presence beyond your domain.
  • Schema markup is a force multiplier. FAQPage, HowTo, Product, and Article schemas make your content machine-readable. Pair with IndexNow for freshness.
  • Follow Microsoft’s playbook, even for Google. Google says “just do good SEO.” Microsoft provides specific, actionable guidance that improves content for every AI system, Google’s included.
  • Separate training from search in your robots.txt. Allow search crawlers (OAI-SearchBot, Bingbot, PerplexityBot) while blocking training crawlers (GPTBot, Google-Extended) if that’s your preference. You have more control than you might think.
  • Track AI visibility now. Use Bing Webmaster Tools (free), monitor utm_source=chatgpt.com in analytics, and consider dedicated tools as the measurement space matures.

Traditional SEO asked: “How do I rank?” AEO asks: “How do I become the fragment that gets selected?” The answer isn’t a single trick. It’s clear structure, verifiable expertise, and content that AI can confidently extract and cite.

Up next in Part 3: the protocols powering the agentic web, including MCP, A2A, NLWeb, and AGENTS.md, and how they fit together.

This was originally published on No Hacks.


Featured Image: Meepian Graphic/Shutterstock

Here’s why some people choose cryonics to store their bodies and brains after death

This week I reported on some rather unusual research that focuses on the brain of L. Stephen Coles.

Coles was a gerontologist who died from pancreatic cancer in 2014. He had spent the latter part of his career specializing in human longevity. And before he died, he decided to have his brain preserved by a cryonics facility. Today, it’s being stored at −146 °C at a center in Arizona, where it sits covered in a thin layer of frost.

Coles also tasked his longtime friend Greg Fahy with studying pieces of his brain to see how they had fared (partly because he was worried his brain might crack). Fahy, a renowned cryobiologist, has found that the brain is “astonishingly well preserved.”

But that doesn’t mean Coles could be reanimated. Over the past few years, I’ve spoken to people who run cryonics facilities, study cryopreservation, or just want to be cryogenically stored. All those I’ve spoken to acknowledge that the chance they’ll one day be brought back to life is vanishingly small. So why do they do it?

The first person to be cryonically preserved was James Hiram Bedford, a retired psychology professor who died of kidney cancer in 1967. Affiliates of the Cryonics Society of California, an organization headed by a charming TV repairman with no scientific or medical training, perfused his body with cryoprotective chemicals to protect against harmful ice formation and “quick-froze” him.

Today, Bedford’s body is still in storage at Alcor, a cryonics facility based in Scottsdale, Arizona. It’s one of a handful of organizations that offer to collect, preserve, and store a person’s whole body or just their brain—pretty much indefinitely. It’s where Coles’s brain is stored.

Both men died from cancer. Medicine could not cure them. But in the future, who knows? One of the premises of cryonics is that modern medicine will continue to advance over time. Cancer death rates have declined significantly in the US since the early 1990s. I don’t know what exactly drove Coles and Bedford to their decisions, but they might have hoped to be reanimated at some point in the future when their cancers became curable.

Others simply don’t want to die, period. Last year, I attended Vitalist Bay, a gathering for people who believe that life is good and that death is “humanity’s core problem.” Emil Kendziorra, CEO of the cryonics organization Tomorrow.Bio, spoke at the event, and a healthy interest in cryonics was obvious among the attendees.

Many of them believe that science will find a way to “obviate” aging. And some were keen on the idea of being preserved until that happens. Think of it as a way to cheat not only death but aging itself.

This sentiment might have support beyond the realms of Vitalist Bay, according to research by Kendziorra and his colleagues. In 2021, they surveyed 1,478 US-based internet users who were recruited via Craigslist. They found that men were more aware of cryonics than women, and more optimistic about its outcomes. Just over a third of the men who completed the survey expressed “a desire to live indefinitely.”

Still, cryonics is a niche field. Worldwide, only around 5,000 or 6,000 people have signed up for cryopreservation when they die, Kendziorra told me when we chatted at Vitalist Bay. He also told me that his company gets between 20 and 50 new signups every month.

And there are plenty of reasons why people don’t do it. A small fraction of the people who responded to Kendziorra’s survey said that they thought the idea of cryonics was dystopian, and some even said it should be illegal.

Then there’s the cost. Alcor charges $80,000 to store a person’s brain, and around $220,000 to store a whole body. Tomorrow.Bio’s charges are slightly higher. Many people, including Kendziorra himself, opt to cover this cost via a life insurance policy.

Perhaps the main reason people don’t opt for cryonic preservation is that we don’t have any way to bring people back. Bedford has been in storage for more than 50 years, Coles for more than a decade. All the scientists I’ve spoken to say the likelihood of reanimating remains like theirs is vanishingly small.

The fact that the possibility—however tiny—is above zero is enough for some, including Nick Llewellyn, the director of research and development at Alcor. As a scientist, he says, he acknowledges that the chances reanimation will actually work are “pretty low.” Still, he’s interested in seeing what the future will look like, so he has signed himself up for the cryonic preservation of his brain.

But Shannon Tessier, a cryobiologist at Massachusetts General Hospital, tells me that she wouldn’t sign up for cryonic preservation even if it worked. “It turns into a philosophical question,” she says.

“Do I want to be revived hundreds of years later when my family is gone and life is different?” she asks. “There are so many complicated philosophical, societal, [and] legal complications that need to be thought through.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The Download: the internet’s best weather app, and why people freeze their brains

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How a couple of ski bums built the internet’s best weather app 

The best snow-forecasting app for skiers isn’t a federally funded service or a big-name brand. It’s OpenSnow, a startup that uses government data, its own AI models, and decades of alpine-life experience to deliver the best predictions out there.

The app has proved especially vital this winter, one of the weirdest on record. It’s even made microcelebrities of its forecasters, who sift through reams of data to write “Daily Snow” reports for locations around the world.  

We headed to the Tahoe mountains to hear how two broke ski bums became modern-day snow gods. Read the full story

—Rachel Levin 

Here’s why some people choose cryonics to store their bodies and brains after death 

—Jessica Hamzelou 

This week I reported on unusual research focused on the frozen brain of L. Stephen Coles. 

Coles, a researcher who studied aging, was interested in cryonics—the long-term storage of human bodies and brains in the hope that they might one day be brought back to life. It’s a hope shared by many. 

Over the past few years, I’ve spoken to people who run cryonics facilities, study cryopreservation, or just want to be cryogenically stored. All of them acknowledge that there’s a vanishingly small chance of being brought back to life. So why do they do it? 

Read the full story to find out

This article is from The Checkup, our weekly biotech newsletter. Sign up to receive it in your inbox every Thursday. 

What’s next for space exploration?  

Whether it’s the race to find life on Mars, the campaign to outsmart killer asteroids, or the quest to make the moon a permanent home to astronauts, scientists’ efforts in space can tell us more about where humanity is headed.

To learn more about the progress and possibilities ahead, our features editor Amanda Silverman sat down with Robin George Andrews, an award-winning science journalist and author, on Wednesday. If you missed their conversation, fear not—you can catch up and watch the video here. You’ll need to be a subscriber to access it, but the good news is subscriptions are discounted right now. Bag yours if you haven’t already! 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 The Pentagon’s ban on Anthropic has been halted
A judge has paused its designation as a supply chain risk. (CBS News)
+ She said the government was trying to “chill public debate.” (BBC)
+ Sam Altman claimed he tried to “save” Anthropic in the clash. (Axios)

2 Elon Musk has lost his lawsuit against an ad boycott on X
A judge admonished the “fishing expedition.” (Ars Technica)
+ Ad revenue fell by more than half as advertisers fled X after Musk took over. (BBC)

3 OpenAI has put plans for an erotic chatbot on hold “indefinitely”
Staff and investors had raised concerns. (The Information $)
+ The company is making a sharp strategic pivot. (FT $)
+ AI companions are the final stage of digital addiction. (MIT Technology Review)

4 A helium shortage has started impacting tech supply chains
The problem stems from the Middle East conflict. (Reuters)
+ The era of cheap helium is over. (MIT Technology Review)

5 Trump’s new science advisers: 12 tech chiefs and just one academic
They include at least nine billionaires. (Nature)
+ David Sacks is stepping down as Trump’s crypto and AI czar. (TechCrunch)

6 Anthropic is mulling an IPO as soon as October
It’s racing OpenAI to hold an initial public offering. (Bloomberg $)

7 Wikipedia has banned all AI-generated content
LLM-related issues had overwhelmed editors. (404 Media)
+ Here’s what we’re getting wrong about AI’s truth crisis. (MIT Technology Review)

8 OpenAI’s ad pilot generated $100 million in under 2 months
More than 600 advertisers are working on the trial. (CNBC)
+ Ads will arrive on ChatGPT free and Go in the coming weeks. (Reuters)

9 An Irish village is giving kids a phone-free upbringing
The ban works because almost everyone’s bought in. (NYT $)

10 Chatting with sycophantic AI makes you less kind
New research found it encourages “uncouth behavior.” (Nature)

Quote of the day 

“I don’t know if it’s ‘murder,’ but it looks like an attempt to cripple Anthropic.” 

—Judge Rita Lin rules against the Pentagon’s ban on Anthropic, The Verge reports. 

One More Thing 

People standing in a TESSERAE pavilion.
Image Credit: Aurelia Institute

This futuristic space habitat is designed to self-assemble in orbit  

More and more people are traveling beyond Earth, but the International Space Station can only hold 11 of them at a time.  

Aurelia Institute, an architecture R&D lab based in Cambridge, MA, is building a solution: a habitat that launches in compact stacks of flat tiles—and self-assembles in orbit.  

The concept may sound far-fetched, but it’s already won support from NASA. Read the full story

—Sarah Ward 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ These optical illusions are absolute brain-melters. 
+ The web design museum lovingly visualizes the evolution of the internet. 
+ Zara Picken’s modernist illustrations are a new window into the mid-20th century. 
+ Explore our planet’s connections through the digital Knowledge Garden.

Content Beats Design, Says CRO Pro

Dave Diederen is a Netherlands-based developer turned conversion rate optimization pro. He encourages ecommerce brands to test product pages, ads, and, well, everything.

He says merchants often prioritize their sites’ aesthetics over copy and content, a big mistake. “Content and copy play a very big role in conversions, if not the biggest,” he told me.

In our recent conversation, he addressed conversion wins, product and home page tactics, A/B strategies, and more.

Our entire audio is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Who are you?

Dave Diederen: I’m the founder of Syntra, a conversion rate optimization agency in the Netherlands. I’m a firm believer in testing and collecting as much data as you can. We do a lot of product page and listicle testing.

I encourage the brands we work with to test new advertising angles, too, such as new creative.

Traffic that doesn’t end up on a product page has no chance to buy. So I tell merchants to keep it simple, focus on what they have, and optimize it.

Bandholz: What is the minimum traffic volume for statistically reliable testing?

Diederen: I would focus on the number of orders, not traffic. A brand might have 100,000 visitors and just 20 orders. There’s no way to judge that data accurately. So I aim for around 250 orders for an A/B test. That’s more than enough.

Brands that do not have 250 orders should typically focus on advertising to increase the volume. Every business is different, however.

Bandholz: Which areas on ecommerce sites drive the biggest conversion gains from testing?

Diederen: Announcement bars are the biggest. Even single-product stores can leverage them well. Most sellers slap on an announcement bar and forget about it. Instead, always link it to a product page, regardless of the promotion. Visitors tend to click on announcement bars.

Another mistake I see is not showing the price under a product title or above the fold in the product page description. Unfortunately, brands often show it in the variant selector or in the add-to-cart button.

If you’re selling supplements, address the ingredients. Comparison charts work well for fashion and healthcare.

Reviews always work well.

Other items are low priorities for conversion. FAQ sections and social proof can go at the bottom of the page, although it depends on the industry.

Brands don’t realize how many visitors divert to the home page. Visitors may start on a product page, click the company logo, and end up on the home page. But most brands don’t optimize their home page. A strong home-page hero image is a good place to start, whether it’s a lifestyle or product image.

Bandholz: How does copy impact conversions?

Diederen: It’s very important, way more than most merchants think. Most focus on a site’s appearance, but trust me, the look and feel aren’t as important. At the end of the day, what matters are the products.

Be specific about your product’s benefits and the problems it solves.

So, yes, content and copy play a very big role in conversions, if not the biggest.

Bandholz: What is your take on email pop-ups?

Diederen: It depends on how many visitors respond. If you get a lot of signups from pop-ups, it makes no sense to remove them. I’ve tested hiding pop-ups for four brands, and it has mostly reduced sales.

So I say don’t get rid of pop-ups, although don’t overdo them either.

Bandholz: Where can people follow you, support you, hire you?

Diederen: Our site is SyntraLabs.com. Follow me on X. I’m also on LinkedIn.

Why Google’s New “Google-Agent” Is The Biggest Mindset Shift In SEO History via @sejournal, @marie_haynes

Imagine a web where a machine fills out your lead forms, buys your products, and negotiates with your backend. While many SEOs are currently arguing over whether to block AI search bots, whether to call ourselves SEOs or GEOs, or optimizing to get mentioned in ChatGPT, the reality of the web is moving far past simple crawlers. With Google’s new WebMCP and Google-Agent bot, the agentic web is already here.

The Web Is Becoming Agentic

The web as we know it (where humans click links and scroll pages) is rapidly coming to an end. What replaces it is the agentic web.

This week, Google announced a new user agent just for agents. When an agent using Google infrastructure browses your site (like Project Mariner), it will use this new tag.

Table showing mobile and desktop HTTP User-Agent strings for the Google-Agent web crawler.
Image Credit: Marie Haynes

This is just the beginning. Agents will do much more than browse the web like a human.

Liz Reid Describes A Web Where Agents Do Much Of The Browsing

In a recent interview, Head of Search Liz Reid noted she believes people will still want to hear from other people, but she said, “I do think that probably means there’s a world in which a lot of agents are talking with each other.”

Google’s latest blog post outlines several AI protocols we all must understand immediately:

| Protocol | What It Stands For | The Business Impact |
| --- | --- | --- |
| MCP | Model Context Protocol | Lets agents securely access your backend data. |
| A2A | Agent2Agent | Enables bot-to-bot communication and transactions. |
| UCP | Universal Commerce Protocol | Lets a machine buy your product directly from the SERPs. |
| A2UI | Agent to User Interface | Automatically composes new visual layouts for users. |
| AG-UI | Agent User Interaction | A middleware for streaming real-time AI data. |

I had a great chat with Search Bar member Liz Micik discussing this.

WebMCP Will Let Agents Use Your Website Natively

Standard browser agents are slow because they look at pixels like humans do. In the agentic web, machines will talk directly with the tools and functionality available on our website.

Infographic comparing traditional human web browsing to AI agent interaction using the WebMCP protocol.
(Yup – it’s a nano banana image. 😛) Image Credit: Marie Haynes

The improvement here is WebMCP. I think every SEO should be paying close attention to this. WebMCP lets agents use the functionality of your website in real time, natively.

What does this look like?

The obvious use case is an agent automatically filling out lead forms perfectly. But I think we will see much more interesting use cases as we publish our own agents. Let’s say I’ve made agents for SEO. (I have! I just haven’t shared them with others yet.) I could share those with you via a SaaS model, perhaps – where you pay a monthly fee for access. Or, what I think will be most likely is that you will be able to have your agent access my agents via WebMCP. My agents will negotiate with your agents on pricing and possibly even help each other improve as well.

Search Is Turning Into AI Search

Google’s Nick Fox recently stated, “Search is becoming AI Search, and the Gemini app is your personal assistant.” He also said that Google is increasingly thinking of AI Mode and AI Overviews as one and the same.

I keep looking back at this NYT article from three years ago.

Article headline about Google devising radical search changes to beat A.I. rivals, with highlighted text.
Image Credit: Marie Haynes

They framed it as Google panicking over ChatGPT. There is no doubt in my mind, however, that Google had much bigger things in mind. The agentic web is not an afterthought! It has been in planning for many years now.

I personally believe that from 1998 until now, those of us who create content have been giving it to Google to train AI. In return, we got human traffic and ad revenue. I think that partnership no longer exists in its traditional form.

Why This Is The Most Exciting Time To Be In SEO

This transition from a human-first web to an agent-first web might sound terrifying if you rely solely on traditional keyword rankings. In reality, it is the biggest opportunity we have seen since the invention of the search engine itself. WebMCP and UCP mean we are no longer just optimizing for clicks; we are optimizing for direct action, frictionless commerce, and automated lead generation. The creators who understand how these agents interact with backend systems are going to see a level of efficiency and reach that human browsing could never achieve. The partnership between creators and Google has definitively changed, but the future of what we can build on top of this agentic web is incredibly bright.

My Advice

It’s hard to know how to act right now because no one, not even Google, knows exactly how the agentic web will unfold. Here’s what I’d recommend:

  • Familiarize yourself with WebMCP and understand how it works.
  • If you are in ecommerce, learn all you can about UCP.
  • Start learning to vibe code using tools like Claude Code, Google’s AI Studio, or my favourite, Antigravity. (If you are a member of my paid community, I just published a full guide on using Antigravity in the Learning Hub.)
  • Focus on the exciting things you can do with AI rather than letting media and social media steer you away! No one knows what’s in store for the future, but I’m positive that those who press in and learn how to use AI for good will have great success.

More Resources:


Read Marie’s newsletter, AI News You Can Use. Subscribe now.


Featured Image: HST6/Shutterstock

Google Tests AI Headlines, Rolls Out Spam Update – SEO Pulse via @sejournal, @MattGSouthern

Welcome to the week’s Pulse: updates affect how your headlines appear in Search, how spam enforcement played out, and how AI content gets labeled.

Here’s what matters for you and your work.

Google Tests AI-Generated Headline Rewrites In Search

Google confirmed that it’s testing AI-generated headline rewrites in traditional search results. The test uses language similar to what Google used before reclassifying AI headlines in Discover as a feature.

Key facts: Google called the test “small and narrow.” The rewrites include no disclosure that Google changed the original headline. Google said any broader launch may not use generative AI but didn’t explain what the alternative would look like.

Why This Matters

Google called AI headlines in Discover “small” in December, reclassified them as a feature by January, and is now using the same language for Search. Google has not outlined an opt-out for this test, and the documented examples show Google changing meaning, not just formatting.

What Publishers And SEO Professionals Are Saying

Bastian Grimm, founder of Peak Ace AG, wrote on LinkedIn:

“Previous rewrites were primarily about matching query intent, fixing truncation, or improving readability. This test uses AI to rewrite for engagement – and documented examples show it changing tone and intent in ways that go well beyond formatting. That is a meaningful shift. A title rewritten to match a query is one thing. A title rewritten because Google’s model thinks a different framing will perform better is another.”

Brodie Clark, independent SEO consultant, wrote on LinkedIn:

“The big issue with this approach is that there were instances where the titles for the articles were rewritten, but the meaning of the article was lost in the rewrite or through formatting changes (such as using capitals for every word).”

Nilay Patel, editor-in-chief of The Verge, wrote on Bluesky:

“Google is now screwing with the 10 blue links in traditional search and rewriting headlines – including ours – to be the worst kind of slop. This sucks so bad”

James Ball, political editor at The New World Opinion and fellow at Tech Policy Press and Demos, wrote on Bluesky:

“Google is re-headlining articles in search results, including in ways that introduce errors. I think even 2-3 years ago it would’ve backed off this for fear of publisher backlash. Does the media have enough clout left wirh tech to get this one reversed?”

Read our full coverage: Google Tested AI Headlines In Discover. Now It’s Testing Them In Search

March 2026 Spam Update Completes In Under 20 Hours

Google’s March 2026 spam update started on March 24 and finished on March 25. The rollout was significantly faster than recent spam updates. The update applies globally and to all languages.

Key facts: The rollout began at 12:00 PM PT on March 24 and ended at 7:30 AM PT on March 25. Google didn’t announce new spam policies with this update. The community response has been notably quiet, with few reports of visible impact.

Why This Matters

The rollout window was short and is already complete, so March 24-25 is the clearest period to review in Search Console. Google’s current spam policies are still the main guidelines to follow, as no new categories have been introduced.

What SEO Professionals Are Saying

Nilesh Pansuriya, leading Guru99’s global content and SEO team, wrote on LinkedIn:

“I’ve been tracking Google updates for 15 years. I’ve never seen one move this fast. The March 2026 Spam Update rolled out on March 24th. Completely finished by March 25th. ⏱️ Total time: 19 hours and 30 minutes.

→ August 2025 spam update → 27 days
→ December 2024 spam update → 7 days
→ October 2022 spam update → 48 hours
→ March 2026 spam update → under 20 hours

Done before most SEOs even noticed it started.”

Read our full coverage: Google Begins Rolling Out The March 2026 Spam Update

Google Adds AI And Bot Content Labels To Structured Data

Google updated its Discussion Forum and Q&A Page structured data documentation to include new properties, including a way for sites to label AI- and bot-generated content.

Key facts: The new digitalSourceType property uses IPTC enumeration values to distinguish content created by a trained model from content created by a simpler automated process. Google lists the property as recommended, not required. When it’s absent, Google assumes the content is human-generated.

Why This Matters

Forums and Q&A platforms now have a documented way to tell Google which content was created by AI or bots. The “recommended” status means adoption will be voluntary.
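Based on the description above, a forum page might flag an AI-written reply with JSON-LD along these lines. This is a hypothetical sketch, not Google’s official example: the post content and author names are invented, and the exact property nesting should be verified against Google’s Discussion Forum documentation. The values shown come from the IPTC Digital Source Type vocabulary, which distinguishes `trainedAlgorithmicMedia` (content from a trained model) from `algorithmicMedia` (a simpler automated process).

```json
{
  "@context": "https://schema.org",
  "@type": "DiscussionForumPosting",
  "headline": "How long do spam updates take to roll out?",
  "author": { "@type": "Person", "name": "Jane Example" },
  "comment": [
    {
      "@type": "Comment",
      "text": "Recent spam updates have completed in days, not weeks.",
      "author": { "@type": "Person", "name": "HelperBot" },
      "digitalSourceType": "https://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    }
  ]
}
```

Omitting the property entirely causes Google to assume the content is human-generated, which is why adoption is effectively voluntary.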

What SEO Professionals Are Saying

Jan-Willem Bobbink, founder of WebGeist, wrote on LinkedIn:

“Lets talk about a gap in Google’s new AI content labeling. They require it for product feeds but only ‘recommend’ it for forums. Google just updated its Discussion Forum and Q&A Page structured data docs with a new property called digitalSourceType. It lets sites flag when a post or comment was written by an AI model or an automated bot. The idea sounds great on paper. In practice, the implementation tells a different story. The property is listed as ‘recommended,’ not required. If a site leaves it out, Google assumes the content is human-generated. That is a massive loophole.”

Read our full coverage: Google Adds AI & Bot Labels To Forum, Q&A Structured Data

Bing Connects Grounding Queries To Cited Pages

Bing Webmaster Tools added a mapping feature to its AI Performance dashboard that connects grounding queries to the specific pages cited for them. The update works in both directions.

Key facts: You can click a grounding query to see which pages are cited for it. You can also click a page to see which grounding queries drive its citations. The dashboard covers AI experiences across Copilot, AI summaries in Bing, and select partner integrations. The data is still a sample, not a complete log.

Why This Matters

This gives you a way to connect AI citation data to specific content on your site. Knowing which pages earn citations for which phrases makes it easier to decide where to focus content updates for AI visibility.

Google’s Search Console includes AI Overviews and AI Mode in standard Performance reporting but hasn’t introduced a comparable page-level citation mapping.

What SEO Professionals Are Saying

Aleyda Solís, international SEO consultant and founder of Orainti, wrote on LinkedIn:

“New Bing Webmaster Tools AI Performance Dashboard Insights: We can now see which pages are being cited for a specific grounding query, and which grounding queries are driving citations to a specific pages Thanks so much for hearing the community feedback Krishna Madhavan, Fabrice Canel and team See the announcement in comments.”

Navah Hopkins, ads liaison at Microsoft Advertising, wrote on LinkedIn:

“Grounding queries reveal the key phrases AI used to retrieve content that was cited, offering insight into how AI interprets user intent. If you see your content is getting cited, that means you’re registering as visible to the AI. The page-level citation report sheds light on which pages are helping you win that visibility.”

Read our full coverage: Bing AI Dashboard Maps Grounding Queries To Cited Pages

Theme Of The Week: Google Tightens Control Over How Content Appears

Three of this week’s four stories show Google asserting more influence over how content is presented and categorized in its ecosystem.

AI headline rewrites let Google change how your pages appear in search results. The spam update completed in under 20 hours, the fastest rollout in recent memory. And the new structured data properties ask platforms to self-report whether content was created by humans or machines.

By contrast, while Google tightens control over how content appears, Bing is giving publishers greater visibility into how their content performs in AI-generated answers. The query-to-page mapping closes a measurement gap that Google hasn’t addressed on its side.



Google Adds Scenario Planner, Performance Max Updates, And Veo – PPC Pulse via @sejournal, @brookeosmundson

Welcome to this week’s PPC Pulse.

This week’s updates focus on Performance Max visibility improvements, new budget planning tools in Google Analytics, and generative video now built directly into Google Ads.

Here’s what was announced this week and why it matters for your campaigns.

Google Adds More Visibility and Control To Performance Max

Google rolled out several updates to Performance Max aimed at two ongoing gaps: control and reporting.

Advertisers can now exclude first-party customer lists. This gives teams running acquisition-focused campaigns a cleaner way to avoid spending on existing users.

On the reporting side, Google added:

  • Budget report
  • Expanded audience insights, including demographic breakdowns
  • Placement reporting segmented by network

Why This Matters For Advertisers

Audience exclusions help reduce overlap between prospecting and retention, assuming your customer lists are accurate. The reporting updates are more practical. Advertisers get better visibility into spend pacing, who campaigns are reaching, and where ads are showing.

For teams already using Performance Max, this improves day-to-day oversight. It does not turn it into a fully controllable campaign type.

What PPC Professionals Are Saying

Anthony Simonetti is “very excited for more insight” into PMax campaigns, while Optifeed voiced its support for the update: “Love seeing PMax get more transparent!”

Google Analytics Introduces Scenario Planner and Projections

Google Analytics launched two new tools as part of its cross-channel budgeting feature:

  • Scenario Planner for building forward-looking budget models
  • Projections for tracking whether live campaigns are pacing toward goals

Both tools use historical data to estimate conversions, revenue, and spend across channels, including non-Google platforms if cost data is imported.

Access is currently limited while the tools are in beta. Advertisers need at least one year of data across multiple channels, along with a few other eligibility requirements.

Why This Matters For Advertisers

Planning and performance have traditionally lived in separate places. These tools bring them closer together, especially for marketers who manage more than just Google Ads.

Advertisers can now model budgets and monitor pacing in the same platform used for reporting. That can help teams managing multiple channels make faster adjustments during a campaign.

The tradeoff is reliability. Outputs depend entirely on data quality and historical consistency. For many accounts, that will limit how actionable these projections actually are.

Veo Brings AI Video Creation Into Google Ads

Google introduced Veo, its generative video model, inside Asset Studio in Google Ads.

Advertisers can start by uploading just three static images, generate short-form videos from them, and then package those videos into ads for formats like Demand Gen.

Veo can generate a video of up to 10 seconds from each uploaded image.

Google is positioning this around speed and creative variation, and it can be used in conjunction with the newly rolled-out Nano Banana Pro. The goal is to make it easier to produce multiple video assets without traditional production.

Why This Matters For Advertisers

Creative production has been a bottleneck for many teams, especially for video.

Veo lowers that barrier immensely for brands. Advertisers can generate variations faster and test more creative without additional resources.

The bigger shift is volume. Google continues to push toward having multiple creative variations in-market at all times. This gives advertisers another way to keep up with that expectation, even if the output still needs review and refinement.

What PPC Professionals Are Saying

The announcement got a lot of traction from advertisers, drawing 70 comments and over 340 reposts on LinkedIn.

André Felizol shared:

“The key here will be the brands that could create something different. With AI facilitating the creation of videos based on images, everything will be similar. So, the companies that will invest more in creativity with different and creative approaches to show their products will win in the long run.”

Brooke Hess is “looking forward to testing” for her agency’s clients while Thomas Eccel has already dug in and created a live demo test of Veo 3.

Personally, I’m excited to test it out after being introduced to the first version of Veo at the 2025 Google Marketing Live event last year.

Theme of the Week: More Ways To Plan, Steer, And Build

This week’s updates all support a more hands-on role for advertisers.

Google added more steering and reporting inside Performance Max, more planning functionality inside Analytics, and more creative production tools inside Google Ads.

Advertisers are getting more ways to shape performance instead of just reacting to it after the fact.



Featured Image: Djile/Shutterstock; Paulo Bobita/Search Engine Journal