Google Lost Two Antitrust Cases, But Stock Rose 65% – Here’s Why

In January, Alphabet passed Apple in market capitalization to become the second most valuable company in the world. Alphabet was worth $3.885 trillion. Apple sat at $3.846 trillion. Only Nvidia, at $4.595 trillion, was ahead.

That alone would be news. But the context makes it something else entirely. Courts had found that Google violated antitrust law in both general search services and general search text advertising. The Department of Justice asked judges to break the company apart, sell off Chrome, divest the Android operating system, and force the sale of its ad exchange. In the search case, the court rejected those proposed divestitures. In the ad-tech case, the government is still asking the judge to order a sale of Google’s ad exchange, and remedies are pending.

In this article, I’ll walk through every active Google antitrust thread, what courts have ordered, what’s still pending, and what the timelines mean. The gap between Google’s legal exposure and its market performance tells a story that matters for everyone working in search.

How We Got Here

When the DOJ’s search monopoly trial opened in 2023, the government argued that Google spent billions on exclusive deals with Apple, Samsung, and browser makers to lock in its position as the default search engine. The case centered on whether those deals maintained a monopoly or reflected a better product.

In 2024, Judge Amit Mehta ruled that Google had maintained an illegal monopoly in general search services. It was the first time a federal court found a tech company had maintained an illegal monopoly since the Microsoft case in 2001.

Then came the remedies phase, where the real fight began. The DOJ wanted dramatic structural changes. Prosecutors laid out four options, including forcing Google to sell Chrome and potentially divesting Android. That was the peak fear moment for investors. It was also the point at which the case stopped being abstract legal theory and started having direct implications for how search distribution works.

What happened next surprised the industry.

The Search Case: Where It Stands

On Sept. 2, 2025, Judge Mehta issued his remedies opinion. He declined to order any divestitures. No Chrome sale. No Android breakup. No forced separation of search from the broader Alphabet structure.

His reasoning centered on AI. Mehta wrote that generative AI had changed the course of the case. He pointed to the competitive threat that AI chatbots posed to Google’s search business and concluded that the market was too dynamic for the kind of structural remedy the DOJ wanted.

Instead, Mehta ordered behavioral remedies. The final judgment, entered on Dec. 5, 2025, limits how Google can structure search distribution deals. Agreements are capped at one year and cannot be used to lock partners into defaults across multiple access points. The judgment also requires Google to give partners more flexibility to surface rival search options and, in some cases, third-party generative AI products.

The order also sets out data-licensing obligations for qualified rivals, including access to a portion of Google’s web index and certain user-side data. An oversight process, including a technical committee, governs implementation and monitors compliance throughout the remedy period.

Google filed its Notice of Appeal on Jan. 16, 2026. The company is specifically challenging the data-sharing requirements and the technical committee oversight. The DOJ had until Feb. 3, 2026, to decide whether to file a cross-appeal seeking stronger remedies than what Mehta ordered.

The search case landed in a unique place. Google keeps Chrome and Android. The default search deals that delivered Google the majority of mobile search activity get restructured with shorter terms and fewer restrictions on partners.

Data-sharing could enable competitors to build better search products, but the timeline for that playing out is years, not months.

The Ad-Tech Case: What’s Coming

The second federal case against Google involves digital advertising technology. This one operates on a different track with a different judge and a different set of remedies at stake.

In April 2025, Judge Leonie Brinkema ruled that Google had willfully monopolized parts of the digital ad market. Where the search case focused on consumer-facing search defaults, this case targeted Google’s ad server, ad exchange (AdX), and the connections between them.

The DOJ’s post-trial brief requested the divestiture of Google’s Ad Manager suite, including the AdX exchange. That would mean separating the tool publishers use to sell ads from the marketplace where those ads get bought and sold.

During closing arguments in November, Brinkema expressed skepticism. She noted that a potential buyer for the ad exchange hadn’t been identified and called the divestiture proposal “fairly abstract.” The court, she said, needed to be “far more down to earth and concrete.”

Brinkema said she plans to issue a decision early in 2026. That ruling could arrive at any point in Q1.

The practical stakes here are different from the search case. The search remedies affect how people find Google. The ad-tech remedies affect how publishers make money through Google.

Any forced separation of AdX would directly change the monetization stack that millions of websites rely on. Even if Brinkema follows the same pattern as Mehta and declines structural remedies, the behavioral changes she orders could reshape how programmatic advertising flows through Google’s systems.

The Epic/Play Store Settlement Question

In late January 2026, Judge James Donato held a hearing in San Francisco on a proposed settlement between Google and Epic Games. The case, which centered on Google’s Play Store practices, appeared headed for resolution. But Donato threw the terms into question.

Donato described the settlement as overly favorable to the two companies and questioned whether it came at the expense of the broader class of developers affected by Google’s Play Store policies.

The settlement terms include Epic spending $800 million over six years on Google services, plus a marketing and exploratory partnership. Reports described the partnership as involving Epic’s technology, including Unreal Engine, alongside marketing and other commercial terms.

This case matters because it touches a different part of Google’s ecosystem. The search and ad-tech cases are about how Google dominates web search and digital advertising. The Play Store case is about how Google controls app distribution on Android. Together, these cases cover the three main ways Google generates revenue and the three main ways practitioners interact with Google’s platforms.

The EU Front

European regulators are pursuing their own path, and in some areas, they’re moving faster than U.S. courts.

In September 2025, the European Commission fined Google €2.95 billion for abusing its dominance in ad tech. Google said it would appeal the decision.

Reports from December indicate the EU is preparing a non-compliance fine against Google related to Play Store anti-steering rules. That fine is expected as early as Q1 2026, which would put it on roughly the same timeline as Brinkema’s ad-tech ruling in the U.S.

But the most consequential EU action may be the newest one. On Jan. 26, 2026, the Commission opened specification proceedings under the Digital Markets Act focused on online search data sharing and interoperability for Android AI features. The process is framed around access for rivals, including AI developers and search competitors, and is expected to conclude within six months.

That goes beyond what the U.S. search case requires. Mehta’s order mandates data-sharing with search competitors. The EU proceedings ask whether Google must open access to a broader set of rivals, including those building AI-powered products that don’t fit neatly into the traditional search category.

For those watching how AI search develops, this EU proceeding could have bigger long-term implications than anything in the U.S. cases. The question of whether Google’s search index data feeds into competing AI products affects the entire ecosystem of AI-generated answers, citations, and traffic referrals.

Why The Stock Rose Anyway

Google’s stock rose 65% in 2025, CNBC reported, which made it the best performer among the big tech stocks. Apple, by comparison, rose 8.6%. The gap between Google’s legal losses and its market gains points to a pattern that has repeated at every stage of these cases.

When we covered the original verdict in October 2024 and looked at what it could mean for SEO, the range of possible outcomes was wide. Chrome divestiture, Android breakup, elimination of default deals, forced data sharing, and structural separation of search from advertising all sat on the table.

What investors watched play out was a narrowing of that range at every step. Google offered to loosen its search engine deals in December 2024, signaling that behavioral concessions were coming. The DOJ pushed for breakups. The court landed closer to Google’s position than the government’s.

A Financial Times analysis from January 2026 placed Google’s outcome in a broader context. Across multiple Big Tech antitrust cases, judges have shown reluctance to order structural remedies. Meta won outright in November when Judge James Boasberg ruled the company doesn’t hold an illegal monopoly. In the Google ad-tech case, Brinkema expressed discomfort with divestiture. Former DOJ antitrust chief Jonathan Kanter, who helped bring these cases, acknowledged to the FT that the rulings showed the U.S. was too slow to act.

The pattern across cases is consistent. Courts are willing to find that tech companies violated antitrust law. They’re reluctant to order the kind of structural changes that would break the companies apart. And they’re citing AI competition as a central reason for that restraint.

For Google specifically, the combination of light remedies, a strong AI narrative (signs that Google had caught up to OpenAI reinforced investor confidence, according to a Fortune report), and continued dominance in search revenue removed the threat that investors feared most. The breakup scenario didn’t happen, and the stock reflected that.

What This Means For Search Professionals

The antitrust cases resolved in a way that preserves Google’s structure while introducing new requirements around data access and distribution agreements. The impact will unfold over years, not weeks. Here’s what to track.

Search distribution could diversify gradually. The one-year cap on distribution agreements and the restrictions on tying defaults across access points give Apple and Samsung more room to offer users alternatives or to negotiate different terms. Whether they will is a separate question.

Apple’s search-default deal with Google has been widely reported to be worth tens of billions annually. Without that kind of long-term lock-in, Apple has financial incentive to build or license an alternative.

Data-sharing mandates could create new competitors. The judgment requires Google to license a portion of its web index and certain user-side data to qualified rivals, with an oversight process governing the details. The scope matters enormously. Providing limited index access is different from sharing the ranking signals and full index depth that would let a competitor build a viable alternative. Google is appealing this requirement, which tells you where the company sees the real threat.

The ad-tech ruling will directly affect publisher revenue. Brinkema’s decision, expected in early 2026, determines whether Google must separate the tools publishers use to sell ads from the exchange where those ads trade. Even if she orders behavioral remedies instead of a full divestiture, changes to how Google’s ad stack operates will ripple through programmatic advertising. Publishers using Google Ad Manager should pay close attention to the timeline.

The EU’s DMA proceedings open a different front. The January proceedings cover online search data sharing and Android AI interoperability, framed around access for rivals, including AI developers. The outcome would affect how AI search products source their information and, by extension, how content gets cited in AI-generated answers.

Looking Ahead

The next 12 months will determine whether the antitrust cases produce real changes to search markets or settle into a compliance exercise that preserves the status quo.

Key dates and events to watch include Brinkema’s ad-tech remedies ruling, expected in Q1 2026. The DOJ’s decision on whether to cross-appeal Mehta’s rejection of stronger search remedies was due by early February.

Google’s search case appeal will move through the D.C. Circuit, likely taking a year or more. The EU’s DMA specification proceedings on search data sharing and Android AI interoperability are expected to conclude within six months. And the Epic/Play Store settlement faces scrutiny after Judge Donato’s criticism.

Meanwhile, the Amazon and Apple antitrust cases are pending, with trials expected in 2027. Those cases will test whether courts continue the pattern of finding violations but declining breakups, or whether the legal environment changes.

In Summary

Google was found to have maintained illegal monopolies in two separate markets. It’s appealing one case and awaiting remedies in another. Regulators on two continents are pressing forward, and yet the company just became the second most valuable in the world.

Whether the courts ultimately deliver continuity or disruption will play out over the years ahead. Either way, what gets decided in these cases shapes the infrastructure that every search professional works within.

Featured Image: Collagery/Shutterstock

Bing Adds AI Visibility Reporting

Unlike traditional search engine optimization, AI search lacks native performance reporting to help businesses develop organic visibility strategies.

Google’s Search Console combines AI Overviews and organic listings in its “Performance” section, leaving optimizers to guess which channel drove visibility and traffic. ChatGPT shares metrics only with publishers that have licensed their content to OpenAI.

Bing is the first platform to offer some transparency. A few weeks after publishing its “guide to AEO and GEO,” Bing launched an “AI Performance Report” in Webmaster Tools.

AI Performance

The new report tracks citations in Microsoft Copilot, AI-generated summaries in Bing, and select AI partner integrations. But there’s no option to filter by a single surface, and no way to identify the integration partners or their purpose.

The report shows “Total Citations” for the chosen period and “Avg. Cited Pages.” It then lists:

  • “Grounding Queries,” which are “the key phrases the AI used when retrieving content that was cited in its answer.” In other words, the queries are the “fan-out” terms that Bing’s AI agents use to search for and find answers, though we don’t know which search engines or platforms they access.
  • “Pages,” the URLs mentioned in AI answers.
Screenshot of the new AI Performance section: The new Webmaster Tools section lists citations by “Grounding Queries” and “Pages.”

Each tab includes additional visibility data:

  • For every grounding query, Webmaster Tools reports on the average number of unique pages cited per day in AI answers.
  • For each cited URL, the report includes its frequency — how often it appears in an answer — not its importance, ranking, or role within a response.

The report provides no traffic or click-through data and no clarity into which Grounding Queries triggered which citations.

Using the Data

The report is a good first step, but it offers little actionable data. Perhaps it will force other players to do more.

According to Bing, the new report:

… shows how your site’s content is used in AI‑generated answers across Microsoft Copilot and partner experiences by highlighting which pages are cited, how visibility trends change over time, and the grounding queries associated with your content.

I’m making the report more useful by:

  • Researching organic keywords on Bing and Google that drive traffic to the cited URLs,
  • Prompting ChatGPT or Gemini to turn the keywords into prompts,
  • Evaluating whether the cited pages address those prompts or need better structure or clarity.

Also, I identify common modifiers in the grounding queries to understand how AI agents find the pages.

Screenshot: Identify common modifiers, such as “virus” in this example, to understand how AI agents find your pages.

Webmaster Tools

Setting up Bing Webmaster Tools takes only a couple of minutes if you already use Google Search Console.

Log in to Webmaster Tools with your Microsoft account, click “Add site,” and choose the “Import your sites from GSC” option. Allow roughly 24 hours for Bing to collect and report the data.

Antitrust Filing Says Google Cannibalizes Publisher Traffic

Penske Media Corporation (PMC) filed a federal court memorandum opposing Google’s motion to dismiss its antitrust lawsuit. The company argues that Google has broken the longstanding premise of a web ecosystem in which publishers allowed their content to be crawled in exchange for search traffic.

PMC publishes twenty brands, including Deadline, The Hollywood Reporter, and Rolling Stone.

Web Ecosystem

The PMC legal filing makes repeated references to the “fundamental fair exchange” in which Google sends traffic to publishers in exchange for permission to crawl and index their websites, explicitly quoting Google’s expressions of support for “the health of the web ecosystem.”

Yet some industry outsiders on social media deny that any such understanding exists between Google and web publishers, even though Google itself doesn’t deny it.

This concept dates to pretty much the beginning of Google and is commonly understood by all web workers. It’s embedded in Google’s Philosophy, expressed at least as far back as 2004:

“Google may be the only company in the world whose stated goal is to have users leave its website as quickly as possible.”

In May 2025 Google published a blog post where they affirmed that sending users to websites remained their core goal:

“…our core goal remains the same: to help people find outstanding, original content that adds unique value.”

What’s relevant about that passage is its framing: Google encourages publishers to create high-quality content, and in exchange, they will be considered for referral traffic.

The concept of a web ecosystem where both sides benefit was discussed by Google CEO Sundar Pichai in a June 2025 podcast interview with Lex Fridman, where Pichai said that sending people to the human-created web in AI Mode was “going to be a core design principle for us.”

In response to a follow-up question referring to journalists who are nervous about web referrals, Sundar Pichai explicitly mentioned the ecosystem and Google’s commitment to it.

Pichai responded:

“I think news and journalism will play an important role, you know, in the future we’re pretty committed to it, right? And so I think making sure that ecosystem… In fact, I think we’ll be able to differentiate ourselves as a company over time because of our commitment there. So it’s something I think you know I definitely value a lot and as we are designing we’ll continue prioritizing approaches.”

This “fundamental fair exchange” serves as the baseline competitive condition for PMC’s claims of coercive reciprocal dealing and unlawful monopoly maintenance.

That baseline helps PMC argue:

  • That Google changed the understood terms of participation in search in a way publishers cannot refuse.
  • And that Google used its dominance in search to impose those new terms.

Yet despite Google’s own CEO calling traffic to websites a core design principle, and despite multiple instances, past and present, in which Google’s own documentation refers to this reciprocity between publishers and Google, Google’s legal response expressly denies that it exists.

The PMC document states:

“Google …argues that no reciprocity agreement exists because it has not “promised to deliver” any search referral traffic.”

Profound Consequences Of Google AI Search

PMC filed a federal court memorandum in February 2026 opposing Google’s motion to dismiss its antitrust complaint. The complaint details Google’s use of its search monopoly to “coerce” publishers into providing content for AI training and AI Overviews without compensation.

The suit argues that Google has pivoted from being a search engine (that sends traffic to websites) to an answer engine that removes the incentive for users to click to visit a website. The lawsuit claims that this change harms the economic viability of digital publishers.

The filing explains the consequences of this change:

“Google has shattered the longstanding bargain that allows the open internet to exist. The consequences for online publishers—to say nothing of the public at large—are profound.”

Google Is Using Their Market Power

The filing claims that the collapse of the traditional search ecosystem positions Google’s AI search system as coercive rather than innovative, arguing that publishers must either allow AI to reuse their content or risk losing search visibility.

The legal filing alleges that Google’s generative AI competes directly with online publishers for users’ attention, describing Google as cannibalizing publishers’ traffic and specifically alleging that Google is using its “market power” to maintain a situation in which publishers can’t block the AI without also harming what little search traffic is left.

The memorandum portrays a bleak choice offered by Google:

“Google’s search monopoly leaves publishers with no choice: acquiesce—even as Google cannibalizes the traffic publishers rely on—or perish.”

It also describes the role AI grounding plays in cannibalizing publisher traffic for Google’s sole benefit:

“Through RAG, or “grounding,” Google uses, repackages, and republishes publisher content for display on Google’s SERP, cannibalizing the traffic on which PMC depends.”

Expansion Of Zero-Click Search Results And Traffic Loss

The filing claims AI answers divert users away from publisher sites and diminish monetizable audience visits. Multiple parts of the filing directly confront Google with the reduced search traffic that results from the cannibalization of publisher content.

The filing alleges:

“Google reduces click‑throughs to publisher sites, increases zero‑click behavior, and diverts traffic that publishers need to support their advertising, affiliate, and subscription revenue.

…Google’s insinuation . . . that AI Overview is not getting in the way of the ten blue links and the traffic going back to creators and publishers is just 100% false . . . . [Users] are reading the overview and stopping there . . . . We see it.”

…The purpose is not to facilitate click-throughs but to have users consume PMC’s content, repackaged by Google, directly on the SERP.”

Zero-click searches are described as a component of a multi-part process in which publishers are injured by Google’s conduct. The filing accuses Google of using publisher content for training, grounding their AI on facts, and then republishing it within the zero-click AI search environment that either reduces or eliminates clicks back to PMC’s websites.

Should Google Send More Referral Traffic?

Everything described in the PMC filing mirrors the traffic losses that virtually all online businesses have been complaining about as a result of Google’s AI search surfaces. It’s the reason Lex Fridman specifically challenged Google’s CEO on the amount of traffic Google is sending to websites.

Google AI Shows A Site Is Offline Due To JS Content Delivery

Google’s John Mueller offered a simple solution to a Redditor who blamed Google’s “AI” for a note in the SERPs saying that the website had been down since early 2026.

The Redditor didn’t write a post on Reddit; they just linked to their blog post blaming Google and AI. This enabled Mueller to go straight to the site, identify the cause as a JavaScript implementation issue, and set them straight that it wasn’t Google’s fault.

Redditor Blames Google’s AI

The blog post by the Redditor blames Google, headlining the article with a computer science buzzword salad that over-complicates and (unknowingly) misstates the actual problem.

The article title is:

“Google Might Think Your Website Is Down
How Cross-page AI aggregation can introduce new liability vectors.”

That part about “cross-page AI aggregation” and “liability vectors” is eyebrow-raising because neither is an established term of art in computer science.

The “cross-page” thing is likely a reference to Google’s Query Fan-Out, where a question on Google’s AI Mode is turned into multiple queries that are then sent to Google’s Classic Search.

Regarding “liability vectors”: a vector is a real concept discussed in SEO and used in natural language processing (NLP), but a “liability vector” is not part of it.

The Redditor’s blog post admits that they don’t know if Google is able to detect if a site is down or not:

“I’m not aware of Google having any special capability to detect whether websites are up or down. And even if my internal service went down, Google wouldn’t be able to detect that since it’s behind a login wall.”

They also appear unaware of how RAG or Query Fan-Out works, or perhaps of how Google’s AI systems work in general. The author seems to regard it as a discovery that Google references fresh information instead of parametric knowledge (information the LLM gained from training).

They write that Google’s AI answer said the website itself indicated it had been offline since early 2026:

“…the phrasing says the website indicated rather than people indicated; though in the age of LLMs uncertainty, that distinction might not mean much anymore.

…it clearly mentions the timeframe as early 2026. Since the website didn’t exist before mid-2025, this actually suggests Google has relatively fresh information; although again, LLMs!”

A little later in the blog post the Redditor admits that they don’t know why Google is saying that the website is offline.

They explained that they implemented a shot-in-the-dark fix by removing a pop-up, incorrectly guessing that the pop-up was causing the issue. That highlights the importance of knowing what’s actually causing a problem before making changes in the hope that they will fix it.

The Redditor shared that they didn’t know how Google summarizes information about a site in response to a query about it, and expressed concern that Google could scrape irrelevant information and show it as an answer.

They write:

“…we don’t know how exactly Google assembles the mix of pages it uses to generate LLM responses.

This is problematic because anything on your web pages might now influence unrelated answers.

…Google’s AI might grab any of this and present it as the answer.”

I don’t fault the author for not knowing how Google AI search works; I’m fairly certain it’s not widely known. It’s easy to get the impression that it’s simply an AI answering questions.

But what’s basically going on is that AI search is built on Classic Search, with AI synthesizing the content it finds online into a natural-language answer. It’s like asking someone a question: they Google it, then explain the answer based on what they learned from reading the web pages.

Google’s John Mueller Explains What’s Going On

Mueller responded to the person’s Reddit post in a neutral and polite manner, showing why the fault lies in the Redditor’s implementation.

Mueller explained:

“Is that your site? I’d recommend not using JS to change text on your page from “not available” to “available” and instead to just load that whole chunk from JS. That way, if a client doesn’t run your JS, it won’t get misleading information.

This is similar to how Google doesn’t recommend using JS to change a robots meta tag from “noindex” to “please consider my fine work of html markup for inclusion” (there is no “index” robots meta tag, so you can be creative).”

Mueller’s response explains that the site relies on JavaScript to replace placeholder text in the served HTML, which only works for visitors whose browsers actually run that script.

What happened here is that Google indexed the placeholder text. It saw the originally served content with the “not available” message and treated that as the page’s content.

Mueller explained that the safer approach is to have the correct information present in the page’s base HTML from the start, so that both users and search engines receive the same content.
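To make the difference concrete, here is a minimal, hypothetical browser-side sketch in TypeScript. The element IDs and wording are illustrative only, not taken from the Redditor’s site.

// Anti-pattern: the served HTML already contains misleading placeholder text,
// e.g. <p id="status">This service is not available.</p>, and JavaScript
// swaps it out later. Clients that don't run the script, including some
// crawlers, index the placeholder.
const statusEl = document.getElementById("status");
if (statusEl) {
  statusEl.textContent = "This service is available."; // only JS-capable clients ever see this
}

// Closer to Mueller's suggestion: don't ship the misleading text at all, and
// render the whole block from JavaScript into an empty container,
// e.g. <div id="status-container"></div>.
const container = document.getElementById("status-container");
if (container) {
  const note = document.createElement("p");
  note.textContent = "This service is available.";
  container.appendChild(note); // clients that skip JS simply see nothing here
}

// Better still, as noted above: put the correct text directly in the base HTML
// so users and crawlers receive the same content without running any script.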

Takeaways

There are multiple takeaways here that go beyond the technical issue underlying the Redditor’s problem. Top of the list is how they tried to guess their way to an answer.

They really didn’t know how Google AI search works, which introduced a series of assumptions that complicated their ability to diagnose the issue. Then they implemented a “fix” based on guessing what they thought was probably causing the issue.

Guessing at SEO problems is often justified by pointing to Google’s opacity, but sometimes it’s not about Google. It’s about a knowledge gap in SEO itself, and a signal that further testing and diagnosis are necessary.

Featured Image by Shutterstock/Kues

Google’s Search Relations Team Debates If You Still Need A Website

Google’s Search Relations team was asked directly whether you still need a website in 2026. They didn’t give a one-size-fits-all answer.

The conversation stayed focused on trade-offs between owning a website and relying on platforms such as social networks or app stores.

In a new episode of the Search Off the Record podcast, Gary Illyes and Martin Splitt spent about 28 minutes exploring the question and repeatedly landed on the same conclusion: it depends.

What Was Said

Illyes and Splitt acknowledged that websites still offer distinct advantages, including data sovereignty, control over monetization, the ability to host services such as calculators or tools, and freedom from platform content moderation.

Both Googlers also emphasized situations where a website may not be necessary.

Illyes referenced a Google user study conducted in Indonesia around 2015-2016 where businesses ran entirely on social networks with no websites. He described their results as having “incredible sales, incredible user journeys and retention.”

Illyes also described mobile games that, in his telling, became multi-million-dollar and in some cases “billion-dollar” businesses without a meaningful website beyond legal pages.

Illyes offered a personal example:

“I know that I have a few community groups in WhatsApp for instance because that’s where the people I want to reach are and I can reach them reliably through there. I could set up a website but I never even considered because why? To do what?”

Splitt addressed trust and presentation, saying:

“I’d rather have a nicely curated social media presence that exudes trustworthiness than a website that is not well done.”

When pressed for a definitive answer, Illyes offered the closest thing to a position, saying that if you want to make information or services available to as many people as possible, a website is probably still the way to go in 2026. But he framed it as a personal opinion, not a recommendation.

Why This Matters

Google Search is built around crawling and indexing web content, but the hosts still frame “needing a website” as a business decision that depends on your goals and audience.

Neither made a case that websites are essential for every business in 2026. Neither argued that the open web offers something irreplaceable. The strongest endorsement was that websites provide a low barrier of entry for sharing information and that the web “isn’t dead.”

This is consistent with the fragmented discovery landscape that SEJ has been covering, where user journeys now span AI chatbots, social feeds, and community platforms alongside traditional search.

Looking Ahead

The Search Off the Record podcast has historically offered behind-the-scenes perspectives from the Search Relations team that sometimes run ahead of official positions.

This episode didn’t introduce new policy or guidance. But the Search Relations team’s willingness to validate social-only business models and app-only distribution reflects how the role of websites is changing in a multi-platform discovery environment.

The question is worth sitting with. If the Search Relations team frames website ownership as situational rather than essential, the value proposition rests on the specific use case, not on the assumption that every business needs one.


Featured Image: Diki Prayogo/Shutterstock

Bing AI Citation Tracking, Hidden HTTP Homepages & Pages Fall Under Crawl Limit – SEO Pulse

Welcome to this week’s SEO Pulse: updates cover how you track AI visibility, how a ghost page can break your site name in search results, and what new crawl data reveals about Googlebot’s file size limits.

Here’s what matters for you and your work.

Bing Webmaster Tools Adds AI Citation Dashboard

Microsoft introduced an AI Performance dashboard in Bing Webmaster Tools, giving publishers visibility into how often their content gets cited in Copilot and AI-generated answers. The feature is now in public preview.

Key Facts: The dashboard tracks total citations, average cited pages per day, page-level citation activity, and grounding queries. Grounding queries show the phrases AI used when retrieving your content for answers.

Why This Matters

Bing is now offering a dedicated dashboard for AI citation visibility. Google includes AI Overviews and AI Mode activity in Search Console’s overall Performance reporting, but it doesn’t break out a separate report or provide citation-style URL counts. AI Overviews also assign all linked pages to a single position, which limits what you can learn about individual page performance in AI answers.

Bing’s dashboard goes further by tracking which pages get cited, how often, and what phrases triggered the citation. The missing piece is click data. The dashboard shows when your content is cited, but not whether those citations drive traffic.

Now you can confirm which pages are referenced in AI answers and identify patterns in grounding queries, but connecting AI visibility to business outcomes still requires combining this data with your own analytics.

What SEO Professionals Are Saying

Wil Reynolds, founder of Seer Interactive, celebrated the feature on X and focused on the new grounding queries data:

“Bing is now giving you grounding queries in Bing Webmaster tools!! Just confirmed, now I gotta understand what we’re getting from them, what it means and how to use it.”

Koray Tuğberk GÜBÜR, founder of Holistic SEO & Digital, compared it directly to Google’s tooling on X:

“Microsoft Bing Webmaster Tools has always been more useful and efficient than Google Search Console, and once again, they’ve proven their commitment to transparency.”

Fabrice Canel, principal product manager at Microsoft Bing, framed the launch on X as a bridge between traditional and AI-driven optimization:

“Publishers can now see how their content shows up in the AI era. GEO meets SEO, power your strategy with real signals.”

The reaction across social media centered on a shared frustration. This is the data practitioners have been asking for, but it comes from Bing rather than Google. Several people expressed hope that Google and OpenAI would follow with comparable reporting.

Read our full coverage: Bing Webmaster Tools Adds AI Citation Performance Data

Hidden HTTP Homepage Can Break Your Site Name In Google

Google’s John Mueller shared a troubleshooting case on Bluesky where a leftover HTTP homepage was causing unexpected site-name and favicon problems in search results. The issue is easy to miss because Chrome can automatically upgrade HTTP requests to HTTPS, hiding the problematic page from normal browsing.

Key Facts: The site used HTTPS, but a server-default HTTP homepage was still accessible. Chrome’s auto-upgrade meant the publisher never saw the HTTP version, but Googlebot doesn’t follow Chrome’s upgrade behavior, so it was pulling from the wrong page.

Why This Matters

This is the kind of problem you wouldn’t find in a standard site audit because your browser never shows it. If your site name or favicon in search results doesn’t match what you expect, and your HTTPS homepage looks correct, the HTTP version of your domain is worth checking.

Mueller suggested running curl from the command line to see the raw HTTP response without Chrome’s auto-upgrade. If it returns a server-default page instead of your actual homepage, that’s the source of the problem. You can also use the URL Inspection tool in Search Console with a Live Test to see what Google retrieved and rendered.
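If you prefer a scripted check over a one-off curl command, here is a minimal Node.js sketch in TypeScript (assuming Node 18+ with built-in fetch; the domain is a placeholder). Unlike a browser, it won’t silently upgrade the request to HTTPS, so you see what a non-upgrading client such as Googlebot receives.

// Request the plain-HTTP homepage without following redirects, so any
// server-default page or unexpected content becomes visible.
async function checkHttpHomepage(domain: string): Promise<void> {
  const res = await fetch(`http://${domain}/`, { redirect: "manual" });

  console.log(`Status: ${res.status}`);
  console.log(`Location header: ${res.headers.get("location") ?? "(none)"}`);

  // Print the <title> as a quick hint of which page the server actually returned.
  const body = await res.text();
  const title = body.match(/<title>([^<]*)<\/title>/i)?.[1] ?? "(no title found)";
  console.log(`Title: ${title}`);
}

checkHttpHomepage("example.com").catch(console.error);

A redirect to the HTTPS homepage, or an HTTP response that matches your real homepage, is what you want to see; a bare server-default page is the problem Mueller described.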

Google’s documentation on site names specifically mentions duplicate homepages, including HTTP and HTTPS versions, and recommends using the same structured data for both. Mueller’s case shows what happens when an HTTP version contains content different from the HTTPS homepage you intended.

What People Are Saying

Mueller described the case on Bluesky as “a weird one,” noting that the core problem is invisible in normal browsing:

“Chrome automatically upgrades HTTP to HTTPS so you don’t see the HTTP page. However, Googlebot sees and uses it to influence the sitename & favicon selection.”

The case highlights a pattern where browser features often hide what crawlers see. Examples include Chrome’s auto-upgrade, reader modes, client-side rendering, and JavaScript content. To debug site name and favicon issues, check the server response directly, not just what the browser renders.

Read our full coverage: Hidden HTTP Page Can Cause Site Name Problems In Google

New Data Shows Most Pages Fit Well Within Googlebot’s Crawl Limit

New research based on real-world webpages suggests most pages sit well below Googlebot’s 2 MB fetch cutoff. The data, analyzed by Search Engine Journal’s Roger Montti, draws on HTTP Archive measurements to put the crawl limit question into practical context.

Key Facts: HTTP Archive data suggests most pages are well below 2 MB. Google recently clarified in updated documentation that Googlebot’s limit for supported file types is 2 MB, while PDFs get a 64 MB limit.

Why This Matters

The crawl limit question has been circulating in technical SEO discussions, particularly after Google updated its Googlebot documentation earlier this month.

The new data answers the practical question that documentation alone couldn’t. Does the 2 MB limit matter for your pages? For most sites, the answer is no. Standard webpages, even content-heavy ones, rarely approach that threshold.

Where the limit could matter is on pages with extremely bloated markup, inline scripts, or embedded data that inflates HTML size beyond typical ranges.
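If you want to sanity-check a specific page against that threshold, a raw byte count of the initial HTML is enough. Here is a minimal Node.js sketch in TypeScript (assuming Node 18+ with built-in fetch; the URL is a placeholder), not an official Google tool.

// Rough check: how large is a page's raw HTML relative to the 2 MB fetch limit?
const TWO_MB = 2 * 1024 * 1024;

async function checkHtmlSize(url: string): Promise<void> {
  const res = await fetch(url);
  const html = await res.text();
  const bytes = Buffer.byteLength(html, "utf8");

  console.log(`${url}: ${(bytes / 1024).toFixed(1)} KB of HTML`);
  console.log(
    bytes > TWO_MB
      ? "Above 2 MB: content past the cutoff may not be fetched."
      : "Well within the 2 MB limit."
  );
}

checkHtmlSize("https://example.com/").catch(console.error);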

The broader pattern here is Google making its crawling systems more transparent. Moving documentation to a standalone crawling site, clarifying which limits apply to which crawlers, and now having real-world data to validate those limits gives a clearer picture of what Googlebot handles.

What Technical SEO Professionals Are Saying

Dave Smart, technical SEO consultant at Tame the Bots and a Google Search Central Diamond Product Expert, put the numbers in perspective in a LinkedIn post:

“Googlebot will only fetch the first 2 MB of the initial html (or other resource like CSS, JavaScript), which seems like a huge reduction from 15 MB previously reported, but honestly 2 MB is still huge.”

Smart followed up by updating his Tame the Bots fetch and render tool to simulate the cutoff. In a Bluesky post, he added a caveat about the practical risk:

“At the risk of overselling how much of a real world issue this is (it really isn’t for 99.99% of sites I’d imagine), I added functionality to cap text based files to 2 MB to simulate this.”

Google’s John Mueller endorsed the tool on Bluesky, writing:

“If you’re curious about the 2MB Googlebot HTML fetch limit, here’s a way to check.”

Mueller also shared Web Almanac data on Reddit to put the limit in context:

“The median on mobile is at 33kb, the 90-percentile is at 151kb. This means 90% of the pages out there have less than 151kb HTML.”

Roger Montti, writing for Search Engine Journal, reached a similar conclusion after reviewing the HTTP Archive data. Montti noted that the data based on real websites shows most sites are well under the limit, and called it “safe to say it’s okay to scratch off HTML size from the list of SEO things to worry about.”

Read our full coverage: New Data Shows Googlebot’s 2 MB Crawl Limit Is Enough

Theme Of The Week: The Diagnostic Gap

Each story this week points to something practitioners couldn’t see before, or checked the wrong way.

Bing’s AI citation dashboard fills a measurement gap that has existed since AI answers started citing website content. Mueller’s HTTP homepage case reveals an invisible page that standard site audits and browser checks would miss entirely because Chrome hides it. And the Googlebot crawl limit data answers a question that documentation updates raised, but couldn’t resolve on their own.

The connecting thread isn’t that these are new problems. AI citations have been happening without measurement tools. Ghost HTTP pages have been confusing site name systems since Google introduced the feature. And crawl limits have been listed in Google’s docs for years without real-world validation. What changed this week is that each gap got a concrete diagnostic: a dashboard, a curl command, and a dataset.

The takeaway is that the tools and data for understanding how search engines interact with your content are getting more specific. The challenge is knowing where to look.

Featured Image: Accogliente Design/Shutterstock

The Classifier Layer: Spam, Safety, Intent, Trust Stand Between You And The Answer

Most people still think visibility is a ranking problem. That worked when discovery lived in 10 blue links. It breaks down when discovery happens inside an answer layer.

Answer engines have to filter aggressively. They are assembling responses, not returning a list. They are also carrying more risk. A bad result can become harmful advice, a scam recommendation, or a confident lie delivered in a friendly tone. So the systems that power search and LLM experiences rely on classification gates long before they decide what to rank or what to cite.

If you want to be visible in the answer layer, you need to clear those gates.

SSIT is a simple way to name what’s happening. Spam, Safety, Intent, Trust. Four classifier jobs sit between your content and the output a user sees. They sort, route, and filter long before retrieval, ranking, or citation.


Spam: The Manipulation Gate

Spam classifiers exist to catch scaled manipulation. They are upstream and unforgiving, and if you trip them, you can be suppressed before relevance even enters the conversation.

Google is explicit that it uses automated systems to detect spam and keep it out of search results. It also describes how those systems evolve over time and how manual review can complement automation.

Google has also named a system directly in its spam update documentation. SpamBrain is described as an AI-based spam prevention system that it continually improves to catch new spam patterns.

For SEOs, spam detection behaves like pattern recognition at scale. Your site gets judged as a population of pages, not a set of one-offs. Templates, footprints, link patterns, duplication, and scaling behavior all become signals. That’s why spam hits often feel unfair. Single pages look fine; the aggregate looks engineered.

If you publish a hundred pages that share the same structure, phrasing, internal links, and thin promise, classifiers see the pattern.

Google’s spam policies are a useful map of what the spam gate tries to prevent. Read them like a spec for failure modes, then connect each policy category to a real pattern on your site that you can remove.

Manual actions remain part of this ecosystem. Google documents that manual actions can be applied when a human reviewer determines a site violates its spam policies.

There is an uncomfortable SEO truth hiding in this. If your growth play relies on behaviors that resemble manipulation, you are betting your business on a classifier not noticing, not learning, and not adapting. That is not a stable bet.

Safety: The Harm And Fraud Gate

Safety classifiers are about user protection. They focus on harm, deception, and fraud. They do not care if your keyword targeting is perfect, but they do care if your experience looks risky.

Google has made public claims about major improvements in scam detection using AI, including catching more scam pages and reducing specific forms of impersonation scams.

Even if you ignore the exact numbers, the direction is clear. Safety classification is a core product priority, and it shapes visibility hardest where users can be harmed financially, medically, or emotionally.

This is where many legitimate sites accidentally look suspicious. Safety classifiers are conservative, and they work at the level of pattern and context. Monetization-heavy layouts, thin lead gen pages, confusing ownership, aggressive outbound pushes, and inflated claims can all resemble common scam patterns when they show up at scale.

If you operate in affiliate, lead gen, local services, finance, health, or any category where scams are common, you should assume the safety gate is strict. Then build your site so it reads as legitimate without effort.

That comes down to basic trust hygiene.

Make ownership obvious. Use consistent brand identifiers across the site. Provide clear contact paths. Be transparent about monetization. Avoid claims that cannot be defended. Include constraints and caveats in the content itself, not hidden in a footer.

If your site has ever been compromised, or if you operate in a neighborhood of risky outbound links, you also inherit risk. Safety classifiers treat proximity as a signal because threat actors cluster. Cleaning up your link ecosystem and site security is no longer only a technical responsibility; it’s visibility defense.

Intent: The Routing Gate

Intent classification determines what the system believes the user is trying to accomplish. That decision shapes the retrieval path, the ranking behavior, the format of the answer, and which sources get pulled into the response.

This matters more as search shifts from browsing sessions to decision sessions. In a list-based system, the user can correct the system by clicking a different result. In an answer system, the system makes more choices on the user’s behalf.

Intent classification is also broader than the old SEO debates about informational versus transactional. Modern systems try to identify local intent, freshness intent, comparative intent, procedural intent, and high-stakes intent. These intent classes change what the system considers helpful and safe. In fact, if you deep-dive into “intents,” you’ll find that so many more don’t even fit into our crisply defined, marketing-designed boxes. Most marketers build for maybe three to four intents. The systems you’re trying to win in often operate with more, and research taxonomies already show how intent explodes into dozens of types when you measure real tasks instead of neat categories.

If you want consistent visibility, make intent alignment obvious and commit each page to a primary task.

  • If a page is a “how to,” make it procedural. Lead with the outcome. Present steps. Include requirements and failure modes early.
  • If a page is a “best options” piece, make it comparative. Define your criteria. Explain who each option fits and who it does not.
  • If a page is local, behave like a local result. Include real local proof and service boundaries. Remove generic filler that makes the page look like a template.
  • If a page is high-stakes, be disciplined. Avoid sweeping guarantees. Include evidence trails. Use precise language. Make boundaries explicit.

Intent clarity also helps across classic ranking systems, and it can help reduce pogo behavior and improve satisfaction signals. More importantly for the answer layer, it gives the system clean blocks to retrieve and use.

Trust: The “Should We Use This” Gate

Trust is the gate that decides whether content is used, how much it is used, and whether it is cited. You can be retrieved and still not make the cut. You can be used and still not be cited. You can show up one day and disappear the next because the system saw slightly different context and made different selections.

Trust sits at the intersection of source reputation, content quality, and risk.

At the source level, trust is shaped by history. Domain behavior over time, link graph context, brand footprint, author identity, consistency, and how often the site is associated with reliable information.

At the content level, trust is shaped by how safe it is to quote. Specificity matters. Internal consistency matters. Clear definitions matter. Evidence trails matter. So does writing that makes it hard to misinterpret.

LLM products also make classification gates explicit in their developer tooling. OpenAI’s moderation guide documents classification of text and images for safety purposes, so developers can filter or intervene.
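To show what such a gate looks like from the developer’s side, here is a minimal TypeScript sketch that calls OpenAI’s moderation endpoint over plain HTTP. It assumes Node 18+ and an OPENAI_API_KEY environment variable; the model name and response fields follow OpenAI’s public documentation as I understand it, so verify the details against the current docs before relying on them.

// Hypothetical example: ask the moderation endpoint whether a piece of text
// would be flagged, and which safety categories it trips.
async function moderate(text: string): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/moderations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "omni-moderation-latest", // assumed current model name
      input: text,
    }),
  });

  const data = await res.json();
  const result = data.results?.[0];
  console.log(`Flagged: ${result?.flagged}`);
  console.log("Categories:", result?.categories);
}

moderate("Example sentence to classify.").catch(console.error);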

Even if you are not building with APIs, the existence of this tooling reflects the reality of modern systems. Classification happens before output, and policy compliance influences what can be surfaced. For SEOs, the trust gate is where most AI optimization advice gets exposed. Sounding authoritative is easy, but being safe to use takes precision, boundaries, evidence, and plain language.

Content that is safe to use also comes in blocks that can stand alone.

Answer engines extract. They reassemble, and they summarize. That means your best asset is a self-contained unit that still makes sense when it is pulled out of the page and placed into a response.

A good self-contained block typically includes a clear statement, a short explanation, a boundary condition, and either an example or a source reference. When your content has those blocks, it becomes easier for the system to use it without introducing risk.

How SSIT Flows Together In The Real World

In practice, the gates stack.

First, the system evaluates whether a site and its pages look spammy or manipulative. This can affect crawl frequency, indexing behavior, and ranking potential. Next, it evaluates whether the content or experience looks risky. In some categories, safety checks can suppress visibility even when relevance is high. Then it evaluates intent. It decides what the user wants and routes retrieval accordingly. If your page does not match the intent class cleanly, it becomes less likely to be selected.

Finally, it evaluates trust for usage. That is where decisions get made about quoting, citing, summarizing, or ignoring. The key point for AI optimization is not that you should try to game these gates. The point is that you should avoid failing them.

Most brands lose visibility in the answer layer for boring reasons. They look like scaled templates. They hide important legitimacy signals. They publish vague content that is hard to quote safely. They try to cover five intents in one page and satisfy none of them fully.

If you address those issues, you are doing better “AI optimization” than most teams chasing prompt hacks.

Where “Classifiers Inside The Model” Fit, Without Turning This Into A Computer Science Lecture

Some classification happens inside model architectures as routing decisions. Mixture of Experts approaches are a common example, where a routing mechanism selects which experts process a given input to improve efficiency and capability. NVIDIA also provides a plain-language overview of Mixture of Experts as a concept.

This matters because it reinforces the broader mental model. Modern AI systems rely on routing and gating at multiple layers. Not every gate is directly actionable for SEO, but the presence of gates is the point. If you want predictable visibility, you build for the gates you can influence.

What To Do With This: Practical Moves For SEOs

Start by treating SSIT like a diagnostic framework. When visibility drops in an answer engine, do not jump straight to “ranking.” Ask where you might have failed in the chain.

Spam Hygiene Improvements

Audit at the template level. Look for scaled patterns that resemble manipulation when aggregated. Remove doorway clusters and near-duplicate pages that do not add unique value. Reduce internal link patterns that exist only to sculpt anchors. Identify pages that exist only to rank and cannot defend their existence as a user outcome.

Use Google’s spam policy categories as the baseline for this audit, because they map to common classifier objectives.

Safety Hygiene Improvements

Assume conservative filtering in categories where scams are common. Strengthen legitimacy signals on every page that asks for money, personal data, a phone call, or a lead. Make ownership and contact information easy to find. Use transparent disclosures. Avoid inflated claims. Include constraints inside the content.

If you publish in YMYL-adjacent categories, tighten your editorial standards. Add sourcing. Track updates. Remove stale advice. Safety gates punish stale content because it can become harmful.

Intent Hygiene Improvements

Choose the primary job of the page and make it obvious in the first screen. Align the structure to the task. A procedural page should read like a procedure. A comparison page should read like a comparison. A local page should prove locality.

Do not rely on headers and keywords to communicate this. Make it obvious in sentences that a system can extract.

Trust Hygiene Improvements

Build citeable blocks that stand on their own. Use tight definitions. Provide evidence trails. Include boundaries and constraints. Avoid vague, sweeping statements that cannot be defended. If your content is opinion-led, label it as such and support it with rationale. If your content is claim-led, cite sources or provide measurable examples.

This is also where authorship and brand footprint matter. Trust is not only on-page. It is the broader set of signals that tell systems you exist in the world as a real entity.

SSIT As A Measurement Mindset

If you are building or buying “AI visibility” reporting, SSIT changes how you interpret what you see.

  • A drop in presence can mean a spam classifier dampened you.
  • A drop in citations can mean a trust classifier avoided quoting you.
  • A mismatch between retrieval and usage can mean intent misalignment.
  • A category-level invisibility can mean safety gating.

That diagnostic framing matters because it leads to fixes you can execute. It also stops teams from thrashing, rewriting everything, and hoping the next version sticks.

SSIT also keeps you grounded. It is tempting to treat AI optimization as a new discipline with new hacks. Most of it is not hacks. It is hygiene, clarity, and trust-building, applied to systems that filter harder than the old web did. That’s the real shift.

The answer layer is not only ranking content; it’s selecting content. That selection happens through classifiers that are trained to reduce risk and improve usefulness. When you plan for Spam, Safety, Intent, and Trust, you stop guessing. You start designing content and experiences that survive the gates.

That is how you earn a place in the answer layer, and keep it.

This post was originally published on Duane Forrester Decodes.


Featured Image: Olga_TG/Shutterstock

From Performance SEO To Demand SEO

AI is fundamentally changing what doing SEO means. Not just in how results are presented, but in how brands are discovered, understood, and trusted inside the very systems people now rely on to learn, evaluate, and make decisions. This forces a reassessment of our role as SEOs, the tools and frameworks we use, and the way success is measured beyond legacy reporting models that were built for a very different search environment.

Continuing to rely on vanity metrics rooted in clicks and rankings no longer reflects reality, particularly as people increasingly encounter and learn about brands without ever visiting a website.

For most of its history, SEO focused on helping people find you within a static list of results. Keywords, content, and links existed primarily to earn a click from someone who already recognized a need and was actively searching for a solution.

AI disrupts that model by moving discovery into the answer itself. A single synthesized response references only a small number of brands, which naturally reduces overall clicks while increasing the number of brand touchpoints and moments of exposure that shape perception and preference. This is not a traffic loss problem, but a demand creation opportunity. Every time a brand appears inside an AI-generated answer, it is placed directly into the buyer’s mental shortlist, building mental availability even when the user has never encountered the brand before.

Why AI Visibility Creates Demand, Not Just Traffic

Traditional SEO excelled at capturing existing demand by supporting users as they moved through a sequence of searches that refined and clarified a problem before leading them towards a solution.

AI now operates much earlier in that journey, shaping how people understand categories, options, and tradeoffs before they ever begin comparing vendors, effectively pulling what we used to think of as middle and bottom-of-funnel activity further upstream. People increasingly use AI to explore unfamiliar spaces, weigh alternatives, and design solutions that fit their specific context, which means that when a brand is repeatedly named, explained, or referenced, it begins to influence how the market defines what good looks like.

This repeated exposure builds familiarity over time, so that when a decision moment eventually arrives, the brand feels known and credible rather than new and untested, which is demand generation playing out inside the systems people already trust and use daily.

Unlike above-the-line advertising, this familiarity is built natively within tools that have become deeply embedded in everyday life through smartphones, assistants, and other connected devices, making this shift not only technical but behavioral, rooted in how people now access and process information.

How This Changes The Role Of SEO

As AI systems increasingly summarize, filter, and recommend on behalf of users, SEO has to move beyond optimizing individual pages and instead focus on making a brand easy for machines to understand, trust, and reuse across different contexts and queries.

This shift is most clearly reflected in the long-running move from keywords to entities, where keywords still matter but are no longer the primary organizing principle, because AI systems care more about who a brand is, what it does, where it operates, and which problems it solves.

That pushes modern SEO towards clearly defined and consistently expressed brand boundaries, where category, use cases, and differentiation are explicit across the web, even when that creates tension with highly optimized commercial landing pages.

AI systems rely heavily on trust signals such as citations, consensus, reviews, and verifiable facts, which means traditional ranking factors still play a role, but increasingly as proof points that an AI system can safely rely on when constructing answers. When an AI cannot confidently answer basic questions about a brand, it hesitates to recommend it, whereas when it can, that brand becomes a dependable component it can repeatedly draw upon.

This changes the questions SEO teams need to ask, shifting focus away from rankings alone and toward whether content genuinely shapes category understanding, whether trusted publishers reference the brand, and whether information about the brand remains consistent wherever it appears.

Narrative control also changes, because where brands once shaped their story through pages in a list of results, AI now tells the story itself, requiring SEOs to work far more closely with brand and communication teams to reinforce simple, consistent language and a small number of clear value propositions that AI systems can easily compress into accurate summaries.

What Brands Need To Do Differently

Brands need to stop building their strategies keyword-first and instead begin by assessing their strength and clarity as an entity, looking at what search engines and other systems already understand about them and how consistent that understanding really is.
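One practical starting point for that assessment is checking what Google’s Knowledge Graph already returns for the brand. The sketch below queries the Knowledge Graph Search API; the brand string and API key are placeholders, and a thin or generic result suggests the entity-clarity work described above still needs doing.

```python
# A minimal entity clarity check against Google's Knowledge Graph Search API.
# The brand name and API key are placeholders; the API returns whatever
# entity description Google currently associates with the query.

import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; create a key in Google Cloud Console
ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"


def entity_snapshot(brand: str, limit: int = 3) -> list[dict]:
    """Fetch what the Knowledge Graph currently says a brand is."""
    params = urllib.parse.urlencode({"query": brand, "key": API_KEY, "limit": limit})
    with urllib.request.urlopen(f"{ENDPOINT}?{params}") as resp:
        data = json.load(resp)
    results = []
    for item in data.get("itemListElement", []):
        entity = item.get("result", {})
        results.append({
            "name": entity.get("name"),
            "types": entity.get("@type", []),
            "description": entity.get("description"),
            "score": item.get("resultScore"),
        })
    return results


if __name__ == "__main__":
    for entity in entity_snapshot("Example Brand"):
        print(entity)
```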

The most valuable AI moments occur long before a buyer is ready to compare vendors, at the point where they are still forming opinions about the problem space, which means appearing by name in those early exploratory questions allows a brand to influence how the problem itself is framed and to build mental availability before any shortlist exists.

Achieving that requires focus rather than breadth, because trying to appear in every possible conversation dilutes clarity, whereas deliberately choosing which problems and perspectives to own creates stronger and more coherent signals for AI systems to work with.

This represents a move away from chasing as many keywords as possible in favor of standardizing a simple brand story that uses clear language everywhere, so that what you do, who it is for, and why it matters can be expressed in one clean, repeatable sentence.

This shift also demands a fundamental change in how SEO success is measured and reported, because if performance continues to be judged primarily through rankings and clicks, AI visibility will always look underwhelming, even though its real impact happens upstream by shaping preference and intent over time.

Instead, teams need to look at patterns across branded search growth, direct traffic, lead quality, and customer outcomes. When reporting reflects that broader reality, it becomes clear that as AI visibility grows, demand follows, repositioning SEO from a purely tactical channel into a strategic lever for long-term growth.

Featured Image: Roman Samborskyi/Shutterstock

What The Data Shows About Local Rankings In 2026 [Webinar] via @sejournal, @hethr_campbell

Reputation Signals Now Matter More Than Reviews Alone

Positive reviews are no longer the primary fast path to the top of local search results. 

As Google Local Pack and Maps continue to evolve, reputation signals are playing a much larger role in how businesses earn visibility. At the same time, AI tools are emerging as a new entry point for local discovery, changing how brands are cited, mentioned, and recommended.

Join Alexia Platenburg, Senior Product Marketing Manager at GatherUp, for a data-driven look at the local SEO signals shaping visibility today. In this session, she will break down how modern reputation signals influence rankings and what scalable, defensible reputation programs look like for local SEO agencies and multi-location brands.

You will walk away with a clear framework for using reputation as a true visibility and ranking lever, not just a step toward conversion. The session connects reviews, owner responses, and broader reputation signals to measurable outcomes across Google Local Pack, Maps, and AI-powered discovery.

What You’ll Learn

  • How review volume, velocity, ratings, and owner responses influence Local Pack and Maps rankings
  • The reputation signals AI tools use to cite or mention local businesses
  • How to protect your brand from fake reviews before they impact trust at scale

Why Attend?

This webinar offers a practical, evidence-based view of how reputation management is shaping local visibility in 2026. You will gain clear guidance on what matters now, what to prioritize, and how to build trust signals that support long-term local growth.

Register now to learn how reputation is driving local visibility, trust, and growth in 2026.

🛑 Can’t attend live? Register anyway, and we’ll send you the on-demand recording after the webinar.

Ask an Expert: Should Merchants Block AI Bots?

“Ask an Expert” is an occasional series where we pose questions to seasoned ecommerce pros. For this installment, we’ve turned to Scot Wingo, a serial ecommerce entrepreneur, most recently of ReFiBuy, a generative engine optimization platform, and before that, ChannelAdvisor, the marketplace management firm.

He addresses tactics for managing genAI bots.

Practical Ecommerce: Should ecommerce merchants monitor and even block AI agents that crawl their sites?

Scot Wingo: It’s a nuanced and strategic decision that every merchant needs to make.

The four agentic commerce experiences — ChatGPT (Instant Checkout, Agentic Commerce Protocol), Google Gemini (Universal Commerce Protocol), Microsoft Copilot (Copilot Checkout, ACP), and Perplexity (PayPal, Instant Buy) — have nearly 1 billion combined monthly active users. With Google transitioning from traditional search to AI Mode, that number will dramatically increase.

For merchants, the opportunity is as big as or bigger than Amazon or any other marketplace.

Merchants should embrace AI agents and ensure access to the entire product catalog.
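A quick way to verify that access is to test your robots.txt against the user-agent tokens the major genAI providers document. The sketch below uses Python’s standard robots.txt parser; the domain and product path are placeholders, and the crawler list covers only a few commonly documented tokens.

```python
# A minimal sketch that checks whether common genAI crawlers can reach your
# catalog. The sample domain and product path are placeholders; swap in your
# own site and a representative product page.

import urllib.robotparser

AI_CRAWLERS = [
    "GPTBot",           # OpenAI's crawler for model training
    "OAI-SearchBot",    # OpenAI's crawler for ChatGPT search surfaces
    "Google-Extended",  # Google's token controlling Gemini/AI use of content
    "PerplexityBot",    # Perplexity's web crawler
    "ClaudeBot",        # Anthropic's crawler
]


def check_ai_access(site: str, sample_path: str = "/products/example-sku") -> dict[str, bool]:
    """Report which AI user agents robots.txt currently allows to fetch a page."""
    parser = urllib.robotparser.RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()
    url = f"{site.rstrip('/')}{sample_path}"
    return {agent: parser.can_fetch(agent, url) for agent in AI_CRAWLERS}


if __name__ == "__main__":
    for agent, allowed in check_ai_access("https://www.example.com").items():
        print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```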

But genAI models need more than access. Agentic commerce thrives not just on extensive attributes but also on the products’ applications and use cases. Merchants should expand attributes beyond what’s shown on product detail pages and provide essential context via a deep and wide question-and-answer section that includes common shopper queries. That context enables the models to match consumer prompts with relevant recommendations, driving sales to those merchants.

The time for action is now. Gemini’s shift to AI Mode means zero-click searches will increase, likely producing 20-30% fewer clicks (and revenue) in 2026.