Enterprise SEO Operating Models That Scale In 2026 And Beyond via @sejournal, @billhunt

Most enterprises are still treating SEO as a marketing activity. That decision, whether intentional or accidental, is now a material business risk.

In the years ahead, SEO performance will not be determined by better tactics, better tools, or even better talent. It will be determined by whether leadership understands what SEO has become and restructures the organization accordingly. SEO is no longer simply a channel but an infrastructure, and infrastructure decisions are leadership decisions.

The Old SEO Question Is No Longer Relevant

For years, executives asked a familiar question: Are we doing SEO well? Or even more simply, are we ranking well in Google? 

That question assumed SEO was something you did, summed up as a collection of optimizations, audits, and campaigns applied after the fact. It made sense when search primarily ranked pages and rewarded incremental improvements. The more relevant question today is different: Is our organization structurally capable of being discovered, understood, and selected by modern search systems?

That is no longer a marketing question. It is an operating model question because AI optimization must become a team sport.

Search engines, and increasingly AI-driven systems, do not reward isolated optimizations. They reward coherence, structure, intent alignment, and machine-readable clarity across an entire digital ecosystem. Those outcomes are not created downstream. They are created by how an organization builds, governs, and scales its digital assets.

What Has Fundamentally Changed

To understand why enterprise SEO operating models must evolve, leadership first needs to understand what actually changed in search.

1. Search Systems Now Interpret Intent Before Retrieval

Modern search systems no longer treat queries as literal requests. They reinterpret ambiguous intent, expand queries through fan-out, explore multiple intent paths simultaneously, and retrieve information across formats and sources. Content no longer competes page-to-page. It competes concept-to-concept.

If an organization lacks clear intent modeling, structured topical coverage, and consistent entity representation, its content may never enter the retrieval set at all, regardless of how optimized individual pages appear.

2. Eligibility Now Precedes Ranking

This shift also changed the sequence of how visibility is earned. Ranking still matters, particularly for enterprises where much of the traffic still flows through traditional results. But ranking now occurs only after eligibility is established. As search experiences move toward synthesized answers and AI-driven surfaces, eligibility has become the prerequisite rather than the reward.

That eligibility is determined upstream by templates, data models, taxonomy, entity consistency, governance, and workflow design. These are not marketing decisions. They are organizational ones.

3. Enterprise SEO Has Crossed An Infrastructure Threshold

Enterprise SEO has always depended on infrastructure. What has changed is that modern search systems no longer compensate for structural shortcuts. In the past, rankings recovered, signals recalibrated, and messiness was often forgiven.

Today, AI-driven systems amplify inconsistency. Retrieval becomes selective, narratives persist, and structural debt compounds. Delivering results aligned to real searcher intent has shifted from a forgiving environment to a selective one, where visibility depends on how well the underlying system is designed. Taken together, these conditions define what a scalable enterprise SEO operating model actually looks like, not as a team or function, but as an organizational capability.

The Leadership Declaration: What Must Be True In 2026

Organizations that scale organic visibility in the coming years will share a small set of non-negotiable characteristics. These are not best practices. They are operating requirements.

Declaration #1: SEO Must Be Treated As Infrastructure

SEO must be treated as infrastructure. That means it moves from a downstream marketing function to a foundational digital capability. SEO requirements are embedded in platforms, standards are enforced through templates, and eligibility is designed before content is commissioned. When failures occur, they are treated like performance or security issues, not optional enhancements. If SEO depends on post-launch fixes, the operating model is already broken.

Declaration #2: SEO Must Live Upstream In Decision-Making

SEO must live upstream in decision-making. Search performance is created when decisions are made about site structure, content scope, taxonomy, product naming, localization strategy, data modeling, and internal linking frameworks. SEO cannot succeed if it only reviews outcomes; it must help shape inputs. This does not mean SEO dictates solutions. It means SEO defines non-negotiable discovery constraints, just as accessibility, performance, and security already do.

Declaration #3: SEO Requires Cross-Functional Accountability

SEO requires cross-functional accountability. Visibility depends on development, content, product, UX, legal, and localization teams working in concert, similar to a professional sports team. In most enterprises, SEO is measured on outcomes while other teams control the systems that produce them. That accountability gap must close. High-performing organizations define shared ownership of visibility, clear escalation paths, mandatory compliance standards, and executive sponsorship for search performance. Without this, SEO remains a negotiation rather than a capability.

Declaration #4: Governance Must Replace Guidelines

Governance must replace guidelines. Guidelines are optional; governance is enforceable. Scalable SEO requires mandatory standards, controlled templates, centralized entity definitions, enforced structured data policies, approved market deviations, and continuous compliance monitoring. This demands a Center of Excellence with authority, not just expertise. SEO cannot scale on influence alone.

Declaration #5: SEO Must Be Measured As A System

Finally, SEO must be measured as a system. Executives need to move beyond quarterly performance questions and instead assess structural eligibility across markets, intent coverage, entity coherence, template enforcement, and where visibility leaks and why. System-level measurement replaces page-level obsession.

This shift mirrors a broader issue I explored in a previous Search Engine Journal article on the questions CEOs should be asking about their websites, but rarely do. The core insight was that executive oversight often focuses on surface-level outcomes while missing systemic sources of risk, inefficiency, and value leakage.

SEO measurement suffers from the same blind spot. Asking how SEO “performed” this quarter obscures whether the organization is structurally capable of being discovered and represented accurately across modern search and AI-driven environments. The more meaningful questions are systemic: where visibility leaks, which teams own those failure points, and whether the underlying architecture enforces consistency at scale.

Measured this way, SEO stops being a reporting function and becomes an early warning system for digital effectiveness.

The Operating Model Divide

Enterprises will fall into two groups.

Some will remain tactical optimizers, where SEO lives in marketing, fixes happen after launch, paid media masks organic gaps, and AI visibility remains inconsistent. Others will become structural builders, embedding SEO into systems, defining requirements before creation, enforcing governance, and earning consistent retrieval and trust from AI-driven platforms.

The difference will not be effort. It will be organizational design.

The Clarifying Reality

Ranking still matters, particularly for enterprises where a significant share of traffic continues to flow through traditional results. What has changed is not its importance, but its position in the visibility chain. Before anything can rank, it must first be retrieved. Before it can be retrieved, it must be eligible. And eligibility is no longer determined by isolated optimizations, but by infrastructure – how content is structured, how entities are defined, and how consistently signals are enforced across systems.

Every enterprise already has an SEO operating model, whether it was designed intentionally or emerged by default. In the years ahead, that distinction will matter far more than most organizations expect.

SEO has become infrastructure. Infrastructure requires leadership because it shapes what the organization can reliably produce and how it is perceived at scale. The companies that win will not be the ones that optimize harder, but the ones that operate differently, by designing systems that search engines and AI-driven platforms can consistently discover, understand, and trust.

Featured Image: Anton Vierietin/Shutterstock

How To Set Up AI Prompt Tracking You Can Trust [Webinar] via @sejournal, @lorenbaker

Getting Real About AI Visibility Tracking

If you’re on the search or marketing team right now, you’ve probably been asked some version of: “Are we showing up in ChatGPT?” or “What’s our visibility in AI Overviews?”

And honestly? Most of us are still figuring that out.

Answer engines like ChatGPT, Perplexity, and Google AI Overviews have changed how people discover and evaluate solutions. Yet, we still see a lot of teams approaching AI visibility tracking the same way they’ve approached keyword tracking, and they’re just not the same.

Improper tracking leads to bad data that’s being used to make decisions. And bad decisions can be expensive.

That’s why we’re bringing in Nick Gallagher, Sr. SEO Strategy Director at Conductor, to walk through how to set up AI prompt tracking the right way. The goal is to walk away with a tracking framework you can actually trust.

What You’ll Learn

  • How AI prompt tracking works, and why the setup matters more than the volume of prompts you’re monitoring.
  • Best practices for choosing the right topics, prompts, and answer engines to track.
  • How to avoid common mistakes that lead to inaccurate or misleading AI visibility data.

Why This Matters Right Now

A lot of the conversations I’ve been having with SEOs and in-house marketers lately come back to the same thing: they know AI search is important, but they don’t trust the data they’re getting. Nick is going to break down why that’s happening and give you a clear framework to fix it for smarter decision-making. 

If you’re trying to measure AI visibility and want to make sure you’re not building strategy on bad data, please join us.

Can’t make it live? Register anyway, and we’ll send you the on-demand recording.

4 Pillars To Turn Your “Sticky-Taped” Tech Stack Into a Modern Publishing Engine

This post was sponsored by WP Engine. The opinions expressed in this article are the sponsor’s own.

In the race for audience attention, digital marketers at media companies often have one hand tied behind their backs. The mission is clear: drive sustainable revenue, increase engagement, and stay ahead of technological disruptions such as LLMs and AI agents.

Yet, for many media organizations, execution is throttled by a “Sticky-taped stack”: a fragile patchwork of legacy CMS components and ad-hoc plugins. For a digital marketing leader, this isn’t just a technical headache; it’s a direct hit to the bottom line.

It’s time to examine the Fragmentation Tax, and why a new publishing standard is required to reclaim growth.

Fragmentation Tax: How A Siloed CMS, Disconnected Data & Tech Debt Are Costing You Growth

The Fragmentation Tax is the hidden cost of operational inefficiency. It drains budgets, burns out teams, and stunts the ability to scale. For digital marketing and growth leads, this tax is paid in three distinct “currencies”:

1. Siloed Data & Strategic Blindness.

When your ad server, subscriber database, and content tools exist as siloed work streams, you lose the ability to see the full picture of the reader’s journey.

Without integrated attribution, marketers are forced to make strategic pivots based on vanity metrics like generic pageviews rather than true business intelligence, such as conversion funnels or long-term reader retention.

2. The Editorial Velocity Gap.

In the era of breaking news, being second is often the same as being last. If an editorial team is forced into complex, manual workflows because of a fragmented tech stack, content reaches the market too late to capture peak search volume or social trends. This friction creates a culture of caution precisely when marketing needs a culture of velocity to capture organic traffic.

3. Tech Debt vs. Innovation.

Tech debt is the future cost of rework created by choosing “quick-and-dirty” solutions. This is a silent killer of marketing budgets. Every hour an engineering team spends fixing plugin conflicts or managing security fires caused by a cobbled-together infrastructure is an hour stolen from innovation.

The 4 Publishing Pillars That Improve SEO & Monetization

To stop paying this tax, media organizations are moving away from treating their workflows as a collection of disparate parts. Instead, they are adopting a unified system that eliminates the friction between engineering, editorial, and growth.

A modern publishing standard addresses these marketing hurdles through four key operational pillars:

Pillar 1: Automated Governance (Built-In SEO & Tracking Integrity)

Marketing integrity relies on consistency.

In a fragmented system, SEO metadata, tracking pixels, and brand standards are often managed manually, leading to human error.

A unified approach embeds governance directly into the workflow.

By using automated checklists, organizations ensure that no article goes live until it meets defined standards, protecting the brand and ensuring every piece of content is optimized for discovery from the moment of publication.

Pillar 2: Fearless Iteration (Continuous SEO & CRO Optimization Without Risk)

High-traffic articles are a marketer’s most valuable asset. However, in a legacy stack, updating a live story to include, for instance, a Call-to-Action (CTA) is often a high-risk maneuver that could break site layouts.

A modern unified approach allows for “staged” edits, enabling teams to draft and review iterations on live content without forcing those changes live immediately. This allows for a continuous improvement cycle that protects the user experience and site uptime.

Pillar 3: Cross-Functional Collaboration (Reducing Workflow Bottlenecks Between Editorial, SEO & Engineering)

Any type of technology disruption requires a team to collaborate in real time. The “Sticky-taped” approach often forces teams to work in separate tools, creating bottlenecks.

A modern unified standard utilizes collaborative editing, separating editorial functions into distinct areas for text, media, and metadata. This allows an SEO specialist or a growth marketer to optimize a story simultaneously with the journalist, ensuring the content is “market-ready” the instant it’s finished.

Pillar 4: Native Breaking News Capabilities (Capturing Real-Time Search Demand)

Late-breaking or real-time events, such as global geopolitical shifts or live sports, require in-the-moment storytelling to keep audiences informed, engaged, and on-site. Traditionally, “Live Blogs” relied on clunky third-party embeds that fragmented user data and slowed page loads.

A unified standard treats breaking news as a native capability, enabling rapid-fire updates that keep the audience glued to the brand’s own domain, maximizing ad impressions and subscription opportunities.

Conclusion: Trading Toil for Agility

Ultimately, shifting to a unified standard is about reducing inefficiencies caused by “fighting the tools.” By removing the technical toil that typically hides insights in siloed tools, media organizations can finally trade operational friction for strategic agility.

When your site’s foundation is solid and fast, editors can hit “publish” without worrying about things breaking. At the same time, marketers can test new ways to grow the audience without waiting weeks for developers to update code. This setup clears the way for everyone to move faster and focus on what actually matters: telling great stories and connecting with readers.

The era of stitching software together with “sticky tape” is over. For modern media companies to thrive amid constant digital disruption, infrastructure must be a launchpad, not a hindrance. By eliminating the Fragmentation Tax, marketing leaders can finally stop surviving and start growing.

Jason Konen is director of product management at WP Engine, a global web enablement company that empowers companies and agencies of all sizes to build, power, manage, and optimize their WordPress® websites and applications with confidence.

Image Credits

Featured Image: Image by WP Engine. Used with permission.

In-Post Images: Image by WP Engine. Used with permission.

Google Text Ad Click Share Rises Sharply In Some Verticals via @sejournal, @MattGSouthern

An analysis of 16,000 U.S. search queries found that text ads gained 7 to 13 percentage points of click share between January 2025 and January 2026.

SEO consultant Aleyda Solis used Similarweb clickstream estimates to measure click share across classic organic results, SERP features, text ads, PLAs, and zero-click behavior.

She also tracked how often AI Overviews appeared on the page, but the dataset doesn’t attribute clicks to AI Overviews directly.

What The Data Shows

Text ads gained between 7 and 13 percentage points of click share across every vertical Solis analyzed.

In the headphones vertical (top 5,000 US queries), classic organic click share fell from 73% to 50%. Text ads grew from 3% to 16%, and PLAs grew from 13% to 20%. Combined paid results now capture 36% of clicks in that category, up from 16% a year earlier.

Jeans followed a similar pattern. Classic organic dropped from 73% to 56%, while combined paid results rose from 18% to 34%.

The online games vertical saw text ads quadruple, from 3% to 13%, even though the category had historically had almost no ad presence.

In greeting cards, the only vertical where total clicks actually grew year over year, organic click share still fell from 88% to 75% as text ads nearly doubled.

The AI Overview presence on SERPs grew across all four verticals. Headphones saw AIO presence jump from 2.28% to 32.76%, and online games went from 0.38% to 29.80%. But the analysis measured how often AIOs appeared on the page, not how many clicks they captured or prevented.

Solis wrote:

“When I started this research, my hypothesis was that text ads and organic SERP features -not just AI Overviews- could be significant culprits behind declining organic clicks. The data confirmed this across all four verticals, and the scale of the text ad impact surprised me: they gained between +7 and +13 percentage points of click share in every vertical, making them the single biggest measurable driver of the organic decline.”

Independent Data Points To The Same Pattern

The SERP-level click data lines up with what advertisers are seeing from the other side.

Tinuiti’s Q4 2025 benchmark report found that Google text ad clicks hit a 19-quarter high in its dataset, growing 9% year over year. Overall Google search ad spend rose 13% in the quarter, up from 10% in Q3.

Google’s earnings tell a similar story. In its Q3 2025 report, Alphabet posted $102.3 billion in revenue, its first $100 billion quarter, with search ad revenue reaching $56.6 billion. CEO Sundar Pichai said AI features were expanding total query volume, including commercial queries.

More queries and more commercial intent create more ad inventory. The Similarweb data is consistent with more clicks shifting to paid placements in these verticals.

Why This Matters

The industry has spent much of the past year focused on AI Overviews as the explanation for declining organic clicks.

AIO presence is growing, and Google reported 1.5 billion monthly AIO users as of Q1 2025. This data indicates that text ads are an increasingly important factor to consider.

When diagnosing drops in organic traffic, it’s helpful to look at the SERP composition for your industry rather than assuming AI Overviews are the sole reason.

Looking Ahead

Data from different sources indicate that text ads are gaining click share.

Whether Google is actively expanding ad placements or advertisers are bidding more aggressively on existing inventory is unknown.

What you may consider doing now is tracking SERP composition changes in your own vertical using tools that measure click distribution rather than rankings alone.

Google Lost Two Antitrust Cases, But Stock Rose 65% – Here’s Why via @sejournal, @MattGSouthern

In January, Alphabet passed Apple in market capitalization to become the second most valuable company in the world. Alphabet was worth $3.885 trillion. Apple sat at $3.846 trillion. Only Nvidia, at $4.595 trillion, was ahead.

That alone would be news. But the context makes it something else entirely. Courts had found that Google violated antitrust law in both general search services and general search text advertising. The Department of Justice asked judges to break the company apart, sell off Chrome, divest the Android operating system, and force the sale of its ad exchange. In the search case, the court rejected those proposed divestitures. In the ad-tech case, the government is still asking the judge to order a sale of Google’s ad exchange, and remedies are pending.

In this article, I’ll walk through every active Google antitrust thread, what courts have ordered, what’s still pending, and what the timelines mean. The gap between Google’s legal exposure and its market performance tells a story that matters for everyone working in search.

How We Got Here

When the DOJ’s search monopoly trial opened in 2023, the government argued that Google spent billions on exclusive deals with Apple, Samsung, and browser makers to lock in its position as the default search engine. The case centered on whether those deals maintained a monopoly or reflected a better product.

In 2024, Judge Amit Mehta ruled that Google had maintained an illegal monopoly in general search services. It was the first time a federal court found a tech company had maintained an illegal monopoly since the Microsoft case in 2001.

Then came the remedies phase, where the real fight began. The DOJ wanted dramatic structural changes. Prosecutors laid out four options, including forcing Google to sell Chrome and potentially divesting Android. That was the peak fear moment for investors. It was also the point at which the case stopped being abstract legal theory and started having direct implications for how search distribution works.

What happened next surprised the industry.

The Search Case: Where It Stands

On Sept. 2, 2025, Judge Mehta issued his remedies opinion. He declined to order any divestitures. No Chrome sale. No Android breakup. No forced separation of search from the broader Alphabet structure.

His reasoning centered on AI. Mehta wrote that generative AI had changed the course of the case. He pointed to the competitive threat that AI chatbots posed to Google’s search business and concluded that the market was too dynamic for the kind of structural remedy the DOJ wanted.

Instead, Mehta ordered behavioral remedies. The final judgment, entered on Dec. 5, 2025, limits how Google can structure search distribution deals. Agreements are capped at one year and cannot be used to lock partners into defaults across multiple access points. The judgment includes provisions that require partners to have more flexibility to surface rival search options and, in some cases, third-party generative AI products.

The order also sets out data-licensing obligations for qualified rivals, including access to a portion of Google’s web index and certain user-side data. An oversight process monitors how the remedies are implemented and ensures compliance during the remedy period.

Google filed its Notice of Appeal on Jan. 16, 2026. The company is specifically challenging the data-sharing requirements and the technical committee oversight. The DOJ had until Feb. 3, 2026, to decide whether to file a cross-appeal seeking stronger remedies than what Mehta ordered.

The search case landed in a unique place. Google keeps Chrome and Android. The default search deals that delivered Google the majority of mobile search activity get restructured with shorter terms and fewer restrictions on partners.

Data-sharing could enable competitors to build better search products, but the timeline for that playing out is years, not months.

The Ad-Tech Case: What’s Coming

The second federal case against Google involves digital advertising technology. This one operates on a different track with a different judge and a different set of remedies at stake.

In April 2025, Judge Leonie Brinkema ruled that Google had willfully monopolized parts of the digital ad market. Where the search case focused on consumer-facing search defaults, this case targeted Google’s ad server, ad exchange (AdX), and the connections between them.

The DOJ’s post-trial brief requested the divestiture of Google’s Ad Manager suite, including the AdX exchange. That would mean separating the tool publishers use to sell ads from the marketplace where those ads get bought and sold.

During closing arguments in November, Brinkema expressed skepticism. She noted that a potential buyer for the ad exchange hadn’t been identified and called the divestiture proposal “fairly abstract.” The court, she said, needed to be “far more down to earth and concrete.”

Brinkema said she plans to issue a decision early in 2026. That ruling could arrive at any point in Q1.

The practical stakes here are different from the search case. The search remedies affect how people find Google. The ad-tech remedies affect how publishers make money through Google.

Any forced separation of AdX would directly change the monetization stack that millions of websites rely on. Even if Brinkema follows the same pattern as Mehta and declines structural remedies, the behavioral changes she orders could reshape how programmatic advertising flows through Google’s systems.

The Epic/Play Store Settlement Question

In late January 2026, Judge James Donato held a hearing in San Francisco on a proposed settlement between Google and Epic Games. The case, which centered on Google’s Play Store practices, appeared headed for resolution. But Donato threw the terms into question.

Donato described the settlement as overly favorable to the two companies and questioned whether it came at the expense of the broader class of developers affected by Google’s Play Store policies.

The settlement terms include Epic spending $800 million over six years on Google services, plus a marketing and exploratory partnership. Reports described the partnership as involving Epic’s technology, including Unreal Engine, alongside marketing and other commercial terms.

This case matters because it touches a different part of Google’s ecosystem. The search and ad-tech cases are about how Google dominates web search and digital advertising. The Play Store case is about how Google controls app distribution on Android. Together, these cases cover the three main ways Google generates revenue and the three main ways practitioners interact with Google’s platforms.

The EU Front

European regulators are pursuing their own path, and in some areas, they’re moving faster than U.S. courts.

In September 2025, the European Commission fined Google €2.95 billion for abusing its dominance in ad tech. Google said it would appeal the decision.

Reports from December indicate the EU is preparing a non-compliance fine against Google related to Play Store anti-steering rules. That fine is expected as early as Q1 2026, which would put it on roughly the same timeline as Brinkema’s ad-tech ruling in the U.S.

But the most consequential EU action may be the newest one. On January 26, the Commission opened specification proceedings under the Digital Markets Act focused on online search data sharing and interoperability for Android AI features. The process is framed around access for rivals, including AI developers and search competitors, and is expected to conclude within six months.

That goes beyond what the U.S. search case requires. Mehta’s order mandates data-sharing with search competitors. The EU proceedings ask whether Google must open access to a broader set of rivals, including those building AI-powered products that don’t fit neatly into the traditional search category.

For those watching how AI search develops, this EU proceeding could have bigger long-term implications than anything in the U.S. cases. The question of whether Google’s search index data feeds into competing AI products affects the entire ecosystem of AI-generated answers, citations, and traffic referrals.

Why The Stock Rose Anyway

Google’s stock rose 65% in 2025, CNBC reported, which made it the best performer among the big tech stocks. Apple, by comparison, rose 8.6%. The gap between Google’s legal losses and its market gains points to a pattern that has repeated at every stage of these cases.

When we covered the original verdict in October 2024 and looked at what it could mean for SEO, the range of possible outcomes was wide. Chrome divestiture, Android breakup, elimination of default deals, forced data sharing, and structural separation of search from advertising all sat on the table.

What investors watched play out was a narrowing of that range at every step. Google offered to loosen its search engine deals in December 2024, signaling that behavioral concessions were coming. The DOJ pushed for breakups. The court landed closer to Google’s position than the government’s.

A Financial Times analysis from January 2026 placed Google’s outcome in a broader context. Across multiple Big Tech antitrust cases, judges have shown reluctance to order structural remedies. Meta won outright in November when Judge James Boasberg ruled the company doesn’t hold an illegal monopoly. In the Google ad-tech case, Brinkema expressed discomfort with divestiture. Former DOJ antitrust chief Jonathan Kanter, who helped bring these cases, acknowledged to the FT that the rulings showed the U.S. was too slow to act.

The pattern across cases is consistent. Courts are willing to find that tech companies violated antitrust law. They’re reluctant to order the kind of structural changes that would break the companies apart. And they’re citing AI competition as a central reason for that restraint.

For Google specifically, the combination of light remedies, a strong AI narrative (signs that Google had caught up to OpenAI reinforced investor confidence, according to a Fortune report), and continued dominance in search revenue removed the threat that investors feared most. The breakup scenario didn’t happen, and the stock reflected that.

What This Means For Search Professionals

The antitrust cases resolved in a way that preserves Google’s structure while introducing new requirements around data access and distribution agreements. The impact will unfold over years, not weeks. Here’s what to track.

Search distribution could diversify gradually. The one-year cap on distribution agreements and the restrictions on tying defaults across access points give Apple and Samsung more room to offer users alternatives or to negotiate different terms. Whether they will is a separate question.

Apple’s search-default deal with Google has been widely reported to be worth tens of billions annually. Without that kind of long-term lock-in, Apple has financial incentive to build or license an alternative.

Data-sharing mandates could create new competitors. The judgment requires Google to license a portion of its web index and certain user-side data to qualified rivals, with an oversight process governing the details. The scope matters enormously. Providing limited index access is different from sharing the ranking signals and full index depth that would let a competitor build a viable alternative. Google is appealing this requirement, which tells you where the company sees the real threat.

The ad-tech ruling will directly affect publisher revenue. Brinkema’s decision, expected in early 2026, determines whether Google must separate the tools publishers use to sell ads from the exchange where those ads trade. Even if she orders behavioral remedies instead of a full divestiture, changes to how Google’s ad stack operates will ripple through programmatic advertising. Publishers using Google Ad Manager should pay close attention to the timeline.

The EU’s DMA proceedings open a different front. The January proceedings cover online search data sharing and Android AI interoperability, framed around access for rivals, including AI developers. The outcome would affect how AI search products source their information and, by extension, how content gets cited in AI-generated answers.

Looking Ahead

The next 12 months will determine whether the antitrust cases produce real changes to search markets or settle into a compliance exercise that preserves the status quo.

Key dates and events to watch include Brinkema’s ad-tech remedies ruling, expected in Q1 2026. The DOJ’s decision on whether to cross-appeal Mehta’s rejection of stronger search remedies was due by early February.

Google’s search case appeal will move through the D.C. Circuit, likely taking a year or more. The EU’s DMA specification proceedings on search data sharing and Android AI interoperability are expected to conclude within six months. And the Epic/Play Store settlement faces scrutiny after Judge Donato’s criticism.

Meanwhile, the Amazon and Apple antitrust cases are pending, with trials expected in 2027. Those cases will test whether courts continue the pattern of finding violations but declining breakups, or whether the legal environment changes.

In Summary

Google was found to have maintained illegal monopolies in two separate markets. It’s appealing one case and awaiting remedies in another. Regulators on two continents are pressing forward, and yet the company just became the second most valuable in the world.

Whether the courts ultimately deliver continuity or disruption will play out over the years ahead. Either way, what gets decided in these cases shapes the infrastructure that every search professional works within.

Featured Image: Collagery/Shutterstock

Bing Adds AI Visibility Reporting

Unlike traditional search engine optimization, AI search lacks native performance reporting to help businesses develop organic visibility strategies.

Google’s Search Console combines AI Overviews and organic listings in its “Performance” section, leaving optimizers to guess which channel drove visibility and traffic. ChatGPT shares metrics only with publishers that have licensed their content to OpenAI.

Bing is the first platform to offer some transparency. A few weeks after publishing its “guide to AEO and GEO,” Bing launched an “AI Performance Report” in Webmaster Tools.

AI Performance

The new report tracks citations in Microsoft Copilot, AI-generated summaries in Bing, and select AI partner integrations. But there’s no option to filter by a single surface, and no way to identify the integration partners or their purpose.

The report shows a site’s “Total Citations” for the chosen period and “Avg. Cited Pages.” It then lists:

  • “Grounding Queries,” which are “the key phrases the AI used when retrieving content that was cited in its answer.” In other words, the queries are the “fan-out” terms that Bing’s AI agents use to search for and find answers, though we don’t know which search engines or platforms they access.
  • “Pages,” the URLs mentioned in AI answers.
Screenshot: The new Webmaster Tools section lists citations by “Grounding Queries” and “Pages.”

Each tab includes additional visibility data:

  • For every grounding query, Webmaster Tools reports on the average number of unique pages cited per day in AI answers.
  • For each cited URL, the report includes its frequency — how often it appears in an answer — not its importance, ranking, or role within a response.

The report provides no traffic or click-through data and no clarity into which Grounding Queries triggered which citations.

Using the Data

The report is a good first step, but it offers little actionable data. Perhaps it will force other players to do more.

According to Bing, the new report:

… shows how your site’s content is used in AI‑generated answers across Microsoft Copilot and partner experiences by highlighting which pages are cited, how visibility trends change over time, and the grounding queries associated with your content.

I’m making the report more useful by:

  • Researching organic keywords on Bing and Google that drive traffic to the cited URLs,
  • Prompting ChatGPT or Gemini to turn the keywords into prompts,
  • Evaluating whether the cited pages address those prompts or need better structure or clarity.

Also, I identify common modifiers in the grounding queries (for example, “virus”) to understand how AI agents find the pages.

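If you export the grounding queries to a plain text file (a hypothetical queries.txt with one query per line; the actual export format from Webmaster Tools may differ), a quick word-frequency count will surface the recurring modifiers. A minimal command-line sketch, not a built-in Bing feature:

  # Lowercase everything, split into words, and list the 20 most frequent terms.
  tr '[:upper:]' '[:lower:]' < queries.txt | tr -s '[:space:]' '\n' | sort | uniq -c | sort -rn | head -20

The frequent non-brand words near the top of that list are usually the modifiers worth checking your cited pages against.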

Webmaster Tools

Setting up Bing Webmaster Tools takes only a couple of minutes if you already use Google Search Console.

Log in to Webmaster Tools with your Microsoft account, click “Add site,” and choose the “Import your sites from GSC” option. Allow roughly 24 hours for Bing to collect and report the data.

Antitrust Filing Says Google Cannibalizes Publisher Traffic via @sejournal, @martinibuster

Penske Media Corporation (PMC) filed a federal court memorandum opposing Google’s motion to dismiss its antitrust lawsuit. The company argues that Google has broken the longstanding premise of a web ecosystem in which publishers allowed their content to be crawled in exchange for receiving search traffic in return.

PMC is the publisher of twenty brands like Deadline, The Hollywood Reporter, and Rolling Stone.

Web Ecosystem

The PMC legal filing makes repeated references to the “fundamental fair exchange” in which Google sends traffic to publishers in exchange for being allowed to crawl and index their websites, explicitly quoting Google’s expressions of support for “the health of the web ecosystem.”

And yet there are some industry outsiders on social media who deny that any understanding between Google and web publishers exists, a concept that even Google does not publicly dispute.

This concept dates to pretty much the beginning of Google and is commonly understood by all web workers. It’s embedded in Google’s Philosophy, expressed at least as far back as 2004:

“Google may be the only company in the world whose stated goal is to have users leave its website as quickly as possible.”

In May 2025 Google published a blog post where they affirmed that sending users to websites remained their core goal:

“…our core goal remains the same: to help people find outstanding, original content that adds unique value.”

What’s relevant about that passage is that it’s framed within the context of encouraging publishers to create high quality content and in exchange they will be considered for referral traffic.

The concept of a web ecosystem where both sides benefit was discussed by Google CEO Sundar Pichai in a June 2025 podcast interview with Lex Fridman, where Pichai said that sending people to the human-created web in AI Mode was “going to be a core design principle for us.”

In response to a follow-up question referring to journalists who are nervous about web referrals, Sundar Pichai explicitly mentioned the ecosystem and Google’s commitment to it.

Pichai responded:

“I think news and journalism will play an important role, you know, in the future we’re pretty committed to it, right? And so I think making sure that ecosystem… In fact, I think we’ll be able to differentiate ourselves as a company over time because of our commitment there. So it’s something I think you know I definitely value a lot and as we are designing we’ll continue prioritizing approaches.”

This “fundamental fair exchange” serves as the baseline competitive condition for PMC’s claims of coercive reciprocal dealing and unlawful monopoly maintenance.

That baseline helps PMC argue:

  • That Google changed the understood terms of participation in search in a way publishers cannot refuse.
  • And that Google used its dominance in search to impose those new terms.

And despite the fact that Google’s own CEO has said sending people to websites is a core design principle, and that Google’s own documentation, past and present, repeatedly refers to this reciprocity between publishers and Google, Google’s legal response expressly denies that it exists.

The PMC document states:

“Google …argues that no reciprocity agreement exists because it has not “promised to deliver” any search referral traffic.”

Profound Consequences Of Google AI Search

PMC filed a federal court memorandum in February 2026 opposing Google’s motion to dismiss its antitrust complaint. The complaint details Google’s use of its search monopoly to “coerce” publishers into providing content for AI training and AI Overviews without compensation.

The suit argues that Google has pivoted from being a search engine (that sends traffic to websites) to an answer engine that removes the incentive for users to click to visit a website. The lawsuit claims that this change harms the economic viability of digital publishers.

The filing explains the consequences of this change:

“Google has shattered the longstanding bargain that allows the open internet to exist. The consequences for online publishers—to say nothing of the public at large—are profound.”

Google Is Using Their Market Power

The filing claims that the collapse of the traditional search ecosystem positions Google’s AI search system as coercive rather than innovative, arguing that publishers must either allow AI to reuse their content or risk losing search visibility.

The legal filing alleges that Google’s generative AI competes directly with online publishers for users’ attention, describing Google as cannibalizing publishers’ traffic and specifically alleging that Google is using its “market power” to maintain a situation in which publishers can’t block the AI without also negatively affecting what little search traffic is left.

The memorandum portrays a bleak choice offered by Google:

“Google’s search monopoly leaves publishers with no choice: acquiesce—even as Google cannibalizes the traffic publishers rely on—or perish.”

It also describes the role AI grounding plays in cannibalizing publisher traffic for Google’s sole benefit:

“Through RAG, or “grounding,” Google uses, repackages, and republishes publisher content for display on Google’s SERP, cannibalizing the traffic on which PMC depends.”

Expansion Of Zero-Click Search Results And Traffic Loss

The filing claims AI answers divert users away from publisher sites and diminish monetizable audience visits. Multiple parts of the filing directly confront Google with the reduced search traffic that results from the cannibalization of publisher content.

The filing alleges:

“Google reduces click‑throughs to publisher sites, increases zero‑click behavior, and diverts traffic that publishers need to support their advertising, affiliate, and subscription revenue.

…Google’s insinuation . . . that AI Overview is not getting in the way of the ten blue links and the traffic going back to creators and publishers is just 100% false . . . . [Users] are reading the overview and stopping there . . . . We see it.”

…The purpose is not to facilitate click-throughs but to have users consume PMC’s content, repackaged by Google, directly on the SERP.”

Zero-click searches are described as a component of a multi-part process in which publishers are injured by Google’s conduct. The filing accuses Google of using publisher content for training, grounding its AI answers in that content, and then republishing it within a zero-click AI search environment that either reduces or eliminates clicks back to PMC’s websites.

Should Google Send More Referral Traffic?

Everything that’s described in the PMC filing is the kind of thing that virtually all online businesses have been complaining about in terms of traffic losses as a result of Google’s AI search surfaces. It’s the reason why Lex Fridman specifically challenged Google’s CEO on the amount of traffic Google is sending to websites.

Google AI Shows A Site Is Offline Due To JS Content Delivery via @sejournal, @martinibuster

Google’s John Mueller offered a simple solution to a Redditor who blamed Google’s “AI” for a note in the SERPs saying that the website had been down since early 2026.

The Redditor didn’t write a post on Reddit; they just linked to their blog post blaming Google and AI. This enabled Mueller to go straight to the site, identify the cause as a JavaScript implementation issue, and then set them straight that it wasn’t Google’s fault.

Redditor Blames Google’s AI

The blog post by the Redditor blames Google, headlining the article with a computer science buzzword salad that over-complicates and (unknowingly) misstates the actual problem.

The article title is:

“Google Might Think Your Website Is Down
How Cross-page AI aggregation can introduce new liability vectors.”

That part about “cross-page AI aggregation” and “liability vectors” is eyebrow-raising because neither is an established term of art in computer science.

The “cross-page” thing is likely a reference to Google’s Query Fan-Out, where a question on Google’s AI Mode is turned into multiple queries that are then sent to Google’s Classic Search.

Regarding “liability vectors”: vectors are a real concept discussed in SEO and used in Natural Language Processing (NLP), but “liability vector” is not part of it.

The Redditor’s blog post admits that they don’t know if Google is able to detect if a site is down or not:

“I’m not aware of Google having any special capability to detect whether websites are up or down. And even if my internal service went down, Google wouldn’t be able to detect that since it’s behind a login wall.”

They also appear to be unaware of how RAG or Query Fan-Out works, or perhaps of how Google’s AI systems work in general. The author seems to regard it as a discovery that Google is referencing fresh information instead of parametric knowledge (information in the LLM that was gained from training).

They write that Google’s AI answer says the website indicated it had been offline since early 2026:

“…the phrasing says the website indicated rather than people indicated; though in the age of LLMs uncertainty, that distinction might not mean much anymore.

…it clearly mentions the timeframe as early 2026. Since the website didn’t exist before mid-2025, this actually suggests Google has relatively fresh information; although again, LLMs!”

A little later in the blog post the Redditor admits that they don’t know why Google is saying that the website is offline.

They explained that they implemented a shot-in-the-dark solution by removing a pop-up, incorrectly guessing that the pop-up was causing the issue. This highlights the importance of being certain about what’s causing a problem before making changes in the hope that they will fix it.

The Redditor shared that they didn’t know how Google summarizes information about a site in response to a query about it, and expressed concern that Google could scrape irrelevant information and then show it as an answer.

They write:

“…we don’t know how exactly Google assembles the mix of pages it uses to generate LLM responses.

This is problematic because anything on your web pages might now influence unrelated answers.

…Google’s AI might grab any of this and present it as the answer.”

I don’t fault the author for not knowing how Google AI search works; I’m fairly certain it’s not widely known. It’s easy to get the impression that it’s an AI answering questions.

But what’s basically going on is that AI search is based on Classic Search, with AI synthesizing the content it finds online into a natural language answer. It’s like asking someone a question, they Google it, then they explain the answer from what they learned from reading the website pages.

Google’s John Mueller Explains What’s Going On

Mueller responded to the person’s Reddit post in a neutral and polite manner, showing why the fault lies in the Redditor’s implementation.

Mueller explained:

“Is that your site? I’d recommend not using JS to change text on your page from “not available” to “available” and instead to just load that whole chunk from JS. That way, if a client doesn’t run your JS, it won’t get misleading information.

This is similar to how Google doesn’t recommend using JS to change a robots meta tag from “noindex” to “please consider my fine work of html markup for inclusion” (there is no “index” robots meta tag, so you can be creative).”

Mueller’s response explains that the site relies on JavaScript to replace placeholder text in the served HTML, which only works for visitors whose browsers actually run that script.

What happened here is that Google indexed the placeholder text. Google saw the originally served content with the "not available" message and treated it as the page's content.

Mueller explained that the safer approach is to have the correct information present in the page’s base HTML from the start, so that both users and search engines receive the same content.
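A quick way to see what a client that doesn't run JavaScript receives is to fetch the raw HTML and search it for the placeholder wording. This is a rough sketch using a hypothetical URL and placeholder string, not the Redditor's actual setup:

  # Fetch the served HTML (no JavaScript executed) and look for the placeholder text.
  curl -s https://www.example.com/ | grep -i "not available"

If the placeholder shows up in the output, it is part of the HTML the server actually sends, and any crawler or AI system that doesn't execute the script can reasonably treat it as the page's content.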

Takeaways

There are multiple takeaways here that go beyond the technical issue underlying the Redditor’s problem. Top of the list is how they tried to guess their way to an answer.

They really didn’t know how Google AI search works, which introduced a series of assumptions that complicated their ability to diagnose the issue. Then they implemented a “fix” based on guessing what they thought was probably causing the issue.

Guessing is an approach to SEO problems that’s often justified by Google’s opacity, but sometimes the issue isn’t Google; it’s a knowledge gap in SEO itself and a signal that further testing and diagnosis are necessary.

Featured Image by Shutterstock/Kues

Google’s Search Relations Team Debates If You Still Need A Website via @sejournal, @MattGSouthern

Google’s Search Relations team was asked directly whether you still need a website in 2026. They didn’t give a one-size-fits-all answer.

The conversation stayed focused on trade-offs between owning a website and relying on platforms such as social networks or app stores.

In a new episode of the Search Off the Record podcast, Gary Illyes and Martin Splitt spent about 28 minutes exploring the question and repeatedly landed on the same conclusion: it depends.

What Was Said

Illyes and Splitt acknowledged that websites still offer distinct advantages, including data sovereignty, control over monetization, the ability to host services such as calculators or tools, and freedom from platform content moderation.

Both Googlers also emphasized situations where a website may not be necessary.

Illyes referenced a Google user study conducted in Indonesia around 2015-2016 where businesses ran entirely on social networks with no websites. He described their results as having “incredible sales, incredible user journeys and retention.”

Illyes also described mobile games that, in his telling, became multi-million-dollar and in some cases “billion-dollar” businesses without a meaningful website beyond legal pages.

Illyes offered a personal example:

“I know that I have a few community groups in WhatsApp for instance because that’s where the people I want to reach are and I can reach them reliably through there. I could set up a website but I never even considered because why? To do what?”

Splitt addressed trust and presentation, saying:

“I’d rather have a nicely curated social media presence that exudes trustworthiness than a website that is not well done.”

When pressed for a definitive answer, Illyes offered the closest thing to a position, saying that if you want to make information or services available to as many people as possible, a website is probably still the way to go in 2026. But he framed it as a personal opinion, not a recommendation.

Why This Matters

Google Search is built around crawling and indexing web content, but the hosts still frame “needing a website” as a business decision that depends on your goals and audience.

Neither made a case that websites are essential for every business in 2026. Neither argued that the open web offers something irreplaceable. The strongest endorsement was that websites provide a low barrier of entry for sharing information and that the web “isn’t dead.”

This is consistent with the fragmented discovery landscape that SEJ has been covering, where user journeys now span AI chatbots, social feeds, and community platforms alongside traditional search.

Looking Ahead

The Search Off the Record podcast has historically offered behind-the-scenes perspectives from the Search Relations team that sometimes run ahead of official positions.

This episode didn’t introduce new policy or guidance. But the Search Relations team’s willingness to validate social-only business models and app-only distribution reflects how the role of websites is changing in a multi-platform discovery environment.

The question is worth sitting with. If the Search Relations team frames website ownership as situational rather than essential, the value proposition rests on the specific use case, not on the assumption that every business needs one.


Featured Image: Diki Prayogo/Shutterstock

Bing AI Citation Tracking, Hidden HTTP Homepages & Pages Fall Under Crawl Limit – SEO Pulse via @sejournal, @MattGSouthern

Welcome to this week’s SEO Pulse: updates cover how you track AI visibility, how a ghost page can break your site name in search results, and what new crawl data reveals about Googlebot’s file size limits.

Here’s what matters for you and your work.

Bing Webmaster Tools Adds AI Citation Dashboard

Microsoft introduced an AI Performance dashboard in Bing Webmaster Tools, giving publishers visibility into how often their content gets cited in Copilot and AI-generated answers. The feature is now in public preview.

Key Facts: The dashboard tracks total citations, average cited pages per day, page-level citation activity, and grounding queries. Grounding queries show the phrases AI used when retrieving your content for answers.

Why This Matters

Bing is now offering a dedicated dashboard for AI citation visibility. Google includes AI Overviews and AI Mode activity in Search Console’s overall Performance reporting, but it doesn’t break out a separate report or provide citation-style URL counts. AI Overviews also assign all linked pages to a single position, which limits what you can learn about individual page performance in AI answers.

Bing’s dashboard goes further by tracking which pages get cited, how often, and what phrases triggered the citation. The missing piece is click data. The dashboard shows when your content is cited, but not whether those citations drive traffic.

Now you can confirm which pages are referenced in AI answers and identify patterns in grounding queries, but connecting AI visibility to business outcomes still requires combining this data with your own analytics.

What SEO Professionals Are Saying

Wil Reynolds, founder of Seer Interactive, celebrated the feature on X and focused on the new grounding queries data:

“Bing is now giving you grounding queries in Bing Webmaster tools!! Just confirmed, now I gotta understand what we’re getting from them, what it means and how to use it.”

Koray Tuğberk GÜBÜR, founder of Holistic SEO & Digital, compared it directly to Google’s tooling on X:

“Microsoft Bing Webmaster Tools has always been more useful and efficient than Google Search Console, and once again, they’ve proven their commitment to transparency.”

Fabrice Canel, principal product manager at Microsoft Bing, framed the launch on X as a bridge between traditional and AI-driven optimization:

“Publishers can now see how their content shows up in the AI era. GEO meets SEO, power your strategy with real signals.”

The reaction across social media centered on a shared frustration. This is the data practitioners have been asking for, but it comes from Bing rather than Google. Several people expressed hope that Google and OpenAI would follow with comparable reporting.

Read our full coverage: Bing Webmaster Tools Adds AI Citation Performance Data

Hidden HTTP Homepage Can Break Your Site Name In Google

Google’s John Mueller shared a troubleshooting case on Bluesky where a leftover HTTP homepage was causing unexpected site-name and favicon problems in search results. The issue is easy to miss because Chrome can automatically upgrade HTTP requests to HTTPS, hiding the problematic page from normal browsing.

Key Facts: The site used HTTPS, but a server-default HTTP homepage was still accessible. Chrome’s auto-upgrade meant the publisher never saw the HTTP version, but Googlebot doesn’t follow Chrome’s upgrade behavior, so Googlebot was pulling from the wrong page.

Why This Matters

This is the kind of problem you wouldn’t find in a standard site audit because your browser never shows it. If your site name or favicon in search results doesn’t match what you expect, and your HTTPS homepage looks correct, the HTTP version of your domain is worth checking.

Mueller suggested running curl from the command line to see the raw HTTP response without Chrome’s auto-upgrade. If it returns a server-default page instead of your actual homepage, that’s the source of the problem. You can also use the URL Inspection tool in Search Console with a Live Test to see what Google retrieved and rendered.
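As an illustration (with example.com standing in for your own domain), the following requests show the raw HTTP response and the HTTPS version side by side, with no browser auto-upgrade involved:

  # Raw HTTP response: headers plus the start of the body.
  curl -si http://example.com/ | head -n 40

  # The HTTPS homepage you expect visitors and crawlers to see.
  curl -si https://example.com/ | head -n 40

If the first request returns a server-default or placeholder page instead of redirecting to (or matching) your real homepage, that stray HTTP version is the likely culprit, and a permanent redirect to the HTTPS URL is the usual cleanup.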

Google’s documentation on site names specifically mentions duplicate homepages, including HTTP and HTTPS versions, and recommends using the same structured data for both. Mueller’s case shows what happens when an HTTP version contains content different from the HTTPS homepage you intended.

What People Are Saying

Mueller described the case on Bluesky as “a weird one,” noting that the core problem is invisible in normal browsing:

“Chrome automatically upgrades HTTP to HTTPS so you don’t see the HTTP page. However, Googlebot sees and uses it to influence the sitename & favicon selection.”

The case highlights a pattern where browser features often hide what crawlers see. Examples include Chrome’s auto-upgrade, reader modes, client-side rendering, and JavaScript content. To debug site name and favicon issues, check the server response directly, not just what loads in the browser.

Read our full coverage: Hidden HTTP Page Can Cause Site Name Problems In Google

New Data Shows Most Pages Fit Well Within Googlebot’s Crawl Limit

New research based on real-world webpages suggests most pages sit well below Googlebot’s 2 MB fetch cutoff. The data, analyzed by Search Engine Journal’s Roger Montti, draws on HTTP Archive measurements to put the crawl limit question into practical context.

Key Facts: HTTP Archive data suggests most pages are well below 2 MB. Google recently clarified in updated documentation that Googlebot’s limit for supported file types is 2 MB, while PDFs get a 64 MB limit.

Why This Matters

The crawl limit question has been circulating in technical SEO discussions, particularly after Google updated its Googlebot documentation earlier this month.

The new data answers the practical question that documentation alone couldn’t. Does the 2 MB limit matter for your pages? For most sites, the answer is no. Standard webpages, even content-heavy ones, rarely approach that threshold.

Where the limit could matter is on pages with extremely bloated markup, inline scripts, or embedded data that inflates HTML size beyond typical ranges.
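If you want to check where a specific page falls relative to that threshold, measuring the size of the HTML your server returns is enough. A minimal sketch, with example.com as a placeholder for your own URL:

  # Print the size, in bytes, of the HTML body returned for the page.
  curl -so /dev/null -w '%{size_download}\n' https://www.example.com/

Compare the printed byte count against the roughly 2 MB cutoff; as the HTTP Archive numbers above suggest, most pages will come in far below it.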

The broader pattern here is Google making its crawling systems more transparent. Moving documentation to a standalone crawling site, clarifying which limits apply to which crawlers, and now having real-world data to validate those limits gives a clearer picture of what Googlebot handles.

What Technical SEO Professionals Are Saying

Dave Smart, technical SEO consultant at Tame the Bots and a Google Search Central Diamond Product Expert, put the numbers in perspective in a LinkedIn post:

“Googlebot will only fetch the first 2 MB of the initial html (or other resource like CSS, JavaScript), which seems like a huge reduction from 15 MB previously reported, but honestly 2 MB is still huge.”

Smart followed up by updating his Tame the Bots fetch and render tool to simulate the cutoff. In a Bluesky post, he added a caveat about the practical risk:

“At the risk of overselling how much of a real world issue this is (it really isn’t for 99.99% of sites I’d imagine), I added functionality to cap text based files to 2 MB to simulate this.”

Google’s John Mueller endorsed the tool on Bluesky, writing:

“If you’re curious about the 2MB Googlebot HTML fetch limit, here’s a way to check.”

Mueller also shared Web Almanac data on Reddit to put the limit in context:

“The median on mobile is at 33kb, the 90-percentile is at 151kb. This means 90% of the pages out there have less than 151kb HTML.”

Roger Montti, writing for Search Engine Journal, reached a similar conclusion after reviewing the HTTP Archive data. Montti noted that the data based on real websites shows most sites are well under the limit, and called it “safe to say it’s okay to scratch off HTML size from the list of SEO things to worry about.”

Read our full coverage: New Data Shows Googlebot’s 2 MB Crawl Limit Is Enough

Theme Of The Week: The Diagnostic Gap

Each story this week points to something practitioners couldn’t see before, or checked the wrong way.

Bing’s AI citation dashboard fills a measurement gap that has existed since AI answers started citing website content. Mueller’s HTTP homepage case reveals an invisible page that standard site audits and browser checks would miss entirely because Chrome hides it. And the Googlebot crawl limit data answers a question that documentation updates raised, but couldn’t resolve on their own.

The connecting thread isn’t that these are new problems. AI citations have been happening without measurement tools. Ghost HTTP pages have been confusing site name systems since Google introduced the feature. And crawl limits have been listed in Google’s docs for years without real-world validation. What changed this week is that each gap got a concrete diagnostic: a dashboard, a curl command, and a dataset.

The takeaway is that the tools and data for understanding how search engines interact with your content are getting more specific. The challenge is knowing where to look.

Featured Image: Accogliente Design/Shutterstock