Google’s John Mueller answered a question about a situation where multiple pages were ranking for the same search queries. Mueller affirmed the importance of reducing unnecessary duplication but also downplayed keyword cannibalization.
What Is Keyword/Content Cannibalization?
There is an idea that web pages will have trouble ranking if multiple pages are competing for the same keyword phrases. This is related to the SEO fear of duplicate content. Keyword cannibalization is just a catchall phrase that is applied to low-ranking pages that are on similar topics.
The problem with calling something keyword cannibalization is that it does not identify anything specific that is wrong with the content. That vagueness is why people keep asking John Mueller about it: it is an ill-defined and unhelpful SEO concept.
SEO Confusion
The SEO was confused about the recent &num=100 change, in which Google blocked rank trackers from scraping the search results (SERPs) at a rate of 100 results at a time. The change affects rank trackers’ ability to scrape the SERPs; it has no effect on Google Search Console other than making its data more accurate. In response, some rank trackers are floating the idea of only showing ranking data for the top 20 search results.
The SEO was under the mistaken impression that Search Console no longer shows impressions for results beyond the top twenty, which is not the case. Mueller didn’t address that misunderstanding directly. Here is the question that was asked:
“If now we are not seeing data from GSC from positions 20 and over, does that mean in fact there are no pages ranking above those places?
If I want to avoid cannibalization, how would I know which pages are being considered for a query, if I can only see URLs in the top 20 or so positions?”
Different Pages Ranking For Same Query
Mueller said that different pages ranking for the same search query is not a problem. I agree: multiple web pages ranking for the same keyword phrases is not a problem; it’s a good thing.
Mueller explained:
“Search Console shows data for when pages were actually shown, it’s not a theoretical measurement. Assuming you’re looking for pages ranking for the same query, you’d see that only if they were actually shown. (IMO it’s not really “cannibalization” if it’s theoretical.)
All that said, I don’t know if this is actually a good use of time. If you have 3 different pages appearing in the same search result, that doesn’t seem problematic to me just because it’s “more than 1”. You need to look at the details, you need to know your site, and your potential users.
Reduce unnecessary duplication and spend your energy on a fantastic page, sure. But pages aren’t duplicates just because they happen to appear in the same search results page. I like cheese, and many pages could appear without being duplicates: shops, recipes, suggestions, knives, pineapple, etc.”
Actual SEO Problems
Multiple pages ranking for the same keyword phrases is not a problem; it’s a good thing. Multiple pages failing to rank for their keywords is the real problem.
Here are some real reasons why pages on the same topic may fail to rank:
The pages are too long and consequently are unfocused.
The pages contain off-topic passages.
The pages are insufficiently linked internally.
The pages are thin.
The pages are virtually duplicates of the other pages in the group.
The above are just a few real reasons why multiple pages on the same topic may not be ranking. Pointing at the pages and declaring that they are cannibalizing each other diagnoses nothing. It’s not something to worry about, because keyword cannibalization is just a catchall phrase that masks the actual reasons listed above.
Takeaway
The debate over keyword cannibalization says less about Google’s algorithm and more about how the SEO community is willing to accept ideas without really questioning whether the underlying basis makes sense. The question about keyword cannibalization is frequently discussed, and I think that’s because many SEOs have the intuition that it’s somehow not right.
Maybe the habit of diagnosing ranking issues with convenient labels mirrors the human tendency to prefer simple explanations over complex answers. But, as Mueller reminds us, the real story is not that two or three pages happen to surface for the same query. The real story is whether those pages are useful, well linked, and focused enough to meet a reader’s information needs.
What is diagnosed as “content cannibalization” is more likely something else. So, rather than chasing shadows, it may be better to look at the web pages with the eyes of a user and really dig into what’s wrong with the page or the interlinking patterns of the entire section that is proving problematic. Keyword cannibalization disappears the moment you look closer, and other real reasons become evident.
A ruling last week in Australia makes using facial recognition to combat fraud almost impossible and is the latest example of global regulators’ growing disapproval of biometric technology in retail environments.
The Office of the Australian Information Commissioner (OAIC) determined that Kmart Australia Limited had violated the country’s Privacy Act 1988 when it used facial recognition to prevent return fraud and theft.
Kmart stores in Australia had used facial recognition technology to catch fraudsters. Image: Wesfarmers.
Kmart and Bunnings
At issue was a Kmart pilot program that had placed facial recognition technology (FRT) in 28 of the company’s retail locations from June 2020 through July 2022.
The company created a face print, if you will, of every shopper entering one of the pilot program stores. When a customer returned an item, Kmart’s system would compare that person’s face print to a list of known thieves and fraudsters.
Kmart argued that the technology aimed to thwart return fraud and protect its employees, whom thieves had frequently threatened. Biometrics, however, represent a special category of privacy protection in Australia.
The case was similar to a November 2024 OAIC determination against Bunnings, a home-improvement retailer, for using FRT to identify criminals. Australian conglomerate Wesfarmers Limited owns Kmart Australia, Bunnings, and other retail chains, including Target Australia.
FRT Challenges
The OAIC stated that its finding is not a ban on FRT, but its conditions make using the technology challenging, if not impossible.
For example, an Australian retailer would need consent before employing FRT, and the thieves stealing items to attempt return fraud would almost certainly refuse.
Kmart had disclosed FRT in a sign at the front of each pilot store, which read, “This store has 24-hour CCTV coverage, which includes facial recognition technology.” But this notice did not establish consent according to the OAIC.
Asking would-be criminals for permission to use facial recognition has the same effect as banning it, given the current state of the technology.
GDPR
The OAIC’s Kmart decision regarding explicit consent aligns with other privacy regulations and rulings.
For example, many privacy experts note that Article 9 of the European Union’s General Data Protection Regulation, which covers the processing of special categories of personal data, requires explicit consent for the use of FRT.
FTC vs. Rite Aid
In the United States, there are instances of rulings against FRT and the use of biometric data.
In a 2023 determination, the U.S. Federal Trade Commission prohibited Rite Aid Pharmacy from using FRT and other automated biometric systems for five years.
The agency argued that Rite Aid had not taken sufficient measures to prevent false positives and algorithmic racial profiling.
Illinois BIPA
The Illinois Biometric Information Privacy Act was enacted in 2008 and is, perhaps, the most stringent biometric privacy law in the nation.
The BIPA requires businesses to provide written notification of the use of biometric data and obtain shoppers’ written consent. The law permits individuals to sue for violations, and has resulted in many cases against retailers, such as:
A 2022 lawsuit alleges that Walmart’s in-store “cameras and advanced video surveillance systems” secretly collect shoppers’ biometric data without notice or consent.
A March 2024 class-action lawsuit against Target alleges the retailer used FRT to identify shoplifters without proper consent.
A class-action lawsuit filed in August 2025 alleges that Home Depot is illegally using FRT at its self-checkout kiosks.
M•A•C Cosmetics
From the retail and ecommerce perspective, the most concerning BIPA lawsuit may be Fiza Javid v. M.A.C. Cosmetics Inc. The class-action suit, filed in August 2025, is not concerned with crime fighting but with virtual try-on technology.
The complaint notes that M•A•C’s website asks shoppers to upload a photo or enable live video so that it can detect someone’s facial structure and skin color. Plaintiff Fiza Javid asserts the feature would require BIPA’s written consent and is therefore in violation of the Illinois law.
M•A•C Cosmetics offers tools for virtual try-on and skin color identification.
The case has yet to be decided on its merits, yet BIPA has already spawned virtual try-on cases, including:
Kukovec v. Estée Lauder Companies, Inc. (2022).
Theriot v. Louis Vuitton North America, Inc. (2022).
Gielow v. Pandora Jewelry LLC (2022).
Shores v. Wella Operations US LLC (2022).
Engagement and Enforcement
AI-driven facial recognition and biometric technology are among the most promising trends in retail and ecommerce.
The technology has the potential to reduce fraud, deter theft, and support criminal prosecutions. A 2023 article in the International Security Journal estimated that facial biometrics could reduce retail shoplifting by between 50% and 90% depending on location and use.
Moreover, biometrics can improve online and in-store shopping with virtual try-on tools. Some merchants have reported a 35% increase in sales conversions when virtual shopping is available.
The question is how privacy regulations and rulings, such as last week’s Kmart decision, will ultimately affect the use of these technologies.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
A pivotal meeting on vaccine guidance is underway—and former CDC leaders are alarmed
This week has been an eventful one for America’s public health agency. Two former leaders of the US Centers for Disease Control and Prevention explained in a Senate hearing why they suddenly departed. They also described how CDC employees are being instructed to turn their backs on scientific evidence.
They painted a picture of a health agency in turmoil—and at risk of harming the people it is meant to serve. And, just hours afterwards, a panel of CDC advisers voted to stop recommending the MMRV vaccine for children under four. Read the full story.
—Jessica Hamzelou
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
If you’re interested in reading more about US vaccine policy, check out:
+ Read our profile of Jim O’Neill, the deputy health secretary and current acting CDC director.
+ Why US federal health agencies are abandoning mRNA vaccines. Read the full story.
+ Why childhood vaccines are a public health success story. No vaccine is perfect, but these medicines are still saving millions of lives. Read the full story.
Every year, MIT Technology Review selects one individual whose work we admire to recognize as Innovator of the Year. For 2025, we chose Sneha Goenka, who designed the computations behind the world’s fastest whole-genome sequencing method.
Thanks to her work, physicians can now sequence a patient’s genome and diagnose a genetic condition in less than eight hours—an achievement that could transform medical care.
Register here to join an exclusive subscriber-only Roundtable conversation with Goenka, Leilani Battle, assistant professor at the University of Washington, and our editor in chief Mat Honan at 1 p.m. ET next Tuesday, September 23.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 The CDC voted against giving some children a combined vaccine
If accepted, the agency will stop recommending the MMRV vaccine for children under 4. (CNN)
+ Its vote on hepatitis B vaccines for newborns is expected today too. (The Atlantic $)
+ RFK Jr.’s allies are closing ranks around him. (Politico)
2 Russia is using Charlie Kirk’s murder to sow division in the US
It’s using the momentum to push pro-Kremlin narratives and divide Americans. (WP $)
+ The complicated phenomenon of political violence. (Vox)
+ We don’t know what being ‘terminally online’ means any more. (Wired $)
3 Nvidia will invest $5 billion in Intel
The partnership allows Intel to develop custom CPUs to work with Nvidia’s chips. (WSJ $)
+ It’s a much-needed financial shot in the arm for Intel. (WP $)
+ It’s also great news for Intel’s Asian suppliers. (Bloomberg $)
4 Medical AI tools downplay symptoms in women and ethnic minorities
Experts fear that LLM-powered tools could lead to worse health outcomes. (FT $)
+ Artificial intelligence is infiltrating health care. We shouldn’t let it make all the decisions. (MIT Technology Review)
5 AI browsers have hit the mainstream
Where’s the off switch? (Wired $)
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)
6 China has entered the global brain interface race
Its ambitious government-backed startups are primed to challenge Neuralink. (Bloomberg $)
+ This patient’s Neuralink brain implant gets a boost from generative AI. (MIT Technology Review)
7 What makes humans unique in the age of AI?
Defining the distinctions between us and machines isn’t as easy as it used to be. (New Yorker $)
+ How AI can help supercharge creativity. (MIT Technology Review)
8 This ship helps to reconnect Africa’s internet
AI needs high speed internet, which needs undersea cables. (Rest of World)
+ What Africa needs to do to become a major AI player. (MIT Technology Review)
9 Hundreds of people queued in Beijing to buy Apple’s new iPhone
Desire for Apple products in the country appears to be alive and well. (Reuters)
10 San Francisco’s idea of a great night out? A robot cage fight
It’s certainly one way to have a good time. (NYT $)
Quote of the day
“Get off the iPad!”
—An irate air traffic controller tells the pilots of a Spirit Airlines flight to pay attention and avoid a potential collision with Donald Trump’s Air Force One aircraft, Ars Technica reports.
One more thing
We used to get excited about technology. What happened?
Shannon Vallor is a philosopher who studies AI and data, so her Twitter feed is always filled with the latest tech news. Increasingly, she’s realized that the constant stream of information is no longer inspiring joy, but a sense of resignation.
Joy is missing from our lives, and from our technology. Its absence is feeding a growing unease being voiced by many who work in tech or study it. Fixing it depends on understanding how and why the priorities in our tech ecosystem have changed. Read the full story.
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
Ecommerce marketers know the challenge of delivering relevant promotions to prospects without violating privacy rules and norms. Yet many providers now offer solutions that do both — personalize offers and respect privacy — for much greater performance.
Two of those providers are my guests in this week’s episode. Sean Larkin is CEO of Fueled, a customer data platform for merchants. Francesco Gatti is CEO of Opensend, a repository of consumer demographic and behavior data.
The entire audio of our conversation is embedded below. The transcript is edited for clarity and length.
Eric Bandholz: Give us an overview of what you do.
Sean Larkin: I’m CEO and founder of Fueled, a customer data platform for ecommerce. We help brands strengthen the data signals sent to advertising and marketing platforms such as Meta to improve tracking and performance. Our team collaborates with companies such as Built Basics, Dr. Squatch, and Oats Overnight, ensuring accurate pixel data and confidence in their marketing metrics.
Francesco Gatti: I’m CEO and co-founder of Opensend. We help brands identify site visitors who haven’t provided contact information. This includes new users who show sufficient engagement for retargeting and returning shoppers browsing on different devices or browsers. Our technology links these sessions, enabling brands and platforms such as Klaviyo and Shopify to distinguish between returning visitors and new ones.
We also offer a persona tool that segments customers by detailed demographics and behavior, enabling personalized marketing. We integrate directly with Klaviyo and other email platforms. By enhancing Klaviyo accounts, we help merchants reach unidentifiable visitors and maximize ad spend. Our re-identification capabilities are critical, as consumers often use multiple devices and frequently replace them, which can disrupt tracking. We work with roughly 1,000 brands, including Oats Overnight, Regent, and Alexander Wang.
Bandholz: How can your tools track a consumer across devices?
Gatti: We see two main use cases. First is cross-device and cross-browser identification. Imagine Joe bought from you last year on his old iPhone. This year, he returns using a new iPhone or his work computer. Typically, you wouldn’t know it’s the same person. Our technology matches signals such as user-agent data against our consumer graph, which holds multiple devices per person, allowing you to recognize Joe regardless of the device or browser.
The second use case involves capturing emails from high-intent visitors. Suppose Joe clicks an Instagram ad, views several product pages for over two minutes, even adds items to his cart, but leaves without buying or subscribing.
Through data partnerships with publishers such as Sports Illustrated and Quizlet, where users provide their email addresses in exchange for free content or promotions, we can match Joe’s anonymous activity to his known email. We then send that email, plus his on-site behavior, to Klaviyo and similar platforms. This triggers an abandonment flow, allowing us to retarget him with personalized messages and increase the chance of conversion.
Bandholz: What are other ways brands use the data?
Gatti: Brands mainly set up automated flows and let them run. Like Fueled, we send data to email platforms and customer data systems, allowing them to trigger personalized actions automatically. The data enables Klaviyo to distinguish between new and returning visitors to show pop-ups only to first-timers.
Larkin: We integrate hashed emails into Meta. Match scores rise 30–50%, and return on ad spend improves because we can prove an ad drove a sale and retarget that shopper.
Gatti: Our identity graph stores multiple data points, including email addresses, phone numbers, postal addresses, IP addresses, and devices. Sharing that with Fueled feeds richer details into Meta’s conversion API, dramatically increasing match rates and targeting accuracy.
Larkin: Privacy rules now limit simple pixel tracking. Since iOS 17, identifiers are stripped, making it harder for ecommerce brands to track visitors and run effective ads. Fueled collects first-party data, and Opensend’s third-party graph restores lost signals. With conversion API integrations, brands send detailed data directly to platforms such as Meta for stronger targeting and email automation.
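For readers unfamiliar with hashed emails, here is a minimal sketch, not Fueled’s or Opensend’s actual implementation, of how an email address is typically normalized and SHA-256 hashed before being sent to a conversions API such as Meta’s. The exact normalization rules vary by platform, so treat this as illustrative only.

```typescript
import { createHash } from "node:crypto";

// Most conversion APIs expect the email to be trimmed and lowercased
// before hashing, so identical addresses produce identical hashes.
function normalizeEmail(email: string): string {
  return email.trim().toLowerCase();
}

// The platform receives only this irreversible SHA-256 digest and matches
// it against hashes of its own logged-in users.
function hashEmail(email: string): string {
  return createHash("sha256").update(normalizeEmail(email)).digest("hex");
}

// Example: both spellings yield the same 64-character hex digest.
console.log(hashEmail("  Joe.Shopper@Example.com "));
console.log(hashEmail("joe.shopper@example.com"));
```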
Bandholz: When should a brand start using data technologies like yours?
Larkin: It depends on scale. If you’re spending under $20,000 a month on ads, the free Shopify integrations with Meta or Google usually suffice. Fueled is twice the cost of our competitors because we offer hands-on audits, proactive monitoring, and direct Slack support. Our typical clients do $8 million or more in annual revenue, often over $20 million. Some entrepreneurs bring us in from day one for the data advantage. Still, most brands should wait until ad spend grows to the point where minor optimizations have a significant financial impact.
Gatti: For Opensend it’s about traffic, not revenue. We recommend at least 30,000 monthly unique visitors so our filtering can produce quality new emails. Services that identify returning visitors across devices work best for sites with 100,000 monthly visitors, where a 10x ROI is common. Our plans start at $250 per month.
Visitors who never share an email address convert less often, but our filtering narrows the gap. At apparel brand True Classic, for example, we captured 390,000 emails over three years, saw 65% open rates, and delivered a 5x return on investment within three months. As these contacts move through remarketing with holiday offers and seasonal promotions, ROI continues to compound.
Bandholz: When should a company remove an unresponsive subscriber?
Gatti: It varies by brand, average order value, and overall marketing strategy. We work with both high-end luxury companies and billion-dollar tire sellers with very different approaches. In general, if you’ve sent 10 to 15 emails with zero engagement, it’s time to drop those contacts. Continuing to send won’t help.
Larkin: I’d add that many brands, including big ones, don’t plan their retargeting or abandonment flows, especially heading into Black Friday and Cyber Monday. The pressure to discount everything can lead to leaving money on the table. Opensend reveals customer intent, allowing you to adjust offers. Someone who reaches the checkout may not require the same discount as someone who has only added a product to their cart.
We partner with agencies such as Brand.co and New Standard Co that help us build smart strategies. My biggest recommendation for the holidays is to review your flows, decide when a large discount is necessary, and avoid giving away the farm. If you blanket customers with huge discounts, many will disappear once the sale ends.
Bandholz: Where can people follow you, find you, use your services?
An internal dispute within the WordPress core contributor team spilled into the open, causing major confusion among people outside the organization. The friction began with a post from more than a week ago and culminated in a remarkable outburst, exposing latent tensions within the core contributor community.
Mary Hubbard Announcement Triggers Conflict
The incident seemingly began with a September 15 announcement by Mary Hubbard, the Executive Director of WordPress, of a new Core Program Team meant to improve how Core contributor teams coordinate and collaborate with each other. But the announcement was only the trigger; the conflict grew out of longer-standing friction.
“The goal of this team is to strengthen coordination across Core, improve efficiency, and make contribution easier. It will focus on documenting practices, surfacing roadmaps, and supporting new teams with clear processes.
The Core Program Team will not set product direction. Each Core team remains autonomous. The Program Team’s role is to listen, connect, and reduce friction so contributors can collaborate more smoothly.”
That announcement was met with the following response by a member of the documentation team (Jenni McKinnon), which was eventually removed:
“For the public record: This Core Program Team announcement was published during an active legal and procedural review that directly affects the structural governance of this project.
I am not only subject to this review—I am one of the appointed officials overseeing it under my legal duty as a recognized lead within SSRO (Strategic Social Resilience Operations). This is a formal governance, safety, and accountability protocol—bound by national and international law—not internal opinion.
Effective immediately: • This post and the program it outlines are to be paused in full. • No action is to be taken under the name of this Core Program Team until the review concludes and clearance is formally issued. • Mary Hubbard holds no valid authority in this matter. Any influence, instruction, or decision traced to her is procedurally invalid and is now part of a legal evidentiary record. • Direction, oversight, and all official governance relating to this matter is held by SSRO, myself, and verified leadership under secured protocol.
This directive exists to protect the integrity of WordPress contributors, prevent governance sabotage, and ensure future decisions are legally and ethically sound.
Further updates will be provided only through secured channels or when review concludes. Thank you for respecting this freeze and honoring the laws and values that underpin open source.”
The post was followed by astonishment and questions in various Slack and Facebook WordPress groups. The roots of the friction trace back to events from a week earlier, centered on documentation team participation.
Documentation Team Participation
A September 10 post by documentation team member Estela Rueda informed the Core contributor community that the WordPress 6.9 release squad is experimenting with a smaller team that excludes documentation leads, with only a temporary “Docs Liaison” in place. Her post explained why this exclusion is a problem, detailed the importance of documentation in the release cycle, and urged that a formal documentation lead role be reinstated in future releases.
“The release team does not include representation from the documentation team. Why is this a problem? Because often documentation gets overlooked in release planning and project-wide coordination: Documentation is not a “nice-to-have,” it is a survival requirement. It’s not something we might do if someone has time; it’s something we must do — or the whole thing breaks down at scale. Removing the role from the release squad, we are not just sending the message that documentation is not important, we are showing new contributors that working on docs will never get them to the top of the credits page, therefore showing that we don’t even appreciate contributing to the Docs.”
Jenni McKinnon, who is a member of the docs team, responded with her opinions:
“This approach isn’t in line with genuine open-source values — it’s exclusionary and risks reinforcing harmful, cult-like behaviors.
By removing the Docs Team from the release squad under the guise of “reducing overhead,” this message sends a stark signal: documentation is not essential. That’s not just unfair — it actively erodes the foundations of transparency, contributor morale, and equitable participation.”
She added further comments, culminating in the post below that accused WordPress Executive Director Mary Hubbard of being behind a shift toward “top-down” control:
“While this post may appear collaborative on the surface, it’s important to state for the record — under Chatham House Rule, and in protection of those who have been directly impacted — that this proposal was pushed forward by Mary Hubbard, despite every Docs Team lead, and multiple long-time contributors, expressing concerns about the ethics, sustainability, and power dynamics involved.
Framing this as ‘streamlining’ or ‘experimenting’ is misleading. What’s happening is a shift toward top-down control and exclusion, and it has already resulted in real harm, including abusive behavior behind the scenes.”
Screenshot Of September 10 Comment
Documentation Team Member Asked To Step Away
The latest escalation appears to have been triggered by a post published earlier today announcing that Jenni McKinnon had been asked to “step away.”
“The Documentation team’s leadership has asked Jenni McKinnon to step away from the team.
Recent changes in the structure of the WordPress release squad started a discussion about the role of the Documentation team in documenting the release. While the team was working with the Core team, the release squad, and Mary Hubbard to find a solution for this and future releases, Jenni posted comments that were out of alignment with the team, including calls for broad changes across the project and requests to remove certain members from leadership roles.
This ran counter to the Documentation team’s intentions. Docs leadership reached out privately in an effort to de-escalate the situation and asked Jenni to stop posting such comments, but this behaviour did not stop. As a result, the team has decided to ask her to step away for a period of time to reassess her involvement. We will work with her to explore rejoining the team in the future, if it aligns with the best outcomes for both her and the team.”
And that post may have been what precipitated today’s blow-up in the comments section of Mary Hubbard’s post.
Zooming Out: The Big Picture
What happened today is an isolated incident. But some in the WordPress community have confided their opinion that the WordPress core technical debt has grown larger and expressed concern that the big picture is being ignored. Separately, in comments on her September 10 post (Docs team participation in WordPress releases), Estela Rueda alluded to the issue of burnout among WordPress contributors:
“…the number of contributors increases in waves depending on the releases or any special projects we may have going. The ones that stay longer, we often feel burned out and have to take breaks.”
Taken together, to an outsider, today’s friction contributes to the appearance of cracks starting to show in the WordPress project.
Google returns to court on Monday for the remedies phase of the Department of Justice’s ad-tech antitrust case, where the government is asking the judge to order a divestiture of Google Ad Manager.
The remedies trial follows a ruling that found Google illegally monopolized the publisher ad server and ad exchange markets, while rejecting claims about advertiser ad networks and Google’s past acquisitions.
In a statement published today, Google said it will appeal the earlier decision and argued the DOJ’s proposed remedies “go far beyond the Court’s liability decision and the law.”
What The DOJ Is Seeking
The Justice Department will seek structural remedies, which could include selling parts of Google’s ad-tech stack.
Based on reports and filings, the DOJ appears to be pushing for a divestiture of AdX, and possibly DFP, which are now combined within Google Ad Manager.
The remedies trial is scheduled to start Monday in Alexandria, Virginia, before U.S. District Judge Leonie M. Brinkema.
Google’s Counter
Google says a breakup would disrupt publishers and raise costs for advertisers.
The company proposes a behavioral fix focused on interoperability rather than divestiture.
In Google’s words:
“DOJ’s proposed changes go far beyond the Court’s liability decision and the law, and risk harming businesses across the country.”
“We propose building on Ad Manager’s interoperability, letting publishers use third-party tools to access our advertiser bids in real-time.”
These elements reflect Google’s May filing, which proposed making AdX’s real-time bids available to rival ad servers and phasing out Unified Pricing Rules for open-web display.
What The Court Already Decided
Judge Brinkema’s April opinion found Google violated the Sherman Act in the publisher ad server and ad exchange markets and unlawfully tied DFP and AdX.
The court didn’t find a monopoly in advertiser ad networks and rejected claims tied to Google’s acquisitions.
Why This Matters
Should the court decide on divestiture, you might see changes in how open-web display inventory is auctioned and served, along with costs for transitioning off integrated tools.
If the judge backs Google’s interoperability plan, you can expect required access to real-time bids and rule changes that could make multi-stack setups easier without a corporate split.
Looking Ahead
Google plans to appeal the liability decision, so any ordered remedies may be delayed until the appeal is reviewed.
For years, backlinks have been the gold standard for building authority, driving link juice, and climbing up the SERPs. But with the rise of Generative AI, the search landscape is shifting. Instead of chasing endless links, visibility now also depends on something more intelligent: AI citations. This evolution means your brand can show up in front of wider audiences, even without a massive backlink profile.
The question is, when it comes to AI citations versus backlinks, how do they differ, and does one outweigh the other? In this blog, we’ll break down both, explore their role in building authority, and uncover whether AI citations are the future of digital visibility or just another layer to your SEO strategy.
What are backlinks?
Backlinks are simply links from one website to another. Think of them as digital recommendations: when a reputable site links to your content, it signals to search engines that your page is trustworthy and valuable.
For example, below is a screenshot from a Zapier blog post that links to the Yoast SEO plugin landing page.
Zapier blog post has linked to the Yoast SEO plugin page
Backlinks aren’t new; they’ve been around for more than two decades. Google has used links as a ranking signal since 1998, when its original PageRank algorithm scored pages by the links pointing to them, making backlinks one of the oldest forms of online citations. Since then, they’ve remained a core ranking factor, shaping how websites compete for visibility.
The PageRank Citation Ranking research paper
Today, backlinks are still considered one of the strongest signals for building authority. Many brands invest in link-building strategies to secure high-quality backlinks, from being cited in well-written pieces to building relationships that earn natural mentions.
Why backlinks matter
Backlinks are not just about search rankings; they influence almost every aspect of your website’s visibility and growth. Here’s why they remain essential:
Improve rankings by acting as one of Google’s most important signals, especially when they come from authoritative domains
Drive referral traffic that is often highly targeted and more likely to engage with your content
Boost authority and credibility by showing search engines that trusted sites vouch for your content
Help with faster indexing by guiding search engine crawlers to discover and prioritize your pages
Provide semantic understanding by giving Google context through anchor text and linking page content
What types of backlinks work best?
Not all backlinks are equal, and the ones that matter most usually have these traits:
They come from trusted and authoritative websites
They include your target keyword, or a variation of it, in the anchor text
They are topically relevant to your niche
They are ‘dofollow’ links that pass link equity
Backlinks remain important for SEO, but as search evolves, they’re no longer the only way to build authority. This is where AI citations enter the picture.
What are AI citations?
AI citations are references, attributions, or direct links to your content, brand, or product that appear within AI-generated answers. Unlike traditional backlinks that live inside web content, AI citations are shown within AI search results or summaries. They often appear as clickable source cards, numbered footnotes, or links listed below an AI overview.
For example, when Google AI Overviews quotes websites in the AI search box, it cites the original sources that provided the information.
Some other examples of AI citations are:
ChatGPT cites your brand or content as part of its generated answer
Bing Copilot highlights your product as a recommended solution to a user’s query, even if it doesn’t include a direct link
Perplexity.ai lists your research as a supporting source beneath its summarized response
Why AI citations matter for visibility
AI citations are becoming critical for brand exposure because they align with how people now consume information online:
Search is becoming prompt-driven, which means users type questions or prompts instead of keywords. If AI picks your content to cite, you’re instantly visible to that audience
Discovery is moving from clicks to context. Users may not always visit your website, but being cited ensures your brand becomes part of the answer itself
AI is becoming your audience’s first impression. In many cases, people see the AI summary before they see the actual search results. Appearing as a cited source makes your brand part of that first interaction
Citations boost credibility and authority. When an AI tool references your content, it signals to users that your site is trustworthy enough to be part of the response
Types of AI citations that influence brand visibility
Not all AI citations look the same. Here are the key forms that shape how your brand is discovered:
Name-drop mentions drive brand visibility
When AI directly mentions your brand or product in its response, such as in a recommendation or ‘best of’ list, you gain instant visibility in front of users without them needing to click further.
Source references build credibility signals
These citations work like the ‘works cited’ section in AI outputs. Tools like Gemini, Perplexity, or Google AI Overviews may display your URL in the list of sources at the bottom of the response. Even if you’re not in the main summary, you benefit from the authority signal.
Quoted passages establish expert authority
When AI pulls exact wording from your content and attributes it to you, it elevates your position as an expert. This type of citation places you in prime digital real estate, signalling leadership in your niche.
Synthesized mentions shape brand narrative
Sometimes AI blends your insights into its summary without naming or linking back to you. While harder to measure, your content still influences the narrative and reinforces brand authority in indirect ways.
AI citations are already reshaping how visibility works in search. Just as backlinks defined SEO two decades ago, citations in AI search are now shaping brand perception by influencing what users see, trust, and remember about your business.
How are AI citations and backlinks different?
So, now that we have an overview of AI citations and backlinks, let’s see how backlinks and LLM citations differ from each other:
What they are
Backlinks: Hyperlinks from one website to another, long used as a ranking factor in SEO.
AI/LLM citations: Mentions, attributions, or references included in AI-generated answers, sometimes with clickable links.
Visibility
Backlinks: Usually embedded within web content and not always visible to the average reader.
AI/LLM citations: Front-facing and displayed in AI overviews, chatbots, or search snapshots, making them highly visible to users.
Trust impact
Backlinks: Boost site authority indirectly through improved rankings and referral signals.
AI/LLM citations: Build direct credibility by being presented as a trusted source in AI answers or summaries.
Selection factors
Backlinks: Determined by domain authority, anchor text, and contextual relevance; for example, a news site linking to your product page in an article.
AI/LLM citations: Influenced by how clear, structured, and credible your content is, and whether AI systems can easily parse and cite it.
SEO focus
Backlinks: Link building strategies, such as outreach, partnerships, and content marketing, to earn quality backlinks.
AI/LLM citations: Creating structured, high-quality, and easily digestible content that AI systems can cite.
Effect
Backlinks: Improve rankings and drive referral traffic over time.
AI/LLM citations: Enhance brand visibility, authority, and recall directly in AI-powered search experiences.
How to earn both
Earning backlinks and AI citations doesn’t have to be two separate strategies. With the right approach, the same efforts that build traditional authority also make your content LLM crawler-friendly.
Here’s how to do it:
Create deep, original, and useful content
Go beyond rewriting what’s already ranking. Publish original research, case studies, interviews, or unique perspectives that others can’t find elsewhere. AI models pull from fresh, problem-solving content, and so do journalists and bloggers who link naturally.
Write for real questions, not just keywords
Search is shifting from keywords to prompts. Pay attention to what your audience is actually asking on forums, social media, and other platforms. Create conversational, direct answers to those questions. If your content aligns with user prompts, it’s far more likely to be both cited by AI and linked by humans.
Leverage structured data
Use schema markup (FAQ, HowTo, Article, Product) to help AI and search engines clearly understand your content. Proper attribution of authors and sources also increases your chance of being recognized as a credible reference. Structured, transparent content is ‘citation ready.’
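As a concrete illustration, here is a minimal FAQ schema sketch (the question, answer, and output are placeholders, not Yoast output) showing the kind of JSON-LD that helps search engines and AI systems parse a page. In practice, the serialized object is placed in a script tag of type application/ld+json in the page markup.

```typescript
// Minimal FAQPage JSON-LD sketch; the question and answer below are placeholders.
const faqSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "What are AI citations?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "AI citations are references or links to your content that appear inside AI-generated answers.",
      },
    },
  ],
};

// The serialized object goes into a script tag that crawlers and AI systems
// read from the rendered page.
const jsonLdTag =
  `<script type="application/ld+json">${JSON.stringify(faqSchema)}</script>`;
console.log(jsonLdTag);
```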
Build relationships for natural backlinks
Backlinks remain relationship-driven. Connect with journalists, bloggers, and industry peers through guest posts, expert roundups, or collaborations. AI often mirrors human trust signals, so if authoritative voices link to you, AI is more likely to cite you too.
Focus on clarity and quotability
Make your content easy to lift and reuse. Use short, memorable statements, stats, or definitions that can be quoted word-for-word. Structured layouts like subheadings, lists, and bullet points make content easier to reference by both humans and AI.
Monitor, analyze, and adapt
Don’t just publish; track performance. Use SEO tools to monitor backlinks and dedicated platforms to monitor AI citations and understand how AI systems perceive your brand. If competitors are cited for prompts you should own, study their structure and improve on it. Adjusting based on data helps you stay ahead.
The takeaway: With the right strategies, you don’t need separate plans for backlinks and AI citations. Clear, authoritative, and trustworthy content earns both and multiplies your visibility across search engines and AI-powered platforms.
Exploring Yoast’s AI features
Applying the right strategies for earning backlinks and AI citations is easier when you have the right tools. Yoast’s AI features combine SEO best practices with AI-powered enhancements to make your content clearer, more discoverable, and more effective.
Here’s how they can support your workflow:
Yoast AI Generate
Quickly create multiple, tailored titles and meta descriptions for your pages or blog posts. This ensures your content attracts clicks and stands out in search results. You can select from different options, tweak them to fit your brand voice, and preview how they’ll appear in SERPs.
Yoast AI Summarize
Turn long-form content into scannable, bullet-point takeaways in seconds. This may also help reduce bounce rates by giving readers immediate clarity on what your page delivers. It also makes your content easier for AI systems and Google’s AI Overviews to interpret correctly.
Yoast AI Optimize
Get AI-powered suggestions to improve SEO signals such as keyphrase distribution, sentence length, and readability. You can review, apply, or dismiss recommendations with one click, ensuring that optimization never comes at the cost of your unique editorial voice.
Together, these AI-powered features help you save time, improve clarity, and boost both human and AI-driven visibility, laying the foundation for stronger backlinks and more consistent AI citations.
Backlinks or citations: What truly matters for visibility?
Backlinks have been the backbone of SEO for more than two decades, helping websites climb rankings, build authority, and attract referral traffic. But the rise of AI citations is reshaping how visibility works. When AI systems like Google’s AI Overviews or ChatGPT cite your content, they place your brand directly in front of users at the moment of discovery.
The truth is, it’s not a choice between backlinks and AI citations. Both matter, but in different ways. Backlinks remain critical for SEO growth and authority, while AI citations are quickly becoming the new gatekeepers of brand perception and visibility. The winning strategy is to create content that earns both.
Ahad Qureshi
I’m a Computer Science grad who accidentally stumbled into writing—and stayed because I fell in love with it. Over the past six years, I’ve been deep in the world of SEO and tech content, turning jargon into stories that actually make sense. When I’m not writing, you’ll probably find me lifting weights to balance my love for food (because yes, gym and biryani can coexist) or catching up with friends over a good cup of chai.
SEO for Paws is a live-streamed fundraiser founded by Anton Shulke, an expert at organizing events, to help a charity close to his heart.
Anton has tirelessly continued his support for his favorite charity, which aids the many pets that were left behind in Kyiv after war broke out in Ukraine. The previous event in March generated approximately $7,000 for the worthy cause, with all funds going straight to the shelters where the money is needed.
Anton is well-known for his love of cats. Dynia, who traveled across Europe with Anton’s family after escaping Kyiv, is a regular feature on his social media channels.
Image from Anton Shulke, September 2025
One Cat Turned Into A Shelter Of 50
Among the many pet shelters that SEO For Paws has helped is an apartment run by Alya, who cares for up to 50 animals.
Alya has always cared for animals, and meeting an old, sick cat she called Fox was the start of what became an organized shelter.
In 2016, she started with five cats living in her apartment, and today has 50 alongside 15 of her grandmother’s cats.
There’s a lot involved in caring for this many animals, including feeding, cleaning, washing litter boxes, replacing litter, and performing hygiene or medical procedures when needed.
Running a home-based shelter is not easy. Sometimes it’s sad, sometimes it’s exhausting. But Alya says that looking around at all the little whiskered faces, the furry bodies sprawled across the furniture, makes it worth it. Giving them a life of warmth, food, and love is worth every challenge.
To keep supporting individuals like Alya, we need your help. You can donate via Anton’s Buy Me a Coffee.
SEO For Paws – Cat Lovers, Dog Lovers, And SEO
The upcoming “SEO for Paws” livestream aims to continue fundraising efforts. The event, which runs from 12:00 p.m. to 4:30 p.m. ET, will offer actionable SEO and digital marketing advice from experts while raising money for the animal shelters.
Headline speakers who have donated their time to support his cause include Andrey Lipattsev, David Carrasco, Olga Zarr, Judith Lewis, James Wirth, Zach Chahalis, Jamie Indigo, and Lee Elliott.
Attendance is free, but participants are encouraged to donate to help the charity.
Event Highlights
Date and Time: September 25, 2025, from 12:00 p.m. to 4:30 p.m. ET.
Access: Free registration with the option to join live and participate in Q&A sessions; a recording will be made available on YouTube.
Speakers: The live stream will feature SEO and digital marketing experts, who will share actionable insights.
How To Make A Difference
The “SEO for Paws” live stream is an opportunity to make a meaningful difference while listening to excellent speakers.
All money raised is donated to help cats and dogs in Ukraine.
Digital marketers are providing more evidence that Google’s disabling of the num=100 search parameter correlates exactly with changes in Google Search Console impression rates. What looked like reliable data may, in fact, have been a distorted picture shaped by third-party SERP crawlers. It’s becoming clear that squeezing meaning from the top 100 search results is increasingly a thing of the past and that this development may be a good thing for SEO.
Num=100 Search Parameter
Google recently disabled the use of a search parameter that caused web searches to display 100 organic search results for a given query. Search results keyword trackers depended on this parameter for efficiently crawling Google’s search results. By eliminating the search parameter, Google is forcing data providers into an unsustainable position that requires them to scale their crawling by ten times in order to extract the top 100 search results.
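To make the scale problem concrete, here is a simplified sketch (illustrative only; real SERP collection must also handle consent pages, localization, and anti-bot defenses) of the request pattern a rank tracker now faces: with the num=100 parameter, one request covered the top 100 results, while standard pagination through the start parameter takes ten requests of ten results each.

```typescript
// Illustrative only; not how any particular rank tracker actually crawls.
const query = encodeURIComponent("example keyword");

// Before: a single request could return the top 100 organic results.
const singleRequest = `https://www.google.com/search?q=${query}&num=100`;

// After: covering the same 100 positions takes ten paginated requests,
// stepping through results with the start parameter (0, 10, 20, ... 90).
const paginatedRequests = Array.from(
  { length: 10 },
  (_, page) => `https://www.google.com/search?q=${query}&start=${page * 10}`
);

console.log(singleRequest);
console.log(paginatedRequests.length); // 10 requests where 1 used to suffice
```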
Rank Tracking: Fighting To Keep It Alive
Mike Roberts, founder of SpyFu, wrote a defiant post saying that they will find a way to continue bringing top 100 data to users.
His post framed the situation as an us-versus-them moment:
“We’re fighting to keep it alive. But this hits hard – delivering is very expensive.
We might even lose money trying to do this… but we’re going to try anyway.
If we do this alone, it’s not sustainable. We need your help.
This isn’t about SpyFu vs. them.
If we can do it – the way the ecosystem works – all your favorite tools will be able to do it. If nothing else, then by using our API (which has 100% of our keyword and ranking data).”
Rank Tracking: Where The Wind Is Blowing
Tim Soulo, CMO of Ahrefs, sounded more pragmatic about the situation, tweeting that the future of ranking data will inevitably be focused on the Top 20 search results.
“Ramping up the data pulls by 10x is just not feasible, given the scale at which all SEO tools operate.
So the question is:
‘Do you need keyword data below Top 20?’
Because most likely it’s going to come at a pretty steep premium going forward.
Personally, I see it this way:
▪️ Top 10 – is where all the traffic is at. Definitely a must-have.
▪️ Top 20 – this is where “opportunity” is at, both for your and your competitors. Also must-have.
▪️ Top 21-100 – IMO this is merely an indication that a page is “indexed” by Google. I can’t recall any truly actionable use cases for this data.”
Many of the responses to his tweet were in agreement, as am I. Anything below the top 20, as Tim suggested, only tells you that a site is indexed. The big picture, in my opinion, is that it doesn’t matter whether a site is ranked in position 21 or 91; they’re pretty much equivalently suffering from serious quality or relevance issues that need to be worked out. Any competitors in that position shouldn’t be something to worry about because they are not up and coming; they’re just limping their way in the darkness of page three and beyond.
Page two positions, however, provide actionable and useful information because they show that a page is relevant for a given keyword term but that the sites ranked above it are better in terms of quality, user experience, and/or relevance. They could even be as good as what’s on page one but, in my experience, it’s less about links and more often it’s about user preference for the sites in the top ten.
Distorted Search Console Data
It’s becoming clear that search results scraping distorted Google’s Search Console data. Users are reporting that Search Console keyword impression data is significantly lower since Google blocked the Num=100 search parameter. Impressions are the times when Google shows a web page in the search results, meaning that the site is ranking for a given keyword phrase.
SEO and web developer Tyler Gargula (LinkedIn profile) posted the results of an analysis of over three hundred Search Console properties, showing that 87.7% of the sites experienced drops in impressions. 77.6% of the sites in the analysis experienced losses in query counts, losing visibility for unique keyword phrases.
“Keyword Length: Short-tail and mid-tail keywords experienced the largest drops in impressions, with single word keywords being much lower than I anticipated. This could be because short and mid-tail keywords are popular across the SEO industry and easier to track/manage within popular SEO tracking tools.
Keyword Ranking Positions: There has been reductions in keywords ranking on page 3+, and in turn an increase in keywords ranking in the top 3 and page 1. This suggests keywords are now more representative of their actual ranking position, versus receiving skewed positions from num=100.”
Google Is Proactively Fighting SERP Scraping
Disabling the num=100 search parameter is just the prelude to a bigger battle. Google is hiring an engineer to assist in statistical analysis of SERP patterns and to work with other teams on models for combating scrapers. Scraper activity clearly distorts Search Console data, which in turn makes it harder for SEOs to get an accurate reading of search performance.
What It Means For The Future
The num=100 parameter was turned off in a direct attack on the scraping that underpinned the rank-tracking industry. Its removal is forcing the search industry to reconsider the value of data beyond the top 20 results. This may be a turning point toward better attribution and clearer measures of relevance.