Ask An SEO: How Can I Turn Low-Converting Traffic Into High-Value Sessions? via @sejournal, @kevgibbo

This week’s Ask an SEO question comes from an ecommerce site owner who’s experiencing a common frustration:

“Our ecommerce site has decent traffic but poor conversion rates. What data points should we be analyzing first, and what are two to three quick conversion rate optimization (CRO) wins that most companies overlook?”

This is a great question. Having good traffic but poor conversion rates is really frustrating for ecommerce site managers.

You’ve successfully managed to get hundreds or even thousands of people onto your landing pages, but only a tiny proportion of them turn into paying customers.

What’s going wrong, and what can you do about it?

I’ve broken down my tips as follows:

  • Start with your bigger picture goals.
  • Double-check your targeting.
  • Data points to analyze.
  • Simulate the user journey.
  • Quick CRO wins.

Thinking About The Bigger Picture First

Before answering your question, I think it’s valuable to take a step back and think about your approach to running your site – and what your goals are.

People often get lots of low-quality traffic for the following kinds of reasons:

  • They’re attracting the wrong kinds of people.
  • They’re using paid ads ineffectively.
  • The content on the site gets clicks, but doesn’t solve visitors’ needs.
  • The site is confusing, unclear, or even annoying to use.

For me, conversion is always built on the same key fundamentals:

  • Quality over quantity: There’s no value in having millions of visitors if none of them convert. I’ve worked on ecommerce sites where we implemented changes that made traffic drop dramatically. However, the quality of the remaining traffic was much higher, meaning conversion rates – and revenue – soared.
  • Focus on user experience (UX): It’s really important to understand the user journey from inception to conversion. What’s helping people navigate your site, and what’s hindering them? Often, this is simply about returning to the basics of UX. High-value sessions come from relevance, ease, and trust – all of which are fully within your control.

So, before making changes, I’d encourage you to step back and think about your goals and objectives for the site. Everything else will feed into that.

What’s Realistic?

It’s helpful to have a benchmark for what your conversion rate should be.

According to Shopify data, the average ecommerce site conversion rate is 1.4%. A very good rate is 3.2% or above, while very few sites hit more than 5%.

Double-Check Your Targeting

A common reason people get high traffic but low conversions is a problem with their targeting. Essentially, they’re attracting the wrong kinds of site visitors.

For example, you might run a site selling tennis memorabilia. But most of the traffic you get is from people searching for tickets to tennis tournaments. As a consequence, most visitors bounce.

If this is the case, it’s time to rethink your SEO. Are you ranking for the right keywords? Are your landing pages aligned with the intent behind those queries? Making changes here can make a big difference.

However, if your targeting is correct but conversion is still off, it’s time to look into CRO.

5 Kinds Of Conversion Rate Data To Analyze

By analyzing how people navigate your site, you can start to build a picture of how they’re using it – and which features of your site or the user journey are turning visitors off.

If you’re using a store builder like Shopify, Wix, or Squarespace, you should have access to quite a lot of CRO data within the dashboard. On older sites, it can be a bit trickier to figure these things out.

There are lots of metrics that can give you insights into conversion rates. But the following information is often most telling:

1. User Behavior Metrics

  • Bounce rate and exit rate: This is especially important for key pages (such as product and checkout).
  • Scroll depth: Are users seeing your calls to action and product info?
  • Heatmaps: Are users interacting with intended elements?
  • Entry points: Are there commonalities between entrances for users who aren’t converting versus those who are converting? If so, this may indicate a specific issue with certain user journeys.

2. Conversion Funnel Drop-Off

  • Abandonment: Where are users abandoning the funnel (e.g., product page → add to cart → checkout)?
  • Granularity: I’d also recommend looking at abandonment rates for each step.

3. Device & Browser Performance

  • Device: Conversion rate by device (mobile often underperforms).
  • Operating system: Technical glitches in specific browsers/OS versions can quietly hurt conversions.

4. Site Speed & Core Web Vitals

  • Page load time: This directly affects conversions, especially on mobile.
  • Track it: Use tools like Google PageSpeed Insights or Lighthouse.

5. On-Site Search Behavior

  • What are people searching for?
  • Are searches returning relevant results?
  • High search exit rate often signals poor relevance or UX.

This can seem like a lot of work! However, what you’re really looking for is a basic benchmark for each of the above points that you can plug into a spreadsheet.

You only need to gather this data once. Then, it’s just a case of seeing how changes you make affect those scores.

For example, say you have a high cart abandonment rate of 90%. You might decide to make some simple changes to the process (e.g., letting users check out as a guest). You’ll then be able to see what effect your change has had.
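To make the funnel analysis concrete, here’s a minimal sketch of computing step-by-step abandonment rates from session counts. The step names and numbers are hypothetical; plug in whatever funnel steps your own analytics report.

```python
# Hypothetical session counts at each funnel step (illustrative numbers only).
funnel = [
    ("product page", 10_000),
    ("add to cart", 1_200),
    ("checkout started", 400),
    ("purchase", 120),
]

def abandonment_rates(steps):
    """Return the share of users lost between each consecutive funnel step."""
    rates = {}
    for (prev_name, prev_n), (name, n) in zip(steps, steps[1:]):
        rates[f"{prev_name} -> {name}"] = 1 - n / prev_n
    return rates

for transition, rate in abandonment_rates(funnel).items():
    print(f"{transition}: {rate:.0%} abandoned")
```

Recomputing these rates after each change you ship gives you the before/after comparison described above.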

Simulate The User’s Journey

This is all about putting yourself in your users’ shoes. I’m often surprised by how few ecommerce site owners do this, yet you can’t understand what’s going wrong if you don’t use the site like a user would.

Simulating user journeys often exposes glaring usability issues.

For example, it’s quite common to land on a category page for, say, sports T-shirts, and find it’s full of broken links. You click on a T-shirt that looks good, but it leads to a 404. That’s such a turn-off to potential customers.

There are, of course, endless possible ways that people can navigate your site. I’d prioritize a handful of your most popular products and try to imagine how people would go through the process of buying them.

Here are some of the things to look out for:

Landing Page (First Impression)

  • Is the value proposition clear within five seconds?
  • Are headlines concise and benefit-driven?
  • Is there a clear CTA above the fold?
  • Are distractions minimized (pop-ups, autoplay, clutter)?

Navigation And Search

  • Is site navigation intuitive and consistent?
  • Can users find products in three clicks or fewer?
  • Are filters/sorting options clear and responsive?

Category Pages

  • Is key info shown (price, reviews, quick add)?
  • Is the layout clean (think about devices here, mobile responsiveness, font size, etc.)?
  • Are products visible above the fold?

Product Detail Pages

  • Are product titles, descriptions, and photos compelling and complete?
  • Is the price, shipping, and returns information visible without scrolling?
  • Are reviews and ratings visible and credible?
  • Is the “Add to Cart” button obvious and persistent?

Cart And Checkout

  • Is the cart editable (quantity, remove item)?
  • Are total costs (including shipping/tax) shown upfront?
  • Can users check out as a guest?
  • Are there too many form fields? (Trim non-essentials.)
  • Are payment options clearly presented and working?

Speed

  • Do key pages in the journey load quickly on both desktop and mobile? (Refer back to the Core Web Vitals data above.)

Quick CRO Wins That Are Often Overlooked

Conversion rate optimization doesn’t always require a root-and-branch site upgrade.

Here are some simple tweaks you can make that can be surprisingly impactful.

Improve Product Page Microcopy And Visual Hierarchy

If a user lands on a product page, it’s crucial to communicate key information to them. Yet, for many products, people have to scroll below the fold to find the information they need.

  • Show total price, shipping, and returns at the top of the page.
  • Have a clear image of the product (you’d be amazed, but this doesn’t always happen).
  • Spell out the product name, color, type, and other information.
  • Add urgency (“Only 3 left!”), real-time interest (“27 people viewed this today”), or social proof (UGC, ratings) near the CTA.

Make It Easy To Buy

It can sometimes be surprisingly difficult for people to know how to actually buy things on ecommerce sites, particularly when using mobile. I’d recommend:

  • Make the “Add to Cart” button sticky on mobile, in a clear, bold, contrasting color.
  • Add subtle animations or color shifts to draw attention.
  • Show trust badges (e.g., secure checkout, money-back guarantee).

Make It Easier To Find Items

Any ecommerce site today should have a search bar where people can look for products. Help people find products by offering auto-suggestions with images and categories.

I’d also recommend tracking no-results queries and fixing them with redirects or better tagging. You might also want to promote high-converting products in the top results.

Simplify The Checkout Experience

A poor checkout experience can be a real killer for conversion. The priority here is almost always about making things as easy as possible for buyers.

  • Remove non-critical fields (phone number, company name).
  • Offer guest checkout as default.
  • Add progress indicators to reduce perceived friction.

Use Exit-Intent Offers Wisely

Exit-intent technology can be very helpful, at least on some kinds of websites.

However, it’s important to use it thoughtfully and appropriately (what makes sense on a fast-fashion website won’t look as good on a luxury goods store).

Instead of broad discounts, use behavioral targeting. Here are some options:

  • Offer a free shipping incentive only to high-cart-value exits.
  • Show email capture pop-ups only after a period of inactivity or product page scrolling.
  • Use exit-intent popups with tailored offers (e.g., “Complete your order now and get 10% off”).
  • Send a three-part abandoned cart email flow (reminder, offer, scarcity, e.g., “Items going fast!”).

A Final Note: Test It First

Last but not least, I’d always recommend A/B testing before rolling out whole site changes.

If you’ve tweaked a certain part of the user journey or the layout of a landing page, trial it for a week or so and see what results you get.

This avoids making damaging changes that harm conversion rates (and take a long time to rectify).
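If you want to go beyond eyeballing the numbers, a standard two-proportion z-test can tell you whether a variant’s conversion lift is likely real or just noise. This is a minimal sketch using only the standard library; the session counts are hypothetical.

```python
from math import sqrt, erf

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical trial: 140/10,000 sessions convert on the control page,
# 180/10,000 on the variant (e.g., after adding guest checkout).
p = ab_test_p_value(140, 10_000, 180, 10_000)
print(f"p-value: {p:.3f}")  # a value below 0.05 suggests a real difference
```

A week of traffic is often the minimum to collect enough sessions per arm; with small samples, the test will (correctly) refuse to call a winner.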

Preaching To The Converters

I hope these ideas for converting more of your ecommerce site’s visitors have helped.

As I’ve shown, there are tons of potential CRO techniques you can use, and it can get a bit overwhelming.

However, it’s often more straightforward than it seems, and you can often start with small steps that make a difference.

One of the reasons ecommerce site management can be so rewarding is the ability to experiment and see how small changes can make a big difference. Good luck!

Featured Image: Paulo Bobita/Search Engine Journal

Google’s Trust Ranking Patent Shows How User Behavior Is A Signal via @sejournal, @martinibuster

Google long ago filed a patent for ranking search results by trust. The groundbreaking idea behind the patent is that user behavior can be used as a starting point for developing a ranking signal.

The big idea behind the patent is that the Internet is full of websites all linking to and commenting about each other. But which sites are trustworthy? Google’s solution is to utilize user behavior to indicate which sites are trusted and then use the linking and content on those sites to reveal more sites that are trustworthy for any given topic.

PageRank is basically the same thing, only it begins and ends with one website linking to another website. The innovation of Google’s trust ranking patent is to put the user at the start of that trust chain, like this:

User trusts X Websites > X Websites trust Other Sites > This feeds into Google as a ranking signal

The trust originates from the user and flows to the trusted sites, which themselves provide anchor text, lists of other sites, and commentary about other sites.

That, in a nutshell, is what Google’s trust-based ranking algorithm is about.

The deeper insight is that it reveals Google’s groundbreaking approach to letting users be a signal of what’s trustworthy. You know how Google keeps saying to create websites for users? This is what the trust patent is all about, putting the user in the front seat of the ranking algorithm.

Google’s Trust And Ranking Patent

The patent was coincidentally filed around the same period that Yahoo and Stanford University published a Trust Rank research paper which is focused on identifying spam pages.

Google’s patent is not about finding spam. It’s focused on doing the opposite, identifying trustworthy web pages that satisfy the user’s intent for a search query.

How Trust Factors Are Used

The first part of any patent is an Abstract section that offers a very general description of the invention, and that’s what this patent does as well.

The patent abstract asserts:

  • That trust factors are used to rank web pages.
  • The trust factors are generated from “entities” (which are later described to be the users themselves, experts, expert web pages, and forum members) that link to or comment about other web pages.
  • Those trust factors are then used to re-rank web pages.
  • Re-ranking web pages kicks in after the normal ranking algorithm has done its thing with links, etc.

Here’s what the Abstract says:

“A search engine system provides search results that are ranked according to a measure of the trust associated with entities that have provided labels for the documents in the search results.

A search engine receives a query and selects documents relevant to the query.

The search engine also determines labels associated with selected documents, and the trust ranks of the entities that provided the labels.

The trust ranks are used to determine trust factors for the respective documents. The trust factors are used to adjust information retrieval scores of the documents. The search results are then ranked based on the adjusted information retrieval scores.”

As you can see, the Abstract does not say who the “entities” are nor does it say what the labels are yet, but it will.

Field Of The Invention

The next part is called the Field Of The Invention. The purpose is to describe the technical domain of the invention (which is information retrieval) and the focus (trust relationships between users) for the purpose of ranking web pages.

Here’s what it says:

“The present invention relates to search engines, and more specifically to search engines that use information indicative of trust relationship between users to rank search results.”

Now we move on to the next section, the Background, which describes the problem this invention solves.

Background Of The Invention

This section describes why search engines fall short of answering user queries (the problem) and why the invention solves the problem.

The main problems described are:

  • Search engines are essentially guessing (inferring) the user’s intent when they rely on the search query alone.
  • Users rely on expert-labeled content from trusted sites (called vertical knowledge sites) to tell them which web pages are trustworthy.
  • The content labeled as relevant or trustworthy is important, but search engines ignore it.
  • It’s worth remembering that this patent predates the BERT algorithm and the other natural language approaches now used to better understand search queries.

This is how the patent explains it:

“An inherent problem in the design of search engines is that the relevance of search results to a particular user depends on factors that are highly dependent on the user’s intent in conducting the search—that is why they are conducting the search—as well as the user’s circumstances, the facts pertaining to the user’s information need.

Thus, given the same query by two different users, a given set of search results can be relevant to one user and irrelevant to another, entirely because of the different intent and information needs.”

Next it goes on to explain that users trust certain websites that provide information about certain topics:

“…In part because of the inability of contemporary search engines to consistently find information that satisfies the user’s information need, and not merely the user’s query terms, users frequently turn to websites that offer additional analysis or understanding of content available on the Internet.”

Websites Are The Entities

The rest of the Background section names forums, review sites, blogs, and news websites as places that users turn to for their information needs, calling them vertical knowledge sites. Vertical Knowledge sites, it’s explained later, can be any kind of website.

The patent explains that trust is why users turn to those sites:

“This degree of trust is valuable to users as a way of evaluating the often bewildering array of information that is available on the Internet.”

To recap, the “Background” section explains that the trust relationships between users and entities like forums, review sites, and blogs can be used to influence the ranking of search results. As we go deeper into the patent we’ll see that the entities are not limited to the above kinds of sites, they can be any kind of site.

Patent Summary Section

This part of the patent is interesting because it brings together all of the concepts into one place, but in a general high-level manner, and throws in some legal paragraphs that explain that the patent can apply to a wider scope than is set out in the patent.

The Summary section appears to have four sections:

  • The first section explains that the search engine ranks web pages that are trusted by entities (like forums, news sites, blogs, etc.), and that the system maintains information about the labels those entities apply to web pages.
  • The second section offers a general description of the work of those entities.
  • The third offers a general description of how the system works, beginning with the query, the assorted hand waving that goes on at the search engine with regard to the entity labels, and then the search results.
  • The fourth part is a legal explanation that the patent is not limited to the specific descriptions and applies to a wider scope. This is important: it allows something non-existent, even something as nutty as a “trust button” that a user clicks to identify a site as trustworthy, to serve as a stand-in for something else, such as navigational queries, Navboost, or any other signal that a user trusts a website.

Here’s a nutshell explanation of how the system works:

  • The user visits sites that they trust and click a “trust button” that tells the search engine that this is a trusted site.
  • The trusted site “labels” other sites as trusted for certain topics (the label could be a topic like “symptoms”).
  • A user asks a question at a search engine (a query) and uses a label (like “symptoms”).
  • The search engine ranks websites in the usual manner, then looks for sites that users trust and checks whether any of those sites have applied labels to other sites.
  • Google ranks those other sites that have had labels assigned to them by the trusted sites.

Here’s an abbreviated version of the third part of the Summary that gives an idea of the inner workings of the invention:

“A user provides a query to the system…The system retrieves a set of search results… The system determines which query labels are applicable to which of the search result documents. … determines for each document an overall trust factor to apply… adjusts the …retrieval score… and reranks the results.”

Here’s that same section in its entirety:

  • “A user provides a query to the system; the query contains at least one query term and optionally includes one or more labels of interest to the user.
  • The system retrieves a set of search results comprising documents that are relevant to the query term(s).
  • The system determines which query labels are applicable to which of the search result documents.
  • The system determines for each document an overall trust factor to apply to the document based on the trust ranks of those entities that provided the labels that match the query labels.
  • Applying the trust factor to the document adjusts the document’s information retrieval score, to provide a trust adjusted information retrieval score.
  • The system reranks the search result documents based on the trust adjusted information retrieval scores.”

The above is a general description of the invention.
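As a rough illustration of the flow just described: the patent publishes no formulas, so the data, the trust-rank values, and the trust-factor function below are all hypothetical. The sketch only shows the shape of the idea, namely that labels from trusted entities boost a document’s base retrieval score before the final ranking.

```python
# Hypothetical corpus: each document has a base information-retrieval score
# and a mapping of label -> entities that applied that label to it.
docs = {
    "yourhealth.com/colon-cancer": {"ir_score": 0.72, "labels": {"symptoms": ["expert-site"]}},
    "randomblog.example/cancer":   {"ir_score": 0.80, "labels": {}},
}

# Trust ranks of labeling entities (hypothetical values, notionally derived
# from user behavior such as visitation patterns).
trust_ranks = {"expert-site": 0.9}

def rerank(docs, trust_ranks, query_label):
    """Boost each document's IR score by the trust of entities whose labels match the query label."""
    reranked = []
    for url, doc in docs.items():
        entities = doc["labels"].get(query_label, [])
        # Trust factor: here simply the sum of matching labelers' trust ranks
        # (the patent leaves the exact combining function open).
        trust_factor = sum(trust_ranks.get(e, 0.0) for e in entities)
        adjusted = doc["ir_score"] * (1 + trust_factor)
        reranked.append((url, adjusted))
    return sorted(reranked, key=lambda pair: pair[1], reverse=True)

for url, score in rerank(docs, trust_ranks, "symptoms"):
    print(f"{score:.2f}  {url}")
```

Note how the labeled page overtakes the page with the higher base score once the trust adjustment is applied, which is exactly the reranking effect the Abstract describes.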

The next section, called Detailed Description, dives deep into the details. At this point it’s becoming increasingly evident that the patent is highly nuanced and cannot be reduced to simple advice like: “optimize your site like this to earn trust.”

A large part of the patent hinges on two features: a trust button and a “label:” advanced search operator.

Neither the trust button nor the “label:” operator has ever existed. As you’ll see, they are quite probably stand-ins for techniques that Google doesn’t want to explicitly reveal.

Detailed Description In Four Parts

The details of this patent are located in four sections within the Detailed Description section of the patent. This patent is not as simple as 99% of SEOs say it is.

These are the four sections:

  1. System Overview
  2. Obtaining and Storing Trust Information
  3. Obtaining and Storing Label Information
  4. Generating Trust Ranked Search Results

The System Overview is where the patent deep dives into the specifics. The following is an overview to make it easy to understand.

System Overview

1. Explains how the invention (a search engine system) ranks search results based on trust relationships between users and the user-trusted entities who label web content.

2. The patent describes a “trust button” that a user can click that tells Google that a user trusts a website or trusts the website for a specific topic or topics.

3. The patent says a trust related score is assigned to a website when a user clicks a trust button on a website.

4. The trust button information is stored in a trust database that’s referred to as #190.

Here’s what it says about assigning a trust rank score based on the trust button:

“The trust information provided by the users with respect to others is used to determine a trust rank for each user, which is a measure of the overall degree of trust that users have in the particular entity.”

Trust Rank Button

The patent refers to the “trust rank” of the user-trusted websites. That trust rank is based on a trust button that a user clicks to indicate that they trust a given website, assigning a trust rank score.

The patent says:

“…the user can click on a “trust button” on a web page belonging to the entity, which causes a corresponding record for a trust relationship to be recorded in the trust database 190.

In general, any type of input from the user indicating that such a trust relationship exists can be used.”

The trust button has never existed and the patent quietly acknowledges this by stating that any type of input can be used to indicate the trust relationship.

So what is it? I believe that the “trust button” is a stand-in for user behavior metrics in general, and site visitor data in particular. The patent Claims section does not mention trust buttons at all but does mention user visitor data as an indicator of trust.

Here are several passages that mention site visits as a way to understand if a user trusts a website:

“The system can also examine web visitation patterns of the user and can infer from the web visitation patterns which entities the user trusts. For example, the system can infer that a particular user trusts a particular entity when the user visits the entity’s web page with a certain frequency.”

The same thing is stated in the Claims section of the patent, it’s the very first claim they make for the invention:

“A method performed by data processing apparatus, the method comprising:
determining, based on web visitation patterns of a user, one or more trust relationships indicating that the user trusts one or more entities;”

It may very well be that site visitation patterns and other user behaviors are what is meant by the “trust button” references.

Labels Generated By Trusted Sites

The patent defines trusted entities as news sites, blogs, forums, and review sites, but it is not limited to those; a trusted entity can be any kind of website.

Trusted websites create references to other sites and in that reference they label those other sites as being relevant to a particular topic. That label could be an anchor text. But it could be something else.

The patent explicitly mentions anchor text only once:

“In some cases, an entity may simply create a link from its site to a particular item of web content (e.g., a document) and provide a label 107 as the anchor text of the link.”

Although it explicitly mentions anchor text only once, there are other passages where anchor text is strongly implied. For example, the patent offers a general description of labels as describing or categorizing the content found on another site:

“…labels are words, phrases, markers or other indicia that have been associated with certain web content (pages, sites, documents, media, etc.) by others as descriptive or categorical identifiers.”

Labels And Annotations

Trusted sites link out to web pages with labels and links. The combination of a label and a link is called an annotation.

This is how it’s described:

“An annotation 106 includes a label 107 and a URL pattern associated with the label; the URL pattern can be specific to an individual web page or to any portion of a web site or pages therein.”

Labels Used In Search Queries

Users can also search with “labels” in their queries by using a non-existent “label:” advanced search query. Those kinds of queries are then used to match the labels that a website page is associated with.

This is how it’s explained:

“For example, a query “cancer label:symptoms” includes the query term “cancer” and a query label “symptoms”, and thus is a request for documents relevant to cancer, and that have been labeled as relating to “symptoms.”

Labels such as these can be associated with documents from any entity, whether the entity created the document, or is a third party. The entity that has labeled a document has some degree of trust, as further described below.”

What is that label in the search query? It could simply be certain descriptive keywords, but there aren’t any clues to speculate further than that.

The patent puts it all together like this:

“Using the annotation information and trust information from the trust database 190, the search engine 180 determines a trust factor for each document.”

Takeaway:

A user’s trust is in a website. That user-trusted website is not necessarily the one that’s ranked, it’s the website that’s linking/trusting another relevant web page. The web page that is ranked can be the one that the trusted site has labeled as relevant for a specific topic and it could be a web page in the trusted site itself. The purpose of the user signals is to provide a starting point, so to speak, from which to identify trustworthy sites.

Experts Are Trusted

Vertical Knowledge Sites, sites that users trust, can host the commentary of experts. The expert could be the publisher of the trusted site as well. Experts are important because links from expert sites are used as part of the ranking process.

Experts are defined as publishing a deep level of content on the topic:

“These and other vertical knowledge sites may also host the analysis and comments of experts or others with knowledge, expertise, or a point of view in particular fields, who again can comment on content found on the Internet.

For example, a website operated by a digital camera expert and devoted to digital cameras typically includes product reviews, guidance on how to purchase a digital camera, as well as links to camera manufacturer’s sites, new products announcements, technical articles, additional reviews, or other sources of content.

To assist the user, the expert may include comments on the linked content, such as labeling a particular technical article as “expert level,” or a particular review as “negative professional review,” or a new product announcement as ‘new 10MP digital SLR’.”

Links From Expert Sites

Links and annotations from user-trusted expert sites are described as sources of trust information:

“For example, Expert may create an annotation 106 including the label 107 “Professional review” for a review 114 of Canon digital SLR camera on a web site “www.digitalcameraworld.com”, a label 107 of “Jazz music” for a CD 115 on the site “www.jazzworld.com”, a label 107 of “Classic Drama” for the movie 116 “North by Northwest” listed on website “www.movierental.com”, and a label 107 of “Symptoms” for a group of pages describing the symptoms of colon cancer on a website 117 “www.yourhealth.com”.

Note that labels 107 can also include numerical values (not shown), indicating a rating or degree of significance that the entity attaches to the labeled document.

Expert’s web site 105 can also include trust information. More specifically, Expert’s web site 105 can include a trust list 109 of entities whom Expert trusts. This list may be in the form of a list of entity names, the URLs of such entities’ web pages, or by other identifying information. Expert’s web site 105 may also include a vanity list 111 listing entities who trust Expert; again this may be in the form of a list of entity names, URLs, or other identifying information.”

Inferred Trust

The patent describes additional signals that can be used to signal (infer) trust. These are more traditional type signals like links, a list of trusted web pages (maybe a resources page?) and a list of sites that trust the website.

These are the inferred trust signals:

“(1) links from the user’s web page to web pages belonging to trusted entities;
(2) a trust list that identifies entities that the user trusts; or
(3) a vanity list which identifies users who trust the owner of the vanity page.”

Another kind of trust signal that can be inferred is from identifying sites that a user tends to visit.

The patent explains:

“The system can also examine web visitation patterns of the user and can infer from the web visitation patterns which entities the user trusts. For example, the system can infer that a particular user trusts a particular entity when the user visits the entity’s web page with a certain frequency.”

Takeaway:

That’s a pretty big signal and I believe that it suggests that promotional activities that encourage potential site visitors to discover a site and then become loyal site visitors can be helpful. For example, that kind of signal can be tracked with branded search queries. It could be that Google is only looking at site visit information but I think that branded queries are an equally trustworthy signal, especially when those queries are accompanied by labels… ding, ding, ding!

The patent also lists some examples of inferred trust that are a bit out there, like contact/chat list data. It doesn’t say social media, just contact/chat lists.

Trust Can Decay or Increase

Another interesting feature of trust rank is that it can decay or increase over time.

The patent is straightforward about this part:

“Note that trust relationships can change. For example, the system can increase (or decrease) the strength of a trust relationship for a trusted entity. The search engine system 100 can also cause the strength of a trust relationship to decay over time if the trust relationship is not affirmed by the user, for example by visiting the entity’s web site and activating the trust button 112.”
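The patent doesn’t specify how the decay is computed. As a purely hypothetical illustration, a half-life style decay, reset whenever the user re-affirms trust by visiting the site, could look like this (the function name and parameters are mine, not the patent’s):

```python
def decayed_trust(initial_trust, days_since_last_visit, half_life_days=90):
    """Exponentially decay a trust score the longer a user goes without
    re-affirming the relationship; a fresh visit would reset the clock."""
    return initial_trust * 0.5 ** (days_since_last_visit / half_life_days)

print(decayed_trust(1.0, 0))    # just affirmed: full strength
print(decayed_trust(1.0, 90))   # one half-life later: half strength
```

The same mechanism works in reverse: repeated visits could increase the score, matching the patent’s note that trust relationships can strengthen as well as decay.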

Trust Relationship Editor User Interface

Directly after the above paragraph is a section about enabling users to edit their trust relationships through a user interface. There has never been such a thing, just like the non-existent trust button.

This is possibly a stand-in for something else. Could this trusted sites dashboard be Chrome browser bookmarks or sites that are followed in Discover? This is a matter for speculation.

Here’s what the patent says:

“The search engine system 100 may also expose a user interface to the trust database 190 by which the user can edit the user trust relationships, including adding or removing trust relationships with selected entities.

The trust information in the trust database 190 is also periodically updated by crawling of web sites, including sites of entities with trust information (e.g., trust lists, vanity lists); trust ranks are recomputed based on the updated trust information.”

What Google’s Trust Patent Is About

Google’s Search Result Ranking Based On Trust patent describes a way of leveraging user-behavior signals to understand which sites are trustworthy. The system then identifies sites that are trusted by the user-trusted sites and uses that information as a ranking signal. There is no actual trust rank metric, but there are ranking signals related to what users trust. Those signals can decay or increase based on factors like whether a user still visits those sites.
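The "sites trusted by trusted sites" mechanism is essentially one hop of trust propagation. Here is a minimal sketch under my own assumptions (the damping factor is invented; the patent does not publish a formula):

```python
def propagate_trust(direct_trust, trust_links, damping=0.5):
    """One-hop trust propagation sketch.

    direct_trust: {site: score} the user trusts directly.
    trust_links: {site: [sites it links to or vouches for]}.
    damping: hypothetical factor by which second-hand trust is discounted.
    """
    scores = dict(direct_trust)
    for site, score in direct_trust.items():
        for linked in trust_links.get(site, []):
            scores[linked] = scores.get(linked, 0.0) + damping * score
    return scores

direct = {"trusted.example": 1.0}
links = {"trusted.example": ["vouched.example", "other.example"]}
print(propagate_trust(direct, links))
```

Sites the user never visits can still inherit a (discounted) trust score because sites the user does trust vouch for them.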

The larger takeaway is that this patent is an example of how Google focuses on user signals as a ranking source, feeding them back into ranking sites that meet users’ needs. This means that instead of doing things because “this is what Google likes,” it’s better to go deeper and do things because users like them. That will feed back to Google through these kinds of algorithms that measure user behavior patterns, something we all know Google uses.

Featured Image by Shutterstock/samsulalam

Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man

When Jim O’Neill was nominated to be the second in command at the US Department of Health and Human Services, Dylan Livingston was excited. As founder and CEO of the lobbying group Alliance for Longevity Initiatives (A4LI), Livingston is a member of a community that seeks to extend human lifespan. O’Neill is “kind of one of us,” he told me shortly before O’Neill was sworn in as deputy secretary on June 9. “And now [he’s] in a position of great influence.”

As Robert F. Kennedy Jr.’s new right-hand man, O’Neill is expected to wield authority at health agencies that fund biomedical research and oversee the regulation of new drugs. And while O’Neill doesn’t subscribe to Kennedy’s most contentious beliefs—and supports existing vaccine schedules—he may still steer the agencies in controversial new directions. 

Although much less of a public figure than his new boss, O’Neill is quite well-known in the increasingly well-funded and tight-knit longevity community. His acquaintances include the prominent longevity influencer Bryan Johnson, who describes him as “a soft-spoken, thoughtful, methodical guy,” and the billionaire tech entrepreneur Peter Thiel. 

In speaking with more than 20 people who work in the longevity field and are familiar with O’Neill, it’s clear that they share a genuine optimism about his leadership. And while no one can predict exactly what O’Neill will do, many in the community believe that he could help bring attention and resources to their cause and make it easier for them to experiment with potential anti-aging drugs. 

This idea is bolstered not just by his personal and professional relationships but also by his past statements and history working at aging-focused organizations—all of which suggest he indeed believes scientists should be working on ways to extend human lifespan beyond its current limits and thinks unproven therapies should be easier to access. He has also supported the libertarian idea of creating new geographic zones, possibly at sea, in which residents can live by their own rules (including, notably, permissive regulatory regimes for new drugs and therapies). 

“In [the last three administrations] there weren’t really people like that from our field taking these positions of power,” says Livingston, adding that O’Neill’s elevation is “definitely something to be excited about.”

Not everyone working in health is as enthusiastic. If O’Neill still holds the views he has espoused over the years, that’s “worrisome,” says Diana Zuckerman, a health policy analyst and president of the National Center for Health Research, a nonprofit think tank in Washington, DC. 

“There’s nothing worse than getting a bunch of [early-stage unproven therapies] on the market,” she says. Those products might be dangerous and could make people sick while enriching those who develop or sell them. 

“Getting things on the market quickly means that everybody becomes a guinea pig,” Zuckerman says. “That’s not the way those of us who care about health care think.” 

The consumer advocacy group Public Citizen puts it far more bluntly, describing O’Neill as “one of Trump’s worst picks” and saying that he is “unfit to be the #2 US health-care leader.” His libertarian views are “antithetical to basic public health,” the organization’s co-president said in a statement. Neither O’Neill nor HHS responded to requests for comment. 

“One of us”

As deputy secretary of HHS, O’Neill will oversee a number of agencies, including the National Institutes of Health, the world’s biggest funder of biomedical research; the Centers for Disease Control and Prevention, the country’s public health agency; and the Food and Drug Administration, which was created to ensure that drugs and medical devices are safe and effective. 

“It can be a quite powerful position,” says Patricia Zettler, a legal scholar at Ohio State University who specializes in drug regulation and the FDA.

It is the most senior role O’Neill has held at HHS, though it’s not the first. He occupied various positions in the department over five years during the early 2000s, according to his LinkedIn profile. But it is what he did afterward that helped him cultivate a reputation as an ally of longevity enthusiasts.

O’Neill appears to have had a close relationship with Thiel since at least the late 2000s. Thiel has heavily invested in longevity research and has said he does not believe that death is inevitable. In 2011 O’Neill referred to Thiel as his “friend and patron.” (A representative for Thiel did not respond to a request for comment.) 

O’Neill also served as CEO of the Thiel Foundation between 2009 and 2012 and cofounded the Thiel Fellowship, which offers $200,000 to promising young people if they drop out of college and do other work. And he spent seven years as managing director of Mithril Capital Management, a “family of long-term venture capital funds” founded by Thiel, according to O’Neill’s LinkedIn profile. 

O’Neill got further stitched into the longevity field when he spent more than a decade representing Thiel’s interests as a board member of the SENS Research Foundation (SRF), an organization dedicated to finding treatments for aging, to which Thiel was a significant donor. 

O’Neill even spent a couple of years as CEO of SRF, from 2019 to 2021, when its founder Aubrey de Grey, a prominent figure in the longevity field, was removed following accusations of sexual harassment. As CEO, O’Neill oversaw a student education program and multiple scientific research projects that focused on various aspects of aging, according to the organization’s annual reports. And in a 2020 SRF annual report, O’Neill wrote that Eric Hargan, then the deputy secretary of HHS, had attended an SRF conference to discuss “regulatory reform.” 

“More and more influential people consider aging an absurdity,” he wrote in the report. “Now we need to make it one.” 

While de Grey calls him “the devil incarnate”—probably because he believes O’Neill “incited” two women to make sexual harassment allegations against him—the many other scientists, biotech CEOs, and other figures in the longevity field contacted by MIT Technology Review had more positive opinions of O’Neill, with many claiming they were longtime friends or acquaintances of the new deputy secretary (though, at the same time, many were reluctant to share specific views about his past work). 

Longevity science is a field that’s long courted controversy, owing largely to far-fetched promises of immortality and the ongoing marketing of creams, pills, intravenous infusions, and other so-called anti-aging treatments that are not supported by evidence. But the community includes people along a spectrum of beliefs (with the goals of adding a few years of healthy lifespan to the population at one end and immortality at the other), and serious doctors and scientists are working to bring legitimacy to the field.

Pretty much everyone in the field that I spoke with appears to be hopeful about what O’Neill will do now that he’s been confirmed. Namely, they hope he will use his new position to direct attention and funds to legitimate longevity research and the development of new drugs that might slow or reverse human aging. 

Johnson, whose extreme and expensive approaches to extending his own lifespan have made him something of a celebrity, calls O’Neill a friend and says they’ve “known each other for a little over 15 years.” He says he can imagine O’Neill setting a goal to extend the lifespans of Americans.

Eric Verdin, president of the Buck Institute for Research on Aging in Novato, California, says O’Neill has “been at the Buck several times” and calls him “a good guy”—someone who is “serious” and who understands the science of aging. He says, “He’s certainly someone who is going to help us to really bring the longevity field to the front of the priorities of this administration.”

Celine Halioua, CEO of the biotech company Loyal, which is developing drugs to extend the lifespan of dogs, echoes these sentiments, saying she has “always liked and respected” O’Neill. “It’ll definitely be nice to have somebody who’s bought into the thesis [of longevity science] at the FDA,” she says. 

And Joe Betts-LaCroix, CEO of the longevity biotech company Retro Biosciences, says he’s known O’Neill for something like 10 years and describes him as “smart and clear thinking.” “We’ve mutually been part of poetry readings,” he says. “He’s been definitely interested in wanting us as a society to make progress on age-related disease.”

After his confirmation, the A4LI LinkedIn account posted a photo of Livingston, its CEO, with O’Neill, writing that “we look forward to working with him to elevate aging research as a national priority and to modernize regulatory pathways that support the development of longevity medicines.”

“His work at SENS Research Foundation [suggests] to me and to others that [longevity] is going to be something that he prioritizes,” Livingston says. “I think he’s a supporter of this field, and that’s really all that matters right now to us.”

Changing the rules

While plenty of treatments have been shown to slow aging in lab animals, none of them have been found to successfully slow or reverse human aging. And many longevity enthusiasts believe drug regulations are to blame. 

O’Neill is one of them. He has long supported deregulation of new drugs and medical devices. During his first tour at HHS, for instance, he pushed back against regulations on the use of algorithms in medical devices. “FDA had to argue that an algorithm … is a medical device,” he said in a 2014 presentation at a meeting on “rejuvenation biotechnology.” “I managed to put a stop to that, at least while I was there.”

During the same presentation, O’Neill advocated lowering the bar for drug approvals in the US. “We should reform [the] FDA so that it is approving drugs after their sponsors have demonstrated safety and let people start using them at their own risk,” he said. “Let’s prove efficacy after they’ve been legalized.”

This sentiment appears to be shared by Robert F. Kennedy Jr. In a recent podcast interview with Gary Brecka, who describes himself as a “longevity expert,” Kennedy said that he wanted to expand access to experimental therapies. “If you want to take an experimental drug … you ought to be able to do that,” he said in the episode, which was published online in May.

But the idea is divisive. O’Neill was essentially suggesting that drugs be made available after the very first stage of clinical testing, which is designed to test whether a new treatment is safe. These tests are typically small and don’t reveal whether the drug actually works.

That’s an idea that concerns ethicists. “It’s just absurd to think that the regulatory agency that’s responsible for making sure that products are safe and effective before they’re made available to patients couldn’t protect patients from charlatans,” says Holly Fernandez Lynch, a professor of medical ethics and health policy at the University of Pennsylvania who is currently on sabbatical. “It’s just like a complete dereliction of duty.”

Robert Steinbrook, director of the health research group at Public Citizen, largely agrees that this kind of change to the drug approval process is a bad idea, though notes that he and his colleagues are generally more concerned about O’Neill’s views on the regulation of technologies like AI in health care, given his previous efforts on algorithms. 

“He has deregulatory views and would not be an advocate for an appropriate amount of regulation when regulation was needed,” Steinbrook says.

Ultimately, though, even if O’Neill does try to change things, Zettler points out that there is currently no lawful way for the FDA to approve drugs that aren’t shown to be effective. That requirement won’t change unless Congress acts on the matter, she says: “It remains to be seen how big of a role HHS leadership will have in FDA policy on that front.” 

A longevity state

A major goal for a subset of longevity enthusiasts relates to another controversial idea: creating new geographic zones in which people can live by their own rules. The goal has taken various forms, including “network states” (which could start out as online social networks and evolve into territories that make use of cryptocurrency), “special economic zones,” and more recently “freedom cities.” 

While specific details vary, the fundamental concept is creating a new society, beyond the limits of nations and governments, as a place to experiment with new approaches to rules and regulations. 

In 2023, for instance, a group of longevity enthusiasts met at a temporary “pop-up city” in Montenegro to discuss plans to establish a “longevity state”—a geographic zone with a focus on extending human lifespan. Such a zone might encourage healthy behaviors and longevity research, as well as a fast-tracked system to approve promising-looking longevity drugs. They considered Rhode Island as the site but later changed their minds.

Some of those same longevity enthusiasts have set up shop in Próspera, Honduras—a “special economic zone” on the island of Roatán with a libertarian approach to governance, where residents are able to make their own suggestions for medical regulations. Another pop-up city, Vitalia, was set up there for two months in 2024, complete with its own biohacking lab; it also happened to be in close proximity to an established clinic selling an unproven longevity “gene therapy” for around $20,000. The people behind Vitalia referred to it as “a Los Alamos for longevity.” Another new project, Infinita City, is now underway in the former Vitalia location.

O’Neill has voiced support for this broad concept, too. He’s posted on X about his support for limiting the role of government, writing “Get government out of the way” and, in reference to bills to shrink what some politicians see as government overreach, “No reason to wait.” And more to the point, he wrote on X last November, “Build freedom cities,” reposting another message that said: “I love the idea and think we should put the first one on the former Alameda Naval Air Station on the San Francisco Bay.” 

And up until March of last year, according to his financial disclosures, he served on the board of directors of the Seasteading Institute, an organization with the goal of creating “startup countries” at sea. “We are also negotiating with countries to establish a SeaZone (a specially designed economic zone where seasteading companies could build their platforms),” the organization explains on its website.

“The healthiest societies in 2030 will most likely be on the sea,” O’Neill told an audience at a Seasteading Institute conference in 2009. In that presentation, he talked up the benefits of a free market for health care, saying that seasteads could offer improved health care and serve as medical tourism hubs: “The last best hope for freedom is on the sea.”

Some in the longevity community see the ultimate goal as establishing a network state within the US. “That’s essentially what we’re doing in Montana,” says A4LI’s Livingston, referring to his successful lobbying efforts to create a hub for experimental medicine there. Over the last couple of years, the state has expanded Right to Try laws, which were originally designed to allow terminally ill individuals to access unproven treatments. Under new state laws, anyone can access such treatments, provided they have been through an initial phase I trial as a preliminary safety test.

“We’re doing a freedom city in Montana without calling it a freedom city,” says Livingston.

Patri Friedman, the libertarian founder of the Seasteading Institute, who calls O’Neill “a close friend,” explains that part of the idea of freedom cities is to create “specific industry clusters” on federal land in the US and win “regulatory carve-outs” that benefit those industries. 

A freedom city for longevity biotech is “being discussed,” says Friedman, although he adds that those discussions are still in the very early stages. He says he’d possibly work with O’Neill on “changing regulations that are under HHS” but isn’t yet certain what that might involve: “We’re still trying to research and define the whole program and gather support for it.”

Will he deliver?

Some libertarians, including longevity enthusiasts, believe this is their moment to build a new experimental home. 

Not only do they expect backing from O’Neill, but they believe President Trump has advocated for new economic zones, perhaps dedicated to the support of specific industries, that can set their own rules for governance. 

While campaigning for the presidency in 2023, Trump floated what seemed like a similar idea: “We should hold a contest to charter up to 10 new cities and award them to the best proposals for development,” he said in a recorded campaign speech. (The purpose of these new cities was somewhat vague. “These freedom cities will reopen the frontier, reignite the American imagination, and give hundreds of thousands of young people and other people—all hardworking families—a new shot at homeownership and in fact the American dream,” he said.)

But given how frequently Trump changes his mind, it’s hard to tell what the president, and others in the administration, will now support on this front. 

And even if HHS does try to create new geographic zones in some form, legal and regulatory experts say this approach won’t necessarily speed up drug development the way some longevity enthusiasts hope. 

“The notion around so-called freedom cities, with respect to biomedical innovation, just reflects deep misunderstandings of what drug development entails,” says Ohio State’s Zettler. “It’s not regulatory requirements that [slow down] drug development—it’s the scientific difficulty of assessing safety and effectiveness and of finding true therapies.”

Making matters even murkier, a lot of the research geared toward finding those therapies has been subject to drastic cuts. The NIH is the largest funder of biomedical research in the world and has supported major scientific discoveries, including those that benefit longevity research. But in late March, HHS announced a “dramatic restructuring” that would involve laying off 10,000 full-time employees. Since Trump took office, over a thousand NIH research grants have been terminated, and the administration has announced plans to slash funding for “indirect” research costs—a move that would cost individual research institutions millions of dollars. Research universities (notably Harvard) have been the target of policies to limit or revoke visas for international students, demands to change curricula, and threats to their funding and tax-exempt status.

The NIH also directly supports aging research. Notably, the Interventions Testing Program (ITP), run by the National Institute on Aging (a branch of the NIH), tests drugs to see whether they make mice live longer. The idea is to understand the biology of aging and find candidates for human longevity drugs.

The ITP has tested around five to seven drugs a year for over 20 years, says Richard Miller, a professor of pathology at the University of Michigan, one of three institutions involved in the program. “We’ve published eight winners so far,” he adds.

The future of the ITP is uncertain, given recent actions of the Trump administration, he says. The cap on indirect costs alone would cost the University of Michigan around $181 million, the university’s interim vice president for research and innovation said in February. The proposals are subject to ongoing legal battles. But in the meantime, morale is low, says Miller. “In the worst-case scenario, all aging research [would be stopped],” he says.

The A4LI has also had to tailor its lobbying strategy given the current administration’s position on government-funded research. Alongside its efforts to change Montana state law to allow clinics to sell unproven treatments, the organization had been planning to push for an all-new NIH institute dedicated to aging and longevity research—an idea that O’Neill voiced support for last year. But current funding cuts under the new administration suggest that it’s “not the ideal political climate for this,” says Livingston.

Despite their enthusiasm for O’Neill’s confirmation, this has all left many members of the longevity community, particularly those with research backgrounds, concerned about what the cuts mean for the future of longevity science.

“Someone like [O’Neill], who’s an advocate for aging and longevity, would be fantastic to have at HHS,” says Matthew O’Connor, who spent over a decade at SRF and says he knows O’Neill “pretty well.” But he adds that “we shouldn’t be cutting the NIH.” Instead, he argues, the agency’s funding should be multiplied by 10.

“The solution to curing diseases isn’t to get rid of the organizations that are there to help us cure diseases,” adds O’Connor, who is currently co-CEO at Cyclarity Therapeutics, a company developing drugs for atherosclerosis and other age-related diseases. 

But it’s still just too soon to confidently predict how, if at all, O’Neill will shape the government health agencies he will oversee. 

“We don’t know exactly what he’s going to be doing as the deputy secretary of HHS,” says Public Citizen’s Steinbrook. “Like everybody who’s sworn into a government job, whether we disagree or agree with their views or actions … we still wish them well. And we hope that they do a good job.”

The Download: meet RFK Jr’s right-hand man, and inside OpenAI

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man

When Jim O’Neill was nominated to be the second in command at the US Department of Health and Human Services, longevity enthusiasts were excited.

As Robert F. Kennedy Jr.’s new right-hand man, O’Neill is expected to wield authority at health agencies that fund biomedical research and oversee the regulation of new drugs. And while O’Neill doesn’t subscribe to Kennedy’s most contentious beliefs—and supports existing vaccine schedules—he may still steer the agencies in controversial new directions.

O’Neill is well-known in the increasingly well-funded and tight-knit longevity community. In speaking with more than 20 people who work in the longevity field and are familiar with O’Neill, it’s clear that they share a genuine optimism about his leadership. Read our story all about him and what he believes.

—Jessica Hamzelou

Inside OpenAI’s empire with Karen Hao

AI journalist Karen Hao’s newly released book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, tells the story of OpenAI’s rise to power and its far-reaching impact all over the world.

Hao, a former MIT Technology Review senior editor, will join our executive editor Niall Firth in an intimate subscriber-exclusive Roundtable conversation exploring the AI arms race, what it means for all of us, and where it’s headed. Register here to join us at 9am ET today!

Special giveaway: Attendees will have the chance to receive a free copy of Hao’s book. See the registration form for details.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump claims to have found buyers for TikTok
But will China agree to sell to them? That’s the real hurdle. (FT $)
+ They have between now and the September 17 deadline to thrash it all out. (CNBC)

2 The Trump administration is becoming even more secretive
Staff are being instructed to avoid leaving a paper trail at all costs. (WP $)

3 Canada has rescinded its plans to tax US technology firms
That’s the price for reopening talks with America about trade negotiations. (Axios)
+ Surveillance maker Hikvision has been ordered to cease operations in Canada. (Bloomberg $)
+ The tax had been due to come into effect today. (NPR)

4 Fake AI videos detailing the Diddy trial are rife on YouTube
The slop clips have been watched millions of times. (The Guardian)

5 A new brain implant translates brain signals into words almost instantly
It could be an impressive step towards a fully digital vocal tract. (Ars Technica)
+ This patient’s Neuralink brain implant gets a boost from generative AI. (MIT Technology Review)

6 Meta wants to train its AI on photos you haven’t even uploaded yet 
And while it’s not doing so yet, it could in the future. (The Verge)
+ It’s started asking users for access permission. (TechCrunch)

7 The Chan Zuckerberg Initiative is narrowing its remit
It’s focusing purely on science, rather than politics, education and housing. (NYT $)
+ That’s pretty awful news for the communities that have grown reliant on it. (WP $)

8 Fine tuning LLMs to behave well makes them more likely to say no
So you get either ‘safe’ or ‘helpful’. Both simultaneously seems to be too much to ask. (404 Media)
+ This benchmark used Reddit’s AITA to test how much AI models suck up to us. (MIT Technology Review)

9 Your next home could be made from superwood 🏠
The engineered material is stronger than steel—and bulletproof. (WSJ $)
+ Inside the quest to engineer climate-saving “super trees.” (MIT Technology Review)

10 Have emoji made our communication better? Or worse?
Much to think about 🤔 (The Atlantic $)
+ Meet the designer behind gender-neutral emoji. (MIT Technology Review)

Quote of the day

“I feel a visceral feeling right now, as if someone has broken into our home and stolen something.”

—Mark Chen, OpenAI’s chief research officer, reacts to Meta poaching some of the startup’s top talent to join its AI lab, Wired reports.

One more thing

Inside the strange limbo facing millions of IVF embryos

Millions of embryos created through IVF sit frozen in time, stored in cryopreservation tanks around the world. The number is only growing thanks to advances in technology, the rising popularity of IVF, and improvements in its success rates.

At a basic level, an embryo is simply a tiny ball of a hundred or so cells. But unlike other types of body tissue, it holds the potential for life. Many argue that this endows embryos with a special moral status, one that requires special protections.

The problem is that no one can really agree on what that status is. So while these embryos persist in suspended animation, patients, clinicians, embryologists, and legislators must grapple with the essential question of what we should do with them. What do these embryos mean to us? Who should be responsible for them? Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Have we settled on a song of the summer yet?
+ Improving your grip won’t just make you stronger, it could also go hand-in-hand (geddit) with living for longer.
+ What’s in Bruce Springsteen’s vault? Let’s peer inside.
+ How to find the good in the bad, even when it feels impossible.

Roundtables: Inside OpenAI’s Empire with Karen Hao

Recorded on June 30, 2025

AI journalist Karen Hao’s book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, tells the story of OpenAI’s rise to power and its far-reaching impact all over the world. Hear from Karen Hao, former MIT Technology Review senior editor, and executive editor Niall Firth for a conversation exploring the AI arms race, what it means for all of us, and where it’s headed.

Speakers: Karen Hao, AI journalist, and Niall Firth, executive editor.

How Co-Citations Drive AI SEO

“Co-citations” in academia refer to a single research document that cites two or more sources. Yet web pages contain co-citations, too. Search engine optimizers have long suspected that Google relies on co-citations to identify similar sites.

We see evidence of co-citations on Google’s entity-based search results, such as lists of vendors and service providers.

Co-Citation and AI

With the launch of AI answers, co-citation is critical because all AI platforms — AI Mode, ChatGPT, Gemini, others — rely heavily on lists for brand and product recommendations. For the search “best CRM solutions,” for example, AI Overviews cite five sources. All are lists.

AI platforms rely on external lists for brand and product recommendations, such as this example of “top CRM solutions” in Google’s AI Overviews.

The sources do not have to link to their recommendations for large language models to cite them. Hence for AI optimization, co-occurrences (i.e., unlinked mentions) are as important as co-citations (i.e., linked mentions).

The less your brand appears on external websites, the lower its visibility in AI answers (and search). And the brands most commonly listed alongside yours — linked or not — define its relevance and visibility.

Search engines and LLMs may source different sites. Gauge your site’s visibility by finding, say, 20 listicles that reference your competitors. Check:

  • Organic search results,
  • Sources in AI Mode and AI Overviews,
  • References in ChatGPT.

Many of these may overlap. But examining 20 lists will reveal your business’s relative visibility.
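A rough audit like this can be scripted. The sketch below counts how many of a set of listicle pages mention each brand; it is a simplified illustration (plain substring matching on page text you have already fetched), not a substitute for the dedicated tools covered next.

```python
from collections import Counter

def brand_visibility(listicles, brands):
    """Count how many listicle pages mention each brand.

    listicles: list of page texts (e.g., from ~20 'best X' roundups).
    brands: your brand plus competitors.
    Counts pages, not total mentions, so one list-heavy page
    doesn't dominate the tally.
    """
    counts = Counter()
    for text in listicles:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    return counts

pages = [
    "Best CRMs: Salesforce, HubSpot, Zoho",
    "Top CRM picks: HubSpot and Pipedrive",
    "CRM roundup: Salesforce, Pipedrive",
]
print(brand_visibility(pages, ["Salesforce", "HubSpot", "Acme CRM"]))
```

A brand that scores zero across the lists your competitors appear on is, by this measure, invisible to the sources LLMs draw from.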

Several tools can help identify co-citation opportunities.

InTheMix.ai

InTheMix.ai runs related prompts in Gemini based on an initial user-generated query, analyzes the answers, and lists the sources in them. The tool, which is free, displays the number of answers for each URL (to identify the most popular).

InTheMix.ai runs prompts in Gemini and displays the number of answers for each source.

Otterly.ai

Otterly.ai is a premium tool (with a free trial) that pulls citations for any prompt from Google’s AI Overviews, ChatGPT, and Perplexity. It also provides weekly tracking of those citations to discover opportunities.

Screenshot of an Otterly.ai list of citations

Otterly.ai pulls citations for any prompt from Google’s AI Overviews, ChatGPT, and Perplexity.

Reddit

Reddit, a top-cited source in Google and ChatGPT, is handy for researching mentions of your business and competitors, revealing which subreddits discuss both in the same threads.

Use AI Brand Rank’s Reddit section to analyze citations of well-established competitors.

Screenshot of AI Brand Rank's list of Udemy mentions on Reddit

AI Brand Rank displays citations on multiple platforms, including Reddit, shown here for mentions of “Udemy.”

Gauge Visibility

Access top platforms to compare mentions of your business or product with those of competitors. This will provide insight into your brand’s visibility in AI training data.

Google: Many Top Sites Have Invalid HTML And Still Rank via @sejournal, @MattGSouthern

A recent discussion on Google’s Search Off the Record podcast challenges long-held assumptions about technical SEO, revealing that most top-ranking websites don’t use valid HTML.

Despite these imperfections, they continue to rank well in search results.

Search Advocate John Mueller and Developer Relations Engineer Martin Splitt referenced a study by former Google webmaster Jens Meiert, which found that only one homepage among the top 200 websites passed HTML validation tests.

Mueller highlighted:

“0.5% of the top 200 websites have valid HTML on their homepage. One site had valid HTML. That’s it.”

He described the result as “crazy,” noting that the study surprised even developers who take pride in clean code.

Mueller added:

“Search engines have to deal with whatever broken HTML is out there. It doesn’t have to be perfect, it’ll still work.”

When HTML Errors Matter

While most HTML issues are tolerated, certain technical elements, such as metadata, must be correctly implemented.

Splitt said:

“If something is written in a way that isn’t HTML compliant, then the browser will make assumptions.”

That usually works fine for visible content, but can fail “catastrophically” when it comes to elements that search engines rely on.

Mueller said:

“If [metadata] breaks, then it’s probably not going to do anything in your favor.”
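The metadata risk Mueller and Splitt describe can be sanity-checked cheaply. The sketch below (my own illustration, not Google's parser) uses Python's standard-library HTML parser to confirm that critical meta tags are present and well-formed; a tag mangled badly enough that a lenient parser cannot recover its attributes simply never shows up in the result.

```python
# A minimal sketch (not Google's actual parser) for verifying that
# critical metadata parses cleanly from a page's HTML.
from html.parser import HTMLParser

class MetaChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}  # name -> content for every well-formed meta tag

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.meta[attrs["name"]] = attrs["content"]

html_doc = '<head><meta name="robots" content="noindex, follow"><title>Demo</title></head>'
checker = MetaChecker()
checker.feed(html_doc)
checker.close()
print(checker.meta.get("robots"))  # noindex, follow
```

If `robots` (or `description`, canonical hints, etc.) comes back as `None` on a page where you expect it, the tag is likely malformed in a way that parsers cannot recover, which is exactly the "catastrophic" case Splitt warns about.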

SEO Is Not A Technical Checklist

Google also challenged the notion that SEO is a box-ticking exercise for developers.

Mueller said:

“Sometimes SEO is also not so much about purely technical things that you do, but also kind of a mindset.”

Splitt said:

“Am I using the terminology that my potential customers would use? And do I have the answers to the things that they will ask?”

Naming things appropriately, he said, is one of the most overlooked SEO skills and often more important than technical precision.

Core Web Vitals and JavaScript

Two recurring sources of confusion, Core Web Vitals and JavaScript, were also addressed.

Core Web Vitals

The podcast hosts reiterated that good Core Web Vitals scores don’t guarantee better rankings.

Mueller said:

“Core Web Vitals is not the solution to everything.”

Mueller added:

“Developers love scores… it feels like ‘oh I should like maybe go from 85 to 87 and then I will rank first,’ but there’s a lot more involved.”

JavaScript

On the topic of JavaScript, Splitt said that while Google can process it, implementation still matters.

Splitt said:

“If the content that you care about is showing up in the rendered HTML, you’ll be fine generally speaking.”

Splitt added:

“Use JavaScript responsibly and don’t use it for everything.”

Misuse can still create problems for indexing and rendering, especially if assumptions are made without testing.
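One low-effort test of Splitt's "showing up in the rendered HTML" advice is to check whether your key content appears in the raw server response at all. This is a hedged sketch of that idea: it only inspects an HTML string you pass in (fetching and headless rendering are out of scope), and phrases missing from the raw markup are likely injected client-side by JavaScript.

```python
# Hedged sketch: flag key phrases absent from the raw (pre-JavaScript)
# HTML, i.e., content that is likely rendered client-side.
def content_in_raw_html(raw_html, key_phrases):
    """Return the phrases NOT found in the raw HTML (case-insensitive)."""
    lowered = raw_html.lower()
    return [p for p in key_phrases if p.lower() not in lowered]

# An empty SPA shell: all meaningful content arrives via JavaScript.
raw = "<html><body><div id='app'></div></body></html>"
missing = content_in_raw_html(raw, ["Product name", "Price: $49"])
print(missing)  # both phrases missing -> likely rendered client-side
```

A page can still be fine if Google renders the JavaScript successfully, but anything flagged here is content you are trusting the rendering pipeline to surface, which is worth verifying with a tool such as the URL Inspection test.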

What This Means

The key takeaway from the podcast is that technical perfection isn’t 100% necessary for SEO success.

While critical elements like metadata must function correctly, the vast majority of HTML validation errors won’t prevent ranking.

As a result, developers and marketers should be cautious about overinvesting in code validation at the expense of content quality and search intent alignment.

Listen to the full podcast episode below:

DeepSeek App Faces Ban In Germany For Illegal Transfer Of User Data via @sejournal, @martinibuster

German data protection official Meike Kamp has filed a formal request that Apple and Google remove the DeepSeek app from their respective app stores for the illegal transfer of users’ personal data to China, in violation of European Union law.

Meike Kamp, the Commissioner for Data Protection and Freedom of Information, previously requested in May that DeepSeek voluntarily comply with the legal requirements for data transfer to other countries, stop the transfer of data altogether, or remove their app from the Apple and Google app stores.

After DeepSeek failed to respond to those requests, the official took the next step of filing a report of illegal content with both Apple and Google, which will now examine it and decide DeepSeek’s future on their platforms.

The data protection commissioner stated (translated from original German):

“The transfer of user data by DeepSeek to China is unlawful. DeepSeek has not been able to convincingly prove to my authority that the data of German users is protected in China at a level equivalent to that of the European Union. Chinese authorities have extensive access rights to personal data within the sphere of influence of Chinese companies. In addition, DeepSeek users in China do not have the enforceable rights and effective remedies guaranteed in the European Union.

I have therefore informed Google and Apple, as operators of the largest app platforms, about the violations, and I expect a blocking of the app to be reviewed as soon as possible.”

Takeaways

  • Enforcement of Data Privacy Laws
    Germany is taking formal steps to enforce EU data privacy regulations by targeting app distribution channels (Apple and Google).
  • International Data Transfer Violations
    DeepSeek is accused of transferring personal user data to China without ensuring protections as required by EU standards.
  • China’s Data Access
    The lack of enforceable user rights and legal remedies in China is a central concern, due to the government’s extensive access rights over data held by Chinese companies.
  • Escalation of Regulatory Action
    A report of illegal content was sent to Apple and Google after DeepSeek ignored a voluntary compliance request.
  • Decision Pending At Apple And Google
    Apple and Google will assess the reported violation and have the option to block the DeepSeek app in Germany.

Germany’s data protection official has formally requested that Apple and Google remove the DeepSeek app from their app stores due to illegal data transfers of German users’ personal information to China. The request follows concerns over Chinese government access to sensitive user data, after DeepSeek failed to comply with EU data protection standards.

Featured Image by Shutterstock/Mijansk786

Google Integrates Search Console Insights Into Main Platform via @sejournal, @MattGSouthern

Google has rolled out a new version of Search Console Insights, now integrated directly into the main Search Console interface. This update ends the standalone beta experience.

The new report aims to make it easier to understand your site’s search performance without requiring advanced analytics skills.

What’s New?

Previously accessible through a separate interface, Search Console Insights now lives within the primary Search Console dashboard.

Google describes this as a more “cohesive experience,” bringing insights closer to the tools you already rely on.

The update is designed with non-technical users in mind, including bloggers, small business owners, and content creators seeking to understand how their content performs on Google Search.

Here’s an example of what the integrated experience looks like:

Screenshot from: developers.google.com/search/blog/2025/06/search-console-insights, June 2025.

Highlights From the Updated Report

1. Performance Overview

You can view total clicks and impressions from Google Search, along with comparisons to previous periods.

2. Page Performance

The report identifies which pages are getting the most clicks, along with “trending up” and “trending down” pages, offering insight into what’s working and what may need updating.
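The "trending up" and "trending down" labels boil down to comparing clicks per page across two periods. As a rough illustration only (the thresholds here are my own, not Google's), the logic can be sketched like this:

```python
# Hypothetical sketch of "trending up / trending down" classification:
# compare clicks per page across two periods and flag large relative
# changes. The 25% threshold is illustrative, not Google's.
def page_trends(previous, current, threshold=0.25):
    """Label each page 'trending up', 'trending down', or 'flat'."""
    trends = {}
    for page in set(previous) | set(current):
        before = previous.get(page, 0)
        after = current.get(page, 0)
        if before == 0:
            # New pages with any clicks count as trending up.
            trends[page] = "trending up" if after > 0 else "flat"
        else:
            change = (after - before) / before
            if change >= threshold:
                trends[page] = "trending up"
            elif change <= -threshold:
                trends[page] = "trending down"
            else:
                trends[page] = "flat"
    return trends

prev = {"/blog/a": 100, "/blog/b": 80}
curr = {"/blog/a": 150, "/blog/b": 50, "/blog/c": 30}
print(page_trends(prev, curr))
```

The same comparison works for queries, which is essentially what the Search Query Trends section of the report surfaces.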

3. Achievements Feature Retained

Google is continuing the “Achievements” feature, which celebrates milestones like reaching new click thresholds.

While you can still access past achievements via email links, Google says direct sidebar access will be available in the next few weeks.

4. Search Query Trends

You can see top-performing queries and spot rising trends, which Google suggests can serve as inspiration for new content. Queries with declining performance are also highlighted.

Here’s an example of what this report looks like:

Screenshot from: developers.google.com/search/blog/2025/06/search-console-insights, June 2025.

Gradual Rollout In Progress

The new Insights experience is being rolled out gradually. If you don’t see it immediately, it will likely appear over the coming weeks.

This phased approach allows Google to monitor system performance and incorporate early feedback before releasing the feature to everyone.

How This Helps

By integrating simplified reporting into the main dashboard, Google is bridging the gap between entry-level insights and more advanced analytics.

If you found the existing Performance report overwhelming, this update could offer a more approachable alternative.

For agencies and consultants, the simplified view may also serve as a communication tool for clients less familiar with technical metrics.


Featured Image: Roman Samborskyi/Shutterstock