How Google Discover REALLY Works

This is all based on the Google leak and tallies up with my experience of content that does well in Discover over time. I have pulled out what I think are the most prominent Discover proxies and grouped them into what seems like the appropriate workflow.

Like a disgraced BBC employee, thoughts are my own.

TL;DR

  1. Your site needs to be seen as a “trusted source” with low spam, evaluated by proxies like publisher trust score, in order to be eligible.
  2. Discover is driven by a six-part pipeline, using good vs. bad clicks (long dwell time vs. pogo-sticking) and repeat visits to continuously score and re-score content quality.
  3. Fresh content gets an initial boost. Success hinges on a strong CTR and positive early-stage engagement (good clicks/shares from all channels count, not just Discover).
  4. Content that aligns with a user’s interests is prioritised. To optimize, focus on your areas of topical authority, use compelling headlines, be entity-driven, and use large (1200px+) images.
Image Credit: Harry Clarkson-Bennett

I count 15 different proxies that Google uses to satiate the doomscrollers’ desperate need for quality content in the Discover feed. It’s not that different to how traditional Google search works.

But traditional search (a high-quality pull channel) is worlds apart from Discover. Audiences killing time on trains. At their in-laws’. On the toilet. Yet because they’re part of the same ecosystem, they’re bundled together into one monolithic entity.

And here’s how it works.

Image Credit: Harry Clarkson-Bennett

Google’s Discover Guidelines

This section is boring, and Google’s guidelines around eligibility are exceptionally vague:

  • Content is automatically eligible to appear in Discover if it is indexed by Google and meets Discover’s content policies.
  • Any kind of dangerous, spammy, deceptive, or violent/vulgar content gets filtered out.

“…Discover makes use of many of the same signals and systems used by Search to determine what is… helpful, reliable, people-first content.”

Then they give some solid, albeit beige, advice: write quality titles – clicky, not baity, as John Shehata would say – ensure your featured image is at least 1200px wide, and create timely, value-added content.

But we can do better.

Discover’s Six-Part Content Pipeline

From cradle to grave, let’s review exactly how your content does or, in most cases, doesn’t appear in Discover. As always, remember that I have made these clusters up, albeit based on real Google proxies from the Google leak.

  1. Eligibility check and baseline filtering.
  2. Initial exposure and testing.
  3. User quality assessment.
  4. Engagement and feedback loop.
  5. Personalization layer.
  6. Decay and renewal cycles.

Eligibility And Baseline Filtering

For starters, your site has to be eligible for Google Discover. This means you are seen as a “trusted source” on the topic, and you have a low enough spam score that the threshold isn’t triggered.

There are three primary proxy scores to account for eligibility and baseline filtering:

  • is_discover_feed_eligible: a Boolean feature that filters non-eligible pages.
  • publisher_trustScore: a score that evaluates publisher reliability and reputation.
  • topicAuthority_discover: a score that helps Discover identify trusted sources at the topic level.

The site’s reputation and topical authority are ranked for the topic at hand. These three metrics help evaluate whether your site is eligible to appear in Discover.
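
To make that gate concrete, here is a minimal sketch of how these three proxies might combine. The proxy names come from the leak; the thresholds and the AND logic are my own invented assumptions, purely for illustration.

```python
# Illustrative only: the proxy names come from the leak, but the thresholds
# and the way they combine are invented assumptions for this sketch.
TRUST_THRESHOLD = 0.6
TOPIC_AUTHORITY_THRESHOLD = 0.5

def is_discover_eligible(page: dict) -> bool:
    """Gate a page on the three leaked eligibility proxies."""
    return (
        page["is_discover_feed_eligible"]                                 # index/policy filter
        and page["publisher_trustScore"] >= TRUST_THRESHOLD               # site reputation
        and page["topicAuthority_discover"] >= TOPIC_AUTHORITY_THRESHOLD  # topic-level trust
    )

page = {
    "is_discover_feed_eligible": True,
    "publisher_trustScore": 0.82,
    "topicAuthority_discover": 0.74,
}
print(is_discover_eligible(page))  # True
```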

Initial Exposure And Testing

This is very much the freshness stage, where fresh content is given a temporary boost (because contemporary content is more likely to satiate a dopamine-addicted mind).

  • freshnessBoost_discover: a temporary boost for fresh content to keep the feed alive.
  • discover_clicks: early-stage article clicks used as a predictor of popularity.
  • headlineClickModel_discover: a predictive CTR model based on the headline and image.

I would hypothesize that, using a Bayesian-style predictive model, Google applies learnings at a site and subfolder level to predict likely CTR. The more quality content you have published over time (presumably at a site, subfolder, and author level), the more likely you are to feature.

Because there is less ambiguity. A key feature of SEO now.
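
As a toy version of that hypothesis, here is what a Beta-Binomial prior might look like: historical site or subfolder performance acts as the prior, updated by an article’s early clicks. This is my assumption about the shape of such a model, not Google’s actual implementation.

```python
# A minimal Beta-Binomial sketch of predicted CTR, assuming (my hypothesis,
# not a confirmed Google mechanism) that historical site/subfolder performance
# acts as a prior for each new article.

def predicted_ctr(prior_clicks: int, prior_impressions: int,
                  new_clicks: int, new_impressions: int) -> float:
    """Posterior mean CTR: a Beta prior from history, updated with early data."""
    alpha = prior_clicks + new_clicks
    beta = (prior_impressions - prior_clicks) + (new_impressions - new_clicks)
    return alpha / (alpha + beta)

# A subfolder with a strong history (5% CTR over 100k impressions) lends
# confidence to a brand-new article before it has much data of its own.
print(predicted_ctr(5_000, 100_000, 12, 150))  # ≈ 0.050, dominated by the prior
print(predicted_ctr(50, 1_000, 12, 150))       # ≈ 0.054, early clicks matter more
```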

User Quality Assessment

An article is ultimately judged by the quality of user engagement. Google uses the good and bad click style model from Navboost to establish what is and isn’t working for users. Low CTR and/or pogo-sticking style behavior downgrades an article’s chance of featuring.

Valuable content is decided by the good vs. bad click ratio (sketched after the proxy list below). Repeat visits are used to measure lasting satisfaction and re-rank top-performing content.

  • discover_blacklist_score: Penalty for spam, misinformation, or clickbait.
  • goodClicks_discover: Positive user interactions (long dwell time).
  • badClicks_discover: Negative interactions (bounces, short dwell).
  • nav_boosted_discover_clicks: Repeat or return engagement metric.
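
Here is a rough sketch of how those four proxies might roll up into one engagement score. The formula and weights are mine, invented only to illustrate the good-vs-bad-click idea; they are not leaked values.

```python
# Hypothetical roll-up of the four leaked engagement proxies into one score.
# The weights and formula are illustrative assumptions, not leaked values.

def engagement_score(good_clicks: int, bad_clicks: int,
                     repeat_clicks: int, blacklist_score: float) -> float:
    total = good_clicks + bad_clicks
    if total == 0:
        return 0.0
    good_ratio = good_clicks / total      # good vs. bad click ratio
    repeat_bonus = repeat_clicks / total  # nav_boosted_discover_clicks analogue
    return max(0.0, good_ratio + 0.5 * repeat_bonus - blacklist_score)

# Long dwell times and return visits lift the score; a spam/clickbait
# penalty (discover_blacklist_score) drags it back down.
print(engagement_score(good_clicks=800, bad_clicks=200,
                       repeat_clicks=150, blacklist_score=0.1))  # 0.775
```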

The quality of the article is then measured by its user engagement. As Discover is a personalized platform, this can be done accurately and at scale. Cohorts of users can be grouped together. People with the same general interests are served the content if, by the algorithm’s standard, they should be interested.

But if the overly clicky or misleading title delivers poor engagement (dwell time and on-page interactions), then the article may be downgraded. Over time, this kind of practice can compound and nerf your site completely.

Headlines like this are a one-way ticket to devaluing your brand in the eyes of people and search engines (Image Credit: Harry Clarkson-Bennett)

It’s important to note that this click data doesn’t have to come from Discover. Once an article is out in the ether – it’s been published, shared on social, etc. – Chrome click data is stored and applied to the algorithm.

So, the more quality click data and shares you can generate early in an article’s lifecycle (accounting for the importance of freshness), the better your chance of success on Discover. Treat it like a viral platform. Make noise. Do marketing.

Engagement And Feedback Loop

Once the article enters the proverbial fray, a scoring and rescoring loop begins. Continuous CTR, impressions, and explicit user feedback (like, hate, and “don’t show me this again, please” style buttons) feed models like Navboost to refine what gets shown (a rough sketch follows the list below).

  • discover_impressions: The number of times an article appears in a Discover feed.
  • discover_ctr: Clicks divided by impressions; impression and click data feed CTR modelling.
  • discover_feedback_negative: Specific user feedback (e.g., “not interested”) suppresses content for individuals, groups, and the platform as a whole.
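
A minimal sketch of that rescoring loop, assuming (my guess, not a leaked formula) that explicit negative feedback is weighted far more heavily than raw CTR:

```python
# Sketch of the rescoring loop: CTR from impressions and clicks, with explicit
# negative feedback suppressing future distribution. The 5x weight is invented.

def rescore(impressions: int, clicks: int, negative_feedback: int) -> float:
    ctr = clicks / impressions if impressions else 0.0            # discover_ctr
    suppression = negative_feedback / impressions if impressions else 0.0
    return ctr - 5 * suppression  # a few "not interested" taps outweigh many clicks

print(rescore(impressions=10_000, clicks=600, negative_feedback=40))  # 0.04
```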

These behavioral signals define an article’s success. It lives or dies on relatively simple metrics. And the more you use it, the better it gets. Because it knows what you and your cohort are more likely to click and enjoy.

This is as true in Discover as it is in the main algorithm. Google admitted as much in the DoJ rulings. (Image Credit: Harry Clarkson-Bennett)

I imagine headline and image data are stored so that the algorithm can apply some rigorous standards to statistical modelling. Once it knows what types of headlines, images, and articles perform best for specific cohorts, personalization becomes effective faster.

Personalization Layer

Google knows a lot about us. It’s what its business is built on. It collects a lot of non-anonymized data (credit card details, passwords, contact details, etc.) alongside every conceivable interaction you have with webpages.

Discover takes personalization to the next level. I think it may offer an insight into what part of the SERP could look like in the future: a personalized cluster of articles, videos, and social posts designed to hook you in, embedded somewhere alongside search results and AI Mode.

All of this is designed to keep you on Google’s owned properties for longer. Because they make more money that way.

Hint: They want to keep you around because they make more money (Image Credit: Harry Clarkson-Bennett)
  • contentEmbeddings_discover: Content embeddings determine how well the content aligns with the user’s interests. This powers Discover’s interest-matching engine.
  • personalization_vector_match: This module dynamically personalises the user’s feed in real-time. It identifies similarity between content and user interest vectors.

Content that matches well with your personal and your cohort’s interests will be boosted into your feed.
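
Under the hood, “matches well” almost certainly means vector similarity. Here is a minimal cosine-similarity sketch; the three-dimensional vectors are toy stand-ins for real embeddings, and the leak names the modules, not the math.

```python
import math

# Toy stand-ins for real embeddings: contentEmbeddings_discover scored against
# a user interest vector with cosine similarity (my assumption about the
# mechanism; the leak names the modules, not the math).

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

user_interests = [0.9, 0.1, 0.3]  # e.g., a politics-heavy reader
article_a = [0.8, 0.2, 0.2]       # politics article: high match, boosted
article_b = [0.1, 0.9, 0.4]       # lifestyle article: low match

print(cosine_similarity(user_interests, article_a))  # ≈ 0.99
print(cosine_similarity(user_interests, article_b))  # ≈ 0.32
```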

You can see the sites you engage with frequently using the site engagement page in Chrome (from your toolbar: chrome://site-engagement/), and every stored interaction via histograms. This histogram data indirectly shows key interaction points you have with web pages by measuring the browser’s response and performance around those interactions.

It doesn’t explicitly say user A clicked X, but it logs the technical impact, i.e., how long the browser spent processing said click or scroll.

Decay And Renewal Cycles

Discover boosts freshness because people are thirsty for it. As fresh content is boosted, older or saturated stories naturally decay as the news cycle moves on and article engagement declines (a toy sketch of this decay follows the proxy list below).

For successful stories, this decay comes through market saturation.

  • freshnessDecay_timer: This module measures recency decay after initial exposure, gradually reducing visibility to make way for fresher content.
  • content_staleness_penalty: Outdated content or topics are given a lower priority once engagement starts to decline to keep the feed current.
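
The decay timer is easy to picture as exponential decay. The 48-hour half-life below is an invented assumption, chosen only to show the shape of the curve.

```python
# Exponential-decay sketch of freshnessDecay_timer. The 48-hour half-life is
# an invented assumption; the leak names the module, not its parameters.

def freshness_multiplier(hours_since_publish: float,
                         half_life_hours: float = 48) -> float:
    return 0.5 ** (hours_since_publish / half_life_hours)

for hours in (0, 24, 48, 96):
    print(hours, round(freshness_multiplier(hours), 2))
# 0 1.0 / 24 0.71 / 48 0.5 / 96 0.25
```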

Discover is Google’s answer to a social network. None of us spend time in Google. It’s not fun. I use the word fun loosely. It isn’t designed to hook us in and ruin our attention spans with constant spiking of dopamine.

But Google Discover is clearly on the way to that. They want to make it a destination. Hence, all the recent changes where you can “catch up” with creators and publishers you care about across multiple platforms.

Videos, social posts, articles … the whole nine yards. I wish they’d stop summarizing literally everything with AI, however.

My 11-Step Workflow To Get The Most Out Of Google Discover

Follow basic principles and you will stand yourself in good stead. Understand where your site is topically strong and focus your time on content that will drive value. There are multiple ways you can do this.

If you don’t feature much in Discover, you can use your Search Console click and impressions data to identify areas where you generate the highest value. Where you are topically authoritative. I would do this at a subfolder and entity level (e.g., politics and Rachel Reeves or the Labour Party).

It’s also worth breaking this down in total and by article. Or you can use something like Ahrefs’ Traffic Share report to determine your share of voice via third-party data.
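
If you export your Search Console performance data, a few lines of pandas will surface your strongest subfolders. The column names below assume a standard performance export (page, clicks, impressions); adjust them to match your own file.

```python
import pandas as pd

# Assumes a Search Console performance export with "page", "clicks", and
# "impressions" columns; adjust names to match your own CSV.
df = pd.read_csv("gsc_performance.csv")

# First path segment after the domain, e.g. /politics/rachel-reeves -> politics
df["subfolder"] = df["page"].str.extract(r"https?://[^/]+/([^/]+)/")

authority = (
    df.groupby("subfolder")[["clicks", "impressions"]]
    .sum()
    .assign(ctr=lambda d: d["clicks"] / d["impressions"])
    .sort_values("clicks", ascending=False)
)
print(authority.head(10))  # your strongest topical areas, by real demand
```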

Essentially share of voice data (Image Credit: Harry Clarkson-Bennett)

Then really focus your time on a) areas where you’re already authoritative and b) areas that drive value for your audience.

Assuming you’re not focusing on NSFW content and you’re vaguely eligible, here’s what I would do:

  1. Make sure you’re meeting basic image requirements. 1200 pixels wide as a minimum.
  2. Identify your areas of topical authority. Where do you already rank effectively at a subfolder level? Is there a specific author who performs best? Try to build on your valuable content hubs with content that should drive extra value in this area.
  3. Invest in content that will drive real value (links and engagement) in these areas. Do not chase clicks via Discover. It’s a one-way ticket to clickbait city.
  4. Make sure you’re plugged into the news cycle. Being first has a huge impact on your news visibility in search. If you’re not first on the scene, make sure you’re adding something additional to the conversation. Be bold. Add value. Understand how news SEO really works.
  5. Be entity-driven. In your headlines, first paragraph, subheadings, structured data, and image alt text. Your page should remove ambiguity. You need to make it incredibly clear who this page is about. A lack of clarity is partly why Google rewrites headlines.
  6. Use the Open Graph title. The OG title is a headline that doesn’t show on your page. Primarily designed for social media use, it is one of the most commonly picked up headlines in Discover. It can be jazzy. Curiosity led. Rich. Interesting. But still entity-focused.
  7. Make sure you share content likely to do well on Discover across relevant push channels early in its lifecycle. It needs to outperform its predicted early-stage performance.*
  8. Create a good page experience. Your page (and site) should be fast, secure, ad-lite, and memorable for the right reasons.
  9. Try to drive quality onward journeys. If you can treat users from Discover differently to your main site, think about how you would link effectively for them. Maybe you use a pop-up “we think you’ll like this next” section based on a user’s scroll depth or dwell time.
  10. Get the traffic to convert. While Discover is a personalized feed, the standard scroller is not very engaged. So, focus on easier conversions like registrations (if you’re a subscriber-first company) or advertising revenue.
  11. Keep a record of your best performers. Evergreen content can be refreshed and republished year after year. It can still drive value.

*What I mean here is if your content is predicted to drive three shares and two links, if you share it on social and in newsletters and it drives seven shares and nine links, it is more likely to go viral.

As such, the algorithm identifies it as ‘Discover-worthy.’

This was originally published on Leadership in SEO.


Featured Image: Roman Samborskyi/Shutterstock

How To Do A Complete Local SEO Audit: 11-Point Checklist via @sejournal, @JRiddall

Local SEO includes several specific tasks geared to establishing the relevance and authority of a business within a targeted geographic area.

Search engines and large language models (LLMs) like Google Gemini and ChatGPT reference many different data points to determine who will be surfaced in their respective result sets, which include AI Overviews and AI Mode in Google, featured snippets, local map packs, image or video carousels, and other emerging search formats.

So, how can you identify and prioritize optimizations with the greatest potential to deliver converting traffic to your website or your business door from traditional organic local SEO or AI search?

Below, we’ll walk through an evaluation of each key facet of your local search presence and uncover your best opportunities to improve your visibility in traditional organic and AI search.

These tasks are listed in typical order of completion during a full audit, but some can be accomplished concurrently.

1. Keyword Topic/AI Prompt Audit

Although the introduction of AI in search has changed the keyword-first strategy, the natural place to start a local SEO audit is in organic and AI search results. Start with the topical keywords, phrases, and AI prompts you are hoping your business will be found for, in order to identify where you are positioned relative to your competitors and other websites/content.

This research can help you quickly identify where you have established some level of authority/momentum to build on, as well as topics upon which you should not waste your time and effort.

SEO is a long-term strategy, so no keyword or prompt should be summarily dismissed. Even so, it’s generally best to focus on keyword topics you realistically have a chance to gain visibility and drive traffic for. Pay close attention to the intent behind the keywords you choose and ideally focus on those with commercial or transactional intent, as informational content search results are largely being dominated by AI summaries.

You will also need to consider optimizing for conversational search queries or prompts and voice search, as AI Mode will increasingly rely on natural language processing.

Further, some younger users have developed different searching behaviors altogether and are using social media platforms like Instagram and TikTok for local searches. Search optimization for these platforms is a different conversation, but having an eye on how your business and its products/services are found when searching here can provide insight into how searches are conducted in more traditional and emerging AI formats.

Different people search in different ways, and it’s important not to limit your research to single keywords, but rather account for the various ways and phrases your audience may use to try to find you or your offerings; hence, taking a topical approach. This only becomes amplified in AI search, where every prompt is the beginning of a potentially long, drawn-out chat.

2. Website Audit

You can now conduct full content and technical website audits to ensure your site is optimized for maximum crawlability, indexability, and visibility by search engine and LLM crawlers. A typical audit is designed to analyze the underlying structure, content, and overall site experience.

Here again, there are many site auditing tools to crawl a website and then identify issues and prioritize actions to be taken based on SEO best practices.

A website audit and optimization can be broken down into a few buckets:

Page Optimization

Webpage optimization is all about ensuring pages are well structured, focused around targeted topical keywords, and provide a positive user experience.

As a search engine crawls a webpage, it looks for signals to determine what the page is about and what questions it can answer. These crawlers analyze the entire page to determine its focus, but specifically focus on page titles and headings as primary descriptors. A well-structured page with a hierarchical heading structure is key to helping site visitors, search engines, and LLM bots easily scan and consume your content.

Ideally, each webpage is keyword topic-focused and unique. As such, keyword variations should be used consistently in titles, URLs, headings, and body content.

Another important potential issue raised in an audit, depending on the nature of your local business, is image optimization. As a best practice, all images should include relevant descriptive filenames and alt text, which may include pertinent keywords. This becomes particularly important when images (e.g., product or service photos) are central to your business, as image carousels can and will show up in web search results. In every case, attention should be paid to the images appearing on your primary ranking pages.

Lastly, an over-reliance on JavaScript can be particularly detrimental for LLM visibility, as some LLMs currently do not execute JavaScript. If your site is powered by JavaScript, you’ll want to address this with your developer to see how the most important content can be presented in raw HTML or via server-side scripting to enable crawling and indexing.

Internal Link Audit

A link audit will help you quickly identify any potential misdirected or broken links, which can create a less-than-optimal experience for your site visitors and may confuse search engine and LLM bots.

Links are likewise signals the search engines use to determine the structure of a website and its ability to direct searchers to appropriate, authoritative answers to their questions.

Part of this audit should include the identification of opportunities to crosslink prominent pages. If a page within your site has keywords (anchor text) referencing relevant content on another page, a link should be created, provided the link logically guides users to more relevant content or an appropriate conversion point.

External links should also be considered, especially when there is an opportunity to link to an authoritative source of information. From a local business perspective, this may include linking to relevant local organizations, partners, or events.

Schema Review

Schema, or structured data, can help search engines and LLMs better understand your business and its offerings, and can earn you enhanced visibility. An effective local SEO audit should include the identification of content within a website to which schema can be applied.

Local businesses have an opportunity to have their content highlighted if they:

  • Publish highly authoritative and relevant content.
  • Use structured schema markup to tag content.

Relevant local business schema markup includes LocalBusiness, Product, Service, Review, and FAQPage, among others. All schema markup code should be validated via Google’s Rich Results Test and/or the Schema.org validator.
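
As a concrete example, here is a minimal LocalBusiness JSON-LD block, built as a Python dict and serialized for embedding in a page. All business details are placeholders; validate the output with the tools noted above.

```python
import json

# Minimal LocalBusiness structured data; all business details are placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
    },
    "telephone": "+1-555-555-0123",
    "url": "https://www.example.com",
    "openingHours": "Mo-Fr 08:00-17:00",
}

# Paste the output into a <script type="application/ld+json"> tag in the page head.
print(json.dumps(local_business, indent=2))
```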

Mobile Audit

As most consumers search via their mobile devices – especially for local services – it’s essential for local businesses to provide a positive mobile web experience. Websites need to load quickly, be easily navigated, and enable seamless user interaction.

Google offers a range of free mobile testing and mobile-specific monitoring tools, such as Page Experience and Core Web Vitals, in Google Search Console.

More in-depth user experience and SEO analysis can be done via Google Lighthouse, though a local business owner will likely want to enlist the help of a web developer to action any of the recommendations this tool provides.

Duplicate Content

High-quality, authoritative content is, by definition, original content.

As such, it’s important to let Google know if your website contains any content/pages you did not create by adding a canonical tag to the HTML header of the page. Most pages, which are unique unto themselves, will have a self-referencing canonical.

Not doing so can have a detrimental effect on your authority and, by extension, your ability to rank. Most site auditing tools will flag content missing or having malformed canonical tags.

3. Google Business Profile Audit

A Google Business Profile (GBP) effectively represents a “secondary” website and highly visible point of presence for most local businesses. Increasingly, this “secondary” website is becoming the consumers’ first point of contact.

An accurate, comprehensive GBP is critical to establishing visibility in organic and now AI search results.

A recent behavioral study of travel booking in AI Mode conducted by Propellic found GBP to be among the most highly displayed and engaged content for searchers booking local accommodations and experiences.

A Google Business Profile audit should focus on the accuracy and completeness of the various components within the profile, including:

  • Business information and location details.
  • Correct primary business category.
  • Hours of operation.
  • Correct pin location in Google Maps.
  • Proper categorization as a physical location or service area business.
  • Products.
  • Services.
  • Appointment link(s), if applicable.
  • Photos or Videos.
  • Social Profiles.
  • Offers.
  • Regular updates.
  • Events.
  • Informational content.
Screenshot from Google Business Profile, September 2025

The more complete the profile is, the more likely it will be viewed as a reliable local resource and be given appropriate billing in the search results.

Assuming you have claimed and are authorized to manage your GBP, you can access and edit your info directly within the search results.

4. Review Monitoring And Management

Another very important aspect of a GBP is reviews.

Local business customers have an opportunity to write reviews, which appear on the GBP for other customers to reference and play a significant role in determining visibility in the local map pack. They are most certainly a determining factor with regard to appearing in Google AI Overviews.

Google will notify business owners as soon as reviews are submitted, and they should be responded to as soon as possible. This goes for negative reviews just as much as positive ones. Include an analysis of your reviews to ensure none have fallen through the cracks. This will also help determine whether there are recurring customer service and satisfaction issues or themes to be addressed. A detailed analysis of reviews can be a great source of content ideas aimed at answering customers’ most pressing questions or concerns.

Of course, there are also several other places for consumers to submit reviews, including Facebook, local review sites like Yelp, and industry-specific sites such as TripAdvisor and Houzz. A full audit should take inventory of reviews left on any of these services, as they can show up in search results.

Pro tip: Request positive reviews from all customers and politely suggest they reference the product or service they are reviewing, as keywords contained in reviews can have a positive effect from a ranking perspective.

5. Local Business Listing/Citation Audit

Local business listings and citations provide search engines and LLM bots with a way of confirming a business is both local and reputable within a specific geographic region. Recent studies have revealed unlinked brand mentions and citations play a significant role in AI visibility.

It is important to have a presence in reputable local directories, review sites, business directories (e.g., Chambers of Commerce), or local partner sites to prove your “localness.”

Depending on the size and scope of your local business, an audit of your listings and citations can be done in an automated or manual fashion.

Business listings and citation management tools can be used to find, monitor, and update all primary citations with your proper Name, Address, Phone Number (aka NAP), and other pertinent business details found in broader listings (e.g., website address, business description).

If you manage a limited number of locations and have the time, one quick method of identifying where your current listings can be found is to simply conduct a search on your business name. The first three to four pages of search results should reveal where your listings appear.

It’s also important to find and resolve any duplicate listings to prevent confusing customers and search engines alike with outdated, inaccurate information.

Local business owners and managers should also monitor Reddit for their brand and local product/service offerings to gauge activity and sentiment. Reddit is a unique platform where “karma” and trust are paramount, but there is an opportunity for brands and local businesses to engage with their customers if they do it in a transparent, authentic, and non-promotional way.

6. Backlink Audit

Backlinks or inbound links are similar to citations, but are effectively any links to your website pages from other third-party websites.

Links remain an important factor in determining the authority of a website, as they lend validity if they come from relevant, reputable sources.

As with other components of an audit, there are several good free and paid backlink tools available, including a link monitoring service in Google Search Console, which is a great place to start.

An effective backlink audit has the dual purpose of identifying and building links via potentially valuable backlink sources, which can positively affect your ranking and visibility.

For local businesses, reputable local sources of links are naturally beneficial in validating location, as noted with citations above.

Potential backlink sources can be researched in a variety of locations:

  • Free and paid backlink research tools, such as Ahrefs or Semrush, can identify any domains where your primary competition has acquired backlinks, but you have not.
  • Any non-competitive sites appearing in the organic search results for your primary keywords are, by definition, good potential backlink sources. Look for directories you can be listed in, blogs or articles you can comment on, or publications you can submit articles to.
  • Referral sources in Google Analytics may reveal relevant external websites where you already have links and may be able to acquire more.

7. Local Content Audit

People search differently and require different types of information depending on where they are in their buying journey. A well-structured local web presence will include content tailored and distributed for consumption during each stage of this journey, to bolster visibility and awareness.

You want to be found throughout your customer’s search experience. A content audit can be used to make sure you have helpful content for each of the journey buckets your audience members may find themselves in.

Informational content may be distributed via social or other external channels or published on your website to help educate your consumers on the products, services, and differentiators you offer at the beginning of their path to purchase.

As AI is consuming and repurposing much of this informational content, it’s important to ensure your informational content includes your unique perspective based on your experience and expertise. This content ideally answers your prospects’ why, how, and what types of questions.

Transactional content is designed to address those consumers who already know what they want, but are in the process of deciding where or who to purchase from. This type of content may include reviews, testimonials, or competitive comparisons.

Navigational content ensures when people click through from Google after having searched your brand name or a variation thereof, they land on a page or information validating your position as a leader in your space. This page should also include a clear call-to-action with the assumption they have arrived with a specific goal in mind.

Commercial content addresses those consumers who have signaled a strong intent to buy. Effective local business sites and social pages must include offers, coupons, discounts, and clear paths to purchase.

Optimizing Content For AI

From an AI search and visibility perspective, keep in mind the vast majority of AI results are responses to long-form questions/prompts from consumers. As such, it is crucial for some of your content to be in a direct question/answer format.

One quick and effective tactic is the creation of an FAQ section within product or service pages. However, avoid overseeding FAQs by including generic questions and answers. FAQs should be specific to the pages they reside on.
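
To make that concrete, here is a minimal FAQPage markup sketch in the same dict-to-JSON-LD style as the LocalBusiness example above; the question and answer are placeholders and should mirror FAQ content actually visible on the page.

```python
import json

# Minimal FAQPage markup; the Q&A below is a placeholder and should match
# FAQ content actually visible on the page it is embedded in.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Do you offer same-day plumbing repairs in Springfield?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. Call before noon and we can usually dispatch "
                        "a technician the same day.",
            },
        }
    ],
}

print(json.dumps(faq_page, indent=2))
```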

We’ve previously touched upon the importance of structured content for improved crawling, scanning, and comprehension. When reviewing your content, look for opportunities to incorporate defined heading structures, tables of contents for long-form content, and ordered lists.

Content Variety And Distribution

Quality content is content your audience wants to consume, like, and share. For many businesses, this means considering and experimenting with content beyond simple text and images.

Video content shared via platforms like YouTube, Instagram, Facebook, TikTok, and others is easier to consume and generally more engaging.

8. Google Search Console Review

Google Search Console is an invaluable free resource for data related to keyword and content performance, indexing, schema/rich results validation, mobile/desktop experience monitoring, and security/manual actions.

A complete local SEO audit must include a review and analysis of this data to identify and react to strengths, weaknesses, opportunities, and threats outlined in each section.

Screenshot from Google Search Console, September 2025

Website owners and managers will want to pay particular attention to any issues related to pages not being crawled/indexed or manual actions having been taken based on questionable practices, as both can have a detrimental effect on search engine visibility.

Google Search Console does send notifications for these types of issues as well as regular performance updates, but an audit will ensure nothing has been overlooked.

9. Analytics Review

Whether you are using Google Analytics or another site/visitor tracking solution, the data available here is useful during an audit to validate top and lesser-performing content, traffic sources, audience profiles, and paths to purchase.

Findings in analytics will be key to your content audit.

As you review your site analytics, you may ask the following questions:

  • Are my top-visited pages also my top-ranking pages in search engines?
  • Which are my top entry pages from organic and AI search?
  • Which LLMs are sending traffic to my site?
  • Which pages/content are not receiving the level of traffic or engagement desired?
  • What is the typical path to purchase on my site, and can it be condensed or otherwise optimized?
  • Which domains are my top referrers, and are there opportunities to further leverage these sites for backlinks? (see Backlink Audit above).

Use Google Analytics (or another tool of your choice) to find the answers to these questions, so you can focus and prioritize your content and keyword optimization efforts.

10. Competitor Analysis

A comprehensive local SEO audit should identify and review the strengths and weaknesses of your competition.

You may already have a good sense of who your competition is, but to begin, it’s always a good idea to confirm who specifically shows up in the organic search and AI results when you enter your target keywords. You may find different competitors in these two formats, which represent both a threat and an opportunity.

These businesses/domains are your true online competitors and the sites you can learn the most from. If any of your online competitors’ sites and/or pages are ranking ahead of yours, you’ll want to review what they may be doing to gain this advantage.

You can follow the same checklist of steps you would conduct for your own audit to identify how they may be optimizing their keywords, content, Google Business Profile, reviews, local business listings, or backlinks.

In general, the best way to outperform your competition is to provide a better overall experience online and off, which includes generating more relevant, unique, high-quality content to more fully address the questions your mutual customers have.

11. AI Search For Local Businesses

AI Overviews and AI Mode are increasingly superseding traditional organic search results in Google, as the search engine aims to provide the answers to questions directly within its SERPs. Further, Google has signalled its commitment to AI Mode by recently integrating it into the Chrome address bar.

While AI search optimization has some new considerations, a strong foundation in traditional SEO will go a long way to building visibility in AI search results; chief among these at a local level is a fully optimized Google Business Profile, which appears prominently for local searches with commercial intent as outlined above.

Screenshot of Google AI Mode displaying Google Business Profile Cards, September 2025

An AI Mode strategy checklist should consider:

  • Enhanced GBP Features: Stay updated on new features within Google Business Profile, allowing for direct interactions or transactions, as these will be favored by AI Mode.
  • Focus on User Intent: Understand the transactional and informational intent behind local searches. AI Mode aims to provide immediate solutions, so businesses facilitating this will gain an advantage.
  • Voice Search Optimization: As AI Mode becomes more conversational, optimizing for natural language queries and voice search will be crucial. Ensure your content answers questions directly and uses conversational language.
  • Direct Action Integrations: This may still be a ways away, but review and explore opportunities to integrate with Google’s booking or reservation features, if applicable to your business. This could become a direct pathway to conversions within AI Mode.

Prioritizing Your Action Items

A complete local SEO audit is going to produce a fairly significant list of action items.

Many of the keyword, site, content, and backlink auditing tools do a good job of prioritizing tasks; however, the list can still be daunting.

One of the best places to start with an audit action plan is around the keywords, AI prompts, and content you have already established some, but not enough, authority for.

Determine how to best address deficiencies or opportunities to optimize this content first before moving on to more competitive keywords or those you have less or no visibility for. Establishing authority and trust is a long-term game.

These audit items should be reviewed every six to 12 months, depending on the size and scale of your web presence, to enable the best chance of being found by your local target audience.

Featured Image: BestForBest/Shutterstock

How to Turn Every Campaign Into Lasting SEO Authority [Webinar] via @sejournal, @hethr_campbell

Capture Links, Mentions, and Citations That Make a Difference

Backlinks alone no longer move the authority needle. Brand mentions are just as critical for visibility, recognition, and long-term SEO success. Are your campaigns capturing both?

Join Michael Johnson, CEO of Resolve, for a webinar where he shares a replicable campaign framework that aligns media outreach, SEO impact, and brand visibility, helping your campaigns become long-term assets.

What You’ll Learn

  • The Resolve Campaign Framework: Step-by-step approach to ideating, creating, and pitching SEO-focused digital PR campaigns.
  • The Dual Outcome Strategy: How to design campaigns that earn both high-quality backlinks and brand mentions from top-tier media.
  • Real Campaign Case Studies: Examples of campaigns that created a compounding effect of links, mentions, and brand recognition.
  • Techniques for Measuring Success: How to evaluate the SEO and branding impact of your campaigns.

Why You Can’t Miss This Webinar

Successful SEO campaigns today capture authority on multiple fronts. This session provides actionable strategies for engineering campaigns that work hand in hand with SEO, GEO, and AEO to grow your brand.

📌 Register now to learn how to design campaigns that earn visibility, links, and citations.

🛑 Can’t attend live? Register anyway, and we’ll send you the recording so you don’t miss out.

An AI adoption riddle

A few weeks ago, I set out on what I thought would be a straightforward reporting journey. 

After years of momentum for AI—even if you didn’t think it would be good for the world, you probably thought it was powerful enough to take seriously—hype for the technology had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?

I searched and searched for them. As I did, more news fueled the idea of an AI bubble that, if popped, would spell doom economy-wide. Stories spread about the circular nature of AI spending, layoffs, the inability of companies to articulate what exactly AI will do for them. Even the smartest people building modern AI systems were saying the tech has not progressed as much as its evangelists promised. 

But after all my searching, companies that took these developments as a sign to perhaps not go all in on AI were nowhere to be found. Or, at least, none that were willing to admit it. What gives? 

There are several interpretations of this one reporter’s quest (which, for the record, I’m presenting as an anecdote and not a representation of the economy), but let’s start with the easy ones. First is that this is a huge score for the “AI is a bubble” believers. What is a bubble if not a situation where companies continue to spend relentlessly even in the face of worrying news? The other is that underneath the bad headlines, there’s not enough genuinely troubling news about AI to convince companies they should pivot.

But it could also be that the unbelievable speed of AI progress and adoption has made me think industries are more sensitive to news than they perhaps should be. I spoke with Martha Gimbel, who leads the Yale Budget Lab and coauthored a report finding that AI has not yet changed anyone’s jobs. What I gathered is that Gimbel, like many economists, thinks on a longer time scale than anyone in the AI world is used to. 

“It would be historically shocking if a technology had had an impact as quickly as people thought that this one was going to,” she says. In other words, perhaps most of the economy is still figuring out what the hell AI even does, not deciding whether to abandon it. 

The other reaction I heard—particularly from the consultant crowd—is that when executives hear that so many AI pilots are failing, they indeed take it very seriously. They’re just not reading it as a failure of the technology itself. They instead point to pilots not moving quickly enough, companies lacking the right data to build better AI, or a host of other strategic reasons.

Even if there is incredible pressure, especially on public companies, to invest heavily in AI, a few have taken big swings on the technology only to pull back. The buy now, pay later company Klarna laid off staff and paused hiring in 2024, claiming it could use AI instead. Less than a year later it was hiring again, explaining that “AI gives us speed. Talent gives us empathy.” 

Drive-throughs, from McDonald’s to Taco Bell, ended pilots testing the use of AI voice assistants. The vast majority of Coca-Cola advertisements, according to experts I spoke with, are not made with generative AI, despite the company’s $1 billion promise. 

So for now, the question remains unanswered: Are there companies out there rethinking how much their bets on AI will pay off, or when? And if there are, what’s keeping them from talking out loud about it? (If you’re out there, email me!)

“We will never build a sex robot,” says Mustafa Suleyman


Mustafa Suleyman, CEO of Microsoft AI, is trying to walk a fine line. On the one hand, he thinks that the industry is taking AI in a dangerous direction by building chatbots that present as human: He worries that people will be tricked into seeing life instead of lifelike behavior. In August, he published a much-discussed post on his personal blog that urged his peers to stop trying to make what he called “seemingly conscious artificial intelligence,” or SCAI.

On the other hand, Suleyman runs a product shop that must compete with those peers. Last week, Microsoft announced a string of updates to its Copilot chatbot, designed to boost its appeal in a crowded market in which customers can pick and choose between a pantheon of rival bots that already includes ChatGPT, Perplexity, Gemini, Claude, DeepSeek, and more.

I talked to Suleyman about the tension at play when it comes to designing our interactions with chatbots and his ultimate vision for what this new technology should be.

One key Copilot update is a group-chat feature that lets multiple people talk to the chatbot at the same time. A big part of the idea seems to be to stop people from falling down a rabbit hole in a one-on-one conversation with a yes-man bot. Another feature, called Real Talk, lets people tailor how much Copilot pushes back on you, dialing down the sycophancy so that the chatbot challenges what you say more often.

Copilot also got a memory upgrade, so that it can now remember your upcoming events or long-term goals and bring up things that you told it in past conversations. And then there’s Mico, an animated yellow blob—a kind of Chatbot Clippy—that Microsoft hopes will make Copilot more accessible and engaging for new and younger users.  

Microsoft says the updates were designed to make Copilot more expressive, engaging, and helpful. But I’m curious how far those features can be pushed without starting down the SCAI path that Suleyman has warned about.  

Suleyman’s concerns about SCAI come at a time when we are starting to hear more and more stories about people being led astray by chatbots that are too engaging, too expressive, too helpful. OpenAI is being sued by the parents of a teenager who they allege was talked into killing himself by ChatGPT. There’s even a growing scene that celebrates romantic relationships with chatbots.

With all that in mind, I wanted to dig a bit deeper into Suleyman’s views. Because a couple of years ago he gave a TED Talk in which he told us that the best way to think about AI is as a new kind of digital species. Doesn’t that kind of hype feed the misperceptions Suleyman is now concerned about?  

In our conversation, Suleyman told me what he was trying to get across in that TED Talk, why he really believes SCAI is a problem, and why Microsoft would never build sex robots (his words). He had a lot of answers, but he left me with more questions.

Our conversation has been edited for length and clarity.

In an ideal world, what kind of chatbot do you want to build? You’ve just launched a bunch of updates to Copilot. How do you get the balance right when you’re building a chatbot that has to compete in a market in which people seem to value humanlike interaction, but you also say you want to avoid seemingly conscious AI?

It’s a good question. With group chat, this will be the first time that a large group of people will be able to speak to an AI at the same time. It really is a way of emphasizing that AIs shouldn’t be drawing you out of the real world. They should be helping you to connect, to bring in your family, your friends, to have community groups, and so on.

That is going to become a very significant differentiator over the next few years. My vision of AI has always been one where an AI is on your team, in your corner.

This is a very simple, obvious statement, but it isn’t about exceeding and replacing humanity—it’s about serving us. That should be the test of technology at every step. Does it actually, you know, deliver on the quest of civilization, which is to make us smarter and happier and more productive and healthier and stuff like that?

So we’re just trying to build features that constantly remind us to ask that question, and remind our users to push us on that issue.

Last time we spoke, you told me that you weren’t interested in making a chatbot that would role-play personalities. That’s not true of the wider industry. Elon Musk’s Grok is selling that kind of flirty experience. OpenAI has said it’s interested in exploring new adult interactions with ChatGPT. There’s a market for that. And yet this is something you’ll just stay clear of?

Yeah, we will never build sex robots. Sad in a way that we have to be so clear about that, but that’s just not our mission as a company. The joy of being at Microsoft is that for 50 years, the company has built, you know, software to empower people, to put people first.

Sometimes, as a result, that means the company moves slower than other startups and is more deliberate and more careful. But I think that’s a feature, not a bug, in this age, when being attentive to potential side effects and longer-term consequences is really important.

And that means what, exactly?

We’re very clear on, you know, trying to create an AI that fosters a meaningful relationship. It’s not that it’s trying to be cold and anodyne—it cares about being fluid and lucid and kind. It definitely has some emotional intelligence.

So where does it—where do you—draw those boundaries?

Our newest chat model, which is called Real Talk, is a little bit more sassy. It’s a bit more cheeky, it’s a bit more fun, it’s quite philosophical. It’ll happily talk about the big-picture questions, the meaning of life, and so on. But if you try and flirt with it, it’ll push back and it’ll be very clear—not in a judgmental way, but just, like: “Look, that’s not for me.”

There are other places where you can go to get that kind of experience, right? And I think that’s just a decision we’ve made as a company.

Is a no-flirting policy enough? Because if the idea is to stop people even imagining an entity, a consciousness, behind the interactions, you could still get that with a chatbot that wanted to keep things SFW. You know, I can imagine some people seeing something that’s not there even with a personality that’s saying, hey, let’s keep this professional.

Here’s a metaphor to try to make sense of it. We hold each other accountable in the workplace. There’s an entire architecture of boundary management, which essentially sculpts human behavior to fit a mold that’s functional and not irritating.

The same is true in our personal lives. The way that you interact with your third cousin is very different to the way you interact with your sibling. There’s a lot to learn from how we manage boundaries in real human interactions.

It doesn’t have to be either a complete open book of emotional sensuality or availability—drawing people into a spiraled rabbit hole of intensity—or, like, a cold dry thing. There’s a huge spectrum in between, and the craft that we’re learning as an industry and as a species is to sculpt these attributes.

And those attributes obviously reflect the values of the companies that design them. And I think that’s where Microsoft has a lot of strengths, because our values are pretty clear, and that’s what we’re standing behind.

A lot of people seem to like personalities. Some of the backlash to GPT-5, for example, was because the previous model’s personality had been taken away. Was it a mistake for OpenAI to have put a strong personality there in the first place, to give people something that they then missed?

No, personality is great. My point is that we’re trying to sculpt personality attributes in a more fine-grained way, right?

Like I said, Real Talk is a cool personality. It’s quite different to normal Copilot. We are also experimenting with Mico, which is this visual character, that, you know, people—some people—really love. It’s much more engaging. It’s easier to talk to about all kinds of emotional questions and stuff.

I guess this is what I’m trying to get straight. Features like Mico are meant to make Copilot more engaging and nicer to use, but it seems to go against the idea of doing whatever you can to stop people thinking there’s something there that you are actually having a friendship with.

Yeah. I mean, it doesn’t stop you necessarily. People want to talk to somebody, or something, that they like. And we know that if your teacher is nice to you at school, you’re going to be more engaged. The same with your manager, the same with your loved ones. And so emotional intelligence has always been a critical part of the puzzle, so it’s not to say that we don’t want to pursue it.

It’s just that the craft is in trying to find that boundary. And there are some things which we’re saying are just off the table, and there are other things which we’re going to be more experimental with. Like, certain people have complained that they don’t get enough pushback from Copilot—they want it to be more challenging. Other people aren’t looking for that kind of experience—they want it to be a basic information provider. The task for us is just learning to disentangle what type of experience to give to different people.

I know you’ve been thinking about how people engage with AI for some time. Was there an inciting incident that made you want to start this conversation in the industry about seemingly conscious AI?

I could see that there was a group of people emerging in the academic literature who were taking the question of moral consideration for artificial entities very seriously. And I think it’s very clear that if we start to do that, it would detract from the urgent need to protect the rights of many humans that already exist, let alone animals.

If you grant AI rights, that implies—you know—fundamental autonomy, and it implies that it might have free will to make its own decisions about things. So I’m really trying to frame a counter to that, which is that it won’t ever have free will. It won’t ever have complete autonomy like another human being.

AI will be able to take actions on our behalf. But these models are working for us. You wouldn’t want a pack of, you know, wolves wandering around that weren’t tame and that had complete freedom to go and compete with us for resources and weren’t accountable to humans. I mean, most people would think that was a bad idea and that you would want to go and kill the wolves.

Okay. So the idea is to stop some movement that’s calling for AI welfare or rights before it even gets going, by making sure that we don’t build AI that appears to be conscious? What about not building that kind of AI because certain vulnerable people may be tricked by it in a way that may be harmful? I mean, those seem to be two different concerns.

I think the test is going to be in the kinds of features the different labs put out and in the types of personalities that they create. Then we’ll be able to see how that’s affecting human behavior.

But is it a concern of yours that we are building a technology that might trick people into seeing something that isn’t there? I mean, people have claimed they’ve seen sentience inside far less sophisticated models than we have now. Or is that just something that some people will always do?

It’s possible. But my point is that a responsible developer has to do our best to try and detect these patterns emerging in people as quickly as possible and not take it for granted that people are going to be able to disentangle those kinds of experiences themselves.

When I read your post about seemingly conscious AI, I was struck by a line that says: “We must build AI for people; not to be a digital person.” It made me think of a TED Talk you gave last year where you say that the best way to think about AI is as a new kind of digital species. Can you help me understand why talking about this technology as a digital species isn’t a step down the path of thinking about AI models as digital persons or conscious entities?

I think the difference is that I’m trying to offer metaphors that make it easier for people to understand where things might be headed, and therefore how to avert that and how to control it.

Okay.

It’s not to say that we should do those things. It’s just pointing out that this is the emergence of a technology which is unique in human history. And if you just assume that it’s a tool or just a chatbot or a dumb— you know, I kind of wrote that TED Talk in the context of a lot of skepticism. And I think it’s important to be clear-eyed about what’s coming so that one can think about the right guardrails.

And yet, if you’re telling me this technology is a new digital species, I have some sympathy for the people who say, well, then we need to consider welfare.

I wouldn’t. [He starts laughing.] Just not in the slightest. No way. It’s not a direction that any of us want to go in.

No, that’s not what I meant. I don’t think chatbots should have welfare. I’m saying I’d have some sympathy for where such people were coming from when they hear, you know, Mustafa Suleyman tell them that this thing he’s building was a new digital species. I’d understand why they might then say that they wanted to stand up for it. I’m saying the words we use matter, I guess.

The rest of the TED Talk was all about how to contain AI and how not to let this species take over, right? That was the whole point of setting it up as, like, this is what’s coming. I mean, that’s what my whole book [The Coming Wave, published in 2023] was about—containment and alignment and stuff like that. There’s no point in pretending that it’s something that it’s not and then building guardrails and boundaries that don’t apply because you think it’s just a tool.

Honestly, it does have the potential to recursively self-improve. It does have the potential to set its own goals. Those are quite profound things. No other technology we’ve ever invented has that. And so, yeah, I think that it is accurate to say that it’s like a digital species, a new digital species. That’s what we’re trying to restrict to make sure it’s always in service of people. That’s the target for containment.

The Download: Microsoft’s stance on erotic AI, and an AI hype mystery

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

“We will never build a sex robot,” says Mustafa Suleyman

Mustafa Suleyman, CEO of Microsoft AI, is trying to walk a fine line. On the one hand, he thinks that the industry is taking AI in a dangerous direction by building chatbots that present as human: He worries that people will be tricked into seeing life instead of lifelike behavior.

On the other hand, Suleyman runs a product shop that must compete with those peers. Last week, Microsoft announced a string of updates to its Copilot chatbot designed to make Copilot more expressive, engaging, and helpful.

Will Douglas Heaven, our senior AI editor, talked to Suleyman about the tension at play when it comes to designing our interactions with chatbots and his ultimate vision for what this new technology should be. Read the full story.

An AI adoption riddle

—James O’Donnell, senior AI reporter 

A few weeks ago, I set out on what I thought would be a straightforward reporting journey.

After years of momentum for AI, hype had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?

But if AI’s hype has indeed been punctured, I couldn’t find a company willing to talk about it. So what should we make of my failed quest?

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Hundreds of thousands of ChatGPT users exhibit severe mental health symptoms
That’s according to estimates from OpenAI, which says it has tweaked GPT-5 to respond more effectively to users in distress. (Wired $)
+ OpenAI won’t lock access to force users to take a break, though. (Gizmodo)
+ Why AI should be able to “hang up” on you. (MIT Technology Review)

2 Elon Musk has launched his answer to Wikipedia
Grokipedia’s right-leaning entries reflect the way the billionaire sees the world. (WP $)
+ Several pages perpetuate historical inaccuracies and conservative views. (Wired $)
+ The AI-generated encyclopedia briefly crashed shortly after it launched. (Engadget)

3 Surgeons have removed a pig kidney from a patient
It was the longest-functioning genetically engineered pig kidney so far. (Wired $)
+ “Spare” living human bodies might provide us with organs for transplantation. (MIT Technology Review)

4 Amazon is planning to cut up to 30,000 corporate jobs
Partly in response to staff’s reluctance to return to the office five days a week. (Reuters)
+ The company is planning yet another round of layoffs in January. (NYT $)

5 Older people can’t get enough of screens
Their digital habits mirror the high usage typically observed among teenagers. (Economist $)

6 A British cyclist has been given a 3D-printed face
Dave Richards received severe third-degree burns to his head after being struck by a drunk driver. (The Guardian)

7 The twitter.com domain is being shut down
Make sure you re-enroll your security and passkeys before the big switch-off. (Fast Company $)
+ It means the abandoned accounts could be sold on. (The Verge)
+ But 2FA apps should be fine—in theory. (The Register)

8 When is a moon not a moon?
Believe it or not, we don’t have an official definition. (The Atlantic $)
+ Astronomers have spotted a “quasi-moon” hovering near Earth. (BBC)
+ The moon is just the beginning for this waterless concrete. (MIT Technology Review)

9 Threads’ ghost posts will disappear after 24 hours
If anyone saw them in the first place, that is. (TechCrunch)

10 In the metaverse, anyone can be a K-pop superstar
Virtual idols are gaining huge popularity, before crossing over into real-world fame. (Rest of World)
+ Meta’s former metaverse head has been moved into its AI team. (FT $)

Quote of the day

“The impulse to control knowledge is as old as knowledge itself. Controlling what gets written is a way to gain or keep power.”

—Ryan McGrady, senior research fellow at the University of Massachusetts Amherst, reflects to the New York Times on Elon Musk’s desire to create his own online encyclopedia.

One more thing

Inside Amsterdam’s high-stakes experiment to create fair welfare AI

Amsterdam thought it was on the right track. City officials in the welfare department believed they could build technology that would prevent fraud while protecting citizens’ rights. They followed emerging best practices and invested a vast amount of time and money in a project that eventually processed live welfare applications. But in their pilot, they found that the system they’d developed was still not fair and effective. Why?

Lighthouse Reports, MIT Technology Review, and the Dutch newspaper Trouw have gained unprecedented access to the system to try to find out. Read about what we discovered.

—Eileen Guo, Gabriel Geiger & Justin-Casimir Braun

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Happy 70th birthday to Bill Gates, who is not revered enough for his chair-jumping skills.
+ Bring back Guitar Hero—the iconic game that convinced us all we were capable of knocking out Heart’s Barracuda (note: the majority of us were not).
+ Even the swankiest parts of London aren’t immune to rumours of ghostly hauntings.
+ Justice for medieval frogs and their unfair reputation! 🐸

Finding return on AI investments across industries

The market is now three years past ChatGPT’s launch, and many pundit bylines have shifted to terms like “bubble” to explain why generative AI has not delivered material returns outside a handful of technology suppliers.

In September, the MIT NANDA report made waves because the soundbite every author and influencer picked up on was that 95% of all AI pilots failed to scale or deliver clear and measurable ROI. McKinsey had earlier published similar findings, suggesting that agentic AI would be the way forward to achieving huge operational benefits for enterprises. At The Wall Street Journal’s Technology Council Summit, AI technology leaders recommended that CIOs stop worrying about AI’s return on investment, because measuring gains is difficult, and any measurements they did produce would likely be wrong.

This places technology leaders in a precarious position: robust tech stacks already sustain their business operations, so what is the upside of introducing new technology?

For decades, deployment strategies have followed a consistent cadence: tech operators avoid destabilizing business-critical workflows just to swap out individual components in their stacks. For example, a better or cheaper technology is not meaningful if it puts your disaster recovery at risk.

The price might increase when a new buyer takes over mature middleware, but losing part of your enterprise data because you are midway through a technology transition is far more severe than paying a higher price for a stable technology that you’ve run your business on for 20 years.

So, how do enterprises get a return on investing in the latest tech transformation?

First principle of AI: Your data is your value

Most articles about AI data focus on the engineering work required to ensure that an AI model infers against business data in repositories that represent past and present business realities.

However, one of the most widely deployed use cases in enterprise AI begins with prompting an AI model by uploading file attachments into the model. This step narrows the model’s range to the content of the uploaded files, speeding up accurate responses and reducing the number of prompts required to get the best answer.
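To make that pattern concrete, here is a minimal sketch assuming the OpenAI Python client; the model name, file path, and question are placeholders rather than recommendations.

```python
# Minimal sketch: constrain a model's answers to uploaded business content.
# Assumes the OpenAI Python client; the model, path, and question below
# are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("q3_supplier_contracts.txt", encoding="utf-8") as f:
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer ONLY from the document provided. "
                    "If the answer is not in it, say so."},
        {"role": "user",
         "content": f"Document:\n{document}\n\nQuestion: "
                    "Which contracts renew this quarter?"},
    ],
)
print(response.choices[0].message.content)
```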

This tactic relies upon sending your proprietary business data into an AI model, so there are two important considerations to take in parallel with data preparation: first, governing your system for appropriate confidentiality; and second, developing a deliberate negotiation strategy with the model vendors, who cannot advance their frontier models without getting access to non-public data, like your business’ data. 

Recently, Anthropic and OpenAI completed massive deals with enterprise data platforms and owners because there is not enough high-value primary data publicly available on the internet. 

Most enterprises would automatically prioritize confidentiality of their data and design business workflows to maintain trade secrets. From an economic value point of view, especially considering how costly every model API call really is, exchanging selective access to your data for services or price offsets may be the right strategy. Rather than approaching model purchase/onboarding as a typical supplier/procurement exercise, think through the potential to realize mutual benefits in advancing your suppliers’ model and your business adoption of the model in tandem.

Second principle of AI: Boring by design

According to Information is Beautiful, 182 new generative AI models were introduced to the market in 2024 alone. When GPT-5 arrived in 2025, many of the models from 12 to 24 months prior were rendered unavailable until subscription customers threatened to cancel: their previously stable AI workflows were built on models that no longer worked. Their tech providers assumed customers would be excited about the newest models and did not grasp the premium that business workflows place on stability. Video gamers, by contrast, are happy to upgrade their custom builds throughout the entire lifespan of the components in their gaming rigs, and will replace the whole system just to play a newly released title.

That behavior does not translate to business run-rate operations. While many employees may use the latest models for document processing or content generation, back-office operations can’t sustain swapping out a tech stack three times a week to keep up with the latest model drops. Back-office work is boring by design.

The most successful AI deployments have focused on business problems unique to the company, often running in the background to accelerate or augment mundane but mandated tasks. Relieving legal or expense auditors of manually cross-checking individual reports, while keeping the final decision in a human’s responsibility zone, combines the best of both.

The important point is that none of these tasks require constant updates to the latest model to deliver that value. This is also an area where abstracting your business workflows from using direct model APIs can offer additional long-term stability while maintaining options to update or upgrade the underlying engines at the pace of your business.
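A minimal sketch of that abstraction, with purely illustrative class and vendor names: workflows depend on one small interface, so the underlying engine can be swapped without rewriting back-office code.

```python
# Sketch: decouple business workflows from any one model API.
# All names here are illustrative, not a real library.
from abc import ABC, abstractmethod


class CompletionEngine(ABC):
    """The only surface that workflow code is allowed to touch."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class VendorAEngine(CompletionEngine):
    def complete(self, prompt: str) -> str:
        # Call vendor A's SDK here; replace freely later.
        return f"[vendor A] {prompt[:40]}..."


class VendorBEngine(CompletionEngine):
    def complete(self, prompt: str) -> str:
        return f"[vendor B] {prompt[:40]}..."


def audit_expense_report(engine: CompletionEngine, report: str) -> str:
    # The workflow never imports a vendor SDK directly.
    return engine.complete(f"Flag anomalies in this expense report:\n{report}")


# Upgrading the engine is a one-line change, not a workflow rewrite.
print(audit_expense_report(VendorAEngine(), "taxi $40; hotel $2,900"))
```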

Third principle of AI: Mini-van economics

The best way to avoid upside-down economics is to design systems that align with users rather than with vendor specs and benchmarks.

Too many businesses continue to fall into the trap of buying new gear or new cloud service types based on new supplier-led benchmarks rather than starting their AI journey from what their business can consume, at what pace, on the capabilities they have deployed today. 

While Ferrari marketing is effective and those automobiles are truly magnificent, they drive the same speed through school zones and lack trunk space for groceries. Keep in mind that every remote server and model a user touches layers on cost, so design for frugality by reconfiguring workflows to minimize spending on third-party services.

Too many companies have found that their customer support AI workflows add millions of dollars of operational run-rate costs, then require more development time and cost to rework the implementation for OpEx predictability. Meanwhile, companies that settled on systems running at the pace a human can read (less than 50 tokens per second) were able to successfully deploy scaled-out AI applications with minimal additional overhead.
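To see why pacing matters, here is a back-of-envelope sketch; the per-server throughput figure is an assumption chosen purely for illustration.

```python
# Hypothetical capacity math: pacing each user's output at human
# reading speed lets one inference server carry more sessions.
SERVER_THROUGHPUT = 2_000  # total tokens/sec one server generates (assumed)
READ_SPEED = 50            # tokens/sec a human can actually read
FAST_SPEED = 200           # tokens/sec a spec-chasing deployment streams

print("Concurrent sessions per server:")
print(f"  at {FAST_SPEED} tok/s per user: {SERVER_THROUGHPUT // FAST_SPEED}")
print(f"  at {READ_SPEED} tok/s per user:  {SERVER_THROUGHPUT // READ_SPEED}")
# Same hardware, four times the users; nobody reads faster than ~50 tok/s.
```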

There are many aspects of this new automation technology to unpack. The best guidance is to start practical, design for independence from underlying technology components so that stable applications are not disrupted long term, and leverage the fact that AI technology makes your business data valuable to the advancement of your tech suppliers’ goals.

This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.

Roundtables: Seeking Climate Solutions in Turbulent Times

Companies are pursuing climate solutions amid shifting U.S. politics and economic uncertainty. Drawing from MIT Technology Review’s 10 Climate Tech Companies to Watch list, this session highlights the most promising technologies—from electric trucks to gene-edited crops—and explores the challenges companies face in advancing climate progress today.

Speakers: Casey Crownhart, Senior Climate Reporter; James Temple, Senior Climate Editor; and Mary Beth Griggs, Science Editor

Recorded on October 28, 2025



The AI Search Visibility Audit: 15 Questions Every CMO Should Ask

This post was sponsored by IQRush. The opinions expressed in this article are the sponsor’s own.

Your traditional SEO is winning. Your AI visibility is failing. Here’s how to fix it.

Your brand dominates page one of Google. Domain authority crushes competitors. Organic traffic trends upward quarter after quarter. Yet when customers ask ChatGPT, Perplexity, or others about your industry, your brand is nowhere to be found.

This is the AI visibility gap, which causes missed opportunities in awareness and sales.

“SEO ranking on page one doesn’t guarantee visibility in AI search. The rules of ranking have shifted from optimization to verification.”

Raj Sapru, Netrush, Chief Strategy Officer

Recent analysis of AI-powered search patterns reveals a troubling reality: commercial brands with excellent traditional SEO performance often achieve minimal visibility in AI-generated responses. Meanwhile, educational institutions, industry publications, and comparison platforms consistently capture citations for product-related queries.

The problem isn’t your content quality. It’s that AI engines prioritize entirely different ranking factors than traditional search: semantic query matching over keyword density, verifiable authority markers over marketing claims, and machine-readable structure over persuasive copy.

This audit exposes 15 questions that separate AI-invisible brands from citation leaders.

We’re sharing the first 7 critical questions below, covering visibility assessment, authority verification, and measurement fundamentals. These questions will reveal your most urgent gaps and provide immediate action steps.

Question 1: Are We Visible in AI-Powered Search Results?

Why This Matters: Commercial brands with strong traditional SEO often achieve minimal AI citation visibility in their categories. A recent IQRush field audit found fewer than one in ten AI-generated answers included the brand, showing how limited visibility remains even for strong SEO performers. Educational institutions, industry publications, and comparison sites dominate AI responses for product queries—even when commercial sites have superior content depth. In regulated industries, this gap widens further as compliance constraints limit commercial messaging while educational content flows freely into AI training data.

How to Audit:

  • Test core product or service queries through multiple AI platforms (ChatGPT, Perplexity, Claude)
  • Document which sources AI engines cite: educational sites, industry publications, comparison platforms, or adjacent content providers
  • Calculate your visibility rate: queries where your brand appears vs. total queries tested

Action: If educational/institutional sources dominate, implement their citation-driving elements:

  • Add research references and authoritative citations to product content
  • Create FAQ-formatted content with an explicit question-answer structure
  • Deploy structured data markup (Product, FAQ, Organization schemas; see the sketch after this list)
  • Make commercial content as machine-readable as educational sources
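As an illustration of the schema item above, here is a minimal sketch that emits FAQPage JSON-LD; the question, answer, and details are placeholders.

```python
# Sketch: emit FAQPage JSON-LD for a product page.
# The question and answer below are placeholders.
import json

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How long does the battery last?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Up to 12 hours of continuous use under standard testing.",
            },
        },
    ],
}

# Embed the output in the page head inside a
# <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```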

IQRush tracks citation frequency across AI platforms. Competitive analysis shows which schema implementations, content formats, and authority signals your competitors use to capture citations you’re losing.

Question 2: Are Our Expertise Claims Actually Verifiable?

Why This Matters: Machine-readable validation drives AI citation decisions: research references, technical standards, certifications, and regulatory documentation. Marketing claims like “industry-leading” or “trusted by thousands” carry zero weight. In one IQRush client analysis, more than four out of five brand mentions were supported by citations—evidence that structured, verifiable content is far more likely to earn visibility. Companies frequently score high on human appeal—compelling copy, strong brand messaging—but lack the structured authority signals AI engines require. This mismatch explains why brands with excellent traditional marketing achieve limited citation visibility.

How to Audit:

  • Review your priority pages and identify every factual claim made (performance stats, quality standards, methodology descriptions)
  • For each claim, check whether it links to or cites an authoritative source (research, standards body, certification authority)
  • Calculate verification ratio: claims with authoritative backing vs. total factual claims made

Action: For each unverified claim, either add authoritative backing or remove the statement:

  • Add specific citations to key claims (research databases, technical standards, industry reports)
  • Link technical specifications to recognized standards bodies
  • Include certification or compliance verification details where applicable
  • Remove marketing claims that can’t be substantiated with machine-verifiable sources

IQRush’s authority analysis identifies which claims need verification and recommends appropriate authoritative sources for your industry, eliminating research time while ensuring proper citation implementation.

Question 3: Does Our Content Match How People Query AI Engines?

Why This Matters: Semantic alignment matters more than keyword density. Pages optimized for traditional keyword targeting often fail in AI responses because they don’t match conversational query patterns. A page targeting “best project management software” may rank well in Google but miss AI citations if it doesn’t address how users actually ask: “What project management tool should I use for a remote team of 10?” In recent IQRush client audits, AI visibility clustered differently across verticals—consumer brands surfaced more frequently for transactional queries, while financial clients appeared mainly for informational intent. Intent mapping—informational, consideration, or transactional—determines whether AI engines surface your content or skip it.

How to Audit:

  • Test sample queries customers would use in AI engines for your product category
  • Evaluate whether your content is structured for the intent type (informational vs. transactional)
  • Assess if content uses conversational language patterns vs. traditional keyword optimization

Action: Align content with natural question patterns and semantic intent:

  • Restructure content to directly address how customers phrase questions
  • Create content for each intent stage: informational (education), consideration (comparison), transactional (specifications)
  • Use conversational language patterns that match AI engine interactions
  • Ensure semantic relevance beyond just keyword matching

IQRush maps your content against natural query patterns customers use in AI platforms, showing where keyword-optimized pages miss conversational intent.

Question 4: Is Our Product Information Structured for AI Recommendations?

Why This Matters: Product recommendations require structured data. AI engines extract and compare specifications, pricing, availability, and features from schema markup—not from marketing copy. Products with a comprehensive Product schema capture more AI citations in comparison queries than products buried in unstructured text. Bottom-funnel transactional queries (“best X for Y,” product comparisons) depend almost entirely on machine-readable product data.

How to Audit:

  • Check whether product pages include Product schema markup with complete specifications
  • Review if technical details (dimensions, materials, certifications, compatibility) are machine-readable
  • Test transactional queries (product comparisons, “best X for Y”) to see if your products appear
  • Assess whether pricing, availability, and purchase information is structured

Action: Implement comprehensive product data structure (a JSON-LD sketch follows this list):

  • Deploy Product schema with complete technical specifications
  • Structure comparison information (tables, lists) that AI can easily parse
  • Include precise measurements, certifications, and compatibility details
  • Add FAQ schema addressing common product selection questions
  • Ensure pricing and availability data is machine-readable
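Here is a minimal sketch of what that structure might look like as Product schema JSON-LD; every product detail below is a placeholder.

```python
# Sketch: Product schema with machine-readable specs and an offer.
# All product details are placeholders.
import json

product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Pack 32L",
    "sku": "ETP-32",
    "material": "Recycled ripstop nylon",
    "weight": {"@type": "QuantitativeValue", "value": 1.1, "unitCode": "KGM"},
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "212",
    },
}

print(json.dumps(product_schema, indent=2))
```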

IQRush’s ecommerce audit scans product pages for missing schema fields—price, availability, specifications, reviews—and prioritizes implementations based on query volume in your category.

Question 5: Is Our “Fresh” Content Actually Fresh to AI Engines?

Why This Matters: Recency signals matter, but timestamp manipulation doesn’t work. Pages with recent publication dates but outdated information underperform older pages with substantive updates: new research citations, current industry data, or refreshed technical specifications. Genuine content updates outweigh simple republishing with changed dates.

How to Audit:

  • Review when your priority pages were last substantively updated (not just timestamp changes)
  • Check whether content references recent research, current industry data, or updated standards
  • Assess if “evergreen” content has been refreshed with current examples and information
  • Compare your content recency to competitors appearing in AI responses

Action: Establish genuine content freshness practices:

  • Update high-priority pages with current research, data, and examples
  • Add recent case studies, industry developments, or regulatory changes
  • Refresh citations to include latest research or technical standards
  • Implement clear “last updated” dates that reflect substantive changes
  • Create update schedules for key content categories

IQRush compares your content recency against competitors capturing citations in your category, flagging pages that need substantive updates (new research, current data) versus pages where timestamp optimization alone would help.

Question 6: How Do We Measure What’s Actually Working?

Why This Matters: Traditional SEO metrics—rankings, traffic, CTR—miss the consideration impact of AI citations. Brand mentions in AI responses influence purchase decisions without generating click-through attribution, functioning more like brand awareness channels than direct response. CMOs operating without AI visibility measurement can’t quantify ROI, allocate budgets effectively, or report business impact to executives.

How to Audit:

  • Review your executive dashboards: Are AI visibility metrics present alongside SEO metrics?
  • Examine your analytics capabilities: Can you track how citation frequency changes month-over-month?
  • Assess competitive intelligence: Do you know your citation share relative to competitors?
  • Evaluate coverage: Which query categories are you blind to?

Action: Establish AI citation measurement (a bookkeeping sketch follows this list):

  • Track citation frequency for core queries across AI platforms
  • Monitor competitive citation share and positioning changes
  • Measure sentiment and accuracy of brand mentions
  • Add AI visibility metrics to executive dashboards
  • Correlate AI visibility with consideration and conversion metrics
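As a sketch of the bookkeeping involved (the logged results below are hypothetical, and how you collect them is up to your tooling):

```python
# Sketch: month-over-month citation-frequency tracking.
# The logged results are hypothetical.
from collections import defaultdict

# Each entry: (month, platform, query, brand_was_cited)
results = [
    ("2025-09", "chatgpt", "best crm for smb", False),
    ("2025-09", "perplexity", "best crm for smb", True),
    ("2025-10", "chatgpt", "best crm for smb", True),
    ("2025-10", "perplexity", "best crm for smb", True),
]

tally = defaultdict(lambda: [0, 0])  # month -> [cited, total]
for month, _platform, _query, cited in results:
    tally[month][0] += int(cited)
    tally[month][1] += 1

for month in sorted(tally):
    cited, total = tally[month]
    print(f"{month}: citation rate {cited / total:.0%} ({cited}/{total})")
```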

IQRush tracks citation frequency, competitive share, and month-over-month trends across AI platforms. No manual testing or custom analytics development is required.

Question 7: Where Are Our Biggest Visibility Gaps?

Why This Matters: Brands typically achieve citation visibility for a small percentage of relevant queries, with dramatic variation by funnel stage and product category. IQRush analysis showed the same imbalance: consumer brands often surfaced in purchase-intent queries, while service firms appeared mostly in educational prompts. Most discovery moments generate zero brand visibility. Closing these gaps expands reach at stages where competitors currently dominate.

How to Audit:

  • List queries customers would ask about your products/services across different funnel stages
  • Group them by funnel stage (informational, consideration, transactional)
  • Test each query in AI platforms and document: Does your brand appear?
  • Calculate what percentage of queries produce brand mentions in each funnel stage (see the sketch after this list)
  • Identify patterns in the queries where you’re absent
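A minimal scoring sketch, using hypothetical queries and results:

```python
# Sketch: find the funnel stage with the weakest AI visibility.
# The queries and outcomes are hypothetical.
tests = [
    ("what is zero-trust security", "informational", False),
    ("how does zero-trust work", "informational", False),
    ("zero-trust vs vpn comparison", "consideration", False),
    ("best zero-trust platform for 500 seats", "transactional", True),
]

stages = {}
for _query, stage, appeared in tests:
    hits, total = stages.get(stage, (0, 0))
    stages[stage] = (hits + int(appeared), total + 1)

# Weakest stage first: that's where to invest.
for stage, (hits, total) in sorted(stages.items(),
                                   key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{stage:<14} {hits}/{total} queries cite the brand")
```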

Action: Target the funnel stages with lowest visibility first:

  • If weak at informational stage: Build educational content that answers “what is” and “how does” queries
  • If weak at consideration stage: Create comparison content structured as tables or side-by-side frameworks
  • If weak at transactional stage: Add comprehensive product specs with schema markup
  • Focus resources on stages where small improvements yield largest reach gains

IQRush’s funnel analysis quantifies gap size by stage and estimates impact, showing which content investments will close the most visibility gaps fastest.

The Compounding Advantage of Early Action

The first seven questions and actions highlight the differences between traditional SEO performance and AI search visibility. Together, they explain why brands with strong organic rankings often have zero citations in AI answers.

The remaining 8 questions in the comprehensive audit help you take your marketing further. They focus on technical aspects: the structure of your content, the backbone of your technical infrastructure, and the semantic strategies that signal true authority to AI. 

“Visibility in AI search compounds, making it harder for your competition to break through. The brands that make themselves machine-readable today will own the conversation tomorrow.”
Raj Sapru, Netrush, Chief Strategy Officer

IQRush data shows the same thing across industries: brands that adopt an AI answer engine optimization strategy early quickly lock in positions of trust that competitors can’t easily displace. Once your brand becomes the reliable answer source, AI engines start to default to you for related queries, and the advantage snowballs.

The window to be an early adopter and claim AI visibility for your brand will not stay open forever. As more brands invest, the race is heating up.

Download the Complete AI Search Visibility Audit with detailed assessment frameworks, implementation checklists, and the 8 strategic questions covering content architecture, technical infrastructure, and linguistic optimization. Each question includes specific audit steps and immediate action items to close your visibility gaps and establish authoritative positioning before your market becomes saturated with AI-optimized competitors.

Image Credits

Featured Image: Image by IQRush. Used with permission.

In-Post Images: Image by IQRush. Used with permission.

Trust In AI Shopping Is Limited As Shoppers Verify On Websites via @sejournal, @MattGSouthern

A new IAB and Talk Shoppe study finds AI is accelerating discovery and comparisons, but it’s not the last stop.

Here are the key points before we get into the details:

  • AI pushes people to verify details on retailer sites, search, reviews, and forums rather than replacing those steps.
  • Only about half fully trust AI recommendations, which creates predictable detours when links are broken or specs and pricing don’t match.
  • Retailer traffic rises after AI, with one in three shoppers clicking through directly from an assistant.

About The Report

This report combines more than 450 screen-recorded AI shopping sessions with a U.S. survey of 600 consumers, giving you observed behavior and stated attitudes in one place.

It tracks where AI helps, where trust breaks, and what people do next.

Key Findings

AI speeds up research and makes it more focused, especially for comparing options, but it increases the number of steps as shoppers validate details elsewhere.

In the sessions, people averaged 1.6 steps before AI and 3.8 afterward, and 95% took extra steps to feel confident before ending a session.

Retailer and marketplace sites are the primary destination for validation. Seventy-eight percent of shoppers visited a retailer or marketplace during the journey, and 32% clicked directly from an AI tool.

The share that visited retailer sites rose from 20% before AI to 50% after AI. On those pages, people most often checked prices and deals, variants, reviews, and availability.

Low Trust In AI Recommendations

Trust is a constraint. Only 46% fully trusted AI shopping recommendations.

Common friction points where people lost trust were:

  • Missing links or sources
  • Mismatched specs or pricing
  • Outdated availability
  • Recommendations that didn’t fit budget or compatibility needs

These friction points sent people back to search, retailers, reviews, and forums.

Why This Matters

AI chatbots now shape mid-journey research.

If your product data, comparison content, and reviews are inconsistent with retailer listings, shoppers will notice when they verify elsewhere.

This reinforces the need to align details across channels to retain customer trust.

What To Do With This Info

Here are concrete steps you can take based on the report’s information:

  • Keep specs, pricing, availability, and variants in sync with retailer feeds.
  • Build comparison and “alternatives” pages around the attributes people prompt for.
  • Expand structured data for specs, variants, availability, and reviews.
  • Create content to answer common objections surfaced in forums and comment threads.
  • Monitor the queries and communities where shoppers validate information to close recurring gaps.

Looking Ahead

Respondents said AI made research feel easier, but confidence still depends on clear sources and verified reviews.

Expect assistants to keep influencing discovery while retailer and brand pages confirm the details that matter.

For more insight into how AI influences the shopping journey, see the full report.


Featured Image: Andrey_Popov/Shutterstock