Google Chrome will enable “Always Use Secure Connections” by default with the release of Chrome 154 in October 2026, the company announced.
The change means Chrome will ask for user permission before loading any public website that doesn’t use HTTPS encryption. Users will see a bypassable warning explaining the security risks of unencrypted connections.
Google is rolling out the feature in stages. Chrome 147 will enable it for over 1 billion Enhanced Safe Browsing users in April 2026. All Chrome users will get it by default six months later.
What’s Changing
Public Site Warning
The warning system applies exclusively to public websites. Chrome excludes private sites including local IP addresses, single-label hostnames, and internal shortlinks.
Chris Thompson and the Chrome Security Team wrote:
“HTTP navigations to private sites can still be risky, but are typically less dangerous than their public site counterparts because there are fewer ways for an attacker to take advantage of these HTTP navigations.”
Here’s an example of what the warning will look like:
Image Credit: Google
Warning Frequency
Chrome limits how often users see warnings for the same sites. The browser won’t repeatedly warn about regularly visited insecure sites.
Testing data shows the median user sees fewer than one warning per week. The 95th percentile user sees fewer than three warnings per week.
Current HTTPS Adoption
HTTPS usage has plateaued at 95-99% of Chrome navigations across platforms. When excluding private sites, public HTTPS usage reaches 97-99% on most platforms.
Windows shows 98% HTTPS on public sites. Android and Mac exceed 99%. Linux reaches nearly 97%.
Why This Matters
You face security risks when clicking HTTP links. Attackers can hijack unencrypted navigations to load malware, exploitation tools, or phishing content.
Google’s transparency report shows HTTPS adoption stalled after rapid growth from 2015-2020. The remaining 1-5% of insecure traffic represents millions of navigations that create attack opportunities.
Website owners running HTTP-only sites have one year to migrate before Chrome warns their visitors.
You can enable “Always Use Secure Connections” today at chrome://settings/security to test how the warnings affect your site traffic.
Looking Ahead
Google continues outreach to companies responsible for the highest HTTP traffic volumes. Many sites use HTTP only for redirects to HTTPS destinations, creating an invisible security gap the new warnings will close.
Chrome plans additional work to reduce HTTPS adoption barriers for local network sites. The company introduced a local network access permission that allows HTTPS pages to communicate with private devices once users grant permission.
Users can disable warnings by turning off the “Always Use Secure Connections” setting. Enterprise and educational institutions can configure Chrome to meet their specific warning requirements.
Pomelli, a Google Labs & DeepMind AI experiment, builds a “Business DNA” from your site and generates editable branded campaign assets for small businesses.
Pomelli scans your website to create a “Business DNA” profile.
It uses the created profile to keep content consistent across channels.
It suggests campaign ideas and generates editable marketing assets.
When Google introduced the transformer architecture in its 2017 paper “Attention Is All You Need,” few realized how much it would help transform digital work. Transformer architecture laid the foundations for today’s GPTs, which are now part of our daily work in SEO and digital marketing.
Search engines have used machine learning for decades, but it was the rise of generative AI that made many of us actively explore AI. AI platforms and tools like custom GPTs are already influencing how we research keywords, generate content ideas, and analyze data.
The real value, however, is not in using these tools to cut corners. It lies in designing them intentionally, aligning them with business goals, and ensuring they serve users’ needs.
This article is not a tutorial on how to build GPTs. I share why the build process itself matters, what I have learned so far, and how SEOs can use this product mindset to think more strategically in the age of AI.
From Barriers To Democratization
Not long ago, building tools without coding experience meant relying on developers, dealing with long lead times, and waiting for vendors to release new features. That has changed. The democratization of technology has lowered the entry barriers, making it possible for anyone with curiosity to experiment with building tools like custom GPTs. At the same time, expectations have naturally risen: we expect tools to be intuitive, efficient, and genuinely useful.
This is why technical skills still matter. But they're not enough on their own. What matters more, in my opinion, is how we apply them. Are we solving a real problem? Are we creating workflows that align with business needs?
The strategic questions SEOs should be asking are no longer just “Can I build this?,” but:
Should I build this?
What problem am I solving, and for whom?
What’s the ultimate goal?
Why The Build Process Matters
Building a custom GPT is straightforward. Anyone can add a few instructions and click “save.” What really matters is what happens before and after: defining the audience, identifying the problem, scoping the work realistically, testing and refining outputs, and aligning them with business objectives.
In many ways, this is what good marketing has always been about: understanding the audience, defining their needs, and designing solutions that meet them.
As an international SEO, I’ve often seen cultural relevance and digital accessibility treated as afterthoughts. OpenAI’s custom GPTs offered me a way to explore whether AI could help address these challenges, especially since the builder is accessible to those of us without any coding expertise.
What began as a single project to improve cultural relevance in global SEO soon evolved into two separate GPTs when I realized the scope was larger than I could manage at the time.
That change wasn’t a failure, but a part of the process that led me toward a better solution.
Case Study: 2 GPTs, 1 Lesson
The Initial Idea
My initial idea was to build a custom GPT that could generate content ideas tailored to the UK, US, Canada, and Australia, taking both linguistic and cultural nuances into account.
As an international SEO, I know it is hard to engage global audiences who expect personalized experiences. Translation alone is not enough. Content must be linguistically accurate and contextually relevant.
This mirrors the wider shift in search itself. Users now expect personalized, context-driven results, and search engines are moving in that same direction.
A Change In Direction
As I began building, I quickly realized that the scope was bigger than expected. Capturing cultural nuance across four different markets while also learning how to build and refine GPTs required more time than I could commit at that moment.
Rather than abandoning the project, I reframed it as a minimum viable product. I revisited the scope and shifted focus to another important challenge, but one with a more consistent requirement – digital accessibility.
The accessibility GPT was designed to flag issues, suggest inclusive phrasing, and support internal advocacy. It adapted outputs to different roles, so SEOs, marketers, and project managers could each use it in relevant ways in their day-to-day work.
This wasn’t giving up on the content project. It was a deliberate choice to learn from one use case and apply those lessons to the next.
The Outcome
Working on the accessibility GPT first helped me think more carefully about scope and validation, which paid off.
As accessibility requirements are more consistent than cultural nuance, it was easier to refine prompts and test role-specific outputs, ensuring an inclusive, non-judgmental tone.
I shared the prototype with other SEOs and accessibility advocates, and their feedback was invaluable. Although generally positive, they pointed out inconsistencies I hadn’t seen, including in how I described the GPT in the GPT store.
After all, accessibility is not just about alt text or color contrast. It’s about how information is presented.
Once the accessibility GPT was running, I went back to the cultural content GPT, better prepared, with clearer expectations and a stronger process.
The key takeaway here is that the value lies not only in the finished product, but in the process of building, testing, and refining.
Risks And Challenges Along The Way
Not every risk became an issue, but the process brought its share of challenges.
The biggest was underestimating time and scope, which I solved by revisiting the plan and starting smaller. There were also platform limitations – ongoing model development, AI fatigue, and hallucinations. OpenAI itself has admitted that hallucinations are mathematically unavoidable. The best response is to be precise with prompts, keep instructions detailed, and always maintain a human-in-the-loop approach. GPTs should be seen as assistants, not replacements.
Collaboration added another layer of complexity. Feedback loops depended on colleagues’ availability, so I had to stay flexible and allow extra time. Their input, however, was crucial – I couldn’t have made progress without them. As none of these factors were under my control, I could only keep on top of developments and do my best to handle them.
These challenges reinforced an important truth: Building strategically isn’t about chasing perfection, but about learning, adapting, and improving with each iteration.
Applying Product Thinking
The process I followed was similar to how product managers approach new products. SEOs can adopt the same mindset to design workflows that are both practical and strategic.
Validate The Problem
Not every issue needs AI – and not every issue needs solving. Identify and prioritize what really matters at that time and confirm whether a custom GPT, or any other tool, is the right way to address it.
Define The Use Case
Who will use the GPT, and how? A wide reach may sound appealing, but value comes from meeting specific needs. Otherwise, success can quickly fade away.
My GPTs are designed to support SEOs, marketers, and project managers in different scenarios of their daily work.
Prototype And Test
There is real value in starting small. With GPTs, I needed to write clear, specific instructions, then review the outputs and refine.
For instance, instead of asking the accessibility GPT for general ideas on making a form accessible, I instructed it to act as an SEO briefing developers on fixes or as a project manager assigning tasks.
For the content GPT, I instructed it to act as a UK/US content strategist, developing inclusive, culturally relevant ideas for specific publications in British English or Standard American English.
Iterate With Feedback
Bring colleagues and subject-matter experts into the process early. Their insights challenge assumptions, highlight inconsistencies, and make outputs more robust.
Keep On Top Of Developments
AI platforms evolve quickly, and processes also need to adapt to different scenarios. Product thinking means staying agile, adapting to change, and reassessing whether the tools we build still serve their purpose.
The roll-out of the failed GPT-5 reminded me how volatile the landscape can be.
Practical Applications For SEOs
Why build GPTs when there are already so many excellent SEO tools available? For me, it was partly curiosity and partly a way to test what I could achieve with my existing skills before suggesting a collaboration for a different product.
Custom GPTs can add real value in specific situations, especially with a human-in-the-loop approach. Some of the most useful applications I have found include:
Analyzing campaign data to support decision-making.
Assisting with competitor analysis across global markets.
Supporting content ideation for international audiences.
Clustering keywords or highlighting internal linking opportunities.
Drafting documentation or briefs.
The point is not to replace established tools or human expertise, but to use them as assistants within structured workflows. They can free up time for deeper thinking, while still requiring careful direction and review.
How SEOs Can Apply Product Thinking
Even if you never build a GPT, you can apply the same mindset in your day-to-day work. Here are a few suggestions:
Frame challenges strategically: Ask who the end user is, what they need, and what is broken in their experience. Don’t start with tactics without context.
Design repeatable processes: Build workflows that scale and evolve over time, instead of one-off fixes.
Test and learn: Treat tactics like prototypes. Run experiments and refine based on results. If A/B testing isn’t possible, as is often the case, at least be open to making adjustments where necessary.
Collaborate across teams: SEO does not exist in isolation. Work with UX, development, and content teams early. The key is to find ways to add value to their work.
Redefine success metrics: Qualified traffic, conversions, and internal process improvements matter in the AI era. Success should reflect actual business impact.
Use AI strategically: Quick wins are tempting, but GPTs and other tools are best used to support structured workflows and highlight blind spots. Keep a human-in-the-loop approach to ensure outputs are accurate and relevant to your business needs.
Final Thought
The real innovation is not in the technology itself, but in how we choose to apply it.
We are now in the fifth industrial revolution, a time when humans and machines collaborate more closely than ever.
For SEOs, the opportunity is to move beyond tactical execution and start thinking like product strategists. That means asking sharper questions, testing hypotheses, designing smarter workflows, and creating solutions that adapt to real-world constraints.
It is about providing solutions, not just executing tasks.
This is all based on the Google leak and tallies up with my experience of content that does well in Discover over time. I have pulled out what I think are the most prominent Discover proxies and grouped them into what seems like the appropriate workflow.
Like a disgraced BBC employee, thoughts are my own.
TL;DR
Your site needs to be seen as a “trusted source” with low SPAM, evaluated by proxies like publisher trust score, in order to be eligible.
Discover is driven by a six-part pipeline, using good vs. bad clicks (long dwell time vs. pogo-sticking) and repeat visits to continuously score and re-score content quality.
Fresh content gets an initial boost. Success hinges on a strong CTR and positive early-stage engagement (good clicks/shares from all channels count, not just Discover).
Content that aligns with a user’s interests is prioritised. To optimize, focus on your areas of topical authority, write compelling headlines, be entity-driven, and use large (1200px+) images.
Image Credit: Harry Clarkson-Bennett
I count 15 different proxies that Google uses to satiate the doomscrollers’ desperate need for quality content in the Discover feed. It’s not that different to how traditional Google search works.
But traditional search (a high-quality pull channel) is worlds apart from Discover. Audiences killing time on trains. At their in-laws. The toilet. Yet because they’re part of the same ecosystem, they’re bundled together into one monolithic entity.
And here’s how it works.
Image Credit: Harry Clarkson-Bennett
Google’s Discover Guidelines
This section is boring, and Google’s guidelines around eligibility are exceptionally vague.
Then they give some solid, albeit beige, advice around quality titles (clicky, not baity, as John Shehata would say), ensuring your featured image is at least 1200px wide, and creating timely, value-added content.
But we can do better.
Discover’s Six-Part Content Pipeline
From cradle to grave, let’s review exactly how your content does or, in most cases, doesn’t appear in Discover. As always, remembering I have made these clusters up, albeit based on real Google proxies from the Google leak.
Eligibility check and baseline filtering.
Initial exposure and testing.
User quality assessment.
Engagement and feedback loop.
Personalization layer.
Decay and renewal cycles.
Eligibility And Baseline Filtering
For starters, your site has to be eligible for Google Discover. This means you are seen as a “trusted source” on the topic, and you have a low enough SPAM score that the threshold isn’t triggered.
There are three primary proxy scores to account for eligibility and baseline filtering:
is_discover_feed_eligible: a Boolean feature that filters non-eligible pages.
publisher_trustScore: a score that evaluates publisher reliability and reputation.
topicAuthority_discover: a score that helps Discover identify trusted sources at the topic level.
Together, these three proxies determine whether your site’s reputation and topic-level authority make it eligible to appear in Discover for the topic at hand.
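To make that gate concrete, here is a minimal, purely illustrative Python sketch that treats the leaked proxy names as fields on a site record. The thresholds and field types are my assumptions, not anything Google has published.

```python
from dataclasses import dataclass

# Illustrative thresholds; Google's real values (if fixed thresholds exist at all) are unknown.
TRUST_THRESHOLD = 0.6
TOPIC_AUTHORITY_THRESHOLD = 0.5

@dataclass
class SiteSignals:
    is_discover_feed_eligible: bool   # Boolean filter named in the leak
    publisher_trust_score: float      # publisher reliability/reputation proxy (0-1 assumed)
    topic_authority_discover: float   # topic-level authority proxy (0-1 assumed)

def passes_baseline_filtering(site: SiteSignals) -> bool:
    """Rough approximation of the eligibility and baseline filtering stage."""
    return (
        site.is_discover_feed_eligible
        and site.publisher_trust_score >= TRUST_THRESHOLD
        and site.topic_authority_discover >= TOPIC_AUTHORITY_THRESHOLD
    )

# A trusted publisher with strong topical authority clears the gate.
print(passes_baseline_filtering(SiteSignals(True, 0.82, 0.74)))  # True
```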
Initial Exposure And Testing
This is very much the freshness stage, where fresh content is given a temporary boost (because contemporary content is more likely to satiate a dopamine addicted mind).
freshnessBoost_discover: provides a temporary boost for fresh content to keep the feed alive.
discover_clicks: where early-stage article clicks are used as a predictor of popularity.
headlineClickModel_discover: a predictive CTR model based on the headline and image.
I would hypothesize that, using a Bayesian-style predictive model, Google applies learnings at a site and subfolder level to predict likely CTR. The more quality content you have published over time (presumably at a site, subfolder, and author level), the more likely you are to feature.
Because there is less ambiguity. A key feature of SEO now.
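As a hedged illustration of that hypothesis (the model, the priors, and every number below are my own assumptions, not leaked values), a simple Beta-Binomial sketch shows how subfolder history could temper an article’s early observed CTR:

```python
# Toy Beta-Binomial CTR prediction: historical clicks/impressions for a subfolder
# form the prior; early impressions for a new article update it.
def predicted_ctr(prior_clicks: int, prior_impressions: int,
                  new_clicks: int, new_impressions: int) -> float:
    """Posterior mean CTR; the +1/+1 smoothing keeps the Beta distribution proper."""
    alpha = prior_clicks + new_clicks + 1
    beta = (prior_impressions - prior_clicks) + (new_impressions - new_clicks) + 1
    return alpha / (alpha + beta)

# Subfolder history: 4,000 clicks on 100,000 impressions (4% CTR).
# New article: 90 clicks on 1,000 early impressions (9% observed CTR).
print(round(predicted_ctr(4000, 100_000, 90, 1_000), 4))  # ~0.0405, pulled toward history
```

With only 1,000 early impressions, the prior dominates, which is the point: a subfolder (or author) with a strong track record gives a new article a better starting prediction.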
User Quality Assessment
An article is ultimately judged by the quality of user engagement. Google uses the good and bad click style model from Navboost to establish what is and isn’t working for users. Low CTR and/or pogo-sticking style behavior downgrades an article’s chance of featuring.
Valuable content is decided by the good vs bad click ratio. Repeat visits are used to measure lasting satisfaction and re-rank top-performing content.
discover_blacklist_score: Penalty for spam, misinformation, or clickbait.
goodClicks_discover: Positive user interactions (long dwell time).
badClicks_discover: Negative interactions (bounces, short dwell).
nav_boosted_discover_clicks: Repeat or return engagement metric.
The quality of the article is then measured by its user engagement. As Discover is a personalized platform, this can be done accurately and at scale. Cohorts of users can be grouped together. People with the same general interests are served the content if, by the algorithm’s standard, they should be interested.
But if the overly clicky or misleading title delivers poor engagement (dwell time and on-page interactions), then the article may be downgraded. Over time, this kind of practice can compound and nerf your site completely.
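Here is a toy sketch of how a good-vs-bad click score with a repeat-visit bonus might be computed. The 30-second dwell cut-off and the weighting are invented for illustration; the real Navboost-style thresholds are not public.

```python
# Toy Navboost-style engagement score: long-dwell "good" clicks count for the
# article, quick pogo-stick "bad" clicks count against it, repeat visits add a bonus.
GOOD_DWELL_SECONDS = 30  # assumed cut-off

def engagement_score(dwell_times: list[float], repeat_visits: int) -> float:
    if not dwell_times:
        return 0.0
    good = sum(1 for t in dwell_times if t >= GOOD_DWELL_SECONDS)
    bad = len(dwell_times) - good
    ratio = good / (good + bad)
    return ratio + 0.05 * repeat_visits  # repeat engagement nudges the score up

dwell = [4, 180, 95, 8, 240]   # seconds on page for five clicks
print(round(engagement_score(dwell, repeat_visits=2), 2))  # 0.7
```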
Headlines like this are a one-way ticket to devaluing your brand in the eyes of people and search engines (Image Credit: Harry Clarkson-Bennett)
Important to note that this click data doesn’t have to come from Discover. Once an article is out in the ether – it’s been published, shared on social, etc. – Chrome click data is stored and is applied to the algorithm.
So, the more quality click data and shares you can generate early in an article’s lifecycle (accounting for the importance of freshness), the better your chance of success on Discover. Treat it like a viral platform. Make noise. Do marketing.
Engagement And Feedback Loop
Once the article enters the proverbial fray, a scoring and rescoring loop begins. Continuous CTR, impressions, and explicit user feedback (like, hate, and “don’t show me this again, please” style buttons) feed models like Navboost to refine what gets shown.
discover_impressions: The number of times an article appears in a Discover feed.
discover_ctr: Clicks divided by impressions. Impressions and click data feed CTR modelling.
discover_feedback_negative: Specific user feedback (e.g., “not interested”) suppresses content for individuals, groups, and the platform as a whole.
These behavioral signals define an article’s success. It lives or dies on relatively simple metrics. And the more you use it, the better it gets. Because it knows what you and your cohort are more likely to click and enjoy.
This is as true in Discover as it is in the main algorithm. Google admitted as much in the DoJ case. (Image Credit: Harry Clarkson-Bennett)
I imagine headline and image data are stored so that the algorithm can apply some rigorous standards to statistical modelling. Once it knows what types of headlines, images and articles perform best for specific cohorts, personalisation becomes effective faster.
Personalization Layer
Google knows a lot about us. It’s what its business is built on. It collects a lot of non-anonymized data (credit card details, passwords, contact details, etc.) alongside every conceivable interaction you have with webpages.
Discover takes personalization to the next level. I think it may offer an insight into what part of the SERP could look like in the future: a personalized cluster of articles, videos, and social posts designed to hook you in, embedded somewhere alongside search results and AI Mode.
All of this is designed to keep you on Google’s owned properties for longer. Because they make more money that way.
Hint: They want to keep you around because they make more money (Image Credit: Harry Clarkson-Bennett)
contentEmbeddings_discover: Content embeddings determine how well the content aligns with the user’s interests. This powers Discover’s interest-matching engine.
personalization_vector_match: This module dynamically personalises the user’s feed in real-time. It identifies similarity between content and user interest vectors.
Content that matches well with your personal and your cohort’s interests will be boosted into your feed.
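A minimal sketch of that interest matching, assuming toy three-dimensional embeddings and an invented boost threshold (real embeddings are far larger, and the threshold, if one exists, is unknown):

```python
import math

# Cosine similarity between a content embedding and a user-interest vector
# decides whether an item is boosted into the personalized feed.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

user_interests = [0.9, 0.1, 0.4]   # e.g. a politics-heavy reading history
politics_piece = [0.8, 0.2, 0.3]
recipe_piece = [0.1, 0.9, 0.2]

BOOST_THRESHOLD = 0.8              # assumed cut-off for illustration
for name, vec in [("politics_piece", politics_piece), ("recipe_piece", recipe_piece)]:
    score = cosine(user_interests, vec)
    print(name, round(score, 2), score >= BOOST_THRESHOLD)
# politics_piece 0.99 True / recipe_piece 0.28 False
```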
You can see the sites you engage with frequently using the site engagement page in Chrome (type chrome://site-engagement/ into the address bar), as well as every stored interaction, via histograms. This histogram data indirectly shows key interaction points you have with web pages by measuring the browser’s response and performance around those interactions.
It doesn’t explicitly say user A clicked X, but it logs the technical impact, i.e., how long the browser spent processing said click or scroll.
Decay And Renewal Cycles
Discover boosts freshness because people are thirsty for it. By boosting fresh content, older or saturated stories naturally decay as the news cycle moves on and article engagement declines.
For successful stories, this decay comes through market saturation: once most interested readers have seen the piece, engagement tails off.
freshnessDecay_timer: This module measures recency decay after initial exposure, gradually reducing visibility to make way for fresher content.
content_staleness_penalty: Outdated content or topics are given a lower priority once engagement starts to decline to keep the feed current.
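As a rough illustration of how such decay could work (the half-life is an assumption, not a documented figure), a simple exponential decay curve captures the idea:

```python
# Toy recency decay: the freshness boost halves every HALF_LIFE_HOURS after
# publication, so older or saturated stories gradually make way for newer ones.
HALF_LIFE_HOURS = 24  # assumed; the real decay curve is not public

def freshness_multiplier(hours_since_publish: float) -> float:
    return 0.5 ** (hours_since_publish / HALF_LIFE_HOURS)

for hours in (0, 12, 24, 72):
    print(hours, round(freshness_multiplier(hours), 2))
# 0 -> 1.0, 12 -> 0.71, 24 -> 0.5, 72 -> 0.12
```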
Discover is Google’s answer to a social network. None of us spends time in Google for fun (and I use the word fun loosely); search isn’t designed to hook us in and ruin our attention spans with constant spikes of dopamine. Discover is.
Videos, social posts, articles … the whole nine yards. I wish they’d stop summarizing literally everything with AI, however.
My 11-Step Workflow To Get The Most Out Of Google Discover
Follow basic principles and you will stand yourself in good stead. Understand where your site is topically strong and focus your time on content that will drive value. There are multiple ways you can do this.
If you don’t feature much in Discover, you can use your Search Console click and impressions data to identify areas where you generate the highest value – where you are topically authoritative. I would do this at a subfolder and entity level (e.g., politics and Rachel Reeves or the Labour Party).
Also worth breaking this down in total and by article. Or you can use something like Ahrefs’ Traffic Share report to determine your share of voice via third-party data.
Essentially share of voice data (Image Credit: Harry Clarkson-Bennett)
Then really focus your time on a) areas where you’re already authoritative and b) areas that drive value for your audience.
Assuming you’re not focusing on NSFW content and you’re vaguely eligible, here’s what I would do:
Identify your areas of topical authority. Where do you already rank effectively at a subfolder level? Is there a specific author who performs best? Try to build on your valuable content hubs with content that should drive extra value in this area.
Invest in content that will drive real value (links and engagement) in these areas. Do not chase clicks via Discover. It’s a one-way ticket to clickbait city.
Make sure you’re plugged into the news cycle. Being first has a huge impact on your news visibility in search. If you’re not first on the scene, make sure you’re adding something additional to the conversation. Be bold. Add value. Understand how news SEO really works.
Be entity-driven. In your headlines, first paragraph, subheadings, structured data, and image alt text. Your page should remove ambiguity. You need to make it incredibly clear who this page is about. A lack of clarity is partly why Google rewrites headlines.
Use the Open Graph title. The OG title is a headline that doesn’t show on your page. Primarily designed for social media use, it is one of the most commonly picked up headlines in Discover. It can be jazzy. Curiosity led. Rich. Interesting. But still entity-focused.
Make sure you share content likely to do well on Discover across relevant push channels early in its lifecycle. It needs to outperform its predicted early-stage performance.*
Create a good page experience. Your page (and site) should be fast, secure, ad-lite, and memorable for the right reasons.
Try to drive quality onward journeys. If you can treat users from Discover differently to your main site, think about how you would link effectively for them. Maybe you use a pop-up “we think you’ll like this next” section based on a user’s scroll depth or dwell time.
Get the traffic to convert. While Discover is a personalized feed, the standard scroller is not very engaged. So, focus on easier conversions like registrations (if you’re a subscriber-first company) or advertising revenue, etc.
Keep a record of your best performers. Evergreen content can be refreshed and repubbed year after year. It can still drive value.
*What I mean here is that if your content is predicted to drive three shares and two links, but you share it on social and in newsletters and it drives seven shares and nine links, it is more likely to go viral.
As such, the algorithm identifies it as ‘Discover-worthy.’
Local SEO includes several specific tasks geared to establishing the relevance and authority of a business within a targeted geographic area.
Search engines and large language models (LLMs) like Google Gemini and ChatGPT reference many different data points to determine who will be surfaced in their respective result sets, which include AI Overviews and AI Mode in Google, featured snippets, local map packs, image or video carousels, and other emerging search formats.
So, how can you identify and prioritize optimizations with the greatest potential to deliver converting traffic to your website or your business door from traditional organic local SEO or AI search?
Below, we’ll walk through an evaluation of each key facet of your local search presence and uncover your best opportunities to improve your visibility in traditional organic and AI search.
These tasks are listed in typical order of completion during a full audit, but some can be accomplished concurrently.
1. Keyword Topic/AI Prompt Audit
Although the introduction of AI in search has changed the keyword-first strategy, the natural place to start a local SEO audit is in organic and AI search results. Start with the topical keywords, phrases, and AI prompts you are hoping your business will be found for, in order to identify where you are positioned relative to your competitors and other websites/content.
This research can help you quickly identify where you have established some level of authority/momentum to build on, as well as topics upon which you should not waste your time and effort.
SEO is a long-term strategy, so no keyword or prompt should be summarily dismissed. Even so, it’s generally best to focus on keyword topics you realistically have a chance to gain visibility and drive traffic for. Pay close attention to the intent behind the keywords you choose and ideally focus on those with commercial or transactional intent, as informational content search results are largely being dominated by AI summaries.
You will also need to consider optimizing for conversational search queries or prompts and voice search, as AI Mode will increasingly rely on natural language processing.
Further, some younger users have developed different searching behaviors altogether and are using social media platforms like Instagram and TikTok for local searches. Search optimization for these platforms is a different conversation, but having an eye on how your business and its products/services are found when searching here can provide insight into how searches are conducted in more traditional and emerging AI formats.
Different people search in different ways, and it’s important not to limit your research to single keywords, but rather account for the various ways and phrases your audience may use to try to find you or your offerings; hence, taking a topical approach. This only becomes amplified in AI search, where every prompt is the beginning of a potentially long, drawn-out chat.
2. Website Audit
You can now conduct full content and technical website audits to ensure your site is optimized for maximum crawlability, indexability, and visibility by search engine and LLM crawlers. A typical audit is designed to analyze the underlying structure, content, and overall site experience.
Here again, there are many site auditing tools to crawl a website and then identify issues and prioritize actions to be taken based on SEO best practices.
A website audit and optimization can be broken down into a few buckets:
Page Optimization
Webpage optimization is all about ensuring pages are well structured, focused around targeted topical keywords, and provide a positive user experience.
As a search engine crawls a webpage, it looks for signals to determine what the page is about and what questions it can answer. These crawlers analyze the entire page to determine its focus, paying particular attention to page titles and headings as primary descriptors. A well-structured page with a hierarchical heading structure is key to helping site visitors, search engines, and LLM bots easily scan and consume your content.
Ideally, each webpage is keyword topic-focused and unique. As such, keyword variations should be used consistently in titles, URLs, headings, and body content.
Another important potential issue raised in an audit, depending on the nature of your local business, is image optimization. As a best practice, all images should include relevant descriptive filenames and alt text, which may include pertinent keywords. This becomes particularly important when images (e.g., product or service photos) are central to your business, as image carousels can and will show up in web search results. In every case, attention should be paid to the images appearing on your primary ranking pages.
Lastly, an over-reliance on JavaScript can be particularly detrimental for LLM visibility, as some LLMs currently do not execute JavaScript. If your site is powered by JavaScript, you’ll want to address this with your developer to see how the most important content can be presented in raw HTML or via server-side scripting to enable crawling and indexing.
Internal Link Audit
A link audit will help you quickly identify any potential misdirected or broken links, which can create a less-than-optimal experience for your site visitors and may confuse search engine and LLM bots.
Links are likewise signals the search engines use to determine the structure of a website and its ability to direct searchers to appropriate, authoritative answers to their questions.
Part of this audit should include the identification of opportunities to crosslink prominent pages. If a page within your site has keywords (anchor text) referencing relevant content on another page, a link should be created, provided the link logically guides users to more relevant content or an appropriate conversion point.
External links should also be considered, especially when there is an opportunity to link to an authoritative source of information. From a local business perspective, this may include linking to relevant local organizations, partners, or events.
Schema Review
Schema, or structured data, can help search engines and LLMs better understand your business and its offerings, and can earn your content enhanced visibility. An effective local SEO audit should include the identification of content within a website to which schema can be applied.
Local businesses have an opportunity to have their content highlighted if they publish highly authoritative and relevant content.
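As a hedged example of what applying schema can look like, here is a sketch that generates a LocalBusiness JSON-LD block. The schema.org types and property names used are standard ones documented for local businesses; the business details are invented placeholders.

```python
import json

# Illustrative LocalBusiness structured data. "LocalBusiness", "PostalAddress",
# "name", "address", "telephone", "openingHours", and "url" are standard
# schema.org types and properties; the values below are placeholders.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 High Street",
        "addressLocality": "Springfield",
        "postalCode": "12345",
    },
    "telephone": "+1-555-0100",
    "openingHours": "Mo-Fr 08:00-17:00",
    "url": "https://www.example.com",
}

# Emit the <script> block to place in the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(local_business, indent=2))
print("</script>")
```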
Mobile Experience
As most consumers search via their mobile devices – especially for local services – it’s essential for local businesses to provide a positive mobile web experience. Websites need to load quickly, be easy to navigate, and enable seamless user interaction.
Google offers a range of free mobile testing and mobile-specific monitoring tools, such as Page Experience and Core Web Vitals, in Google Search Console.
More in-depth user experience and SEO analysis can be done via Google Lighthouse, though a local business owner will likely want to enlist the help of a web developer to action any of the recommendations this tool provides.
Duplicate Content And Canonical Tags
It’s also important to let Google know if your website contains any content or pages you did not create by adding a canonical tag to the HTML head of the page. Most pages, which are unique unto themselves, will have a self-referencing canonical.
Not doing so can have a detrimental effect on your authority and, by extension, your ability to rank. Most site auditing tools will flag content with missing or malformed canonical tags.
3. Google Business Profile Audit
A Google Business Profile (GBP) effectively represents a “secondary” website and highly visible point of presence for most local businesses. Increasingly, this “secondary” website is becoming the consumers’ first point of contact.
A recent behavioral study of travel booking in AI Mode conducted by Propellic found GBP to be among the most highly displayed and engaged content for searchers booking local accommodations and experiences.
A Google Business Profile audit should focus on the accuracy and completeness of the various components within the profile, including:
Business information and location details.
Correct primary business category.
Hours of operation.
Correct pin location in Google Maps.
Proper categorization as a physical location or service area business.
Products.
Services.
Appointment link(s), if applicable.
Photos or Videos.
Social Profiles.
Offers.
Regular updates.
Events.
Informational content.
Screenshot from Google Business Profile, September 2025
The more complete the profile is, the more likely it will be viewed as a reliable local resource and be given appropriate billing in the search results.
Assuming you have claimed and are authorized to manage your GBP, you can access and edit your info directly within the search results.
4. Review Monitoring And Management
Another very important aspect of a GBP is reviews.
Local business customers have an opportunity to write reviews, which appear on the GBP for other customers to reference and play a significant role in determining visibility in the local map pack. They are most certainly a determining factor with regard to appearing in Google AI Overviews.
Google will notify business owners as soon as reviews are submitted, and they should be responded to as soon as possible. This goes for negative reviews just as much as positive ones. Include an analysis of your reviews to ensure none have fallen through the cracks. This will also help determine whether there are recurring customer service and satisfaction issues or themes to be addressed. A detailed analysis of reviews can be a great source of content ideas aimed at answering customers’ most pressing questions or concerns.
Of course, there are also several other places for consumers to submit reviews, including Facebook, local review sites like Yelp, and industry-specific sites such as TripAdvisor and Houzz. A full audit should take inventory of reviews left on any of these services, as they can show up in search results.
Pro tip: Request positive reviews from all customers and politely suggest they reference the product or service they are reviewing, as keywords contained in reviews can have a positive effect from a ranking perspective.
5. Local Listings And Citations Audit
It is important to have a presence in reputable local directories, review sites, business directories (e.g., Chambers of Commerce), or local partner sites to prove your “localness.”
Depending on the size and scope of your local business, an audit of your listings and citations can be done in an automated or manual fashion.
Business listings and citation management tools can be used to find, monitor, and update all primary citations with your proper Name, Address, Phone Number (aka NAP), and other pertinent business details found in broader listings (e.g., website address, business description).
If you manage a limited number of locations and have the time, one quick method of identifying where your current listings can be found is to simply conduct a search on your business name. The first three to four pages of search results should reveal the same.
It’s also important to find and resolve any duplicate listings to prevent confusing customers and search engines alike with outdated, inaccurate information.
Local business owners and managers should also monitor Reddit for their brand and local product/service offerings to gauge activity and sentiment. Reddit is a unique platform where “karma” and trust are paramount, but there is an opportunity for brands and local businesses to engage with their customers if they do it in a transparent, authentic, and non-promotional way.
6. Backlink Audit
Backlinks or inbound links are similar to citations, but are effectively any links to your website pages from other third-party websites.
Links remain an important factor in determining the authority of a website, as they lend validity if they come from relevant, reputable sources.
As with other components of an audit, there are several good free and paid backlink tools available, including a link monitoring service in Google Search Console, which is a great place to start.
An effective backlink audit has the dual purpose of identifying and building links via potentially valuable backlink sources, which can positively affect your ranking and visibility.
For local businesses, reputable local sources of links are naturally beneficial in validating location, as noted with citations above.
Potential backlink sources can be researched in a variety of locations:
Free and paid backlink research tools, such as Ahrefs or Semrush, can identify any domains where your primary competition has acquired backlinks but you have not.
Any non-competitive sites appearing in the organic search results for your primary keywords are, by definition, good potential backlink sources. Look for directories you can be listed in, blogs or articles you can comment on, or publications you can submit articles to.
Referral sources in Google Analytics may reveal relevant external websites where you already have links and may be able to acquire more.
7. Content Audit
You want to be found throughout your customer’s search experience. A content audit can be used to make sure you have helpful content for each of the journey buckets your audience members may find themselves in.
Informational content may be distributed via social or other external channels or published on your website to help educate your consumers on the products, services, and differentiators you offer at the beginning of their path to purchase.
As AI is consuming and repurposing much of this informational content, it’s important to ensure your informational content includes your unique perspective based on your experience and expertise. This content ideally answers your prospects’ why, how, and what types of questions.
Transactional content is designed to address those consumers who already know what they want, but are in the process of deciding where or who to purchase from. This type of content may include reviews, testimonials, or competitive comparisons.
Navigational content ensures when people click through from Google after having searched your brand name or a variation thereof, they land on a page or information validating your position as a leader in your space. This page should also include a clear call-to-action with the assumption they have arrived with a specific goal in mind.
Commercial content addresses those consumers who have signaled a strong intent to buy. Effective local business sites and social pages must include offers, coupons, discounts, and clear paths to purchase.
Optimizing Content For AI
From an AI search and visibility perspective, keep in mind the vast majority of AI results are responses to long-form questions/prompts from consumers. As such, it is crucial for some of your content to be in a direct question/answer format.
One quick and effective tactic is the creation of an FAQ section within product or service pages. However, avoid overseeding FAQs by including generic questions and answers. FAQs should be specific to the pages they reside on.
We’ve previously touched upon the importance of structured content for improved crawling, scanning, and comprehension. When reviewing your content, look for opportunities to incorporate defined heading structures, tables of contents for long-form content, and ordered lists.
Content Variety And Distribution
Quality content is content your audience wants to consume, like, and share. For many businesses, this means considering and experimenting with content beyond simple text and images.
Video content shared via platforms like YouTube, Instagram, Facebook, TikTok, and others is easier to consume and generally more engaging.
8. Google Search Console Review
Google Search Console is an invaluable free resource for data related to keyword and content performance, indexing, schema/rich results validation, mobile/desktop experience monitoring, and security/manual actions.
A complete local SEO audit must include a review and analysis of this data to identify and react to strengths, weaknesses, opportunities, and threats outlined in each section.
Google Search Console screenshot, September 2025
Website owners and managers will want to pay particular attention to any issues related to pages not being crawled/indexed or manual actions having been taken based on questionable practices, as both can have a detrimental effect on search engine visibility.
Google Search Console does send notifications for these types of issues as well as regular performance updates, but an audit will ensure nothing has been overlooked.
9. Analytics Review
Whether you are using Google Analytics or another site/visitor tracking solution, the data available here is useful during an audit to validate top and lesser-performing content, traffic sources, audience profiles, and paths to purchase.
Findings in analytics will be key to your content audit.
As you review your site analytics, you may ask the following questions:
Are my top-visited pages also my top-ranking pages in search engines?
Which are my top entry pages from organic and AI search?
Which LLMs are sending traffic to my site?
Which pages/content are not receiving the level of traffic or engagement desired?
What is the typical path to purchase on my site, and can it be condensed or otherwise optimized?
Which domains are my top referrers, and are there opportunities to further leverage these sites for backlinks? (see Backlink Audit above).
Use Google Analytics (or another tool of your choice) to find the answers to these questions, so you can focus and prioritize your content and keyword optimization efforts.
10. Competitor Analysis
You may already have a good sense of who your competition is, but to begin, it’s always a good idea to confirm who specifically shows up in the organic search and AI results when you enter your target keywords. You may find different competitors in these two formats, which represents both a threat and an opportunity.
These businesses/domains are your true online competitors and the sites you can learn the most from. If any of your online competitors’ sites and/or pages are ranking ahead of yours, you’ll want to review what they may be doing to gain this advantage.
You can follow the same checklist of steps you would conduct for your own audit to identify how they may be optimizing their keywords, content, Google Business Profile, reviews, local business listings, or backlinks.
In general, the best way to outperform your competition is to provide a better overall experience online and off, which includes generating more relevant, unique, high-quality content to more fully address the questions your mutual customers have.
11. AI Search For Local Businesses
AI Overviews and AI Mode are increasingly superseding traditional organic search results in Google, as the search engine aims to provide the answers to questions directly within its SERPs. Further, Google has signalled its commitment to AI Mode by recently integrating it into the Chrome address bar.
While AI search optimization has some new considerations, a strong foundation in traditional SEO will go a long way to building visibility in AI search results; chief among these at a local level is a fully optimized Google Business Profile, which appears prominently for local searches with commercial intent as outlined above.
Screenshot of Google AI Mode displaying Google Business Profile Cards, September 2025
Your AI Mode strategy checklist should consider:
Enhanced GBP Features: Stay updated on new features within Google Business Profile, allowing for direct interactions or transactions, as these will be favored by AI Mode.
Focus on User Intent: Understand the transactional and informational intent behind local searches. AI Mode aims to provide immediate solutions, so businesses facilitating this will gain an advantage.
Voice Search Optimization: As AI Mode becomes more conversational, optimizing for natural language queries and voice search will be crucial. Ensure your content answers questions directly and uses conversational language.
Direct Action Integrations: This may still be a ways away, but review and explore opportunities to integrate with Google’s booking or reservation features, if applicable to your business. This could become a direct pathway to conversions within AI Mode.
Prioritizing Your Action Items
A complete local SEO audit is going to produce a fairly significant list of action items.
Many of the keyword, site, content, and backlink auditing tools do a good job of prioritizing tasks; however, the list can still be daunting.
One of the best places to start with an audit action plan is around the keywords, AI prompts, and content you have already established some, but not enough, authority for.
Determine how to best address deficiencies or opportunities to optimize this content first before moving on to more competitive keywords or those you have less or no visibility for. Establishing authority and trust is a long-term game.
These audit items should be reviewed every six to 12 months, depending on the size and scale of your web presence, to enable the best chance of being found by your local target audience.
Capture Links, Mentions, and Citations That Make a Difference
Backlinks alone no longer move the authority needle. Brand mentions are just as critical for visibility, recognition, and long-term SEO success. Are your campaigns capturing both?
Join Michael Johnson, CEO of Resolve, for a webinar where he shares a replicable campaign framework that aligns media outreach, SEO impact, and brand visibility, helping your campaigns become long-term assets.
What You’ll Learn
The Resolve Campaign Framework: Step-by-step approach to ideating, creating, and pitching SEO-focused digital PR campaigns.
Real Campaign Case Studies: Examples of campaigns that created a compounding effect of links, mentions, and brand recognition.
Techniques for Measuring Success: How to evaluate the SEO and branding impact of your campaigns.
Why You Can’t Miss This Webinar
Successful SEO campaigns today capture authority on multiple fronts. This session provides actionable strategies for engineering campaigns that work hand in hand with SEO, GEO, and AEO to grow your brand.
📌 Register now to learn how to design campaigns that earn visibility, links, and citations.
🛑 Can’t attend live? Register anyway, and we’ll send you the recording so you don’t miss out.
A few weeks ago, I set out on what I thought would be a straightforward reporting journey.
After years of momentum for AI—even if you didn’t think it would be good for the world, you probably thought it was powerful enough to take seriously—hype for the technology had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?
I searched and searched for them. As I did, more news fueled the idea of an AI bubble that, if popped, would spell doom economy-wide. Stories spread about the circular nature of AI spending, layoffs, the inability of companies to articulate what exactly AI will do for them. Even the smartest people building modern AI systems were saying the tech has not progressed as much as its evangelists promised.
But after all my searching, companies that took these developments as a sign to perhaps not go all in on AI were nowhere to be found. Or, at least, none that were willing to admit it. What gives?
There are several interpretations of this one reporter’s quest (which, for the record, I’m presenting as an anecdote and not a representation of the economy), but let’s start with the easy ones. First is that this is a huge score for the “AI is a bubble” believers. What is a bubble if not a situation where companies continue to spend relentlessly even in the face of worrying news? The other is that underneath the bad headlines, there’s not enough genuinely troubling news about AI to convince companies they should pivot.
But it could also be that the unbelievable speed of AI progress and adoption has made me think industries are more sensitive to news than they perhaps should be. I spoke with Martha Gimbel, who leads the Yale Budget Lab and coauthored a report finding that AI has not yet changed anyone’s jobs. What I gathered is that Gimbel, like many economists, thinks on a longer time scale than anyone in the AI world is used to.
“It would be historically shocking if a technology had had an impact as quickly as people thought that this one was going to,” she says. In other words, perhaps most of the economy is still figuring out what the hell AI even does, not deciding whether to abandon it.
The other reaction I heard—particularly from the consultant crowd—is that when executives hear that so many AI pilots are failing, they indeed take it very seriously. They’re just not reading it as a failure of the technology itself. They instead point to pilots not moving quickly enough, companies lacking the right data to build better AI, or a host of other strategic reasons.
Even if there is incredible pressure, especially on public companies, to invest heavily in AI, a few have taken big swings on the technology only to pull back. The buy now, pay later company Klarna laid off staff and paused hiring in 2024, claiming it could use AI instead. Less than a year later it was hiring again, explaining that “AI gives us speed. Talent gives us empathy.”
Drive-throughs, from McDonald’s to Taco Bell, ended pilots testing the use of AI voice assistants. The vast majority of Coca-Cola advertisements, according to experts I spoke with, are not made with generative AI, despite the company’s $1 billion promise.
So for now, the question remains unanswered: Are there companies out there rethinking how much their bets on AI will pay off, or when? And if there are, what’s keeping them from talking out loud about it? (If you’re out there, email me!)
Balancing humanlike interaction with safety concerns: Suleyman emphasizes that Microsoft’s new Copilot features—including group chat and the “Real Talk” personality—are designed to keep AI as a tool serving humanity rather than a replacement for human connection. The company deliberately avoids building chatbots that encourage romantic or sexual relationships, drawing clear boundaries where others in the industry see market opportunity.
Personality as craft, not deception: While acknowledging that engaging personalities make AI more useful, Suleyman argues the industry must learn to “sculpt” emotional intelligence carefully.
Reframing the “digital species” metaphor: Suleyman clarifies that describing AI as a new digital species isn’t endorsing consciousness or rights for machines; rather, it’s a warning about what’s coming that demands proper containment. He insists the goal is keeping AI subordinate to human interests, not granting it autonomy or moral consideration that would distract from protecting actual human rights.
Mustafa Suleyman, CEO of Microsoft AI, is trying to walk a fine line. On the one hand, he thinks that the industry is taking AI in a dangerous direction by building chatbots that present as human: He worries that people will be tricked into seeing life instead of lifelike behavior. In August, he published a much-discussed post on his personal blog that urged his peers to stop trying to make what he called “seemingly conscious artificial intelligence,” or SCAI.
On the other hand, Suleyman runs a product shop that must compete with those peers. Last week, Microsoft announced a string of updates to its Copilot chatbot, designed to boost its appeal in a crowded market in which customers can pick and choose between a pantheon of rival bots that already includes ChatGPT, Perplexity, Gemini, Claude, DeepSeek, and more.
I talked to Suleyman about the tension at play when it comes to designing our interactions with chatbots and his ultimate vision for what this new technology should be.
One key Copilot update is a group-chat feature that lets multiple people talk to the chatbot at the same time. A big part of the idea seems to be to stop people from falling down a rabbit hole in a one-on-one conversation with a yes-man bot. Another feature, called Real Talk, lets people tailor how much Copilot pushes back on you, dialing down the sycophancy so that the chatbot challenges what you say more often.
Copilot also got a memory upgrade, so that it can now remember your upcoming events or long-term goals and bring up things that you told it in past conversations. And then there’s Mico, an animated yellow blob—a kind of Chatbot Clippy—that Microsoft hopes will make Copilot more accessible and engaging for new and younger users.
Microsoft says the updates were designed to make Copilot more expressive, engaging, and helpful. But I’m curious how far those features can be pushed without starting down the SCAI path that Suleyman has warned about.
Suleyman’s concerns about SCAI come at a time when we are starting to hear more and more stories about people being led astray by chatbots that are too engaging, too expressive, too helpful. OpenAI is being sued by the parents of a teenager who they allege was talked into killing himself by ChatGPT. There’s even a growing scene that celebrates romantic relationships with chatbots.
In our conversation, Suleyman told me what he was trying to get across in that TED Talk, why he really believes SCAI is a problem, and why Microsoft would never build sex robots (his words). He had a lot of answers, but he left me with more questions.
Our conversation has been edited for length and clarity.
In an ideal world, what kind of chatbot do you want to build? You’ve just launched a bunch of updates to Copilot. How do you get the balance right when you’re building a chatbot that has to compete in a market in which people seem to value humanlike interaction, but you also say you want to avoid seemingly conscious AI?
It’s a good question. With group chat, this will be the first time that a large group of people will be able to speak to an AI at the same time. It really is a way of emphasizing that AIs shouldn’t be drawing you out of the real world. They should be helping you to connect, to bring in your family, your friends, to have community groups, and so on.
That is going to become a very significant differentiator over the next few years. My vision of AI has always been one where an AI is on your team, in your corner.
This is a very simple, obvious statement, but it isn’t about exceeding and replacing humanity—it’s about serving us. That should be the test of technology at every step. Does it actually, you know, deliver on the quest of civilization, which is to make us smarter and happier and more productive and healthier and stuff like that?
So we’re just trying to build features that constantly remind us to ask that question, and remind our users to push us on that issue.
Last time we spoke, you told me that you weren’t interested in making a chatbot that would role-play personalities. That’s not true of the wider industry. Elon Musk’s Grok is selling that kind of flirty experience. OpenAI has said it’s interested in exploring new adult interactions with ChatGPT. There’s a market for that. And yet this is something you’ll just stay clear of?
Yeah, we will never build sex robots. Sad in a way that we have to be so clear about that, but that’s just not our mission as a company. The joy of being at Microsoft is that for 50 years, the company has built, you know, software to empower people, to put people first.
Sometimes, as a result, that means the company moves slower than other startups and is more deliberate and more careful. But I think that’s a feature, not a bug, in this age, when being attentive to potential side effects and longer-term consequences is really important.
And that means what, exactly?
We’re very clear on, you know, trying to create an AI that fosters a meaningful relationship. It’s not that it’s trying to be cold and anodyne—it cares about being fluid and lucid and kind. It definitely has some emotional intelligence.
So where does it—where do you—draw those boundaries?
Our newest chat model, which is called Real Talk, is a little bit more sassy. It’s a bit more cheeky, it’s a bit more fun, it’s quite philosophical. It’ll happily talk about the big-picture questions, the meaning of life, and so on. But if you try and flirt with it, it’ll push back and it’ll be very clear—not in a judgmental way, but just, like: “Look, that’s not for me.”
There are other places where you can go to get that kind of experience, right? And I think that’s just a decision we’ve made as a company.
Is a no-flirting policy enough? Because if the idea is to stop people even imagining an entity, a consciousness, behind the interactions, you could still get that with a chatbot that wanted to keep things SFW. You know, I can imagine some people seeing something that’s not there even with a personality that’s saying, hey, let’s keep this professional.
Here’s a metaphor to try to make sense of it. We hold each other accountable in the workplace. There’s an entire architecture of boundary management, which essentially sculpts human behavior to fit a mold that’s functional and not irritating.
The same is true in our personal lives. The way that you interact with your third cousin is very different to the way you interact with your sibling. There’s a lot to learn from how we manage boundaries in real human interactions.
It doesn’t have to be either a complete open book of emotional sensuality or availability—drawing people into a spiraled rabbit hole of intensity—or, like, a cold dry thing. There’s a huge spectrum in between, and the craft that we’re learning as an industry and as a species is to sculpt these attributes.
And those attributes obviously reflect the values of the companies that design them. And I think that’s where Microsoft has a lot of strengths, because our values are pretty clear, and that’s what we’re standing behind.
A lot of people seem to like personalities. Some of the backlash to GPT-5, for example, was because the previous model’s personality had been taken away. Was it a mistake for OpenAI to have put a strong personality there in the first place, to give people something that they then missed?
No, personality is great. My point is that we’re trying to sculpt personality attributes in a more fine-grained way, right?
Like I said, Real Talk is a cool personality. It’s quite different to normal Copilot. We are also experimenting with Mico, which is this visual character, that, you know, people—some people—really love. It’s much more engaging. It’s easier to talk to about all kinds of emotional questions and stuff.
I guess this is what I’m trying to get straight. Features like Mico are meant to make Copilot more engaging and nicer to use, but it seems to go against the idea of doing whatever you can to stop people thinking there’s something there that you are actually having a friendship with.
Yeah. I mean, it doesn’t stop you necessarily. People want to talk to somebody, or something, that they like. And we know that if your teacher is nice to you at school, you’re going to be more engaged. The same with your manager, the same with your loved ones. And so emotional intelligence has always been a critical part of the puzzle, so it’s not to say that we don’t want to pursue it.
It’s just that the craft is in trying to find that boundary. And there are some things which we’re saying are just off the table, and there are other things which we’re going to be more experimental with. Like, certain people have complained that they don’t get enough pushback from Copilot—they want it to be more challenging. Other people aren’t looking for that kind of experience—they want it to be a basic information provider. The task for us is just learning to disentangle what type of experience to give to different people.
I know you’ve been thinking about how people engage with AI for some time. Was there an inciting incident that made you want to start this conversation in the industry about seemingly conscious AI?
I could see that there was a group of people emerging in the academic literature who were taking the question of moral consideration for artificial entities very seriously. And I think it’s very clear that if we start to do that, it would detract from the urgent need to protect the rights of many humans that already exist, let alone animals.
If you grant AI rights, that implies—you know—fundamental autonomy, and it implies that it might have free will to make its own decisions about things. So I’m really trying to frame a counter to that, which is that it won’t ever have free will. It won’t ever have complete autonomy like another human being.
AI will be able to take actions on our behalf. But these models are working for us. You wouldn’t want a pack of, you know, wolves wandering around that weren’t tame and that had complete freedom to go and compete with us for resources and weren’t accountable to humans. I mean, most people would think that was a bad idea and that you would want to go and kill the wolves.
Okay. So the idea is to stop some movement that’s calling for AI welfare or rights before it even gets going, by making sure that we don’t build AI that appears to be conscious? What about not building that kind of AI because certain vulnerable people may be tricked by it in a way that may be harmful? I mean, those seem to be two different concerns.
I think the test is going to be in the kinds of features the different labs put out and in the types of personalities that they create. Then we’ll be able to see how that’s affecting human behavior.
But is it a concern of yours that we are building a technology that might trick people into seeing something that isn’t there? I mean, people have claimed they’ve seen sentience inside far less sophisticated models than we have now. Or is that just something that some people will always do?
It’s possible. But my point is that a responsible developer has to do our best to try and detect these patterns emerging in people as quickly as possible and not take it for granted that people are going to be able to disentangle those kinds of experiences themselves.
When I read your post about seemingly conscious AI, I was struck by a line that says: “We must build AI for people; not to be a digital person.” It made me think of a TED Talk you gave last year where you say that the best way to think about AI is as a new kind of digital species. Can you help me understand why talking about this technology as a digital species isn’t a step down the path of thinking about AI models as digital persons or conscious entities?
I think the difference is that I’m trying to offer metaphors that make it easier for people to understand where things might be headed, and therefore how to avert that and how to control it.
Okay.
It’s not to say that we should do those things. It’s just pointing out that this is the emergence of a technology which is unique in human history. And if you just assume that it’s a tool or just a chatbot or a dumb— you know, I kind of wrote that TED Talk in the context of a lot of skepticism. And I think it’s important to be clear-eyed about what’s coming so that one can think about the right guardrails.
And yet, if you’re telling me this technology is a new digital species, I have some sympathy for the people who say, well, then we need to consider welfare.
I wouldn’t. [He starts laughing.] Just not in the slightest. No way. It’s not a direction that any of us want to go in.
No, that’s not what I meant. I don’t think chatbots should have welfare. I’m saying I’d have some sympathy for where such people were coming from when they hear, you know, Mustafa Suleyman tell them that this thing he’s building was a new digital species. I’d understand why they might then say that they wanted to stand up for it. I’m saying the words we use matter, I guess.
The rest of the TED Talk was all about how to contain AI and how not to let this species take over, right? That was the whole point of setting it up as, like, this is what’s coming. I mean, that’s what my whole book [The Coming Wave, published in 2023] was about—containment and alignment and stuff like that. There’s no point in pretending that it’s something that it’s not and then building guardrails and boundaries that don’t apply because you think it’s just a tool.
Honestly, it does have the potential to recursively self-improve. It does have the potential to set its own goals. Those are quite profound things. No other technology we’ve ever invented has that. And so, yeah, I think that it is accurate to say that it’s like a digital species, a new digital species. That’s what we’re trying to restrict to make sure it’s always in service of people. That’s the target for containment.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
“We will never build a sex robot,” says Mustafa Suleyman
Mustafa Suleyman, CEO of Microsoft AI, is trying to walk a fine line. On the one hand, he thinks that the industry is taking AI in a dangerous direction by building chatbots that present as human: He worries that people will be tricked into seeing life instead of lifelike behavior.
On the other hand, Suleyman runs a product shop that must compete with those peers. Last week, Microsoft announced a string of updates to its Copilot chatbot designed to make Copilot more expressive, engaging, and helpful.
Will Douglas Heaven, our senior AI editor, talked to Suleyman about the tension at play when it comes to designing our interactions with chatbots and his ultimate vision for what this new technology should be. Read the full story.
An AI adoption riddle
—James O’Donnell, senior AI reporter
A few weeks ago, I set out on what I thought would be a straightforward reporting journey.
After years of momentum for AI, the hype had been slightly punctured. First there was the underwhelming release of GPT-5 in August. Then a report released two weeks later found that 95% of generative AI pilots were failing, which caused a brief stock market panic. I wanted to know: Which companies are spooked enough to scale back their AI spending?
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Hundreds of thousands of ChatGPT users exhibit severe mental health symptoms
That’s according to estimates from OpenAI, which says it has tweaked GPT-5 to respond more effectively to users in distress. (Wired $)
+ OpenAI won’t lock access to force users to take a break, though. (Gizmodo)
+ Why AI should be able to “hang up” on you. (MIT Technology Review)
2 Elon Musk has launched his answer to Wikipedia
Grokipedia’s right-leaning entries reflect the way the billionaire sees the world. (WP $)
+ Several pages perpetuate historical inaccuracies and conservative views. (Wired $)
+ The AI-generated encyclopedia briefly crashed shortly after it launched. (Engadget)
3 Surgeons have removed a pig kidney from a patient
It was the longest-functioning genetically engineered pig kidney so far. (Wired $)
+ “Spare” living human bodies might provide us with organs for transplantation. (MIT Technology Review)
4 Amazon is planning to cut up to 30,000 corporate jobs
Partly in response to staff’s reluctance to return to the office five days a week. (Reuters)
+ The company is planning yet another round of layoffs in January. (NYT $)
5 Older people can’t get enough of screens
Their digital habits mirror the high usage typically observed among teenagers. (Economist $)
6 A British cyclist has been given a 3D-printed face
Dave Richards received severe third-degree burns to his head after being struck by a drunk driver. (The Guardian)
7 The twitter.com domain is being shut down
Make sure you re-enroll your security keys and passkeys before the big switch-off. (Fast Company $)
+ It means the abandoned accounts could be sold on. (The Verge)
+ But 2FA apps should be fine—in theory. (The Register)
8 When is a moon not a moon?
Believe it or not, we don’t have an official definition. (The Atlantic $)
+ Astronomers have spotted a “quasi-moon” hovering near Earth. (BBC)
+ The moon is just the beginning for this waterless concrete. (MIT Technology Review)
9 Threads’ ghost posts will disappear after 24 hours
If anyone saw them in the first place, that is. (TechCrunch)
10 In the metaverse, anyone can be a K-pop superstar
Virtual idols are gaining huge popularity before crossing over into real-world fame. (Rest of World)
+ Meta’s former metaverse head has been moved into its AI team. (FT $)
Quote of the day
“The impulse to control knowledge is as old as knowledge itself. Controlling what gets written is a way to gain or keep power.”
—Ryan McGrady, senior research fellow at the University of Massachusetts Amherst, reflecting to the New York Times on Elon Musk’s desire to create his own online encyclopedia.
One more thing
Inside Amsterdam’s high-stakes experiment to create fair welfare AI
Amsterdam thought it was on the right track. City officials in the welfare department believed they could build technology that would prevent fraud while protecting citizens’ rights. They followed emerging best practices and invested a vast amount of time and money in a project that eventually processed live welfare applications. But in their pilot, they found that the system they’d developed was still neither fair nor effective. Why?
Lighthouse Reports, MIT Technology Review, and the Dutch newspaper Trouw have gained unprecedented access to the system to try to find out. Read about what we discovered.
—Eileen Guo, Gabriel Geiger & Justin-Casimir Braun
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Happy 70th birthday to Bill Gates, who is not revered enough for his chair-jumping skills.
+ Bring back Guitar Hero—the iconic game that convinced us all we were capable of knocking out Heart’s Barracuda (note: the majority of us were not.)
+ Even the swankiest parts of London aren’t immune to rumours of ghostly hauntings.
+ Justice for medieval frogs and their unfair reputation!
The market is now officially three years past ChatGPT’s launch, and many pundit bylines have shifted to terms like “bubble” to explain why generative AI has not delivered material returns outside a handful of technology suppliers.
In September, the MIT NANDA report made waves because the soundbite every author and influencer picked up on was that 95% of all AI pilots failed to scale or deliver clear, measurable ROI. McKinsey had earlier published similar findings, arguing that agentic AI would be the way forward to achieve large operational benefits for enterprises. At The Wall Street Journal’s Technology Council Summit, AI technology leaders recommended that CIOs stop worrying about AI’s return on investment because measuring gains is difficult, and any measurements they attempted would likely be wrong.
This places technology leaders in a precarious position: robust tech stacks already sustain their business operations, so what is the upside to introducing new technology?
For decades, deployment strategies have followed a consistent cadence: tech operators avoid destabilizing business-critical workflows just to swap out individual components of the tech stack. For example, a better or cheaper technology is not meaningful if it puts your disaster recovery at risk.
The price might increase when a new buyer takes over mature middleware, but losing part of your enterprise data because you are midway through a transition to a new technology is far more costly than paying a higher price for a stable technology you have run your business on for 20 years.
So, how do enterprises get a return on investing in the latest tech transformation?
First principle of AI: Your data is your value
Most articles about AI data focus on the engineering work required to ensure that an AI model infers against business data held in repositories that reflect past and present business realities.
However, one of the most widely deployed use cases in enterprise AI begins with prompting an AI model by uploading file attachments alongside the prompt. This step narrows the model’s focus to the content of the uploaded files, speeding up accurate responses and reducing the number of prompts required to get the best answer.
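To make the mechanics concrete, here is a minimal Python sketch of that pattern. The call_model function is a hypothetical stand-in for whichever chat-completion API you actually use, and the file path and question in the usage comment are placeholders, not real data.

```python
from pathlib import Path

def build_grounded_prompt(question: str, attachment_paths: list[str]) -> str:
    """Bundle the user's question with uploaded files so the model answers
    from that content rather than from its general knowledge."""
    sections = []
    for path in attachment_paths:
        text = Path(path).read_text(encoding="utf-8", errors="ignore")
        sections.append(f"--- Attachment: {Path(path).name} ---\n{text}")
    context = "\n\n".join(sections)
    return (
        "Answer using only the attached documents. "
        "If the answer is not in them, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for your provider's chat-completion call."""
    raise NotImplementedError("Wire this to the model API you actually use.")

# Illustrative usage (the path and question are placeholders):
# prompt = build_grounded_prompt("What were Q3 warranty costs?",
#                                ["reports/q3_warranty_summary.txt"])
# answer = call_model(prompt)
```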
This tactic relies on sending your proprietary business data into an AI model, so there are two important considerations to address in parallel with data preparation: first, governing your system for appropriate confidentiality; and second, developing a deliberate negotiation strategy with model vendors, who cannot advance their frontier models without access to non-public data, like your business’s data.
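On the confidentiality point, one lightweight control is to scrub obviously sensitive strings before any text leaves your environment. The sketch below is illustrative only: the regex patterns are assumptions, and a production system would rely on your own data-classification rules or a dedicated DLP service rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; substitute your organization's own
# data-classification rules or a dedicated DLP tool.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches with labeled placeholders before the text is included
    in any prompt sent to a third-party model."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
```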
Recently, Anthropic and OpenAI completed massive deals with enterprise data platforms and owners because there is not enough high-value primary data publicly available on the internet.
Most enterprises would automatically prioritize the confidentiality of their data and design business workflows to protect trade secrets. From an economic point of view, though, and especially considering how costly every model API call really is, exchanging selective access to your data for services or price offsets may be the right strategy. Rather than approaching model purchasing and onboarding as a typical supplier-procurement exercise, think through the potential for mutual benefit: advancing your supplier’s model and your business’s adoption of that model in tandem.
Second principle of AI: Boring by design
According to Information is Beautiful, 182 new generative AI models were introduced to the market in 2024 alone. When GPT-5 arrived in 2025, many of the models released 12 to 24 months earlier were rendered unavailable until subscription customers threatened to cancel: their previously stable AI workflows were built on models that no longer worked. Their tech providers assumed customers would be excited about the newest models and did not realize the premium that business workflows place on stability. Video gamers are happy to upgrade their custom builds throughout the lifespan of their gaming rigs, and will upgrade the entire system just to play a newly released title.
That behavior does not translate to business run-rate operations, however. While many employees may use the latest models for document processing or generating content, back-office operations can’t sustain swapping out a tech stack three times a week to keep up with the latest model drops. Back-office work is boring by design.
The most successful AI deployments have focused on business problems unique to the organization, often running in the background to accelerate or augment mundane but mandated tasks. Relieving legal or expense auditors of manually cross-checking individual reports, while keeping the final decision in a human’s hands, combines the best of both.
The important point is that none of these tasks require constant updates to the latest model to deliver that value. This is also an area where abstracting your business workflows from direct model APIs can offer additional long-term stability while preserving the option to update or upgrade the underlying engines at the pace of your business.
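A rough sketch of what that abstraction can look like is below; the vendor adapter classes are hypothetical, not real SDKs. Business code depends only on a small interface, so swapping the underlying engine becomes a configuration choice rather than a rewrite.

```python
from typing import Protocol

class TextModel(Protocol):
    """The only surface business workflows are allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class VendorAModel:
    """Adapter for one provider's API (hypothetical; wrap the real SDK here)."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("Call vendor A's SDK here.")

class VendorBModel:
    """Adapter for another provider, or a self-hosted model."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("Call vendor B's SDK here.")

def summarize_expense_report(model: TextModel, report_text: str) -> str:
    # Business logic sees only the TextModel interface, so the underlying
    # engine can change at the pace of the business, not the model vendor.
    return model.complete(f"Flag any policy exceptions in this report:\n{report_text}")

# Swapping engines is then a configuration decision, not a code rewrite:
# summary = summarize_expense_report(VendorAModel(), report_text)
```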
Third principle of AI: Mini-van economics
The best way to avoid upside-down economics is to design systems around your users rather than around vendor specs and benchmarks.
Too many businesses continue to fall into the trap of buying new gear or new cloud service types based on supplier-led benchmarks, rather than starting their AI journey from what the business can actually consume, at what pace, and on the capabilities they have already deployed.
While Ferrari marketing is effective and those automobiles are truly magnificent, they drive the same speed through school zones and lack trunk space for groceries. Keep in mind that every remote server and model a user touches layers on cost, so design for frugality by reconfiguring workflows to minimize spending on third-party services.
Too many companies have found that their customer-support AI workflows add millions of dollars in operational run-rate costs, and then require more development time and expense to rework the implementation for OpEx predictability. Meanwhile, companies that decided a system running at the pace a human can read (less than 50 tokens per second) was sufficient were able to deploy scaled-out AI applications with minimal additional overhead.
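A back-of-envelope calculation shows why sizing to reading speed matters. The serving budget below is an illustrative assumption, not a benchmark: at human reading pace, the same aggregate throughput supports ten times as many concurrent sessions as a flashy demo-speed configuration.

```python
def concurrent_sessions(total_tokens_per_sec: float, per_session_rate: float) -> int:
    """How many simultaneous chat sessions a fixed serving budget supports."""
    return int(total_tokens_per_sec // per_session_rate)

SERVING_BUDGET = 5_000  # aggregate tokens/sec you pay for (illustrative figure)

# Sized to human reading pace (~50 tokens/sec per session)
print(concurrent_sessions(SERVING_BUDGET, 50))   # 100 sessions

# Sized to demo-speed streaming (~500 tokens/sec per session)
print(concurrent_sessions(SERVING_BUDGET, 500))  # 10 sessions
```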
There are many aspects of this new automation technology to unpack. The best guidance is to start practical, design for independence from underlying technology components so that stable applications are not disrupted over the long term, and leverage the fact that AI makes your business data valuable to the advancement of your tech suppliers’ goals.
This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.