Google Q3 Report: AI Mode, AI Overviews Lift Total Search Usage via @sejournal, @MattGSouthern

Google used its Q3 earnings call to argue that AI features are expanding search usage rather than cannibalizing it.

CEO Sundar Pichai described an “expansionary moment for Search,” adding that Google’s AI experiences “highlight the web” and send “billions of clicks to sites every day.”

Pichai said overall queries and commercial queries both grew year over year, and that the growth rate increased in Q3 versus Q2, largely driven by AI Overviews and AI Mode.

What Did Google Report In Its Q3 Earnings?

AI Mode & AI Overviews

Pichai reported “strong and consistent” week-over-week growth for AI Mode in the U.S., with queries doubling in the quarter.

He said Google rolled AI Mode out globally across 40 languages, reached over 75 million daily active users, and shipped more than 100 improvements in Q3.

He also said AI Mode is already driving “incremental total query growth for Search.”

Pichai reiterated that AI Overviews “drive meaningful query growth,” noting the effect was “even stronger” in Q3 and more pronounced among younger users.

Revenue: By The Numbers

Alphabet posted $102.3 billion in revenue, its first $100B quarter. “Google Search & other” revenue reached $56.6 billion, up from $49.4 billion a year earlier.

YouTube ads revenue reached $10.26 billion in Q3. Pichai said YouTube “has remained number one in streaming watch time in the U.S. for more than two years, according to Nielsen.”

Pichai added that in the U.S. “Shorts now earn more revenue per watch hour than traditional in-stream.”

The quarter also included a $3.5 billion European Commission fine that Alphabet notes when discussing margins. Excluding that charge, operating margin was 33.9%.

Why It Matters

Google is telling Wall Street that AI surfaces expand search rather than replace it. If that holds, the company has reason to put AI Mode and AI Overviews in front of more queries.

The near-term implication for marketers is a distribution shift inside Google, not a pullback from search.

What’s missing is as important as what was said. Google didn’t share outbound click share from AI experiences or new reporting to track them. Expect adoption to grow while measurement lags. Teams will be relying on their own analytics to judge impact.

The revenue backdrop supports continued investment. “Search & other” rose year over year and Google highlighted growth in commercial queries. Paid budgets are likely to remain with Google as AI-led sessions take up a larger share of usage.

Looking Ahead

Google plans to keep pushing AI-led search surfaces. Pichai said the company is “looking forward to the release of Gemini 3 later this year,” which would give AI Mode and AI Overviews a stronger model foundation if the timing holds.

Google described Chrome as “a browser powered by AI” with deeper integrations to Gemini and AI Mode and “more agentic capabilities coming soon.”

The company also raised 2025 capex guidance to $91–$93 billion to meet AI demand, which supports continued investment in search infrastructure and features.


Featured Image: Photo Agency/Shutterstock

DeepSeek may have found a new way to improve AI’s ability to remember

  • Memory Through Images: DeepSeek’s new OCR model stores information as visual rather than text tokens, a technique that allows it to retain more data. This approach could drastically reduce computing costs and carbon footprint while improving AI’s ability to “remember.”
  • Addressing Context Rot: The model works a bit like human memory, storing older or less critical information in slightly blurred form to save space. This could help address the fact that current AI systems forget or muddle information over long conversations, a problem dubbed “context rot.”
  • DeepSeek Disruption: DeepSeek shocked the AI industry with its efficient DeepSeek-R1 reasoning model in January, and is again pushing boundaries. The OCR system can generate over 200,000 training data pages daily on a single GPU, potentially addressing the industry’s severe shortage of quality training text.

An AI model released by the Chinese AI company DeepSeek uses new techniques that could significantly improve AI’s ability to “remember.”

Released last week, the optical character recognition (OCR) model works by extracting text from an image and turning it into machine-readable words. This is the same technology that powers scanner apps, translation of text in photos, and many accessibility tools. 

OCR is already a mature field with numerous high-performing systems, and according to the paper and some early reviews, DeepSeek’s new model performs on par with top models on key benchmarks.

But researchers say the model’s main innovation lies in how it processes information—specifically, how it stores and retrieves memories. Improving how AI models “remember” information could reduce the computing power they need to run, thus mitigating AI’s large (and growing) carbon footprint. 

Currently, most large language models break text down into thousands of tiny units called tokens. This turns the text into representations that models can understand. However, these tokens quickly become expensive to store and compute with as conversations with end users grow longer. When a user chats with an AI for lengthy periods, this challenge can cause the AI to forget things it’s been told and get information muddled, a problem some call “context rot.”
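As a rough illustration of why long conversations get expensive (using a naive whitespace "tokenizer" as a stand-in for a real model tokenizer, which counts differently), note that each turn resends the accumulated history, so per-turn token cost grows steadily:

```python
# Illustrative only: a naive whitespace "tokenizer" standing in for a real
# subword tokenizer. Absolute counts differ, but the growth pattern holds:
# every turn re-reads the entire conversation history.

def count_tokens(text: str) -> int:
    return len(text.split())

history = []
costs = []
for turn in ["Hello, summarize my notes.",
             "Now translate the summary into French.",
             "Now shorten it to one sentence."]:
    history.append(turn)
    # The model processes the full history on each turn, not just the new text.
    costs.append(count_tokens(" ".join(history)))

print(costs)  # [4, 10, 16] -- cost per turn climbs as history accumulates
```

This compounding cost is what makes token storage the bottleneck that DeepSeek's visual-token approach targets.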

The new methods developed by DeepSeek (and published in its latest paper) could help to overcome this issue. Instead of storing words as tokens, its system packs written information into image form, almost as if it’s taking a picture of pages from a book. This allows the model to retain nearly the same information while using far fewer tokens, the researchers found. 

Essentially, the OCR model is a test bed for these new methods that permit more information to be packed into AI models more efficiently. 

Besides using visual tokens instead of just text tokens, the model is built on a type of tiered compression that is not unlike how human memories fade: Older or less critical content is stored in a slightly more blurry form in order to save space. Despite that, the paper’s authors argue, this compressed content can still remain accessible in the background while maintaining a high level of system efficiency.
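DeepSeek's actual pipeline renders text into images; purely to illustrate the tiered idea, the sketch below keeps older "pages" at progressively coarser resolution via simple average pooling, so they occupy fewer storage cells while remaining partially recoverable:

```python
# Illustrative sketch of tiered compression (not DeepSeek's implementation):
# older content is retained at coarser resolution, so it costs fewer "tokens".

def downsample(grid, factor):
    """Average-pool a 2D grid of intensities by `factor` in each dimension."""
    h, w = len(grid), len(grid[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [grid[y][x]
                     for y in range(i, min(i + factor, h))
                     for x in range(j, min(j + factor, w))]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A "page" rendered as an 8x8 grid of pixel intensities.
page = [[(i * 8 + j) % 256 for j in range(8)] for i in range(8)]

# Tier 0 (recent): full resolution. Tier 1: 2x pooled. Tier 2: 4x pooled.
tiers = [page, downsample(page, 2), downsample(page, 4)]
cells = [len(t) * len(t[0]) for t in tiers]
print(cells)  # [64, 16, 4] -- older tiers cost a fraction of the storage
```

The trade-off mirrors what the paper's authors describe: the blurred tiers lose fine detail but stay accessible at a fraction of the cost.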

Text tokens have long been the default building block in AI systems. Using visual tokens instead is unconventional, and as a result, DeepSeek’s model is quickly capturing researchers’ attention. Andrej Karpathy, the former Tesla AI chief and a founding member of OpenAI, praised the paper on X, saying that images may ultimately be better than text as inputs for LLMs. Text tokens might be “wasteful and just terrible at the input,” he wrote. 

Manling Li, an assistant professor of computer science at Northwestern University, says the paper offers a new framework for addressing the existing challenges in AI memory. “While the idea of using image-based tokens for context storage isn’t entirely new, this is the first study I’ve seen that takes it this far and shows it might actually work,” Li says.

The method could open up new possibilities in AI research and applications, especially in creating more useful AI agents, says Zihan Wang, a PhD candidate at Northwestern University. He believes that since conversations with AI are continuous, this approach could help models remember more and assist users more effectively.

The technique can also be used to produce more training data for AI models. Model developers are currently grappling with a severe shortage of quality text to train systems on. But the DeepSeek paper says that the company’s OCR system can generate over 200,000 pages of training data a day on a single GPU.

The model and paper, however, are only an early exploration of using image tokens rather than text tokens for AI memorization. Li says she hopes to see visual tokens applied not just to memory storage but also to reasoning. Future work, she says, should explore how to make AI’s memory fade in a more dynamic way, akin to how we can recall a life-changing moment from years ago but forget what we ate for lunch last week. Currently, even with DeepSeek’s methods, AI tends to forget and remember in a very linear way—recalling whatever was most recent, but not necessarily what was most important, she says. 

Despite its attempts to keep a low profile, DeepSeek, based in Hangzhou, China, has built a reputation for pushing the frontier in AI research. The company shocked the industry at the start of this year with the release of DeepSeek-R1, an open-source reasoning model that rivaled leading Western systems in performance despite using far fewer computing resources. 

The AI Hype Index: Data centers’ neighbors are pivoting to power blackouts

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry.

Just about all businesses these days seem to be pivoting to AI, even when they don’t seem to know exactly why they’re investing in it—or even what it really does. “Optimization,” “scaling,” and “maximizing efficiency” are convenient buzzwords bandied about to describe what AI can achieve in theory, but for most of AI companies’ eager customers, the hundreds of billions of dollars they’re pumping into the industry aren’t adding up. And maybe they never will.

This month’s news doesn’t exactly cast the technology in a glowing light either. A bunch of NGOs and aid agencies are using AI models to generate images of fake suffering people to guilt their Instagram followers. AI translators are pumping out low-quality Wikipedia pages in the languages most vulnerable to going extinct. And thanks to the construction of new AI data centers, lots of neighborhoods living in their shadows are getting forced into their own sort of pivots—fighting back against the power blackouts and water shortages the data centers cause. How’s that for optimization?

The Download: Boosting AI’s memory, and data centers’ unhappy neighbors

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

DeepSeek may have found a new way to improve AI’s ability to remember

The news: An AI model released by Chinese AI company DeepSeek uses new techniques that could significantly improve AI’s ability to “remember.”

How it works: The optical character recognition model works by extracting text from an image and turning it into machine-readable words. This is the same technology that powers scanner apps, translation of text in photos, and many accessibility tools.

Why it matters: Researchers say the model’s main innovation lies in how it processes information—specifically, how it stores and retrieves data. Improving how AI models “remember” could reduce how much computing power they need to run, thus mitigating AI’s large (and growing) carbon footprint. Read the full story.

—Caiwei Chen

The AI Hype Index: Data centers’ neighbors are pivoting to power blackouts

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

Roundtables: seeking climate solutions in turbulent times

Yesterday we held a subscriber-only conversation exploring how companies are pursuing climate solutions amid political shifts in the US.

Our climate reporters James Temple and Casey Crownhart sat down with our science editor Mary Beth Griggs to dig into the most promising climate technologies right now. Watch the session back here!

MIT Technology Review Narrated: Supershoes are reshaping distance running

“Supershoes”—which combine a lightweight, energy-returning foam with a carbon-fiber plate for stiffness—have been behind every broken world record in distances from 5,000 meters to the marathon since 2020.

To some, this is a sign of progress—for both the field as a whole and for athletes’ bodies. Still, some argue that they’ve changed the sport too quickly.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Hurricane Melissa may be the Atlantic Ocean’s strongest on record
There’s little doubt in scientists’ minds that human-caused climate change is to blame. (New Scientist $)
+ While Jamaica is largely without power, no deaths have been confirmed. (BBC)
+ The hurricane is currently sweeping across Cuba. (NYT $)
+ Here’s what we know about hurricanes and climate change. (MIT Technology Review)

2 Texas is suing Tylenol over the Trump administration’s autism claims
Even though the claims are scientifically unfounded. (NY Mag $)
+ The lawsuit claims the firm violated Texas law by claiming the drug was safe. (WP $)

3 Two US Senators want to ban AI companions for minors
They want AI companies to implement age-verification processes, too. (NBC News)
+ The looming crackdown on AI companionship. (MIT Technology Review)

4 Trump’s “golden dome” plan is seriously flawed
It’s unlikely to offer anything like the protection he claims it will. (WP $)
+ Why Trump’s “golden dome” missile defense idea is another ripped straight from the movies. (MIT Technology Review)

5 The Trump administration is backing new nuclear plants
To—surprise surprise—power the AI boom. (NYT $)
+ The grid is straining to support the excessive demands for power. (Reuters)
+ Can nuclear power really fuel the rise of AI? (MIT Technology Review)

6 Uber’s next fleet of autonomous cars will contain Nvidia’s new chips
Which could eventually make it cheaper to hail a robotaxi. (Bloomberg $)
+ Nvidia is also working with a company called Lucid to bring autonomous cars to consumers. (Ars Technica)

7 Weight loss drugs are becoming more commonplace across the world
Semaglutide patents are due to expire in Brazil, China and India next year. (Economist $)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

8 More billionaires hail from America than any other nation
The majority of them have made their fortunes working in technology. (WSJ $)
+ China is closing in on America’s global science lead. (Bloomberg $)

9 Australian police are developing an AI tool to decode Gen Z slang
It’s in a bid to combat the rising networks of young men targeting vulnerable girls online. (The Guardian)

10 This robot housekeeper is controlled remotely by a human 🤖
Nothing weird about that at all… (WSJ $)
+ The humans behind the robots. (MIT Technology Review)

11 Cameo is suing OpenAI
It’s unhappy about Sora’s new Cameo feature. (Reuters)

Quote of the day

“I don’t believe we’re in an AI bubble.”

—Jensen Huang, Nvidia’s CEO, conveniently dismisses the growing concerns around the AI hype train, Bloomberg reports.

One more thing

How to befriend a crow

Crows have become minor TikTok celebrities thanks to CrowTok, a small but extremely active niche on the social video app that has exploded in popularity over the past two years. CrowTok isn’t just about birds, though. It also often explores the relationships that corvids—a family of birds including crows, magpies, and ravens—develop with human beings.

They’re not the only intelligent birds around, but in general, corvids are smart in a way that resonates deeply with humans. But how easy is it to befriend them? And what can it teach us about attention, and patience, in a world that often seems to have little of either? Read the full story.

—Abby Ohlheiser

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Congratulations to Flavor Flav, who’s been appointed Team USA’s official hype man for the 2026 Winter Olympics!
+ Why are Spirographs so hypnotic? Answers on a postcard.
+ I love this story—and beautiful photos—celebrating 50 years of the World Gay Rodeo.
+ Axolotls really are remarkable little creatures.

Building a high performance data and AI organization (2nd edition)

Four years is a lifetime when it comes to artificial intelligence. Since the first edition of this study was published in 2021, AI’s capabilities have been advancing at speed, and the advances have not slowed since generative AI’s breakthrough. For example, multimodality—the ability to process information not only as text but also as audio, video, and other unstructured formats—is becoming a common feature of AI models. AI’s capacity to reason and act autonomously has also grown, and organizations are now starting to work with AI agents that can do just that.

Amid all the change, there remains a constant: the quality of an AI model’s outputs is only ever as good as the data that feeds it. Data management technologies and practices have also been advancing, but the second edition of this study suggests that most organizations are not leveraging those fast enough to keep up with AI’s development. As a result of that and other hindrances, relatively few organizations are delivering the desired business results from their AI strategy. No more than 2% of senior executives we surveyed rate their organizations highly in terms of delivering results from AI.

To determine the extent to which organizational data performance has improved as generative AI and other AI advances have taken hold, MIT Technology Review Insights surveyed 800 senior data and technology executives. We also conducted in-depth interviews with 15 technology and business leaders.

Key findings from the report include the following:

Few data teams are keeping pace with AI. Organizations are doing no better today at delivering on data strategy than in pre-generative AI days. Among those surveyed in 2025, 12% are self-assessed data “high achievers” compared with 13% in 2021. Shortages of skilled talent remain a constraint, but teams also struggle with accessing fresh data, tracing lineage, and dealing with security complexity—important requirements for AI success.

Partly as a result, AI is not fully firing yet. There are even fewer “high achievers” when it comes to AI. Just 2% of respondents rate their organizations’ AI performance highly today in terms of delivering measurable business results. In fact, most are still struggling to scale generative AI. While two thirds have deployed it, only 7% have done so widely.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Exclusive eBook: The Math on AI’s Energy Footprint

In this exclusive subscriber-only ebook you’ll learn how the emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.

by James O’Donnell and Casey Crownhart May 20, 2025

Table of contents

  • Part One: Making the model
  • Part Two: A query
  • Part Three: Fuel and emissions
  • Part Four: The future ahead

Related Content:

New Ecommerce Tools: October 29, 2025

This week’s rundown of new products and services for ecommerce merchants includes updates on agentic commerce, conversational shopping, customer feedback, cryptocurrencies, shipping tools, post-transaction ads, website builders, and autonomous site optimization.

Got an ecommerce product release? Email updates@practicalecommerce.com.

New Tools for Merchants

PayPal launches Agent Ready for AI-driven shopping. PayPal’s new agentic payments solution, Agent Ready, enables merchants to accept payments on AI platforms. Whether for conversational AI or through browser-automated experiences, Agent Ready powers fraud detection, buyer protection, and dispute resolution. Also, PayPal is introducing Store Sync to make merchants’ product data discoverable within leading AI channels, and purchasable through partner integrations such as Wix, Cymbio, Commerce (BigCommerce and Feedonomics), and Shopware.

Web page for PayPal Agentic Commerce

PayPal Agentic Commerce

OpenAI and PayPal partner on Instant Checkout and agentic commerce. PayPal will adopt the Agentic Commerce Protocol from OpenAI to expand payments in ChatGPT. Users can check out instantly with PayPal, which supports payment processing for merchants leveraging OpenAI Instant Checkout. The partnership opens PayPal’s wallet in Instant Checkout, offering funding options, buyer and seller protections, and post-purchase services. PayPal will also connect its merchant network to OpenAI, creating a platform for businesses and brands to sell within ChatGPT.

Amazon launches AI-powered shopping feature Help Me Decide. To help shoppers quickly choose between similar products, Amazon has announced a feature called Help Me Decide, which analyzes a shopper’s browsing activity, searches, purchase history, and preferences. Users can also access Help Me Decide by tapping “Keep shopping for” on the home page. Help Me Decide is available to U.S. consumers in the Amazon Shopping app and mobile browser.

Enterpret launches agentic customer feedback platform. Enterpret, a customer intelligence platform, has launched an agentic customer feedback tool to help companies resolve inquiries autonomously. According to Enterpret, the platform ingests feedback from more than 50 channels and continuously maps customer interactions across products, support, and community conversations. By replacing manual tagging and maintenance with adaptive AI, Enterpret enables teams to identify glitches affecting key customer segments, quantify potential revenue impact, and act in real time.

Home page of Enterpret

Enterpret

Bloomreach and Uniform unveil turnkey conversational commerce framework. Bloomreach, a personalization platform, and Uniform, a composable digital experience tool, have announced a turnkey AI solution that enables any brand to build conversational experiences. Built on AWS, the solution aggregates product catalogs from multiple commerce platforms through Uniform and enriches them with AI-driven tagging, normalization, and metadata optimization. The product data then flows into Bloomreach Clarity, where it powers a conversational shopping interface that allows consumers to browse, compare, and shop via natural dialogue.

WSPN launches Checkout, bringing stablecoin payments to ecommerce. Worldwide Stablecoin Payment Network, a provider of stablecoin infrastructure, has launched WSPN Checkout, a payment service that embeds stablecoin technology directly into ecommerce merchant acquiring. WSPN Checkout enables ecommerce platforms to accept stablecoin payments while offering flexible settlement options through licensed payment service providers. WSPN Checkout provides merchants with a complete payment infrastructure, featuring automated settlement, advanced reconciliation, and full API integration deployable within seven business days.

Shippo launches AI-powered tool to show when packages will arrive. Shippo, a shipping platform for ecommerce businesses, has launched Shippo Estimate, an AI-powered feature that estimates in advance when a package will arrive. Shippo Estimate analyzes millions of actual deliveries and shows clear, expected delivery dates for every package, allowing merchants to choose the optimal shipping option that balances speed and cost for every order. Shippo Estimate is available on the company’s Pro Plan.

Home page of Shippo

Shippo

Rokt partners with PayPal on the post-transaction experience. Rokt has announced that its post-purchase ad platform now integrates with PayPal. This integration displays advertisements on thank-you pages across PayPal and Venmo, as well as on merchant confirmation pages through Honey, the shopping-rewards application. Rokt says it uses advanced machine learning to unlock incremental revenue and drive engagement by presenting customers with relevant, high-value curated offers the moment the transaction is completed.

Alibaba launches AI chatbot service. Alibaba has launched an AI chatbot service and integrated it into its Quark app, a platform that began as a browser and is now the company’s flagship consumer application. The new free service allows shoppers to access a chatbot via text or voice for real-time information and services.

Shopee partners with Meta to enhance shopping through Facebook Live, Reels. Shopee, a Singapore-based ecommerce marketplace, has partnered with Meta to develop tools for Facebook users to discover and purchase products. Creators with affiliate accounts can visit Facebook Affiliate Partnerships and curate relevant products and tag affiliate listings directly in posts and Reels. The partnership will also leverage Facebook Live, adding Collaborative Ads featuring Shopee product catalogs in livestreams. The integration is live in Thailand, Indonesia, Vietnam, and the Philippines.

Affirm expands partnership with Worldpay for Platforms on embedded payments. Affirm, a pay-over-time network, has partnered with Worldpay for Platforms, a payment services provider. The deal will integrate Affirm into Worldpay’s embedded payments offering. Platforms can offer Affirm as a payment method to their merchants. Approved customers can split purchases into biweekly or monthly payments, from 30 days to 60 months, from $35 to $30,000.

Home page of Worldpay for Platforms

Worldpay for Platforms

AI-powered website builder Lovable integrates with Shopify. Lovable.dev, an AI-powered website builder, app developer, and designer using natural language prompts, has integrated with Shopify. Users can describe what they want to build, and Lovable will set it up — from product pages to shopping cart to checkout. Users can then claim the store on Shopify and publish.

Mastercard and PayPal partner on global agentic commerce. Mastercard Agent Pay, the company’s agentic payments platform, now integrates with PayPal’s wallet, enabling AI agents to complete transactions on behalf of PayPal users. PayPal will pilot the Mastercard Agent Pay Acceptance Framework and co-develop and test with agents and merchants. According to PayPal, the work will ensure compatibility with Mastercard and common agentic protocols, and enable AI agent verification and data exchange compatible with recently announced agentic protocols.

DHL launches consolidated clearance service for U.S. imports. DHL Global Forwarding, the air and ocean freight specialist of DHL Group, has launched its Consolidated Clearance Service for U.S. imports. Per DHL, the service offers a streamlined customs clearance process that consolidates multiple shipments under a single entry. It supports businesses transitioning from de minimis clearance for their U.S. imports to formal and informal entry.

Moonshot AI raises $10 million for AI-driven autonomous website optimization. Moonshot AI, a no-code website optimization platform, has announced $10 million in seed funding. Moonshot says its platform continuously analyzes user behavior, generates on-site experiences using generative AI, tests them live, and deploys optimized experiences automatically. Mighty Capital led the round, which included Oceans Ventures, Uncorrelated, Garuda Ventures, and Almaz Capital.

Home page for Moonshot AI

Moonshot AI

Chrome To Warn Users Before Loading HTTP Sites Starting Next Year via @sejournal, @MattGSouthern

Google Chrome will enable “Always Use Secure Connections” by default with the release of Chrome 154 in October 2026, the company announced.

The change means Chrome will ask for user permission before loading any public website that doesn’t use HTTPS encryption. Users will see a bypassable warning explaining the security risks of unencrypted connections.

Google is rolling out the feature in stages. Chrome 147 will enable it for over 1 billion Enhanced Safe Browsing users in April 2026. All Chrome users will get it by default six months later.

What’s Changing

Public Site Warning

The warning system applies exclusively to public websites. Chrome excludes private sites including local IP addresses, single-label hostnames, and internal shortlinks.
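Chrome's exact classification logic isn't spelled out in the announcement; purely as a rough sketch of the categories it names (private IP addresses, single-label hostnames), a check might look like the following. The function name and rules here are illustrative assumptions, not Chrome's implementation:

```python
import ipaddress

def is_private_destination(host: str) -> bool:
    """Rough sketch of the categories the announcement describes;
    Chrome's actual rules are more involved."""
    # Private, loopback, or link-local IP addresses
    # (e.g., 192.168.1.1, 127.0.0.1, 169.254.0.5).
    try:
        ip = ipaddress.ip_address(host)
        return ip.is_private or ip.is_loopback or ip.is_link_local
    except ValueError:
        pass  # not an IP literal; fall through to hostname checks
    # Single-label hostnames, such as internal shortlinks ("go", "intranet").
    return "." not in host

for host in ["192.168.1.1", "go", "example.com"]:
    print(host, is_private_destination(host))
```

Under this sketch, `192.168.1.1` and `go` would be exempt from the warning, while `example.com` over HTTP would trigger it.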

Chris Thompson and the Chrome Security Team wrote:

“HTTP navigations to private sites can still be risky, but are typically less dangerous than their public site counterparts because there are fewer ways for an attacker to take advantage of these HTTP navigations.”

Here’s an example of what the warning will look like:

Image Credit: Google

Warning Frequency

Chrome limits how often users see warnings for the same sites. The browser won’t repeatedly warn about regularly visited insecure sites.

Testing data shows the median user sees fewer than one warning per week. The 95th percentile user sees fewer than three warnings per week.
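Google hasn't published its throttling heuristics; one hedged sketch of per-site suppression is a browser remembering when it last warned about a host and skipping repeats within a time window. Everything below (class name, window length) is an illustrative assumption:

```python
import time

class WarningThrottle:
    """Illustrative sketch only; Chrome's real heuristics are unpublished."""

    def __init__(self, window_seconds=7 * 24 * 3600):
        self.window = window_seconds
        self.last_warned = {}  # host -> timestamp of last warning shown

    def should_warn(self, host, now=None):
        now = time.time() if now is None else now
        last = self.last_warned.get(host)
        if last is not None and now - last < self.window:
            return False  # warned recently; let the navigation proceed
        self.last_warned[host] = now
        return True

t = WarningThrottle()
print(t.should_warn("http-only.example", now=0))              # first visit
print(t.should_warn("http-only.example", now=3600))           # within window
print(t.should_warn("http-only.example", now=8 * 24 * 3600))  # window expired
```

A scheme like this keeps warnings rare for regularly visited insecure sites, consistent with the low per-user warning counts Google reports.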

Current HTTPS Adoption

HTTPS usage has plateaued at 95-99% of Chrome navigations across platforms. When excluding private sites, public HTTPS usage reaches 97-99% on most platforms.

Windows shows 98% HTTPS on public sites. Android and Mac exceed 99%. Linux reaches nearly 97%.

Why This Matters

You face security risks when clicking HTTP links. Attackers can hijack unencrypted navigations to load malware, exploitation tools, or phishing content.

Google’s transparency report shows HTTPS adoption stalled after rapid growth from 2015-2020. The remaining 1-5% of insecure traffic represents millions of navigations that create attack opportunities.

Website owners running HTTP-only sites have one year to migrate before Chrome warns their visitors.

You can enable “Always Use Secure Connections” today at chrome://settings/security to test how the warnings affect your site traffic.

Looking Ahead

Google continues outreach to companies responsible for the highest HTTP traffic volumes. Many sites use HTTP only for redirects to HTTPS destinations, creating an invisible security gap the new warnings will close.

Chrome plans additional work to reduce HTTPS adoption barriers for local network sites. The company introduced a local network access permission that allows HTTPS pages to communicate with private devices once users grant permission.

Users can disable warnings by turning off the “Always Use Secure Connections” setting. Enterprise and educational institutions can configure Chrome to meet their specific warning requirements.


Featured Image: Philo Athanasiou/Shutterstock

Google Labs & DeepMind Launch Pomelli AI Marketing Tool via @sejournal, @MattGSouthern

Pomelli, a Google Labs & DeepMind AI experiment, builds a “Business DNA” from your site and generates editable branded campaign assets for small businesses.

  • Pomelli scans your website to create a “Business DNA” profile.
  • It uses the created profile to keep content consistent across channels.
  • It suggests campaign ideas and generates editable marketing assets.

Why The Build Process Of Custom GPTs Matters More Than The Technology Itself

When Google introduced the transformer architecture in its 2017 paper “Attention Is All You Need,” few realized how much it would help transform digital work. Transformer architecture laid the foundations for today’s GPTs, which are now part of our daily work in SEO and digital marketing.

Search engines have used machine learning for decades, but it was the rise of generative AI that made many of us actively explore AI. AI platforms and tools like custom GPTs are already influencing how we research keywords, generate content ideas, and analyze data.

The real value, however, is not in using these tools to cut corners. It lies in designing them intentionally, aligning them with business goals, and ensuring they serve users’ needs.

This article is not a tutorial on how to build GPTs. I share why the build process itself matters, what I have learned so far, and how SEOs can use this product mindset to think more strategically in the age of AI.

From Barriers To Democratization

Not long ago, building tools without coding experience meant relying on developers, dealing with long lead times, and waiting for vendors to release new features. That has changed. The democratization of technology has lowered the barriers to entry, making it possible for anyone with curiosity to experiment with building tools like custom GPTs. At the same time, expectations have risen: we now expect tools to be intuitive, efficient, and genuinely useful.

This is one reason technical skills still matter. But they’re not enough on their own. What matters more, in my opinion, is how we apply them. Are we solving a real problem? Are we creating workflows that align with business needs?

The strategic questions SEOs should be asking are no longer just “Can I build this?” but:

  • Should I build this?
  • What problem am I solving, and for whom?
  • What’s the ultimate goal?

Why The Build Process Matters

Building a custom GPT is straightforward. Anyone can add a few instructions and click “save.” What really matters is what happens before and after: defining the audience, identifying the problem, scoping the work realistically, testing and refining outputs, and aligning them with business objectives.

In many ways, this is what good marketing has always been about: understanding the audience, defining their needs, and designing solutions that meet them.

As an international SEO, I’ve often seen cultural relevance and digital accessibility treated as afterthoughts. OpenAI’s custom GPTs offered me a way to explore whether AI could help address these challenges, especially since the platform is accessible to those of us without coding expertise.

What began as a single project to improve cultural relevance in global SEO soon evolved into two separate GPTs when I realized the scope was larger than I could manage at the time.

That change wasn’t a failure, but a part of the process that led me toward a better solution.

Case Study: 2 GPTs, 1 Lesson

The Initial Idea

My initial idea was to build a custom GPT that could generate content ideas tailored to the UK, US, Canada, and Australia, taking both linguistic and cultural nuances into account.

As an international SEO, I know it is hard to engage global audiences who expect personalized experiences. Translation alone is not enough. Content must be linguistically accurate and contextually relevant.

This mirrors the wider shift in search itself. Users now expect personalized, context-driven results, and search engines are moving in that same direction.

A Change In Direction

As I began building, I quickly realized that the scope was bigger than expected. Capturing cultural nuance across four different markets while also learning how to build and refine GPTs required more time than I could commit at that moment.

Rather than abandoning the project, I reframed it as a minimum viable product. I revisited the scope and shifted focus to another important challenge with a more consistent set of requirements: digital accessibility.

The accessibility GPT was designed to flag issues, suggest inclusive phrasing, and support internal advocacy. It adapted outputs to different roles, so SEOs, marketers, and project managers could each use it in relevant ways in their day-to-day work.

This wasn’t giving up on the content project. It was a deliberate choice to learn from one use case and apply those lessons to the next.

The Outcome

Working on the accessibility GPT first helped me think more carefully about scope and validation, which paid off.

As accessibility requirements are more consistent than cultural nuance, it was easier to refine prompts and test role-specific outputs, ensuring an inclusive, non-judgmental tone.

I shared the prototype with other SEOs and accessibility advocates, and their feedback was invaluable. Although generally positive, they pointed out inconsistencies I hadn’t seen, including in how I described the prompt in the GPT store.

After all, accessibility is not just about alt text or color contrast. It’s about how information is presented.

Once the accessibility GPT was running, I went back to the cultural content GPT, better prepared, with clearer expectations and a stronger process.

The key takeaway here is that the value lies not only in the finished product, but in the process of building, testing, and refining.

Risks And Challenges Along The Way

Not every risk became an issue, but the process brought its share of challenges.

The biggest was underestimating time and scope, which I solved by revisiting the plan and starting smaller. There were also platform limitations: ongoing model development, AI fatigue, and hallucinations. OpenAI’s own research has acknowledged that hallucinations are mathematically unavoidable. The best response is to be precise with prompts, keep instructions detailed, and always maintain a human-in-the-loop approach. GPTs should be seen as assistants, not replacements.

Collaboration added another layer of complexity. Feedback loops depended on colleagues’ availability, so I had to stay flexible and allow extra time. Their input, however, was crucial; I couldn’t have made progress without them. As none of these factors were under my control, I could only stay on top of developments and handle them as best I could.

These challenges reinforced an important truth: Building strategically isn’t about chasing perfection, but about learning, adapting, and improving with each iteration.

Applying Product Thinking

The process I followed was similar to how product managers approach new products. SEOs can adopt the same mindset to design workflows that are both practical and strategic.

Validate The Problem

Not every issue needs AI, and not every issue needs solving. Identify and prioritize what really matters at the time, and confirm whether a custom GPT, or any other tool, is the right way to address it.

Define The Use Case

Who will use the GPT, and how? A wide reach may sound appealing, but value comes from meeting specific needs. Otherwise, success can quickly fade away.

My GPTs are designed to support SEOs, marketers, and project managers in different scenarios of their daily work.

Prototype And Test

There is real value in starting small. With GPTs, I needed to write clear, specific instructions, then review the outputs and refine.

For instance, instead of asking the accessibility GPT for general ideas on making a form accessible, I instructed it to act as an SEO briefing developers on fixes or as a project manager assigning tasks.

For the content GPT, I instructed it to act as a UK/US content strategist, developing inclusive, culturally relevant ideas for specific publications in British English or Standard American English.
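The role-specific instructions described above can be sketched as plain prompt templates. The template wording, role names, and helper function below are illustrative assumptions, not the author’s actual GPT instructions:

```python
# Sketch of role-based prompting: one base template per audience,
# combined with the specific task for a given session.
ROLE_TEMPLATES = {
    "seo": (
        "Act as an SEO briefing developers on accessibility fixes. "
        "List each issue, why it matters, and the concrete change required."
    ),
    "project_manager": (
        "Act as a project manager assigning accessibility tasks. "
        "Group issues into tickets with an owner role and acceptance criteria."
    ),
    "content_strategist": (
        "Act as a UK/US content strategist. Develop inclusive, culturally "
        "relevant ideas in British English or Standard American English."
    ),
}

def build_instructions(role: str, task: str) -> str:
    """Combine a role template with the task for this session."""
    if role not in ROLE_TEMPLATES:
        raise ValueError(f"unknown role: {role!r}")
    return f"{ROLE_TEMPLATES[role]}\n\nTask: {task}"

print(build_instructions("seo", "Make the signup form accessible."))
```

Keeping the role framing separate from the task is what makes the same GPT reusable across SEOs, marketers, and project managers: only the template changes, not the workflow.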

Iterate With Feedback

Bring colleagues and subject-matter experts into the process early. Their insights challenge assumptions, highlight inconsistencies, and make outputs more robust.

Keep On Top Of Developments

AI platforms evolve quickly, and processes also need to adapt to different scenarios. Product thinking means staying agile, adapting to change, and reassessing whether the tools we build still serve their purpose.

The troubled rollout of GPT-5 reminded me how volatile the landscape can be.

Practical Applications For SEOs

Why build GPTs when there are already so many excellent SEO tools available? For me, it was partly curiosity and partly a way to test what I could achieve with my existing skills before suggesting a collaboration for a different product.

Custom GPTs can add real value in specific situations, especially with a human-in-the-loop approach. Some of the most useful applications I have found include:

  • Analyzing campaign data to support decision-making.
  • Assisting with competitor analysis across global markets.
  • Supporting content ideation for international audiences.
  • Clustering keywords or highlighting internal linking opportunities.
  • Drafting documentation or briefs.

The point is not to replace established tools or human expertise, but to use them as assistants within structured workflows. They can free up time for deeper thinking, while still requiring careful direction and review.
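To make one of those applications concrete, keyword clustering can start far simpler than a GPT or a dedicated tool. The sketch below is a deliberately naive stand-in (the greedy approach, Jaccard threshold, and function name are my assumptions, not a recommendation from the article): it groups queries by token overlap, which is enough to prototype a workflow before deciding whether AI adds value.

```python
def cluster_keywords(keywords, threshold=0.5):
    """Greedy single-pass clustering by token overlap (Jaccard similarity).

    Each keyword joins the first existing cluster whose seed shares enough
    tokens with it; otherwise it starts a new cluster.
    """
    clusters = []  # list of (seed_token_set, member_keywords)
    for kw in keywords:
        tokens = set(kw.lower().split())
        for seed_tokens, members in clusters:
            overlap = len(tokens & seed_tokens) / len(tokens | seed_tokens)
            if overlap >= threshold:
                members.append(kw)
                break
        else:
            clusters.append((tokens, [kw]))
    return [members for _, members in clusters]

print(cluster_keywords([
    "buy running shoes",
    "best running shoes",
    "email marketing tips",
]))
# → [['buy running shoes', 'best running shoes'], ['email marketing tips']]
```

If a baseline like this already covers the need, a custom GPT may be the wrong tool; if it clearly doesn’t, you now have a concrete gap to brief the GPT against.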

How SEOs Can Apply Product Thinking

Even if you never build a GPT, you can apply the same mindset in your day-to-day work. Here are a few suggestions:

  • Frame challenges strategically: Ask who the end user is, what they need, and what is broken in their experience. Don’t start with tactics without context.
  • Design repeatable processes: Build workflows that scale and evolve over time, instead of one-off fixes.
  • Test and learn: Treat tactics like prototypes. Run experiments and refine based on results. If A/B testing isn’t possible, as is often the case, at least stay open to making adjustments where needed.
  • Collaborate across teams: SEO does not exist in isolation. Work with UX, development, and content teams early. The key is to find ways to add value to their work.
  • Redefine success metrics: Qualified traffic, conversions, and internal process improvements all matter in the AI era. Success should reflect actual business impact.
  • Use AI strategically: Quick wins are tempting, but GPTs and other tools are best used to support structured workflows and highlight blind spots. Keep a human-in-the-loop approach to ensure outputs are accurate and relevant to your business needs.

Final Thought

The real innovation is not in the technology itself, but in how we choose to apply it.

We are now in the fifth industrial revolution, a time when humans and machines collaborate more closely than ever.

For SEOs, the opportunity is to move beyond tactical execution and start thinking like product strategists. That means asking sharper questions, testing hypotheses, designing smarter workflows, and creating solutions that adapt to real-world constraints.

It is about providing solutions, not just executing tasks.


Featured Image: SvetaZi/Shutterstock