Google AI Overviews Appear On 21% Of Searches: New Data via @sejournal, @MattGSouthern

Ahrefs analyzed 146 million search results to determine which query types trigger AI Overviews. The research tracked AIO appearance across 86 keyword characteristics.

Here’s a concise look at the patterns and how they may affect your strategy.

What The Analysis Found

AI Overviews appear on 20.5% of all keywords. Specific query types show notable variance, with some categories hitting 60% trigger rates while others stay below 2%.

Patterns Observed Across Query Types

Single-word queries activate AIOs only 9.5% of the time, whereas queries with seven or more words trigger them 46.4% of the time. This correlation indicates that Google primarily uses AIOs for complex informational searches rather than simple lookups.

The question format also shows a similar trend: question-based queries result in AIOs 57.9% of the time, while non-question queries have a much lower rate of 15.5%.

The most significant distinctions are seen based on intent. Informational queries make up 99.9% of all AIO appearances, while navigational queries trigger AIOs just 0.09% of the time. Commercial queries account for 4.3%, and transactional queries for 2.1%.

Patterns Observed Across Industry Categories

Science queries have an AIO rate of 43.6%, while health queries are at 43.0%, and pets & animals reach 36.8%. People & society questions result in AIOs 35.3% of the time.

In contrast, commerce categories exhibit opposite trends. Shopping queries are associated with AIOs only 3.2% of the time, the lowest in the dataset. Real estate remains at 5.8%, sports at 14.8%, and news at 15.1%.

YMYL queries display unexpectedly high trigger rates. Medical YMYL searches trigger AI Overviews 44.1% of the time, financial YMYL hits 22.9%, and safety YMYL reaches 31.0%.

These findings contradict Google’s focus on expert content for topics that could impact health, financial security, or safety.

Queries With Low Presence Of AI Overviews

6.3% of “very newsy” keywords trigger AI Overviews, while 20.7% of non-news queries display AIOs.

The pattern indicates that Google deliberately limits AIOs for time-sensitive content where accuracy and freshness are essential.

Local searches demonstrate a similar trend, with only 7.9% of local queries showing AI Overviews compared to 22.8% for non-local queries.

NSFW content consistently avoids AIOs across categories: adult queries trigger AIOs 1.5% of the time, gambling 1.4%, and violence 7.7%. Drug-related queries have the highest NSFW trigger rate at 12.6%, yet this remains well below the baseline.

Brand vs. Non-Brand

Branded keywords show notably lower trigger rates than non-branded ones. Non-branded queries trigger AIOs 24.9% of the time, whereas branded queries do so 13.1% of the time.

The data indicates that AIOs occur 1.9 times more frequently for generic searches than for brand-specific lookups.

No Correlation With CPC

CPC shows no meaningful correlation with AIO appearance. Keyword cost-per-click values don’t affect trigger rates across any price range tested, with rates hovering between 12.4% and 27.6% regardless of commercial value.

Why This Matters

Publishers focused on informational content encounter the greatest AIO exposure. Question-based and how-to guides align closely with Google’s trigger criteria, putting educational content publishers at the highest risk of traffic loss.

Medical content has the highest category-specific AIO rate, despite concerns about AI accuracy in health advice.

Ecommerce and news publishers are relatively less affected by AIOs. The low trigger rates for shopping and news queries indicate these sectors experience less AI-driven traffic disruption compared to informational sites.

Looking Ahead

Using this data, publishers can review their current keyword portfolios to identify AIO exposure patterns. The most reliable indicators are query intent and length, with industry category and question format also playing significant roles.
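As a rough illustration of that kind of portfolio review, the sketch below buckets keywords by the signals the Ahrefs data highlights (intent, length, question format). The scoring thresholds, the intent labels, and the example keywords are all illustrative assumptions, not part of any real tool or of the Ahrefs study itself.

```python
# Hypothetical sketch: bucket keywords by AIO-exposure signals reported in the
# Ahrefs data. Thresholds and intent labels are illustrative assumptions.

QUESTION_WORDS = {"how", "what", "why", "when", "where", "who", "which", "can", "does", "is"}

def aio_exposure(keyword: str, intent: str) -> str:
    """Return a rough exposure bucket ("low", "medium", "high") for a keyword."""
    words = keyword.lower().split()
    score = 0
    if intent == "informational":             # informational intent dominates AIO triggers
        score += 2
    if len(words) >= 7:                       # 7+ word queries triggered AIOs 46.4% of the time
        score += 1
    if words and words[0] in QUESTION_WORDS:  # question queries triggered AIOs 57.9% of the time
        score += 1
    return "high" if score >= 3 else "medium" if score == 2 else "low"

# Example portfolio (made-up keywords with assumed intent labels).
portfolio = [
    ("how do i fix a leaking kitchen faucet myself", "informational"),
    ("running shoes", "transactional"),
]
for kw, intent in portfolio:
    print(kw, "->", aio_exposure(kw, intent))
```

A real review would pull intent labels from a keyword tool rather than assigning them by hand, but the weighting order (intent first, then length and question format) mirrors the indicators the data supports.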

AIO exposure varies considerably across different industry categories, with differences exceeding 40 percentage points between the highest and lowest. Content strategies need to consider this variation at the category level instead of assuming consistent baseline risk across all topics.

For a more in-depth examination of this data, see the full analysis.


Featured Image: Zorion Art Production/Shutterstock

Meta Projected $16B From Scam Ads, Internal Docs Show via @sejournal, @MattGSouthern

Advertisers on Meta may be unknowingly competing against suspected scam ads that stay in auctions at higher “penalty bid” prices.

Internal documents obtained by Reuters estimate that around 10% of Meta’s 2024 ad revenue, approximately $16 billion, would come from scam ads and banned goods.

Although Meta disagrees with these estimates, the real impact for advertisers includes potential increases in CPM, brand safety concerns, and uneven enforcement risks.

What Advertisers Should Know

Meta reportedly displays an estimated 15 billion “higher-risk” scam advertisements daily across Facebook, Instagram, and WhatsApp.

Meta earns about $7 billion annually just from these higher-risk scam ads that show clear signs of fraud, a late 2024 document states.

The company only bans advertisers when automated systems predict they are at least 95% certain to be committing fraud. Advertisers below that threshold face higher ad rates as a penalty but can continue running campaigns.

Internal Review: Easier To Run Scams On Meta Than Google

An internal Meta review concluded it’s easier to advertise scams on its platforms than on Google. The document doesn’t explain why.

Meta restricted anti-scam enforcement in the first half of the year to actions costing no more than 0.15% of total revenue, or approximately $135 million. A manager overseeing the effort wrote: “Let’s be cautious. We have specific revenue guardrails.”

Company spokesman Andy Stone said the internal estimates were “rough and overly-inclusive” and included many legitimate ads. He declined to provide an updated figure.

Meta reduced user reports of scam ads globally by 58% over the past 18 months and removed more than 134 million pieces of scam ad content in 2025, Stone said.

Why This Matters

On Meta’s platforms, internal documents projected about one in ten ad dollars in 2024 came from ads for scams and banned goods.

Meta’s penalty bid system charges suspected scammers higher rates but keeps them in ad auctions. You don’t know when you’re bidding against these inflated rates.

The revenue guardrails mean Meta caps how much fraud enforcement it will do if it impacts financial projections. Small advertisers must be flagged eight times for financial fraud before getting banned. Some large “High Value Accounts” accrued more than 500 strikes without Meta shutting them down.

A Meta presentation estimated the company’s platforms were involved in one-third of all successful scams in the United States.

The SEC is investigating Meta for running ads for financial scams, according to internal documents reviewed by Reuters. The UK Payment Systems Regulator said Meta’s products were linked to 54% of payment-related scam incidents in 2023.

What Meta Says

Stone clarified that the idea Meta should only take action when regulators demand it isn’t how the company operates.

He explained that the 0.15% figure mentioned in strategy documents was based on a revenue forecast and isn’t a strict cutoff. Additionally, testing the penalty bid program revealed a decrease in scam reports and a small dip in total ad revenue.

The main goal was to cut down on scam advertising by making suspicious advertisers less competitive in auctions.

Meta also outlines recent enforcement actions against scam centers in a Newsroom update.

Looking Ahead

Meta plans to lower the share of revenue from scams, illegal gambling, and prohibited goods from an estimated 10.1% in 2024 to 7.3% by the end of 2025. The target is to reach 6% by the end of 2026 and 5.8% in 2027, as outlined in strategy documents.


Featured Image: JarTee/Shutterstock

Automattic Disputes Use Of Word “Automatic” For WordPress Product via @sejournal, @martinibuster

Lawyers representing Automattic, the for-profit company founded by WordPress co-founder Matt Mullenweg, sent a trademark complaint letter to WordPress developer Kevin Geary, asking him to rebrand his WordPress CSS framework, currently named Automatic.css. The letter claims that the similarity to Mullenweg’s Automattic could lead to consumer confusion.

The letter caught some in the WordPress industry by surprise, since Geary had months ago shown good-faith compliance after Mullenweg tweeted a request for Geary to place a disclaimer in the footer of Automatic.css.

Screenshot Of Mullenweg’s July 2025 Tweet To Geary

Kevin Geary

Kevin Geary has been a well-liked member of the WordPress developer community since 2005. He’s developing a WordPress page builder called EtchWP (currently in alpha) and is behind the well-received CSS framework Automatic CSS (ACSS). ACSS simplifies design consistency within a website, easily integrating with page builders like Bricks, Gutenberg, and Oxygen, which are popular within the web design community.

A YouTube video and accompanying article from a year ago caused a stir because he documented himself trying to use WordPress’s native Block Editor and coming away from the experience with a large list of issues that need fixing.

He wrote about the Gutenberg workflow:

“Is this the “for everyone” experience? Is this the true vision of the WordPress block editor? …it’s wildly inefficient and impractical.”

Elsewhere he noted that most people are confused about what Gutenberg is supposed to be, citing results of an informal poll of his Twitter followers showing disagreement whether it’s supposed to be a page builder or not.

He concluded:

“It’s NOT for:

Beginner web developers who want to learn how to build websites.

Intermediate web developers who want to build custom websites.

Advanced web developers who want to build custom websites.

Most agencies & freelancers (unless they’re committed to building custom blocks).

I want to like it, I really do. As it stands now, though, the only viable way to use the block editor to build a custom site is with third-party tools. Native ain’t cutting it.”

All of this is to say that Geary is a passionate supporter of WordPress, even when he criticizes the block editor or the “tragedy of the commons” support model underlying WordPress.

Automattic’s Letter To Geary

Geary tweeted a copy of the letter sent to him in which Mullenweg’s lawyers asked him to rebrand his WordPress CSS framework.

Part of the letter stated:

“We represent Automattic Inc. in intellectual property matters. As you know, our client owns and operates a wide range of software brands and services, including the very popular web building and hosting platform WordPress.com. Automattic is also well-known for its longtime and extensive contributions to the WordPress system.

Our client owns many trademark registrations for its Automattic mark covering those types of services and software. As a result of our client’s extensive marketing efforts and support of the WordPress system, consumers have come to closely associate Automattic with WordPress and its related offerings.

We are writing about your use of the name and mark Automatic (sometimes with a CSS or .CSS suffix) to provide a CSS framework specifically designed for WordPress page builders. As we hope you can appreciate, our client is concerned about your use of a nearly identical name and trademark to provide closely related WordPress services. Automattic and Automatic differ by only one letter, are phonetically identical, and are marketed to many of the same people. This all enhances the potential for consumer confusion and dilution of our client’s Automattic mark.

We assume you share Automattic’s interest in ensuring that consumers are not confused or misled by the use of nearly identical names and trademarks to provide related services in the WordPress ecosystem. To protect against any such confusion or dilution, Automattic requests that you rebrand away from using Automatic or anything similar to Automattic. I suggest that we schedule a time to discuss the logistics and a mutually agreeable transition timeline for the change. Please let me know some days and times when you are available.”

Matt Mullenweg responded to Kevin Geary’s tweet by noting that he “owns” the automatic.com domain. But that’s actually a misstatement. Nobody “owns” a domain name. A domain name can only be registered.

Mullenweg’s tweet:

“We also own http://automatic.com. You had to know this was a fraught naming area.”

To which Geary responded:

“AutomaticCSS is called “automatic” because it’s the only CSS framework that does a lot of things automatically.

Congratulations on owning the domain name for a generic term. Let me know when that fact becomes relevant.”

Social Response To Automattic’s Letter

Most of the responses to Geary’s tweet were supportive, although one person questioned Geary’s use of the word Automatic, tweeting:

“Why go with “AutomaticCSS” as the name though?

Options like “AutoCSS” or even “AutomatedCSS” would have been even more suitable IMHO.

It could indeed raise the question of whether there was some other motive at play. Just sharing my thoughts!”

That tweet was the outlier; most of the responses were supportive.

Simon Zeimke tweeted:

“A letter from hell. How could a generic Term be IP?”

Lee Milroy responded:

“This is absurd, a product that has been around for 4 years is all of a sudden going to create “confusion”?

Really Matt needs to do some work… like the terrible WP Dashboard experience”

WordPress Drama

Geary hasn’t tweeted about his next move, and it’s been over a week now. Many in the WordPress community would probably prefer to see the drama fade so everyone can get back to making WordPress better.

Featured Image by Shutterstock/IgorZh

Google’s Preferred Sources Tool Is Jammed With Spam via @sejournal, @martinibuster

Google’s Preferred Sources tool is meant to let fans of certain websites tell Google they want to see more of their favorite sites in the Top News feature. However, Google is surfacing copycat spam sites, random sites, and parked domains. Some of the sites appearing in the tool are so low quality that only their home pages are indexed. Shouldn’t this tool just show legitimate websites and not spam?

Google Preferred Sources

Google’s Preferred Sources feature gives users control over which news outlets appear more often in Google’s Top Stories feature. Rather than relying on Google’s ranking system alone, users can make their preferred news sources appear more frequently. This doesn’t block other sites from appearing; it only personalizes what a user sees to reflect their chosen sources.

Similar Domains In Preferred Sources

What appears to be happening is that people are registering domains similar to those of well-known websites. One way they’re doing it is by squatting on an exact-match domain name under a different TLD. For example, when a popular site uses a .com or .net domain, the squatters register the same name under .com.in or .net.in.

Screenshot Of A Random Subdomain Ranking For Automattic

Preferred Sources Errors

It’s unclear if people are registering domain names and adding them to the Preferred Sources tool or if they are being added in some different manner. A search for a popular SEO tool surfaces the correct domain but also a parked domain in the Indian .com.in ccTLD:

Screenshot Of An Indian Parked Domain

What is known is that people are registering copycat domains; how those domains are getting into Google’s Preferred Sources tool is less clear. Preferred Sources is currently available in the U.S. and India, which may explain the Indian domains showing up in the tool.

Screenshot Of Indian NYTimes Parked Domain

For example, a search within the Preferred Sources tool for HuffPost surfaces a copycat site on an Indian country-code domain.

Screenshot Of HuffPost In Source Preferences

That Indian HuffPost site features articles about (and links to) topics like payday loans, personal injury lawyers, and luxury watches. Not surprisingly, it doesn’t look like Google is indexing more than its home page.

Screenshot Of A Site Search

There’s also an Indian site squatting on Search Engine Journal’s domain name.

Screenshot Of SEJ In Source Preferences Tool

What Is Going On?

It’s possible that SEOs are registering copycat domains and then submitting their domains to the Preferred Sources tool. Or it could be that Google picks them up automatically and is just listing whatever is out there.
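A publisher worried about this pattern could enumerate the obvious lookalike variants of its own domain and check whether anyone has registered them. A minimal sketch follows; the TLD list is a small illustrative set based on the .com.in/.net.in examples above, not an exhaustive inventory.

```python
# Illustrative sketch: list lookalike ccTLD variants of a brand domain so a
# publisher can check whether squatters have registered them. The TLD list
# is an example set drawn from the patterns described above.

def lookalike_domains(domain: str, tlds=("com.in", "net.in", "co.in")) -> list[str]:
    """Build candidate copycat domains from a brand domain's leading label."""
    name = domain.split(".")[0]  # "searchenginejournal.com" -> "searchenginejournal"
    return [f"{name}.{tld}" for tld in tlds]

print(lookalike_domains("searchenginejournal.com"))
# Each candidate could then be checked with a WHOIS lookup or a DNS query.
```

This only covers the exact-match-under-a-different-TLD case described in this article; typo variants and subdomain tricks would need their own generators.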

Google Warns Against Relying On SEO Audit Tool Scores via @sejournal, @MattGSouthern

Google warned against relying on tool-generated scores for technical SEO audits.

Search Relations team member Martin Splitt outlined a three-step framework in a Search Central Lightning Talk that emphasizes site-specific context over standardized metrics.

The Three-Step Framework

Splitt outlined the core objective in the video:

“A technical audit, in my opinion, should make sure no technical issues prevent or interfere with crawling or indexing. It can use checklists and guidelines to do so, but it needs experience and expertise to adapt these guidelines and checklists to the site you audit.”

His recommended framework has three phases.

First, use tools and guidelines to identify potential issues. Second, create a report tailored to the specific site. Third, make recommendations based on actual site needs.

Understanding site technology comes before running diagnostic tools. Group findings by effort required and potential impact, Splitt said.

When 404s Are Normal

High 404 counts don’t always mean problems.

The red flag is unexplained rises without corresponding website changes.

Splitt explained:

“A high number of 404s, for instance, is expected if you removed a lot of content recently. That’s not a problem. It’s a normal consequence of that. But if you have an unexplained rise in 404 responses, though, that’s something you want to point out and investigate…”

Google Search Console’s Crawl Stats report shows whether 404 patterns match normal site maintenance or indicate technical issues.
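Splitt’s distinction (expected 404s versus an unexplained rise) can be sketched as a simple check against a baseline. The daily counts, the spike factor, and the `content_removed` flag below are made-up example inputs, not values Google or Search Console provides.

```python
# Minimal sketch of Splitt's point: a 404 spike is only worth flagging when
# no known site change (e.g. a bulk content removal) explains it.
# All inputs here are illustrative assumptions.

def unexplained_404_spike(daily_404s: list[int], content_removed: bool,
                          spike_factor: float = 2.0) -> bool:
    """Flag when the latest day's 404 count far exceeds the prior average
    and no recent content removal explains it."""
    if content_removed or len(daily_404s) < 2:
        return False  # expected after bulk deletions, or not enough data
    baseline = sum(daily_404s[:-1]) / (len(daily_404s) - 1)
    return daily_404s[-1] > baseline * spike_factor

# A week of 404 counts; the last day triples the baseline with no known change.
print(unexplained_404_spike([120, 115, 130, 110, 360], content_removed=False))  # True
print(unexplained_404_spike([120, 115, 130, 110, 360], content_removed=True))   # False
```

The point is the shape of the check, not the numbers: the same count is a finding in one context and normal maintenance in the other, which is exactly why a tool score alone can’t make the call.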

Context Over Scores

Tools generate numerical scores that lack site-specific context.

Not everything tools flag carries equal weight. An international site needs hreflang auditing, while a single-language site doesn’t.

Splitt emphasized human judgment over automation:

“Please, please don’t follow your tools blindly. Make sure your findings are meaningful for the website in question and take the time to prioritize them for maximum impact.”

Talk to people who know the site and its technology. They’ll tell you if findings make sense.

Why This Matters

Generic checklists waste time on low-impact fixes while missing critical issues.

Tool scores may flag normal site behavior as problems. They assign priority to issues that don’t affect how search engines crawl your content.

Understanding when metrics reflect normal operations helps you focus audit resources where they matter. This applies whether you’re running internal audits or evaluating agency reports.

Looking Ahead

Audit platforms continue adding automated checks and scoring systems. This widens the gap between generic findings and actionable recommendations.

Google’s guidance reinforces that technical SEO requires expertise beyond tool automation.

Sites with international setups, large content archives, or frequent publishing benefit most from context-driven audits.

Hear Splitt’s full talk in the video below:

Google Finance Gets AI Deep Search & Prediction Market Data via @sejournal, @MattGSouthern

Google Finance is rolling out Deep Search capabilities, prediction markets data, and enhanced earnings tracking features across its AI-powered platform.

The updates expand Google Finance beyond basic market data into multi-step research workflows and crowd-sourced probability forecasting. Google announced the changes today, with features rolling out over the coming weeks, starting with Labs users.

Deep Search For Financial Research

Deep Search handles complex financial queries by issuing up to hundreds of simultaneous searches and synthesizing information across multiple sources.

You can ask detailed questions and select the Deep Search option. Gemini models then generate fully cited, comprehensive responses within minutes, displaying the research plan during generation.

Image Credit: Google

Robert Dunnette, Director of Product Management for Google Search, wrote:

“From there, our advanced Gemini models will get to work, issuing up to hundreds of simultaneous searches and reasoning across disparate pieces of information to produce a fully cited, comprehensive response in just a few minutes.”

Deep Search offers higher usage limits for Google AI Pro and AI Ultra subscribers. Users can access it through the Google Finance experiment in Labs.

Prediction Markets Integration

Google Finance is adding support for prediction markets data from Kalshi and Polymarket, with availability rolling out over the coming weeks, starting with Labs users.

You can query future market events directly from the search box to see current probabilities and historical trends.

An example query includes “What will GDP growth be for 2025?”

Enhanced Earnings Tracking

Google launched earnings tracking features that provide live audio streams, real-time transcripts, and AI-generated insights during corporate earnings calls.

The Earnings tab shows scheduled calls, streams live audio during calls, and maintains transcripts for later reference. AI-powered insights under “At a glance” update before, during, and after calls with information from news reports and analyst reactions.

You can compare financial data against historical results, view performance versus expectations, and access earnings documents and SEC forms.

India Expansion

Google Finance begins rolling out in India this week with support for English and Hindi.

The India launch initially offers the core Google Finance experience. Deep Search, prediction markets, and earnings features launch first in the U.S. and will expand internationally over time.

Why This Matters

Deep Search reduces the time needed to gather financial data from multiple sources, potentially resulting in fewer webpage visits.

Prediction markets offer crowd-sourced probability estimates that complement analyst forecasts. Live earnings tracking integrates call audio, transcripts, and analyst reactions into a single interface during reporting season.

Looking Ahead

Deep Search and prediction markets roll out over the coming weeks, with Labs users getting early access. Google AI Pro and AI Ultra subscribers receive higher usage limits for Deep Search queries.

The India expansion marks Google Finance’s first international launch beyond the U.S. Access the beta at google.com/finance/beta while signed into a Google account.


Featured Image: Juan Alejandro Bernal/Shutterstock

OpenAI’s Sam Altman Raises Possibility Of Ads On ChatGPT via @sejournal, @martinibuster

OpenAI’s CEO Sam Altman sat for an interview in which he explained that his vision for the future of ChatGPT is as a trusted, user-aligned assistant, saying that booking hotels is not going to be the way to monetize “the world’s smartest model.” He pointed to Google as an example of what he doesn’t want ChatGPT to become: a service that accepts advertising dollars to place the worst choice above the best. He then followed up by expressing openness to advertising.

User-Aligned Monetization Model

Altman contrasted OpenAI’s revenue approach with the ad-driven incentives of Google. He explained that Google’s Search and advertising ecosystem depends on Google’s search results “doing badly for the user,” because ranking decisions are partly tied to maximizing advertising income.

The interviewer related that he and his wife took a trip to Europe, booking multiple hotels and finding restaurants with ChatGPT’s help, and that at no point did any kind of kickback or advertising fee go back to OpenAI. That led him to tell his wife that ChatGPT “didn’t get a dime from this… this just seems wrong….” because he was getting so much value from ChatGPT while it got nothing in return.

Altman answered that users trust ChatGPT and that’s why so many people pay for it.

He explained:

“I think if ChatGPT finds you the… To zoom out even before the answer, one of the unusual things we noticed a while ago, and this was when it was a worst problem, ChatGPT would consistently be reported as a user’s most trusted technology product from a big tech company. We don’t really think of ourselves as a big tech company, but I guess we are now. That’s very odd on the surface, because AI is the thing that hallucinates, AI is the thing with all the errors, and that was much more of a problem. And there’s a question of why.

Ads on a Google search are dependent on Google doing badly. If it was giving you the best answer, there’d be no reason ever to buy an ad above it. So you’re like, that thing’s not quite aligned with me.

ChatGPT, maybe it gives you the best answer, maybe it doesn’t, but you’re paying it, or hopefully are paying it, and it’s at least trying to give you the best answer. And that has led to people having a deep and pretty trusting relationship with ChatGPT. You ask ChatGPT for the best hotel, not Google or something else.”

Altman’s response used the interviewer’s experience as an example of a paradigm change in user trust in technology. He contrasted ChatGPT’s model, where users directly pay for answers, with Google’s ad-based model that profits from imperfect results. His point is that ChatGPT’s business model aligns more closely with users’ interests, earning a sense of trust and reliability rather than making their users feel exploited by an advertising system. This is why users perceive ChatGPT as more trustworthy, even though ChatGPT is known to hallucinate.

Altman Is Open To Transaction Fees

Altman was strongly against accepting advertising money in exchange for showing a hotel above what ChatGPT would naturally show. He said that he would be open to accepting a transaction fee should a user book that hotel through ChatGPT because that has no influence on what ChatGPT recommends, thus preserving a user’s trust.

He shared how this would work:

“If ChatGPT were accepting payment to put a worse hotel above a better hotel, that’s probably catastrophic for your relationship with ChatGPT. On the other hand, if ChatGPT shows you its best hotel, whatever that is, and then if you book it with one click, takes the same cut that it would take from any other hotel, and there’s nothing that influenced it, but there’s some sort of transaction fee, I think that’s probably okay. And with our recent commerce thing, that’s the spirit of what we’re trying to do. We’ll do that for travel at some point.”

I think a takeaway here is that Altman believes the advertising model that the Internet has been built on over the past thirty-plus years can subvert user trust and lead to a poor user experience. He feels that a transaction fee model is less likely to impact the quality of the service that users are paying for and that it will maintain the feeling of trust that people have in ChatGPT.

But later on in the interview, as you’ll see, Altman surprises the interviewer with his comment about the possibility of advertisements on ChatGPT.

How OpenAI Will Monetize Itself

When pressed about how OpenAI will monetize itself, Altman responded that he expects the future of commerce will have lower margins and that he doesn’t expect to fully fund OpenAI by booking hotels but by doing exceptional things like curing diseases.

Altman explained his vision:

“So one thing I believe in general related to this is that margins are going to go dramatically down on most goods and services, including things like hotel bookings. I’m happy about that. I think there’s like a lot of taxes that just suck for the economy and getting those down should be great all around. But I think that most companies like OpenAI will make more money at a lower margin.

…I think the way to monetize the world’s smartest model is certainly not hotel booking.  …I want to discover new science and figure out a way to monetize that. You can only do with the smartest model.

There is a question of, should, many people have asked, should OpenAI do ChatGPT at all? Why don’t you just go build AGI? Why don’t you go discover a cure for every disease, nuclear fusion, cheap rockets, the whole thing, and just license that technology? And it is not an unfair question because I believe that is the stuff that we will do that will be most important and make the most money eventually.

…Maybe some people will only ever book hotels and not do anything else, but a lot of people will figure out they can do more and more stuff and create new companies and ideas and art and whatever.

So maybe ChatGPT and hotel booking and whatever else is not the best way we can make money. In fact, I’m certain it’s not. I do think it’s a very important thing to do for the world, and I’m happy for OpenAI to do some things that are not the economic maxing thing.”

Advertisements May Be Coming To ChatGPT

At around the 18-minute mark, the interviewer asked Altman about advertising on ChatGPT, and Altman acknowledged that there may be a form of advertising but was vague about what that would look like.

He explained:

“Again, there’s a kind of ad that I think would be really bad, like the one we talked about.

There are kinds of ads that I think would be very good or pretty good to do. I expect it’s something we’ll try at some point. I do not think it is our biggest revenue opportunity.”

The interviewer asked:

“What will the ad look like on the page?”

Altman responded:

“I have no idea. You asked like a question about productivity earlier. I’m really good about not doing the things I don’t want to do.”

Takeaway

Sam Altman suggests an interesting way forward for monetizing Internet users: one based on trust, and on a revenue model that doesn’t betray that trust.

Watch the interview starting at about the 16-minute mark:

Featured image/Screenshot from interview

Perplexity Bets $400M On Snapchat To Scale AI Search Adoption via @sejournal, @MattGSouthern

Perplexity will pay Snap $400 million to integrate its AI answer engine into Snapchat’s chat interface, with rollout starting next year.

  • Perplexity will pay Snap $400 million over one year to integrate its AI answer engine into Snapchat.
  • Snap calls this its first large-scale integration of an external AI partner directly in the app.
  • Perplexity handles 150+ million questions weekly, so the integration meaningfully expands distribution.
Google Deprecates Practice Problem Structured Data In Search via @sejournal, @MattGSouthern

Google will deprecate practice problem structured data in January and clarifies Dataset markup is only for Dataset Search. Book actions remain supported.

  • Practice problem markup is being deprecated from Google Search in January.
  • Dataset structured data is for Dataset Search only; it isn’t used in Google Search.
  • Book actions continue to work in Google Search.