OpenAI Launches ChatGPT Atlas Browser For macOS via @sejournal, @MattGSouthern

OpenAI released ChatGPT Atlas today, describing it as “the browser with ChatGPT built in.”

OpenAI announced the launch in a blog post and livestream featuring CEO Sam Altman and team members including Ben Goodger, who previously helped develop Google Chrome and Mozilla Firefox.

Atlas is available now on macOS worldwide for Free, Plus, Pro, and Go users. Windows, iOS, and Android versions are coming soon.

What Does ChatGPT Atlas Do?

Unified New Tab Experience

Opening a new tab creates a starting point where you can ask questions or enter URLs. Results appear with tabs to switch between links, images, videos, and news where available.

OpenAI describes this as showing faster, more useful results in one place. The tab-based navigation keeps ChatGPT answers and traditional search results within the same view.

ChatGPT Sidebar

A ChatGPT sidebar appears in any browser window to summarize content, compare products, or analyze data from the page you’re viewing.

The sidebar provides assistance without leaving the current page.

Cursor

Cursor chat lets you highlight text in emails, calendar invites, or documents and get ChatGPT help with one click.

The feature can rewrite selected text inline without opening a separate chat window.

Agent Mode

Agent mode can open tabs and click through websites to complete tasks with user approval. OpenAI says it can research products, book appointments, or organize tasks inside your browser.

The company describes it as an early experience that may make mistakes on complex workflows, but says it is rapidly improving reliability and task success rates.

Browser Memories

Browser memories let ChatGPT remember context from sites you visit and bring back relevant details when needed. The feature can continue product research or build to-do lists from recent activity.

Browser memories are optional. You can view all memories in settings, archive ones no longer relevant, and clear browsing history to delete them.

A site-level toggle in the address bar controls which pages ChatGPT can see.

Privacy Controls

Users control what ChatGPT can see and remember. You can clear specific pages, clear entire browsing history, or open an incognito window to temporarily log out of ChatGPT.

By default, OpenAI doesn’t use browsing content to train models. You can opt in by enabling “include web browsing” in data controls settings.

OpenAI added safeguards for agent mode. It cannot run code in the browser, download files, install extensions, or access other apps on your computer or file system. It pauses to ensure you’re watching when taking actions on sensitive sites like financial institutions.

The company acknowledges agents remain susceptible to hidden malicious instructions in webpages or emails that could override intended behavior. OpenAI ran thousands of hours of red-teaming and designed safeguards to adapt to novel attacks, but notes the safeguards won’t stop every attack.

Why This Matters

Atlas blurs the line between browser and search engine by putting ChatGPT responses alongside traditional search results in the same view. This changes the browsing model from ‘visit search engine, then navigate to sites’ to ‘ask questions and browse simultaneously.’

This matters because it’s another major platform where AI-generated answers appear before organic links.

The agent mode also introduces a new variable: AI systems that can navigate sites, fill forms, and complete purchases on behalf of users without traditional click-through patterns.

The privacy controls around site visibility and browser memories create a permission layer that hasn’t existed in traditional browsers. Sites you block from ChatGPT’s view won’t contribute to AI responses or memories, which could affect how your content gets discovered and referenced.

Looking Ahead

OpenAI is rolling out Atlas for macOS starting today. First-run setup imports bookmarks, saved passwords, and browsing history from your current browser.

Windows, iOS, and Android versions are scheduled to launch in the coming months without specific release dates.

The roadmap includes multi-profile support, improved developer tools, and guidance for websites to add ARIA tags to help the agent work better with their content.


Featured Image: Saku_rata160520/Shutterstock

Google Announces A New Era For Voice Search via @sejournal, @martinibuster

Google announced an update to its voice search, which changes how voice search queries are processed and then ranked. The new AI model uses speech as input for the search and ranking process, completely bypassing the stage where voice is converted to text.

The old system was called Cascade ASR, where a voice query is converted into text and then put through the normal ranking process. The problem with that method is that it’s prone to mistakes. The audio-to-text conversion process can lose some of the contextual cues, which can then introduce an error.

The new system is called Speech-to-Retrieval (S2R). It’s a neural network-based machine-learning model trained on large datasets of paired audio queries and documents. This training enables it to process spoken search queries (without converting them into text) and match them directly to relevant documents.

Dual-Encoder Model: Two Neural Networks

The system uses two neural networks:

  1. One of the neural networks, called the audio encoder, converts spoken queries into a vector-space representation of their meaning.
  2. The second network, the document encoder, represents written information in the same kind of vector format.

The two encoders learn to map spoken queries and text documents into a shared semantic space so that related audio queries and documents end up close together according to their semantic similarity.
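To make that geometry concrete, here is a minimal sketch in Python (my own illustration, not Google's model): the audio encoder is stubbed out with a toy vector, the document vectors are made up, and retrieval is simply ranking by cosine similarity in the shared space.

```python
# Minimal dual-encoder retrieval sketch. The encoder is a stand-in:
# in S2R it would be a trained neural network; here it returns a fixed
# toy vector so the geometry of "close = relevant" is visible.
import numpy as np

def audio_encoder(spoken_query: str) -> np.ndarray:
    # Hypothetical: maps the audio of "the scream painting" to a vector.
    return np.array([0.9, 0.1, 0.3])

DOCUMENT_VECTORS = {
    "The Scream - Edvard Munch (National Museum, Oslo)": np.array([0.88, 0.15, 0.28]),
    "Dishwasher buying guide": np.array([0.05, 0.92, 0.40]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = audio_encoder("the scream painting")
ranked = sorted(DOCUMENT_VECTORS.items(),
                key=lambda kv: cosine(query_vec, kv[1]),
                reverse=True)
for title, vec in ranked:
    print(f"{cosine(query_vec, vec):.3f}  {title}")
```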

Audio Encoder

Speech-to-Retrieval (S2R) takes the audio of someone’s voice query and transforms it into a vector (numbers) that represents the semantic meaning of what the person is asking for.

The announcement uses the example of the famous painting The Scream by Edvard Munch. In this example, the spoken phrase “the scream painting” becomes a point in the vector space near information about Edvard Munch’s The Scream (such as the museum it’s at, etc.).

Document Encoder

The document encoder does a similar thing with text documents like web pages, turning them into their own vectors that represent what those documents are about.

During model training, both encoders learn together so that vectors for matching audio queries and documents end up near each other, while unrelated ones are far apart in the vector space.

Rich Vector Representation

Google’s announcement says that the encoders transform the audio and text into “rich vector representations.” A rich vector representation is an embedding that encodes meaning and context from the audio and the text. It’s called “rich” because it contains the intent and context.

For S2R, this means the system doesn’t rely on keyword matching; it “understands” conceptually what the user is asking for. So even if someone says “show me Munch’s screaming face painting,” the vector representation of that query will still end up near documents about The Scream.

According to Google’s announcement:

“The key to this model is how it is trained. Using a large dataset of paired audio queries and relevant documents, the system learns to adjust the parameters of both encoders simultaneously.

The training objective ensures that the vector for an audio query is geometrically close to the vectors of its corresponding documents in the representation space. This architecture allows the model to learn something closer to the essential intent required for retrieval directly from the audio, bypassing the fragile intermediate step of transcribing every word, which is the principal weakness of the cascade design.”
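As a rough illustration of that training objective (again my own sketch, not Google's code), a contrastive loss over a batch of paired audio and document embeddings is low when each audio vector sits closest to its own document and high when the pairs are unrelated:

```python
# Toy contrastive objective: for a batch of paired (audio, document)
# embeddings, reward the model when each audio vector is most similar
# to its own document and dissimilar to the others in the batch.
import numpy as np

def contrastive_loss(audio_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    # Normalize rows so dot products are cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    sims = a @ d.T  # similarity of every audio query to every document
    # Softmax cross-entropy with the matching document (the diagonal) as the target.
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

# Toy batch of 3 paired (audio, document) embeddings.
rng = np.random.default_rng(0)
docs = rng.normal(size=(3, 8))
aligned_audio = docs + 0.05 * rng.normal(size=(3, 8))  # audio that matches its document
random_audio = rng.normal(size=(3, 8))                 # audio unrelated to the documents

print("loss with aligned pairs:", round(contrastive_loss(aligned_audio, docs), 3))
print("loss with random pairs: ", round(contrastive_loss(random_audio, docs), 3))
```

In training, the parameters of both encoders would be adjusted to push the first number down, which is what pulls matching audio queries and documents together in the shared space.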

Ranking Layer

S2R has a ranking process, just like regular text-based search. When someone speaks a query, the audio is first processed by the pre-trained audio encoder, which converts it into a numerical form (vector) that captures what the person means. That vector is then compared to Google’s index to find pages whose meanings are most similar to the spoken request.

For example, if someone says “the scream painting,” the model turns that phrase into a vector that represents its meaning. The system then looks through its document index and finds pages that have vectors with a close match, such as information about Edvard Munch’s The Scream.

Once those likely matches are identified, a separate ranking stage takes over. This part of the system combines the similarity scores from the first stage with hundreds of other ranking signals for relevance and quality in order to decide which pages should be ranked first.
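A highly simplified way to picture that second stage, with made-up signal names and weights, is a weighted blend of the retrieval similarity score with other relevance and quality signals:

```python
# Hypothetical second-stage scoring: blend the first-stage similarity
# score with other (made-up) signals. Real systems use hundreds of
# signals and learned weights; these names and numbers are illustrative.
CANDIDATES = [
    {"title": "The Scream - Edvard Munch", "similarity": 0.97, "quality": 0.9, "freshness": 0.2},
    {"title": "List of famous paintings", "similarity": 0.81, "quality": 0.7, "freshness": 0.6},
]

WEIGHTS = {"similarity": 0.6, "quality": 0.3, "freshness": 0.1}

def final_score(doc: dict) -> float:
    return sum(WEIGHTS[signal] * doc[signal] for signal in WEIGHTS)

for doc in sorted(CANDIDATES, key=final_score, reverse=True):
    print(f"{final_score(doc):.2f}  {doc['title']}")
```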

Benchmarking

Google tested the new system against Cascade ASR and against a perfect-scoring version of Cascade ASR called Cascade Groundtruth. S2R beat Cascade ASR and very nearly matched Cascade Groundtruth. Google concluded that the performance is promising but that there is room for additional improvement.

Voice Search Is Live

Although the benchmarking revealed that there is some room for improvement, Google announced that the new system is live and in use in multiple languages, calling it a new era in search. The system is presumably used in English.

Google explains:

“Voice Search is now powered by our new Speech-to-Retrieval engine, which gets answers straight from your spoken query without having to convert it to text first, resulting in a faster, more reliable search for everyone.”

Read more:

Speech-to-Retrieval (S2R): A new approach to voice search

Featured Image by Shutterstock/ViDI Studio

Wikipedia Traffic Down As AI Answers Rise via @sejournal, @MattGSouthern

The Wikimedia Foundation (WMF) reported a decline in human pageviews on Wikipedia compared with the same months last year.

Marshall Miller, Senior Director of Product, Core Experiences at Wikimedia Foundation, wrote that the organization believes the decline reflects changes in how people access information, particularly through AI search and social platforms.

What Changed In The Data

Wikimedia observed unusually high traffic around May. The traffic appeared human but investigation revealed bots designed to evade detection.

WMF updated its bot detection systems and applied the new logic to reclassify traffic from March through August.

Miller noted the revised data shows “a decrease of roughly 8% as compared to the same months in 2024.”

WMF cautions that comparisons require careful interpretation because bot detection rules changed over time.

The Role Of AI Search

Miller attributed the decline to generative AI and social platforms reshaping information discovery.

He wrote that search engines are “providing answers directly to searchers, often based on Wikipedia content.”

This creates a scenario where Wikipedia serves as source material for AI-powered search features without generating traffic to the site itself.

Wikipedia’s Role In AI Systems

The traffic decline comes as AI systems increasingly depend on Wikipedia as source material.

Research from Profound, analyzing 680 million AI citations, finds that Wikipedia accounts for 47.9% of citations among ChatGPT’s top 10 most-cited sources. In Google AI Overviews, Wikipedia’s share of the top 10 is 5.7%, compared with 21.0% for Reddit and 18.8% for YouTube.

WMF also reported a 50% surge in bandwidth from AI bots since January 2024. These bots scrape content primarily for training computer vision models.

Wikipedia launched Wikimedia Enterprise in 2021, offering commercial, SLA-backed data access for high-volume reusers, including search and AI companies.

Why This Matters

If Wikipedia loses traffic while serving as ChatGPT’s most-cited source, the model that sustains content creation is breaking. You can produce authoritative content that AI systems depend on and still see referral traffic decline.

The incentive structure assumes publishers benefit from creating material that powers AI answers, but Wikipedia’s data shows that assumption doesn’t hold.

Track how AI features affect your traffic and whether being cited translates to meaningful engagement.

Looking Ahead

WMF says it will continue updating bot detection systems and monitoring how generative AI and social media shape information access.

Wikipedia remains a core dataset for modern search and AI systems, even when users don’t visit the site directly. Publishers should expect similar dynamics as AI search features expand across platforms.


Featured Image: Ahyan Stock Studios/Shutterstock

Review Of AEO/GEO Tactics Leads To A Surprising SEO Insight via @sejournal, @martinibuster

GEO/AEO is criticized by SEOs who claim that it’s just SEO at best and unsupported lies at worst. Are SEOs right, or are they just defending their turf? Bing recently published a guide to AI search visibility that provides a perfect opportunity to test whether recommendations for optimizing for AI answers are distinct from traditional SEO practices.

Chunking Content

Some AEO/GEO optimizers say it’s important to write content in chunks because that’s how AI systems and LLMs break a page of content up: into chunks. Bing’s guide to answer engine optimization, written by Krishna Madhavan, Principal Product Manager at Bing, echoes the concept of chunking.

Bing’s Madhavan writes:

“AI assistants don’t read a page top to bottom like a person would. They break content into smaller, usable pieces — a process called parsing. These modular pieces are what get ranked and assembled into answers.”

The thing that some SEOs tend to forget is that chunking content is not new. It’s been around for at least five years. Google introduced its passage ranking algorithm back in 2020. The passages algorithm breaks a web page into sections to understand how the page, and individual sections of it, are relevant to a search query.

Google says:

“Passage ranking is an AI system we use to identify individual sections or “passages” of a web page to better understand how relevant a page is to a search.”

Google’s 2020 announcement described passage ranking in these terms:

“Very specific searches can be the hardest to get right, since sometimes the single sentence that answers your question might be buried deep in a web page. We’ve recently made a breakthrough in ranking and are now able to better understand the relevancy of specific passages. By understanding passages in addition to the relevancy of the overall page, we can find that needle-in-a-haystack information you’re looking for. This technology will improve 7 percent of search queries across all languages as we roll it out globally.”

As far as chunking is concerned, any SEO who has optimized content for Google’s Featured Snippets can attest to the importance of creating passages that directly answer questions. It’s been a fundamental part of SEO since at least 2014, when Google introduced Featured Snippets.
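For illustration only, here is a minimal sketch of the chunking idea in Python: split a page into passages at its headings and score each passage against a query. The page, query, and naive term-overlap scoring are all made up; real systems use learned models over embeddings.

```python
# A minimal sketch (not Google's or Bing's actual system) of "chunking":
# split a page into passages at headings, then score each passage
# against a query so the best chunk can be surfaced.
import re

PAGE = """
## Quiet dishwashers
A 42 dB dishwasher is quiet enough for open-concept kitchens.

## Installation
Leave a 5 cm gap behind the unit for ventilation.
"""

QUERY = "how quiet is a dishwasher for an open kitchen"

def chunk(page: str) -> list[str]:
    # Split on markdown-style headings; each chunk is heading + body.
    parts = re.split(r"(?m)^## ", page)
    return ["## " + p.strip() for p in parts if p.strip()]

def score(passage: str, query: str) -> float:
    # Naive term-overlap score; real systems use learned embeddings.
    p_terms = set(re.findall(r"\w+", passage.lower()))
    q_terms = set(re.findall(r"\w+", query.lower()))
    return len(p_terms & q_terms) / len(q_terms)

passages = chunk(PAGE)
best = max(passages, key=lambda p: score(p, QUERY))
print(best)
```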

Titles, Descriptions, and H1s

The Bing guide to ranking in AI also states that descriptions, headings, and titles are important signals to AI systems.

I don’t think I need to belabor the point that descriptions, headings, and titles are fundamental elements of SEO. So again, there is nothing here to differentiate AEO/GEO from SEO.

Lists and Tables

Bing recommends bulleted lists and tables as a way to easily communicate complex information to users and search engines. This approach to organizing data is similar to an advanced SEO method called disambiguation, which is about making the meaning and purpose of a web page as clear and unambiguous as possible.

Making a page less ambiguous can incorporate semantic HTML to clearly delineate which part of a web page is the main content (MC, in the parlance of Google’s quality rater guidelines) and which part is just advertisements, navigation, a sidebar, or the footer.

Another form of disambiguation is through the proper use of HTML elements like ordered lists (OL) and the use of tables to communicate tabular data such as product comparisons or a schedule of dates and times for an event.

The use of HTML elements (like H, OL, and UL) gives structure to on-page information, which is why it’s called structured information. Structured information and structured data are two different things. Structured information is on the page and is seen in the browser and by crawlers. Structured data is metadata that only a bot will see.
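To make the distinction concrete, here is my own example (not something from Bing’s guide): the first snippet is structured information, semantic HTML that both readers and crawlers see on the page, while the second is structured data, JSON-LD metadata that only bots parse. The product and property names are hypothetical.

```python
import json

# Structured information: semantic HTML that users and crawlers both see.
# (Illustrative example only.)
structured_information = """
<main>
  <h1>Dishwasher comparison</h1>
  <table>
    <tr><th>Model</th><th>Noise level</th></tr>
    <tr><td>QuietWash 300</td><td>42 dB</td></tr>
  </table>
</main>
"""

# Structured data: machine-readable metadata (JSON-LD) that only bots parse.
structured_data = json.dumps({
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "QuietWash 300",  # hypothetical product
    "additionalProperty": {
        "@type": "PropertyValue",
        "name": "noise level",
        "value": "42 dB",
    },
}, indent=2)

print(structured_data)
```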

There are studies showing that structured information helps AI agents make sense of a web page, so I have to concede that structured information is particularly helpful to AI agents in a unique way.

Question And Answer Pairs

Bing recommends Q&A’s, which are question and answer pairs that an AI can use directly. Bing’s Madhavan writes:

“Direct questions with clear answers mirror the way people search. Assistants can often lift these pairs word for word into AI-generated responses.”

This is a mix of passage ranking and the SEO practice of writing for featured snippets, where you pose a question and give the answer. Creating an entire page of questions and answers is a risky approach, but if it feels useful and helpful, it may be worth doing.

Something to keep in mind is that Google’s systems treat content lacking unique insight on the same level as spam. Google also considers content created specifically for search engines to be low quality.

Anyone considering writing questions and answers on a web page for the purpose of AI SEO should first consider whether it’s useful for people and think deeply about the quality of the question and answer pairs. Otherwise, it’s just a rote page of made-for-search-engines content.

Be Precise With Semantic Clarity

Bing also recommends semantic clarity. This is also important for SEO. Madhavan writes:

  • “Write for intent, not just keywords. Use phrasing that directly answers the questions users ask.
  • Avoid vague language. Terms like innovative or eco mean little without specifics. Instead, anchor claims in measurable facts.
  • Add context. A product page should say “42 dB dishwasher designed for open-concept kitchens” instead of just “quiet dishwasher.”
  • Use synonyms and related terms. This reinforces meaning and helps AI connect concepts (quiet, noise level, sound rating).”

They also advise against abstract words like “next-gen” or “cutting edge” because they don’t really say anything. This is a big, big issue with AI-generated content, which tends to use abstract words that can be removed entirely without changing the meaning of the sentence or paragraph.

Lastly, they advise against decorative symbols, which is a good tip. Decorative symbols like the arrow (→) don’t really communicate anything semantically.

All of this advice is good. It’s good for SEO, good for AI, and like all the other AI SEO practices, there is nothing about it that is specific to AI.

Bing Acknowledges Traditional SEO

The funny thing about Bing’s guide to ranking better for AI is that it explicitly acknowledges that traditional SEO is what matters.

Bing’s Madhavan writes:

“Whether you call it GEO, AIO, or SEO, one thing hasn’t changed: visibility is everything. In today’s world of AI search, it’s not just about being found, it’s about being selected. And that starts with content.

…traditional SEO fundamentals still matter.”

AI Search Optimization = SEO

Google and Bing have incorporated AI into traditional search for about a decade. AI search ranking is not new, so it should not be surprising that SEO best practices align with ranking for AI answers. The same considerations also parallel how users interact with content.

Many SEOs are still stuck in the decades-old keyword optimization paradigm, and perhaps these methods of disambiguation and precision are new to them. So it may be a good thing for the broader SEO industry to catch up with these concepts for optimizing content and to recognize that there is no AEO/GEO; it’s still just SEO.

Featured Image by Shutterstock/Roman Samborskyi

Raptive Drops Traffic Requirement By 75% To 25,000 Views via @sejournal, @MattGSouthern

Raptive lowered its minimum traffic requirement to 25,000 monthly pageviews from 100,000.

The ad network announced the new threshold represents a 75% reduction from the previous standard.

Raptive retired its Rise pilot program and consolidated all entry-level publishers into its Insider tier.

What Changed At Raptive

Sites generating between 25,000 and 99,999 monthly pageviews can now apply. These publishers need at least 50% of traffic from the United States, United Kingdom, Canada, Australia, or New Zealand.

Sites with 100,000 or more pageviews need only 40% traffic from those markets.
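Based only on the thresholds described above, and setting aside Raptive’s other quality requirements, a rough sketch of the traffic check might look like this:

```python
# Rough eligibility check based only on the pageview and geography
# thresholds reported above; Raptive's actual review also covers
# content quality, analytics setup, and advertiser compatibility.
def meets_traffic_thresholds(monthly_pageviews: int, tier_one_share: float) -> bool:
    """tier_one_share is the fraction of traffic from the US, UK, CA, AU, or NZ (0-1)."""
    if monthly_pageviews >= 100_000:
        return tier_one_share >= 0.40
    if monthly_pageviews >= 25_000:
        return tier_one_share >= 0.50
    return False

print(meets_traffic_thresholds(60_000, 0.55))   # True
print(meets_traffic_thresholds(60_000, 0.45))   # False
print(meets_traffic_thresholds(120_000, 0.45))  # True
```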

Raptive’s announcement stated:

“We’re living in a moment where AI drives inflated pageviews for low-quality websites and where algorithms can shift a site’s pageviews overnight. What truly matters—more than ever—is original, high-quality content that audiences trust.”

The Rise program launched in 2024 for sites between 50,000 and 100,000 monthly pageviews. That tier is being eliminated.

Current Insider-level publishers can now add additional sites once they reach 25,000 monthly pageviews.

Referral Program Expansion

Raptive expanded its referral program through January 31.

Publishers receive $1,000 when referring creators with sites generating 100,000 or more monthly pageviews.

For sites between 25,000 and 100,000 pageviews, the referral bonus is $250 during the limited promotion period.

Access Widening At Some Networks

Other networks have adjusted entry requirements in recent years, though changes vary.

Mediavine launched Journey in March 2024 for sites starting around 10,000 sessions. Ezoic removed pageview minimums for its Access Now monetization program. SHE Media lists an entry point around 20,000 pageviews.

These moves don’t necessarily represent an industry-wide pattern but show expanded options for smaller publishers at select networks.

Why This Matters

If you’re managing a site between 25,000 and 100,000 monthly pageviews with strong tier-one traffic, you now have access to Raptive’s managed monetization. You’ll still need to meet quality standards around original content, proper analytics setup, and advertiser compatibility.

The lower threshold acknowledges that traffic volatility from algorithm changes has made consistent pageview growth less predictable.

Looking Ahead

The new 25,000 pageview minimum takes effect immediately for new applications. Raptive continues requiring original content and proper site setup alongside the reduced traffic threshold.

Other networks may adjust their requirements as traffic patterns continue shifting, but each provider sets criteria independently.


Featured Image: Song_about_summer/Shutterstock

Mullenweg Talks About Commercially Motivating WordPress Companies via @sejournal, @martinibuster

At the recent WordCamp Canada, WordPress co-founder Matt Mullenweg answered a question about how individuals and agencies could support the WordPress ecosystem against “bad actors” who don’t share the same community values. The question gave Mullenweg the opportunity to portray himself as the victim of a court that’s muzzling his free speech and to encourage the WordPress community to vote with their pocketbooks.

Question About Protecting WordPress Against Bad Actors

The person asking the question had two things on their mind:

1. How can individuals and agencies help protect WordPress’s community values from exploitative or profit-driven actors?

2. Should there be a formal certification system to identify and promote ethical contributors and agencies within the ecosystem?

The question reinforced the sense that the WordPress community is divided into two sides: those who stand with Mullenweg in his dispute with WP Engine and those who disapprove of the drama.

This is the question that was asked:

“WordPress has always thrived because of its open, community-driven ethos, but as the ecosystem grows, we’re seeing more like large, profit-driven players who don’t necessarily share the values. How can individual contributors and agencies like ours actively help protect WordPress and uphold the values and ethics that have sustained it from bad actors and people who might try to exploit the community.

And do you see room for something more formal, like a certification for individuals and agencies that define what being a good actor is to help educate clients and even the market to help kind of protect in a more proactive way from those sorts of bad actors?”

The question assumes a polarization of the WordPress community, with exploitative, profit-seeking bad actors on one side and ethical WordPress supporters on the other.

No Bad Actors

Matt Mullenweg began his answer by stating that he’s not one to call anyone a bad actor.

He answered:

“So first, I’ll say, I don’t want to say that there’s bad actors. I think there might be bad actions sometimes and just temporarily bad actors who hopefully will be good in the future. So, you know, every saint has a past, every sinner has a future. So I never want to define like any company or any person is like permanently good or bad. Let’s talk about actions. “

Is This You?

It was a strange way to begin his answer because he used the phrase “bad actors” in his speech at last year’s WordCamp USA, where he called out WP Engine:

“I think that we also just need to call out bad actors. And you got to, the only way to fight a bully is to fight them back. If you just allow them to run rampant on the playground, they’re just going to keep terrorizing everyone.”

He followed that speech with a blog post where he went further and called WP Engine a “cancer to WordPress.”

You can hear it at the 33:48 mark of the recording from last year’s WordCamp.

Motivating Good Behavior

Mullenweg continued his answer by discussing ways to motivate companies to give back to the WordPress community while also enforcing the GPL and protecting the WordPress trademark. Lastly, he encouraged the WordPress community to vote with their wallets by spending money with companies defined as “good” and giving less to businesses presumably defined as bad actors.

He continued:

“So second, I think with these actions, we can start to create incentive systems. And it’s part of what we’re doing with Five for the Future, which is basically saying you contribute back, which also implies that you’re not violating the GPL or something like that.

So we’ve got the hard stuff, like if you violate the GPL, you’re gonna get a letter, violate the trademark, that is more of a legal thing, but also the gentle stuff, like how can we encourage a good behavior by giving people higher rankings in the directory or in the showcase, for example, then finally, I’ll just say vote with your wallet.”

At this point he continued with the topic of motivating companies to do the right thing and drifted off into talking about WP Engine without actually naming WP Engine.

Mullenweg continued:

“So each one of you here has the ability to strongly influence these companies. By the way, if they’re commercially motivated, great. Let’s commercially motivate them to do the right thing by giving more business to the good companies and less business to the other companies.

This has actually been happening a lot the past year. I think I can say this. There’s a site called WordPressEngineTracker.com, which is currently tracking a number of sites that have left a certain host. It’s about to crash 100,000, about to cross 100,000, that have switched to other hosts, and over 74,000 have gone offline since September of last year.

We actually used to make all this data public. It was all the whole list was on there. They got a court order, so that way the data could be fact-checked by press or other people. There was actually a court order that made us take that down. So again, trying to muzzle free speech and transparency. But we’re allowed to keep that site up, so check it out while you can.”

Mullenweg’s comments frame spending choices as a form of moral expression within the WordPress ecosystem. By urging the community to “commercially motivate” companies, he encourages consumer spending as a way of enforcing ethical accountability, implicitly targeting WP Engine and unnamed others that fall short.

He positioned himself as the victim whose free speech is muzzled, but the court order simply required him and Automattic to stop sharing a spreadsheet of WP Engine’s customers. He also framed the whole dispute as one about ethics and morals, invoking the religious imagery of sinners and saints. WordPress is both a business and a community, but it’s not a religion. So it’s somewhat odd that those connections were made in the context of contributing money or time back into WordPress, which is a cultural obligation but not a legal (or religious) one.

Watch the Q & A here:

Study Shows 2-5 Weekly TikToks Deliver Biggest View Increase via @sejournal, @MattGSouthern

Buffer’s analysis of 11.4 million TikTok posts from over 150,000 accounts reveals that posting 2-5 times per week delivers the steepest per-post view increase.

The study challenges the usual recommendation to post multiple times daily by demonstrating that per-post gains diminish after the initial increase in posting frequency.

Data scientist Julian Winternheimer employed fixed-effects regression to examine how posting frequency affects metrics. His analysis, spanning the past year, measured views per post at various weekly posting rates.

What The Data Shows

Posting 2-5 times per week yields 17% more views per post compared to posting once weekly. Moving to 6-10 posts brings 29% gains, while 11+ posts per week shows 34% increases.

The steepest climb happens between one post and 2-5 posts per week. Doubling from 5 to 10 weekly posts adds just 12 percentage points, showing diminishing returns on per-post performance.

Buffer’s fixed-effects model compares each account to itself over time rather than across accounts. This removes variables like follower count and brand strength that would otherwise skew results.
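To illustrate what a within-account comparison means in practice, here is a minimal sketch with made-up data; it is not Buffer’s code, just a demonstration of demeaning each account’s metrics before estimating the slope:

```python
# A minimal sketch of a within-account ("fixed effects") comparison:
# demean each metric per account so the slope reflects how an account's
# own views change when its posting frequency changes, rather than
# differences between accounts.
import numpy as np

# Hypothetical data: (account_id, posts_per_week, views_per_post)
rows = [
    ("a", 1, 480), ("a", 4, 590), ("a", 8, 640),
    ("b", 2, 1100), ("b", 5, 1350), ("b", 12, 1500),
]

accounts = np.array([r[0] for r in rows])
posts = np.array([r[1] for r in rows], dtype=float)
views = np.array([r[2] for r in rows], dtype=float)

# Demean within each account (the fixed-effects transformation).
posts_dm = posts.copy()
views_dm = views.copy()
for acct in np.unique(accounts):
    mask = accounts == acct
    posts_dm[mask] -= posts[mask].mean()
    views_dm[mask] -= views[mask].mean()

# Slope of demeaned views on demeaned posting frequency.
slope = (posts_dm @ views_dm) / (posts_dm @ posts_dm)
print(f"Extra views per post for each additional weekly post: {slope:.1f}")
```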

Median Performance Stays Flat

Median views per post hover around 500 regardless of posting frequency. At one post per week, median views reach 489. At 11+ posts weekly, median views drop slightly to 459.

The top 10% of posts tell a different story. At one post weekly, the 90th percentile hits 3,722 views.

That number jumps to 6,983 views for accounts posting 2-5 times, 10,092 views at 6-10 posts, and 14,401 views at 11+ posts per week.

Buffer labels this “Viral Potential” (p90/median ratio). Accounts posting 11+ times weekly see their top posts perform 31.4 times better than their median post, compared to just 7.6 times for once-weekly posters.
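The “Viral Potential” ratio itself is straightforward to compute. Here is a quick sketch using hypothetical view counts:

```python
import numpy as np

# Hypothetical per-post view counts for one account over a period.
views = np.array([120, 300, 450, 500, 520, 610, 700, 950, 2400, 14000])

median = np.median(views)
p90 = np.percentile(views, 90)
print(f"median={median:.0f}, p90={p90:.0f}, viral potential={p90 / median:.1f}x")
```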

Why This Matters

If you manage TikTok content, this data suggests 2-5 posts per week offers the most efficient starting cadence.

Posting more frequently increases your chances of a viral outlier rather than improving typical post performance.

Winternheimer explains:

“Posting more helps — but mostly because it increases your chances of getting lucky. TikTok is heavy-tailed. You only need one post to pop off. Posting more just increases your odds.”

More posts raise the ceiling for your best-performing content without raising the floor for average posts.

Buffer notes the study draws from accounts connected to its platform, which may skew toward small and medium businesses.

Looking Ahead

Winternheimer offers the following advice:

“If we wanted to provide a blanket recommendation that applies to most people, I’d recommend starting with 2-5 posts per week on TikTok. However, if you have more posts to share, you’ll give yourself a better chance at having a breakout post.”

Remember that platform dynamics can change rapidly. What was true over the past year might shift as TikTok updates its algorithm.

Google Says What Content Gets Clicked On AI Overviews via @sejournal, @martinibuster

Google’s Liz Reid, Vice President of Search, recently said that AI Overviews shows what kind of content makes people click through to visit a site. She also said that Google expanded the concept of spam to include content that does not bring the creator’s perspective and depth.

People’s Preferences Drive What Search Shows

Liz Reid affirmed that user behavior tells Google what kinds of content people want to see, such as short-form videos. That behavior leads Google to show more of it, and the system itself learns and adjusts to the kinds of content (forums, text, video, etc.) that people prefer.

She said:

“…we do have to respond to who users want to hear from, right? Like, we are in the business of both giving them high quality information, but information that they seek out. And so we have over time adjusted our ranking to surface more of this content in response to what we’ve heard from users.

…You see it from users, right? Like we do everything from user research to we run an experiment. And so you take feedback from what you hear, from research about what users want, you then test it out, and then you see how users actually act. And then based on how users act, the system then starts to learn and adjust as well.”

The important insight is that user preferences play an active role in shaping what appears in AI search results. Google’s ranking systems are designed to respond not just to quality but to the types of content users seek out and engage with. This means that shifts in user behavior related to content preferences directly influence what is surfaced. The system continuously adapts based on real-world feedback. The takeaway here is that SEOs and creators should actively gauge what kind of content users are engaging with and be ready to pivot in response to changes.

The conversation builds toward Reid describing exactly what kinds of content engage users, based on the feedback Google receives through user behavior.

AI-Generated Is Not Always Spam

Reid next turns to AI-generated content, essentially confirming that the bar Google uses to decide what’s high and low quality is agnostic to whether the content was created by a human or an AI.

She said:

“Now, AI generated content doesn’t necessarily equal spam.

But oftentimes when people are referring to it, they’re referring to the spam version of it, right? Or the phrase AI slop, right? This content that feels extremely low value across, okay? And we really want to make an effort that that doesn’t surface.”

Her point is pretty clear that all content is judged by the same standard. So if content is judged to be low quality, it’s judged based on the merits of the content, not by the origin.

People Click On Rich Content

At this point in the interview, Reid stops talking about low-quality content and turns to the kind of content that makes people click through to a website. She said user behavior shows that people don’t want superficial content, and that click patterns show more people click through to content that has depth and expresses a unique perspective rather than mirroring what everyone else is saying. This is the kind of content that gets clicks in AI search.

Reid explained:

“But what we see is people want content from that human perspective. They want that sense of like, what’s the unique thing you bring to it, okay? And actually what we see on what people click on, on AI Overviews, is content that is richer and deeper, okay?

That surface-level AI generated content, people don’t want that because if they click on that, they don’t actually learn that much more than they previously got. They don’t trust the result anymore.

So what we see with AI Overviews is that we surface these sites and get fewer what we call bounce clicks. A bounce click is like you click on your site, Yeah, I didn’t want that, and you go back.

AI Overviews gives some content, and then we get to surface deeper, richer content, and we’ll look to continue to do that over time so that we really do get that creator content and not the AI generated.”

Reid’s comments suggest that click patterns show content offering a distinct perspective or insight derived from experience performs better than low-effort content. This points to an intention within AI Overviews not to amplify generic output and to uprank content that demonstrates firm knowledge of the topic.

Google’s Ranking Weights

Here’s an interesting part that explains what gets up-ranked and down-ranked, expressed in a way I’ve not seen before. Reid said that they’ve extended the concept of spam to also include content that repeats what’s already well known. She also said that they are giving more ranking weight to content that brings a unique perspective or expertise to the content.

Here Reid explains the downranking:

“Now, it is hard work, but we spend a lot of time and we have a lot of expertise built on this such that we’ve been able to take the spam rate of what actually shows up, down.

And as well as we’ve sort of expanded beyond this concept of spam to sort of low-value content, right? This content that doesn’t add very much, kind of tells you what everybody else knows, it doesn’t bring it…”

And this is the part where she says Google is giving more ranking weight to content that contains expertise:

“…and tried to up-weight more and more content specifically from someone who really went in and brought their perspective or brought their expertise, put real time and craft into the work.”

Takeaways

How To Get Upranked In AI Overviews

1. Create “Richer and Deeper” Content

Reid said, “people want content from that human perspective. They want that sense of like, what’s the unique thing you bring to it, okay? And actually what we see on what people click on, on AI Overviews, is content that is richer and deeper, okay?”

Takeaway:
Publish content that shows original thought, unique insights, and depth rather than echoing what’s already widely said. In my opinion, using software that analyzes the content that’s already ranking or using a skyscraper/10x content strategy is setting yourself up for doing exactly the opposite of what Liz Reid is recommending. A creator will never express a unique insight by echoing what a competitor has already done.

2. Reflect Human Perspective

Reid said, “people want content from that human perspective. They want that sense of like, what’s the unique thing you bring to it.”

Takeaway: Incorporate your own analysis, experiences, or firsthand understanding so that the content is authentic and expresses expertise.

3. Demonstrate Expertise and Craft

Reid shared that Google is trying “to up-weight more and more content specifically from someone who really went in and brought their perspective or brought their expertise, put real time and craft into the work.”

Takeaway:
Effort, originality, and subject-matter knowledge are the qualities that Google is up-weighting to perform better within AI Overviews.

Reid draws a clear distinction between content that repeats what is already widely known and content that adds unique value through perspective or expertise. Google treats superficial content like spam and lowers its ranking weight to reduce its visibility, while actively “upweighting” content that demonstrates effort and insight, what she termed craft. Craft means skill, expertise, and mastery of something. The message here is that originality and actual expertise are important for ranking well, particularly in AI Overviews, and I would think the same applies to AI Mode.

Watch the interview from about the 18 minute mark:

Google Reminds SEOs How The URL Removals Tool Works via @sejournal, @martinibuster

Google’s John Mueller answered a question about removing hacked URLs that are showing in the index. He explained how to remove the sites from appearing in the search results and then discussed the nuances involved in dealing with this specific situation.

Removing Hacked Pages From Google’s SERPs

The person asking the question was the victim of a Japanese hack attack, so-called because the attackers create hundreds or even thousands of rogue Japanese-language web pages. The person had dealt with the issue and removed the spammy infected web pages, leaving them with 404 pages that are still referenced in Google’s search results.

They now want to remove them from Google’s search index so that the site is no longer associated with those pages.

They asked:

“My site recently got a Japanese attack. However, I shifted that site to a new hosting provider and have removed all data from there.

However, the fact is that many Japanese URLs have been indexed.

So how do I deindex those thousands of URLs from my website?”

The question reflects a common problem in the aftermath of a Japanese hack attack, where hacked pages stubbornly remain indexed long after the pages were removed. This shows that site recovery is not complete once the malicious content is removed; Google’s search index needs to clear the pages, and that can take a frustratingly long time.

How To Remove Japanese Hack Attack Pages From Google

Google’s John Mueller recommended using the URL Removals Tool in Search Console. Contrary to what the tool’s name implies, it doesn’t remove a URL from the search index; it just stops the URL from showing in Google’s search results sooner, provided the content has already been removed from the site or blocked from Google’s crawler. Under normal circumstances, Google removes a page from the search results after the page is crawled and found to be blocked or gone (404 error response).

Three Prerequisites For URL Removals Tool

  1. The page is removed and returns a 404 or 410 server response code.
  2. The URL is blocked from indexing by a robots meta tag.
  3. The URL is prevented from being crawled by a robots.txt file.

Google’s Mueller responded:

“You can use the URL removal tool in search console for individual URLs (also if the URLs all start with the same thing). I’d use that for any which are particularly visible (check the performance report, 24 hours).

This doesn’t remove them from the index, but it hides them within a day. If the pages are invalid / 404 now, they’ll also drop out over time, but the removal tool means you can stop them from being visible “immediately”. (Redirecting or 404 are both ok, technically a 404 is the right response code)”

Mueller clarified that the URL Removals Tool does not delete URLs from Google’s index but instead hides them from search results, faster than natural recrawling would. His explanation is a reminder that the tool has a temporary search visibility effect and is not a way to permanently remove a URL from Google’s index itself. The actual removal from the search index happens after Google verifies that the page is actually gone or blocked from crawling or indexing.
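Before filing removals at scale, it can help to spot-check that a URL actually satisfies one of the three conditions above. Here is a minimal sketch (my own illustration, not a Google tool) using only the Python standard library; the URL is hypothetical:

```python
# Spot-check whether a URL meets one of the three conditions above
# before requesting removal in Search Console.
import urllib.error
import urllib.request
import urllib.robotparser
from urllib.parse import urlsplit

URL = "https://example.com/hacked-page"  # hypothetical URL

def check_url(url: str) -> None:
    # 1. Does the page now return 404 or 410?
    try:
        with urllib.request.urlopen(url) as resp:
            status = resp.status
            body = resp.read().decode("utf-8", errors="ignore")
    except urllib.error.HTTPError as err:
        status = err.code
        body = ""
    print("Returns 404/410:", status in (404, 410))

    # 2. Is indexing blocked with a robots meta tag? (rough string check)
    print("Mentions noindex:", "noindex" in body.lower())

    # 3. Is crawling disallowed by robots.txt?
    parts = urlsplit(url)
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    print("Blocked by robots.txt:", not robots.can_fetch("Googlebot", url))

check_url(URL)
```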

Featured Image by Shutterstock/Asier Romero

Google Business Profile Tests Cross-Location Posts via @sejournal, @MattGSouthern

Google Business Profile appears to be testing a feature that lets managers share the same update across multiple locations from a single dialog.

Tim Capper reported seeing the option. After publishing an update, a “Copy post” dialog appears with the prompt: “Copy the update to other profiles you manage.”

The interface displays a list of business locations with checkboxes so you can choose which profiles receive the same update.

We’ve asked Google for comment on availability and eligibility requirements and will update this article if we receive a response.

What’s New

From what’s visible in the screenshots, the workflow streamlines cross-posting for multi-location accounts.

You publish an update to one profile, then immediately see a pop-up listing other profiles you manage.

You can select one or many locations and post the same update without repeating the process.

Why It Matters

If you manage multiple locations, this could save time by reducing repetitive posting. It may also help keep messaging consistent across locations.

Make sure updates remain locally relevant before copying them everywhere.

How To Check If You Have Access

If you manage more than one profile in the same account, publish a standard update to one location.

If your account is in the test, you should see a “Copy post” dialog immediately after posting, with a list of other profiles you manage.

If You Don’t See It

Not all accounts will have access during tests. Keep posting as usual and check again periodically. If you manage many locations, confirm that all profiles are grouped under the same account with the correct permissions.

Looking Ahead

If Google proceeds with a wider launch, expect details on supported post types, scheduling, and limits. We’ll update this story if Google confirms the feature or publishes documentation.