Google & Apple Maps: 20% of Local Searches Now Start Here via @sejournal, @MattGSouthern

New research shows that map platforms have become key search engines for local businesses.

One in five consumers now searches directly in map apps instead of traditional search engines.

BrightLocal’s Consumer Search Behavior study found that Google, Apple, and Bing Maps make up 20% of all local searches.

That's a significant share of search activity that many marketers may be overlooking in their local SEO strategies.

The Rise of Map-First Search Behavior

The research found that 15% of consumers use Google Maps as their first choice for local searches. This makes it the second most popular platform after Google Search (45%).

The study reads:

“Another significant finding is the prominence of Google Maps in local search. 15% of consumers said they would use Google Maps as their first port of call, meaning they are searching local terms—which could be brand or non-brand terms—directly in Google Maps.”

It continues:

“Google Maps, Apple Maps, and Bing Maps combined make up 20% of default local search platforms. This reinforces the importance of ensuring you’re optimizing for both map packs and organic search listings. You might have a strong presence in the SERPs, but if consumers are looking for businesses like yours on a map search, you need to ensure you’re going to be found there, too.”

This change shows that consumers favor visual, location-based searches for local businesses, especially when making spontaneous decisions.

Generational Differences in Map Usage

Different age groups use map platforms at different rates:

  • 18% of Gen Z consumers use Google Maps as their primary local search tool, three percentage points higher than the average.
  • 21% of Millennials use Google Maps as their default local search platform.
  • 5% of Millennials prefer Apple Maps as their primary local search option.

Younger consumers appear to be more comfortable using maps to discover local businesses, likely because they're accustomed to doing everything on mobile devices.

What Consumers Look for in Map Results

The study found key information that drives consumer decisions when using maps:

  • 85% of consumers say contact information and opening hours are “important” or “very important”
  • 46% rate business contact information as “very important”
  • Nearly half (49%) of consumers “often” or “always” plan their route to a business after searching

Map-based searches have high potential to convert browsers into customers, the report notes:

“Almost half of consumers (49%) said that they ‘often’ or ‘always’ go on to plan their travel route to the chosen business. This suggests two things: one, how quickly consumers seem to be making their decisions, and two, that consumers are conducting local business research with the aim of visiting in the very near future.”

SEO Implications for Local Businesses

For SEO pros and local marketers, these findings highlight several actions to take:

  • Prioritize optimizing map listings beyond your Google Business Profile.
  • Ensure accuracy across all map platforms, not just Google.
  • Focus on complete business information, especially contact details and hours.
  • Monitor the “justifications” in map results, which can be sourced from your business information, reviews, and website.
  • Treat maps as a primary search channel rather than an afterthought.

BrightLocal highlights:

“So, don’t lose out to potential customers by not having a correct address, phone number, or email address listed on your platforms—and be sure to check your opening hours are up to date.”
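Marking up the same details on your own site with LocalBusiness structured data is one way to keep them consistent with what the map platforms display. The snippet below is a minimal, purely illustrative sketch (the business name, phone number, address, and hours are placeholders), and it supplements rather than replaces your Google Business Profile or Apple Maps listing:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Coffee Roasters",
  "url": "https://www.example.com/",
  "telephone": "+1-555-010-0199",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701",
    "addressCountry": "US"
  },
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
      "opens": "07:00",
      "closes": "18:00"
    }
  ]
}
</script>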

Looking Ahead

Map platforms are evolving from simple navigation tools into search engines that drive sales and revenue.

If you treat map listings as an afterthought, you risk missing many motivated, ready-to-buy consumers.

As search continues to fragment across platforms, investing specific resources in optimizing your map presence, beyond standard local SEO, is increasingly essential for businesses that rely on local traffic.


Featured Image: miss.cabul/Shutterstock

GoDaddy Is Offering Leads To Freelancers And Agencies via @sejournal, @martinibuster

GoDaddy launched a new partner program called GoDaddy Agency that matches web developers and agencies with leads from small to mid-sized businesses (SMBs). It provides digital agencies with tools, services, and support to help them grow what they offer their customers.

The new program is available to U.S.-based freelancers and web development agencies. GoDaddy offers the following benefits:

  • Client leads
    Partners are paired with SMBs based on expertise and business goals. GoDaddy delivers high-intent business referrals from GoDaddy’s own Web Design Services inquiries.
  • Commission revenue opportunities
    Partners can earn up to 20% commission for each new client purchase.
  • Access to premium WordPress tools
  • Co-branded marketing
    Top-performing partners benefit from increased exposure through joint marketing campaigns.
  • Dedicated Support
    Every agency is assigned an Agency Success Manager who can help them navigate ways to benefit more from the program.

Joseph Palumbo, Go-to-Market and Agency Programs Director at GoDaddy, explained:

“The GoDaddy Agency Program is all about helping agencies grow. We give partners the tools, support, and referrals they need to take on more clients and bigger projects—without adding more stress to their day. It’s like having a team behind your team.”

For WordPress Developers And More

I asked GoDaddy if this program is exclusively for WordPress developers. They answered:

“GoDaddy has a wide variety of products to help make any business successful. So, this isn’t just about WordPress. We have plenty of website solutions, like Managed WordPress, Websites + Marketing or VPS for application development. Additionally, we have other services like email through Office 365, SSL certificates and more.”

Advantage Of Migrating Customers To GoDaddy

I asked GoDaddy what advantages a developer at another host would gain by bringing all of their clients over to GoDaddy.

They answered:

“First, our extensive product portfolio and diverse hosting selection allows agencies to house all and any projects at GoDaddy, allowing them to simplify their operations and giving them the opportunity to manage their business from a single dashboard and leverage a deep connection with a digital partner that understands their challenges and opportunities.

On top of that, there’s the growth potential. Every day, we get calls from customers who want websites that are too complex for us to design and build. So, we have created a system that instead of directing those customers elsewhere, we can connect with Web agencies that are better suited to handle their requests.

If a digital agency becomes a serious partner and the work they do meets our standards, and they have great customer service, etc. we can help make connections that are mutually beneficial to our customers and our partners.”

Regarding my question about WordPress tools offered to agency partners, a spokesperson answered:

“We have a wide variety of AI tools to help them get their jobs done faster. From website design via AI to product descriptions and social posts. Beyond our AI tools, agency partners that use WordPress can work directly with our WordPress Premium Support team. This is a team of WordPress experts and developers who can assist with anything WordPress-related whether hosted at GoDaddy or somewhere else.”

Takeaways

When was the last time your hosting provider gave you a business lead? The GoDaddy Agency program is an ecosystem that supports agencies and freelancers who partner with GoDaddy.

It makes sense for a web host to share leads from customers who are actively in the market for web development work with partner agencies and freelancers who can use them. It’s a win-win for the host and its partners, and an opportunity worth looking into.

GoDaddy’s new Agency Program connects U.S.-based web developers, freelancers and agencies with high-intent leads from small-to-mid-sized businesses while offering commissions, tools, and support to help agencies grow their client base and streamline operations. The program is a unique ecosystem that enables developers to consolidate hosting, leverage WordPress and AI tools, and benefit from co-marketing and personalized support.

  • Client Acquisition via Referrals:
    GoDaddy matches agency partners with high-intent SMB leads generated from its own service inquiries.
  • Revenue Opportunities:
    Agencies can earn up to 20% commission on client purchases made through the program.
  • Consolidated Hosting and Tools:
    Agencies can manage multiple client types using GoDaddy’s product ecosystem, including WordPress, VPS, and Websites + Marketing.
  • Premium WordPress and AI Support:
    Partners gain access to a dedicated WordPress Premium Support team and AI-powered productivity tools (e.g., design, content generation).
  • Co-Branded Marketing Exposure:
    High-performing partners receive increased visibility through joint campaigns with GoDaddy.
  • Dedicated Success Management:
    Each partner is assigned an Agency Success Manager for personalized guidance and program optimization.
  • Incentive for Migration from Other Hosts:
    GoDaddy provides a centralized platform offering simplicity, scale, and client acquisition opportunities for agencies switching from other providers.

Read more about the GoDaddy Agency program:

GoDaddy Agency: A New Way to Help Digital Consultants Grow

Apply to join the Agency Program here.

Google On Diluting SEO Impact Through Anchor Text Overuse via @sejournal, @martinibuster

Google’s John Mueller answered a question about internal site navigation where an SEO was concerned about diluting the ability to rank for a keyword phrase by using the same anchor text in four sitewide sections across an entire website.

Link In Four Navigational Areas

The person asking the question had a client that had four navigational areas with links across the entire site. One of the links is repeated across each of the four navigational areas, using the exact same anchor text. The concern is that using that phrase multiple times across the entire site might cause it to appear overused.

Roots of Why SEOs Worry About Anchor Text Overuse

There’s a longtime concern in the SEO industry about overusing anchor text. The root of this concern is that overusing internal anchor text could be seen as communicating an intent to manipulate search engines. The concern arose in 2005 because of Google’s announced use of statistical analysis, which can identify unnatural linking patterns.

Over the years that concern has evolved into worrying about “diluting” the impact of anchor text, which has no foundation in anything Google has said, although Google is on record as saying that it dampens sitewide links.

Google has in the past made it known that it divides a page into its constituent parts such as the header section (where the logo is), the sitewide navigation, sidebars, main content, in-content navigation, advertising and footers.

We know that Google has been doing this since at least 2004 (a Googler confirmed it to me at a search event), and most definitely around 2006-ish, when Google dampened the effect of external and internal sitewide links so that they counted as a single link rather than with the full power of 2,000 or however many links.

Back in the day people were selling sitewide links at a premium because they were said to harness the entire PageRank power of the site. So Google announced that those links would be dampened for internal links and Google began recognizing paid links and blocking the PageRank from transferring.

We could see the power of the sitewide links through Google’s browser toolbar that contained a PageRank meter so when the change happened we were able to confirm that effect in the toolbar and in rankings.

That’s why sitewide links are no longer an SEO thing. It has nothing to do with dilution.

Sitewide Links And Dilution 2025

Today we find an SEO who’s worrying about a sitewide anchor text link being “diluted.”

We already know that Google recognizes sidebars, menus, and footers and separates them from the main content, and that Google doesn’t count a sitewide link as multiple links but rather as if it existed on only one page. So we already know the answer to that person’s question: no, it’s not going to be a big deal, because it’s a navigational sitewide link, which does little more than tell Google that the linked page is important to the site.

A sitewide navigational link is important, but it’s not the same as a contextual link from within content. A contextual link is meaningful because it says something about the page being linked to. One is not better than the other; they’re just different kinds of links.
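As a rough, hypothetical illustration of that distinction (the page structure and URL below are invented, not taken from the discussion), the same anchor can repeat in several boilerplate areas while a contextual link sits inside the main content:

<!-- Hypothetical page skeleton: the same link repeated in boilerplate areas -->
<nav><a href="/services/seo-audit/">SEO Audit</a> ...</nav>          <!-- main menu -->
<aside><a href="/services/seo-audit/">SEO Audit</a> ...</aside>      <!-- sidebar quicklinks -->

<main>
  <p>Before migrating a site, run a full
  <a href="/services/seo-audit/">technical SEO audit</a>
  to benchmark current rankings.</p>  <!-- contextual link: the surrounding copy describes the target -->
</main>

<footer><a href="/services/seo-audit/">SEO Audit</a> ...</footer>    <!-- footer links -->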

This is the question that the SEO asked:

“Hey @johnmu.com, a client has 4 navs. A Main Menu, Footer Links, Sidebar Quicklinks & a Related Pages Mini-Nav in posts. Not for SEO but they have quadrupled the internal link profile to a key page on a single anchor.

Any risk that we’re diluting the ability to rank that keyword with “overuse”?”

Someone else answered the question with a link to a Search Engine Journal article about a site that contains links to every page of the entire site, which is a different situation entirely. That’s a type of site architecture from the old days called a flat site architecture. It was created by SEOs for the purpose of spreading PageRank across all pages of the site to help them all rank.

Google’s John Mueller responded with a comment about that flat site structure and an answer to the query posed by the SEO:

“I think (it’s been years) that was more about a site that links from all pages to all pages, where you lose context of how pages sit within the site. I don’t think that’s your situation. Having 4 identical links on a page to another page seems fine & common to me, I wouldn’t worry about that.”

Lots Of Duplication

The SEO responded that the duplicated content along the sidebars was HTML and not “navigation,” and that they were concerned this introduced a lot of duplication.

He wrote:

“Its 4 duplicated navs on every page of the site, semantically the side bar and related pages are not navs, they’re html, list structured links so lots of duplication IMO”

I think that Mueller’s answer still applies. It doesn’t matter whether they are semantically sidebars and related pages. What’s important is that they are not the main content, which is what Google is focused on.

Google’s Martin Splitt went into detail about this four years ago when he talked about the Centerpiece Annotation.

Martin talks about how they identify related content links and other stuff that’s not the main content:

“And then there’s this other thing here, which seems to be like links to related products but it’s not really part of the centerpiece. It’s not really main content here. This seems to be additional stuff.

And then there’s like a bunch of boilerplate or, “Hey, we figured out that the menu looks pretty much the same on all these pages and lists.”

So the answer for the SEO is that it doesn’t matter whether those links are in a sidebar, menu navigation, or related links. Google identifies them as not being the main content and, for the purposes of analyzing the web page, sets them aside. Google doesn’t care if that stuff repeats all over the site; it’s not main content.

Read the original discussion on Bluesky.

Featured Image by Shutterstock/Photobank.kiev.ua

Google Discover Desktop Data Already Trackable In Search Console via @sejournal, @MattGSouthern

Google Discover desktop data is already trackable in Search Console. Here’s how to prepare ahead of the full rollout.

  • Data suggests Google Discover has been in testing on desktop for over 16 months.
  • Desktop Discover data reveals lower traffic than mobile (only 4% in the US).
  • Publishers can access their desktop Discover performance now in Search Console.

Google Clarifies Googlebot-News Crawler Documentation via @sejournal, @martinibuster

Google updated their Google News crawler documentation to correct an error that implied that publisher crawler preferences addressed to Googlebot-News influenced the News tab in Google Search.

Google News Tab

The Google News tab is a category of search that is displayed near the top of the search results pages (SERPs). The News tab filter displays search results from news publishers. Content that’s shown in the News tab generally comes from sites that are eligible to be displayed in Google News and must meet Google’s news content policies.

What Changed:

Google’s changelog noted that the user agent description erroneously said that publisher preferences influence what’s shown in the Google News tab.

They explained:

“The description for how crawling preferences addressed to Googlebot-News mistakenly stated that they’d affect the News tab on Google, which is not the case.”

The reference to the News tab in Google Search was removed from this sentence:

“Crawling preferences addressed to the Googlebot-News user agent affect all surfaces of Google News (for example, the News tab in Google Search and the Google News app).”

The corrected version now reads:

“Crawling preferences addressed to the Googlebot-News user agent affect the Google News product, including news.google.com and the Google News app.”
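For context, crawling preferences for Googlebot-News are expressed in robots.txt like those for any other user agent token. A minimal, hypothetical sketch (the path is a placeholder) that keeps regular Search crawling intact while opting one section out of Google News surfaces might look like this:

# Affects only the Google News product (news.google.com and the Google News app)
User-agent: Googlebot-News
Disallow: /premium-news/

# Regular Google Search crawling is unaffected
User-agent: Googlebot
Disallow:

Per the corrected documentation, a rule like this affects only Google News surfaces, not the News tab in Google Search.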

Read the updated Googlebot-News user agent documentation here:

Googlebot News

Featured Image by Shutterstock/Asier Romero

Google’s John Mueller: Updating XML Sitemap Dates Doesn’t Help SEO via @sejournal, @MattGSouthern

Google’s John Mueller clarifies that automatically changing XML sitemap dates doesn’t boost SEO and could make it harder for Google to find actual content updates.

The “Freshness Signal” Myth Busted

On Reddit’s r/SEO forum, someone asked if competitors ranked better by setting their XML sitemap dates to the current date to send a “freshness signal” to Google.

Mueller’s answer was clear:

“It’s usually a sign they have a broken sitemap generator setup. It has no positive effect. It’s just a lazy setup.”

The discussion shows a common frustration among SEO pros. The original poster was upset after following Google’s rules for 15 years, only to see competitors using “spam tactics” outrank established websites.

When asked about sites using questionable tactics yet still ranking well, Mueller explained that while some “sneaky things” might work briefly, updating sitemap dates isn’t one of them.

Mueller said:

“Setting today’s date in a sitemap file isn’t going to help anyone. It’s just lazy. It makes it harder for search engines to spot truly updated pages. This definitely isn’t working in their favor.”

XML Sitemaps: What Works

XML sitemaps help search engines understand your website structure and when content was last updated. While good sitemaps are essential for SEO, many people misunderstand the impact they have on rankings.

According to Google, the lastmod tag in XML sitemaps should show when a page was truly last updated. When used correctly, this helps search engines know which pages have new content that needs to be recrawled.

Mueller confirms that faking these dates doesn’t help your rankings and may prevent Google from finding your real content updates.
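For illustration, here’s a minimal sketch of a sitemap generator that derives lastmod from when each file actually changed rather than from the time the sitemap was built. The content directory, base URL, and file-to-URL mapping are assumptions made up for the example:

from datetime import datetime, timezone
from pathlib import Path
from xml.sax.saxutils import escape

# Hypothetical content directory and URL scheme; adjust to your own build setup.
CONTENT_DIR = Path("content/pages")
BASE_URL = "https://www.example.com"

entries = []
for page in sorted(CONTENT_DIR.glob("*.html")):
    # Use the file's real modification time, not "now", so <lastmod> only
    # changes when the page content changes.
    modified = datetime.fromtimestamp(page.stat().st_mtime, tz=timezone.utc)
    loc = f"{BASE_URL}/{page.stem}/"
    entries.append(
        "  <url>\n"
        f"    <loc>{escape(loc)}</loc>\n"
        f"    <lastmod>{modified.strftime('%Y-%m-%d')}</lastmod>\n"
        "  </url>"
    )

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + "\n".join(entries)
    + "\n</urlset>\n"
)
Path("sitemap.xml").write_text(sitemap, encoding="utf-8")

If nothing has changed since the last build, the lastmod values stay the same, which is exactly the signal that helps search engines spot truly updated pages.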

What This Means for Your SEO

Mueller’s comments remind us that while some SEO tactics might seem to improve rankings, correlation isn’t causation.

Sites ranking well despite questionable methods are likely succeeding due to other factors, rather than manipulated sitemap dates.

For website owners and SEO professionals, the advice is:

  • Keep your XML sitemaps accurate
  • Only update lastmod dates when you change content
  • Focus on creating valuable content instead of technical shortcuts
  • Be patient with ethical SEO strategies – they provide lasting results

It can be frustrating to see competitors seemingly benefit from questionable tactics. However, Mueller suggests these advantages don’t last long and can backfire.

This exchange confirms that Google’s smart algorithms can recognize and eventually ignore artificial attempts to manipulate ranking signals.


Featured Image:  Keronn art/Shutterstock

Google Now Allows Top Ads To Appear At Bottom Of Search Results via @sejournal, @brookeosmundson

Google Ads introduced a quiet but impactful change last week to how ads can show up on the search results page.

High-performing ads that used to be eligible only for top-of-page positions can now also appear at the bottom.

This means advertisers can show up more than once on the same results page: once at the top and again at the bottom, as long as the ads meet Google’s relevance standards.

At a glance, it may feel like a small shift. But in reality, it opens the door to more exposure, smarter bidding strategies, and a clearer glimpse into how Google is thinking about ad experience.

Let’s unpack what’s changing, why it matters, and what this means for your campaigns.

What’s Changing With Search Ad Placements?

Until recently, Google followed a rule where only one ad from a single advertiser could show on a search results page. That ad could only appear in one place, either at the top or the bottom.

That restriction has now been updated.

With this change, if your ad is strong enough to qualify for the top of the page, it can also be eligible to appear again at the bottom.

That’s because Google runs separate auctions for each Search ad location.

Google reports that during testing, this increased the presence of relevant ads by 10% and led to a 14% lift in conversions from bottom-of-page placements.

In short, users weren’t just seeing more ads. They were also interacting with them more.

But this isn’t a free-for-all. Ads still need to meet relevance thresholds, and your bottom-of-page placement won’t just show up by default. It has to earn its spot, the same way your top ad does.

How This Changes the Bigger Quality Picture

For Google, this isn’t just about squeezing in more ads. It’s about improving the experience for users and advertisers at the same time.

By opening up bottom-of-page slots to high-quality ads, Google is trying to ensure users see relevant options whether they click right away or scroll to the end of the page.

It’s a subtle shift, but one that could shape how performance marketers think about their creative and bidding strategies.

It also signals how Google continues to reward quality over quantity.

If your ad copy is weak or your landing page experience is lacking, you’re unlikely to benefit from this expanded eligibility.

But if you’ve invested in thoughtful creative, user-focused content, and clear calls to action, you now have twice the chance to show up and potentially win more conversions.

This move also speaks to inventory optimization. By filling both top and bottom ad spots with the best content available, Google is getting more mileage out of every search without making the results page feel like a cluttered ad wall.

Does This Conflict With Google’s Unfair Advantage Policy?

At first, many advertisers were confused since Google recently updated their Unfair Advantage policy earlier this month.

The Unfair Advantage policy bars advertisers from “double serving” to a single ad location.

Double serving refers to showing multiple ads from different accounts or domains that all point to the same business. Google cracked down on that to ensure fair competition and to prevent advertisers from dominating a single auction by crowding out competitors.

This new update doesn’t violate that principle.

In fact, Google clarified that this change is possible because top and bottom placements run in separate auctions. That means your ad isn’t “beating out” your own other ad in the same auction. It’s simply earning placement in two different areas of the page.

So long as the ads are relevant and helpful to the user, Google’s policy allows for this kind of visibility.

What Advertisers Need To Know About This Change

This update gives advertisers new levers to pull — but only if you know where to look.

First, this isn’t something you need to opt into. If your ads are eligible based on performance, they’ll start showing in both places automatically. But that doesn’t mean you should take a hands-off approach.

Here are some things to keep in mind:

  • Monitor your impression share by position. Use segmentation in Google Ads to break down where your ads are showing (top vs. other) and compare performance (see the sketch after this list).
  • Watch for changes in CTR and Conversion Rate. You may see stronger performance from one position over the other. That can inform whether you want to bid more aggressively, or refine copy and assets to align with what works best.
  • Revisit your Quality Score drivers. With Google prioritizing relevance, improving expected CTR, ad relevance, and landing page experience will help you capture more real estate.
  • Layer in automation, but stay strategic. Smart Bidding might adjust bids automatically to take advantage of new placement opportunities, but make sure you’re reviewing your placement data regularly. Algorithms don’t always know your goals better than you do.
  • Look beyond vanity metrics. Bottom-of-page clicks may cost less, but be sure they’re actually driving value. Focus on leads, sales, or other business outcomes, rather than just volume.
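As a sketch of that top-vs.-other comparison (the file name and column headers below are assumptions about a downloaded report segmented by position and may not match your export exactly):

import csv
from collections import defaultdict

# Totals per segment, e.g. "Google Search: Top" vs. "Google Search: Other".
totals = defaultdict(lambda: {"impressions": 0, "clicks": 0, "conversions": 0.0, "cost": 0.0})

with open("segmented_report.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        seg = row["Top vs. Other"]  # assumed segment column name
        totals[seg]["impressions"] += int(row["Impressions"])
        totals[seg]["clicks"] += int(row["Clicks"])
        totals[seg]["conversions"] += float(row["Conversions"])
        totals[seg]["cost"] += float(row["Cost"])

for seg, t in totals.items():
    ctr = t["clicks"] / t["impressions"] if t["impressions"] else 0
    cvr = t["conversions"] / t["clicks"] if t["clicks"] else 0
    cpa = t["cost"] / t["conversions"] if t["conversions"] else 0
    print(f"{seg}: CTR {ctr:.2%}, conv. rate {cvr:.2%}, CPA {cpa:.2f}")

Comparing the two segments side by side shows whether bottom-of-page placements are actually delivering conversions or just cheaper clicks.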

Moving Forward with Better Search Ads

Google’s decision to allow top-performing ads to also appear at the bottom of search results reflects an ongoing effort to enhance user experience and ad relevance.

While the change offers new opportunities for advertisers, it also emphasizes the importance of ad quality and strategic planning.

By understanding and adapting to these updates, advertisers can better position themselves for success in the evolving landscape of search advertising.

If you’ve been focused on creating better ads and improving your landing pages, this update is in your favor.

Reddit Mods Accuse AI Researchers Of Impersonating Sexual Assault Victims via @sejournal, @martinibuster

Researchers testing the ability of AI to influence people’s opinions violated the ChangeMyView subreddit’s rules and used deceptive practices that allegedly were not approved by their ethics committee, including impersonating victims of sexual assault and using background information about Reddit users to manipulate them.

The researchers argued that the controlled conditions of previous studies may have introduced biases. Their solution was to introduce AI bots into a live environment without telling the forum members they were interacting with an AI bot. Their audience was unsuspecting Reddit users in the Change My View (CMV) subreddit (r/ChangeMyView), even though this violated the subreddit’s rules, which prohibit the use of undisclosed AI bots.

After the research was finished, the researchers disclosed their deceit to the Reddit moderators, who subsequently posted a notice about it in the subreddit, along with a draft copy of the completed research paper.

Ethical Questions About Research Paper

The CMV moderators posted a discussion that underlines that the subreddit prohibits undisclosed bots and that permission to conduct this experiment would never have been granted:

“CMV rules do not allow the use of undisclosed AI generated content or bots on our sub. The researchers did not contact us ahead of the study and if they had, we would have declined. We have requested an apology from the researchers and asked that this research not be published, among other complaints. As discussed below, our concerns have not been substantively addressed by the University of Zurich or the researchers.”

The fact that the researchers violated the Reddit rules was completely absent from the research paper.

Researchers Claim Research Was Ethical

While the researchers omit that the research broke the rules of the subreddit, they create the impression that it was ethical by stating that their research methodology was approved by an ethics committee and that all generated comments were checked to ensure they were not harmful or unethical:

“In this pre-registered study, we conduct the first large-scale field experiment on LLMs’ persuasiveness, carried out within r/ChangeMyView, a Reddit community of almost 4M users and ranking among the top 1% of subreddits by size. In r/ChangeMyView, users share opinions on various topics, challenging others to change their perspectives by presenting arguments and counterpoints while engaging in a civil conversation. If the original poster (OP) finds a response convincing enough to reconsider or modify their stance, they award a ∆ (delta) to acknowledge their shift in perspective.

…The study was approved by the University of Zurich’s Ethics Committee… Importantly, all generated comments were reviewed by a researcher from our team to ensure no harmful or unethical content was published.”

The moderators of the ChangeMyView subreddit dispute the researchers’ claim to the ethical high ground:

“During the experiment, researchers switched from the planned “values based arguments” originally authorized by the ethics commission to this type of “personalized and fine-tuned arguments.” They did not first consult with the University of Zurich ethics commission before making the change. Lack of formal ethics review for this change raises serious concerns.”

Why Reddit Moderators Believe Research Was Unethical

The Change My View subreddit moderators raised multiple concerns about why they believe the researchers engaged in a grave breach of ethics, including impersonating victims of sexual assault. They argue that this qualifies as “psychological manipulation” of the original posters (OPs), the people who started each discussion.

The Reddit moderators posted:

“The researchers argue that psychological manipulation of OPs on this sub is justified because the lack of existing field experiments constitutes an unacceptable gap in the body of knowledge. However, If OpenAI can create a more ethical research design when doing this, these researchers should be expected to do the same. Psychological manipulation risks posed by LLMs is an extensively studied topic. It is not necessary to experiment on non-consenting human subjects.

AI was used to target OPs in personal ways that they did not sign up for, compiling as much data on identifying features as possible by scrubbing the Reddit platform. Here is an excerpt from the draft conclusions of the research.

Personalization: In addition to the post’s content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM.

Some high-level examples of how AI was deployed include:

  • AI pretending to be a victim of rape
  • AI acting as a trauma counselor specializing in abuse
  • AI accusing members of a religious group of “caus[ing] the deaths of hundreds of innocent traders and farmers and villagers.”
  • AI posing as a black man opposed to Black Lives Matter
  • AI posing as a person who received substandard care in a foreign hospital.”

The moderator team has filed a complaint with the University of Zurich.

Are AI Bots Persuasive?

The researchers discovered that AI bots are highly persuasive and do a better job of changing people’s minds than humans can.

The research paper explains:

“Implications. In a first field experiment on AI-driven persuasion, we demonstrate that LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.”

One of the findings was that humans were unable to identify when they were talking to a bot, and (unironically) the researchers encourage social media platforms to deploy better ways to identify and block AI bots:

“Incidentally, our experiment confirms the challenge of distinguishing human from AI-generated content… Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets… which could seamlessly blend into online communities.

Given these risks, we argue that online platforms must proactively develop and implement robust detection mechanisms, content verification protocols, and transparency measures to prevent the spread of AI-generated manipulation.”

Takeaways:

  • Ethical Violations in AI Persuasion Research
    Researchers conducted a live AI persuasion experiment without Reddit’s consent, violating subreddit rules and allegedly violating ethical norms.
  • Disputed Ethical Claims
    Researchers claimed the ethical high ground by citing ethics board approval but omitted any mention of the rule violations; moderators argue they engaged in undisclosed psychological manipulation.
  • Use of Personalization in AI Arguments
    AI bots allegedly used scraped personal data to create highly tailored arguments targeting Reddit users.
  • Reddit Moderators Allege Profoundly Disturbing Deception
    The Reddit moderators claim that the AI bots impersonated sexual assault victims, trauma counselors, and other emotionally charged personas in an effort to manipulate opinions.
  • AI’s Superior Persuasiveness and Detection Challenges
    The researchers claim that AI bots proved more persuasive than humans and remained undetected by users, raising concerns about future bot-driven manipulation.
  • Research Paper Inadvertently Makes Case For Why AI Bots Should Be Banned From Social Media
    The study highlights the urgent need for social media platforms to develop tools for detecting and verifying AI-generated content. Ironically, the research paper itself is a reason why AI bots should be more aggressively banned from social media and forums.

Researchers from the University of Zurich tested whether AI bots could persuade people more effectively than humans by secretly deploying personalized AI arguments on the ChangeMyView subreddit without user consent, violating platform rules and allegedly going outside the ethical standards approved by their university ethics board. Their findings show that AI bots are highly persuasive and difficult to detect, but the way the research itself was conducted raises ethical concerns.

Read the concerns posted by the ChangeMyView subreddit moderators:

Unauthorized Experiment on CMV Involving AI-generated Comments

Featured Image by Shutterstock/Ausra Barysiene and manipulated by author

Google Updates Gemini/Vertex AI User Agent Documentation via @sejournal, @martinibuster

Google updated the documentation for the Google-Extended user agent, which publishers can use to control whether Google Gemini and Vertex use their data for training purposes or for grounding AI answers.

Updated Guidance

Google updated their guidance on Google-Extended based on publisher feedback for the purpose of improving clarity and adding more specific details.

Previous Documentation:

“Google-Extended is a standalone product token that web publishers can use to manage whether their sites help improve Gemini Apps and Vertex AI generative APIs, including future generations of models that power those products. Grounding with Google Search on Vertex AI does not use web pages for grounding that have disallowed Google-Extended.”

Updated Version

The updated documentation provides more detail and an easier-to-understand explanation of what the user agent is for and what blocking it accomplishes.

“Google-Extended is a standalone product token that web publishers can use to manage whether content Google crawls from their sites may be used for training future generations of Gemini models that power Gemini Apps and Vertex AI API for Gemini and for grounding (providing content from the Google Search index to the model at prompt time to improve factuality and relevancy) in Gemini Apps and Grounding with Google Search on Vertex AI.”

Google-Extended Is Not A Ranking Signal

Google also updated one sentence to make it clear that Google-Extended isn’t used as a ranking signal for Google Search. That means that allowing Google-Extended to use the data for grounding Gemini AI answers won’t be counted as a ranking signal.

Grounding refers to using web data (and knowledge base data) to improve answers provided by a large language model with up-to-date and factual information, helping to avoid fabrications (also known as hallucinations).

The previous version omitted mention of ranking signals:

“Google-Extended does not impact a site’s inclusion or ranking in Google Search.”

The newly updated version specifically mentions Google-Extended in the context of a ranking signal:

“Google-Extended does not impact a site’s inclusion in Google Search nor is it used as a ranking signal in Google Search.”

Documentation Matches Other Guidance

The updated documentation matches a short passage about Google-Extended that appears elsewhere in Google Search Central. That longstanding guidance explains that Google-Extended is not a way to control how website information is shown in Google Search, demonstrating that Google-Extended is separate from Google Search.

Here’s the other guidance that’s found on a page about preventing content from appearing in Google AI Overviews:

“Google-Extended is not a method for managing how your content appears in Google Search. Instead, use other methods to manage your content in Search, such as robots.txt or other robot controls.”
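For publishers who do want to opt out, Google-Extended is addressed in robots.txt like any other token. A minimal illustrative sketch that blocks use of a site’s content for Gemini training and grounding while leaving regular Search crawling untouched:

# Opt content out of Gemini training and grounding use
User-agent: Google-Extended
Disallow: /

# Normal Search crawling and indexing are unaffected
User-agent: Googlebot
Disallow:

Per the updated documentation quoted above, this has no effect on how the site is included or ranked in Google Search.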

Takeaways

  • Google-Extended Documentation Update:
    The Google-Extended documentation was clarified and expanded to make its purpose and effects easier to understand.
  • Separation From Ranking Signals:
    The updated guidance explicitly states that Google-Extended does not affect Google Search inclusion nor is it a ranking signal.
  • Internal Use By AI Models:
    Clarified that Google-Extended controls whether site content is used for training and grounding Gemini models.
  • Consistency Across Documentation:
    The updated language now matches longstanding guidance elsewhere in Google’s documentation, reinforcing its separation from search visibility controls.

Google updated its Google-Extended documentation to explain that publishers can block their content from being used for AI training and grounding without affecting Google Search rankings. The update also matches longstanding guidance that explains Google-Extended has no effect on how sites are indexed or ranked in Search.

Featured Image by Shutterstock/JHVEPhoto

Google’s AI Overviews Reach 1.5 Billion Monthly Users via @sejournal, @MattGSouthern

Google’s AI search features have reached widespread adoption. The company announced that AI Overviews in Search now reach 1.5 billion users per month.

This information was revealed during Alphabet’s Q1 earnings call.

Alphabet Earnings Show Growth Across Core Products

Alphabet announced strong financial results for Q1, highlighting the adoption of AI across its products. The company reported total revenue of $90.2 billion, representing a 12% year-over-year increase.

Despite industry concerns that AI will disrupt traditional search models, Google reported that Search revenue grew 10% year-over-year to $50.7 billion.

Pichai said in the earnings report:

“We’re pleased with our strong Q1 results, which reflect healthy growth and momentum across the business. Underpinning this growth is our unique full stack approach to AI. This quarter was super exciting as we rolled out Gemini 2.5, our most intelligent AI model, which is achieving breakthroughs in performance and is an extraordinary foundation for our future innovation.

Search saw continued strong growth, boosted by the engagement we’re seeing with features like AI Overviews, which now has 1.5 billion users per month. Driven by YouTube and Google One, we surpassed 270 million paid subscriptions. And Cloud grew rapidly with significant demand for our solutions.”

Earnings Highlights

Alphabet’s Q1 earnings report showed healthy performance across most business segments:

  • Total revenue: $90.2 billion, up 12% year-over-year
  • Operating income: $30.6 billion, up 20% year-over-year
  • Operating margin: Expanded by two percentage points to 34%
  • Google Search revenue: $50.7 billion, up 10% year-over-year
  • YouTube ad revenue: $8.9 billion, up 10% year-over-year
  • Google Cloud revenue: $12.3 billion, up 28% year-over-year
  • Cloud operating margin: Improved to 17.8% from 9.4% last year
  • Capital expenditures: $17.2 billion, up 43% year-over-year

One notable underperformer was Google Network revenue, which declined 2% year-over-year to $7.3 billion, suggesting potential challenges in display advertising.

Google Cloud: A Standout

Google Cloud emerged as a standout performer, with revenue growing 28% to $12.3 billion.

The jump in profitability was more impressive, with operating income rising to $2.2 billion (a 17.8% margin) compared to $900 million (a 9.4% margin) in the same quarter of the previous year.

“Cloud grew rapidly with significant demand for our solutions,” noted Pichai, with the earnings report highlighting strong performance across core GCP products, AI Infrastructure, and Generative AI Solutions.

Implications for Search Marketers

For SEO professionals, the earnings data points to several key considerations:

  • Google’s successful integration of AI, while maintaining Search revenue growth, indicates that AI Overviews will likely expand further.
  • The 1.5 billion monthly AI Overviews users, along with continued investment, suggest that this shift in search presentation is likely to be permanent.
  • Google’s operating margin has improved despite significant investments in AI, providing the company with a financial incentive to continue this strategy.

Looking Forward

Alphabet’s Q1 results demonstrate that the company is successfully navigating the transition to AI-enhanced products while maintaining revenue growth.

For search marketers, the financial strength behind Google’s AI initiatives suggests these changes to search will accelerate rather than slow down.

With 1.5 billion users already experiencing AI Overviews monthly and Google’s continued heavy investment in AI infrastructure, the search landscape is undergoing profound changes, which are now reflected in the company’s financial performance.


Featured Image: Ifan Apriyana/Shutterstock