Google Now Allows Top Ads To Appear At Bottom Of Search Results via @sejournal, @brookeosmundson

Google Ads introduced a quiet but impactful change last week to how ads can show up on the search results page.

High-performing ads that used to be eligible only for top-of-page positions can now also appear at the bottom.

This means advertisers can show up more than once on the same results page: once at the top and again at the bottom, as long as the ads meet Google’s relevance standards.

At a glance, it may feel like a small shift. But in reality, it opens the door to more exposure, smarter bidding strategies, and a clearer glimpse into how Google is thinking about ad experience.

Let’s unpack what’s changing, why it matters, and what this means for your campaigns.

What’s Changing With Search Ad Placements?

Until recently, Google limited each advertiser to one ad per search results page, and that ad could appear in only one location: either the top or the bottom.

That restriction has now been updated.

With this change, if your ad is strong enough to qualify for the top of the page, it can also be eligible to appear again at the bottom.

That’s because Google runs separate auctions for each Search ad location.

Google reports that during testing, this increased the presence of relevant ads by 10% and led to a 14% lift in conversions from bottom-of-page placements.

In short, users weren’t just seeing more ads. They were also interacting with them more.

But this isn’t a free-for-all. Ads still need to meet relevance thresholds, and your bottom-of-page placement won’t just show up by default. It has to earn its spot, the same way your top ad does.

How This Changes the Bigger Quality Picture

For Google, this isn’t just about squeezing in more ads. It’s about improving the experience for users and advertisers at the same time.

By opening up bottom-of-page slots to high-quality ads, Google is trying to ensure users see relevant options whether they click right away or scroll to the end of the page.

It’s a subtle shift, but one that could shape how performance marketers think about their creative and bidding strategies.

It also signals how Google continues to reward quality over quantity.

If your ad copy is weak or your landing page experience is lacking, you’re unlikely to benefit from this expanded eligibility.

But if you’ve invested in thoughtful creative, user-focused content, and clear calls to action, you now have twice the chance to show up and potentially win more conversions.

This move also speaks to inventory optimization. By filling both top and bottom ad spots with the best content available, Google is getting more mileage out of every search without making the results page feel like a cluttered ad wall.

Does This Conflict With Google’s Unfair Advantage Policy?

At first, many advertisers were confused, since Google updated its Unfair Advantage policy earlier this month.

The Unfair Advantage policy bars advertisers from “double serving” to a single ad location.

Double serving refers to showing multiple ads from different accounts or domains that all point to the same business. Google cracked down on that to ensure fair competition and to prevent advertisers from dominating a single auction by crowding out competitors.

This new update doesn’t violate that principle.

In fact, Google clarified that this change is possible because top and bottom placements run in separate auctions. That means your ad isn’t “beating out” your own other ad in the same auction. It’s simply earning placement in two different areas of the page.

So long as the ads are relevant and helpful to the user, Google’s policy allows for this kind of visibility.

What Advertisers Need To Know About This Change

This update gives advertisers new levers to pull — but only if you know where to look.

First, this isn’t something you need to opt into. If your ads are eligible based on performance, they’ll start showing in both places automatically. But that doesn’t mean you should take a hands-off approach.

Here are some things to keep in mind:

  • Monitor your impression share by position. Use segmentation in Google Ads to break down where your ads are showing (top vs. other) and compare performance; see the sketch after this list.
  • Watch for changes in CTR and conversion rate. You may see stronger performance from one position than the other. That can inform whether you want to bid more aggressively, or refine copy and assets to align with what works best.
  • Revisit your Quality Score drivers. With Google prioritizing relevance, improving expected CTR, ad relevance, and landing page experience will help you capture more real estate.
  • Layer in automation, but stay strategic. Smart Bidding might adjust bids automatically to take advantage of new placement opportunities, but make sure you’re reviewing your placement data regularly. Algorithms don’t always know your goals better than you do.
  • Look beyond vanity metrics. Bottom-of-page clicks may cost less, but be sure they’re actually driving value. Focus on leads, sales, or other business outcomes, rather than just volume.
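
To make the first two bullets concrete, here is a minimal sketch (not an official Google example) of pulling performance segmented by ad slot with the Google Ads API's Python client. It assumes credentials are already configured in a google-ads.yaml file, and the customer ID is a placeholder.

```python
# Sketch: report Search performance split by ad slot (top vs. other).
# Assumes the google-ads package and a configured google-ads.yaml file.
from google.ads.googleads.client import GoogleAdsClient

QUERY = """
    SELECT
      campaign.name,
      segments.slot,
      metrics.impressions,
      metrics.clicks,
      metrics.conversions
    FROM campaign
    WHERE segments.date DURING LAST_30_DAYS
"""

def slot_report(customer_id: str) -> None:
    client = GoogleAdsClient.load_from_storage()  # reads google-ads.yaml
    ga_service = client.get_service("GoogleAdsService")
    for batch in ga_service.search_stream(customer_id=customer_id, query=QUERY):
        for row in batch.results:
            # segments.slot distinguishes top-of-page from other placements.
            print(
                row.campaign.name,
                row.segments.slot.name,  # e.g., SEARCH_TOP vs. SEARCH_OTHER
                row.metrics.impressions,
                row.metrics.clicks,
                row.metrics.conversions,
            )

if __name__ == "__main__":
    slot_report("1234567890")  # placeholder customer ID
```

Comparing CTR and conversion rate between the SEARCH_TOP and SEARCH_OTHER rows shows whether the newly eligible bottom-of-page placements are actually pulling their weight.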

Moving Forward with Better Search Ads

Google’s decision to allow top-performing ads to also appear at the bottom of search results reflects an ongoing effort to enhance user experience and ad relevance.

While the change offers new opportunities for advertisers, it also emphasizes the importance of ad quality and strategic planning.

By understanding and adapting to these updates, advertisers can better position themselves for success in the evolving landscape of search advertising.

If you’ve been focused on creating better ads and improving your landing pages, this update is in your favor.

Reddit Mods Accuse AI Researchers Of Impersonating Sexual Assault Victims via @sejournal, @martinibuster

Researchers testing the ability of AI to influence people’s opinions violated the ChangeMyView subreddit’s rules and used deceptive practices that allegedly were not approved by their ethics committee, including impersonating victims of sexual assault and using background information about Reddit users to manipulate them.

They argued that the controlled conditions of earlier persuasion studies may have introduced biases. Their solution was to introduce AI bots into a live environment without telling forum members they were interacting with one. Their unsuspecting subjects were Reddit users in the Change My View (CMV) subreddit (r/ChangeMyView), even though the subreddit's rules prohibit the use of undisclosed AI bots.

After the research was finished, the researchers disclosed their deception to the Reddit moderators, who subsequently posted a notice about it in the subreddit, along with a draft copy of the completed research paper.

Ethical Questions About Research Paper

The CMV moderators posted a discussion underscoring that the subreddit prohibits undisclosed bots and that permission to conduct the experiment would never have been granted:

“CMV rules do not allow the use of undisclosed AI generated content or bots on our sub. The researchers did not contact us ahead of the study and if they had, we would have declined. We have requested an apology from the researchers and asked that this research not be published, among other complaints. As discussed below, our concerns have not been substantively addressed by the University of Zurich or the researchers.”

The fact that the researchers violated the subreddit's rules is entirely absent from the research paper.

Researchers Claim Research Was Ethical

While the researchers omit that the research broke the subreddit's rules, they create the impression that it was ethical by stating that their methodology was approved by an ethics committee and that all generated comments were checked to ensure they were not harmful or unethical:

“In this pre-registered study, we conduct the first large-scale field experiment on LLMs’ persuasiveness, carried out within r/ChangeMyView, a Reddit community of almost 4M users and ranking among the top 1% of subreddits by size. In r/ChangeMyView, users share opinions on various topics, challenging others to change their perspectives by presenting arguments and counterpoints while engaging in a civil conversation. If the original poster (OP) finds a response convincing enough to reconsider or modify their stance, they award a ∆ (delta) to acknowledge their shift in perspective.

…The study was approved by the University of Zurich’s Ethics Committee… Importantly, all generated comments were reviewed by a researcher from our team to ensure no harmful or unethical content was published.”

The moderators of the ChangeMyView subreddit dispute the researchers' claim to the ethical high ground:

“During the experiment, researchers switched from the planned “values based arguments” originally authorized by the ethics commission to this type of “personalized and fine-tuned arguments.” They did not first consult with the University of Zurich ethics commission before making the change. Lack of formal ethics review for this change raises serious concerns.”

Why Reddit Moderators Believe Research Was Unethical

The Change My View subreddit moderators raised multiple concerns about why they believe the researchers engaged in a grave breach of ethics, including impersonating victims of sexual assault. They argue that this qualifies as “psychological manipulation” of the original posters (OPs), the people who started each discussion.

The Reddit moderators posted:

“The researchers argue that psychological manipulation of OPs on this sub is justified because the lack of existing field experiments constitutes an unacceptable gap in the body of knowledge. However, If OpenAI can create a more ethical research design when doing this, these researchers should be expected to do the same. Psychological manipulation risks posed by LLMs is an extensively studied topic. It is not necessary to experiment on non-consenting human subjects.

AI was used to target OPs in personal ways that they did not sign up for, compiling as much data on identifying features as possible by scrubbing the Reddit platform. Here is an excerpt from the draft conclusions of the research.

Personalization: In addition to the post’s content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM.

Some high-level examples of how AI was deployed include:

  • AI pretending to be a victim of rape
  • AI acting as a trauma counselor specializing in abuse
  • AI accusing members of a religious group of “caus[ing] the deaths of hundreds of innocent traders and farmers and villagers.”
  • AI posing as a black man opposed to Black Lives Matter
  • AI posing as a person who received substandard care in a foreign hospital.”

The moderator team has filed a complaint with the University of Zurich.

Are AI Bots Persuasive?

The researchers discovered that AI bots are highly persuasive and do a better job of changing people’s minds than humans can.

The research paper explains:

“Implications. In a first field experiment on AI-driven persuasion, we demonstrate that LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.”

One of the findings was that humans were unable to identify when they were talking to a bot. Unironically, the researchers encourage social media platforms to deploy better ways to identify and block AI bots:

“Incidentally, our experiment confirms the challenge of distinguishing human from AI-generated content… Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets… which could seamlessly blend into online communities.

Given these risks, we argue that online platforms must proactively develop and implement robust detection mechanisms, content verification protocols, and transparency measures to prevent the spread of AI-generated manipulation.”

Takeaways:

  • Ethical Violations in AI Persuasion Research
    Researchers conducted a live AI persuasion experiment without Reddit's consent, violating subreddit rules and allegedly breaching ethical norms.
  • Disputed Ethical Claims
    Researchers claim the ethical high ground by citing ethics board approval but omit any mention of the rule violations; moderators argue they engaged in undisclosed psychological manipulation.
  • Use of Personalization in AI Arguments
    AI bots allegedly used scraped personal data to create highly tailored arguments targeting Reddit users.
  • Reddit Moderators Allege Profoundly Disturbing Deception
    The Reddit moderators claim that the AI bots impersonated sexual assault victims, trauma counselors, and other emotionally charged personas in an effort to manipulate opinions.
  • AI’s Superior Persuasiveness and Detection Challenges
    The researchers claim that AI bots proved more persuasive than humans and remained undetected by users, raising concerns about future bot-driven manipulation.
  • Research Paper Inadvertently Makes Case For Why AI Bots Should Be Banned From Social Media
    The study highlights the urgent need for social media platforms to develop tools for detecting and verifying AI-generated content. Ironically, the research paper itself is a reason why AI bots should be more aggressively banned from social media and forums.

Researchers from the University of Zurich tested whether AI bots could persuade people more effectively than humans by secretly deploying personalized AI arguments on the ChangeMyView subreddit without user consent, violating platform rules and allegedly going outside the ethical standards approved by their university ethics board. Their findings show that AI bots are highly persuasive and difficult to detect, but the way the research itself was conducted raises ethical concerns.

Read the concerns posted by the ChangeMyView subreddit moderators:

Unauthorized Experiment on CMV Involving AI-generated Comments

Featured Image by Shutterstock/Ausra Barysiene and manipulated by author

Google Updates Gemini/Vertex AI User Agent Documentation via @sejournal, @martinibuster

Google updated the documentation for the Google-Extended user agent, which publishers can use to control whether Google Gemini and Vertex use their data for training purposes or for grounding AI answers.

Updated Guidance

Google updated its guidance on Google-Extended based on publisher feedback, aiming to improve clarity and add more specific detail.

Previous Documentation:

“Google-Extended is a standalone product token that web publishers can use to manage whether their sites help improve Gemini Apps and Vertex AI generative APIs, including future generations of models that power those products. Grounding with Google Search on Vertex AI does not use web pages for grounding that have disallowed Google-Extended.”

Updated Version

The updated documentation provides more detail and an easier-to-understand explanation of what the user agent is for and what blocking it accomplishes.

“Google-Extended is a standalone product token that web publishers can use to manage whether content Google crawls from their sites may be used for training future generations of Gemini models that power Gemini Apps and Vertex AI API for Gemini and for grounding (providing content from the Google Search index to the model at prompt time to improve factuality and relevancy) in Gemini Apps and Grounding with Google Search on Vertex AI.”
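
In practice, publishers manage the Google-Extended token through robots.txt. As a minimal illustration, a site-wide block looks like this; because Google-Extended is separate from Googlebot, Search crawling is unaffected:

```
# robots.txt
# Blocks use of this site's content for Gemini training and grounding.
User-agent: Google-Extended
Disallow: /
```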

Google-Extended Is Not A Ranking Signal

Google also updated one sentence to make it clear that Google-Extended isn’t used as a ranking signal for Google Search. That means that allowing Google-Extended to use the data for grounding Gemini AI answers won’t be counted as a ranking signal.

Grounding refers to using web data (and knowledge base data) to supply a large language model with up-to-date, factual information, helping it avoid fabrications (also known as hallucinations).

The previous version omitted mention of ranking signals:

“Google-Extended does not impact a site’s inclusion or ranking in Google Search.”

The newly updated version specifically mentions Google-Extended in the context of a ranking signal:

“Google-Extended does not impact a site’s inclusion in Google Search nor is it used as a ranking signal in Google Search.”

Documentation Matches Other Guidance

The updated documentation matches a short passage about Google-Extended that’s elsewhere in Google Search Central. The other longstanding guidance explains that Google-Extended is not a way to control how website information is shown in Google Search, demonstrating how Google-Extended is separated from Google Search.

Here’s the other guidance that’s found on a page about preventing content from appearing in Google AI Overviews:

“Google-Extended is not a method for managing how your content appears in Google Search. Instead, use other methods to manage your content in Search, such as robots.txt or other robot controls.”

Takeaways

  • Google-Extended Documentation Update:
    The Google-Extended documentation was clarified and expanded to make its purpose and effects easier to understand.
  • Separation From Ranking Signals:
    The updated guidance explicitly states that Google-Extended does not affect Google Search inclusion nor is it a ranking signal.
  • Internal Use By AI Models:
    Clarified that Google-Extended controls whether site content is used for training and grounding Gemini models.
  • Consistency Across Documentation:
    The updated language now matches longstanding guidance elsewhere in Google’s documentation, reinforcing its separation from search visibility controls.

Google updated its Google-Extended documentation to explain that publishers can block their content from being used for AI training and grounding without affecting Google Search rankings. The update also matches longstanding guidance that explains Google-Extended has no effect on how sites are indexed or ranked in Search.

Featured Image by Shutterstock/JHVEPhoto

Google’s AI Overviews Reach 1.5 Billion Monthly Users via @sejournal, @MattGSouthern

Google’s AI search features have reached widespread adoption. The company announced that AI Overviews in Search now reach 1.5 billion users per month.

This information was revealed during Alphabet’s Q1 earnings call.

Alphabet Earnings Show Growth Across Core Products

Alphabet announced strong financial results for Q1, highlighting the adoption of AI across its products. The company reported total revenue of $90.2 billion, representing a 12% year-over-year increase.

Despite industry concerns that AI will disrupt traditional search models, Google reported that Search revenue grew 10% year-over-year to $50.7 billion.

Alphabet CEO Sundar Pichai said in the earnings report:

“We’re pleased with our strong Q1 results, which reflect healthy growth and momentum across the business. Underpinning this growth is our unique full stack approach to AI. This quarter was super exciting as we rolled out Gemini 2.5, our most intelligent AI model, which is achieving breakthroughs in performance and is an extraordinary foundation for our future innovation.

Search saw continued strong growth, boosted by the engagement we’re seeing with features like AI Overviews, which now has 1.5 billion users per month. Driven by YouTube and Google One, we surpassed 270 million paid subscriptions. And Cloud grew rapidly with significant demand for our solutions.”

Earnings Highlights

Alphabet’s Q1 earnings report showed healthy performance across most business segments:

  • Total revenue: $90.2 billion, up 12% year-over-year
  • Operating income: $30.6 billion, up 20% year-over-year
  • Operating margin: Expanded by two percentage points to 34%
  • Google Search revenue: $50.7 billion, up 10% year-over-year
  • YouTube ad revenue: $8.9 billion, up 10% year-over-year
  • Google Cloud revenue: $12.3 billion, up 28% year-over-year
  • Cloud operating margin: Improved to 17.8% from 9.4% last year
  • Capital expenditures: $17.2 billion, up 43% year-over-year

One notable underperformer was Google Network revenue, which declined 2% year-over-year to $7.3 billion, suggesting potential challenges in display advertising.

Google Cloud: A Standout

Google Cloud emerged as a standout performer, with revenue growing 28% to $12.3 billion.

The jump in profitability was more impressive, with operating income rising to $2.2 billion (a 17.8% margin) compared to $900 million (a 9.4% margin) in the same quarter of the previous year.

“Cloud grew rapidly with significant demand for our solutions,” noted Pichai, with the earnings report highlighting strong performance across core GCP products, AI Infrastructure, and Generative AI Solutions.

Implications for Search Marketers

For SEO professionals, the earnings data points to several key considerations:

  • Google’s successful integration of AI, while maintaining Search revenue growth, indicates that AI Overviews will likely expand further.
  • The 1.5 billion monthly AI Overviews users, along with continued investment, suggest that this shift in search presentation is likely to be permanent.
  • Google’s operating margin has improved despite significant investments in AI, providing the company with a financial incentive to continue this strategy.

Looking Forward

Alphabet’s Q1 results demonstrate that the company is successfully navigating the transition to AI-enhanced products while maintaining revenue growth.

For search marketers, the financial strength behind Google’s AI initiatives suggests these changes to search will accelerate rather than slow down.

With 1.5 billion users already experiencing AI Overviews monthly and Google’s continued heavy investment in AI infrastructure, the search landscape is undergoing profound changes, which are now reflected in the company’s financial performance.


Featured Image: Ifan Apriyana/Shutterstock

YouTube Tests AI Overviews In Search Results via @sejournal, @MattGSouthern

YouTube is now testing AI-powered video summaries in its search results, a feature similar to Google Search’s AI overviews.

The new tool helps users find relevant videos faster by highlighting specific clips that best match their search criteria.

New AI-Powered Search Experience

YouTube’s test introduces a new video results carousel that appears when you search for specific topics. This feature uses AI to find the most helpful parts of videos related to your search.

As YouTube explains:

“This new feature will use AI to highlight clips from videos that will be most helpful for your search query.”

The AI summaries will mainly show up for two types of searches:

  • Product searches (like “best noise cancelling headphones”)
  • Location-based searches (such as “museums to visit in San Francisco”)

Limited Testing Phase

Right now, only “a small number of YouTube Premium members in the US” can see this feature, and only for searches in English.

If you’re part of the test group, YouTube wants your feedback. You can rate the feature using the three-dot menu, where you can give it a thumbs-up or thumbs-down.

Part of YouTube’s Experimental Process

This test follows YouTube’s standard approach to new features. The company regularly tests ideas with small groups before deciding whether to roll them out more widely.

YouTube explains:

“YouTube product teams are constantly testing out new tools and features.”

These tests help users “find, watch, share, and create content more easily.”

The company uses feedback from these experiments to decide “if, when, and how to release these features more broadly.”

What This Means

YouTube’s AI Overviews present opportunities and challenges for SEO pros and content creators.

On the positive side, the feature may help users discover content they might have missed. This could especially benefit creators who make detailed, information-rich videos.

However, there are also concerns similar to those with Google’s AI Overviews:

  • Will these summaries reduce click-through rates by answering questions directly in search results?
  • How will the AI choose which content to feature in these summaries?

These questions may change how creators structure their YouTube videos. Some might start creating clearly defined segments that AI can identify and highlight.

Looking Ahead

YouTube’s test is another step in transforming search across Alphabet’s platforms.

YouTube hasn’t announced when the feature might launch more widely. However, based on how quickly Google expanded AI Overviews, successful testing could lead to a broader rollout in the coming months.


Featured Image: aaddyy/Shutterstock

Google: How To Remove Site From Search Without Verifying Ownership via @sejournal, @martinibuster

Google's John Mueller answered a question on Reddit, showing an easy way to completely remove an entire website from Google's search index without a verified Search Console account.

The person who started the discussion on Reddit wanted to remove an old Canva-built website from Google's search results.

They wrote:

“As a disclaimer, I am not a tech savvy person, I just use Canva for design. I’ve been reading every piece of literature I can find on how to fully remove my old website from Google search results. I took the website down from Canva’s side, but I can’t get the search result on Google to disappear. Is there a way to do this? Thank you!”

One of the Redditors provided a link to a Google help page that offers a lot of information about removing sites, pages, and images from Google Search using the Refresh Outdated Content tool. The tool is for situations in which web pages or images no longer exist, or in which sensitive content has been deleted from a page. The Google support page further explains:

“Use this tool if…
you do not own the web page pointed to by Google. (If you own the page, you can ask Google to recrawl the page or hide the page.) AND
the page or image no longer exists or is significantly different from the current version of the page or image.”

Google's John Mueller responded with an option for those who don't have a verified site in Google Search Console, providing a URL to a page where the person could submit the website's URL, and explaining that it's slower than doing it through Search Console as a verified site owner.

He wrote:

“It requires that your old pages are removed from the internet — so you’d need to take them down from wherever you were hosting your old website.

If by “old” website you mean that you also have a “new” website, you can also check to see if your hoster allows you to redirect your old pages to your new ones. This is a bit cleaner than just removing your pages, since it forwards any “signals” that have been collected with the old web pages. https://developers.google.com/search/docs/crawling-indexing/site-move-with-url-changes has a bit more about site migrations (when you redirect from an old site to a new one). If you’re hosting the old site with Canva, I don’t know if they support redirects.”
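
For reference, on hosts that expose Apache .htaccess files, the redirect Mueller describes can be a single line. This is only a sketch with a placeholder domain; a Canva-hosted site would need whatever redirect mechanism Canva offers, if any:

```
# .htaccess on the old site: 301-redirect every old URL to the same path
# on the new domain. The domain below is a placeholder.
Redirect permanent / https://www.new-site.example/
```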

Read the Reddit discussion here:

Removing website from Google

Featured Image by Shutterstock/Anatoliy Karlyuk

7 AI Terms Microsoft Wants You to Know In 2025 via @sejournal, @MattGSouthern

Microsoft released its 2025 Annual Work Trend Index this week.

The report claims this is the year companies will move beyond AI experiments and rebuild their core operations around AI.

Microsoft also introduced several new terms that it believes will shape the future of the workplace.

Let’s look at what Microsoft wants to add to your work vocabulary. Remember, Microsoft has invested heavily in AI, so they have good reasons to make these concepts seem normal.

The Microsoft AI Dictionary

1. The “Frontier Firm”

Microsoft says “Frontier Firms” are organizations built around on-demand AI, human-agent teams, and employees who act as “agent bosses.”

The report claims 71% of workers at these AI-forward companies say their organizations are thriving. That’s much higher than the global average of just 37%.

2. “Intelligence on Tap”

This refers to AI that’s easily accessible whenever needed. Microsoft calls it “abundant, affordable, and scalable on-demand.”

The company suggests AI is now a resource that isn’t limited by staff size or expertise but can be purchased and used as needed, conveniently through Microsoft’s products.

3. “The Capacity Gap”

This term refers to the growing disparity between what businesses require and what humans can provide.

Microsoft’s research indicates that 53% of leaders believe productivity must increase, while 80% of workers report a lack of time or energy to complete their work. They suggest that AI tools can fill this gap.

4. “Work Charts”

Forget traditional org charts. Microsoft envisions more flexible “Work Charts” that adapt to business needs by leveraging both human workers and AI.

These structures focus on results rather than rigid hierarchies. They allow companies to use the best mix of human and AI workers for each task.

5. “Human-Agent Ratio”

This term refers to the balance between AI agents and human workers required for optimal results.

Microsoft suggests that leaders need to determine the number of AI agents required for specific roles and the number of humans who should guide those agents. This essentially redefines how companies staff their teams.

6. “Agent Boss”

Perhaps the most interesting term is that of an “agent boss,” someone who builds, assigns tasks to, and manages AI agents to boost their impact and advance their career.

Microsoft predicts that within five years, teams will be training (41%) and managing (36%) AI agents as a regular part of their jobs.

7. “Digital Labor”

This is Microsoft’s preferred term for AI-powered work automation. Microsoft positions AI not as a replacement for humans, but as an addition to the workforce.

The report states that 82% of leaders plan to use digital labor to expand their workforce within the next year and a half.

However, this shift towards AI-powered work automation raises important questions about job displacement, the need for retraining, and the ethical use of AI.

These considerations are crucial as we navigate this new era of work.

Behind the Terminology

These terms reveal Microsoft’s vision for embedding AI deeper into workplace operations, with its products leading the way.

The company also announced updates to Microsoft 365 Copilot, including:

  • New Researcher and Analyst agents
  • An AI image generator
  • Copilot Notebooks
  • Enhanced search functions

Jared Spataro, Microsoft’s CMO of AI at Work, states in the report:

“2025 will be remembered as the year the Frontier Firm was born — the moment companies moved beyond experimenting with AI and began rebuilding around it.”

Looking Ahead

While Microsoft’s terms may or may not stick, the trends it describes are already changing digital marketing.

Whether you embrace the title “agent boss” or not, knowing how to use AI tools while maintaining human creativity will likely become essential in the changing marketing workforce.

Will Microsoft’s vision of “Frontier Firms” happen exactly as they describe? Time and the number of people who adopt these ideas will tell.


Featured Image: Drawlab19/Shutterstock

Google’s Martin Splitt Explains How To Find & Remove Noindex Tags via @sejournal, @MattGSouthern

Google’s Search Relations team has released a new SEO Office Hours video with Martin Splitt.

He tackles a common problem many website owners face: unwanted noindex tags that keep pages out of search results.

In the video, Splitt helps a user named Balant who couldn’t remove a noindex tag from their website. Balant wanted their page to be public, but the tag prevented this.

Where Unwanted Noindex Tags Come From

Splitt listed several places where unwanted noindex tags might be hiding:

“Make sure that it’s not in the source code, it’s not coming from JavaScript, it’s not coming from a third-party JavaScript.”

Splitt pointed out that A/B testing tools often cause this problem. These tools sometimes add noindex tags to test versions of your pages without you realizing it.

CDN & Cache Problems

If you use a Content Delivery Network (CDN), Splitt warned that old cached versions might still have noindex tags even after you remove them from your site.

Splitt explained:

“If you had a noindex in and you’re using a CDN, it might be that the cache hasn’t updated yet.”

Check Your CMS Settings & Plugins

Splitt explained that your Content Management System (CMS) settings might be adding noindex tags without you knowing.

He said:

“If you’re using a CMS, there might be settings or plugins for SEO, and there might be something like ‘allow search engines to index this content’ or ‘to access this content,’ and you want to make sure that’s set.”

Splitt added that settings labeled as “disallow search engines” should be unchecked if you want your content to appear in search results.

Debugging Process for Persistent Noindex Issues

If you're dealing with stubborn noindex problems, Splitt suggests checking these places in order (a quick detection sketch follows the list):

  1. Check your HTML source code directly
  2. Look at JavaScript files that might add meta tags
  3. Review third-party scripts, especially testing tools
  4. Check if your CDN cache needs updating
  5. Look at your CMS settings and SEO plugins
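
Here is a rough sketch of the first check, using the third-party requests and beautifulsoup4 packages (both assumptions; any HTTP client and HTML parser would do). Note that it only sees server-rendered HTML, so noindex tags injected by JavaScript still require a rendering crawler or Search Console's URL Inspection tool:

```python
# Sketch: detect noindex directives in a page's raw HTML and HTTP headers.
import requests
from bs4 import BeautifulSoup

def find_noindex(url: str) -> list[str]:
    findings = []
    resp = requests.get(url, timeout=10)
    # noindex can also arrive as an X-Robots-Tag HTTP header.
    header = resp.headers.get("X-Robots-Tag", "")
    if "noindex" in header.lower():
        findings.append(f"X-Robots-Tag header: {header}")
    # Check <meta name="robots"> and the googlebot-specific variant.
    soup = BeautifulSoup(resp.text, "html.parser")
    for meta in soup.find_all("meta", attrs={"name": ["robots", "googlebot"]}):
        content = meta.get("content", "")
        if "noindex" in content.lower():
            findings.append(str(meta))
    return findings

if __name__ == "__main__":
    for hit in find_noindex("https://example.com/"):  # placeholder URL
        print(hit)
```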

What This Means For SEO Professionals

Google’s advice shows why thorough technical SEO checks are essential. Modern websites are complex with dynamic content and third-party tools, so finding technical SEO problems takes deeper digging.

SEO professionals should regularly crawl their sites with tools that process JavaScript. This practice provides a deeper understanding of how search engines interpret your pages, going beyond the basic HTML and revealing the true visibility of your content.

Google keeps covering these basic technical issues in its videos, suggesting that even well-designed websites often struggle with indexing problems.

If your pages aren’t showing up in search results, use Google’s URL Inspection tool in the Search Console. This shows you how Google sees your page and whether any noindex tags exist.

Google Quietly Ends COVID-Era Structured Data Support via @sejournal, @martinibuster

Google announced that it is dropping support for the 2020 COVID-era Special Announcements structured data type and completely phasing it out by July 31, 2025. The announcement was posted on the SpecialAnnouncement structured data documentation.

SpecialAnnouncement Structured Data

This structured data type was adopted by Google in April 2020 as a way to announce a wide range of information related to the COVID pandemic. It was specifically for COVID-related announcements and never evolved beyond pandemic-related purposes, although Google did allow local businesses to use it to announce updated store hours, communicating that data to Google without necessarily showing a rich result.

Interestingly, this structured data was released as a "beta" feature, meaning it was a live test subject to change. It was never integrated as an official structured data type, remaining in beta to the end.
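
For reference, SpecialAnnouncement markup looked roughly like this. This is a hedged sketch assembled from schema.org's SpecialAnnouncement type; the values are illustrative, not taken from Google's documentation:

```json
{
  "@context": "https://schema.org",
  "@type": "SpecialAnnouncement",
  "name": "Updated store hours",
  "text": "Our store is open 10am-4pm on weekdays until further notice.",
  "datePosted": "2020-04-06",
  "category": "https://www.wikipedia.org/wiki/2019-20_coronavirus_pandemic"
}
```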

There were two ways to submit a special announcement: through structured data or through Google Search Console.

Users who keep the SpecialAnnouncement structured data on their sites will see no negative effect, but it will no longer have any effect in Google Search.

Read Google’s special announcement about the deprecation of the SpecialAnnouncement structured data here:

Special announcement (SpecialAnnouncement) structured data (BETA)

Featured Image by Shutterstock/Blinx

Wix Announces Adaptive Content For Driving Higher Sales & Engagement via @sejournal, @martinibuster

Wix announced a new feature that enables businesses to create personalized content for visitors, increasing relevance and opportunities for higher sales and lead generation. The feature integrates AI into the workflow, making it easier for publishers to deliver advanced personalized experiences to returning customers.

Relevance = Higher Sales

It's commonly known that visitors who land on a site that closely matches the keywords they searched for tend to convert at a higher rate than visitors to a site with less relevant content. A website experience that's directly relevant to site visitors contributes to higher conversion rates, and being able to optimize the factors that drive that relevance is an innovative and useful way to deploy technology.

The new feature is easily configurable and offers simulations of what the adaptive content may look like so that Wix users can preview what their site visitors will see.

Muly Gelman, Senior Product Manager at Wix Personalize shared:

“Website personalization is now essential for delivering the relevant, engaging experiences today’s consumers expect. This application highlights how we can move beyond using AI to generate website content but leverage AI to dynamically adapt and personalize the live website experience for each visitor in real-time, empowering businesses to connect more effectively with their customers.

As a result, businesses can deliver engaging, personalized experiences that resonate with their audience, ultimately driving higher engagement rates and creating greater monetization opportunities.”

The new adaptive content feature complements Wix's new Automation Builder and its Wix Functions feature.

Featured Image by Shutterstock/chainarong06