Google SERPs Without Favicons Is A Glitch via @sejournal, @martinibuster

Last week, reports claimed Google was testing mobile search results without favicons or colors, eliminating any advantage conferred by a well optimized favicon. However, a Google representative confirmed this was not a test.

Favicons In Search Results

The word favicon is short for “favorite icon,” an image meant to be shown in browser tabs, in bookmarks, and in Google search results. Google recently recommended that publishers use favicons that are at least 32×32 pixels in size.
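For reference, a favicon is typically declared with a link element in the page’s head, as in the minimal sketch below. The file name and path are placeholders, not a requirement from Google.

<!-- Hypothetical example: declare a 32x32 PNG favicon in the page head. The path is a placeholder. -->
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">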

Google shows favicons in search results, which gives ranked websites a subtle way to attract attention and traffic. For that reason, removing favicons would be a blow to many websites because it would further erode their search visibility.

Favicons Missing?

Lily Ray tweeted a screenshot showing that the favicons were missing in search results.

The screenshot clearly shows that the favicons are missing from the search results. However, the reason turned out to be something else entirely.

Rajan Patel, VP of Engineering at Google, tweeted a response explaining that this was not how the search results should be displaying. He said this kind of search result is meant for low-end devices that can’t support rich results.

He tweeted:

“This isn’t intended. @lilyraynyc Did you have JavaScript disabled on the device? What kind of device was it? The experience you’re showing is what we show for lower end devices that can’t support the richness of the search result page.”

Lily Ray responded:

“JS is not disabled. iPhone iOS 18.1 on an iPhone 15 pro. I was either using the Chrome app or Google app to search, can’t remember which.”

Rajan Patel speculated that it may have been a misconfigured experiment.

Glitches At Google

It turns out to be an example of Google glitching. Something that wasn’t explored is whether the Internet connection or latency may have played a role in causing Google to treat the SERP request as if it were coming from a low-end device. In any case, it’s an example of how complex serving search results can be.

Featured Image by Shutterstock/tomertu

Google Ad Manager Launches Curation Capabilities For Agencies via @sejournal, @brookeosmundson

Just in – Google announces launch of new curation capability for agency advertisers.

The updates in Google Ad Manager are designed to streamline workflow, reduce complexity, and drive efficiencies for agencies.

These changes offer agencies new tools and capabilities to improve ad operations and campaign management.

Per Ginny Marvin, Google Ads Liaison, curation with Google Ad Manager is launching in North America, and will roll out globally early next year.

Here’s a breakdown of the key updates, along with insights on how they’ll impact agency operations moving forward.

The Launch Of Curation

In a recent blog post, Google announced the launch of its curation tool.

Google recognizes the challenge advertisers currently face trying to reach the right audiences across multiple networks and platforms.

With curation, agencies will be able to connect more easily with partners to find curated auction inventory packages, along with activating data segments within their agency Ad Manager account.

Agencies, in turn, can expect more accurate forecasting and increased match rates for audience targeting.

Another item rolling out with this launch is more billing efficiency.

Now, Google Ad Manager will handle billing and payments to data providers and inventory curators, which frees up admin time that would be spent creating separate invoices or different payment processes.

As of the announcement, curation supports the following:

  • Audigent
  • IAS
  • LiveRamp
  • Lotame
  • Multilocal
  • Permutive
  • PrimeAudience
  • Scope3

More Streamlined Programmatic Buying For Agencies

Google Ad Manager already has a streamlined user interface for agencies to report and analyze campaigns across different DSPs.

In addition, Google has one central location for agencies to discover new deals, advanced forecasting capabilities, and packaged deals across participating DSPs.

Google states that agencies can negotiate directly with publishers to create a single deal ID that works across all their buying platforms. It’s currently available for Preferred Deals and Private Auctions through Display & Video 360 and Yahoo.

Per the announcement, support for Programmatic Guaranteed and additional DSPs is coming in the months ahead.

The Importance of These Updates for Agencies

PPC marketers are currently juggling multiple platforms, increased client demands, and tighter deadlines.

Google’s new Ad Manager curation tool hopes to help ease this burden, allowing agencies to operate more efficiently and keep the focus on driving value.

For PPC agencies, these updates come with tangible benefits and strategic implications:

  • Improved Client Relationships: With more insightful and customizable reporting, agencies can offer clients a more transparent view of campaign performance. That visibility builds trust, and when clients are better informed, agencies can demonstrate value more efficiently.
  • Increased Focus on Strategy Over Operations: Automation in inventory management, order creation, and billing processes allows agencies to shift focus from operational tasks to higher-level strategy. Agencies can now allocate more resources to developing creative and more impactful campaigns, ultimately driving better results for clients.
  • Scalability Made Easier: As agencies grow, managing a large number of campaigns across multiple clients becomes challenging. These new Ad Manager tools provide a framework that makes it easier to scale operations.

Final Thoughts

Google’s curation capabilities in Ad Manager represent a step forward in meeting the evolving needs of agencies.

For PPC agencies, the ability to automate processes, manage inventory, and access enhanced reporting with greater flexibility can lead to improved results and stronger client relationships.

These tools allow agencies to focus on what matters most – driving impactful campaigns and delivering meaningful results.

Google’s Martin Splitt: Duplicate Content Doesn’t Impact Site Quality via @sejournal, @MattGSouthern

Google’s Search Central team has released a new video in its “SEO Made Easy” series. In it, Search Advocate Martin Splitt addresses common concerns about duplicate content and provides practical solutions for website owners.

Key Takeaways

Despite concerns in the SEO community, Google insists that duplicate content doesn’t harm a site’s perceived quality.

Splitt states:

“Some people think it influences the perceived quality of a site but it doesn’t. It does cause some challenges for website owners though, because it’s harder to track performance of pages with duplicates.”

However, it can create several operational challenges that website owners should address:

  • Difficulty in tracking page performance metrics
  • Potential competition between similar content pieces
  • Slower crawling speeds, especially at scale

Splitt adds:

“It might make similar content compete with each other and it can cause pages to take longer to get crawled if this happens at a larger scale. So it’s not great and is something you might want to clean up, but it isn’t something that you should lose sleep over.”

Three Solutions

1. Implement Canonical Tags

Splitt recommends using canonical tags in HTML or HTTP headers to indicate preferred URLs for duplicate content.

While Google treats these as suggestions rather than directives, they help guide the search engine’s indexing decisions.

Splitt clarifies:

“This tag is often used incorrectly by website owners so Google search can’t rely on it and treats it as a hint but might choose a different URL anyway.”
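For illustration, a canonical hint can be declared in the page’s HTML head as in the sketch below; the URL is a placeholder. The equivalent HTTP header form is Link: <https://www.example.com/preferred-page/>; rel="canonical", which is useful for non-HTML files such as PDFs.

<!-- Hypothetical example: point duplicate pages at the preferred URL. The URL is a placeholder. -->
<link rel="canonical" href="https://www.example.com/preferred-page/">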

2. Manage Internal Links and Redirects

When Google chooses different canonical URLs than specified, website owners should:

  • Review and update internal links to point to preferred canonical URLs
  • Consider implementing 301 redirects for external links (a sketch follows this list)
  • Ensure redirects are appropriately configured to maintain site performance
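As a minimal sketch of the redirect step, the Apache .htaccess rule below permanently redirects a duplicate URL to its canonical equivalent. The paths and domain are placeholders, and equivalent rules exist for other web servers.

# Hypothetical .htaccess rule: send an old duplicate URL to the preferred canonical URL with a 301 (permanent) redirect.
Redirect 301 /old-duplicate-page/ https://www.example.com/preferred-page/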

3. Consolidate Similar Content

The most strategic approach involves combining similar pages to:

  • Improve user experience
  • Streamline Search Console reporting
  • Reduce site clutter

Splitt explains:

“If you find that you have multiple very similar pages, even if Google doesn’t consider them duplicates, try to combine them. It makes information easier to find for your users, will make reporting in Google Search Console easier to work with, and will reduce clutter on your site.”

Search Console Notices

Google Search Console may flag pages with various duplicate content notices:

  • “Duplicate without user-selected canonical”
  • “Alternate page with proper canonical tag”
  • “Duplicate, Google chose different canonical than user”

These notifications indicate that Google has indexed the content, possibly under different URLs than initially intended.

International SEO Considerations

Splitt addresses duplicate content in international contexts, noting that similar content across multiple language versions is acceptable and handled appropriately by Google’s systems.

Why This Matters

This guidance represents Google’s current stance on duplicate content and clarifies best practices for content organization and URL structure optimization.

See the full video below:


Featured Image: AnnaKu/Shutterstock

Google Is Updating Its Customer Match Policy via @sejournal, @brookeosmundson

Google notified advertisers yesterday of an update to its Customer Match policy, which goes into effect in January 2025.

Customer Match allows advertisers to use their first-party data to reach people across many different campaign types, making it one of the most valuable features available for brands.

The new policy change introduces new standards designed to protect user experience. Read on to learn more about the update and how advertisers can prepare for the change.

The Customer Match Policy Update

In its email, Google stated that advertisers who misuse Customer Match may have their access taken away.

Google Ads email to advertisers about Customer Match policy updates.

They cite items like:

  • Having the potential to cause harm to users
  • Creating a poor user experience

If advertisers commit repeated violations, it could lead to an account suspension.

However, Google reiterated that account suspensions for violating Customer Match policies wouldn’t be immediate.

Advertisers will get a warning issued at least seven days before any type of Google Ads account suspension.

In the short email to Google Ads account managers, Google was also clear about how it gathers feedback when considering Customer Match policy violations:

  • User feedback: Advertisers should expect Google to scrutinize feedback from users, especially if their ads receive negative interactions or reported concerns.
  • Prevalence or severity of abuse: Ads deemed abusive or misleading could lead to Customer Match restrictions.
  • Repeated violations of Customer Match policy: Consistent breaches of the policy will be grounds for account suspension.

This update is slated to go into effect on January 13, 2025.

What This Means For Advertisers

If you’re an advertiser who is using Customer Match for any Google Shopping campaigns, now is a great time to revisit how you’re collecting user data.

Below are a few examples to ensure you’re collecting user data within compliance policy:

  • Make sure you’re getting a user’s consent before collecting their email address (or any other type of data)
  • Check your targeting settings to ensure you’re not targeting anyone under the age of 18
  • Don’t overly personalize your ads and monitor user feedback
  • Educate your team to ensure everyone involved in managing Google Ads campaigns understands these changes.

Navah Hopkins from Optmyzr provided her perspective on the Customer Match policy, stating:

“This email just went out to advertisers letting us know that Customer Match is a privilege, not a right.”

This is a great reminder that the policies Google has in place are there to help advertisers be more effective. But if brands are caught misusing Customer Match in any way, access can be taken away at any time.

The upcoming policy change introduces new standards designed to protect user experience, which Google continues to emphasize as a priority across its platforms.

By focusing on relevance, quality, and compliance, brands can continue to leverage Customer Match without interruption.

Digital Marketers See Schema Structured Data Shifting Beyond SEO via @sejournal, @martinibuster

An interesting discussion emerged on Twitter inspired by an article written by Jono Alderson. The article proposes thinking about Schema.org structured data markup as a way for emerging AI technologies to better understand and surface published Internet content.

Schema.org Structured Data Markup

The content on a website is called unstructured data because there is no formal, organized structure that labels each part of the content in a machine-readable way. Structured data, on the other hand, is the exact same content organized with labels that identify images, authors, and other content elements so that a machine can immediately understand it.
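As a simple illustration, the JSON-LD snippet below labels an article’s headline, image, and author so that machines can read those facts directly. All values are placeholders rather than markup from any particular site.

<!-- Hypothetical JSON-LD example; all values are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example Headline",
  "image": "https://www.example.com/images/example.jpg",
  "author": {
    "@type": "Person",
    "name": "Example Author"
  }
}
</script>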

Schema.org structured data markup is generally seen by publishers and the SEO community as something to use in order to make a web page eligible for rich results features in Google. That way of thinking is manifested in the many SEO and Schema.org WordPress plugins that are limited to outputting structured data that Google may use for surfacing rich results.

New AI technologies that can use structured data are here, requiring search marketers to consider a new approach to how structured data is deployed. What Jono encouraged in the article is to think of structured data as a way to create a “data-first foundation” that is ready for the near future.

The article proposes thinking of Schema.org markup as a way to communicate what a web page is about and how it relates to everything else on the website. Jono writes:

“But don’t shy away from building a connected graph of broader, “descriptive” schema just because Google’s not showing an immediate return. These “descriptive” types and relationships might end up being the lifeline between your content and the AI models of the future.”

Jono tweeted about his article on X (formerly Twitter) and Martha van Berkel, founder of SchemaApp, agreed with Jono’s article that the role of Schema structured data markup is shifting.

She tweeted:

“I agree with you that the role of schema markup is changing. Building a knowledge graph to manage how your website/content is understood with schema, and then asking it questions will be more important than optimizing for Rich Results or for Google.”

Ammon Johns tweeted:

“The biggest issue with Schema is that it is largely just self-declaration, no different in essence to META content, and we know how reliable Google decided that stuff was. So Google will use it, but they are unlikely to fully trust it.”

Ammon is right, of course, that structured data can’t be blindly trusted. One way to solve that problem is to use a smaller index of high-quality websites, the way Perplexity AI does.

Gagan Ghotra tweeted how they sometimes would like to expand their use of structured data but are limited by what the SEO and structured data tools offer.

Read Jono Alderson’s X discussion here.

Read Jono’s article:

What if Schema.org is just… Labels?

Featured Image by Shutterstock/PureSolution

WordPress Elementor Addons Vulnerability Affects 400k Sites via @sejournal, @martinibuster

Wordfence issued an advisory on a vulnerability patched in the popular Happy Addons for Elementor plugin, installed on over 400,000 websites. The security flaw could allow attackers to upload malicious scripts that execute when browsers visit affected pages.

Happy Addons for Elementor

The Happy Addons for Elementor plugin extends the Elementor page builder with dozens of free widgets and features like image grids, a user feedback and reviews function, and custom navigation menus. A paid version of the plugin offers even more design functionalities that make it easy to create functional and attractive WordPress websites.

Stored Cross-Site Scripting (Stored XSS)

Stored XSS is a vulnerability that typically occurs when a theme or plugin doesn’t properly filter user inputs (a process called sanitization), allowing malicious scripts to be uploaded to the database and stored on the server itself. When a user visits the website, the script downloads to the browser and executes actions like stealing browser cookies or redirecting the user to a malicious website.
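As a general illustration of the defense (not the plugin’s actual patch), a WordPress widget that echoes a user-supplied label would typically sanitize the value on save and escape it on output. In the sketch below, only sanitize_text_field() and esc_html() are real WordPress core functions; the field name and surrounding markup are hypothetical.

<?php
// Hypothetical sketch: sanitize on save, escape on output.
// Strip tags and stray whitespace from the submitted value before storing it.
$before_label = sanitize_text_field( $_POST['before_label'] ?? '' );

// Escape the stored value when rendering, so any remaining markup is displayed
// as text instead of executing in the visitor's browser.
echo '<span class="image-comparison-label">' . esc_html( $before_label ) . '</span>';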

The stored XSS vulnerability affecting the Happy Addons for Elementor plugin requires the attacker to first acquire Contributor-level permissions (authentication), which makes it harder to exploit.

WordPress security company Wordfence rated the vulnerability 6.4 on a scale of 1 – 10, a medium threat level.

According to Wordfence:

“The Happy Addons for Elementor plugin for WordPress is vulnerable to Stored Cross-Site Scripting via the before_label parameter in the Image Comparison widget in all versions up to, and including, 3.12.5 due to insufficient input sanitization and output escaping. This makes it possible for authenticated attackers, with Contributor-level access and above, to inject arbitrary web scripts in pages that will execute whenever a user accesses an injected page.”

Plugin users should consider updating to the latest version, currently 3.12.6, which contains a security patch for the vulnerability.

Read the Wordfence advisory:

Happy Addons for Elementor <= 3.12.5 – Authenticated (Contributor+) Stored Cross-Site Scripting via Image Comparison

Featured Image by Shutterstock/Red Cristal

Google Rolls Out November 2024 Core Algorithm Update via @sejournal, @MattGSouthern

Google has released its latest broad core algorithm update for November 2024. This update continues Google’s refinement of search systems to enhance the quality of results.

On X, Google states:

“Today we released the November 2024 core update. We’ll add it to our ranking release history page in the near future and update when the rollout is complete.”

Core Updates Explained

These algorithmic changes, which Google implements several times annually, are designed to improve the overall search experience by reassessing how content is evaluated and ranked.

Unlike targeted updates, core updates affect search results globally across all regions and languages.

What You Should Know

According to Google’s documentation, most websites may not notice significant changes from core updates.

However, some sites might experience notable shifts in search rankings and traffic.

Google recommends that site owners who observe ranking changes should:

  • Wait until the update is completed before analyzing the impact
  • Compare traffic data from before and after the update in Search Console
  • Pay special attention to pages experiencing major position drops (particularly those falling more than 20 positions)
  • Evaluate content quality using Google’s self-assessment guidelines
  • Focus on sustainable improvements rather than quick fixes

Recovery & Response

For sites affected by the update, Google emphasizes that recovery may take time—potentially several months—as its systems learn and validate improvements.

Specific changes aren’t guaranteed to result in ranking recoveries. Google emphasizes that search results are dynamic due to evolving user expectations and continuous web content updates.

Site owners can monitor the rollout’s completion status through Google’s Search Status Dashboard.

As with previous core updates, Google is expected to announce when the rollout, which typically takes about two weeks, has finished.

Looking Ahead

This marks Google’s final confirmed core update for 2024, following previous algorithmic changes throughout the year.

We will closely assess the impact as the update rolls out across Google’s search results.


Featured Image: Salarko/Shutterstock

New Internet Rules Will Block AI Training Bots via @sejournal, @martinibuster

New standards are being developed to extend the Robots Exclusion Protocol and Meta Robots tags, allowing them to block all AI crawlers from using publicly available web content for training purposes. The proposal, drafted by Krishna Madhavan, Principal Product Manager at Microsoft AI, and Fabrice Canel, Principal Product Manager at Microsoft Bing, will make it easy to block all mainstream AI Training crawlers with one simple rule.

Virtually all legitimate crawlers obey the Robots.txt and Meta Robots tags, which makes this proposal a dream come true for publishers who don’t want their content used for AI training purposes.

Internet Engineering Task Force (IETF)

The Internet Engineering Task Force (IETF) is an international Internet standards body, founded in 1986, that coordinates the development and codification of standards that everyone can voluntarily agree on. For example, the Robots Exclusion Protocol was independently created in 1994, and in 2019 Google proposed that the IETF adopt it as an official standard with agreed-upon definitions. In 2022 the IETF published an official Robots Exclusion Protocol standard that defines what it is and extends the original protocol.

Robots.Txt For Blocking AI Robots

The draft proposal seeks to create additional rules that extend the Robots Exclusion Protocol (Robots.txt) to cover AI training robots. This will bring some order and give publishers a choice over which robots are allowed to crawl their websites.

Adherence to the Robots.txt protocol is voluntary, but all legitimate crawlers tend to obey it.

The draft explains the purpose of the new Robots.txt rules:

“While the Robots Exclusion Protocol enables service owners to control how, if at all, automated clients known as crawlers may access the URIs on their services as defined by [RFC8288], the protocol doesn’t provide controls on how the data returned by their service may be used in training generative AI foundation models.

Application developers are requested to honor these tags. The tags are not a form of access authorization however.”

An important quality of the new robots.txt rules and the meta robots HTML elements is that they don’t require naming specific crawlers. One rule covers all bots that are crawling for AI training data and that voluntarily agree to follow these protocols, which is something that all legitimate bots do. This will simplify bot blocking for publishers.

The following are the proposed Robots.txt rules:

  • DisallowAITraining – instructs the parser not to use the data for training an AI language model.
  • AllowAITraining – instructs the parser that the data can be used for training an AI language model.

The proposed meta robots directives mirror the robots.txt rules, using the same DisallowAITraining and AllowAITraining keywords in the robots meta element.
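As a sketch of how a publisher might apply the proposal once finalized (assuming the rules accept a path pattern the way Allow and Disallow do, and that the keywords keep their proposed names), a site-wide opt-out could look like this:

# Hypothetical robots.txt sketch based on the draft: allow normal crawling but opt the whole site out of AI training use.
User-agent: *
Allow: /
DisallowAITraining: /

The equivalent page-level signal would be a robots meta element such as <meta name="robots" content="DisallowAITraining">, again assuming the draft’s keywords are adopted as proposed.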

Provides Greater Control

AI companies have been unsuccessfully sued in court for using publicly available data. AI companies have asserted that it’s fair use to crawl publicly available websites, just as search engines have done for decades.

These new protocols give web publishers control over crawlers whose purpose is for consuming training data, bringing those crawlers into alignment with search crawlers.

Read the proposal at the IETF:

Robots Exclusion Protocol Extension to manage AI content use

Featured Image by Shutterstock/ViDI Studio

Automattic Faces Irony Of New WPEngineTracker Protest Site via @sejournal, @martinibuster

In an ironic twist to the ongoing dispute between Automattic and WP Engine, a newly published website on WPEngineTracker.com is displaying a protest message against CEO Matt Mullenweg.

Copycat Domain Name Registered

Someone registered the domain name WPEngineTracker.com using the words that Automattic’s WordPressEngineTracker.com domain uses to describe itself (WP Engine Tracker). If people who are looking for Automattic’s WP Engine Tracker site navigate to WPEngineTracker.com, they will land on the variant website, which is currently publishing a protest message against Matt Mullenweg.

Screenshot of Typosquat Domain

The above domain name was only registered a few days ago, on November 7th. The Internet being what it is, it was arguably inevitable that someone would register the typosquat domain name variant.

Registration Of Domain Announced On GitHub

Someone posted a comment in the official WordPressEngineTracker.com GitHub repository to announce that they registered the domain name variant. The post was met with approval as evidenced by the 15 likes and 18 laughing emojis it received.

Screenshot Of Announcement In GitHub Repository

Domain Registration Announced On Reddit

The person who made the announcement on GitHub appears to have posted a discussion on the WordPress subreddit announcing that they registered the domain name variant. The Redditor who made the announcement has held their account for 16 years.

They posted:

“I found it odd that Matt registered wordpressenginetracker.com when the thingamajig isn’t called “WordPress Engine Tracker” – it’s “WP Engine Tracker”
Thought I should try to be helpful so I bought https://wpenginetracker.com”

That post was also met with positive reactions, receiving 138 upvotes three days later.

Matt Mullenweg’s Dispute With WPEngine

Disputes can appear different depending on who is telling the story. Automattic’s recent motion to dismiss WP Engine’s lawsuit offers details from its side, providing insight into the situation. Despite multiple opportunities to share its perspective, Automattic has received limited approval from WordPress users on social media. The registration of the WP Engine Tracker domain name variant could be said to be a manifestation of that negative sentiment toward Automattic and Mullenweg.

Featured Image by Shutterstock/Vulp

Cloudflare Blocks Automattic’s WP Engine Tracker For Phishing via @sejournal, @martinibuster

Automattic’s WP Engine Tracker website was temporarily blocked by Cloudflare over the weekend as a suspected phishing site, sparking cheers from members of the WordPress subreddit. Meanwhile, someone registered the typosquatting domain WPEngineTracker.com to protest against Matt Mullenweg.

Automattic, presumably under the direction of Matt Mullenweg, recently created a website called WP Engine Tracker on the WordPressEngineTracker.com domain name that lists how many WordPress sites have moved away from managed web host WP Engine. It also recommends web hosts that current customers can move to and offers a download of all domains that are hosted on WP Engine.

An email from Automattic to Search Engine Journal offered background information about the WP Engine Tracker website:

“The beauty of open source software is that everyone is able to access data on a granular level, because it’s all publicly available information. That public data has shown that ever since WP Engine filed its lawsuit – making it clear that they do not have an official association with WordPress and attracting greater attention to the company’s poor service, modifications to the WordPress core software, increasing and convoluted pricing structure, and repeated down times – their customers have left their platform for other hosting providers. WP Engine can and always has been able to access the WordPress software and plugins available on WordPress.org, as can anyone.”

Cloudflare Blocks WP Engine Tracker Website

Sometime on November 9th Cloudflare blocked access to Automattic’s WP Engine Tracker website with a message alerting Internet users that the website has been reported for phishing attempts.

The Cloudflare warning said:

“Warning

Suspected Phishing

This website has been reported for potential phishing.

Phishing is when a site attempts to steal sensitive information by falsely presenting as a safe source.”

WordPress Subreddit Cheers The Blocking

A Reddit discussion appeared soon after the site was blocked with the headline: Cloudflare is showing a phishing warning on wordpressenginetracker.com

Typical comments:

“Wow I’ve actually never seen that screen before. That’s hilarious.”

“As it should. Chrome should give it the red screen of death”

“It’s an interesting development, which made me wonder: Are people reporting phishing to Cloudflare just to mess with Mr. Mullenweg or is there something the site does that can actually be considered phishing?

Cloudflare’s report form has another type of abuse to select, which, in this case, is as obvious as the sun on the sunniest day: Trademark infringement. Why are people reporting phishing?”

One commenter noted the website was displaying a “403 Forbidden” error message if a site visitor ignored the warning and clicked through to the site. A 403 server response means that the server acknowledges the browser request but is denying access to the website.

Screenshot Of Blocked Website

Typosquatting Domain Name Registered

Typosquatting is when someone registers a domain name that is similar to a brand name and that users may type to visit. In this case, someone registered the domain name WPEngineTracker.com to take advantage of the fact that Automattic had registered the domain name WordPressEngineTracker.com but was calling it WP Engine Tracker. When people try to reach the Automattic site by typing in the name of the site as the domain they then arrive at the typosquat domain.

Screenshot of Typosquat Domain

The above domain name was only registered a few days ago on November 7th. The Internet being what it is, it was inevitable that someone would register the typosquat domain name variant.

WordPressEngineTracker.com Is Back Online

After a few hours of downtime, Cloudflare removed the phishing block and Automattic’s WP Engine Tracker website was restored.

Featured Image by Shutterstock/santypan