Google Launches New ‘Saved Comparisons’ Feature For Analytics via @sejournal, @MattGSouthern

Google announced a new tool for Analytics to streamline data comparisons.

The ‘saved comparisons’ feature allows you to save filtered user data segments for rapid side-by-side analysis.

Google states in an announcement:

“We’re launching saved comparisons to help you save time when comparing the user bases you care about.

Learn how you can do that without recreating the comparison every time!”

Google links to a help page that lists several benefits and use cases:

“Comparisons let you evaluate subsets of your data side by side. For example, you could compare data generated by Android devices to data generated by iOS devices.”

“In Google Analytics 4, comparisons take the place of segments in Universal Analytics.”

Saved Comparisons: How They Work

The new comparisons tool allows you to create customized filtered views of Google Analytics data based on dimensions like platform, country, traffic source, and custom audiences.

These dimensions can incorporate multiple conditions using logic operators.

For example, you could generate a comparison separating “Android OR iOS” traffic from web traffic. Or you could combine location data like “Country = Argentina OR Japan” with platform filters.
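The Analytics interface builds these conditions interactively, but the same OR/AND logic appears in the Google Analytics Data API's `FilterExpression` format (`andGroup`, `orGroup`, `inListFilter`). The sketch below is illustrative only: `country` and `platform` are real GA4 dimensions, but the helper functions are hypothetical conveniences, not part of any Google library.

```python
# Sketch of the OR/AND condition logic described above, expressed in the
# shape of a Google Analytics Data API FilterExpression. The saved
# comparisons UI builds similar conditions; this JSON shape is the Data
# API's, shown only to illustrate how the operators combine.

def in_list(dimension, values):
    """A condition matching any of several values (an implicit OR)."""
    return {"filter": {"fieldName": dimension,
                       "inListFilter": {"values": values}}}

def and_group(*expressions):
    """Combine conditions so that all of them must match."""
    return {"andGroup": {"expressions": list(expressions)}}

# "Country = Argentina OR Japan" combined with an "Android OR iOS" filter:
comparison_filter = and_group(
    in_list("country", ["Argentina", "Japan"]),
    in_list("platform", ["Android", "iOS"]),
)
```

Each `inListFilter` is itself an OR over its values, and the `andGroup` requires every listed condition to match, which mirrors how the comparison conditions described above combine.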

These customized comparison views can then be saved to the property level in Analytics.

Users with access can quickly apply saved comparisons to any report for efficient analysis without rebuilding filters.

Google’s documentation states:

“As an administrator or editor…you can save comparisons to your Google Analytics 4 property. Saved comparisons enable you and others with access to compare the user bases you care about without needing to recreate the comparisons each time.”

Rollout & Limitations

The saved comparisons feature is rolling out gradually. There’s a limit of 200 saved comparisons per property.

For more advanced filtering needs, such as sequences of user events, Google recommends creating a custom audience first and saving a comparison based on that audience definition.

Some reports may be incompatible if they don’t include the filtered dimensions used in a saved comparison. In that case, the documentation suggests choosing different dimensions or conditions for that report type.

Why SEJ Cares

The ability to create and apply saved comparisons addresses a time-consuming aspect of analytics work.

Analysts must view data through different lenses, segmenting by device, location, traffic source, etc. Manually recreating these filtered comparisons for each report can slow down production.

Any innovation streamlining common tasks is welcome in an arena where data teams are strapped for time.

How This Can Help You

Saved comparisons mean less time getting bogged down in filter recreation and more time for impactful analysis.

Here are a few key ways this could benefit your work:

  • Save time by avoiding constant recreation of filters for common comparisons (e.g. mobile vs desktop, traffic sources, geo locations).
  • Share saved comparisons with colleagues for consistent analysis views.
  • Switch between comprehensive views and isolated comparisons with a single click.
  • Break down conversions, engagement, audience origins, and more by your saved user segments.
  • Use thoughtfully combined conditions to surface targeted segments (e.g. paid traffic for a certain product/location).

The new saved comparisons in Google Analytics may seem like an incremental change. However, simplifying workflows and reducing time spent on mundane tasks can boost productivity in a big way.


Featured Image: wan wei/Shutterstock

LinkedIn Report: AI Skills Now Must-Have For Marketers via @sejournal, @MattGSouthern

A new report by Microsoft and LinkedIn reveals the rapid adoption of AI tools and skills in the marketing industry.

According to the 2024 Work Trend Index Annual Report, which surveyed over 31,000 people across 31 countries, marketing professionals who leverage AI enjoy a competitive advantage.

Employers recognize the efficiency gains AI capabilities provide in marketing roles and increasingly seek applicants with those skills.

Karim R. Lakhani, Chair of the Digital Data Design Institute at Harvard, states in the report:

“Marketers are harnessing the power of AI to work smarter, not just faster. It’s enabling them to focus on higher-value, creative work while automating more routine tasks.”

Here are some highlights from the report illustrating the need to develop an AI skill set to remain competitive.

AI Aptitude: The New Must-Have Skill for Marketers

The survey data reveals a strong preference among business leaders for candidates and employees with AI skills.

A majority, 66%, stated they wouldn’t consider hiring candidates lacking AI proficiency.

Further, 71% expressed a preference for less experienced job seekers with AI skills over more seasoned professionals without that expertise.

This inclination was pronounced in creative fields like marketing and design.

Michael Platt, a neuroscience professor at the Wharton School, states in the report:

“AI is redefining what it means to be a competitive marketer in today’s digital landscape. Professionals who can effectively integrate AI into their work are positioning themselves as invaluable assets to their organizations.”

The report indicates that early-career marketers who develop AI skills could benefit significantly.

77% of leaders reported that employees adept at leveraging AI would be trusted with greater responsibilities earlier in their careers than their peers without AI skills.

The AI Arms Race For Top Marketing Talent

Data from LinkedIn shows that job postings mentioning AI tools and applications have seen 17% higher application growth than those that don’t mention AI.

Additionally, 54% of early-career employees cited access to AI technologies as a key factor influencing their choice of employer.

Organizations that provide AI training and support for their marketing teams likely have an advantage in attracting top talent.

Why SEJ Cares

The widespread adoption of AI in marketing signifies a shift in the skills and capabilities necessary for succeeding in this rapidly evolving industry.

As AI transforms marketing approaches, professionals who fail to adapt risk being left behind.

The 2024 Work Trend Index Annual Report’s findings are relevant to marketing professionals at all levels. They demonstrate that AI proficiency is necessary for career advancement and job market competitiveness.

Additionally, the report highlights businesses’ role in fostering an AI-driven culture.

Companies investing in AI tools, training, and employee support will be better positioned to attract and retain top talent, drive innovation, and achieve better results.

Read the full report.

How This Can Help You

For marketing professionals to succeed in the AI era, the report suggests:

  • Prioritize developing AI skills through courses, workshops, training programs, and collaborating with AI practitioners to gain hands-on experience.
  • Embrace experimenting with new AI tools and techniques, integrating them into daily workflows to improve efficiency.
  • Share AI knowledge actively with colleagues to foster a culture of knowledge sharing and drive organizational AI adoption.
  • Highlight AI capabilities during job searches by demonstrating the successful use of AI to drive results in previous roles.
  • Choose employers committed to AI adoption that provide access to cutting-edge AI tools and support ongoing learning.

These recommendations can help you future-proof your career and advance in an increasingly competitive field.


Featured Image: eamesBot/Shutterstock

Big Brands Receive Site Abuse Manual Actions via @sejournal, @martinibuster

Google indicated that manual actions were coming to websites that host third-party content, and according to some, the effects of those manual actions may be showing up in the search results.

Site Reputation Abuse Manual Actions

Google’s SearchLiaison tweeted late on May 6th that Google was enforcing the new site reputation abuse policy with manual actions. Manual actions are when someone at Google inspects a webpage to determine if the page is in violation of a spam policy.

The Site Reputation Abuse policy affects sites that host third-party content published with little to no oversight from the hosting website. The purpose of the arrangement is for the third party to take advantage of the host site’s reputation so that both receive a share of affiliate sales. An example could be a news website hosting coupon code content that’s entirely created by a third party.

What Are Manual Actions?

A manual action is when a human at Google visually inspects a website to determine whether it has violated Google’s spam policies. The result of a manual action is typically, but not always, removal from Google’s search index. Sometimes the offending webpages are completely removed, and sometimes they are only prevented from ranking.

Sites With Manual Actions

Google notifies the site publisher if a site has been issued a manual action. Only the site publisher and those with access to the website’s Search Console account are able to know. Google generally doesn’t announce which sites have received a manual action. So unless a site has completely disappeared from Google Search, it’s not possible to say with any degree of certainty whether a site has received one.

The fact that a webpage has disappeared from Google’s search results is not confirmation that it has received a manual action, especially if other pages from the site can still be found.

It’s important, then, to understand that unless a website or Google publicly acknowledges a manual action, anyone on the outside can only speculate. The only exception is when a site is completely removed from the search index, in which case there’s a high probability that the site has indeed been penalized.

Big Brands Dropped From Search Results

It can’t be said with certainty that a site received a manual action if the page is still in the search index. But Aleyda Solis noticed that some big brand websites have recently stopped ranking for coupon related search queries.

Aleyda shared before-and-after screenshots of coupon-related search results from when the Site Reputation Abuse policy was enforced, showing sites that were no longer ranking. In her screenshots, sites that appear to have removed their coupon webpages are highlighted in red, while sites that still hosted coupon pages but were no longer ranking are highlighted in orange.

It should be noted that Aleyda does not accuse any site of having received a manual action. She only shows that some sites are no longer ranking for coupon code search queries.

Aleyda tweeted:

“Google has already started taking action for the new site reputation abuse policy 👀👇 See the before/after for many of the most popular “promo code(s)” queries:

* carhartt promo code
* postmates promo code
* samsung promo code
* godaddy promo code

Sites that were ranking before and not anymore:

* In Orange (with still existing coupon sections): Cnet, Glamour, Reuters, USA Today, CNN, Business Insider
* In Red (with removed coupon sections): LA Times, Time Magazine, Wired, Washington Post”

Did Reuters Receive A Manual Action?

The global news agency Reuters formerly took the number one ranking spot for the keyword phrase “GoDaddy promo code” (as seen in the “before” screenshot posted by Aleyda to Twitter).

But Reuters is completely removed from the search results today.

Did the Reuters GoDaddy page receive a manual action? Manual actions typically result in a webpage’s complete removal from Google’s search index, but that’s not the case with the Reuters GoDaddy coupon page. A site search shows the page is still in Google’s index. It’s just not ranking anymore.

Reuters Coupon Page Remains In Search Index

Screenshot of search results

It’s hard to say with certainty whether the Reuters page received a manual action, but what is clear is that the page is no longer ranking, as Aleyda correctly points out.

Did Reuters GoDaddy Page Violate Google’s Spam Policy?

Google’s Site Reputation Abuse policy says that a characteristic of site reputation abuse is the lack of oversight of the third party content.

“Site reputation abuse is when third-party pages are published with little or no first-party oversight or involvement…”

Reuters’ current GoDaddy page contains a disclaimer that asserts oversight over the third-party content.

This is the current disclaimer:

“The Reuters newsroom staff have no role in the production of this content. It was checked and verified by the coupon team of Reuters Plus, the brand marketing studio of Reuters, in collaboration with Upfeat.”

Reuters’ disclaimer shows that there is first-party oversight which indicates that Reuters is in full compliance with Google’s spam policy.

But there’s a problem: a completely different disclaimer was in place before Google’s Site Reputation Abuse policy announcement. This raises the question of whether Reuters changed its disclaimer in order to give the appearance of oversight.

Fact: Reuters Changed The Disclaimer

The current disclaimer on the Reuters coupon page asserts that there was some oversight of the third-party content. If that’s true, then Reuters complies with Google’s spam policy.

But from March 11, 2024 and prior, Reuters published a disclaimer that clearly disavowed involvement with the third-party content.

This is what Google’s site reputation abuse policy says:

“Site reputation abuse is when third-party pages are published with little or no first-party oversight or involvement…”

And this is the March 11, 2024 disclaimer on the Reuters coupon page:

“Reuters was not involved in the creation of this content.”

Reuters Previously Denied Oversight Of 3rd Party Content

Screenshot of Reuters' previous disclaimer that disavows involvement in third party coupon content

Reuters changed its disclaimer about a week after Google’s core update was announced. Prior to Google’s spam policy announcement, that disclaimer had always distanced Reuters from involvement.

This is their 2023 disclaimer on the same GoDaddy Coupon page:

“This service is operated under license by Upfeat Media Inc. Retailers listed on this page are curated by Upfeat. Reuters editorial staff is not involved.”

Why did that disclaimer change after Google’s Site Reputation Abuse announcement? If Reuters is in violation, did it receive a manual action but was spared from having those pages removed from Google’s search index?

Manual Actions

Manual actions can result in the complete removal of the offending webpage from Google’s search index. That’s not what happened to Reuters and the other big-brand coupon pages highlighted by Aleyda. It may be that the big-brand coupon pages received only a ranking demotion rather than the full-blown de-indexing that’s common for regular sites. Or the demotion of those pages may be complete coincidence.

Featured Image by Shutterstock/Mix and Match Studio

Hit By The Core Algorithm? 5 Factors To Be Aware Of via @sejournal, @martinibuster

Many factors can affect rankings after a core algorithm update. It’s not always about the helpfulness of content; other factors can play a role in why the algorithm changed and negatively affected your website’s rankings.

If you find yourself saying, “It used to rank before, why doesn’t it rank now?” then some of these factors may be something to consider.

1. Algorithmic Losses Are Not Necessarily Persistent

Sites hit by the core algorithm update (which includes the Helpful Content component) do not have a permanent strike against them. Over the past ten years, Google has rolled out complicated algorithms and systems that can take months between update cycles, leaving affected sites unable to find a quick path back to the search results. While that’s not a permanent mark, it can feel as if the site has acquired a curse marking it as no good and permanently excluded.

Google’s John Mueller answered a question where he confirmed that getting caught in a Core Algorithm Update is not persistent and with work a site can recover from being hit by an update.

Someone asked on X (formerly Twitter):

“Can a site hit by HCU grow again in terms of traffic if it improves in quality? Many fear that no matter the amount of improvements we make a HCU hit site will forever have a classifier assigned to it that keeps it from growing again.”

John Mueller responded:

“Yes, sites can grow again after being affected by the “HCU” (well, core update now). This isn’t permanent. It can take a lot of work, time, and perhaps update cycles, and/but a different – updated – site will be different in search too.”

2. Recovering Is Not The Right Word

A lot of people think of recovering from an update as resetting the rankings so that websites regain positions to a previous state. John Mueller’s answer on X suggests that publishers can understand algorithmic effects as something that requires adjusting a website to fit into an evolving web, including user expectations.

Mueller tweeted:

“Permanent changes are not very useful in a dynamic world, so yes. However, “recover” implies going back to just-as-before, and IMO that is always unrealistic, since the world, user-expectations, and the rest of the web continues to change. It’s never “just-as-before”.”

This statement seems to imply that, to a certain degree, algorithmic updates reflect what users expect to see in the search results. A way to understand this is with the example of Google’s Medic Update from a few years back. That update realigned the search results with what users expect to see when making certain queries. After the Medic update, search queries for medical topics required search results with a scientific approach. Sites that reflected folk remedies and unscientific approaches no longer fit that updated definition of relevance.

There are subtle variations to this realignment of search results that go directly to answering the question: what do users mean when they ask a search query? Sometimes relevance means informational sites, while for other queries it may mean review sites are what users expect to see.

So if your site is hit by a core algorithm update, revisit the SERPs, try to determine what the new results mean in terms of relevance, and self-assess whether your site meets that new definition.

Circling back to Mueller’s response, there is no “going back to just-as-before,” and that may be because there has been a subtle shift in relevance. Sometimes the fix is subtle. Sometimes getting back into the search engine results pages (SERPs) requires a major change to the website so that it meets user expectations.

3. Thresholds And Ranking Formulas

Another interesting point that Mueller discussed is the difference between an ongoing algorithmic evaluation and the more persistent effects from a ranking system that requires an update cycle before a site can recover.

Someone asked:

“The simple question is whether you need to wait for a new core update to recover from the HCU. A simple “yes” or “no you can recover anytime” would suffice.”

John Mueller answered:

“It’s because not all changes require another update cycle. In practice, I’d assume that stronger effects will require another update. Core updates can include many things.”

Then continued with these interesting comments:

“For example, a ranking formula + some thresholds could be updated. The effects from the updated formula are mostly ongoing, the changes to thresholds often require another update to adjust.

…(“thresholds” is a simplification for any numbers that need a lot of work and data to be recalculated, reevaluated, reviewed)”

The above means there are two kinds of effects that can hit a site. One is part of a continually updated ranking formula that can quickly reflect changes made to a site. These used to be called rolling updates, where the core algorithm makes relatively instant evaluations about a site and boosts or demotes its rankings.

The other kind of algorithmic issue is one that requires a massive recalculation. This is what the HCU and even the Penguin algorithm used to be like before they were folded into the core algorithm: massive calculations that assigned scores updated only on the following cycle.

4. The Web & Users Change

In another recent exchange on X, John Mueller affirmed that a key to success is keeping track of what users expect.

He tweeted:

“…there is no one-shot secret to long-lasting online success. Even if you find something that works now, the web, user desires, and how they engage with websites changes. It’s really hard to make good, popular, persistent things.”

That statement offers these concepts to keep in mind for online success:

  • The Internet
  • User desires
  • How users engage with websites
  • Popularity is not persistent

Those are not algorithm factors. But they could be things that Google picks up on in terms of understanding what users expect to see when they make a search query.

What users expect to see is my preferred definition of relevance. That has practically nothing to do with “semantic relevance” and everything to do with what users themselves expect. This is something some SEOs and publishers trip over: they focus hard on what words and phrases mean and forget that what really matters is what those words mean to users.

Mueller posted something similar in an answer about why a website ranks #1 in one country and doesn’t perform as well in another. He said that what users expect to see in response to a query can differ from country to country. The point is that it’s not about semantics, entities, and other technical aspects; search ranking relevance often has a lot to do with the users themselves.

He tweeted:

“It’s normal for the search results in countries to vary. Users are different, expectations may vary, and the web is also very different.”

That insight may be helpful for some publishers who have lost rankings in a core algorithm update. It could be that user expectations have changed and the algorithm is reflecting those expectations.

5. Page-Level Signal

Google’s SearchLiaison affirmed that the Helpful Content component of the core algorithm is generally a page-level signal but that there are sitewide ones as well. His tweet quoted the Helpful Content Update FAQ which says:

“Do Google’s core ranking systems assess the helpfulness of content on a page-level or site-wide basis?

Our core ranking systems are primarily designed to work on the page level, using a variety of signals and systems to understand the helpfulness of individual pages. We do have some site-wide signals that are also considered.”

Keep An Open Mind

It’s frustrating to lose rankings in a core algorithm update. I’ve been working in SEO for about 25 years and auditing websites since 2004. Helping site owners identify why their sites no longer rank has taught me that it’s useful to keep an open mind about what is affecting the rankings.

The core algorithm has a lot of signals: some pertain to helpfulness, while others relate to relevance to users, relevance to queries, and plain site quality. So it may be helpful not to get stuck on the assumption that a site lost rankings for one reason, when it could be something else entirely, or even multiple factors.

Featured Image by Shutterstock/Benny Marty

Google’s Mueller Outlines Path To Recovery For Sites Hit By Core Update via @sejournal, @MattGSouthern

Google’s Search Advocate John Mueller recently addressed the SEO community’s concerns about site recovery after being impacted by algorithm updates.

The conversation arose as people questioned whether sites hit by the September helpful content update could regain lost traffic and rankings after future core updates.

The exchange began on X when an SEO professional, Thomas Jepsen, asked Mueller if Google’s previous stance still held true – that the search engine “doesn’t hold a grudge” and sites will recover once issues are resolved.

Mueller confirmed, “That’s still the case,” but cautioned that “some things take much longer to be reassessed (sometimes months, at the moment), and some bigger effects require another update cycle.”

Addressing Lingering Confusion

Following Mueller’s statements, confusion persisted around whether sites hit by the helpful content update require a new core update to recover lost rankings.

Mueller clarified:

“… not all changes require another update cycle. In practice, I’d assume that stronger effects will require another update. Core updates can include many things.”

He likened core updates to adjustments in ranking formulas and thresholds, with the latter often necessitating another update cycle.

Dismissing Permanence Concerns

There’s concern that sites affected by the September helpful content update will be permanently classified, obstructing future growth.

Mueller addressed those concerns and affirmed that affected sites could regain traffic by improving quality.

However, Mueller says full recovery to pre-update levels is unrealistic.

He states:

“Permanent changes are not very useful in a dynamic world… However, ‘recover’ implies going back to just-as-before, and IMO that is always unrealistic, since the world, user-expectations, and the rest of the web continues to change. It’s never ‘just-as-before’.”

When asked directly if a site affected by the helpful content update can grow in traffic if it improves in quality, Mueller stated:

“Yes, sites can grow again after being affected by the ‘HCU’ (well, core update now). This isn’t permanent. It can take a lot of work, time, and perhaps update cycles, and/but a different – updated – site will be different in search too.”

The Long Road Ahead

Continuing the conversation on LinkedIn, Mueller stressed that the recovery process isn’t specific to helpful content updates or core updates but applies to all kinds of systems and updates within Google.

Mueller states:

“… to be clear, it’s not that “helpful content update” “recoveries” take longer than other updates. It’s just that some kinds of changes take a long time to build up, and that applies to all kinds of systems & updates in Google & in any other larger computer system. Saying that this is specific to the helpful content system, or to core updates would be wrong & misleading.”

Mueller acknowledged that the recovery process doesn’t have a single, straightforward solution and may require deep analysis and significant work to understand how to make a website relevant again.

“There is, however, the additional aspect of the “core update” being about how our systems assess content overall, how we consider it to be helpful, reliable, relevant to users’ queries. This does not map back to a single change that you can make on a website, so – in my experience – it’s not something that a website can just tweak overnight and be done with it. It can require deep analysis to understand how to make a website relevant in a modern world, and significant work to implement those changes — assuming that it’s something that aligns with what the website even wants.”

Lastly, he adds that a recovery will take more than fixing technical issues. It may require a realignment of business priorities.

“These are not “recoveries” in the sense that someone fixes a technical issue and they’re back on track – they are essentially changes in a business’s priorities (and, a business might choose not to do that).”

Why SEJ Cares

Google’s core algorithm updates can dramatically impact a website’s search visibility and traffic.

For sites negatively affected, clear guidance on recovery is critical – both for setting realistic expectations and charting a practical path forward.

Mueller’s insights reassure that improvement remains possible through strategic realignment with Google’s current quality standards.

How This Can Help You

Mueller’s insights allow impacted sites to set realistic expectations for recovery.

Regaining visibility remains possible with patience, thorough analysis, and persistent effort.

Mueller’s statements offer the following takeaways for sites impacted by Google’s updates:

  • Recovery isn’t out of the question but will require significant effort over multiple update cycles.
  • Simply restoring previous tactics is insufficient; sites must evolve to meet changing user needs and internet best practices.
  • Deep analysis is necessary to identify areas for improvement and realign content strategy with modern relevance signals.
  • Returning to previous ranking positions is unrealistic due to evolving user needs.

Featured Image: rudall30/Shutterstock

Google Confirms Site Reputation Abuse Update via @sejournal, @martinibuster

Google’s SearchLiaison confirmed that Google’s site reputation abuse update started on Monday May 6th. Many sites from across the web took down webpages that could be perceived as hosting third-party content for the purpose of ranking in search engines.

Site Reputation Abuse

An old strategy that’s made a recent comeback is one in which a marketer piggybacks their content on another publisher’s website in order to rank it in search engines.

Some newbie marketers slapped the awkward name “parasite SEO” on the practice. It’s an inept name for this strategy because a parasite subsists on an unwilling host organism, whereas this approach to ranking happens by agreement, not by one site attacking another without permission.

This isn’t a low-level affiliate marketer strategy though. It’s also one that’s practiced by many major brands, particularly for credit cards and product reviews.

Google Targets Third Party Content

This specific spam policy targets sites that host third-party content with which the host publisher has little involvement. However, it takes more than just hosting third-party content to be targeted as spam.

Google’s formal definition is:

“Site reputation abuse is when third-party pages are published with little or no first-party oversight or involvement, where the purpose is to manipulate Search rankings by taking advantage of the first-party site’s ranking signals. Such third-party pages include sponsored, advertising, partner, or other third-party pages that are typically independent of a host site’s main purpose or produced without close oversight or involvement of the host site, and provide little to no value to users.”

Google’s SearchLiaison confirmed in a tweet that the policy went into effect today.

He tweeted:

“It’ll be starting later today. While the policy began yesterday, the enforcement is really kicking off today.”

Some big-brand sites have recently removed sections of their sites that featured product reviews lacking evidence that the reviewer actually handled the products: no original product photos, no product measurements, and no testing results.

Read Google’s guidelines on Site Reputation Abuse.

Featured Image by Shutterstock/Lets Design Studio

Apple’s “Intelligent Search” Will Summarize Webpages via @sejournal, @martinibuster

A report based on independently verified accounts notes that Apple’s Safari 18 will come with an Intelligent Search feature that summarizes webpages in response to search queries. There may also be a new feature called Web Eraser that allows users to permanently remove text, images, and advertisements from webpages.

The Potential For Disruption By Apple AI Search

Apple has been gathering website data for years through its Internet crawler that uses the user agent, Applebot. The harvested data has in the past been used in the context of Siri and Apple’s Spotlight Suggestions feature.

Many in the search community have been aware of Applebot and have welcomed the prospect of a new search engine from Apple, but despite constant crawling, Apple has not released one. A reason may be that it’s become apparent that the best way to challenge Google Search is with a technology that replaces search engines altogether, much like how the Apple iPhone made digital cameras obsolete.

The latest news coming out about Safari 18 appears to confirm that supplanting Google is the strategy that Apple is pursuing.

Duane Forrester, formerly of Bing and now at Yext, commented on the potentially disruptive quality of Apple’s new technology:

“Intelligent Search could change how iOS consumers get, see and interact with content and answers. We are likely to see Apple taking a big step forward, into the consumer-accessible AI conversation which has been dominated by OpenAI, Microsoft, and Google et al to this point. Our phones are about to become AI-enabled and that could be ground zero for AI Agents. It’s going to impact and potentially change consumer behavior and be the gateway to new experiences.”

The approach Apple is taking has the potential to disrupt not just search engines but also the search optimization and publishing industries, which have been waiting years for an Apple search engine. But the extent of that disruption depends on how Apple implements its AI web search summarizer.

Webpage Summarization

Although news reports didn’t provide details as to how the new search result summaries will appear, it seems reasonable to speculate that Apple will provide attribution in the form of a link to websites so that users can click through to the website.

Duane Forrester speculated:

“Apple was just in court for Google’s anti-trust trial and likely isn’t keen to dive into “monopoly” waters themselves. My guess is that any “result” will have a source.”

This is what was reported:

“With the release of Safari 18, Apple is expected to introduce article summarization through a new Intelligent Search feature — meaning users will have the option to generate a brief summary of any webpage currently on screen.

Apple’s built-in AI software can analyze the keywords and phrases within a webpage or document and generate a short summary containing only the most important information.”

SEOs have been giddy about the prospect of an Apple search engine for years. It now appears that the Google Killer they’ve been waiting for could result in less traffic from search queries, though to what extent is impossible to tell at this point.

One search marketing expert mused in a private chat that if Intelligent Search summarizes more than it links out then that may signal it’s time to start selling off the domain names they’ve invested in.

On-Device Processing

An interesting feature of the text summarization is that the technology that creates the summary (called Ajax) resides on the mobile device itself. Ajax extracts keywords and entities, uses that data to identify the topic and form a loose summary of a webpage, and then turns that into a text summary for the user.

This is how the functionality is described:

“In analyzing texts, the software takes into account all relevant information available. It can recognize and classify entities such as companies, people, and locations. For instance, if a name appears at the top of a text, the software will likely recognize the name as belonging to the text’s author.”

Apple Also Plans A Web Eraser

As if an Apple search summarizer weren’t enough, Apple reportedly has a “Web Eraser” feature planned for Safari. Web Eraser removes content from webpages so that site visitors don’t have to look at it anymore. Advertising, videos, comments, suggested reading, and maybe even popups could be permanently blocked by the Web Eraser. Once a user “erases” a block of content from a webpage, that block stays erased for that site visitor on subsequent visits.

According to a report about the Apple Web Eraser:

“The feature is expected to build upon existing privacy features within Safari and will allow users to erase unwanted content from any webpage of their choosing. Users will have the option to erase banner ads, images, text or even entire page sections, all with relative ease.”

Technological Disruptions

It’s a natural response to experience anxiety in the face of change. For many, the dawn of AI search is their first experience of a major shift. But those of us who have been in search for 25+ years have experienced, and grown accustomed to, sudden and transformative changes that alter publishing and SEO. Like Duane, I tend to feel that Apple’s implementation of an AI search engine that summarizes websites will be disruptive, but not to the point that it harms websites. It’s in Apple’s self-interest not to disrupt the Internet to the point of extinction.

Featured Image by Shutterstock/frantic00

Google Connects Imported User Data To GA4 Audiences via @sejournal, @MattGSouthern

Google has streamlined the process for using imported user data to build audiences in Google Analytics 4 (GA4).

This eliminates the need for users to visit a website or app before joining designated audiences.

Google’s official announcement states:

“We’ve made it easier to use imported user data with audiences. Now, data you upload, joined via either user ID or client ID can be used to qualify users for your GA4 audiences right away, without them needing to visit your site or app first.”

Google illustrated the new capability with an example:

“Imagine you have a ‘Gold’ loyalty tier audience. By uploading your loyalty tier data, ‘Gold’ users will automatically be added to your GA4 audience without requiring further activity from them.”

Streamlining Audience Building

Google is tapping into GA4’s user data import functionality, which allows you to add external data like customer loyalty status, purchase histories, and lifetime values from CRM systems.

Before this change, uploaded user data informed analysis but didn’t automatically qualify users for audiences.

The new process provides a more seamless way to construct valuable audience segments using offline user insights.

A Google Analytics help document on user data import confirms:

“You can enhance user segmentation and remarketing audiences by importing user metadata that you store outside of Analytics.”

Requirements

Importing user data requires a unique User ID, Client ID (web), or App Instance ID (app) to map user records. Google cautions against uploading files “that include duplicate keys.”

Strict guidelines prohibit uploading “personally identifiable information with user-data import.” User IDs must be hashed.

Once uploaded and processed, the imported data becomes instantly available for audience qualification without additional actions. Subsequent uploads or data collection can overwrite values.
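As a rough sketch of the preparation step described above, the snippet below hashes raw CRM user IDs and writes a duplicate-free import file. This is a minimal illustration, not Google’s official tooling: the `loyalty_tier` column name and `CRM-…` IDs are hypothetical, and SHA-256 is shown as one common hashing choice.

```python
import csv
import hashlib
import io

def hash_user_id(user_id: str) -> str:
    # Normalize, then hash: uploaded User IDs must not be
    # personally identifiable, per Google's import guidelines.
    return hashlib.sha256(user_id.strip().lower().encode("utf-8")).hexdigest()

def build_import_csv(records: list[dict]) -> str:
    # Build a user-data import file keyed on the hashed user_id.
    # Skips duplicate keys, since Google cautions against them.
    seen = set()
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["user_id", "loyalty_tier"])
    writer.writeheader()
    for rec in records:
        key = hash_user_id(rec["user_id"])
        if key in seen:
            continue  # duplicate key, drop the row
        seen.add(key)
        writer.writerow({"user_id": key, "loyalty_tier": rec["loyalty_tier"]})
    return buf.getvalue()

crm_rows = [
    {"user_id": "CRM-1001", "loyalty_tier": "Gold"},
    {"user_id": "CRM-1001", "loyalty_tier": "Gold"},  # duplicate, skipped
    {"user_id": "CRM-1002", "loyalty_tier": "Silver"},
]
print(build_import_csv(crm_rows))
```

A periodic job that regenerates this file from the CRM would also cover the “subsequent uploads can overwrite values” behavior mentioned above.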

Why SEJ Cares

The change bridges the gap between an organization’s customer database and GA4 audience strategy.

Previously, uploaded offline user data informed analysis but didn’t directly build audiences. Building audiences depended on tracked site/app behavior separate from imported attributes like loyalty status.

This removes a limitation on using holistic customer insights for GA4 audiences, creating an opportunity for audience-centric use cases.

How It Helps

This update to GA4 can simplify your audience marketing workflows in the following ways.

Audience List Uploads & Portability

You can now translate offline customer lists and CRM segmentation into GA4 audience definitions, allowing audience portability across systems.

Dynamic Audience Refreshes

You can overwrite imported values through periodic data uploads based on the latest offline qualifications. This removes the need for complex script-based audience refreshes or manual list management.

Richer Audience Enrichment

Uploaded attributes provide an additional layer of demographic, interest, intent, and engagement indicators.

These can be layered into audience definitions alongside on-site and app behavior, enabling richer segmentation logic.

Looking Ahead

This update could be a difference-maker for brands that leverage promotions, content experiences, and messaging tailored to their offline customer stratification, like loyalty tiers.

While the ability to upload offline user attributes has existed, instantly aligning those attributes to GA4 audience definitions opens up new opportunities for richer audience segmentation, tailored marketing, and optimized customer experiences.


Featured Image: Piotr Swat/Shutterstock

Big Change To Google’s Product Structured Data Page via @sejournal, @martinibuster

Google revamped its product structured data documentation by splitting one comprehensive page into three pages that are more closely focused on their topics, providing an example of how to turn a page that’s too big into multiple topically relevant webpages.

Google Product Structured Data

Product structured data is essential for ecommerce and product review websites because it helps make them eligible for rich result listings in Google’s search engine results pages (SERPs). Correctly deployed, product structured data can contribute significantly to a website’s traffic and earnings.

Google’s restructuring of the official documentation gives the ecommerce community a lot to take in, but it also simplifies the increasingly large product structured data webpage.

What Changed?

The most notable change to the documentation is that the entire document has been split into three pages. The original document, Product Structured Data, was renamed to Introduction To Product Structured Data. The word count went from 4,808 words to only 667 words, with approximately 50% of the new document containing the same content. Aside from trivial changes, there is a brand-new section on the revamped page called Deciding Which Markup To Use, which serves as a jumping-off point to the two new pages.

Merchants, product review site publishers and SEOs now have three product structured data documents to read:

  1. Introduction To Product Structured Data
  2. Product Snippet Structured Data
  3. Merchant Listing Structured Data

In addition to the above changes, there’s a new blue-tinted callout box that draws attention to the Product Variant Structured Data page, replacing similar text that was buried in the document and easily overlooked.

Screenshot of Callout Box


Document Extensively Rewritten

There are instances where headings in the new documentation were rewritten to make clear what each section is about.

The new Introduction To Product Structured Data page now contains a brand-new section. The old section was called “Result types,” and the revamped section is called Deciding Which Markup To Use, a more descriptive heading.

This is the new content:

“Deciding which markup to use
There are two main classes of product structured data. Follow the requirements for the type that best suits your use case:

Product snippets: For product pages where people can’t directly purchase the product. This markup has more options for specifying review information, like pros and cons on an editorial product review page.

Merchant listings: For pages where customers can purchase products from you. This markup has more options for specifying detailed product information, like apparel sizing, shipping details, and return policy information.

Note that there is some overlap between these two features. In general, adding the required product information properties for merchant listings means that your product pages can also be eligible for product snippets. Both features have their own enhancements, so be sure to review both when deciding which markup makes sense in the context of your site (the more properties you can add, the more enhancements your page can be eligible for).”

First New Page: Product Snippets

In addition to the revamped introduction to product structured data, there is now a new standalone page focused on adding product structured data to become eligible for rich results about ratings, reviews, price, and product availability.

The title element for the new page is How To Add Product Snippet Structured Data. Excluding some of the structured data examples, the new product snippet page has about 2,500 words.

Much of the content on this new page isn’t new. The similarities between a section of the old mega-page and this new standalone page indicate that Google split this section off the old documentation and turned it into its own page.
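Google’s page includes full JSON-LD examples; as a rough illustration of the product snippet class described above, here is a minimal sketch. The product, reviewer, and rating values are made up, and the property selection (including the pros-style `positiveNotes` list for editorial reviews) is illustrative rather than copied from Google’s documentation:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Running Shoe",
  "review": {
    "@type": "Review",
    "author": { "@type": "Person", "name": "Jane Editor" },
    "reviewRating": { "@type": "Rating", "ratingValue": 4.4, "bestRating": 5 },
    "positiveNotes": {
      "@type": "ItemList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Lightweight" },
        { "@type": "ListItem", "position": 2, "name": "Durable outsole" }
      ]
    }
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4.4,
    "reviewCount": 89
  }
}
```

Note there is no purchase information here; per Google’s split, that is the territory of merchant listing markup.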

Second New Page: Merchant Listing Structured Data

The second new page is dedicated to the product structured data that’s specific to merchant listings, critical for surfacing products in the shopping-related rich results which Google calls Merchant Listing Experiences.

Apart from a single illustration, the dedicated page for Merchant Listing Structured Data has no information at all about what Merchant Listing Experiences are and how they look in the search results. The second paragraph of the new page encourages a reader to visit the Introduction To Product Structured Data webpage to learn more about what the Merchant Listing Experiences rich results look like.

An argument could be made that the Merchant Listing Structured Data page is the natural home for context about Merchant Listing Experiences. But someone at Google decided that this page needs to be 100% focused on a single topic: structured data.

The new webpage comes in at roughly 1,700 words.
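For contrast with the product snippet class, here is a minimal sketch of merchant listing markup for a page where the product can be purchased. The values are made up, and the selection of commerce-oriented properties (offer price, shipping details, return policy) is illustrative rather than copied from Google’s page:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Trail Running Shoe",
  "image": "https://www.example.com/shoe.jpg",
  "offers": {
    "@type": "Offer",
    "price": 89.99,
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "shippingDetails": {
      "@type": "OfferShippingDetails",
      "shippingRate": {
        "@type": "MonetaryAmount",
        "value": 4.99,
        "currency": "USD"
      },
      "shippingDestination": {
        "@type": "DefinedRegion",
        "addressCountry": "US"
      }
    },
    "hasMerchantReturnPolicy": {
      "@type": "MerchantReturnPolicy",
      "applicableCountry": "US",
      "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
      "merchantReturnDays": 30
    }
  }
}
```

As Google’s “Deciding which markup to use” section notes, a page carrying the required merchant listing properties can generally also qualify for product snippets.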

Editorial Decisions Based On User Needs

Rather than maintain one giant, comprehensive page about the topic, the decision was made to split it into its constituent subtopics, resulting in three pages that are more tightly focused and presumably will not only be better for users but will also rank better.

Read The Documentation

Review the original version of the documentation:

Internet Archive: Product (Product, Review, Offer) structured data

Read the new documentation:

Introduction to Product structured data

Product snippet (Product, Review, Offer) structured data

Merchant listing (Product, Offer) structured data

Featured Image by Shutterstock/achinthamb

Google Performance Max For Marketplaces: Advertise Without A Website via @sejournal, @MattGSouthern

Google has launched a new advertising program called Performance Max for Marketplaces, making it easier for sellers on major e-commerce platforms to promote their products across Google’s advertising channels.

The key draw? Sellers no longer need a website or a Google Merchant Center account to start.

The official Google Ads Help documentation states:

“Performance Max for Marketplaces helps you reach more customers and drive more sales of your products using a marketplace. After you connect your Google Ads account to the marketplace, you can create Performance Max campaigns that send shoppers to your products there.”

The move acknowledges the growing importance of online marketplaces like Amazon in product discovery.

For sellers already listing products on marketplaces, Google is providing a way to tap into its advertising ecosystem, including Search, Shopping, YouTube, Gmail, and more.

As ecommerce marketer Mike Ryan pointed out on LinkedIn:

“Polls vary, but a recent single-choice survey showed that 50% of consumers start product searches on Amazon, while a multiple-choice survey showed that 66% of consumers start on Amazon.”

The source for his data is a 2023 report by PowerReviews.

Getting Started

To use Performance Max for Marketplaces, sellers need an active account on a participating marketplace platform and a Google Ads account.

Google has yet to disclose which marketplaces are included. We contacted Google to request a list and will update this article when we receive it.

Once the accounts are linked, sellers can launch Performance Max campaigns, drawing product data directly from the marketplace’s catalog.

Google’s documentation states:

“You don’t need to have your own website or Google Merchant Center account. You can use your existing marketplace product data to create ads with product information, prices, and images.”

Conversion tracking for sales is handled by the marketplace, with sales of the advertiser’s products being attributed to their Google campaigns.

While details on Performance Max For Marketplaces are still emerging, Google is providing information when asked directly.

Navah Hopkins states on LinkedIn she received these additional details:

“I finally got a straight answer from Google that we DO need a Merchant Center for this, we just don’t need one to start with.”

Differences From Standard Performance Max

These are the key differences from regular Performance Max campaigns:

  • No URL expansion, automatically-created assets, or video assets
  • No cross-account conversion tracking or new customer acquisition modeling
  • No audience segmentation reporting

Why SEJ Cares

Performance Max for Marketplaces represents a new way to use Google advertising while operating on third-party platforms.

Getting products displayed across Google’s ecosystem without the overhead of a standalone ecommerce presence is a significant opportunity.

How This Can Help You

Through Google’s ecosystem, merchants have new ways to connect with customers.

Performance Max for Marketplaces is a potential difference maker for smaller retailers that have struggled to gain traction through Google’s standard shopping campaigns.

Established merchants invested in Google Ads may find the program opens new merchandising opportunities. By making an entire marketplace catalog available for ad serving, sellers could uncover previously undiscovered pockets of demand.

The success of Performance Max for Marketplaces will depend on its execution and adoption by major players like Amazon and Walmart.


Featured Image: Tada Images/Shutterstock