Meta Follows YouTube In Crackdown On Unoriginal Content via @sejournal, @MattGSouthern

Meta announced that it will implement stronger measures against accounts sharing “unoriginal” content on Facebook.

This marks the second major platform policy update in days, following YouTube’s similar announcement about mass-produced and repetitive content.

Meta revealed it has removed approximately 10 million profiles impersonating large content creators, and taken action against 500,000 accounts involved in “spammy behavior or fake engagement”.

A Platform-Wide Movement Against Content Farms

Meta’s announcement closely follows YouTube’s monetization update, which clarified its stance on “inauthentic” content.

Both platforms are addressing the growing problem of accounts profiting from reposting others’ work without permission or meaningful additions.

According to Meta, accounts that repeatedly reuse someone else’s videos, photos, or text posts will lose access to Facebook’s monetization programs and face reduced visibility across all content.

Facebook is also testing a system that adds links on duplicate videos to direct viewers to the original creator.

Here’s an example of what that will look like on a reposted video:

Screenshot from: creators.facebook.com/blog/combating-unoriginal-content, July 2025.

Meta stated in its official blog post:

“We believe that creators should be celebrated for their unique voices and perspectives, not drowned out by copycats and impersonators.”

What Counts As Unoriginal Content?

Both Meta and YouTube distinguish between unoriginal content and transformative content, like reaction videos or commentary.

Meta emphasizes that content becomes problematic when creators repost others’ material without permission or meaningful enhancements, such as editing or voiceover.

YouTube creator liaison Rene Ritchie offered a similar clarification ahead of the platform’s update, stating:

“This is a minor update to YouTube’s long-standing YPP policies to help better identify when content is mass-produced or repetitive”.

How AI & Automation Factor In

Neither platform bans AI-generated content outright. However, their recent updates appear designed to address a wave of low-quality, automated material that offers little value to viewers.

YouTube affirms that creators may use AI tools as long as the final product includes original commentary or educational value, with proper disclosure for synthetic content.

Meta’s guidelines similarly caution against simply “stitching together clips” or relying on recycled content, and encourage “authentic storytelling.”

These concerns implicitly target AI-assisted compilations that lack originality.

Potential Impact

For content creators, the updates from Meta and YouTube reinforce the importance of originality and creative input.

Those who produce reaction videos, commentary, or curated media with meaningful additions are unlikely to be affected. They may even benefit as spammy accounts lose visibility.

On the other hand, accounts that rely on reposting others’ content with minimal editing or variation could see reduced reach and loss of monetization.

To support creators, Meta introduced new post-level insights in its Professional Dashboard and a tool to check if a page is at risk of distribution or monetization penalties. YouTube is similarly offering guidance through its Creator Liaison and support channels.

Best Practices For Staying Compliant

To maintain monetization eligibility, Meta recommends:

  • Posting primarily original content filmed or created by the user.
  • Making meaningful enhancements such as editing, narration, or commentary when using third-party content.
  • Prioritizing storytelling over short, low-effort posts.
  • Avoiding recycled content with watermarks or low production value.
  • Writing high-quality captions with minimal hashtags and capitalization.

Looking Ahead

Meta and YouTube’s updates indicate a wider industry move against unoriginal content, especially AI-generated “slop” and content farms.

While the enforcement rollout may not affect every creator equally, these moves indicate a shift in priorities. Originality and value-added content are becoming the new standard.

The era of effortless monetization through reposting is being phased out. Moving forward, success on platforms like Facebook and YouTube will depend on creative input, storytelling, and a commitment to original expression.


Featured Image: Novikov Aleksey/Shutterstock

Google’s New Graph Foundation Model Catches Spam Up To 40x Better via @sejournal, @martinibuster

Google published details of a new kind of AI based on graphs, called a Graph Foundation Model (GFM), that generalizes to previously unseen graphs and delivers a three to forty times boost in precision over previous methods. It has been successfully tested in scaled applications such as spam detection in ads.

Google’s announcement describes the technology as expanding the boundaries of what has been possible until now:

“Today, we explore the possibility of designing a single model that can excel on interconnected relational tables and at the same time generalize to any arbitrary set of tables, features, and tasks without additional training. We are excited to share our recent progress on developing such graph foundation models (GFM) that push the frontiers of graph learning and tabular ML well beyond standard baselines.”

Google's Graph Foundation Model shows 3-40 times performance improvement in precision

Graph Neural Networks Vs. Graph Foundation Models

Graphs are representations of data and the relationships between them. The objects are called nodes, and the connections between them are called edges. In SEO, the most familiar type of graph is arguably the link graph, a map of the entire web built from the links that connect one web page to another.
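To make the node-and-edge idea concrete, here is a minimal sketch (an illustration only, not anything Google uses) of a tiny link graph built with the Python networkx library. The page names are hypothetical; pages are nodes, links are directed edges, and PageRank is computed over the structure:

```python
import networkx as nx

# Build a tiny, hypothetical link graph: pages are nodes, links are directed edges.
link_graph = nx.DiGraph()
link_graph.add_edges_from([
    ("home", "shoes-guide"),
    ("home", "boots-guide"),
    ("shoes-guide", "boots-guide"),
    ("blog-post", "shoes-guide"),
    ("blog-post", "home"),
])

# PageRank is a classic graph algorithm that scores nodes by link structure.
scores = nx.pagerank(link_graph, alpha=0.85)
for page, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{page}: {score:.3f}")
```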

Current technology uses Graph Neural Networks (GNNs) to represent data like web page content and can be used to identify the topic of a web page.

A Google Research blog post about GNNs explains their importance:

“Graph neural networks, or GNNs for short, have emerged as a powerful technique to leverage both the graph’s connectivity (as in the older algorithms DeepWalk and Node2Vec) and the input features on the various nodes and edges. GNNs can make predictions for graphs as a whole (Does this molecule react in a certain way?), for individual nodes (What’s the topic of this document, given its citations?)…

Apart from making predictions about graphs, GNNs are a powerful tool used to bridge the chasm to more typical neural network use cases. They encode a graph’s discrete, relational information in a continuous way so that it can be included naturally in another deep learning system.”

The downside to GNNs is that they are tethered to the graph on which they were trained and can’t be used on a different kind of graph. To work with a different graph, Google has to train another model specifically for that graph.

To make an analogy, it would be like having to train a new generative AI model on French-language documents just to get it to work in French. LLMs don’t have that limitation because they can generalize across languages, but models that work with graphs traditionally can’t generalize in that way. That is the problem this invention solves: creating a model that generalizes to other graphs without having to be trained on them first.

The breakthrough Google announced is that, with the new Graph Foundation Models, it can now train a model that generalizes across new graphs it hasn’t been trained on and understands the patterns and connections within those graphs. And it can do it three to forty times more precisely.

Announcement But No Research Paper

Google’s announcement does not link to a research paper. It has been variously reported that Google has decided to publish fewer research papers, and this may be an example of that policy change. Is it because this innovation is significant enough that Google wants to keep it as a competitive advantage?

How Graph Foundation Models Work

In a conventional graph, let’s say a graph of the Internet, web pages are the nodes. The links between the nodes (web pages) are called the edges. In that kind of graph, you can see similarities between pages because the pages about a specific topic tend to link to other pages about the same specific topic.

In very simple terms, a Graph Foundation Model turns every row in every table into a node and connects related nodes based on the relationships in the tables. The result is a single large graph that the model uses to learn from existing data and make predictions (like identifying spam) on new data.
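As a rough sketch of that idea (not the GFM itself), the snippet below converts two small hypothetical tables into a single typed graph using networkx: each row becomes a node labeled with its table name, and each foreign-key reference becomes an edge. It shows only the data-preparation step described in the announcement, not the model.

```python
import networkx as nx

# Two hypothetical relational tables; "customer_id" is a foreign key from orders to customers.
customers = [
    {"customer_id": 1, "name": "Alice"},
    {"customer_id": 2, "name": "Bob"},
]
orders = [
    {"order_id": 10, "customer_id": 1, "amount": 25.0},
    {"order_id": 11, "customer_id": 2, "amount": 99.0},
    {"order_id": 12, "customer_id": 1, "amount": 5.0},
]

graph = nx.Graph()

# Each row becomes a node; the table name becomes the node type,
# and the remaining columns become node features.
for row in customers:
    graph.add_node(("customers", row["customer_id"]), node_type="customers", **row)
for row in orders:
    graph.add_node(("orders", row["order_id"]), node_type="orders", **row)

# Each foreign-key reference becomes a typed edge between the two rows.
for row in orders:
    graph.add_edge(
        ("orders", row["order_id"]),
        ("customers", row["customer_id"]),
        edge_type="orders->customers",
    )

print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")
```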

Screenshot Of Five Tables

Image by Google

Transforming Tables Into A Single Graph

The announcement says this about the following images, which illustrate the process:

“Data preparation consists of transforming tables into a single graph, where each row of a table becomes a node of the respective node type, and foreign key columns become edges between the nodes. Connections between five tables shown become edges in the resulting graph.”

Screenshot Of Tables Converted To Edges

Image by Google

What makes this new model exceptional is that the process of creating it is “straightforward” and it scales. The part about scaling is important because it means that the invention is able to work across Google’s massive infrastructure.

“We argue that leveraging the connectivity structure between tables is key for effective ML algorithms and better downstream performance, even when tabular feature data (e.g., price, size, category) is sparse or noisy. To this end, the only data preparation step consists of transforming a collection of tables into a single heterogeneous graph.

The process is rather straightforward and can be executed at scale: each table becomes a unique node type and each row in a table becomes a node. For each row in a table, its foreign key relations become typed edges to respective nodes from other tables while the rest of the columns are treated as node features (typically, with numerical or categorical values). Optionally, we can also keep temporal information as node or edge features.”

Tests Are Successful

Google’s announcement says the model was tested on identifying spam in Google Ads, a difficult task because that system uses dozens of large graphs. Current systems are unable to make connections between unrelated graphs and therefore miss important context.

Google’s new Graph Foundation Model was able to make the connections between all the graphs and improved performance.

The announcement described the achievement:

“We observe a significant performance boost compared to the best tuned single-table baselines. Depending on the downstream task, GFM brings 3x – 40x gains in average precision, which indicates that the graph structure in relational tables provides a crucial signal to be leveraged by ML models.”

Is Google Using This System?

It’s notable that Google successfully tested the system on spam detection in Google Ads and reported upsides with no downsides, which suggests it can be used in a live environment for a variety of real-world tasks. Because it’s a flexible model, it could also be applied to other tasks that involve multiple graphs, from identifying content topics to identifying link spam.

Normally, when something falls short, research papers and announcements say that it points the way for future work, but that’s not how this invention is presented. It’s presented as a success, and the announcement ends with a statement that these results can be further improved, meaning it can get even better than these already spectacular results.

“These results can be further improved by additional scaling and diverse training data collection together with a deeper theoretical understanding of generalization.”

Read Google’s announcement:

Graph foundation models for relational data

Featured Image by Shutterstock/SidorArt

Nearly 8 In 10 Americans Use ChatGPT For Search, Adobe Finds via @sejournal, @MattGSouthern

A new report from Adobe states that 77% of Americans who use ChatGPT treat it as a search engine.

Among those surveyed, nearly one in four prefer ChatGPT over Google for discovery, indicating a potential shift in user behavior.

Adobe surveyed 800 consumers and 200 marketers or small business owners in the U.S. All participants self-reported using ChatGPT as a search engine.

ChatGPT Usage Spans All Age Groups

According to the findings, usage is strong across demographics:

  • Gen X: 80%
  • Gen Z: 77%
  • Millennials: 75%
  • Baby Boomers: 74%

Notably, 28% of Gen Z respondents say they start their search journey with ChatGPT. This suggests younger users may be leading the shift in default discovery behavior.

Trust In AI Search Is Rising

Adobe’s report indicates growing trust in conversational AI. Three in ten respondents say they trust ChatGPT more than traditional search engines.

That trust appears to influence behavior, with 36% reporting they’ve discovered a new product or brand through ChatGPT. Among Gen Z, that figure rises to 47%.

The top use cases cited include:

  • Everyday questions (55%)
  • Creative tasks and brainstorming (53%)
  • Financial advice (21%)
  • Online shopping (13%)

Why Users Choose AI Over Traditional Search

The most common reason people use ChatGPT for search is its ability to quickly summarize complex topics (54%). Additionally, 33% said it offers faster answers with fewer clicks than Google.

Respondents also report that AI results feel more personalized. A majority (81%) prefer ChatGPT for open-ended, creative questions, while 77% find its responses more tailored than traditional search results.

Marketers Shift Focus To AI Visibility

Adobe’s survey suggests businesses are already responding to the shift. Nearly half of marketers and business owners (47%) say they use ChatGPT for marketing, primarily to create product descriptions, social media copy, and blog content.

Looking ahead, two-thirds plan to increase their investment in “AI visibility,” with 76% saying it’s essential for their brand to appear in ChatGPT results in 2025.

What Works In AI-Driven Discovery

To improve visibility in conversational AI results, marketers report the best-performing content types are:

  • Data-driven articles (57%)
  • How-to guides (51%)

These formats may align well with AI’s tendency to surface factual, instructive, and referenceable information.

Why This Matters

Adobe’s findings highlight the need for marketers to adapt strategies as users turn to AI tools for product discovery.

Instead of replacing SEO, AI visibility can complement it. Brands tailoring content for conversational search may gain an edge in reaching audiences through personalized pathways.


Featured Image: Roman Samborskyi/Shutterstock

Malware Discovered In Gravity Forms WordPress Plugin via @sejournal, @martinibuster

WordPress security company Patchstack published an advisory about a serious vulnerability in Gravity Forms caused by a supply chain attack. Gravity Forms responded immediately and released an update to fix the issue.

Supply Chain Attack

Patchstack has been monitoring an attack on a WordPress plugin in which the attackers uploaded an infected version of the plugin directly to the publisher’s repository and fetched other files from a domain name similar to the official domain. This, in turn, led to a serious compromise of websites that used that plugin.

A similar attack was observed in Gravity Forms and was immediately addressed by the publisher. Malicious code had been injected into Gravity Forms (specifically in gravityforms/common.php) by the attackers. The code caused the plugin, when installed, to make HTTP POST requests to the rogue domain gravityapi.org, which was registered just days before the attack and controlled by the attacker.

The compromised plugin sent detailed site and server information to the attacker’s server and enabled remote code execution on the infected sites. In the context of a WordPress plugin, a remote code execution (RCE) vulnerability occurs when an attacker can run malicious code on a targeted website from a remote location.

Patchstack explained the extent of the vulnerability:

“…it can perform multiple processes:

  • Upload an arbitrary file to the server.
  • List all of the user accounts on the WordPress site (ID, username, email, display name).
  • Delete any user accounts on the WordPress site.
  • Perform arbitrary file and directory listings on the WordPress server.”

That last one means that the attacker can view any file, regardless of permissions, including the wp-config.php file, which contains database credentials.
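For administrators who want a quick local check, here is a minimal sketch that scans the plugin directory for the rogue domain string Patchstack reported. The directory path is an assumption based on a default WordPress install, this is not an official detection tool, and a clean result does not guarantee a site is unaffected:

```python
from pathlib import Path

# Hypothetical default plugin path; adjust for your install.
PLUGIN_DIR = Path("wp-content/plugins/gravityforms")
INDICATOR = b"gravityapi.org"  # rogue domain reported by Patchstack

def scan_for_indicator(plugin_dir: Path, indicator: bytes) -> list[Path]:
    """Return PHP files under plugin_dir that contain the indicator string."""
    hits = []
    for php_file in plugin_dir.rglob("*.php"):
        try:
            if indicator in php_file.read_bytes():
                hits.append(php_file)
        except OSError:
            continue  # skip unreadable files
    return hits

if __name__ == "__main__":
    matches = scan_for_indicator(PLUGIN_DIR, INDICATOR)
    if matches:
        print("Possible indicator of compromise found in:")
        for path in matches:
            print(" -", path)
    else:
        print("No occurrences of the indicator string were found.")
```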

Gravity Forms Responds

RocketGenius, the publisher of Gravity Forms, took immediate action and uploaded a fixed version of the plugin the same day. The domain registrar, Namecheap, suspended the rogue typosquatted domain, which effectively blocked compromised websites from contacting the attackers.

Gravity Forms has released an updated version of the plugin, 2.9.13. Users should consider updating to the latest version.

Read more at Patchstack:

Malware Found in Official Gravity Forms Plugin Indicating Supply Chain Breach

Featured Image by Shutterstock/Warm_Tail

Google Explains How To Approach Content For SEO via @sejournal, @martinibuster

Google’s John Mueller and Martin Splitt discussed how to approach content for achieving business goals and the wisdom of setting expectations, and observed that it may not matter whether a site is optimized if the content is already achieving its intended results.

Getting The Content Right

Anyone can write, but it’s hard to communicate in a way that meets the audience’s needs. One thing SEOs often get wrong is content, which remains the most important ranking factor in modern search engines.

A common mistake is publishing entire sentences that waste the reader’s time. I think that happens when writers are trying to meet an arbitrary word count or to provide context for the high-volume keywords they want to rank for.

Martin Splitt started the discussion by asking how to go about writing content, and shared his own experience of getting it wrong because he was writing for himself and not for what the audience needed to read.

Splitt shared:

“…how would I know how to go about content? Because now I know who I want to address and probably also roughly what I want to do. But, I mean, that’s a whole different skillset, right? That’s like copywriting and probably some researching and maybe some lettering and editing, and wow. That’s a lot. I love to write. I love to write.

…But I love having a technical writer on the team. Lizzi is a tremendous help with anything that is writing. I honestly thought I’m a good, reasonably good writer. And then Lizzi came and asked three questions on a piece of documentation that I thought was almost perfect.

I basically started questioning the foundations of the universe because I was like, “Okay, no, this document doesn’t even make sense. I haven’t answered the fundamental questions that I need to answer before I can even start writing. I’ve written like three pages.

Holy moly, that is a skill that is an amazingly tricky skill to acquire, I think. How do I start writing? Just write what I think I should be writing, I guess.”

Writing is easy to do, but difficult to do well. I’ve seen many sites that have the SEO fundamentals in place, but are undermined by the content. Splitt’s experience highlights the value in getting a second opinion on content.

Site Visitors Are Your Inspiration

Mueller and Splitt next moved on to the topic of what publishers and SEOs should write about. Their answer is to focus on what users want, encouraging publishers to do something as simple as asking their readers or customers.

Mueller observed:

“I think, if you have absolutely no inspiration, one approach could be to ask your existing customers and just ask them like:

  • How did you find me?
  • What were you looking for?
  • Where were you looking?
  • Were you just looking on a map? What is it that brought you here?

This is something that you can ask anyone, especially if you have a physical business.

..It’s pretty easy to just ask this randomly without scaring people away. That’s kind of one aspect I would do and try to build up this collection of ‘these are different searches that people have done in different places, maybe on different systems, and I want to make sure I’m kind of visible there.’”

Set Reasonable Expectations

John Mueller and Martin Splitt next provided a reality check on the keyword phrases that publishers and SEOs choose to optimize for. It’s not always about the difficulty of the phrases; it’s also about how relevant they are to the website.

Mueller commented about what to do with the keyword phrases that are chosen for targeting:

“And then I would take those and just try them out and see what comes up, and think about how reasonable it would be for one of your pages, perhaps to show up there and how reasonable it can be, I think is something where you have to be brutally honest with yourself, because it’s sometimes tempting to say, “Well, I would like to appear first for the search bookstore on the internet.” Probably that’s not going to happen. I mean, who knows? But there’s a lot of competition for some of these terms.

But, if you’re talking about someone searching for bookstores or bookstores in Zurich or bookstores on Maps or something like that, then that’s a lot more well defined and a lot easier for you to look at and see, what are other people doing there? Maybe my pages are already there. And, based on that, you can try to build out, what is it that I need to at least mention on my pages.”

Mueller followed up by downplaying whether a site is search optimized or not, saying that what’s important is whether the site is performing as intended. Whether or not it’s properly optimized doesn’t matter if it’s already doing well as it is. Some may argue that the site could be doing better, but that’s outside the context of what Mueller was commenting on: a business owner who is satisfied with the performance of the site.

Mueller observed:

“I mean, it all depends on how serious you take your goal, right? If you’re like a small local business you’re saying, ‘Well, I have a website and I hear I should make it SEO, but I don’t really care.’ Then it’s like do whatever you want kind of thing. If you have enough business and you’re happy. There’s no one to judge you to say, “Your website is not SEO optimized.”

Listen to Episode 95 of the Search Off The Record podcast at about the ten-minute mark.

Featured Image by Shutterstock/Krakenimages.com

Google’s Advice On Hiring An SEO And Red Flags To Watch For via @sejournal, @martinibuster

Google’s Search Off The Record podcast discussed when a business should hire an SEO consultant and what success metrics should look like. They also talked about a red flag to watch for when considering a search marketer.

Hire An SEO When It Becomes Time Consuming

Martin Splitt started the conversation off by asking at what point a business should hire an SEO:

“…I know people are hiring agencies and SEO experts. When is the point where you think an expert or an agency should come in? What’s the bits and pieces that are not as easy to do while I do my business that I should have an expert for?”

John replied that there is no single criterion or line to cross at which point a business should hire a consultant. He did, however, point out that there comes a point where doing SEO is time consuming and takes a business person away from the tasks directly related to running their business. That’s the point at which hiring an SEO consultant makes sense.

He said:

“Yeah, I don’t know if there’s a one-size-fits-all answer there because it’s a bit like asking, when should I get help for marketing, especially for a small business.

You do everything yourself. At some point, you’re like, ‘Oh, I really hate bookkeeping. I’m going to hire a bookkeeper.’ At that point where you’re like, ‘Well, I don’t appreciate doing all of this work or I don’t have time for it, but I know it has to be done.’ That’s probably the point where you say, ‘Well, okay, I will hire someone for this.’ “

Should SEO Have Measurable Results?

The next factor they discussed is the measurability of results. In my more than twenty-five years of working in SEO, one of the ways I’ve seen low-quality SEOs consistently measure their results is by the number of queries a client site ranks for. Low-quality SEOs charge a monthly retainer and generate a report of all the queries the site has ranked for in the previous months, including garbage nonsense queries.

A common metric SEOs use to gauge success is ranking positions and traffic. Those metrics are a little better, and most SEOs agree that they make sense as solid metrics.

But those metrics don’t capture the true success of SEO because those ranking positions could be for low-quality search queries that don’t result in the kind of traffic that converts to leads, sales, affiliate earnings or ad clicks.

Arguably, the most important metric any business should use to gauge the effect of what was done for SEO is how much more revenue is being generated. Keyword rankings and traffic are important metrics to measure, but the most important metric is ultimately the business goal.

Google’s John Mueller appears to agree, as he cites revenue and the business result as key measures of whether the SEO is working.

He explained:

“I think, for in SEO, it kind of makes sense when you realize there’s concrete value in working on SEO for your website, where there’s some business result that comes out of it where you can actually measurably say, ‘When I started doing SEO for my website, I made so much more money’ or whatever it is that goal is that you care about, and ‘I’m happy to invest a portion of that into hiring someone to do SEO.’

That’s one way I would look at it, where if you can measure in one way or another the effects of the SEO work, then it’s easier to say, ‘Well, I will invest this much into having someone else do that for me.’”

There is a bit of a problem with measuring the effects of SEO: sales or leads from organic search cannot always be directly attributed. People who are obsessed with data-driven decisions will be disappointed, because it’s not always possible to trace a lead back to an organic search. For one thing, Google hides referral data from the search results. Unlike PPC, where you can track a lead from an ad click to the sale, you can’t do that with organic search.

So if you’re using increased sales or leads as a metric, you’ll have to at least separate out the earnings attributable to paid search, then guesstimate the rest. Not everything can be data-driven.

Hire Someone With Experience

Another thing Mueller and Splitt recommended was to hire someone who has actual experience with SEO. There are many qualifying factors that can be added, including experience monetizing their own websites, the ability to interpret HTML code (which is helpful for identifying technical reasons for ranking problems), and endorsements and testimonials. A red flag, in my opinion, is hiring someone who approached you through a cold call.

John Mueller observed:

“Someone else, ideally, would be someone who has more experience doing SEO. Because, as a small business owner, you have like 500 hats to wear, and you probably can figure out a little bit about each of these things, but understanding all of the details, that’s sometimes challenging.”

Martin agreed:

“Okay. So there’s no one-size-fits-all answer for this one, but you have to find that spot for yourself whenever it makes sense. All right okay. Fair.”

Red Flag About Some SEOs

Up to this point, both Mueller and Splitt avoided cautioning about red flags to watch for when hiring an SEO. Here, they segued into the topic of what to avoid, advising caution about search marketers who guarantee results.

The reason to avoid these kinds of search marketers is that search rankings depend on a wide range of factors that are not under an SEO’s control. The most an SEO can do is align a site to best practices and promote the site. After that, there are external factors, such as competitors, that cannot be influenced. Most importantly, Google is a black box system: you can see what goes in, you can observe what comes out (the search results), but what happens in between is hidden. All search ranking factors, like external signals of trustworthiness, have an unclear influence on the search results.

Here’s what Mueller said:

“One of the things I would watch out for is, if an SEO makes any promises with regards to ranking or traffic from Search, that’s usually a red flag, because a lot of things around SEO you can’t promise ahead of time. And, if someone says, “I’m an expert. I promise you will rank first for these five words.” They can’t do that. They can’t manually go into Google’s systems and tweak the dials and change the rankings.”

Listen to Google’s Search Off The Record podcast.

Featured Image by Shutterstock/Peshkova

Google Clarifies Structured Data Rules For Returns & Loyalty Programs via @sejournal, @MattGSouthern

Google has updated its structured data documentation to clarify how merchants should implement markup for return policies and loyalty programs.

The updates aim to reduce confusion and ensure compatibility with Google Search features.

Key Changes In Return Policy Markup

The updated documentation clarifies that only a limited subset of return policy data is supported at the product level.

Google now explicitly states that comprehensive return policies must be defined using the MerchantReturnPolicy type under the Organization markup. This ensures a consistent policy across the full catalog.

In contrast, product-level return policies, defined under Offer, should be used only for exceptions and support fewer properties.

Google explains in its return policy documentation:

“Product-level return policies support only a subset of the properties available for merchant-level return policies.”
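For illustration, here is a minimal sketch (written in Python, emitting JSON-LD) of what an organization-level return policy could look like. The values are hypothetical and the property names follow schema.org’s MerchantReturnPolicy type; verify them against Google’s current documentation before deploying:

```python
import json

# Hypothetical organization-level return policy expressed as JSON-LD.
# Property names follow schema.org's MerchantReturnPolicy type; check values
# against Google's current documentation before use.
org_markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Store",
    "url": "https://www.example.com",
    "hasMerchantReturnPolicy": {
        "@type": "MerchantReturnPolicy",
        "applicableCountry": "US",
        "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
        "merchantReturnDays": 30,
        "returnMethod": "https://schema.org/ReturnByMail",
        "returnFees": "https://schema.org/FreeReturn",
    },
}

print(json.dumps(org_markup, indent=2))
```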

Loyalty Program Markup Must Be Separate

For loyalty programs, Google now emphasizes that the MemberProgram structured data must be defined under the Organization markup, either on a separate page or in Merchant Center.

While loyalty benefits like member pricing and points can still be referenced at the product level via UnitPriceSpecification, the program structure itself must be maintained separately.

Google notes in the loyalty program documentation:

“To specify the loyalty benefits… separately add UnitPriceSpecification markup under your Offer structured data markup.”
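Here is a hedged sketch of that split: the loyalty program sits under Organization markup, while member pricing is referenced at the offer level via UnitPriceSpecification and validForMemberTier. The tier details and the hasMemberProgram and hasTiers property names are assumptions drawn from schema.org’s MemberProgram type, so confirm them against Google’s documentation:

```python
import json

# Sketch of the split Google describes: the loyalty program lives under
# Organization markup, while member pricing is referenced at the offer level.
# Tier details are hypothetical; confirm property names in Google's docs.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Store",
    "hasMemberProgram": {
        "@type": "MemberProgram",
        "name": "Example Rewards",
        "hasTiers": [{"@type": "MemberProgramTier", "@id": "#gold", "name": "Gold"}],
    },
}

product_offer = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "offers": {
        "@type": "Offer",
        "price": 20.00,
        "priceCurrency": "USD",
        "priceSpecification": {
            "@type": "UnitPriceSpecification",
            "price": 18.00,
            "priceCurrency": "USD",
            "validForMemberTier": {"@id": "#gold"},
        },
    },
}

print(json.dumps(organization, indent=2))
print(json.dumps(product_offer, indent=2))
```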

What’s Not Supported

Google’s documentation now states that shipping discounts and extended return windows offered as loyalty perks aren’t supported in structured data.

While merchants may still offer these benefits, they won’t be eligible for enhanced display in Google Search results.

This is particularly relevant for businesses that advertise such benefits prominently within loyalty programs.

Why It Matters

The changes don’t introduce new capabilities, but they clarify implementation rules that have been inconsistently followed or interpreted.

Merchants relying on offer-level markup for return policies or embedding loyalty programs directly in product offers may need to restructure their data.

Here are some next steps to consider:

  • Audit existing markup to ensure return policies and loyalty programs are defined at the correct levels.
  • Use product-level return policies only when needed, such as for exceptions.
  • Separate loyalty program structure from loyalty benefits, using MemberProgram under Organization, and validForMemberTier under Offer.

Staying compliant with these updated guidelines ensures eligibility for structured data features in Google Search and Shopping.


Featured Image: Roman Samborskyi/Shutterstock

Google Explains How Long It Takes For SEO To Work via @sejournal, @martinibuster

Google’s Martin Splitt and John Mueller discussed how long it takes for SEO to have an effect. Mueller explained that there are different levels of optimization, and that some have a more immediate effect than other, more complex changes.

Visible Changes From SEO

Some SEOs like to make blanket statements that SEO is all about links. Others boast that their SEO work can have a dramatic effect in relatively little time. It turns out that those kinds of statements really depend on the actual work that was done.

Google’s John Mueller said that a site going from virtually zero optimization to some basic optimization may see near-immediate ranking changes in Google.

John Mueller started this part of the conversation:

“I guess another question that I sometimes hear with regards to hiring an SEO is, how long does it take for them to make visible changes?”

Martin Splitt responded:

“Yeah. How long does it take? I’m pretty sure it’s not instant. If you say it takes like a week or a couple of weeks to pick things up, is that the reasonable time horizon or is it longer?”

John answered with the old “it depends” line, which is kind of overdone. But in this case it really does depend on multiple factors related to the scale of the work being done, which in turn influences how long it will take Google to index the changes and recalculate rankings. He said that if it’s something simple, it won’t take Google much time, but if there are a lot of changes, it may take significantly longer.

John’s explanation:

“I think, to speak in SEO lingo, it depends. Some changes are easy to pick up quickly, like simple text changes on a page. They just have to be recrawled and reprocessed and that happens fairly quickly.

But, if you make bigger, more strategic changes on a website, then sometimes that just takes a long time.”

Next Stage Of SEO: Monitor Progress

Mueller then said that a good SEO should monitor how the changes they made are affecting rankings. This can be a little tricky because some changes will cause an immediate ranking boost that lasts a few days and then drops. In my experience, an unshakeable top ranking is generally possible when there’s strong word of mouth and other external signals that tell Google the content is trustworthy and high quality.

Here’s what John Mueller said:

“I think that’s something where a good SEO should be able to help monitor the progress along there. So it shouldn’t be that they go off and make changes and say, ‘Okay, now you have to keep paying me for the next year until we wait what happens.’ They should be able to tell you what is happening, what the progress is, give you some input on the different things that they’re doing regularly. But it is something that is more of a longer term thing.”

Mueller doesn’t go into detail about what the hypothetical SEO is “doing regularly,” but in my opinion it’s always helpful to be doing basic promotion, which boils down to telling people the content exists, measuring how people respond to it, getting feedback about it, and then making changes or improvements based on that feedback.

For content sites, a great way to get immediate user feedback is to enable a moderated comment section in which only approved comments are published. I have received a lot of positive feedback from readers in the comments on some of my content sites. It’s also useful to make it easy for users to contact the publisher from any page of the site, whether it’s an ecommerce site or an informational blog. User feedback is absolute gold.

Mueller continued his answer:

“I think if you have a website that has never done anything with SEO, probably you’ll see a nice big jump in the beginning as you ramp up and do whatever the best practices are. At some point, it’ll kind of be slow and regular more from there on.”

Martin Splitt observed that this part about waiting and monitoring requires patience, and Mueller agreed, saying:

“I think being patient is good. But you also need someone like an SEO as a partner to give you updates along the way and say, ‘Okay, we did all of these things,’ and they can list them out and tell you exactly what they did. ‘These things are going to take a while, and I can show you when Google crawls, we can follow along to see like what is happening there. Based on that, we can give you some idea of when to expect changes.’”

Takeaways

SEO Timelines Vary By Scale Of Change

  • Simple on-page edits may result in quick ranking changes.
  • Larger structural or strategic SEO efforts take significantly longer to be reflected in Google rankings.

SEO Results Are Not Instant

  • Indexing and ranking recalculations take time, even for smaller changes.

Monitoring And Feedback Are Necessary

  • Good SEOs track progress and explain what is happening over time.
  • Ongoing feedback from users can help guide further optimization.

Transparency And Communication

  • Effective SEOs regularly report on their actions and expected timeframes for results.

Google’s John Mueller explained that the time it takes for search optimizations to show results depends on the complexity of the changes made, with simple updates processed faster and large-scale changes requiring more time. He emphasized that good SEO isn’t just about making changes; it also involves tracking how those changes affect rankings, communicating progress clearly, and continuing the work.

I suggested that user response to content is an important form of feedback because it helps site owners understand what is resonating well with users and where the site is falling short. User feedback, in my opinion, should be a part of the SEO process because Google tracks user behavior signals that indicate a site is trustworthy and relevant to users.

Listen to Search Off The Record Episode 95

Featured Image by Shutterstock/Khosro

OpenAI Quietly Adds Shopify As A Shopping Search Partner via @sejournal, @martinibuster

OpenAI has added Shopify as a third-party search partner to help power its shopping search, which shows rich shopping results. The addition of Shopify was not formally announced but was quietly tucked into OpenAI’s ChatGPT Search documentation.

Shopify Is An OpenAI Search Partner

Aleyda Solís (LinkedIn profile) recently noticed that OpenAI had updated its Search documentation to add Shopify to the list of third-party search providers.

She posted:

“Ecommerce sites: I’ve found that Shopify is listed along with Bing as a ChatGPT third-party search provider! OpenAI added Shopify along with Bing as a third-party search provider in their ChatGPT Search documentation on May 15, 2025; a couple of weeks after their enhanced shopping experience was announced on April 28.”

OpenAI Is Showing Merchants From Multiple Platforms

OpenAI’s shopping search returns results from a variety of platforms. For example, a search for hunting dog supplies returns sites hosted on Shopify but also on Turbify (formerly Yahoo Stores).

Screenshot Showing Origin Of OpenAI Shopping Rich Results

The rich results with images were sourced from Shopify and Amazon merchants for this specific query.

At least one of the shopping results listed in the Recommended Sellers is a merchant hosted on the Turbify ecommerce platform:

Screenshot Of OpenAI Recommended Retailers With Gun Dog Supply, Hosted On Turbify Platform

OpenAI Shopping Features

OpenAI recently rolled out shopping features for ChatGPT Search. Products are listed like search results and sometimes as rich results with images and other shopping related information like review stars.

ChatGPT Search uses images and structured metadata related to prices and product descriptions, presumably Schema.org structured data, although that’s not explicitly stated. ChatGPT may generate product titles, descriptions, and reviews based on the data received from third-party websites, and may sometimes generate summarized reviews.

Merchants are ranked according to how merchant data is received from third-party data providers, which at this point include Bing and Shopify.

Ecommerce stores that aren’t on Shopify can apply to have their products included in OpenAI’s shopping results. Stores that want to opt in must not be opted out of OpenAI’s web crawler, OAI-SearchBot.
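As a quick sanity check on the crawler requirement, here is a minimal sketch using Python’s standard library to test whether a store’s robots.txt allows OAI-SearchBot. The store URL and product path are hypothetical placeholders, and robots.txt is only one part of eligibility; the application step described above is separate:

```python
from urllib import robotparser

# Hypothetical store URL; replace with your own domain.
SITE = "https://www.example.com"

parser = robotparser.RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt file

# Check whether the OAI-SearchBot user agent is allowed to fetch a product page.
allowed = parser.can_fetch("OAI-SearchBot", f"{SITE}/products/example-item")
print("OAI-SearchBot allowed:", allowed)
```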

Featured Image by Shutterstock/kung_tom

TikTok Denies Report Claiming It’s Building a Standalone US App via @sejournal, @MattGSouthern

TikTok has denied a Reuters report claiming it’s building a standalone U.S. app with a separate algorithm.

  • TikTok strongly denies it is developing a separate U.S.-only version of the app.
  • Reuters cites anonymous sources claiming such a project exists, under the codename “M2.”
  • The report highlights the uncertainty around TikTok’s future in the U.S.