Studies Reveal Consumers Easily Detect AI-Generated Content via @sejournal, @MattGSouthern

Two new studies reveal that most consumers can easily spot AI-generated content, both images and text, at rates that may be higher than marketers expected.

The results suggest that brands should be careful when using AI in their marketing materials.

Consumers Identify AI-Generated Images

A study by digital marketing consultant Joe Youngblood found that U.S. consumers correctly spotted AI images 71.63% of the time when shown real photos side-by-side with AI versions.

The study surveyed over 4,000 Americans of different ages.

Youngblood states:

“When asking them to determine which photo was real and which one was AI, over 70% of consumers on average could correctly select the AI generated image.”

Detection rates varied by type of image:

  • Celebrity images (Scarlett Johansson as Black Widow): 88.78% identified correctly
  • Natural landscapes (Italian countryside): 88.46% identified correctly
  • Animal photos (baby peacock): 87.97% identified correctly
  • Space images (Jupiter): 83.58% identified correctly

However, some images were more challenging to detect. Only 18.05% correctly spotted an AI version of the Eiffel Tower, and 50.89% identified an AI-created painting of George Washington.

Similar Skepticism Toward AI-Written Content

A separate report by Hookline& surveyed 1,000 Americans about AI-written content.

Key findings include:

  • 82.1% of respondents can spot AI-written content at least some of the time.
  • Among those aged 22–34, the rate rises to 88.4%.
  • Only 11.6% of young people said they never notice AI content.

Christopher Walsh Sinka, CEO of Hookline&, stated:

“Writers and brands aren’t sneaking AI-generated content past readers.”

Reputational Risks for Brands and Writers

Both studies point to the risks of using AI in content.

From the image study, Youngblood warned,

“If consumers determine that AI images are poor quality or a bad fit they may hold that against your brand/product/services.”

The content study showed:

  • 50.1% of respondents would think less of writers who use AI.
  • 40.4% would view brands more negatively if they used AI-generated content.
  • Only 10.1% would view the brands more favorably.

Older consumers (ages 45–65) were the most critical. Nearly 30% said they did not like AI-written content.

Acceptable Use Cases for AI

Despite the caution, both studies indicate that some uses of AI are acceptable to consumers.

The content report found that many respondents approved of using AI for:

  • Brainstorming ideas (53.7%)
  • Conducting research (55.8%)
  • Editing content (50.8%)
  • Data analysis (50.1%)

In the image study, Youngblood noted that consumers might accept AI for fun and informal uses such as memes, video game sprites, cartoons, and diagrams.

However, for important decisions, they prefer real images.

What This Means

These studies offer guidance for those considering incorporating AI-generated content in marketing material:

  1. Be Transparent: Since many consumers can spot AI-generated content, honesty about its use may help maintain trust.
  2. Focus on Quality: Both studies suggest that genuine, professionally produced content is seen as more reliable.
  3. Use AI Wisely: Save AI for tasks like research and editing, but let people handle creative decisions.
  4. Know Your Audience: Younger consumers may be more accepting of AI than older groups. Tailor your strategy accordingly.

Future marketing campaigns should consider how well consumers can detect AI content and adjust their strategies to maintain trust and credibility.

Google DeepMind’s AGI Plan: What Marketers Need to Know via @sejournal, @MattGSouthern

Google DeepMind has shared its plan to make artificial general intelligence (AGI) safer.

The report, titled “An Approach to Technical AGI Safety and Security,” explains how to stop harmful AI uses while amplifying its benefits.

Though highly technical, its ideas could soon affect the AI tools that power search, content creation, and other marketing technologies.

Google’s AGI Timeline

DeepMind believes AGI may be developed by 2030, with systems working at levels that surpass human performance.

The research explains that improvements will happen gradually rather than in dramatic leaps. For marketers, new AI tools will steadily become more powerful, giving businesses time to adjust their strategies.

The report reads:

“We are highly uncertain about the timelines until powerful AI systems are developed, but crucially, we find it plausible that they will be developed by 2030.”

Two Key Focus Areas: Preventing Misuse and Misalignment

The report focuses on two main goals:

  • Stopping Misuse: Google wants to block bad actors from using powerful AI. Systems will be designed to detect and stop harmful activities.
  • Stopping Misalignment: Google also aims to ensure that AI systems follow people’s wishes instead of acting independently.

These measures mean that future AI tools in marketing will likely include built-in safety checks while still working as intended.

How This May Affect Marketing Technology

Model-Level Controls

DeepMind plans to limit certain AI features to prevent misuse.

Techniques like capability suppression ensure that an AI system deliberately withholds dangerous functions.

The report also discusses harmlessness post-training, which means the system is trained to ignore requests it sees as harmful.

These steps imply that AI-powered content tools and automation systems will have strong ethical filters. For example, a content generator might refuse to produce misleading or dangerous material, even if pushed by external prompts.

System-Level Protections

Access to the most advanced AI functions may be tightly controlled. Google could restrict certain features to trusted users and use monitoring to block unsafe actions.

The report states:

“Models with dangerous capabilities can be restricted to vetted user groups and use cases, reducing the surface area of dangerous capabilities that an actor can attempt to inappropriately access.”

This means that enterprise tools might offer broader features for trusted partners, while consumer-facing tools will come with extra safety layers.

Potential Impact On Specific Marketing Areas

Search & SEO

Google’s improved safety measures could change how search engines work. New search algorithms might better understand user intent and trust quality content that aligns with core human values.

Content Creation Tools

Advanced AI content generators will offer smarter output with built-in safety rules. Marketers may need to refine their instructions so that the AI produces accurate and safe content.

Advertising & Personalization

As AI gets more capable, the next generation of ad tech could offer improved targeting and personalization. However, strict safety checks may limit how much the system can push persuasion techniques.

Looking Ahead

Google DeepMind’s roadmap shows a commitment to advancing AI while making it safe.

For digital marketers, this means the future will bring powerful AI tools with built-in safety measures.

By understanding these safety plans, you can better plan for a future where AI works quickly, safely, and in tune with business values.


Featured Image: Shutterstock/Iljanaresvara Studio

Google Explains SEO Impact Of Adding New Topics via @sejournal, @martinibuster

Google’s Danny Sullivan discussed what happens when a website begins publishing content on a topic that’s different from the one in which it had gained a sitewide reputation. His comments were made at Search Central Live NYC, as part of a wide-ranging discussion about site reputation.

Danny said that introducing a new topic to a website won’t result in the site taking a hit in rankings. But what could happen is that Google might try to figure out how that content fits into the rest of the site.

Here’s what Danny said:

“We have long done work and are going to continue doing that to understand if parts of the site seem to be independent or starkly different than other parts of the site. It is not bad to have a website do whatever you want the website to do for your readers. It’s not bad that you started off covering one thing and you start writing about something else.

I had one person at an event who was very, very concerned. They started writing about snowboards but now wanted to start writing about skis and was terrified.

That if they write about skiing that somehow the topic of the website and the focus will somehow… it doesn’t work that way.

We’re not kind of building it up on the expertise you have in this particular thing, that type of thing, but what we are trying to understand is if the site seems to be different in some way from other parts of the site.”

It Doesn’t Work That Way

What Danny is saying is that Google looks at how different one part of a site is from another. If a new section is vastly different, it may rank well for a time based on the sitewide reputation for the main topic, but the new section may then lose rankings.

Danny explained that the loss in rankings is not a penalty but rather it’s just a recognition that a section of a site is so vastly different that the reputation of the entire site doesn’t really apply for that particular topic.

Danny used the metaphor of a “mini-site” to explain how Google might split off the reputation of a new section of a site from the rest of the site so that it can earn reputation for its topic. More about the mini-site metaphor here.

It makes sense that Google would differentiate the different parts of a site, because it allows Google to understand that one collection of pages is about one topic and another collection of pages within the website is about a different topic.

Featured Image by Shutterstock/Rene Jansa

Google On Negative Authorship Signal And Mini-Site Reputation via @sejournal, @martinibuster

At the recent Search Central Live NYC event, Danny Sullivan discussed what happens when a site begins publishing vastly different content on a site and how that may affect rankings, introducing the concept of a mini-site as a metaphor for dividing the reputation of a site. He also discussed the concept of negative authorship authority, which some SEOs believe follows authors from penalized websites and can negatively affect the other sites they publish on.

Negative Authorship Reputation

Danny initially discussed a negative authorship signal that some in the SEO community believe can follow an author from site to site. The idea is that an author whose content was banned on one site will have their content banned on other sites as well. He denied that Google tracks author authority signals from site to site.

Sullivan explained:

“If you wrote for a site that got a manual action, it doesn’t somehow infect the other site that you might work for later on, so again, this is not something that freelancers should be worried about.

If you’re a publication and for whatever reason you feel like employing a freelancer, and it makes sense, that’s fine. You don’t need to worry about who they worked for before.

And if you are a freelancer you do not need to go back to the publications and say, can you take my byline down because now I can’t get hired from anybody else because they think I’m going to infect them. It is not like that. It’s not a disease.”

The above SEO myth likely began when publishers noticed that content created by a certain author was banned across multiple sites. In that case, it’s reasonable to assume there was something wrong with the content, but that’s not necessarily true. It could have been that the websites themselves were promoted with unnatural links, or that the sites were engaged in selling links.

The takeaway from what Danny Sullivan shared is that a manual action on one site doesn’t follow an author to another site. Another takeaway is that there is no negative authorship signal that Google is tracking.

And if there’s no negative authorship signal, could it be that there is no positive author signal either? In my opinion, that’s a reasonable assumption. A signal like that would be too easy to manipulate. Whatever signals Google uses to understand site reputation are likely enough for the purpose of citing an information source in the search results.

Although some SEOs have made claims about authorship signals, such signals have never been known to be part of Google’s algorithms. Google has a long history of denying the use of authorship signals, and Danny’s statements offer further validation that Google continues to not use authorship as a ranking signal.

Ranking Drops And Mini-Site Reputation

Danny next discussed how a new section of a site could suddenly lose rankings. He said this isn’t necessarily a bad thing; it’s just Google trying to figure out the new section, and if the section is sufficiently different, Google could even start treating it as a standalone mini-site.

Danny used the example of the addition of a forum to a website.

Danny explained:

“For example, you might have a site where you start running a forum. Forums can be different and we would want to understand that this looks like a forum so that we can then rank the forum content against other kinds of forum content on kind of a level playing field or understand that that forum content should be included in things where we try to show forum content.

What can happen is… that it could be that part of your site was doing better because it was seen as part of the overall site. Now we kind of see it as more of independent and part of a full site on its own.

And potentially you could see a traffic drop that comes from that. That doesn’t mean that you suddenly got a site reputation abuse ban issue because first of all that might not have involved third party content abusing first party work, right? Those were the things. So if it doesn’t have any of that it doesn’t have anything to do with that. Secondly, we would have sent you an email. So, it’s not bad.

Because it just could be we’ve had a general re-ranking… It could also mean that in the long run that part of your site might actually do better, because we might recognize it in different ways, that we might be able to surface it in different ways. And it might start sort of earning its own like ‘mini-site’ reputation along the way.”

Three things to take away from that last part.

First, a ranking drop could be due to benign causes; don’t assume that it’s the result of spam or another negative algorithmic action.

Second, a rankings drop could be due to a “general re-ranking” which is a vague term that went unexplained but is probably a reference to minor ranking adjustments outside of a core algorithm update.

The third takeaway is the part about a section of a website earning its own “mini-site” reputation. SEOs shouldn’t build theories around mini-sites and mini-site reputations, because that’s not quite what Danny Sullivan said. He used the word “like,” which indicates he was likely using the phrase “mini-site” as a metaphor.

Featured Image by Shutterstock/Joseph Hendrickson

YouTube Unveils New AI-Powered Hook Generator via @sejournal, @MattGSouthern

YouTube has announced three new features in its ‘Inspiration’ suite for creators. These tools use AI to generate fresh ideas that can help keep viewers watching.

The most eye-catching of the three is the hook generator, which offers suggestions for engaging video openings.

Below, I’ll break down the new features and explain how they can improve video performance.

YouTube’s New AI Hook Generator

Video hooks are the first moments in a video that grab the viewer’s attention. With shorter attention spans and more online competition, strong hooks are essential.

YouTube’s new AI tool provides three hook suggestions that can help creators capture their audience quickly.

1. Statement Hook

This hook uses a strong statement to address a common problem. For example, a statement might say, “Stop letting creative block control you. I’ll show you how to beat it for good in this video. Let’s get started.”

2. Visual Hook

The visual hook guides creators to use striking imagery. One suggestion starts with an extreme close-up of a blinking cursor, then zooms out to reveal a messy desk and a frustrated face, ending with a bold brushstroke on a canvas.

3. Action Hook

This hook is all about movement.

YouTube’s example suggests using a hand crumpling paper and repeatedly tossing it in the trash, then quickly changing to a fresh start as the hand picks up a new sheet and begins to draw.

See an example of the interface below:

Screenshot from: YouTube.com/CreatorInsider, March 2025.

Additional Inspiration Features

While hooks are the main highlight, YouTube has introduced two other tools.

Brainstorm from Anywhere

YouTube notes that creators often get new ideas while checking performance data or reading audience comments. This tool uses data from past videos to suggest new content ideas, making brainstorming easier as you work.

Quick Saves

The quick saves feature lets creators save ideas directly from the brainstorming list. This helps you capture inspiration when it strikes without breaking your creative flow.

See each of these tools in action in the video below:

Availability

These new updates are part of YouTube’s growing Inspiration Tab in YouTube Studio. The tab now helps creators with hooks, video outlines, titles, and thumbnails.

The full Inspiration suite is available on desktop for most creators worldwide. However, due to local regulations, it’s not yet available in the European Union, United Kingdom, or Switzerland.

Implications for Content Strategy

YouTube does warn that “AI-generated content may be inaccurate or inappropriate, vary in quality, or provide information that doesn’t reflect YouTube’s views.”

Even so, these tools provide a helpful base for creators to build on and refine.

As online competition grows, these AI-powered features come at the right time. They offer a practical way to engage viewers better and optimize video performance.


Featured Image: Best Smile Studio/Shutterstock

Google’s Martin Splitt: JavaScript-Loaded Images Can Be Indexed via @sejournal, @MattGSouthern

Google’s Developer Advocate Martin Splitt recently debunked a common SEO myth. He confirmed that images loaded with JavaScript can be indexed by Google when set up correctly.

Splitt shared these insights during the SEO for Paws Conference, a live-streamed fundraiser by Anton Shulke.

Here’s how to avoid common image indexing issues when loading images with JavaScript.

JavaScript Image Loading Isn’t the Problem

When asked about images loaded by JavaScript, Splitt clarified that the method is not to blame for indexing issues.

Splitt explains:

“JavaScript to load images is fine. A purely JavaScript image loading solution can absolutely get your images indexed.”

This comment clears up worries among many SEO pros. Images may not appear in Google Images for reasons other than using JavaScript.

The Real Culprits Behind Unindexed Images

Splitt explained that something else is usually wrong if JavaScript-loaded images don’t appear in search results.

He pointed to a few common issues:

  • Sitemap Problems: Sometimes, key images are missing from XML sitemaps.
  • HTTP Headers: Some image files may have headers that stop them from being indexed.
  • Rendered HTML Issues: If images don’t appear in the rendered HTML (the version Google sees after JavaScript runs), they won’t get indexed.

Debugging JavaScript Image Indexing Issues

Splitt offers a simple process to spot problems. Start by checking if images appear in the rendered HTML using tools like Search Console’s URL Inspection tool.

Splitt explains:

“You would have to check: is the rendered HTML containing the images? If it is, fantastic. If it’s not, then something else is off.”

Since Google indexes the rendered HTML, any image missing from it won’t be found by Googlebot.
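Splitt’s check can be sketched in code. The helper below is a minimal illustration, not a Google tool: the function name, the regex-based parsing, and the sample markup are all assumptions for the sake of the example. Given the rendered HTML (copied from Search Console’s URL Inspection tool or captured with a headless browser), it reports which expected image URLs never made it into the rendered markup.

```javascript
// Minimal sketch: given rendered HTML as a string, report which expected
// image URLs are missing from it. Images absent from the rendered HTML
// are unlikely to be indexed, per Splitt's explanation.
function findMissingImages(renderedHtml, expectedSrcs) {
  // Collect every src attribute from <img> tags in the rendered markup.
  const found = new Set(
    [...renderedHtml.matchAll(/<img[^>]*\bsrc=["']([^"']+)["']/gi)].map(m => m[1])
  );
  // Keep only the expected images that never appeared.
  return expectedSrcs.filter(src => !found.has(src));
}

// Hypothetical rendered output: the hero image made it in, the gallery image did not.
const rendered = '<main><img src="/hero.jpg" alt="Hero"><p>Article text…</p></main>';
console.log(findMissingImages(rendered, ['/hero.jpg', '/gallery/1.jpg']));
// → [ '/gallery/1.jpg' ]
```

A regex is fine for a quick audit like this; a production crawler would use a real HTML parser and also compare the rendered HTML against the raw source to isolate JavaScript-loading problems.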

See Splitt’s full talk on JavaScript SEO in the video below:

Common JavaScript Image Loading Techniques & Their SEO Impact

There are several ways to load images with JavaScript. Some common methods include:

  • Lazy Loading: Loads images only when needed.
  • Progressive Loading: Shows a low-quality image first, then upgrades to a high-quality one.
  • Infinite Scroll Loading: Loads images as users continue to scroll.
  • Background Image Insertion: Adds images through CSS backgrounds.

If they are set up properly, all these methods can work with Google’s indexing. Each may need its own checks to ensure everything is working as expected.

Best Practices for SEO-Friendly JavaScript Image Loading

Even though JavaScript-loaded images can be indexed, following these best practices can help avoid issues:

  • Verify with the URL Inspection Tool: Ensure images appear in the rendered HTML.
  • Update Your XML Sitemaps: Include key images with proper tags.
  • Use Alt Text: Provide clear alt text for images loaded via JavaScript.
  • Use Native Lazy Loading: Add the loading="lazy" attribute where it makes sense.
  • Check Robots.txt: Ensure you are not blocking JavaScript resources that load images.

What This Means for SEO Professionals

Instead of avoiding JavaScript, verify that images are loaded correctly and appear in the rendered HTML.

As websites rely more on JavaScript, understanding these details is key. SEO professionals who learn to troubleshoot and optimize JavaScript-based image loading will be better prepared to support their clients’ visibility in search results.

Looking Ahead

This clarification is timely. Many modern sites built with frameworks like React, Vue, or Angular load images with JavaScript instead of traditional <img> tags.

Splitt’s insights help dispel the myth that JavaScript harms image indexing. Developers can now focus on performance without worrying about SEO penalties.


Featured Image: Alicia97/Shutterstock

Stop Guessing. Start Converting: The Key To Smarter Lead Generation In 2025 via @sejournal, @Juxtacognition

Marketers have always relied on data to fine-tune their strategies.

But for years, that data has been based on assumptions made from broad industry benchmarks, competitor insights, and vague trends.

The result?

Marginal conversion improvements and a constant struggle to truly connect with the audience.

Better Leads → More Sales In 2025: How To Analyze Leads To Improve Marketing Performance from CallRail shows you how to transform customer conversations into actionable insights that improve your marketing performance at every stage of the funnel.

And you can download your copy right now.

What If You Could Stop Guessing & Start Marketing With Precision?

The secret lies in the conversations your customers are already having with you.

Every call, chat, and interaction holds valuable insights and the exact words, concerns, and motivations that drive your audience to take action.

Yet, most businesses overlook this goldmine of data.

Instead, they continue to make decisions based on surface-level analytics like click-through rates, page views, and lead form submissions.

And while that strategy works, it’s only part of the story. Why?

Image by Paulo Bobita/Search Engine Journal, March 2025

While numbers show what’s happening, your customers’ conversations tell you why a customer did or didn’t convert and what they were thinking.

By leveraging these conversations and other first-party conversational data, you can unlock real insights, refine your messaging, and optimize your marketing, sales, customer service, and more.

Without the guesswork.

The Power Of Listening To Your Customers

Better Leads → More Sales In 2025: How To Analyze Leads To Improve Marketing Performance shows you how to make the most of those insights by providing practical, step-by-step strategies for improving lead quality, increasing conversions, and refining your marketing approach.

You’ll learn how to craft messaging that resonates with your audience, optimize funnels to reflect actual user behavior, and uncover friction points before they impact conversions.

From improving marketing copy to boosting customer retention and increasing Customer Lifetime Value (CLV), the strategies outlined are practical and results-driven.

You’ll be able to move away from guesswork with confidence and build marketing campaigns that feel relevant, personal, and persuasive, leading to higher-quality leads and more sales.

The Future Of Marketing Is Data-Driven And Customer-Focused

Brands that win in 2025 won’t be those with the biggest ad budgets.

They’ll be the ones that listen.

When you understand your customers’ frustrations, their needs, and the exact words they use to describe their problems, you can craft campaigns that feel personal, relevant, and persuasive.

This isn’t just about getting more leads.

It’s about getting the right leads and turning them into loyal customers.

What You’ll Learn In This Ebook:

  • Uncover Customer Insights: Learn how to extract powerful insights from customer conversations, sentiment analysis, and first-hand interactions.
  • Improve Marketing Messaging: Use the language your audience naturally speaks to create high-converting ads, landing pages, and content.
  • Optimize Your Lead Generation Funnels & Customer Journeys: Build a pipeline that reflects real customer behavior. Not assumptions.
  • Reduce Friction & Increase Conversions: Identify barriers before they impact your bottom line.
  • Increase CLV & Customer Lifespan: Find upsell opportunities and improve customer retention using call and chat transcript analysis.

Why This Matters

  • Marketing is evolving. Customers expect brands to understand them and not just sell to them.
  • Data beats guesswork. First-party conversational data gives you direct access to what your customers truly care about.
  • Better insights = higher conversions. When your message aligns with customer needs, engagement and sales increase.

Want to put these insights to work for your business?

Download your free copy today and start turning customer conversations into your most powerful marketing asset.


Featured Image: Paulo Bobita/Search Engine Journal

Data Shows Google AIO Is Citing Deeper Into Websites via @sejournal, @martinibuster

New data from BrightEdge that was shared with Search Engine Journal shows that Google’s AI Overviews (AIO) in March 2025 have rapidly expanded in size and are shifting how traffic is distributed across search results. The analysis suggests that deep, specialized content is likelier to be cited than homepages.

This shows that AIO is becoming more precise about the answers it is giving, aligning with the concepts of Predictive Summaries and Grounding Links that Google recently shared with marketers at Search Central NYC.

Google has traditionally favored showing precise answers to questions, something that the home pages of websites generally cannot do. It makes sense that BrightEdge’s data reflects the kind of precise linking that Predictive Summaries and Grounding Links display in AIO.

Google expanded the pixel height of AIO by 18.26% in the first two weeks of March. Although some may rightly note that this reduces outbound links from the organic search results, it’s important to put that into context: Google’s AIO also contains outbound links, and those links are precise and contextually relevant.

The expansion of AIO size was not across the board. Industry-specific increases in AI Overview size:

  • Travel: +39.49%
  • B2B Tech: +37.13%
  • Education: +35.49%
  • Finance: +32.89%

Strategic Response for SEOs And Publishers

BrightEdge suggests that publishers and SEOs monitor performance metrics, tracking changes in traffic, impressions, CTRs, and clicks, to evaluate how AI Overviews may be influencing traffic trends. Above all, try to identify sales or revenue trends, because those are the most important metrics, not traffic.

Although it may be useful to create citable content, Google is generally summarizing content and then linking to where users can read more. Now more than ever it’s important to be aware of user trends relative to your industry and be able to anticipate them, including the context of a user’s search. Jono Alderson recently suggested targeting users at the very early side of the consumer journey in order to get ahead of AI-based citations.

Importance of In-Depth, Specialized Content

Google AIO shows a citation preference for deep content: pages two or more clicks from the home page (2+ deep). 82.5% of clicks went to 2+ deep pages, while home pages accounted for less than 0.5% of all clicks.

86% of cited pages ranked for only one keyword, often a high-volume one, which represents an opportunity to capture high-volume keyword traffic. The median keyword volume for citations was 15,300 monthly searches, and 19% of citation-triggering keywords exceed 100,000 monthly searches.

Implications For Technical SEO And Content Optimization

BrightEdge suggests that full site indexing is critical for AI Overviews, ensuring that every page is available to be cited as a potential source. Even older and otherwise overlooked content may gain value, especially if it’s reviewed and updated to make it suitable for citation and to reflect the most current information.

Google has been citing deeper content for many years, and the age of home page primacy has long been over, except in local search. Given that home pages accounted for only half a percent of clicks from AIO, it’s more important than ever to optimize inner pages.

Takeaways:

The following are the top takeaways from the data:

  • Google’s AI Overviews are rapidly expanding in visual size on the SERP
  • Industries like Travel, B2B Tech, Education, and Finance are experiencing the fastest AI Overview growth
  • Deeper, more specific content is overwhelmingly favored for AI citations over homepages
  • Pages cited in AI Overviews often surface for just one keyword—frequently high-volume
  • Technical SEO and full-site indexing are now essential for brand visibility in AI-driven search

Google’s AI Overviews are not just expanding in size; they are improving the contextual relevance of outbound links to websites. Optimizing for AIO should include keeping older content fresh and up to date so it stays relevant for the users who find it.

BrightEdge data shared with Search Engine Journal has not been published but a monthly updated guide to AIO is available.

Featured Image by Shutterstock/B..Robinson

Google On Scaled Content: “It’s Going To Be An Issue” via @sejournal, @martinibuster

Google’s John Mueller and Danny Sullivan discussed why AI-generated content is problematic, citing the newly updated quality rater guidelines and sharing examples of how AI can be used in a positive way that adds value.

Danny Sullivan, known as Google Search Liaison, spoke about the topic in more detail, offering an example of a high-quality use of AI-generated content as a contrast to what isn’t a good use of it.

Update To The Quality Rater Guidelines

The quality rater guidelines (QRG) are a document created by Google to guide the third-party quality raters who rate tests of changes to Google’s search results. The guidelines were recently updated and now include guidance about AI-generated content, folded into a section about content created with little effort or originality.

Mueller discussed AI generated content in the context of scaled content abuse, noting that the quality raters are taught to rate that kind of content as low quality.

The new section of the QRG advises the raters:

“The lowest rating applies if all or almost all of the MC on the page (including text, images, audio, videos, etc) is copied, paraphrased, embedded, auto or AI generated, or reposted from other sources with little to no effort, little to no originality, and little to no added value for visitors to the website. Such pages should be rated Lowest, even if the page assigns credit for the content to another source.”

Doesn’t Matter How It’s Scaled: It’s Going To Be An Issue

Danny Sullivan began his part of the discussion by saying that, to Google, AI generated content is no different from the scaled content tactics of the past, comparing it to the spam tactics of 2005, when Google used statistical analysis and other methods to catch scaled content. He also emphasized that it doesn’t matter how the content is scaled.

According to my notes, here’s a paraphrase of what he said:

“The key things are, large amounts of unoriginal content and also no matter how it’s created.

Because like, ‘What are you going to do about AI? How are you going to deal with all the AI explosion? AI can generate thousands of pages?’

Well 2005 just called, it’d like to explain to you how human beings can generate thousands of pages overnight that look like they’re human generated because they weren’t human generated and etcetera, etcetera, etcetera.

If you’ve been in the SEO space for a long time, you well understand that scaled content is not a new type of thing. So we wanted to really stress: we don’t really care how you’re doing this scaled content, whether it’s AI, automation, or human beings. It’s going to be an issue.

So those are things that you should consider if you’re wondering about the scaled content abuse policy and you want to avoid being caught by it.”

How To Use AI In A Way That Adds Value

A helpful part of Danny’s session was that he offered an example of a positive use of AI, citing how retailers show a summary of actual user reviews that conveys overall user sentiment about a product without requiring shoppers to read every review. This is an example of AI providing added value, as opposed to being the entire main content.

This is from my notes of what he said:

“When I go to Amazon, I skip down to the reviews and the reviews have a little AI-generated thing at the top that tells me what the users generally think, and I’m like, this is really helpful.

And the thing that’s really helpful to me about it is, it’s AI applied to original content, the reviews, to give me a summary. That was added value for me and unique value for me. I liked it.”

As Long As It’s High Quality…

Danny next discussed how Google tried to put out a detailed policy about AI generated content, but said it was misconstrued by parts of the SEO community to mean that AI generated content is fine as long as it’s quality AI generated content.

In my 25 years of SEO experience, let me tell you: whenever an SEO tells you that a tactic is fine “as long as it’s quality,” run. The “as long as it’s quality” excuse has been used to justify low-quality SEO practices like reciprocal links, directory links, paid links, and guest posts. If it’s not already an SEO joke, it should be.

Danny continued:

“And then people’s differentiation of what’s quality is all messed up. And they say, ‘Google doesn’t care if it’s AI!’ And that is not really what we said.

We didn’t say that.”

Don’t Mislead Yourself About Quality Of Scaled Content

Danny advised that anyone using artificially generated content should think about two things to use as tests for whether it’s a good idea:

  1. The motivation for mass generated content.
  2. Unoriginality of the scaled content.

Traffic Motivated Content

The motivation shouldn’t be that it will bring more traffic. The motivation should be that there’s a value-add for site visitors.

This is how Danny Sullivan explained it, according to my notes:

“Any method that you undertake to mass generate content, you should be carefully thinking about it. There’s all sorts of programmatic things, maybe they’re useful. Maybe they’re not. But you should think about it.

And the thing to especially think about is if you’re primarily going into it to game search traffic.

Like, if the primary intent of the content was, ‘I’m going to get that traffic’ and not, ‘some user actually expected it’ if they ever came to my website directly. That’s one of the many things you can use to try to determine it.”

Originality Of Scaled Content

SEOs who praise their AI-generated content lose their enthusiasm when the content covers a topic they’re actually expert in, and will concede that it’s not as smart as they are. What’s going on is that if you are not an expert, you lack the expertise to judge the credibility of the AI generated content.

AI is trained to crank out the next likeliest word in a series of words, a level of unoriginality so extreme that only a computer can accomplish it.

Sullivan next offered a critique of the originality of AI-generated content:

“The other thing is, is it unoriginal?

If you are just using the tool saying, ‘Write me 100 pages on the 100 different topics that I got because I ran some tool that pulled all the People Also Asked questions off of Google and I don’t know anything about those things and they don’t have any original content or any value. I just kind of think it’d be nice to get that traffic.’

You probably don’t have anything original.

You’re not necessarily offering anything with really unique value with it there.

A lot of AI tools, or other tools, are very like human beings, because they’ve read a lot of human being stuff like this as well. They write really nice generic things that read very well, as if they are quality and answer what I’m kind of looking for, but they’re not necessarily providing value.

And sometimes people’s idea of quality differ, but that’s not the key point of it when it comes to the policy that we have with it from there, that especially because these days some people would tell you that it’s quality.”

Takeaways:

  • Google doesn’t “care how you’re doing this scaled content, whether it’s AI, automation, or human beings. It’s going to be an issue.”
  • The QRG explicitly includes AI-generated content in its criteria for ‘Lowest’ quality ratings, signaling that this is something Google is concerned about.
  • Ask if the motivation for using AI-generated content is primarily to drive search traffic or to help users.
  • Originality and value-add are important qualities of content to consider.