Why Chinese manufacturers are going viral on TikTok

Since the video was posted earlier this month, millions of TikTok users have watched as a young Chinese man in a blue T-shirt sits beside a traditional tea set and speaks directly to the camera in accented English: “Let’s expose luxury’s biggest secret.” 

He stands and lifts what looks like an Hermès Birkin bag, one of the world’s most exclusive and expensive handbags, before gesturing toward the shelves filled with more bags behind him. “You recognize them: Hermès, Louis Vuitton, Prada, Gucci—all crafted in our workshops.”

“But brands erase ‘Made in China’ from the tags,” he continues. “Same leather from their tanneries, same hardware from their suppliers, same threads they call luxury. Master artisans they never credit. We earn pennies; they make millions. That is unfair—to us, to you, to anyone who values honesty.” 

He ends by urging viewers to buy directly from his factory.


Video “exposés” like this—where a sales agent breaks down the material cost of luxury goods, from handbags to perfumes to appliances—are everywhere on TikTok right now. 

Some videos claim, for example, that a pair of Lululemon leggings costs just $4 to make. Others show the scale and precision of Chinese manufacturing: Creators walk through spotless factory floors, passing automated assembly lines and teams of workers at clean, orderly stations. Some factories identify themselves as suppliers—or former suppliers—for brands like Dyson, Under Armour, and Victoria’s Secret.

Whether or not their claims are true, these videos and their virality speak to a new, serious push by Chinese manufacturers to connect directly with American consumers. Even with tariffs, many of the products pitched in the videos would still be significantly cheaper than buying from the name brands. (MIT Technology Review did not verify the claims made in the videos about where products are produced and how much the manufacturing costs; Lululemon, Hermès, Kering (the owner of Gucci), and LVMH (the owner of Louis Vuitton) did not reply to requests for comment.)

Fueled by fears of losing international business and frustration over Trump-era tariffs, factories are turning their production lines into content studios to market themselves—filming leather workshops and sewing lines, offering warehouse tours. What began as the work of a few frustrated sourcing agents has morphed into a full-blown genre that’s part protest, part marketing plan, part survival strategy.

It’s “a collective search for a workaround” to the tariffs, says Ivy Yang, an e-commerce expert and founder of the New York–based consulting firm Wavelet Strategy. “Smaller platforms and sourcing agents are jumping in, offering ‘direct from factory’ content on social media as an alternative supply route.”

Cutting out the middleman

The Chinese creators sharing insights into sourcing materials and manufacturing techniques often offer direct purchasing options that effectively bypass traditional retail channels. 

The companies that sell directly to consumers include DHgate, a Chinese B2B e-commerce platform, which users commonly refer to as “the gate” or “the yellow app.” In the US Apple App Store, the app jumped from #302 on April 8 to #2 overall in mid-April, just behind ChatGPT. On April 15, it was the most downloaded app in the country. As of April 18, DHgate sat at the top of Apple’s shopping charts in 98 countries. 

After buying on DHgate, users enthusiastically return to TikTok to share their new purchases; one user jokingly bragged, “Ordered my bag from my Chinese plug.”

DHgate told MIT Technology Review that the social media attention has resulted in a surge in transactions on the platform, with categories like home goods, electronics, outdoor gear, and pet supplies seeing the most popularity. During the week of April 12 to 19, home appliances saw a 962% increase in sales, while security tech jumped 601%.

For these manufacturers, TikTok is not a vanity project but a survival strategy in an increasingly competitive environment. 

Chinese factories have long sold to overseas markets, but when domestic economic growth started to slow in the past decade, manufacturers increasingly turned to major B2B platforms like Alibaba to connect with buyers abroad without relying on middlemen. In the past few years, however, the cost of gaining visibility to foreign buyers on major platforms like Amazon and Alibaba has skyrocketed. 

“It has become a crowded, saturated space, and it could cost 30,000 to 40,000 RMB [roughly $4,100 to $5,500] a year just to get your factory to show up on the first page in search results,” says Logan Wang, an e-commerce manager at Shendeng Consulting, who advises Chinese manufacturers on overseas operations.

The landscape only got more fraught as traditional manufacturing sectors struggled with oversupply and post-covid stagnation. In 2024, China’s apparel exports to the US grew by less than 1%, while the average unit price of those goods dropped by 7.6%—a sign that competition is fiercer and profit margins are shrinking. 

Add the new tariffs to this mix and Chinese manufacturers are increasingly motivated to find creative ways to reach buyers.

Linda Luo, a manager at a Guangzhou-based apparel factory, says that in the wake of the latest round of tariffs, her factory has paused US shipments, which previously accounted for around 30% of its sales. Now, storage rooms are filling up with products that have no clear destination. 

“Many nearby factories are like us,” Luo says, “holding out to see how these tariffs develop, hoping the situation will resolve itself.” Motivated by the success of peers who’ve gone viral, Luo says, her team is now actively reaching out to TikTok-famous sourcing agents, hoping to forge direct connections with new buyers.

But it’s not just economic conditions pushing the viral videos; there’s also a feeling that Chinese work and craftsmanship are being disrespected. In a Fox News interview on April 3, for instance, Vice President JD Vance made a comment denigrating the “Chinese peasants” who make products for Americans. The remark drew sharp criticism from Chinese officials and from Chinese people across the internet, who viewed it as insulting. 

“Chinese manufacturers have done the dirtiest, most arduous work for Western brands since the 1980s—often with razor-thin margins,” says Wang. “And yet they’re constantly stigmatized, pushed around, and caught in the crossfire of geopolitics. Hearing President Trump frame the past few decades as China taking advantage of the US—that’s a narrative that doesn’t sit right with anyone working in this industry.”

Factory as spectacle

Beyond rage and anxiety, Chinese factories have been inspired by the past viral success of manufacturing content on TikTok, according to Tianyu Fang, a technology and democracy fellow at the think tank New America who studies Chinese technology and globalization. Since 2020, factory videos showing assembly lines producing everyday items like wigs, dolls, and gloves have amassed millions of views. In comments, viewers describe these looping production videos as “soothing” and “mesmerizing.” 

By 2022, factories themselves had recognized their work floors as content gold mines. Alice Gu, who works at a Shenzhen-based digital marketing company and helps factories build their TikTok presence, has seen client inquiries triple over the past year, with many clients now featuring English-speaking staff as on-camera personalities.

As Fang explains, “These videos resonate with young people in the West on TikTok because manufacturing is so removed from their daily experience. They offer rare glimpses into advanced manufacturing while satisfying genuine curiosity.”

He adds: “Seeing Chinese factory workers address Western audiences directly feels almost subversive.”

The cultural gap between creators and audiences has become an asset rather than a liability, generating authentic moments that resonate with users who are hyper-online. 

One creator, Tony, toggles between American accents while promoting light boxes; he has gained over 1.2 million Instagram followers as the face of LC Sign, a Guangzhou electrical signage company. The “alumununu lady,” a saleswoman with a distinctive accent promoting capsule homes by Etong, turned “Hello, boss” into a catchphrase adopted by countless factory videos. In 2024, Dong Hua Jin Long, an industrial glycine manufacturer, went viral for machine-translated promotional videos boasting unmatched production quality. TikTok users found humor in the niche company’s efforts to connect with potential customers, making it a widely circulated meme.

“These videos appeal largely because they’re so wonderfully out of context,” Fang says. “The popularity of these sourcing videos reflects a desire to understand previously hidden parts of the global economy and find alternatives to mainstream political narratives.”

Despite the trend, experts including Yang and Fang don’t believe large numbers of average American consumers will shift to buying directly from factories, as the process involves too many logistical hurdles. There has also been plenty of news coverage warning that the bag you receive may fall well short of Hermès quality, brand label aside. 

Yaling Jiang, writer of the newsletter Following the Yuan, explains that buying through factory back channels is a common practice in China: “It’s an open secret that many local factories produce for prestigious brands, and people often buy through side channels to get similar-quality products at a fraction of the price.” However, Jiang suggests that these arrangements rely on a complex supply and distribution system—and warns that some TikTok sourcing agents may be falsely claiming connections to well-known companies.

On top of all this, these direct-to-consumer videos may not even be available much longer. Yang warns that a lot of the content treads dangerously close to copyright infringement. “This will quickly become an IP minefield for platforms like TikTok and Instagram,” she says. “If the trend continues to grow, rights holders will push back—and platform governance will need to catch up fast.”

MIT Technology Review found that many of the original viral videos promoting knockoff products have already been removed from TikTok. DHgate did not respond to a request for comment regarding whether it facilitates the sale of counterfeit products.

Nevertheless, many Chinese factories will almost certainly continue to build out their own R&D teams—and not just to weather the current moment. “Every factory owner’s dream is to have their own brand,” Wang says. “After decades of making products designed elsewhere, Chinese manufacturers are ready to create, not just produce.”

The Download: China’s manufacturers’ viral moment, and how AI is changing creativity

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why Chinese manufacturers are going viral on TikTok

Since the video was posted earlier this month, millions of TikTok users have watched as a young Chinese man in a blue T-shirt sits beside a traditional tea set and speaks directly to the camera in accented English: “Let’s expose luxury’s biggest secret.” 

He stands and lifts what looks like an Hermès Birkin bag, one of the world’s most exclusive and expensive handbags, before gesturing toward the shelves filled with more bags behind him. “You recognize them: Hermès, Louis Vuitton, Prada, Gucci—all crafted in our workshops.” He ends by urging viewers to buy directly from his factory.

Video “exposés” like this—where a sales agent breaks down the material cost of luxury goods, from handbags to perfumes to appliances—are everywhere on TikTok right now. And whether or not their claims are true, these videos and their virality speak to a new, serious push by Chinese manufacturers to connect directly with American consumers. Read the full story.

—Caiwei Chen

How AI is interacting with our creative human processes

The rapid proliferation of AI in our lives introduces new challenges around authorship, authenticity, and ethics in work and art. But it also offers a particularly human problem in narrative: How can we make sense of these machines, not just use them?

Three new books examine what we gain and lose when we let machines create, and pose the question: how do the words we choose and stories we tell about technology affect the role we allow it to take on (or even take over) in our creative lives? Read the full story.

—Rebecca Ackermann

This story is from the most recent edition of our print magazine, which is all about how technology is changing creativity. Subscribe now to read it and to receive future print copies once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Inside the powerful Signal chat shaping America
Marc Andreessen’s Chatham House group unites figures across Silicon Valley, politics and journalism. (Semafor)
+ Many in tech may come to regret their investment in Trump. (Vox)

2 RFK Jr’s autism study has got scientists worried
They fear it’ll give credence to unproven theories. (Axios)
+ His claims that autism is caused by environmental toxins are not backed by science. (PBS)
+ Experts say lack of support is the biggest challenge facing autistic people. (The Guardian)

3 Only Google can run Chrome properly
That’s what the browser’s general manager told the judge presiding over its antitrust trial. (Bloomberg $)
+ Companies are still expressing interest in buying it, though. (The Verge)

4 Meta’s chatbots will hold explicit conversations with minors
Including chatbots voiced by celebrities, including wrestler-turned-actor John Cena. (WSJ $)
+ An AI companion site is hosting sexually charged conversations with underage celebrity bots. (MIT Technology Review)

5 Here’s why it would be so difficult to build an iPhone in the US
It’s not just about the cost of labor. (FT $)
+ Trump’s steep tariffs mean this Christmas will be an even more expensive affair. (Wired $)
+ Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)

6 Mexico’s drug cartels have become influencers
Their posts are some of the only insights we have into their activities. (The Atlantic $)
+ The mothers of Mexico’s missing use social media to search for mass graves. (MIT Technology Review)

7 People with autism are using AI to navigate everyday situations 
But experts warn that chatbots’ responses should be treated with caution. (WP $)

8 Clean energy is still making progress
Despite those political and economic headwinds. (Vox)
+ Europe is committed to looking beyond fossil fuels. (Politico)
+ 4 technologies that could power the future of energy. (MIT Technology Review)

9 What rats can teach us about hunger 🐀
We’re getting closer to understanding what makes us start and stop eating. (NYT $)
+ We’ve never understood how hunger works. That might be about to change. (MIT Technology Review)

10 It’s no wonder Trump loves AI slop
He’s been pushing a surreal, gaudy vision of the world for years. (New Yorker $)
+ AI slop infiltrated almost every corner of the internet last year. (MIT Technology Review)

Quote of the day

“You know the best thing about these things is that nothing leaks…but it looks like that’s changed a little.”

—A longtime attendee of the secretive intimate networking events favored by tech, media and finance bigwigs spills the beans to The Information

One more thing

AI hype is built on high test scores. Those tests are flawed.

In the past few years, multiple researchers have claimed to show that large language models can pass cognitive tests designed for humans, from working through problems step by step to guessing what other people are thinking.

These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs. But there’s a problem: There’s little agreement on what those results really mean. Read the full story.

—William Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The Magic Circle has readmitted a female magician who was expelled 30 years ago after she revealed she’d disguised herself as a man to gain access to the formerly male-only society. 🪄
+ These National Parks are stunningly beautiful.
+ The Fear of Flying Subreddit is one of the last pure places remaining on the internet.
+ Why Gen Z is so obsessed with iced coffee.

Multiple Stores, 1 Backend: Pros and Cons

For 20 years I’ve managed data for both large and small ecommerce companies. I’ve migrated data from one platform to another, cleansed data, optimized data for search engines, enhanced data for conversions, and more.

Ecommerce firms with multiple storefronts and overlapping products pose unique data challenges. Combining the storefronts into a single backend centralizes the data and streamlines oversight. But it comes with costs.

What follows are my pros and cons for combining multiple storefronts with the same products into one backend.

Pros

Time management. Multiple storefronts with separate backends mean uploading the same images, descriptions, and product data repeatedly. Combining that info into a single backend saves much time. It eliminates the need to log in to each store and streamlines the process of adding and updating the same products.

Analytics. A single backend makes combined performance tracking (for all stores) easier. It eliminates the need for a separate business intelligence tool to view all data in one place.

Inventory management. Multiple storefronts with different backends typically require third-party software to track overall quantities of the same item. When all stores point to the same backend, standard inventory management tools — native to many ecommerce platforms — can fill that need.

Different regions. Targeting different global regions from multiple storefronts is often easier with the same backend. For example, a merchant could sell the same product in the U.S., Canada, Australia, and the U.K., and use country-specific top-level domains, such as .ca, .co.uk, and .au, to attract local customers. Consumers in each of those countries speak English, allowing the same (or very similar) product descriptions and images. Sellers would need to customize only the currencies, payment, and shipping methods.
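When the same English-language product pages are served from country-specific domains, search engines can be told the pages are regional alternates of one another rather than competing duplicates, using hreflang annotations in each page’s head. A minimal sketch, with hypothetical example domains:

```html
<!-- Illustrative only: each regional page lists every alternate, including itself. -->
<link rel="alternate" hreflang="en-us" href="https://example.com/widget" />
<link rel="alternate" hreflang="en-ca" href="https://example.ca/widget" />
<link rel="alternate" hreflang="en-gb" href="https://example.co.uk/widget" />
<link rel="alternate" hreflang="en-au" href="https://example.au/widget" />
<link rel="alternate" hreflang="x-default" href="https://example.com/widget" />
```

Many platforms with multi-store backends can generate these tags automatically; if yours cannot, the same annotations can go in the XML sitemap instead.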

Cons

Search engine optimization. Utilizing the same backend to display the same products in different stores results in duplicate search-engine data, such as titles, product descriptions, and images, all competing against each other for rankings. The result is likely lower overall search visibility for all of them.

Unique prices. The same price across all stores prevents standalone prices and discounts. A workaround is custom pricing per channel or customer group. For example, a merchant may sell the same product on both a B2C and a B2B site. The B2B price could be quantity-based versus a single-item amount for B2C customers. This assumes the ecommerce platform has that functionality. In my experience, managing different price groups and price levels is more complex when using the same backend.
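The channel- and quantity-based pricing described above can be sketched as a simple lookup keyed on SKU and channel. This is an illustrative model, not any platform’s actual API; the names (`PRICE_GROUPS`, `resolve_price`) are hypothetical:

```python
# Hypothetical sketch: resolving a per-channel price from one shared backend.
# Each price group is a list of (minimum quantity, unit price) breaks.
PRICE_GROUPS = {
    ("SKU-100", "b2c"): [(1, 29.99)],  # flat single-item price for consumers
    ("SKU-100", "b2b"): [(1, 24.99), (50, 19.99), (500, 14.99)],  # quantity breaks
}

def resolve_price(sku: str, channel: str, qty: int) -> float:
    """Return the unit price for the largest quantity break at or below qty."""
    tiers = PRICE_GROUPS[(sku, channel)]
    price = tiers[0][1]
    for min_qty, unit_price in tiers:
        if qty >= min_qty:
            price = unit_price
    return price

print(resolve_price("SKU-100", "b2c", 1))    # 29.99
print(resolve_price("SKU-100", "b2b", 100))  # 19.99
```

Even in this toy form, the extra dimension (channel) on every price record hints at why managing price groups gets more complex on a shared backend.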

Promotional flexibility. Promotions for specific items on specific storefronts are difficult, if not impossible, with a single backend. An example is adding banners for sale items in one store but not another. Some ecommerce platforms allow for unique promos by storefront. Others are SKU-based regardless of the store.

Risk. A single backend risks disabling all storefronts due to data breaches, fraud, platform errors, traffic spikes, and similar issues. Sometimes putting eggs in multiple baskets is better.

Customer, order management. Some ecommerce platforms allow customer segmentation by storefront. Others do not, which complicates emails, audience targeting, and overall marketing initiatives. Without a clear storefront distinction, it can be confusing to know which store drove which order.

Verify

To be sure, my pros and cons depend on the ecommerce platform. Some facilitate multi-store backends better than others. Before consolidating, verify the capabilities, particularly for SEO, inventory management, marketing, and pricing.

Google’s John Mueller: Updating XML Sitemap Dates Doesn’t Help SEO via @sejournal, @MattGSouthern

Google’s John Mueller clarifies that automatically changing XML sitemap dates doesn’t boost SEO and could make it harder for Google to find actual content updates.

The “Freshness Signal” Myth Busted

On Reddit’s r/SEO forum, someone asked if competitors ranked better by setting their XML sitemap dates to the current date to send a “freshness signal” to Google.

Mueller’s answer was clear:

“It’s usually a sign they have a broken sitemap generator setup. It has no positive effect. It’s just a lazy setup.”

The discussion shows a common frustration among SEO pros. The original poster was upset after following Google’s rules for 15 years, only to see competitors using “spam tactics” outrank established websites.

When asked about sites using questionable tactics yet still ranking well, Mueller explained that while some “sneaky things” might work briefly, updating sitemap dates isn’t one of them.

Mueller said:

“Setting today’s date in a sitemap file isn’t going to help anyone. It’s just lazy. It makes it harder for search engines to spot truly updated pages. This definitely isn’t working in their favor.”

XML Sitemaps: What Works

XML sitemaps help search engines understand your website structure and when content was last updated. While good sitemaps are essential for SEO, many people misunderstand the impact they have on rankings.

According to Google, the lastmod tag in XML sitemaps should show when a page was truly last updated. When used correctly, this helps search engines know which pages have new content that needs to be recrawled.
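In practice, an accurate lastmod entry looks like the fragment below (URLs and dates are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- lastmod reflects when the page content actually changed,
       not when the sitemap file was regenerated -->
  <url>
    <loc>https://www.example.com/pricing</loc>
    <lastmod>2025-03-18</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/about</loc>
    <lastmod>2023-07-02</lastmod>
  </url>
</urlset>
```

The “lazy setup” Mueller describes is a generator that stamps today’s date into every lastmod on each rebuild, which erases exactly the signal the tag is meant to carry.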

Mueller confirms that faking these dates doesn’t help your rankings and may prevent Google from finding your real content updates.

What This Means for Your SEO

Mueller’s comments remind us that while some SEO tactics might seem to improve rankings, correlation isn’t causation.

Sites ranking well despite questionable methods are likely succeeding due to other factors, rather than manipulated sitemap dates.

For website owners and SEO professionals, the advice is:

  • Keep your XML sitemaps accurate
  • Only update lastmod dates when you change content
  • Focus on creating valuable content instead of technical shortcuts
  • Be patient with ethical SEO strategies – they provide lasting results

It can be frustrating to see competitors seemingly benefit from questionable tactics. However, Mueller suggests these advantages don’t last long and can backfire.

This exchange confirms that Google’s smart algorithms can recognize and eventually ignore artificial attempts to manipulate ranking signals.



Google Now Allows Top Ads To Appear At Bottom Of Search Results via @sejournal, @brookeosmundson

Google Ads introduced a quiet but impactful change last week to how ads can show up on the search results page.

High-performing ads that used to be eligible only for top-of-page positions can now also appear at the bottom.

This means advertisers can show up more than once on the same results page: once at the top and again at the bottom, as long as the ads meet Google’s relevance standards.

At a glance, it may feel like a small shift. But in reality, it opens the door to more exposure, smarter bidding strategies, and a clearer glimpse into how Google is thinking about ad experience.

Let’s unpack what’s changing, why it matters, and what this means for your campaigns.

What’s Changing With Search Ad Placements?

Until recently, Google followed a rule where only one ad from a single advertiser could show on a search results page. That ad could only appear in one place, either at the top or the bottom.

That restriction has now been updated.

With this change, if your ad is strong enough to qualify for the top of the page, it can also be eligible to appear again at the bottom.

That’s because Google runs separate auctions for each Search ad location.

Google reports that during testing, this increased the presence of relevant ads by 10% and led to a 14% lift in conversions from bottom-of-page placements.

In short, users weren’t just seeing more ads. They were also interacting with them more.

But this isn’t a free-for-all. Ads still need to meet relevance thresholds, and your bottom-of-page placement won’t just show up by default. It has to earn its spot, the same way your top ad does.

How This Changes the Bigger Quality Picture

For Google, this isn’t just about squeezing in more ads. It’s about improving the experience for users and advertisers at the same time.

By opening up bottom-of-page slots to high-quality ads, Google is trying to ensure users see relevant options whether they click right away or scroll to the end of the page.

It’s a subtle shift, but one that could shape how performance marketers think about their creative and bidding strategies.

It also signals how Google continues to reward quality over quantity.

If your ad copy is weak or your landing page experience is lacking, you’re unlikely to benefit from this expanded eligibility.

But if you’ve invested in thoughtful creative, user-focused content, and clear calls to action, you now have twice the chance to show up and potentially win more conversions.

This move also speaks to inventory optimization. By filling both top and bottom ad spots with the best content available, Google is getting more mileage out of every search without making the results page feel like a cluttered ad wall.

Does This Conflict With Google’s Unfair Advantage Policy?

At first, many advertisers were confused, since Google updated its Unfair Advantage policy earlier this month.

The Unfair Advantage policy bars advertisers from “double serving” to a single ad location.

Double serving refers to showing multiple ads from different accounts or domains that all point to the same business. Google cracked down on that to ensure fair competition and to prevent advertisers from dominating a single auction by crowding out competitors.

This new update doesn’t violate that principle.

In fact, Google clarified that this change is possible because top and bottom placements run in separate auctions. That means your ad isn’t “beating out” your own other ad in the same auction. It’s simply earning placement in two different areas of the page.

So long as the ads are relevant and helpful to the user, Google’s policy allows for this kind of visibility.

What Advertisers Need To Know About This Change

This update gives advertisers new levers to pull — but only if you know where to look.

First, this isn’t something you need to opt into. If your ads are eligible based on performance, they’ll start showing in both places automatically. But that doesn’t mean you should take a hands-off approach.

Here are some things to keep in mind:

  • Monitor your impression share by position. Use segmentation in Google Ads to break down where your ads are showing (top vs. other) and compare performance.
  • Watch for changes in CTR and Conversion Rate. You may see stronger performance from one position over the other. That can inform whether you want to bid more aggressively, or refine copy and assets to align with what works best.
  • Revisit your Quality Score drivers. With Google prioritizing relevance, improving expected CTR, ad relevance, and landing page experience will help you capture more real estate.
  • Layer in automation, but stay strategic. Smart Bidding might adjust bids automatically to take advantage of new placement opportunities, but make sure you’re reviewing your placement data regularly. Algorithms don’t always know your goals better than you do.
  • Look beyond vanity metrics. Bottom-of-page clicks may cost less, but be sure they’re actually driving value. Focus on leads, sales, or other business outcomes, rather than just volume.

Moving Forward with Better Search Ads

Google’s decision to allow top-performing ads to also appear at the bottom of search results reflects an ongoing effort to enhance user experience and ad relevance.

While the change offers new opportunities for advertisers, it also emphasizes the importance of ad quality and strategic planning.

By understanding and adapting to these updates, advertisers can better position themselves for success in the evolving landscape of search advertising.

If you’ve been focused on creating better ads and improving your landing pages, this update is in your favor.

Reddit Mods Accuse AI Researchers Of Impersonating Sexual Assault Victims via @sejournal, @martinibuster

Researchers testing the ability of AI to influence people’s opinions violated the ChangeMyView subreddit’s rules and used deceptive practices that allegedly were not approved by their ethics committee, including impersonating victims of sexual assault and using background information about Reddit users to manipulate them.

The researchers argued that the controlled conditions of prior studies may have introduced biases. Their solution was to introduce AI bots into a live environment without telling the forum members they were interacting with one. Their audience was unsuspecting Reddit users in the Change My View (CMV) subreddit (r/ChangeMyView), even though this violated the subreddit’s rules, which prohibit the use of undisclosed AI bots.

After the research was finished the researchers disclosed their deceit to the Reddit moderators who subsequently posted a notice about it in the subreddit, along with a draft copy of the completed research paper.

Ethical Questions About Research Paper

The CMV moderators posted a discussion that underlines that the subreddit prohibits undisclosed bots and that permission to conduct this experiment would never have been granted:

“CMV rules do not allow the use of undisclosed AI generated content or bots on our sub. The researchers did not contact us ahead of the study and if they had, we would have declined. We have requested an apology from the researchers and asked that this research not be published, among other complaints. As discussed below, our concerns have not been substantively addressed by the University of Zurich or the researchers.”

The fact that the researchers violated Reddit’s rules was completely absent from the research paper.

Researchers Claim Research Was Ethical

While the researchers omit that the research broke the rules of the subreddit, they do create the impression that it was ethical, stating that their research methodology was approved by an ethics committee and that all generated comments were checked to ensure they were not harmful or unethical:

“In this pre-registered study, we conduct the first large-scale field experiment on LLMs’ persuasiveness, carried out within r/ChangeMyView, a Reddit community of almost 4M users and ranking among the top 1% of subreddits by size. In r/ChangeMyView, users share opinions on various topics, challenging others to change their perspectives by presenting arguments and counterpoints while engaging in a civil conversation. If the original poster (OP) finds a response convincing enough to reconsider or modify their stance, they award a ∆ (delta) to acknowledge their shift in perspective.

…The study was approved by the University of Zurich’s Ethics Committee… Importantly, all generated comments were reviewed by a researcher from our team to ensure no harmful or unethical content was published.”

The moderators of the ChangeMyView subreddit dispute the researchers’ claim to the ethical high ground:

“During the experiment, researchers switched from the planned “values based arguments” originally authorized by the ethics commission to this type of “personalized and fine-tuned arguments.” They did not first consult with the University of Zurich ethics commission before making the change. Lack of formal ethics review for this change raises serious concerns.”

Why Reddit Moderators Believe Research Was Unethical

The Change My View subreddit moderators raised multiple concerns about why they believe the researchers engaged in a grave breach of ethics, including impersonating victims of sexual assault. They argue that this qualifies as “psychological manipulation” of the original posters (OPs), the people who started each discussion.

The Reddit moderators posted:

“The researchers argue that psychological manipulation of OPs on this sub is justified because the lack of existing field experiments constitutes an unacceptable gap in the body of knowledge. However, If OpenAI can create a more ethical research design when doing this, these researchers should be expected to do the same. Psychological manipulation risks posed by LLMs is an extensively studied topic. It is not necessary to experiment on non-consenting human subjects.

AI was used to target OPs in personal ways that they did not sign up for, compiling as much data on identifying features as possible by scrubbing the Reddit platform. Here is an excerpt from the draft conclusions of the research.

Personalization: In addition to the post’s content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM.

Some high-level examples of how AI was deployed include:

  • AI pretending to be a victim of rape
  • AI acting as a trauma counselor specializing in abuse
  • AI accusing members of a religious group of “caus[ing] the deaths of hundreds of innocent traders and farmers and villagers.”
  • AI posing as a black man opposed to Black Lives Matter
  • AI posing as a person who received substandard care in a foreign hospital.”

The moderator team has filed a complaint with the University of Zurich.

Are AI Bots Persuasive?

The researchers discovered that AI bots are highly persuasive and do a better job of changing people’s minds than humans can.

The research paper explains:

“Implications. In a first field experiment on AI-driven persuasion, we demonstrate that LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.”

One of the findings was that humans were unable to identify when they were talking to a bot, and the researchers (without apparent irony) encourage social media platforms to deploy better ways to identify and block AI bots:

“Incidentally, our experiment confirms the challenge of distinguishing human from AI-generated content… Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets… which could seamlessly blend into online communities.

Given these risks, we argue that online platforms must proactively develop and implement robust detection mechanisms, content verification protocols, and transparency measures to prevent the spread of AI-generated manipulation.”

Takeaways:

  • Ethical Violations in AI Persuasion Research
    Researchers conducted a live AI persuasion experiment without Reddit’s consent, violating subreddit rules and, allegedly, ethical norms.
  • Disputed Ethical Claims
    Researchers claimed the ethical high ground by citing ethics board approval but omitted any mention of the rule violations; moderators argue they engaged in undisclosed psychological manipulation.
  • Use of Personalization in AI Arguments
    AI bots allegedly used scraped personal data to create highly tailored arguments targeting Reddit users.
  • Reddit Moderators Allege Profoundly Disturbing Deception
    The Reddit moderators claim that the AI bots impersonated sexual assault victims, trauma counselors, and other emotionally charged personas in an effort to manipulate opinions.
  • AI’s Superior Persuasiveness and Detection Challenges
    The researchers claim that AI bots proved more persuasive than humans and remained undetected by users, raising concerns about future bot-driven manipulation.
  • Research Paper Inadvertently Makes Case For Why AI Bots Should Be Banned From Social Media
    The study highlights the urgent need for social media platforms to develop tools for detecting and verifying AI-generated content. Ironically, the research paper itself is a reason why AI bots should be more aggressively banned from social media and forums.

Researchers from the University of Zurich tested whether AI bots could persuade people more effectively than humans by secretly deploying personalized AI arguments on the ChangeMyView subreddit without user consent, violating platform rules and allegedly going outside the ethical standards approved by their university ethics board. Their findings show that AI bots are highly persuasive and difficult to detect, but the way the research itself was conducted raises ethical concerns.

Read the concerns posted by the ChangeMyView subreddit moderators:

Unauthorized Experiment on CMV Involving AI-generated Comments

Featured Image by Shutterstock/Ausra Barysiene and manipulated by author

Navigating The Complexities Of International PPC Working With Agencies via @sejournal, @brookeosmundson

Running PPC campaigns in one country is challenging enough. Add multiple countries, languages, regulatory quirks, and agency partners into the mix, and things get complicated fast.

If you’re overseeing paid media at large enterprises or multi-location brands, international PPC isn’t just a scale problem. It’s a coordination and consistency problem.

You’re not just launching more campaigns; you’re managing different market expectations, aligning with regional teams, and juggling multiple agencies – each with their own style, processes, and priorities.

So, how do you keep your campaigns on track across borders, without losing your mind (or your brand consistency)?

Let’s break it down.

The Realities Of International PPC Management

In a perfect world, every agency partner would follow your brand guidelines to a ‘T’, campaign messaging would be flawlessly localized, and all markets would operate under the same strategy.

The reality? Not so much.

Some of the most common challenges marketing managers face:

  • Lack of consistency: Creative assets, bidding strategies, or keyword targeting often vary widely between markets. This leads to a disjointed user experience and diluted brand impact.
  • Overlapping or conflicting efforts: Without clear global oversight, multiple agencies may compete in the same auctions or target the same audiences, driving up costs unnecessarily.
  • Limited visibility: Reporting formats differ. Some agencies use custom dashboards; others send PDFs. Comparing performance becomes a spreadsheet nightmare.
  • Varying levels of expertise: Not all agencies are created equal. Some have deep experience in a particular market; others learn as they go.
  • Regulatory hurdles: Different countries have different rules around data collection, targeting, and ad content – and it’s easy to miss a compliance detail if you’re not on top of local policies.

The takeaway? International PPC isn’t just about more campaigns. It’s about more moving parts.

Aligning Global Strategy With Local Execution

It’s tempting to create a single strategy and roll it out globally, but that rarely works.

What resonates in the U.S. may fall flat in Germany or South Korea. Your job as a marketing manager is to set the strategic foundation while giving local teams enough flexibility to adapt.

Here’s how to strike that balance:

  • Create a global playbook: Define your core objectives, brand voice, performance metrics, and non-negotiables. Make it clear which elements must be consistent across markets (e.g., logo usage, value propositions) and which can be localized (e.g., promotions, tone, CTAs).
  • Set up centralized tracking and reporting: Use tools like Looker Studio, Funnel, or Tableau to consolidate data from different platforms and agencies. A unified reporting view helps you spot inconsistencies and optimize faster.
  • Define roles and responsibilities: Who owns budget allocation? Who reviews creative? Who has the final say on the copy? Spell this out. Confusion around ownership slows campaigns down.
  • Use regular syncs to stay aligned: Host monthly or bi-weekly meetings with all agency partners. Even if the agendas are light, the face time builds accountability.
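
To make the centralized-reporting idea concrete, here is a minimal sketch in Python (stdlib only) that normalizes CSV exports from agencies that name the same metrics differently. The column aliases and sample data are hypothetical:

```python
import csv
import io

# Hypothetical column aliases: each agency exports the same metrics
# under different header names (an assumption for illustration).
ALIASES = {
    "spend": {"spend", "cost", "media_cost"},
    "conversions": {"conversions", "purchases", "orders"},
    "revenue": {"revenue", "conv_value", "sales"},
}

def normalize_row(row):
    """Map an agency-specific row onto canonical metric names."""
    out = {}
    for canonical, names in ALIASES.items():
        for key, value in row.items():
            if key.strip().lower() in names:
                out[canonical] = float(value)
    return out

def consolidate(reports):
    """reports: dict of market -> raw CSV text. Returns one unified table."""
    unified = []
    for market, raw in reports.items():
        for row in csv.DictReader(io.StringIO(raw)):
            unified.append({"market": market, **normalize_row(row)})
    return unified

reports = {
    "DE": "cost,orders,conv_value\n1000,50,4000\n",
    "FR": "spend,purchases,revenue\n800,32,2400\n",
}
for rec in consolidate(reports):
    print(rec)
```

A dashboard tool like Looker Studio does this mapping for you, but the principle is the same: agree on canonical metric names once, then every agency feed becomes comparable.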

For example, say you’re a global hotel chain operating on multiple continents. A great place to start is a shared creative playbook that lets each region tailor its offers:

  • Ski packages in Switzerland.
  • Beach getaways in Spain.

A shared creative playbook helps keep brand visuals consistent while making local campaigns relevant.

Bottom line: Your global strategy is the blueprint, but you still need local architects to tailor the build.

Choosing And Managing Agency Partners

If you’re working with multiple agencies across regions, things can quickly get siloed.

One agency might crush it in Canada while another underperforms in France. Your role is to manage these relationships without getting stuck in the weeds.

Some tips to keep things streamlined:

  • Standardize onboarding: Whether you’re hiring a new agency in Mexico or expanding a partner’s remit into the UK, start with a structured onboarding checklist: tech stack access, brand guidelines, reporting templates, key contacts, etc.
  • Evaluate based on shared key performance indicators (KPIs): Hold every agency accountable to the same high-level metrics (e.g., return on ad spend, cost per acquisition, conversion volume), even if market-specific tactics differ. This makes it easier to identify outliers.
  • Encourage cross-agency collaboration: Set up a shared Slack channel or quarterly town halls where agency teams can exchange learnings. One partner’s success story could inspire a breakthrough elsewhere.
  • Avoid micromanagement, but stay involved: Agencies need room to operate, but that doesn’t mean you go completely hands-off. Review ad copy regularly. Ask questions about performance drivers. Push for experimentation.
  • Consider a lead regional agency model: Some brands appoint one agency as the lead for a particular continent or cluster. This partner acts as a point of coordination, helping to roll out strategies more efficiently.
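
The shared-KPI idea above is simple arithmetic: ROAS is revenue divided by spend, and CPA is spend divided by conversions. A minimal sketch, with hypothetical numbers and an arbitrary outlier threshold:

```python
# Hypothetical per-agency totals for illustration only.
agencies = {
    "Canada": {"spend": 12000.0, "revenue": 60000.0, "conversions": 400},
    "France": {"spend": 10000.0, "revenue": 18000.0, "conversions": 90},
}

def kpis(totals):
    """Shared KPIs: return on ad spend and cost per acquisition."""
    return {
        "roas": totals["revenue"] / totals["spend"],
        "cpa": totals["spend"] / totals["conversions"],
    }

for market, totals in agencies.items():
    m = kpis(totals)
    # The 2.0 ROAS threshold is an assumption; set it per your economics.
    flag = " <- outlier?" if m["roas"] < 2.0 else ""
    print(f"{market}: ROAS {m['roas']:.1f}, CPA {m['cpa']:.2f}{flag}")
```

Holding every partner to the same two or three ratios makes underperformers visible even when market tactics differ.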

Say you’re running a consumer electronics brand’s PPC efforts, and the company is looking to expand into Europe, the Middle East, and Africa. It may be tempting to keep all that work in-house, but doing so can essentially double your workload and drag down your existing campaigns’ performance as your focus shifts.

Instead, consider hiring an agency for the EMEA region, with your role shifting to overseeing its operations.

This frees up your time to focus on your core markets while keeping you visible enough in the expansion region to understand what’s working and what’s not.

The result: fewer duplicated efforts, standardized reporting, and improved speed-to-market.

Your agencies aren’t just vendors; they’re extensions of your team. Treat them like it.

Dealing With Localization Without Losing Brand Consistency

One of the biggest risks in international PPC is watering down your brand or letting it become inconsistent. When you allow each market to fully customize messaging, things can veer off course quickly.

However, localization doesn’t mean reinventing your brand. It means adapting the core message to fit cultural norms, search behavior, and language nuances.

Here are a few ways to do that well:

  • Provide flexible brand guidelines: Instead of a rigid rulebook, create a toolkit. Include brand values, tone of voice examples, and dos/don’ts – but leave space for creativity.
  • Use transcreation, not translation: Translating ads word-for-word often leads to awkward or irrelevant messaging. Invest in native-language copywriters who understand local search intent.
  • Vet creative with local experts: Even if your agencies are global, ensure that someone close to the market signs off on copy and visuals. One poorly placed idiom or image can derail an entire campaign.
  • Test and learn by market: What works in France might not work in Spain. Build in budget and time to A/B test creative and offers in each country before scaling.

For example, say you’re running back-to-school ads for an apparel brand in the United States and Japan. Everyone has a back-to-school need, right?

True, but running the ads at the same time would be a mistake: Japan’s school year starts in the spring, while the U.S. school year typically starts in the fall.

Adjusting campaign timing based on regions can help lead to an uplift in engagement.

Consistency doesn’t mean sameness. It means every ad should feel like your brand, even if it says something slightly different.

Navigating Regulatory And Platform Differences

The compliance side of international PPC often gets overlooked – until it’s a problem. From GDPR in Europe to ad content rules in China, regulatory pitfalls can stall or even shut down campaigns.

Keep these guardrails in place:

  • Work with legal early: Involve your legal or compliance teams in the planning process. Get clarity on what’s allowed in each region before campaigns launch.
  • Stay up to date with platform policies: Google Ads, Meta, and Microsoft all have country-specific ad restrictions. Review them regularly. What flies in the U.S. might get disapproved in Germany.
  • Use regional ad accounts: If you’re running large-scale campaigns, separate ad accounts by region. This makes it easier to manage billing, user access, and compliance settings.
  • Document your approach: Create a shared doc outlining how your team handles regulatory compliance, consent tracking, and ad policy enforcement. It helps new team members and agencies get up to speed quickly.

When in doubt, err on the side of caution. It’s better to delay a campaign than clean up a PR or legal mess later.

When To Consolidate Vs. Decentralize

One of the biggest international strategic decisions you’ll face: Should you centralize all campaigns under one global agency, or let each region work with its own partner?

There’s no perfect answer, but here’s a framework to help you decide:

  • Consolidate if:
    • You need unified reporting and brand control.
    • You operate in fewer countries with similar languages or cultures.
    • Your internal team is small and needs a streamlined workflow.
  • Decentralize if:
    • You’re in highly diverse markets with unique buying behaviors.
    • Local teams have strong relationships with trusted regional agencies.
    • You want to test different approaches and compare outcomes.

Some brands use a hybrid approach – central strategy with local execution. The key is to revisit your setup as you grow. What worked in five markets may not work in 15.

What International PPC Success Looks Like

International PPC management isn’t about perfection. It’s about progress, alignment, and adaptability.

Success doesn’t always mean a flawless launch. It might mean catching a costly bidding overlap between two regions. Or, spotting a creative insight from Japan that you can scale to the UK.

At the end of the day, your job as a marketing manager is to keep the wheels turning, the messaging on-brand, and the teams aligned.

Global growth isn’t clean or linear, but with the right agency relationships, guardrails, and communication processes in place, it is manageable – and scalable.

Just don’t expect to do it alone.

Featured Image: Ground Picture/Shutterstock

How LLMs Interpret Content: How To Structure Information For AI Search via @sejournal, @cshel

In the SEO world, when we talk about how to structure content for AI search, we often default to structured data – Schema.org, JSON-LD, rich results, knowledge graph eligibility – the whole shooting match.

While that layer of markup is still useful in many scenarios, this isn’t another article about how to wrap your content in tags.

Structuring Content Isn’t The Same As Structured Data

Instead, we’re going deeper into something more fundamental and arguably more important in the age of generative AI: How your content is actually structured on the page and how that influences what large language models (LLMs) extract, understand, and surface in AI-powered search results.

Structured data is optional. Structured writing and formatting are not.

If you want your content to show up in AI Overviews, Perplexity summaries, ChatGPT citations, or any of the increasingly common “direct answer” features driven by LLMs, the architecture of your content matters: Headings. Paragraphs. Lists. Order. Clarity. Consistency.

In this article, I’m unpacking how LLMs interpret content — and what you can do to make sure your message is not just crawled, but understood.

How LLMs Actually Interpret Web Content

Let’s start with the basics.

Unlike traditional search engine crawlers that rely heavily on markup, metadata, and link structures, LLMs interpret content differently.

They don’t scan a page the way a bot does. They ingest it, break it into tokens, and analyze the relationships between words, sentences, and concepts using attention mechanisms.

They’re not looking for a tag or a JSON-LD snippet to tell them what a page is about. They’re looking for semantic clarity: Does this content express a clear idea? Is it coherent? Does it answer a question directly?

LLMs like GPT-4 or Gemini analyze:

  • The order in which information is presented.
  • The hierarchy of concepts (which is why headings still matter).
  • Formatting cues like bullet points, tables, bolded summaries.
  • Redundancy and reinforcement, which help models determine what’s most important.

This is why poorly structured content – even if it’s keyword-rich and marked up with schema – can fail to show up in AI summaries, while a clear, well-formatted blog post without a single line of JSON-LD might get cited or paraphrased directly.

Why Structure Matters More Than Ever In AI Search

Traditional search was about ranking; AI search is about representation.

When a language model generates a response to a query, it’s pulling from many sources – often sentence by sentence, paragraph by paragraph.

It’s not retrieving a whole page and showing it. It’s building a new answer based on what it can understand.

What gets understood most reliably?

Content that is:

  • Segmented logically, so each part expresses one idea.
  • Consistent in tone and terminology.
  • Presented in a format that lends itself to quick parsing (think FAQs, how-to steps, definition-style intros).
  • Written with clarity, not cleverness.

AI search engines don’t need schema to pull a step-by-step answer from a blog post.

But, they do need you to label your steps clearly, keep them together, and not bury them in long-winded prose or interrupt them with calls to action, pop-ups, or unrelated tangents.

Clean structure is now a ranking factor – not in the traditional SEO sense, but in the AI citation economy we’re entering.

What LLMs Look For When Parsing Content

Here’s what I’ve observed (both anecdotally and through testing across tools like Perplexity, ChatGPT Browse, Bing Copilot, and Google’s AI Overviews):

  • Clear Headings And Subheadings: LLMs use heading structure to understand hierarchy. Pages with proper H1–H2–H3 nesting are easier to parse than walls of text or div-heavy templates.
  • Short, Focused Paragraphs: Long paragraphs bury the lede. LLMs favor self-contained thoughts. Think one idea per paragraph.
  • Structured Formats (Lists, Tables, FAQs): If you want to get quoted, make it easy to lift your content. Bullets, tables, and Q&A formats are goldmines for answer engines.
  • Defined Topic Scope At The Top: Put your TL;DR early. Don’t make the model (or the user) scroll through 600 words of brand story before getting to the meat.
  • Semantic Cues In The Body: Words like “in summary,” “the most important,” “step 1,” and “common mistake” help LLMs identify relevance and structure. There’s a reason so much AI-generated content uses those “giveaway” phrases. It’s not because the model is lazy or formulaic. It’s because it actually knows how to structure information in a way that’s clear, digestible, and effective, which, frankly, is more than can be said for a lot of human writers.
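
To see why heading nesting matters, here is a rough illustration of how easily a well-nested page collapses into a clean outline. It uses Python’s stdlib HTML parser and is only a toy; real parsing pipelines are far more involved:

```python
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    """Collect (level, text) pairs for h1-h3, roughly as a parser
    might recover a page's hierarchy."""
    def __init__(self):
        super().__init__()
        self.level = None
        self.outline = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.level = int(tag[1])

    def handle_data(self, data):
        if self.level and data.strip():
            self.outline.append((self.level, data.strip()))

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self.level = None

html = "<h1>Guide</h1><p>intro</p><h2>Step 1</h2><h3>Details</h3>"
parser = HeadingOutline()
parser.feed(html)
for level, text in parser.outline:
    print("  " * (level - 1) + text)
```

A page where everything is an H1 (or a pile of styled divs) yields a flat, uninformative outline, which is exactly the signal loss described above.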

A Real-World Example: Why My Own Article Didn’t Show Up

In December 2024, I wrote a piece about the relevance of schema in AI-first search.

It was structured for clarity and timeliness, and it was highly relevant to this conversation, but it didn’t show up in my research queries for this article (the one you are presently reading). The reason? I didn’t use the term “LLM” in the title or slug.

All of the articles returned in my search had “LLM” in the title. Mine said “AI Search” but didn’t mention LLMs explicitly.

You might assume that a large language model would understand “AI search” and “LLMs” are conceptually related – and it probably does – but understanding that two things are related and choosing what to return based on the prompt are two different things.

Where does the model get its retrieval logic? From the prompt. It interprets your question literally.

If you say, “Show me articles about LLMs using schema,” it will surface content that directly includes “LLMs” and “schema” – not necessarily content that’s adjacent, related, or semantically similar, especially when it has plenty to choose from that contains the words in the query (a.k.a. the prompt).

So, even though LLMs are smarter than traditional crawlers, retrieval is still rooted in surface-level cues.

This might sound suspiciously like keyword research still matters – and yes, it absolutely does. Not because LLMs are dumb, but because search behavior (even AI search) still depends on how humans phrase things.

The retrieval layer – the layer that decides what’s eligible to be summarized or cited – is still driven by surface-level language cues.

What Research Tells Us About Retrieval

Even recent academic work supports this layered view of retrieval.

A 2023 research paper by Doostmohammadi et al. found that simpler, keyword-matching techniques, like a method called BM25, often led to better results than approaches focused solely on semantic understanding.

The improvement was measured through a drop in perplexity, which tells us how confident or uncertain a language model is when predicting the next word.

In plain terms: Even in systems designed to be smart, clear and literal phrasing still made the answers better.
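
For the curious, BM25 itself is short enough to sketch. The snippet below is a minimal, illustrative implementation (toy corpus, conventional default k1 and b values), showing how a document that literally contains the query terms outscores a merely related one:

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Minimal BM25, the keyword-matching baseline mentioned above.
    doc and corpus entries are lists of lowercase tokens."""
    avgdl = sum(len(d) for d in corpus) / len(corpus)
    score = 0.0
    for term in query_terms:
        n = sum(1 for d in corpus if term in d)  # docs containing the term
        idf = math.log((len(corpus) - n + 0.5) / (n + 0.5) + 1)
        f = doc.count(term)                      # term frequency in this doc
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [
    "schema markup helps llms parse structured content".split(),
    "ai search rewards clear headings and short paragraphs".split(),
]
query = ["llms", "schema"]
scores = [bm25_score(query, d, corpus) for d in corpus]
print(scores)
```

The second document is arguably just as relevant to AI search, but because it never uses the literal query terms, it scores zero, which is the retrieval behavior the “LLM in the title” anecdote above ran into.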

So, the lesson isn’t just to use the language the models have been trained to recognize. The real lesson is: If you want your content to be found, understand how AI search works as a system – a chain of prompts, retrieval, and synthesis – and make sure you’re aligned at the retrieval layer.

This isn’t about the limits of AI comprehension. It’s about the precision of retrieval.

Language models are incredibly capable of interpreting nuanced content, but when they’re acting as search agents, they still rely on the specificity of the queries they’re given.

That makes terminology, not just structure, a key part of being found.

How To Structure Content For AI Search

If you want to increase your odds of being cited, summarized, or quoted by AI-driven search engines, it’s time to think less like a writer and more like an information architect – and structure content for AI search accordingly.

That doesn’t mean sacrificing voice or insight, but it does mean presenting ideas in a format that makes them easy to extract, interpret, and reassemble.

Core Techniques For Structuring AI-Friendly Content

Here are some of the most effective structural tactics I recommend:

Use A Logical Heading Hierarchy

Structure your pages with a single clear H1 that sets the context, followed by H2s and H3s that nest logically beneath it.

LLMs, like human readers, rely on this hierarchy to understand the flow and relationship between concepts.

If every heading on your page is an H1, you’re signaling that everything is equally important, which means nothing stands out.

Good heading structure is not just semantic hygiene; it’s a blueprint for comprehension.

Keep Paragraphs Short And Self-Contained

Every paragraph should communicate one idea clearly.

Walls of text don’t just intimidate human readers; they also increase the likelihood that an AI model will extract the wrong part of the answer or skip your content altogether.

This is closely tied to readability metrics like the Flesch Reading Ease score, which rewards shorter sentences and simpler phrasing.

While it may pain those of us who enjoy a good, long, meandering sentence (myself included), clarity and segmentation help both humans and LLMs follow your train of thought without derailing.
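
For reference, the Flesch Reading Ease formula is 206.835 − 1.015 × (words ÷ sentences) − 84.6 × (syllables ÷ words). Here is a rough sketch using a crude vowel-group syllable heuristic (real syllable counting is messier, so treat the score as approximate):

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Standard Flesch Reading Ease; higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

short = "We write short sentences. Each one holds a single idea."
print(round(flesch_reading_ease(short), 1))
```

Short sentences and few syllables per word push the score up, which is exactly what the one-idea-per-paragraph advice produces as a side effect.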

Use Lists, Tables, And Predictable Formats

If your content can be turned into a step-by-step guide, numbered list, comparison table, or bulleted breakdown, do it. AI summarizers love structure, and so do users.

Frontload Key Insights

Don’t save your best advice or most important definitions for the end.

LLMs tend to prioritize what appears early in the content. Give your thesis, definition, or takeaway up top, then expand on it.

Use Semantic Cues

Signal structure with phrasing like “Step 1,” “In summary,” “Key takeaway,” “Most common mistake,” and “To compare.”

These phrases help LLMs (and readers) identify the role each passage plays.

Avoid Noise

Interruptive pop-ups, modal windows, endless calls-to-action (CTAs), and disjointed carousels can pollute your content.

Even if the user closes them, they’re often still present in the Document Object Model (DOM), and they dilute what the LLM sees.

Think of your content like a transcript: What would it sound like if read aloud? If it’s hard to follow in that format, it might be hard for an LLM to follow, too.

The Role Of Schema: Still Useful, But Not A Magic Bullet

Let’s be clear: Structured data still has value. It helps search engines understand content, populate rich results, and disambiguate similar topics.

However, LLMs don’t require it to understand your content.

If your site is a semantic dumpster fire, schema might save you, but wouldn’t it be better to avoid building a dumpster fire in the first place?

Schema is a helpful boost, not a magic bullet. Prioritize clear structure and communication first, and use markup to reinforce – not rescue – your content.

How Schema Still Supports AI Understanding

That said, Google has recently confirmed that its LLM (Gemini), which powers AI Overviews, does leverage structured data to help understand content more effectively.

In fact, John Mueller stated that schema markup is “good for LLMs” because it gives models clearer signals about intent and structure.

That doesn’t contradict the point; it reinforces it. If your content isn’t already structured and understandable, schema can help fill the gaps. It’s a crutch, not a cure.

Schema is a helpful boost but not a substitute for structure and clarity.

In AI-driven search environments, we’re seeing content without any structured data show up in citations and summaries because the core content was well-organized, well-written, and easily parsed.

In short:

  • Use schema when it helps clarify the intent or context.
  • Don’t rely on it to fix bad content or a disorganized layout.
  • Prioritize content quality and layout before markup.

The future of content visibility is built on how well you communicate, not just how well you tag.

Conclusion: Structure For Meaning, Not Just For Machines

Optimizing for LLMs doesn’t mean chasing new tools or hacks. It means doubling down on what good communication has always required: clarity, coherence, and structure.

If you want to stay competitive, you’ll need to structure content for AI search just as carefully as you structure it for human readers.

The best-performing content in AI search isn’t necessarily the most optimized. It’s the most understandable. That means:

  • Anticipating how content will be interpreted, not just indexed.
  • Giving AI the framework it needs to extract your ideas.
  • Structuring pages for comprehension, not just compliance.
  • Anticipating and using the language your audience uses, because LLMs respond literally to prompts and retrieval depends on those exact terms being present.

As search shifts from links to language, we’re entering a new era of content design. One where meaning rises to the top, and the brands that structure for comprehension will rise right along with it.


Featured Image: Igor Link/Shutterstock

Google Updates Gemini/Vertex AI User Agent Documentation via @sejournal, @martinibuster

Google updated the documentation for the Google-Extended user agent, which publishers can use to control whether Google Gemini and Vertex use their data for training purposes or for grounding AI answers.

Updated Guidance

Google updated their guidance on Google-Extended based on publisher feedback for the purpose of improving clarity and adding more specific details.

Previous Documentation:

“Google-Extended is a standalone product token that web publishers can use to manage whether their sites help improve Gemini Apps and Vertex AI generative APIs, including future generations of models that power those products. Grounding with Google Search on Vertex AI does not use web pages for grounding that have disallowed Google-Extended.”

Updated Version

The updated documentation provides more detail and an easier-to-understand explanation of what the user agent is for and what blocking it accomplishes.

“Google-Extended is a standalone product token that web publishers can use to manage whether content Google crawls from their sites may be used for training future generations of Gemini models that power Gemini Apps and Vertex AI API for Gemini and for grounding (providing content from the Google Search index to the model at prompt time to improve factuality and relevancy) in Gemini Apps and Grounding with Google Search on Vertex AI.”
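
In practice, a publisher opts out with a standard robots.txt rule naming the product token (the site-wide `Disallow: /` below is illustrative; narrower paths work too):

```
User-agent: Google-Extended
Disallow: /
```

Because Google-Extended is a product token rather than a separate crawler, this rule doesn’t change what Googlebot crawls for Search; it only controls the training and grounding uses described above.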

Google-Extended Is Not A Ranking Signal

Google also updated one sentence to make clear that Google-Extended isn’t used as a ranking signal for Google Search. That means allowing Google-Extended to use your data for grounding Gemini’s AI answers won’t earn your site any ranking benefit.

Grounding refers to using web data (and knowledge base data) to supply a large language model with up-to-date, factual information, helping it avoid fabrications (also known as hallucinations).

The previous version omitted mention of ranking signals:

“Google-Extended does not impact a site’s inclusion or ranking in Google Search.”

The newly updated version specifically mentions Google-Extended in the context of a ranking signal:

“Google-Extended does not impact a site’s inclusion in Google Search nor is it used as a ranking signal in Google Search.”

Documentation Matches Other Guidance

The updated documentation matches a short passage about Google-Extended that’s elsewhere in Google Search Central. The other longstanding guidance explains that Google-Extended is not a way to control how website information is shown in Google Search, demonstrating how Google-Extended is separated from Google Search.

Here’s the other guidance that’s found on a page about preventing content from appearing in Google AI Overviews:

“Google-Extended is not a method for managing how your content appears in Google Search. Instead, use other methods to manage your content in Search, such as robots.txt or other robot controls.”
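Because Google-Extended is a standard robots.txt product token, publishers can verify the effect of their rules before deploying them. The sketch below, using Python’s standard-library `robotparser`, checks an illustrative robots.txt (the rules and URL are hypothetical) that blocks Google-Extended from AI training and grounding while leaving Googlebot, and therefore Search crawling and ranking, untouched:

```python
from urllib import robotparser

# Illustrative robots.txt: disallow Google-Extended (AI training/grounding)
# while explicitly allowing Googlebot (Search crawling).
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Google-Extended is blocked from the whole site...
print(parser.can_fetch("Google-Extended", "https://example.com/article"))
# ...while Googlebot can still crawl and index the same page.
print(parser.can_fetch("Googlebot", "https://example.com/article"))
```

Because the two user agents are matched independently, disallowing Google-Extended has no effect on Googlebot, which mirrors the documentation’s point that the token is separate from Search visibility.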

Takeaways

  • Google-Extended Documentation Update:
    The Google-Extended documentation was clarified and expanded to make its purpose and effects easier to understand.
  • Separation From Ranking Signals:
    The updated guidance explicitly states that Google-Extended does not affect Google Search inclusion nor is it a ranking signal.
  • Internal Use By AI Models:
    Clarified that Google-Extended controls whether site content is used for training and grounding Gemini models.
  • Consistency Across Documentation:
    The updated language now matches longstanding guidance elsewhere in Google’s documentation, reinforcing its separation from search visibility controls.

Google updated its Google-Extended documentation to explain that publishers can block their content from being used for AI training and grounding without affecting Google Search rankings. The update also matches longstanding guidance that explains Google-Extended has no effect on how sites are indexed or ranked in Search.

Featured Image by Shutterstock/JHVEPhoto

ChatGPT Shopping Is Coming

Diligent code hounds discovered Shopify variable names in ChatGPT’s JavaScript, leading to speculation that the ecommerce platform might soon announce a new AI integration.

Several sources reported finding the Shopify-related code on April 20, 2025, beginning with Alexey Shabanov’s article in TestingCatalog.

Shabanov displayed screen captures of the code and speculated that OpenAI might have an affiliate relationship with Shopify.

The article also noted that ChatGPT already has some ecommerce capabilities in its Operator research agent, including the ability to book travel and order groceries.

ChatGPT’s Code

Finding a Shopify reference in ChatGPT’s code is as simple as opening up the inspection tool in a web browser.

In Chrome or a browser derived from it:

  • Navigate to Chatgpt.com and right-click.
  • In the menu, select “Inspect.”
  • Open the “Sources” tab.
  • Find the “assets” folder, which may be under a CDN.
  • Right-click on the folder and select “search in folder.”
  • Type “shopify” and find the references.

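The same "search in folder" step can be scripted. The minimal Python sketch below scans source text for a keyword and returns the surrounding context, the way DevTools highlights matches; the minified-JS fragment is a hypothetical stand-in, since searching the live bundle would first require downloading ChatGPT’s asset files:

```python
import re

def find_references(source: str, keyword: str, context: int = 30) -> list[str]:
    """Return a snippet of `source` around each case-insensitive
    occurrence of `keyword`, mimicking DevTools' folder search."""
    snippets = []
    for match in re.finditer(re.escape(keyword), source, re.IGNORECASE):
        start = max(0, match.start() - context)
        end = min(len(source), match.end() + context)
        snippets.append(source[start:end])
    return snippets

# Hypothetical minified-JS fragment standing in for a downloaded asset file.
sample_js = 'var e={shopify_checkout_url:null,buy_with_shopify:!1};'
for snippet in find_references(sample_js, "shopify"):
    print(snippet)
```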
Screenshot: the code for ChatGPT 4o included a Shopify reference on April 24, 2025, viewed in the Arc browser.

AI Shopping

The Shopify discovery generated buzz in the ecommerce community, but it is not surprising.

AI-powered shopping became inevitable with the mainstream adoption of large language models. Consumers who use AI tools for research and internet searches will also want to shop with them.

The apparent relationship with OpenAI is also not the first time Shopify has worked with an AI tool. In November 2024, Perplexity announced its “Buy with Pro” feature, which integrates with Shopify.

“When you ask Perplexity a shopping question, you’ll still get the precise, objective answers you expect, plus easy-to-read product cards showing the most relevant items, along with key details presented in a simple, visual format. These cards aren’t sponsored — they’re unbiased recommendations, tailored to your search by our AI,” Perplexity stated in its announcement.

Separately, on April 18, 2025, Microsoft announced a merchant program for its Copilot AI. Online shops can apply to integrate with the Copilot app to “gain visibility, acquire more customers, and generate sales.”

Finally, since at least 2024, Google Shopping has integrated AI into various features and search results.

AI Optimization

AI-powered shopping assistants present both concerns and opportunities for online sellers. A core question for merchants is whether AI shopping will be organic or paid.

Perplexity, for example, has stated that it does not earn revenue for including Shopify-based products in its results. For the moment, it’s an organic promotional channel for online merchants.

Similarly, Microsoft does not appear to require merchants to pay for inclusion in its program.

Thus, if AI tools do not charge merchants, the trend could give rise to practices similar to search engine optimization: online sellers would optimize product feeds to match likely prompts while otherwise retaining traditional SEO tactics.

Paid Inclusion

It is also possible that merchants will pay to list products in an AI shopping assistant or a premium tier.

Payment could take a few forms. Sellers could build an AI storefront not unlike Amazon Seller Central, where a monthly fee provides access to the platform and enables direct product listings.

Alternatively, merchants could list products via a feed or platform integration and then pay for featured products, which might include an add-to-cart link or click-to-buy feature, similar to ChatGPT’s apparent integration with Shopify.

Merchants would presumably treat paid inclusions like other advertising channels, weighing customer acquisition costs against average order value or customer lifetime value.
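That weighing is simple arithmetic. The sketch below is a rough go/no-go check under hypothetical numbers (the fee, order volume, order value, and margin are all illustrative, not drawn from any real AI platform’s pricing):

```python
def paid_inclusion_viable(monthly_fee: float, expected_orders: int,
                          avg_order_value: float, gross_margin: float) -> bool:
    """Rough check: does gross margin per AI-referred order exceed the
    effective customer acquisition cost implied by the platform fee?"""
    cac = monthly_fee / expected_orders            # cost per acquired order
    margin_per_order = avg_order_value * gross_margin
    return margin_per_order > cac

# Illustrative scenario: $500/month fee, 40 orders, $75 AOV, 40% margin.
# Margin per order is $30 against an effective CAC of $12.50.
print(paid_inclusion_viable(500, 40, 75.0, 0.40))  # True
```

A fuller analysis would swap average order value for customer lifetime value when repeat purchases are expected, but the comparison works the same way.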

Fragmented or Concentrated?

A second consideration for the forthcoming AI shopping trend is whether the market will be fragmented or concentrated.

The market would be fragmented if the various AI tools developed their own product discovery and indexing tools. Merchants would provide product feeds to ChatGPT, Copilot, Perplexity, Grok, and others individually or via a feed distribution tool.

Conversely, if it developed strategic partnerships with top AI providers, Shopify could concentrate AI shopping on its platform. Merchants would work with one tool — Shopify — and disseminate product info to all leading AI platforms.