OpenAI Scraps ChatGPT Watermarking Plans via @sejournal, @MattGSouthern

OpenAI has decided against implementing text watermarking for ChatGPT-generated content despite having the technology ready for nearly a year.

This decision, reported by The Wall Street Journal and confirmed in a recent OpenAI blog post update, stems from user concerns and technical challenges.

The Watermark That Wasn’t

OpenAI’s text watermarking system, designed to subtly alter word prediction patterns in AI-generated text, promised near-perfect accuracy.

Internal documents cited by the Wall Street Journal claim it was “99.9% effective” and resistant to simple paraphrasing.

However, OpenAI has revealed that more sophisticated tampering methods, like using another AI model for rewording, can easily circumvent this protection.
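For context, statistical watermarks of this general kind bias the model's word choices toward a pseudorandom "green list" of tokens that a detector can later recount. Below is a minimal illustrative sketch of that published approach (not OpenAI's actual system; the tiny vocabulary and all names are invented), which also shows why rewording defeats it: a paraphrase re-rolls the tokens and erases the statistical signal.

```python
import hashlib
import random

# Toy vocabulary for illustration only; a real model has tens of thousands of tokens.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "rug", "fast"]

def green_list(prev_token, fraction=0.5):
    """Pseudorandomly partition the vocabulary, seeded by the previous token.

    A watermarking generator nudges its next-word probabilities toward
    this 'green' half; a detector can recompute the exact same lists.
    """
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = rng.sample(VOCAB, len(VOCAB))
    return set(shuffled[: int(len(VOCAB) * fraction)])

def green_fraction(tokens):
    """Detector: fraction of tokens drawn from their green list.

    Unwatermarked (or paraphrased) text hovers near 0.5 by chance;
    watermarked text scores well above it.
    """
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)
```

A detector would flag text whose green fraction sits far above the roughly 50% expected by chance, which is why rewording with another model, which re-draws every token, wipes out the evidence.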

User Resistance: A Key Factor

Perhaps more pertinent to OpenAI’s decision was the potential user backlash.

A company survey found that while global support for AI detection tools was strong, almost 30% of ChatGPT users said they would use the service less if watermarking was implemented.

This presents a significant risk for a company rapidly expanding its user base and commercial offerings.

OpenAI also expressed concerns about unintended consequences, particularly the potential stigmatization of AI tools for non-native English speakers.

The Search For Alternatives

Rather than abandoning the concept entirely, OpenAI is now exploring potentially “less controversial” methods.

Its blog post mentions early-stage research into metadata embedding, which could offer cryptographic certainty without false positives. However, the effectiveness of this approach remains to be seen.
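Metadata embedding trades statistical inference for a hard cryptographic check: the provider signs the text, and verification either succeeds or fails, which is why false positives drop out. Here is a toy sketch of the idea using an HMAC (purely illustrative; OpenAI has not published a design, and the key here is invented):

```python
import hashlib
import hmac

SECRET_KEY = b"provider-signing-key"  # hypothetical key held by the AI provider

def sign_text(text: str) -> str:
    """Produce a metadata tag cryptographically binding the text to the key."""
    return hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()

def verify_text(text: str, tag: str) -> bool:
    """True only if the tag was produced for exactly this text.

    No false positives: human-written text without a valid tag can
    never verify, unlike a statistical watermark score.
    """
    return hmac.compare_digest(sign_text(text), tag)
```

The catch is the opposite failure mode: stripping the metadata, or editing a single character, removes the proof entirely.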

Implications For Marketers and Content Creators

This news may be a relief to the many marketers and content creators who have integrated ChatGPT into their workflows.

The absence of watermarking means greater flexibility in how AI-generated content can be used and modified.

However, it also means that ethical considerations around AI-assisted content creation remain largely in users’ hands.

Looking Ahead

OpenAI’s move shows how tough it is to balance transparency and user growth in AI.

The industry needs new ways to tackle authenticity issues as AI content booms. For now, ethical AI use is the responsibility of users and companies.

Expect more innovation here, from OpenAI or others. Finding a sweet spot between ethics and usability remains a key challenge in the AI content game.


Featured Image: Ascannio/Shutterstock

Google Found in Violation of Antitrust Law, Judge Rules via @sejournal, @MattGSouthern

A federal judge has ruled that Google violated U.S. antitrust law by illegally maintaining monopolies in the markets for general search services and general search text advertising.

Judge Amit P. Mehta of the U.S. District Court for the District of Columbia, ruling in a case brought against Google by the Justice Department, said that Google had abused its monopoly power over the search business in part by paying companies to present its search engine as the default choice on their devices and web browsers.

Judge Mehta wrote in his opinion filed Monday:

“After having carefully considered and weighed the witness testimony and evidence, the court reaches the following conclusion: Google is a monopolist, and it has acted as one to maintain its monopoly. It has violated Section 2 of the Sherman Act.”

The court found that Google abused its dominant position in several ways:

  • Paying hefty sums to ensure default status on devices and browsers
  • Leveraging user data to reinforce its search engine’s dominance
  • Illegally protecting its monopoly over search-related advertising

Key Findings Of Anticompetitive Behavior

The judge found that Google’s agreements with Apple, Mozilla, and Android partners foreclosed about 50% of the search market and 45% of the search advertising market from rivals.

These exclusive distribution agreements deprived competitors like Microsoft’s Bing of the scale needed to compete with Google in search and search advertising.

Judge Mehta concluded that Google’s conduct had anticompetitive effects:

  • Foreclosing a substantial share of the market
  • Depriving rivals of scale needed to compete
  • Reducing incentives for rivals to invest and innovate in search

The case began in 2020 and culminated in a 10-week trial last fall.

Financial Revelations

The trial disclosed financial details of Google’s default search agreements.

In 2022, Google paid Apple $20 billion for default search placement on iOS devices, an increase from $18 billion in 2021.

Additionally, Google shares 36% of Safari’s search ad revenue with Apple.

These figures highlight the value of default search positioning in the industry.

Google’s Defense & Market Share

Throughout the trial, Google maintained that its market dominance resulted from superior product quality rather than anticompetitive practices.

The company disputed the DOJ’s estimate that it held a 90% share of the search market, arguing for a broader definition of its competitive landscape.

However, Judge Mehta rejected this defense:

“Google has thwarted true competition by foreclosing its rivals from the most effective channels of search distribution.”

Ruling On Search Advertising

On search advertising, the judge found Google could charge supra-competitive prices for text ads, free of the pricing constraints rivals would otherwise impose.

However, the judge ruled in Google’s favor on some claims, finding Google doesn’t have monopoly power in the broader search advertising market.

Potential Ramifications

While Judge Mehta has yet to determine specific remedies, the ruling opens the door to potentially far-reaching consequences for Google’s business model. Possible outcomes could include:

  • Forced changes to Google’s search operations
  • Divestiture of specific business segments
  • Restrictions on default search agreements

The decision is likely to face appeals, and the final resolution may evolve, as seen in the Microsoft antitrust case of the 1990s.

Broader Context

This ruling sets a precedent that could influence other ongoing antitrust cases against tech giants like Amazon, Apple, and Meta.

It signals a shift in how century-old antitrust laws are applied to modern digital markets.

What’s Next

Google is expected to appeal the decision, potentially leading to a protracted legal battle that could shape the future of online search and digital advertising.

The Department of Justice and a group of attorneys general from 38 states and territories, who filed similar antitrust suits against Google in 2020, will be watching the next steps in this legal battle closely.


Featured Image: Sergei Elagin/Shutterstock

Facebook Attracts Gen Z Users While TikTok’s Boomer Audience Grows via @sejournal, @MattGSouthern

According to a recent report by eMarketer, Facebook is experiencing a resurgence among Gen Z users, while TikTok is gaining traction with baby boomers.

Despite these shifts, both platforms maintain a stable core user base.

Facebook’s Gen Z Renaissance

Facebook’s seeing unexpected Gen Z growth despite overall decline. U.S. Gen Z users are projected to increase from 49.0% (33.9M) in 2024 to 56.9% (40.5M) by 2028.

Key drivers:

  1. Utility: Event planning, niche groups, and Marketplace appeal to younger users.
  2. Demo shift: ~36% of Gen Z are still under 18, many just entering the social media space.

E-commerce potential strong: 75.0% of Gen Z Facebook users (15-26) bought on Marketplace last year.

However, Gen Z still trails Gen X and millennials in user numbers and time spent on the platform. Interestingly, time on Facebook is decreasing for users under 55, suggesting a shift in how younger generations interact with the platform.

TikTok’s Boomer Boom

TikTok’s Gen Z market is saturated, but it’s seeing surprising growth among boomers.

Projections show a 10.5% increase in U.S. boomer users next year, from 8.7M to 9.7M.

This modest uptick underscores TikTok’s accessibility and its appeal to older adults who want to stay culturally relevant and connected with younger relatives.

While boomers are the fastest-growing demographic, TikTok adoption rates are rising steadily across all generations, indicating the platform’s broad appeal.

Shifting Social Media Landscape

Facebook use continues to decrease across all generations except Gen Z, highlighting the platform’s evolving role in the social media ecosystem.

This trend, coupled with TikTok’s growth among older users, suggests a blurring of generational lines in social media usage. Platforms that can adapt to changing user demographics while maintaining their core appeal will be best positioned for long-term success.

Implications For Marketers

Platforms and users are constantly changing. Brands must adapt or risk losing ground to competitors.

TikTok’s boomer growth opens up new avenues for brands targeting older demographics, but marketers should be mindful of the platform’s primarily young user base.

For Facebook marketers, the growing Gen Z user base presents new opportunities, especially in e-commerce via Marketplace. However, decreasing time spent on the platform means content needs to be more engaging and targeted.

Action items:

  1. Audit strategy: Check content appeal across age groups and platforms.
  2. Diversify: Create multi-faceted strategies for different demographics while maintaining brand identity.
  3. Leverage analytics: Track engagement by age group and adjust tactics.
  4. Test and optimize: Experiment with content formats and messaging for each platform.
  5. Stay current: Follow platform updates and demographic trends.

Stay flexible and update strategies as user demographics and preferences change.

Brands that can reach across generations while respecting platform-specific norms will likely see the most success in this changing landscape.


Featured Image: Halfpoint/Shutterstock

Google Gives 5 SEO Insights On Google Trends via @sejournal, @martinibuster

Google published a video that disclosed five insights about Google Trends that could be helpful for SEO, topic research and debugging issues with search rankings. The video was hosted by Daniel Waisberg, a Search Advocate at Google.

1. What Does Google Trends Offer?

Google Trends is an official tool created by Google that shows a representation of how often people search with certain keyword phrases and how those searches have changed over time. It’s not only helpful for discovering time-based changes in search queries; it also segments queries by geographic popularity, which is useful for learning which audiences to focus content on (or even which geographic areas may be the best sources of links).

This kind of information is invaluable for debugging why a site may have issues with organic traffic as it can show seasonal and consumer trends.

2. Google Trends Only Uses A Sample Of Data

An important fact about Google Trends that Waisberg shared is that the data that Google Trends reports on is based on a statistically significant but random sample of actual search queries.

He said:

“Google Trends is a tool that provides a random sample of aggregated, anonymized and categorized Google searches.”

This does not mean the data is less accurate. “Statistically significant” means the sample is large enough to be representative of the actual search queries.

The reason Google uses a sample is that they have an enormous amount of data and it’s simply faster to work with samples that are representative of actual trends.
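The statistics behind that choice are easy to demonstrate: a large random sample reproduces the relative popularity of queries almost exactly, at a fraction of the computational cost. A toy illustration with invented query data:

```python
import random

def sample_shares(weights, n_draws, seed=0):
    """Draw a random sample from a weighted query log and return each
    query's share of the sample. With enough draws, the shares converge
    to the true underlying proportions (law of large numbers)."""
    rng = random.Random(seed)
    queries = list(weights)
    draws = rng.choices(queries, weights=[weights[q] for q in queries], k=n_draws)
    return {q: draws.count(q) / n_draws for q in queries}

# Invented "true" popularity: cat videos searched 4x as often as dog videos.
true_weights = {"cat videos": 0.8, "dog videos": 0.2}
shares = sample_shares(true_weights, n_draws=100_000)
```

With 100,000 draws, the sampled share of each query typically lands within a fraction of a percentage point of the true proportion, which is all a trends graph needs.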

3. Google Cleans Noise In The Trends Data

Daniel Waisberg also said that Google cleans the data to remove noise and data that relates to user privacy.

“The search query data is processed to remove noise in the data and also to remove anything that might compromise a user’s privacy.”

An example of private data that is removed is the full names of people. An example of “noise” in the data is search queries made by the same person over and over; Waisberg used the example of a trivial search for how to boil eggs that a person repeats every morning.

That last one, about people repeating a search query, is interesting because back in the early days of SEO, before Google Trends existed, SEOs used a public keyword volume tool from Overture (owned by Yahoo). Some SEOs poisoned that data by making thousands of searches for keyword phrases that real users rarely queried, inflating the apparent volume so that competitors would waste effort optimizing for useless keywords.

4. Google Normalizes Google Trends Data

Google doesn’t show actual search query volume, such as a million queries per day for one query and 200,000 per day for another. Instead, Google takes the point at which a keyword phrase was searched the most, treats that as the 100 mark, and plots every other point as a percentage relative to that peak. So if the most searches a query gets in a day is 1 million, a day in which it gets searched 500,000 times appears on the graph as 50. This is what it means to say that Google Trends data is normalized.
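That normalization is simple to reproduce. A short sketch (with invented daily counts) that rescales a series the way Google Trends does:

```python
def normalize_trends(volumes):
    """Rescale raw daily query counts the way Google Trends does:
    the peak day becomes 100 and every other day is expressed as a
    percentage of that peak (rounded, since Trends shows whole numbers)."""
    peak = max(volumes)
    return [round(v / peak * 100) for v in volumes]

# 1M searches on the busiest day, 500k on a slower day -> 100 and 50.
daily = [250_000, 1_000_000, 500_000]
indexed = normalize_trends(daily)  # [25, 100, 50]
```

This is why two Trends graphs can’t be compared as absolute volumes: each series is indexed to its own peak.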

5. Explore Search Queries And Topics

SEOs have focused on optimizing for keywords for over 25 years. But Google has long moved beyond keywords and has been labeling documents by the topics and even by queries they are relevant to (which also relates more to topics than keywords).

That’s why, in my opinion, one of the most useful offerings is the ability to explore the topic related to the entity behind the search query. Exploring the topic shows the query volume of all the related keywords.

The “explore by topic” tool arguably offers a more accurate idea of how popular a topic is. That matters because Google’s algorithms, machine learning systems, and AI models create representations of content at the sentence, paragraph, and document level, and those representations correspond to topics. I believe that’s one of the things Googlers are referring to when they talk about Core Topicality Systems.

Waisberg explained:

“Now, back to the Explore page. You’ll notice that, sometimes, in addition to a search term, you get an option to choose a topic. For example, when you type “cappuccino,” you can choose either the search term exactly matching “cappuccino” or the “cappuccino coffee drink” topic, which is the group of search terms that relate to that entity. These will include the exact term as well as misspellings. The topic also includes acronyms, and it covers all languages, which can be very useful, especially when looking at global data.

Using topics, you also avoid including terms that are unrelated to your interests. For example, if you’re looking at the trends for the company Alphabet, you might want to choose the Alphabet Inc company topic. If you just type “alphabet,” the trends will also include a lot of other meanings, as you can see in this example.”

The Big Picture

One of the interesting facts revealed in this video is that Google isn’t showing actual search trends; it’s showing a normalized, “statistically significant” sample of the actual search trends. A statistically significant sample is one large enough that random chance is not a factor, so it still represents the actual search trends.

The other noteworthy takeaway is the reminder that Google Trends is useful for exploring topics, which in my opinion is far more useful than Google Suggest and People Also Ask (PAA) data.

I have seen evidence that slavish optimization with Google Suggest and PAA data can make a website appear to be optimizing for search engines and not for people, which is something that Google explicitly cautions against. Those who were hit by the recent Google updates should think hard about the implications of their keyword-focused SEO practices.

Exploring and optimizing with topics won’t leave behind the statistical footprints of optimizing for search engines, because the authenticity of content built around topics will always shine through.

Watch the Google Trends video:

Intro to Google Trends data

Featured Image by Shutterstock/Luis Molinero

AI In Marketing Copy: A Surprising Sales Killer, Study Finds via @sejournal, @MattGSouthern

Research shows that name-dropping AI in marketing copy might backfire, lowering consumer trust and purchase intent.

A WSU-led study published in the Journal of Hospitality Marketing & Management found that explicitly mentioning AI in product descriptions could turn off potential buyers despite AI’s growing presence in consumer goods.

Key Findings

The study, polling 1,000+ U.S. adults, found AI-labeled products consistently underperformed.

Lead author Mesut Cicek of WSU noted: “AI mentions decrease emotional trust, hurting purchase intent.”

The tests spanned diverse categories—smart TVs, high-end electronics, medical devices, and fintech. Participants saw identical product descriptions, differing only in the presence or absence of “artificial intelligence.”

Impact on High-Risk Products

AI aversion spiked for “high-risk” offerings, which are products with steep financial or safety stakes if they fail. These items naturally trigger more consumer anxiety and uncertainty.

Cicek stated:

“We tested the effect across eight different product and service categories, and the results were all the same: it’s a disadvantage to include those kinds of terms in the product descriptions.”

Implications For Marketers

The key takeaway for marketers is to rethink AI messaging. Cicek advises weighing AI mentions carefully or developing tactics to boost emotional trust.

Spotlight product features and benefits, not AI tech. “Skip the AI buzzwords,” Cicek warns, especially for high-risk offerings.

The research underscores emotional trust as a key driver in AI product perception.

This creates a dual challenge for AI-focused firms: innovate products while simultaneously building consumer confidence in the tech.

Looking Ahead

AI’s growing presence in everyday life highlights the need for careful messaging about its capabilities in consumer-facing content.

Marketers and product teams should reassess how they present AI features, balancing transparency and user comfort.

The study, co-authored by WSU professor Dogan Gursoy and Temple University associate professor Lu Lu, lays the groundwork for further research on consumer AI perceptions across different contexts.

As AI advances, businesses must track changing consumer sentiments and adjust marketing accordingly. This work shows that while AI can boost product features, mentioning it in marketing may unexpectedly impact consumer behavior.


Featured Image: Wachiwit/Shutterstock

Google Confirms Robots.txt Can’t Prevent Unauthorized Access via @sejournal, @martinibuster

Google’s Gary Illyes confirmed a common observation that robots.txt has limited control over unauthorized access by crawlers. Gary then offered an overview of access controls that all SEOs and website owners should know.

Common Argument About Robots.txt

Seems like any time the topic of Robots.txt comes up there’s always that one person who has to point out that it can’t block all crawlers.

Gary agreed with that point:

“‘robots.txt can’t prevent unauthorized access to content,’ a common argument popping up in discussions about robots.txt nowadays; yes, I paraphrased. This claim is true, however I don’t think anyone familiar with robots.txt has claimed otherwise.”

Next, he took a deep dive into what blocking crawlers really means. He framed it as a choice between solutions that inherently control access and solutions that cede that control: a browser or crawler requests a resource, and the server can respond in multiple ways.

He listed examples of control:

  • robots.txt (leaves it up to the crawler to decide whether or not to crawl)
  • Firewalls (a WAF, or web application firewall, which controls access)
  • Password protection

Here are his remarks:

“If you need access authorization, you need something that authenticates the requestor and then controls access. Firewalls may do the authentication based on IP, your web server based on credentials handed to HTTP Auth or a certificate to its SSL/TLS client, or your CMS based on a username and a password, and then a 1P cookie.

There’s always some piece of information that the requestor passes to a network component that will allow that component to identify the requestor and control its access to a resource. robots.txt, or any other file hosting directives for that matter, hands the decision of accessing a resource to the requestor which may not be what you want. These files are more like those annoying lane control stanchions at airports that everyone wants to just barge through, but they don’t.

There’s a place for stanchions, but there’s also a place for blast doors and irises over your Stargate.

TL;DR: don’t think of robots.txt (or other files hosting directives) as a form of access authorization, use the proper tools for that for there are plenty.”

Use The Proper Tools To Control Bots

There are many ways to block scrapers, hacker bots, AI user agents, and search crawlers. Aside from robots.txt for search crawlers, a firewall of some type is a good solution because firewalls can block by behavior (like crawl rate), IP address, user agent, and country, among many other signals. Typical solutions operate at the server level with something like Fail2Ban, in the cloud with Cloudflare WAF, or as a WordPress security plugin like Wordfence.
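The distinction Illyes draws is visible in code: robots.txt compliance lives entirely in the crawler. A quick sketch using Python’s standard-library parser shows that a polite crawler must ask permission of its own accord, while nothing in the file itself can stop a scraper that skips the check:

```python
from urllib.robotparser import RobotFileParser

# A typical robots.txt, parsed directly from its lines.
rules = RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def polite_fetch_allowed(url_path: str) -> bool:
    """A compliant crawler checks the rules before fetching...

    ...but nothing stops a scraper from never calling this at all.
    Actual enforcement has to happen server-side (WAF, auth, IP rules).
    """
    return rules.can_fetch("*", url_path)
```

Enforcement, as Illyes notes, has to come from something that authenticates the requestor, not from a file the requestor is merely asked to read.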

Read Gary Illyes post on LinkedIn:

robots.txt can’t prevent unauthorized access to content

Featured Image by Shutterstock/Ollyy

Google Ads Experiencing Outage Impacting Key Features via @sejournal, @MattGSouthern

Google Ads is currently experiencing a widespread outage that has affected several components of its platform.

The incident, which began on August 1, 2024, at 15:27 UTC, has left many advertisers unable to access vital tools and reports.

According to the Google Ads Status Dashboard, multiple features are currently unavailable:

  1. Report Editor
  2. Dashboards
  3. Saved Reports
  4. Products, Product Groups, and Listing Groups pages

The issue spans the Google Ads web interface, API, and Google Ads Editor, indicating a comprehensive system-wide problem.

Ginny Marvin, Google’s Ads Liaison, addressed the situation in a public statement:

“We’re actively looking into an issue with Google Ads. Report Editor, Dashboards, and Saved Reports in the Google Ads web interface are currently down. The Products, Product Groups, and Listing Groups pages are down across the web interface, API, and Google Ads Editor. Thank you for your patience. We will provide an update as soon as we have more information.”

Impact On Advertisers

This outage will likely disrupt Google Ads advertisers’ daily operations.

Without access to the Report Editor, Dashboards, and Saved Reports, marketers may struggle to analyze campaign performance, make data-driven decisions, or present client results.

Inability to access the Products, Product Groups, and Listing Groups pages is concerning for ecommerce advertisers who use these features to manage their product feeds and shopping campaigns.

Further, the API outage means that third-party tools and custom integrations dependent on Google Ads data may also be affected, potentially causing a ripple effect.

What Advertisers Can Do

While Google works to resolve the issue, advertisers are advised to:

  1. Monitor the Google Ads Status Dashboard for real-time updates
  2. Document any discrepancies or issues noticed in campaigns during this period
  3. Prepare alternative reporting methods using previously exported data if available
  4. Communicate with clients about potential delays in reporting or campaign adjustments

As of the latest update at 7:38 p.m. UTC on August 1, 2024, Google has not provided an estimated time for resolution.

The company affirms it’s actively investigating the problem and will provide updates as more information becomes available.


Featured Image: eamesBot/Shutterstock

Google Clarifies Autocomplete Functionality Amid User Concerns via @sejournal, @MattGSouthern

Google’s Communications team recently took to X to clarify its Search Autocomplete feature following user complaints and misconceptions.

Autocomplete’s Purpose & Functionality

Addressing claims of search term censorship, Google stated:

“Autocomplete is just a tool to help you complete a search quickly.”

Google notes that users can always search for their intended queries regardless of Autocomplete predictions.

Recent Issues Explained

Google acknowledged two specific problems that had sparked user concerns.

Addressing lack of predictions for certain political queries, Google says:

“Autocomplete wasn’t providing predictions for queries about the assassination attempt against former President Trump.”

Google claims this was due to “built-in protections related to political violence” that were outdated.

The company said it’s working on improvements that are “already rolling out.”

Google also addressed missing autocomplete predictions for some political figures.

Google described this as:

“… a bug that spanned the political spectrum, also affecting queries for several past presidents, such as former President Obama.”

The issue extended to other queries like “vice president k,” which showed no predictions.

Google confirmed it’s “made an update that has improved these predictions across the board.”

Algorithmic Nature Of Predictions

Google emphasized the algorithmic basis of its prediction and labeling systems, stating:

“While our systems work very well most of the time, you can find predictions that may be unexpected or imperfect, and bugs will occur,”

The company noted that such issues are not unique to their platform, stating:

“Many platforms, including the one we’re posting on now, will show strange or incomplete predictions at various times.”

Commitment To Improvement

The thread concluded with a pledge from Google to address issues as they arise:

“For our part, when issues come up, we will make improvements so you can find what you’re looking for, quickly and easily.”

Broader Context

This explanation from Google comes at a time when tech companies face increasing scrutiny over their influence on information access.

This incident also highlights the broader debate about algorithmic transparency in tech.

While autocomplete might seem like a background feature, it significantly impacts what people search for and the websites they visit.


Featured Image: Galeh Nur Wihantara/Shutterstock

Google Chrome Adds Visual Search, Tab Compare, & Smart History via @sejournal, @MattGSouthern

Google has announced three features for its Chrome browser, which will roll out in the coming weeks.

These additions, incorporating Google’s AI and Gemini models, offer new ways to interact with web content and manage browsing history.

Desktop Integration Of Google Lens

The first update brings Google Lens, previously a mobile-only feature, to the desktop version of Chrome. This tool allows you to search and ask questions about visual content on webpages.

You can activate Lens via an icon in the address bar or through the right-click menu, then select areas of a page to initiate a visual search.

Results appear in a side panel, where users can refine searches or ask follow-up questions.

Screenshot from blog.google.com, August 2024.

Tab Compare For Product Research

A new feature called Tab Compare is being introduced, initially for U.S. users.

This tool generates an AI-powered overview of products from multiple open tabs, compiling information such as specifications, features, prices, and ratings into a single comparison table.

The feature is designed to streamline online shopping research, though its effectiveness in real-world scenarios remains to be seen.

Screenshot from blog.google.com, August 2024.

Natural Language Processing For Browser History

Google is updating Chrome’s history feature with natural language processing capabilities.

This will allow users to search their browsing history using conversational queries, such as “What was that ice cream shop I looked at last week?”

Google states that this feature will be optional and can be turned on or off in the browser settings.

Screenshot from blog.google.com, August 2024.

Privacy Considerations

While these features promise enhanced functionality, they also raise potential privacy concerns.

Google assures that the enhanced history search will not include data from incognito mode browsing. However, the extent of data collection and processing required for these AI features is unclear from the announcement.

Broader Context

These updates show AI’s growing role in browsers. As tech companies race to add advanced features, we see trade-offs between functionality and privacy, with potential ripple effects on web usage and ecommerce.

Parisa Tabriz, Vice President of Chrome, hints at more AI features in the pipeline, signaling a broader push to weave AI into browsing tools.

The rollout starts stateside and will be phased. As always, performance and user uptake will be the success metrics.