Free WordPress AI Writing Assistant By Jetpack via @sejournal, @martinibuster

Jetpack announced a free WordPress writing tool called Write Brief With AI that improves the clarity and conciseness of content. The AI writing assistant is based on an internal tool used at Automattic and is now available without limitations, whether or not a user subscribes to Jetpack AI Assistant.

Write Brief With AI Is Free

The new AI tool started as an internal writing tool used at Automattic, the company behind WordPress.com, Jetpack, WooCommerce, and other products. It is now being integrated into the Jetpack AI plugin. Although Jetpack AI is a premium plugin (with a limited free trial), Write Brief With AI is available to all users, both free and paid.

What It Does

The new Jetpack AI writing tool does three important things that can improve engagement and the overall quality of the content (a minimal sketch of how such checks might work follows the list).

  1. It measures the readability of the text.
  2. It flags long-winded sentences.
  3. It highlights words that convey uncertainty.
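
Below is a hypothetical sketch of how checks like these can be implemented. It is not Jetpack’s code: the Flesch reading-ease formula, the 25-word sentence threshold, and the hedging-word list are all assumptions chosen for illustration.

```python
# Hypothetical sketch of the three checks described above -- NOT Jetpack's code.
# The Flesch formula, 25-word threshold, and hedging-word list are illustrative choices.
import re

HEDGING_WORDS = {"maybe", "perhaps", "possibly", "i think", "might", "consider"}
LONG_SENTENCE_WORDS = 25  # assumed cutoff for a "long-winded" sentence

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count groups of vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def analyze(text: str) -> dict:
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)

    # Flesch reading ease: higher scores indicate easier-to-read text.
    readability = (206.835
                   - 1.015 * (len(words) / max(1, len(sentences)))
                   - 84.6 * (syllables / max(1, len(words))))

    long_sentences = [s for s in sentences if len(s.split()) > LONG_SENTENCE_WORDS]
    uncertain = [w for w in HEDGING_WORDS
                 if re.search(rf"\b{re.escape(w)}\b", text.lower())]

    return {"readability": round(readability, 1),
            "long_sentences": long_sentences,
            "uncertain_words": uncertain}

print(analyze("I think we should consider expanding our marketing efforts."))
```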

Importance Of Readability

Readability and a direct writing style are important for clearly expressing the content’s topic, which can indirectly benefit SEO, conversions, and engagement. This is because clarity and conciseness make the topic more evident and easily understood by search algorithms.

Why Removing Uncertainty Is Important

Flagging words that sound uncertain encourages the writer to consider revisions that make the content more definitive and confident.

Here are examples of how confident writing improves content:

Example 1

This sentence expresses uncertainty:

I think we should consider expanding our marketing efforts.

This improved version of the same sentence is more confident:

We should expand our marketing efforts.

Example 2

This sentence is unconfident:

Maybe we should review the budget before making a decision.

This sentence is direct and definitive:

We should review the budget before making a decision.

The above examples show how improving directness and making sentences more decisive removes a level of ambiguity and makes them more understandable.

Will that help a web page rank better? Communicating without ambiguity makes it easier for search-related algorithms to understand content, which in turn makes it easier to rank for the respective topic.

Embedded Within The WordPress Editor

The tool is embedded within the WordPress block editor. Blocks must be enabled because it won’t work within the Classic Editor. Additionally, the functionality is turned off by default and has to be activated by toggling it on in the AI Assistant settings sidebar.

Should You Try Write Brief With AI?

If your site is already using blocks, it may be convenient to give the new writing assistant a try. The tool is focused on improving content according to best practices, not on doing the writing itself. That’s a good use of AI because it preserves the authenticity of human-authored content.

Download Jetpack and activate the free trial of the AI Assistant. Write Brief With AI is switched off by default, so toggle it on in the AI Assistant settings. While the AI Assistant is limited in how many times it can be used, Write Brief With AI is in beta and can be used without limitations.

Download Jetpack here:

Jetpack by Automattic

Learn More About Write Brief With AI

Read more at the official WordPress.com announcement:

Clearer Writing at Your Fingertips: Introducing Write Brief with AI (Beta)

Read the documentation on requirements, activation instructions and how to use it:

Create Better Content with Jetpack AI

Featured Image by Shutterstock/Velishchuk Yevhen

Google’s AI Now Chooses Your Local Ad Photos via @sejournal, @MattGSouthern

Google has announced a new update to its Local Services Ads (LSA) platform, implementing an automated photo selection feature.

Ginny Marvin, Google Ads Liaison, revealed that the company will now choose photos from advertisers’ LSA profiles to display in their ads.

According to the announcement, this change is designed to increase ad engagement. The selection process will be based on an image’s perceived likelihood of improving user interaction with the advertisement.

Key Points of the Update:

  1. Photo inclusion may affect ad ranking
  2. Google recommends uploading 3-5 images to LSA profiles
  3. Photos should be high-quality, relevant, and original
  4. Not all ads will consistently include photos

Impact On Advertisers

This update introduces a new variable for Local Services advertisers to consider.

While Google suggests that adding photos could improve ad rankings, the actual impact remains to be seen.

Advertisers may need to reassess their visual content strategies in light of this change.

Photo Requirements & Recommendations

Google says photos must be high quality, relevant to the advertiser’s work, and original.

The company explicitly states that copied or stolen images are not permitted. Advertisers can manage their photos through the Profile and Budget page in their LSA dashboard.

Variable Photo Display

It’s important to note that photo inclusion in ads is not guaranteed. Google states that ad appearance will vary depending on user queries and other unspecified factors.

This variability may present challenges for advertisers seeking to control their ad presentation consistently.

As this feature rolls out, local service providers using Google’s advertising platform must monitor its effects on their ad performance and adjust their strategies accordingly.

How This Can Help You

This LSA update matters for digital marketers and local businesses.

It changes how visuals impact local service ads, potentially shaking up ad performance and user engagement.

What it means for LSA advertisers:

  • Better visibility: Good photos could boost your ad placement.
  • More clicks: Eye-catching visuals might up your CTR.
  • Edge over competitors: Quick adapters could get ahead.
  • Time-saver: No more manual image selection headaches.

What it means for marketers and agencies:

  • New optimization angles: Fresh ways to tweak LSA campaigns.
  • Added value for clients: Guide them on nailing their LSA imagery.
  • Data insights: Track how this change impacts performance metrics.

Keep a close eye on your LSA performance and be ready to pivot. Savvy marketers can turn this update into a win for their local ad game.


Featured Image: Mamun sheikh K/Shutterstock

Reddit Considers Adding AI-Powered Search Results via @sejournal, @MattGSouthern

During Reddit’s Q2 2024 earnings call, CEO Steve Huffman revealed the company is exploring implementing AI-powered search results on its platform.

Though details remain limited, this feature could enhance content discovery.

Huffman stated during the call:

“Later this year, we will begin testing new search result pages powered by AI to summarize and recommend content.”

He suggested this could help users find information on products, shows, and games and discover new communities.

Reddit’s consideration of AI search aligns with broader industry trends, as many tech companies integrate AI capabilities into their products.

Financial Context

This announcement was made alongside Reddit’s Q2 2024 financial results.

The company reported 54% year-over-year revenue growth, reaching $281.2 million for the quarter.

User growth also increased, though specific figures were not provided in this initial report.

Potential Challenges

While AI-powered search could improve content discovery, its implementation may face hurdles.

These could include technical challenges, user adoption concerns, and questions about how AI-curated results might affect the visibility of certain communities or content types on the platform.

Reddit hasn’t provided a specific timeline for testing or rolling out this feature, nor has it shared details on how it would be developed or implemented.

Reddit Blocks Most Search Engines

Any change to Reddit’s on-site search is notable, as it’s one of the only ways to search the website.

Reddit’s latest robots.txt update has prevented most search engines from crawling its recent content.

The big exception? Google, thanks to a $60M deal for AI training data.
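
To illustrate the general pattern, here is a hedged sketch of a robots.txt that admits a single crawler while blocking everything else, checked with Python’s standard-library robots.txt parser. The rules and URL below are invented for the example; they are not copied from Reddit’s actual file.

```python
# Hypothetical robots.txt that admits only Googlebot -- the general pattern,
# not Reddit's actual file. Checked with Python's standard-library parser.
from urllib.robotparser import RobotFileParser

SAMPLE_ROBOTS_TXT = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

url = "https://www.example.com/r/some-subreddit/new-post/"  # made-up URL
for bot in ("Googlebot", "Bingbot", "DuckDuckBot"):
    print(f"{bot} allowed: {parser.can_fetch(bot, url)}")
# Googlebot allowed: True -- every other crawler falls through to "Disallow: /".
```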

See: Reddit Limits Search Engine Access, Google Remains Exception

Key points from the above article:

  • Only Google and Reddit’s on-site search can now reliably find fresh Reddit posts.
  • Bing, DuckDuckGo, and others are left in the cold for new content.
  • SEOs and marketers face new hurdles in tracking Reddit discussions.

This move fits the trend of platforms monetizing their content and protecting it from AI scrapers.

This could impact users in the following ways:

  • Users must use Google or Reddit’s native search for recent posts.
  • SEOs need new strategies for Reddit content monitoring.
  • Google gains an edge in providing access to Reddit’s vast user-generated content.

It’s a sign of the times as platforms balance openness with monetization in the AI era.

Looking Ahead

As Reddit continues to grow, reporting a 54% year-over-year revenue increase in Q2 2024, this AI initiative could play a pivotal role in the platform’s future.

As the company moves forward with testing and potential implementation, users and industry observers alike will be watching closely to see how this AI-powered search transforms the Reddit experience.


Featured Image: T. Schneider/Shutterstock

OpenAI Scraps ChatGPT Watermarking Plans via @sejournal, @MattGSouthern

OpenAI has decided against implementing text watermarking for ChatGPT-generated content despite having the technology ready for nearly a year.

This decision, reported by The Wall Street Journal and confirmed in a recent OpenAI blog post update, stems from user concerns and technical challenges.

The Watermark That Wasn’t

OpenAI’s text watermarking system, designed to subtly alter word prediction patterns in AI-generated text, promised near-perfect accuracy.

Internal documents cited by the Wall Street Journal claim it was “99.9% effective” and resistant to simple paraphrasing.

However, OpenAI has revealed that more sophisticated tampering methods, like using another AI model for rewording, can easily circumvent this protection.
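
OpenAI has not published how its system works, but "subtly alter word prediction patterns" matches the general family of statistical text watermarks described in public research: the generator is nudged toward a pseudo-randomly chosen "green" subset of the vocabulary, and a detector later measures how often the text lands in that subset. The sketch below only illustrates that general idea, with a toy vocabulary and an arbitrary boost factor.

```python
# Toy sketch of a "green list" statistical text watermark, as described in public
# research. OpenAI has not disclosed its method; everything here is illustrative.
import hashlib
import random

VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "a", "lazy", "dog", "runs"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Derive a deterministic 'green' subset of the vocabulary from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * fraction)))

def pick_next(prev_token: str, candidate_probs: dict) -> str:
    """Choose the next token, nudging the choice toward the green list."""
    green = green_list(prev_token)
    boosted = {tok: p * (2.0 if tok in green else 1.0)  # 2.0 is an arbitrary boost
               for tok, p in candidate_probs.items()}
    return max(boosted, key=boosted.get)

def green_rate(tokens: list) -> float:
    """Detector: fraction of tokens in their predecessor's green list (~0.5 if unwatermarked)."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    return hits / max(1, len(tokens) - 1)

print(pick_next("the", {"quick": 0.4, "lazy": 0.35, "dog": 0.25}))
print(green_rate(["the", "quick", "brown", "fox", "jumps"]))
```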

User Resistance: A Key Factor

Perhaps more pertinent to OpenAI’s decision was the potential user backlash.

A company survey found that while global support for AI detection tools was strong, almost 30% of ChatGPT users said they would use the service less if watermarking were implemented.

This presents a significant risk for a company rapidly expanding its user base and commercial offerings.

OpenAI also expressed concerns about unintended consequences, particularly the potential stigmatization of AI tools for non-native English speakers.

The Search For Alternatives

Rather than abandoning the concept entirely, OpenAI is now exploring potentially “less controversial” methods.

Its blog post mentions early-stage research into metadata embedding, which could offer cryptographic certainty without false positives. However, the effectiveness of this approach remains to be seen.

Implications For Marketers and Content Creators

This news may be a relief to the many marketers and content creators who have integrated ChatGPT into their workflows.

The absence of watermarking means greater flexibility in how AI-generated content can be used and modified.

However, it also means that ethical considerations around AI-assisted content creation remain largely in users’ hands.

Looking Ahead

OpenAI’s move shows how tough it is to balance transparency and user growth in AI.

The industry needs new ways to tackle authenticity issues as AI content booms. For now, ethical AI use is the responsibility of users and companies.

Expect more innovation here, from OpenAI or others. Finding a sweet spot between ethics and usability remains a key challenge in the AI content game.


Featured Image: Ascannio/Shutterstock

Google Found in Violation of Antitrust Law, Judge Rules via @sejournal, @MattGSouthern

A federal judge has ruled that Google violated U.S. antitrust law by illegally maintaining monopolies in the markets for general search services and general search text advertising.

Judge Amit P. Mehta of the U.S. District Court for the District of Columbia, ruling in a case brought against Google by the Justice Department, said that Google had abused its monopoly power over the search business in part by paying companies to present its search engine as the default choice on their devices and web browsers.

Judge Mehta wrote in his opinion filed Monday:

“After having carefully considered and weighed the witness testimony and evidence, the court reaches the following conclusion: Google is a monopolist, and it has acted as one to maintain its monopoly. It has violated Section 2 of the Sherman Act.”

The court found that Google abused its dominant position in several ways:

  • Paying hefty sums to ensure default status on devices and browsers
  • Leveraging user data to reinforce its search engine’s dominance
  • Illegally protecting its monopoly over search-related advertising

Key Findings Of Anticompetitive Behavior

The judge found that Google’s agreements with Apple, Mozilla, and Android partners foreclosed about 50% of the search market and 45% of the search advertising market from rivals.

These exclusive distribution agreements deprived competitors like Microsoft’s Bing of the scale needed to compete with Google in search and search advertising.

Judge Mehta concluded that Google’s conduct had anticompetitive effects:

  • Foreclosing a substantial share of the market
  • Depriving rivals of scale needed to compete
  • Reducing incentives for rivals to invest and innovate in search

The case began in 2020 and culminated in a 10-week trial last fall.

Financial Revelations

The trial disclosed financial details of Google’s default search agreements.

In 2022, Google paid Apple $20 billion for default search placement on iOS devices, an increase from $18 billion in 2021.

Additionally, Google shares 36% of Safari’s search ad revenue with Apple.

These figures highlight the value of default search positioning in the industry.

Google’s Defense & Market Share

Throughout the trial, Google maintained that its market dominance resulted from superior product quality rather than anticompetitive practices.

The company disputed the DOJ’s estimate that it held a 90% share of the search market, arguing for a broader definition of its competitive landscape.

However, Judge Mehta rejected this defense:

“Google has thwarted true competition by foreclosing its rivals from the most effective channels of search distribution.”

Ruling On Search Advertising

On search advertising, the judge found that Google could charge supra-competitive prices for text ads without being constrained by rivals.

However, the judge ruled in Google’s favor on some claims, finding Google doesn’t have monopoly power in the broader search advertising market.

Potential Ramifications

While Judge Mehta has yet to determine specific remedies, the ruling opens the door to potentially far-reaching consequences for Google’s business model. Possible outcomes could include:

  • Forced changes to Google’s search operations
  • Divestiture of specific business segments
  • Restrictions on default search agreements

The decision is likely to face appeals, and the final resolution may evolve, as seen in the Microsoft antitrust case of the 1990s.

Broader Context

This ruling sets a precedent that could influence other ongoing antitrust cases against tech giants like Amazon, Apple, and Meta.

It signals a shift in how century-old antitrust laws are applied to modern digital markets.

What’s Next

Google is expected to appeal the decision, potentially leading to a protracted legal battle that could shape the future of online search and digital advertising.

The Department of Justice and a group of attorneys general from 38 states and territories, who filed similar antitrust suits against Google in 2020, will be watching the next steps in this case closely.


Featured Image: Sergei Elagin/Shutterstock

Facebook Attracts Gen Z Users While TikTok’s Boomer Audience Grows via @sejournal, @MattGSouthern

According to a recent report by eMarketer, Facebook is experiencing a resurgence among Gen Z users, while TikTok is gaining traction with baby boomers.

Despite these shifts, both platforms maintain a stable core user base.

Facebook’s Gen Z Renaissance

Facebook’s seeing unexpected Gen Z growth despite overall decline. U.S. Gen Z users are projected to increase from 49.0% (33.9M) in 2024 to 56.9% (40.5M) by 2028.

Key drivers:

  1. Utility: Event planning, niche groups, and Marketplace appeal to younger users.
  2. Demo shift: ~36% of Gen Z are still under 18, many just entering the social media space.

E-commerce potential strong: 75.0% of Gen Z Facebook users (15-26) bought on Marketplace last year.

However, Gen Z still trails Gen X and millennials in user numbers and time spent on the platform. Interestingly, time on Facebook is decreasing for users under 55, suggesting a shift in how younger generations interact with the platform.

TikTok’s Boomer Boom

TikTok’s Gen Z market is saturated, but it’s seeing surprising growth among boomers.

Projections show a 10.5% increase in U.S. boomer users next year, from 8.7M to 9.7M.

This modest uptick underscores TikTok’s accessibility and its appeal to older adults who want to stay culturally relevant and connected with younger relatives.

While boomers are the fastest-growing demographic, TikTok adoption rates are rising steadily across all generations, indicating the platform’s broad appeal.

Shifting Social Media Landscape

Facebook use continues to decrease across all generations except Gen Z, highlighting the platform’s evolving role in the social media ecosystem.

This trend, coupled with TikTok’s growth among older users, suggests a blurring of generational lines in social media usage. Platforms that can adapt to changing user demographics while maintaining their core appeal will be best positioned for long-term success.

Implications For Marketers

Platforms and users are constantly changing. Brands must adapt or risk losing ground to competitors.

TikTok’s boomer growth opens up new avenues for brands targeting older demographics, but marketers should be mindful of the platform’s primarily young user base.

For Facebook marketers, the growing Gen Z user base presents new opportunities, especially in e-commerce via Marketplace. However, decreasing time spent on the platform means content needs to be more engaging and targeted.

Action items:

  1. Audit strategy: Check content appeal across age groups and platforms.
  2. Diversify: Create multi-faceted strategies for different demographics while maintaining brand identity.
  3. Leverage analytics: Track engagement by age group and adjust tactics.
  4. Test and optimize: Experiment with content formats and messaging for each platform.
  5. Stay current: Follow platform updates and demographic trends.

Stay flexible and update strategies as user demographics and preferences change.

Brands that can reach across generations while respecting platform-specific norms will likely see the most success in this changing landscape.


Featured Image: Halfpoint/Shutterstock

Google Gives 5 SEO Insights On Google Trends via @sejournal, @martinibuster

Google published a video that disclosed five insights about Google Trends that could be helpful for SEO, topic research and debugging issues with search rankings. The video was hosted by Daniel Waisberg, a Search Advocate at Google.

1. What Does Google Trends Offer?

Google Trends is an official tool created by Google that shows a representation of how often people search with certain keyword phrases and how those searches have changed over time. It’s not only helpful for discovering time-based changes in search queries, but it also segments queries by geographic popularity, which is useful for learning which audiences to focus content on (or even which geographic areas may be best to get links from).

This kind of information is invaluable for debugging why a site may have issues with organic traffic as it can show seasonal and consumer trends.

2. Google Trends Only Uses A Sample Of Data

An important fact about Google Trends that Waisberg shared is that the data Google Trends reports on is based on a statistically significant, random sample of actual search queries.

He said:

“Google Trends is a tool that provides a random sample of aggregated, anonymized and categorized Google searches.”

This does not mean that the data is less accurate. The phrase “statistically significant” means that the data is representative of the actual search queries.

The reason Google uses a sample is that they have an enormous amount of data and it’s simply faster to work with samples that are representative of actual trends.
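
As a rough illustration of why a representative random sample still reflects the underlying trends, the sketch below draws a 10,000-query sample from a made-up query log and shows that each query’s share of the sample closely tracks its share of the full data set. The log and sample size are invented for the example.

```python
# Why a representative random sample preserves a trend: each query's share of the
# sample tracks its share of the full log. The log and sample size are made up.
import random

random.seed(42)
full_log = ["cappuccino"] * 800_000 + ["espresso"] * 200_000  # hypothetical query log
sample = random.sample(full_log, 10_000)

sampled_share = sample.count("cappuccino") / len(sample)
print(f"cappuccino share in sample: {sampled_share:.1%}  (true share: 80.0%)")
```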

3. Google Cleans Noise In The Trends Data

Daniel Waisberg also said that Google cleans the data to remove noise and data that relates to user privacy.

“The search query data is processed to remove noise in the data and also to remove anything that might compromise a user’s privacy.”

An example of private data that is removed is the full names of people. An example of “noise” in the data is the same search query made by the same person over and over; Waisberg used the example of a trivial search for how to boil eggs that a person makes every morning.

That last one, about people repeating a search query, is interesting because back in the early days of SEO, before Google Trends existed, SEOs used a public keyword volume tool from Overture (owned by Yahoo). Some SEOs poisoned the data by making thousands of searches for keyword phrases that were rarely queried by users, inflating the query volume so that competitors would focus on optimizing for the useless keywords.

4. Google Normalizes Google Trends Data

Google doesn’t show actual search query volume, like a million queries per day for one query and 200,000 queries per day for another. Instead, Google selects the point where a keyword phrase is searched the most, uses that as the 100% mark, and plots the rest of the Google Trends graph as percentages relative to that high point. So if the most searches a query gets in a day is one million, a day on which it gets searched 500,000 times will be represented on the graph as 50%. This is what it means when we say Google Trends data is normalized.
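
In other words, each data point is divided by the series’ peak and scaled to 100. A quick sketch of that arithmetic (with made-up volumes):

```python
# The normalization described above: every point is scaled against the series' peak,
# so the busiest day reads as 100. Volumes below are made up.
daily_queries = {"Mon": 1_000_000, "Tue": 500_000, "Wed": 250_000}

peak = max(daily_queries.values())
normalized = {day: round(volume / peak * 100) for day, volume in daily_queries.items()}

print(normalized)  # {'Mon': 100, 'Tue': 50, 'Wed': 25}
```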

5. Explore Search Queries And Topics

SEOs have focused on optimizing for keywords for over 25 years. But Google has long since moved beyond keywords and has been labeling documents by the topics, and even the queries, they are relevant to (which also relate more to topics than to keywords).

That’s why, in my opinion, one of Google Trends’ most useful offerings is the ability to explore the topic related to the entity behind the search query. Exploring the topic shows the query volume of all the related keywords.

The “explore by topic” tool arguably offers a more accurate idea of how popular a topic is, which is important because Google’s algorithms, machine learning systems, and AI models create representations of content at the sentence, paragraph, and document level, representations that correspond to topics. I believe that’s one of the things Googlers are referring to when they talk about Core Topicality Systems.

Waisberg explained:

“Now, back to the Explore page. You’ll notice that, sometimes, in addition to a search term, you get an option to choose a topic. For example, when you type “cappuccino,” you can choose either the search term exactly matching “cappuccino” or the “cappuccino coffee drink” topic, which is the group of search terms that relate to that entity. These will include the exact term as well as misspellings. The topic also includes acronyms, and it covers all languages, which can be very useful, especially when looking at global data.

Using topics, you also avoid including terms that are unrelated to your interests. For example, if you’re looking at the trends for the company Alphabet, you might want to choose the Alphabet Inc company topic. If you just type “alphabet,” the trends will also include a lot of other meanings, as you can see in this example.”

The Big Picture

One of the interesting facts revealed in this video is that Google isn’t showing normalized actual search trends; it’s showing a normalized, statistically significant sample of the actual search trends. A statistically significant sample is one large enough that random chance is unlikely to skew it, so it still represents the actual search trends.

The other noteworthy takeaway is the reminder that Google Trends is useful for exploring topics, which in my opinion is far more useful than Google Suggest and People Also Ask (PAA) data.

I have seen evidence that slavish optimization with Google Suggest and PAA data can make a website appear to be optimizing for search engines and not for people, which is something that Google explicitly cautions against. Those who were hit by the recent Google updates should think hard about the implications of their keyword-focused SEO practices.

Exploring and optimizing with topics won’t leave behind the statistical footprints of optimizing for search engines, because the authenticity of content based on topics will always shine through.

Watch the Google Trends video:

Intro to Google Trends data

Featured Image by Shutterstock/Luis Molinero

AI In Marketing Copy: A Surprising Sales Killer, Study Finds via @sejournal, @MattGSouthern

Research shows that name-dropping AI in marketing copy might backfire, lowering consumer trust and purchase intent.

A WSU-led study published in the Journal of Hospitality Marketing & Management found that explicitly mentioning AI in product descriptions could turn off potential buyers despite AI’s growing presence in consumer goods.

Key Findings

The study, polling 1,000+ U.S. adults, found AI-labeled products consistently underperformed.

Lead author Mesut Cicek of WSU noted: “AI mentions decrease emotional trust, hurting purchase intent.”

The tests spanned diverse categories—smart TVs, high-end electronics, medical devices, and fintech. Participants saw identical product descriptions, differing only in the presence or absence of “artificial intelligence.”

Impact on High-Risk Products

AI aversion spiked for “high-risk” offerings, which are products with steep financial or safety stakes if they fail. These items naturally trigger more consumer anxiety and uncertainty.

Cicek stated:

“We tested the effect across eight different product and service categories, and the results were all the same: it’s a disadvantage to include those kinds of terms in the product descriptions.”

Implications For Marketers

The key takeaway for marketers is to rethink AI messaging. Cicek advises weighing AI mentions carefully or developing tactics to boost emotional trust.

Spotlight product features and benefits, not AI tech. “Skip the AI buzzwords,” Cicek warns, especially for high-risk offerings.

The research underscores emotional trust as a key driver in AI product perception.

This creates a dual challenge for AI-focused firms: innovate products while simultaneously building consumer confidence in the tech.

Looking Ahead

AI’s growing presence in everyday life highlights the need for careful messaging about its capabilities in consumer-facing content.

Marketers and product teams should reassess how they present AI features, balancing transparency and user comfort.

The study, co-authored by WSU professor Dogan Gursoy and Temple University associate professor Lu Lu, lays the groundwork for further research on consumer AI perceptions across different contexts.

As AI advances, businesses must track changing consumer sentiments and adjust marketing accordingly. This work shows that while AI can boost product features, mentioning it in marketing may unexpectedly impact consumer behavior.


Featured Image: Wachiwit/Shutterstock