Mullenweg Takes On Inc Magazine For “Biased” Interview via @sejournal, @martinibuster

Matt Mullenweg accused Inc Magazine of distorting an interview with him, publishing verifiably false facts, and quoting people who lacked credibility. Mullenweg posted compelling examples of how Inc Magazine misrepresented his quotes and presented false facts, citing the selection of unflattering photos as evidence of a conscious effort to negatively slant its interview of him.

Mullenweg explains why he agreed to the interview:

“When Inc Magazine reached out to have David H. Freedman (website powered by WordPress) write a feature piece I was excited because though Inc wasn’t a magazine I have read much since I was a teenager, David seemed like a legit journalist who usually writes for better publications like The Atlantic. I opened up to David with a number of vulnerable stories, and allowed the photo shoot in my home in Houston.”

The article begins with an unflattering portrait of Mullenweg as a control freak who is fussy about the kind of toilet paper and soap provided at Automattic’s offices. Mullenweg writes that he had shared an anecdote with the writer about the time he visited Google’s headquarters in 2004 and was surprised by what he felt was “cheap” toilet paper. Years later, when he had his own offices, he made the decision to spend extra on good soap and toilet paper to benefit his employees’ experience at work. In other words, the choice to do that came from altruism and a concern for others, not a desire to control every detail.

But that’s not how Inc magazine portrayed it.

They write:

“Stooping to fling open a storage cabinet built into the bathroom wall, he points to a neat stack of wrapped toilet paper rolls. “The best toilet paper you can buy,” he assures me. “How much extra does really nice toilet paper cost? A buck or two?” The handsome bottles of soap by the sinks are premium, too, he adds.

I ask him who at Automattic, the estimated $710-million company of which Mullenweg is CEO, is responsible for toilet paper and soap quality control?

“Me,” he says, beaming.

Of course, Mullenweg’s control of Automattic extends well beyond the bathroom walls.”

Grim Images In Photographs

The author of the article described Mullenweg as a young-looking forty-year-old with a “near-constant grin,” which contradicts the photographs Inc chose to publish, neither of which shows him smiling. Of the two photographs from the interview, one captures Mullenweg mid-blink, resulting in an absurd image of him typing with his eyes closed.

There are two other photographs, taken nine and twelve years ago, that do show him smiling. Mullenweg’s smile is not an affectation; it’s an authentic expression. Videos of him participating in interviews or speaking publicly consistently show him smiling. Mullenweg is correct to point out that Inc Magazine made a deliberate choice not to publish an image of him smiling, which is his characteristic expression, as noted in the article itself.

Poorly Researched Article

Mullenweg’s critique of the article zeroes in on a series of false statements that are indicative of poor research, including a consistent conflation of a company’s earnings with its valuation.

One of the false claims asserts that Mullenweg coded WordPress in three “obsessive days,” when the actual time period was four months. This might seem minor, but it isn’t, because it is evidence of what Mullenweg points out is poor research into facts that could have been easily verified on Wikipedia.

His critique is thoroughly convincing and shows how he agreed to the interview with openness and the expectation of balanced reporting. His dismay at the results is palpably communicated in his blog post about it.

Nevertheless, he goes on to say that he supports journalism and puts the blame on the editor of the article.

He writes:

“I know a lot of entrepreneurs follow me and I don’t want your takeaway to be “don’t talk to journalists” or “don’t engage with mainstream media.”

…this is a good example of where a decent journalist can’t overcome a crappy editor and quality control. I probably wouldn’t be excited to work with Inc Magazine again while Mike Hofman is in charge as editor-in-chief, he’s clearly overseeing a declining brand. But I will continue to engage with other media, and blog, and tweet, and tell my story directly.

When an editor wants to make you look good, they can! If they decide they want to drag you, they can too. Everything in my interactions with David and Inc made it seem this would be a positive piece, so be careful.

We’ll see if Inc Magazine has any journalistic integrity by their updates to the article.”

Rightfully Disappointed

Mullenweg researched the interviewer and verified that he was a competent and respectable writer. From Mullenweg’s point of view, the Inc Magazine article was poorly researched and heavily slanted against him, what he termed a hit piece.

Read Mullenweg’s account of the interview:

Inc Hit Piece

Featured Image by Shutterstock/tomertu

Mullenweg Pauses WordPress Services – Hopes To Reopen Next Year via @sejournal, @martinibuster

Matt Mullenweg announced an abrupt pause in services offered by WordPress.org, affecting plugin submissions, reviews, theme submissions, and additions to the photo directory. He will continue providing these services to WP Engine, however, citing the recent court order against him and Automattic that compels him to offer “free labor and services.”

Pause For The Holidays

Mullenweg published a post on the official WordPress blog to announce a pause in free services offered by WordPress.org to give the “many tired volunteers around WordPress.org a break for the holidays.”

The pause affects:

  • New account registrations on WordPress.org
  • New plugin directory submissions
  • New plugin reviews
  • New theme directory submissions
  • New photo directory submissions

The pause doesn’t affect the ability to install new WordPress sites or create accounts on those installs, which may sound contradictory at first.

Here’s what he wrote in his list of what services are paused:

“New account registrations on WordPress.org (clarifying so press doesn’t confuse this: people can still make their own WordPress installs and accounts)”

Mullenweg makes a point of noting that the pause doesn’t affect WP Engine, stating that he’s legally required to keep providing “free labor and services” to WP Engine, and writing that if WP Engine requires those services it can have its “high-priced attorneys” speak to his “high-priced attorneys” to gain access.

He then shared a cryptic message implying there is a chance that WordPress.org may not resume those services in 2025, saying that it hinges on his being able to find the “time, energy, and money” to undo the pause, resources he writes are currently being expended defending against WP Engine’s lawsuit against him and Automattic.

Mullenweg wrote:

“Right now much of the time I would spend making WordPress better is being taken up defending against WP Engine’s legal attacks. Their attacks are against Automattic, but also me individually as the owner of WordPress.org, which means if they win I can be personally liable for millions of dollars of damages.”

He signs off by inviting those who’d like to fund those attacks on him to sign up for WP Engine, and those who don’t to sign up for other web hosts, linking to both WP Engine and a WordPress.org page that offers promotions to induce WP Engine customers to switch away.

Read Mullenweg’s announcement here:

Holiday Break

Featured Image by Shutterstock/MPIX

Google Launches (Final?) Spam Update Of The Year via @sejournal, @MattGSouthern

Google announced the rollout of the December 2024 spam update.

The update, expected to be completed within a week, arrives amid ongoing industry discussions about the effectiveness of Google’s spam-fighting measures.

This December update caps off a year of spam-fighting measures, including the June Spam Update and the March Core Update, which targeted policy-violating websites and aimed to reduce “unhelpful” content by 40%.

It’s also worth mentioning that this update closely follows the December core update.

Looking Back At A Year Of Updates

This year saw an unprecedented frequency of major algorithm updates, with core updates in March, August, November, and December.

The August update, which took nearly three weeks to complete, targeted low-value SEO content while promoting high-quality material.

The December core update, launched on December 12, came unusually close to the November update, with Google explaining that different systems are often improved in parallel.

Policy Transformation

This year marked a shift in Google’s approach to spam detection and prevention with three major policy updates.

1. Site Reputation Abuse

Introduced in May 2024, this policy targets “parasite SEO” practices where third-party content exploits established domains’ authority.

This update mainly affected:

  • Major publishers hosting third-party product reviews
  • News sites with extensive coupon sections
  • Sports websites with AI-generated content

The policy change led to notable casualties, including several high-profile publishers receiving manual actions for hosting third-party content without sufficient oversight.

2. Expired Domain Abuse

Google’s enhanced focus on expired domain manipulation addressed:

  • Purchase of expired domains for backlink exploitation
  • Repurposing authoritative domains for unrelated content
  • Domain squatting for search ranking manipulation

3. Scaled Content Abuse

Previously known as “spammy auto-generated content,” this rebranded policy expanded to include:

  • AI-generated content at scale
  • Mass-produced content across multiple sites
  • Content translation manipulation
  • Automated content transformation techniques

See more: An In-Depth Look At Google Spam Policies Updates And What Changed

Spam-Specific Updates

June 2024 Spam Update

  • Week-long implementation period
  • Focused on policy-violating websites
  • Enhanced detection of automated content

November 2024 SRA Enforcement

  • Implementation of site reputation abuse penalties
  • Affected major publishers’ sponsored content strategies
  • Required significant content policy adjustments across news sites

Looking Ahead

With the December core update having completed its rollout and the new spam update now underway, prepare for another round of potential ranking fluctuations through the end of the year.

The spam update is expected to be completed next week, with progress tracked through Google’s Search Status Dashboard.


Featured Image: JHVEPhoto/Shutterstock

AI Crawlers Account For 28% Of Googlebot’s Traffic, Study Finds via @sejournal, @MattGSouthern

A report released by Vercel highlights the growing impact of AI bots in web crawling.

OpenAI’s GPTBot and Anthropic’s Claude generate nearly 1 billion requests monthly across Vercel’s network.

The data indicates that GPTBot made 569 million requests in the past month, while Claude accounted for 370 million.

Additionally, PerplexityBot contributed 24.4 million fetches, and AppleBot added 314 million requests.

Together, these AI crawlers represent approximately 28% of Googlebot’s total volume, which stands at 4.5 billion fetches.

Here’s what this could mean for SEO.

Key Findings On AI Crawlers

The analysis looked at traffic patterns on Vercel’s network and various web architectures. It found some key features of AI crawlers:

  • Major AI crawlers do not render JavaScript, though they do pull JavaScript files.
  • AI crawlers are often inefficient, with ChatGPT and Claude spending over 34% of their requests on 404 pages.
  • The type of content these crawlers focus on varies. ChatGPT prioritizes HTML (57.7%), while Claude focuses more on images (35.17%).

Geographic Distribution

Unlike traditional search engines that operate from multiple regions, AI crawlers currently maintain a concentrated U.S. presence:

  • ChatGPT operates from Des Moines (Iowa) and Phoenix (Arizona)
  • Claude operates from Columbus (Ohio)

Web Almanac Correlation

These findings align with data shared in the Web Almanac’s SEO chapter, which also notes the growing presence of AI crawlers.

According to the report, websites now use robots.txt files to set rules for AI bots, telling them what they can or cannot crawl.

GPTBot is the most mentioned bot, appearing on 2.7% of mobile sites studied. The Common Crawl bot, often used to collect training data for language models, is also frequently noted.
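
For site owners who want to set such rules, the directives look the same as for any other crawler. Below is a minimal, hypothetical robots.txt sketch using the user-agent tokens these vendors publicly document (GPTBot, ClaudeBot, PerplexityBot); confirm the current token names in each vendor’s documentation before relying on them:

```txt
# Allow GPTBot to crawl only the blog, keep it out of everything else
User-agent: GPTBot
Allow: /blog/
Disallow: /

# Block Anthropic's and Perplexity's crawlers entirely
User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```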

Both reports stress that website owners need to adjust to how AI crawlers behave.

3 Ways To Optimize For AI Crawlers

Based on recent data from Vercel and the Web Almanac, here are three ways to optimize for AI crawlers.

1. Server-Side Rendering

AI crawlers don’t execute JavaScript. This means any content that relies on client-side rendering might be invisible.

Recommended actions:

  • Implement server-side rendering for critical content
  • Ensure main content, meta information, and navigation structures are present in the initial HTML
  • Use static site generation or incremental static regeneration where possible
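
As an illustration of the first two recommendations, here is a minimal sketch assuming a Next.js Pages Router project (Next.js is not named in the report; it is simply one common way to get server-rendered or statically generated HTML). The page is pre-rendered to HTML and periodically regenerated, so crawlers that don’t run JavaScript still receive the full content:

```tsx
// pages/product/[slug].tsx — illustrative sketch, not from the Vercel report
import type { GetStaticPaths, GetStaticProps } from "next";

type Product = { slug: string; name: string; description: string };

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [],            // generate pages on first request
  fallback: "blocking", // serve fully rendered HTML, never a client-side shell
});

export const getStaticProps: GetStaticProps<{ product: Product }> = async ({ params }) => {
  // Hypothetical data source; replace with your CMS or database call.
  const product: Product = {
    slug: String(params?.slug),
    name: "Example product",
    description: "Rendered into the initial HTML, visible to non-JS crawlers.",
  };
  return { props: { product }, revalidate: 3600 }; // incremental static regeneration
};

export default function ProductPage({ product }: { product: Product }) {
  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
    </main>
  );
}
```

The same idea applies to any framework that can emit the main content, metadata, and navigation in the initial HTML response.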

2. Content Structure & Delivery

Vercel’s data shows distinct content type preferences among AI crawlers:

ChatGPT:

  • Prioritizes HTML content (57.70%)
  • Spends 11.50% of fetches on JavaScript files

Claude:

  • Focuses heavily on images (35.17%)
  • Dedicates 23.84% of fetches to JavaScript files

Optimization recommendations:

  • Structure HTML content clearly and semantically
  • Optimize image delivery and metadata
  • Include descriptive alt text for images
  • Implement proper header hierarchy

3. Technical Considerations

High 404 rates from AI crawlers mean you need to keep these technical considerations top of mind:

  • Maintain updated sitemaps
  • Implement proper redirect chains
  • Use consistent URL patterns
  • Regularly audit 404 errors
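
The last point is easy to automate. Below is a minimal sketch of a sitemap 404 audit in TypeScript (the sitemap URL is hypothetical; it assumes a Node.js runtime with global fetch, such as Node 18+):

```ts
// check-sitemap.ts — rough sketch: flag sitemap entries that return errors or redirects
const SITEMAP_URL = "https://www.example.com/sitemap.xml";

async function main() {
  const xml = await (await fetch(SITEMAP_URL)).text();
  // Naive <loc> extraction; a real audit would use a proper XML parser.
  const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);

  for (const url of urls) {
    const res = await fetch(url, { method: "HEAD", redirect: "manual" });
    if (res.status >= 400) {
      console.log(`${res.status} ${url}`); // broken URL still listed in the sitemap
    } else if (res.status >= 300) {
      console.log(`${res.status} ${url} -> ${res.headers.get("location")}`); // redirect; update the entry
    }
  }
}

main().catch(console.error);
```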

Looking Ahead

For search marketers, the message is clear: AI chatbots are a new force in web crawling, and sites need to adapt their SEO accordingly.

Although AI bots may rely on cached or dated information now, their capacity to parse fresh content from across the web will grow.

You can help ensure your content is crawled and indexed with server-side rendering, clean URL structures, and updated sitemaps.


Featured Image: tete_escape/Shutterstock

Cut The Malarkey. Speaking Frankly About AI Search & SEO via @sejournal, @martinibuster

Search marketing is undergoing dramatic changes, with many debating whether SEO is on its way out as AI Search rises in popularity. What follows is a candid assessment of what is going on with SEO and search engines today.

An SEO School Shuts Down

An SEO school run by a group called Authority Hackers recently announced its closure, emphasizing that it’s not because SEO is dead but because of the collapse of the content site model. They cited three reasons for this situation. What follows is not about the SEO school; that’s just a symptom of something important going on today.

1. Google Updates are one of the reasons cited for the decline of the content site model. Here’s the candid part: If the Google updates killed your publishing site, that’s kind of a red flag that something about the SEO needs examination.

Here’s the frank part: Google’s updates have generally crushed websites built on a process that begins with keyword research, followed by stealing content ideas from competitors and scraping Google’s SERPs for more keyword phrases. That’s not audience research, that’s search engine research. Search engine research results in Made For Search Engine websites. This doesn’t describe all websites that lost rankings, but it’s a common method of SEO that in my opinion seriously needs to be reassessed.

2. The second reason cited by the SEO school is the “AI content tsunami.” I’m not sure what that means because it can mean a lot of things. Is that AI content spam? Or is that a reference to AI content sites overwhelming the publisher who cranks out two articles a week?

Do I need to say out loud what content output implies about site authority?

3. The third reason for the decline of the content model is the dramatic changes to Search Engine Results Pages (SERPs). Now this, this is a valid reason, but not for the reasons most SEOs think.

The organic SERPs have, for the past 25 years, been dominated by the top three ranked positions, with about 20-30% of the traffic siphoned off to Google Ads for search topics that convert. That’s the status quo: Three sites are winning and everyone else is losing.

AI Overviews has not changed a thing. AIO doubled down on the status quo. According to BrightEdge research, the top ranked websites in AIO are largely the same as the organic top ranked websites. What that means is that three sites are still winning and everyone else is still losing.

The biggest change to the SERPs that most SEOs are missing is what I already mentioned, that made for search engine websites have been getting wiped out by Google updates.

The helpful content update (HCU) is the scapegoat, but that’s just ONE algorithm out of hundreds. There is literally no way for anyone to claim with 100% certainty that the HCU is the reason why any given site lost rankings. Google’s algorithm is a black box. A lot of people are making that claim, but none of them can explain how they are able to pick out the effects of one algorithm out of hundreds.

The thing about being in SEO for 25 years is that people like me are accustomed to dramatic changes. Yes, the SERPs have changed dramatically. That’s how search engines have always done things.

If you’ve only been doing SEO for ten years, I can understand how the recent changes seem dramatic. But when you’ve been in it for as long as I have, dramatic changes are expected. That’s the status quo. Dramatic SERP changes are how it’s always been.

SEO Is Now AEO?

Someone started a discussion with two sentences that said AEO is the new SEO and that ChatGPT was quickly becoming the leading search engine, inspiring well over a hundred responses. The discussion is in a private Facebook group called AI/ChatGPT Prompts for Entrepreneurs.

AEO is a relatively new acronym meaning Answer Engine Optimization. It describes AI search optimization. AISEO is a more precise acronym, but it sounds too close to E-I-E-I-O.

Is AEO really a thing? Consider this: All AI search engines use a search index and traditional search ranking algorithms. For goodness sakes, Perplexity AI uses a version of Google’s PageRank, one of the most traditional ranking algorithms of all time.

People in that discussion generally agreed that AEO is not a thing, that AI Search Engines were not yet a major challenge to Google and that SEO is still a thing.

All is not upside down with the world, because at least in that discussion the overwhelming sentiment is that AEO is not a thing. Many observed that ChatGPT uses Bing’s index, so if you’re doing “AEO” for ChatGPT you’re actually just doing SEO for Bing. Others argued that the average person has no experience with ChatGPT, and that until it’s integrated into a major browser it’s going to remain a niche search engine.

There was one person insisting that Perplexity AI was designed as an AI Search Engine, completely misunderstanding that Perplexity AI uses a search index and identifies authoritative websites with an updated version of Google’s old PageRank algorithm.

AI has been a strong factor in Google’s search engine for at least 10 years, longer if you consider that Google Brain began as a project in 2011.

  • AI in search is not new.
  • Search results summaries aren’t new either (Featured Snippets).
  • Google’s Information Gain patent for AI Chatbots was filed in 2018.

AI in search feels new, but it’s not new. The biggest difference isn’t in the back end, it’s in the front end, and it’s changing how users interact with data. This is the big change that all SEOs should be paying close attention to.

Featured Image by Shutterstock/pathdoc

OpenAI Announces 1-800-ChatGPT via @sejournal, @martinibuster

OpenAI just rolled out speech access to ChatGPT by phone and text access through the WhatsApp messaging system. The new services allow users to talk to and message ChatGPT to get answers. The phone access method enables users with an unstable data connection, or no data connection at all, to use ChatGPT from a telephone while on the go, even without a ChatGPT account.

Speak With ChatGPT By Phone

Speaking with ChatGPT only requires setting up ChatGPT as a contact using its 1-800-ChatGPT phone number, which in numbers is 1-800-242-8478. Once added to the phone’s contacts list, a user can phone and speak with ChatGPT to get answers.

Screenshot of video presenters pointing downward to a banner that reads Call Toll Free 1-800-ChatGPT

The presenters phoned ChatGPT with an iPhone, an old flip phone, and a rotary dial telephone to demonstrate that reaching ChatGPT and getting answers is an ordinary phone call. You can do it on the road or at home from a landline.

The functionality is currently only available in the United States and is limited to 15 minutes of free calling per month. However, you can also download the ChatGPT app and create an account to talk even longer.

Image of a man speaking with ChatGPT with an old fashioned rotary phone

An example phone call involved asking ChatGPT to explain Reinforcement Learning as if to a five-year-old.

ChatGPT spoke the following answer:

“Sure! Imagine you have a robot friend and you want to teach it to clean up your room you give it a treat every time it does a good job that’s reinforcement fine-tuning the robot learns to do better by getting rewards.”

ChatGPT On WhatsApp

OpenAI also announced a way to reach ChatGPT with WhatsApp, and it’s available to users anywhere in the world. The demonstration showed the presenters accessing 1-800-ChatGPT on WhatsApp through the mobile phone’s contacts list. But it can also be accessed by scanning the following QR code.

Screenshot Of ChatGPT On WhatsApp QR Code

The WhatsApp experience is currently limited to texting with ChatGPT, and users can access it without having an account. OpenAI is working on ways to authenticate WhatsApp access with a ChatGPT account and to enable searching with images.

Facts About New Access Methods

The new functionalities use the GPT-4o mini model. OpenAI engineers literally created these new functionalities over the past few weeks, which is pretty amazing.

Watch the announcement of the new ways to interact with ChatGPT:

1-800-ChatGPT

Marketing Trend: Consumers Prefer Relatability via @sejournal, @martinibuster

A new iStock 2025 Marketing Trends report finds declining consumer trust in social media and influencers, emphasizing the importance of relatability over perfection for marketers and businesses.

Trust For Marketing Success

The iStock report finds that 81% of consumers don’t trust content on social media. Nevertheless, they still turn to visual platforms like TikTok and Instagram Reels for discovery and inspiration. In terms of influence, 64% of consumers trust businesses over celebrities and influencers, particularly brands that align with their values (58%).

Authenticity And Real-User Content (RUC)

iStock’s data shows that consumer perception of influencer “realness” has declined, with 67% of people trusting traditional advertising over sponsored influencer posts. iStock is recommending what it calls Real-User Content (RUC), images and videos that project realness. Video content was highlighted by iStock as a strong trend for marketers to consider, as more consumers turn to video content for learning and inspiration.

iStock recommends that marketers focus on being “real, truthful, and original” as the key to building trust. While authenticity is important, iStock is emphasizing offering real stories and being relatable as opposed to content that reflects virtually unattainable perfection.

They write:

“This change is affecting how people interact with visual content, especially on social media. Despite people’s lack of trust, they still find these platforms valuable, 82% of users still go to places like TikTok, Instagram Reels, and YouTube Shorts for video content to learn something new or get inspiration. In other words, people want the benefits of social media, without the negative effects. This shift has also made video-driven social search more popular, where platforms focused on video are no longer just for scrolling —they’ve become places to search and discover. In 2025, to succeed, you need to speak directly to your audience, this approach will always be more effective than a flood of generic posts.”

The report recommends radical honesty, showing the company in ways that include imperfect moments. iStock’s 2025 Marketing Trends report outlines an approach to connecting with consumers that reflects the qualities of realness people are looking for in the content they consume.

Read iStock’s report:

Crack the Code on Trust: 2025 Marketing Insights for Small Businesses

Featured Image by Shutterstock/HAKINMHAN

Bing Search Updates: Faster, More Precise Results via @sejournal, @MattGSouthern

Microsoft has announced updates to Bing’s search infrastructure incorporating large language models (LLMs), small language models (SLMs), and new optimization techniques.

This update aims to improve performance and reduce costs in search result delivery.

In an announcement, the company states:

“At Bing, we are always pushing the boundaries of search technology. Leveraging both Large Language Models (LLMs) and Small Language Models (SLMs) marks a significant milestone in enhancing our search capabilities. While transformer models have served us well, the growing complexity of search queries necessitated more powerful models.”

Performance Gains

Using LLMs in search systems can create problems with speed and cost.

To solve these problems, Bing has trained SLMs, which it says deliver roughly 100 times the throughput of LLMs.

The announcement reads:

“LLMs can be expensive to serve and slow. To improve efficiency, we trained SLM models (~100x throughput improvement over LLM), which process and understand search queries more precisely.”

Bing also uses NVIDIA TensorRT-LLM to improve how well SLMs work.

TensorRT-LLM is a tool that helps reduce the time and cost of running large models on NVIDIA GPUs.

Impact On “Deep Search”

According to a technical report from Microsoft, integrating Nvidia’s TensorRT-LLM technology has enhanced the company’s “Deep Search” feature.

Deep Search leverages SLMs in real time to provide relevant web results.

Before optimization, Bing’s original transformer model had a 95th percentile latency of 4.76 seconds per batch (20 queries) and a throughput of 4.2 queries per second per instance.

With TensorRT-LLM, the latency was reduced to 3.03 seconds per batch, and throughput increased to 6.6 queries per second per instance.

This represents a 36% reduction in latency and a 57% decrease in operational costs.
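
As a sanity check, those percentages follow directly from the batch figures above; the sketch below simply recomputes them (the ~57% figure corresponds to the per-instance throughput gain, which the report ties to operational cost):

```ts
// Back-of-the-envelope check of the reported figures (illustrative only)
const latencyBefore = 4.76; // seconds per 20-query batch, original transformer model
const latencyAfter = 3.03;  // seconds per batch with TensorRT-LLM
const qpsBefore = 4.2;      // queries per second per instance
const qpsAfter = 6.6;

const latencyReduction = (latencyBefore - latencyAfter) / latencyBefore; // ≈ 0.36 → ~36%
const throughputGain = (qpsAfter - qpsBefore) / qpsBefore;               // ≈ 0.57 → ~57%

console.log(`Latency reduction: ${(latencyReduction * 100).toFixed(0)}%`);
console.log(`Throughput gain per instance: ${(throughputGain * 100).toFixed(0)}%`);
```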

The company states:

“… our product is built on the foundation of providing the best results, and we will not compromise on quality for speed. This is where TensorRT-LLM comes into play, reducing model inference time and, consequently, the end-to-end experience latency without sacrificing result quality.”

Benefits For Bing Users

This update brings several potential benefits to Bing users:

  • Faster search results with optimized inference and quicker response times
  • Improved accuracy through enhanced capabilities of SLM models, delivering more contextualized results
  • Cost efficiency, allowing Bing to invest in further innovations and improvements

Why Bing’s Move to LLM/SLM Models Matters

Bing’s switch to LLM/SLM models and TensorRT optimization could impact the future of search.

As users ask more complex questions, search engines need to better understand and deliver relevant results quickly. Bing aims to do that using smaller language models and advanced optimization techniques.

While we’ll have to wait and see the full impact, Bing’s move sets the stage for a new chapter in search.


Featured Image: mindea/Shutterstock

Google Formalizes Decade-Old Faceted Navigation Guidelines via @sejournal, @MattGSouthern

Google has updated its guidelines on faceted navigation by turning an old blog post into an official help document.

What started as a blog post in 2014 is now official technical documentation.

This change reflects the complexity of ecommerce and content-heavy websites, as many sites adopt advanced filtering systems for larger catalogs.

Faceted Navigation Issues

Ever used filters on an e-commerce site to narrow down products by size, color, and price?

That’s faceted navigation – the system allowing users to refine search results using multiple filters simultaneously.

While this feature is vital for users, it can create challenges for search engines, prompting Google to release new official documentation on managing these systems.

Modern Challenges

The challenge with faceted navigation lies in the mathematics of combinations: each additional filter option multiplies the potential URLs a search engine might need to crawl.

For example, a simple product page with options for size (5 choices), color (10 choices), and price range (6 ranges) could generate 300 unique URLs – for just one product.
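
To make the multiplication concrete, here is a small illustrative sketch (the filter values are hypothetical) showing how three modest facets on a single product listing expand into 300 crawlable URLs:

```ts
// Illustrative only: three filter dimensions multiply into hundreds of crawlable URLs
const sizes = ["xs", "s", "m", "l", "xl"];                       // 5 choices
const colors = ["red", "blue", "green", "black", "white",
                "grey", "navy", "pink", "brown", "beige"];       // 10 choices
const prices = ["0-25", "25-50", "50-100", "100-200",
                "200-500", "500-1000"];                          // 6 ranges

const urls: string[] = [];
for (const size of sizes)
  for (const color of colors)
    for (const price of prices)
      urls.push(`/shoes?size=${size}&color=${color}&price=${price}`);

console.log(urls.length); // 300 — one product listing, 300 parameter combinations
```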

According to Google Analyst Gary Illyes, this multiplication effect makes faceted navigation the leading cause of overcrawling issues reported by website owners.

The impact includes:

  • Wasting Server Resources: Many websites use too much computing power on unnecessary URL combinations.
  • Inefficient Crawl Budget: Crawlers may take longer to find important new content because they are busy with faceted navigation.
  • Weakening SEO Performance: Having several URLs for the same content can hurt a website’s SEO.

What’s Changed?

The new guidance is similar to the 2014 blog post, but it includes some important updates:

  1. Focus on Performance: Google now clearly warns about the costs of using computing resources.
  2. Clear Implementation Options: The documentation gives straightforward paths for different types of websites.
  3. Updated Technical Recommendations: Suggestions now account for single-page applications and modern SEO practices.

Implementation Guide

For SEO professionals managing sites with faceted navigation, Google now recommends a two-track approach:

Non-Critical Facets:

  • Block via robots.txt
  • Use URL fragments (#)
  • Implement consistent rel=”nofollow” attributes

Business-Critical Facets:

  • Maintain standardized parameter formats
  • Implement proper 404 handling
  • Use strategic canonical tags
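
As a concrete, hypothetical sketch of the non-critical track, a few robots.txt patterns can keep crawlers away from low-value parameter combinations (the parameter names below are made up; Google supports the * wildcard shown here):

```txt
# robots.txt sketch for the "non-critical facets" track (hypothetical parameters)
User-agent: *
# Block URL combinations created by sort order and superficial view filters
Disallow: /*?*sort=
Disallow: /*?*view=
Disallow: /*?*items_per_page=
```

For the business-critical track, the filtered pages stay crawlable; consistent parameter formats, a rel="canonical" pointing at the preferred version of each page, and clean 404s for empty filter combinations handle the consolidation instead.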

Looking Ahead

This documentation update suggests Google is preparing for increasingly complex website architectures.

SEO teams should evaluate their current faceted navigation against these guidelines to ensure optimal crawling efficiency and indexing performance.


Featured Image: Shutterstock/kenchiro168

Google Refreshes Generative AI Prohibited Use Policy via @sejournal, @MattGSouthern

Google has updated its Generative AI Prohibited Use Policy to clarify the proper use of its generative AI products and services.

The update simplifies the language, and lists prohibited behaviors with examples of unacceptable conduct.

Key Updates To Policy

The updated policy clarifies existing rules without adding new restrictions.

It specifically bans using Google’s AI tools to create or share non-consensual intimate images or to conduct security breaches through phishing or malware.

The policy states:

“We expect you to engage with [generative AI models] in a responsible, legal, and safe manner.”

Prohibited activities include dangerous, illegal, sexually explicit, violent, hateful, or deceptive actions, as well as content related to child exploitation, violent extremism, self-harm, harassment, and misinformation.

Prohibited Activities

The policy prohibits using Google’s generative AI for an expansive range of dangerous, illegal, and unethical activities:

  • Illegal Activities: Engaging in or facilitating child exploitation, violent extremism, terrorism, non-consensual intimate imagery, self-harm, or other illegal activities.
  • Security Violations: Compromising security through phishing, malware, spam, infrastructure abuse, or circumventing safety protections.
  • Explicit and Harmful Content: Generating sexually explicit content, hate speech, harassment, violence incitement, or other abusive content.
  • Deception and Misinformation: Impersonation without disclosure, misleading claims of expertise, misrepresenting content provenance, or spreading misinformation related to health, governance, and democratic processes.

Exceptions Allowed

New language in the policy carves out exceptions for some restricted activities in particular contexts.

Educational, documentary, scientific, artistic, and journalistic uses may be permitted, as well as other cases “where harms are outweighed by substantial benefits to the public.”

Why This Matters

The policy update addresses the rapid advancement of generative AI technologies that create realistic text, images, audio, and video.

This progress raises concerns about ethics, misuse, and societal impact.

Looking Ahead

Google’s updated policy is now in effect, and the old and new versions are publicly available.

Leading AI companies like OpenAI and Microsoft have released their own usage rules. However, awareness and consistent enforcement of these rules still need to improve.

As generative AI becomes more common, creating clear usage guidelines is essential to ensure responsible practices and reduce harm.


Featured Image: Algi Febri Sugita/Shutterstock