The HCU Effect In Google Updates via @sejournal, @martinibuster

It’s fairly commonplace for Google Updates to prompt SEOs to raise concerns about the Helpful Content Update (HCU). A careful consideration of known facts reveals that it may be possible to determine whether a site is truly impacted by HCU-related signals.

Black Box Systems

Black box systems are a concept every SEO should understand because it helps prevent misunderstandings about the consequences of a Google update, whether a core update or a spam update.

A black box is a system where someone observing from the outside knows what goes in (the input) and can see what comes out (the output). What the observer cannot do is infer what is happening inside the box based on those inputs and outputs.

There are literally thousands of processes going on inside Google’s black box algorithm, which makes it impossible to isolate the impact of a single factor. Identifying the ranking effect of the HCU is even harder because Google removed it as a standalone system and integrated it into the core algorithm.

On a side note, the black-box nature of Google’s ranking algorithms is why SEO ranking factor research based on millions of search results (the output) is unreliable. Those studies make good clickbait and will continue to be created as long as SEOs remain unaware of the principle of the black box.

Helpful Content Update (HCU)

The helpful content system (commonly referred to as the HCU) was integrated into Google’s core algorithm in March 2024 and no longer exists as a standalone system. It now operates as one component of the ranking algorithm among other ranking-related algorithms.

What that means is that it’s no longer a standalone system that impacts sites a couple of times a year. It’s now integrated into the ranking systems that run all the time.

Previously, when Google updated the Helpful Content System, a drop in a site’s rankings could reasonably be attributed to that system. That’s no longer the case, as it’s now a part of the ranking algorithm that runs continuously.

When Google announces an update, it no longer mentions whether the former HCU was updated because it’s not a system anymore; it’s a set of signals within the ranking algorithm that runs all the time.

This is how Google explained it:

“Announced in 2022 as the “Helpful Content Update”, this was a system designed to better ensure people see original, helpful content written by people, for people, in search results, rather than content made primarily to gain search engine traffic. In March 2024, it evolved and became part of our core ranking systems, as our systems use a variety of signals and systems to present helpful results to users.”

Here’s the exception to that rule:

If Google’s update announcement mentions that it is improving the signals for identifying “people-first” content, then it’s fairly reasonable to assume that some component of the former HCU was updated.

What Needs To Be Understood About The HCU

It needs to be understood that there is literally no way to claim with 100% certainty that the HCU is the reason why any given site lost rankings during a core algorithm update. There is no way to isolate the effects of the HCU signals from the hundreds or thousands of other signals.

The exception is when Google’s update announcement specifically mentions one of the components, but even then it’s important to identify the effects on the site instead of shrugging and declaring it’s the HCU. That’s an excuse, not a diagnosis.

How To Diagnose Effects Of HCU

Google recommends reading its documentation about all the signals for helpful “people-first” content in order to understand the effects of HCU-related issues. People-first means content that is not search-engine-first.

The documentation says:

“Google’s automated ranking systems are designed to present helpful, reliable information that’s primarily created to benefit people, not to gain search engine rankings, in the top Search results. This page is designed to help creators evaluate if they’re producing such content.”

Google’s documentation on people-first content recommends reviewing the following topics when debugging ranking issues:

  1. Content and quality
  2. Expertise
  3. Page experience
  4. People-first content
  5. Search engine-first content

Google’s documentation goes on to say that after identifying relevant web pages, other signals are applied to check whether the content exhibits “aspects” of experience, expertise, authoritativeness, and trustworthiness (E-E-A-T).

“After identifying relevant content, our systems aim to prioritize those that seem most helpful. To do this, they identify a mix of factors that can help determine which content demonstrates aspects of experience, expertise, authoritativeness, and trustworthiness, or what we call E-E-A-T.

Of these aspects, trust is most important. The others contribute to trust, but content doesn’t necessarily have to demonstrate all of them. For example, some content might be helpful based on the experience it demonstrates, while other content might be helpful because of the expertise it shares.”

Takeaway

The takeaway from Google’s documentation about people-first content is that there are multiple components to what the HCU was. People-first content is super important, especially because it’s standard SEO practice to create search-engine-first content by starting with top-ranked keywords, organizing site architecture around keywords, and generally optimizing for keywords rather than for people. For more on that topic, read: A Candid Assessment Of AI Search & SEO

Google’s People-First Documentation

Creating helpful, reliable, people-first content

Featured Image by Shutterstock/Studio Romantic

Mullenweg’s WordPress Pause Triggers Unexpected Complications via @sejournal, @martinibuster

Matt Mullenweg recently paused multiple WordPress.org services for a holiday break, but unintended effects started almost immediately, affecting ticket registrations for upcoming WordCamp conferences taking place around the world. Joost de Valk requested a fix on GitHub.

Mullenweg’s Pause In Services

Mullenweg’s unexpected pause in WordPress.org services affected new account registrations, new plugin, theme, and photo directory submissions, and reviews of new plugins. Mullenweg did not set a time for the return of those services, only saying that they’ll return once he has the “time, energy, and money” sometime in 2025. So the pause in WordPress.org services is for an indeterminate amount of time.

Unintended Consequences Of WordPress Pause

Joost de Valk filed a GitHub ticket calling attention to a serious issue affecting WordCamp registration for new community members. The GitHub ticket foregrounds the problem inherent in Mullenweg’s unilateral decision to pause certain WordPress.org services.

Mullenweg’s dramatic pause in services had the unintended consequence of diminishing the growth, energy and momentum of the WordPress community itself.

Joost’s GitHub ticket explains why Mullenweg’s holiday break is disruptive:

“Recently, a change was made to require people to have a WordPress.org account to buy a ticket for a WordCamp. Because of that change, the new Holiday Break imposed by Matt causes issues. Because of that imposed holiday break, people can no longer sign up for a WordPress.org account and thus can no longer do that before buying a WordCamp ticket.

There are several, large and small, WordCamps that might be affected by this, as can be seen from the list on Central, and probably including WordCamp Asia 2025.”

Members of the WordPress community agreed. Here is a sample of comments representative of their concerns:

MakarandMane shared:

“Following 2 weeks there are two Wordcamp Kolhapur & Kolkata.. After 2 weeks another WordCamp in pune.
Kolhapur is new community which focus totally on new attendee who don’t have an account. This will affect our tickets sales and contributor day.”

A concern about WordCamp EU was also raised:

“And WCEU has just opened their ticket sales…
In the WordCamp rules, we have to be inclusive…. refusing to sell a WordCamp ticket is not really…. welcoming to new community members”

Solution Found

A solution was proposed to fix the issue caused by Mullenweg’s pause in services.

WordPress community member dd32 posted:

“It’s been agreed to re-open the registration for WordCamp purposes, that’s been done in https://meta.trac.wordpress.org/changeset/14325”

Community members were grateful for the fix, although some reported that they were still blocked from registering a new WordPress.org account. That turned out to be a browser-side glitch, fixed by switching to a different browser or IP address.

Concern Raised About Solution

Not everyone agreed that the solution was ideal. One WordPress community member posted their concern and received four likes from other members, indicating that others agreed with them.

decodekult wrote:

“I would suggest reconsidering this solution. It addresses the urgent thing here (people could not buy ticket!) but it ignores the primary petition: remove wordpress.org login requirement for buying WordCamp tickets.

The change that closed this ticket does not do that: it guesses where you came from, and if it contains the magic words, then you are lucky enough as to create an account on wordpress.org.

Given that the primary reason for requiring a wordpress.org account that the owner can log into for buying WordCamp tickets was precisely preventing specific people from buying WordCamp tickets, because they could not log into their accounts due to their relationship with a specific company, and given that this ban was legally lifted by a court decision, I raise my hand here and request, as this ticket did from its own title, that the wordpress.org login requirement be removed for buying WordCamp tickets.”

What Happens When Decisions Are Imposed

The importance of what happened goes beyond new community members’ inability to register for local WordCamps. The issue is one of decisions and control. One person, Matt Mullenweg, appears to have made the unilateral decision to pause WordPress.org services. Joost de Valk himself uses the word “imposed” to characterize the pause, writing:

“…the new Holiday Break imposed by Matt causes issues. Because of that imposed holiday break, people can no longer sign up for a WordPress.org account…”

The word “imposed” in this context means a unilateral decision made by one person without consultation or choice from community members. Imposed is a strong (and appropriate) word because it conveys that the holiday break was not optional or voluntary but mandated by Matt Mullenweg.

Although this issue was solved by the WordPress community, it would never have happened if the decision had been made with input from stakeholders across the entire WordPress community, from developers and core contributors to WordCamp organizers. This is what happens when decision-making lacks community input and accountability.

Read the GitHub ticket:

Remove wordpress.org login requirement for buying WordCamp tickets

Featured Image by Shutterstock/Studio Romantic

Mullenweg Takes On Inc Magazine For “Biased” Interview via @sejournal, @martinibuster

Matt Mullenweg accused Inc Magazine of distorting an interview with him, publishing verifiably false facts, and quoting people who lacked credibility. Mullenweg posted compelling examples of how Inc magazine misrepresented his quotes and presented false facts, citing the selection of unflattering photos as evidence of a conscious effort to negatively slant its interview of him.

Mullenweg explains why he agreed to the interview:

“When Inc Magazine reached out to have David H. Freedman (website powered by WordPress) write a feature piece I was excited because though Inc wasn’t a magazine I have read much since I was a teenager, David seemed like a legit journalist who usually writes for better publications like The Atlantic. I opened up to David with a number of vulnerable stories, and allowed the photo shoot in my home in Houston.”

The article begins with an unflattering portrait of Mullenweg as a control freak who is fussy about the kind of toilet paper and soap provided at Automattic’s offices. Mullenweg writes that he had shared an anecdote with the writer about the time he visited Google’s headquarters in 2004 and was surprised by what he felt was “cheap” toilet paper. Years later, when he had his own offices, he made the decision to spend extra on good soap and toilet paper to benefit his employees’ experience at work. In other words, the choice to do that came from altruism and a concern for others, not a desire to control every detail.

But that’s not how Inc magazine portrayed it.

They write:

“Stooping to fling open a storage cabinet built into the bathroom wall, he points to a neat stack of wrapped toilet paper rolls. “The best toilet paper you can buy,” he assures me. “How much extra does really nice toilet paper cost? A buck or two?” The handsome bottles of soap by the sinks are premium, too, he adds.

I ask him who at Automattic, the estimated $710-million company of which Mullenweg is CEO, is responsible for toilet paper and soap quality control?

“Me,” he says, beaming.

Of course, Mullenweg’s control of Automattic extends well beyond the bathroom walls.”

Grim Images In Photographs

The author of the article described Mullenweg as a young-looking forty-year-old with a “near-constant grin,” which contradicts the two photographs from the interview that Inc chose to publish, neither of which shows him smiling. One of those photographs captures Mullenweg mid-blink, resulting in an absurd image of him typing with his eyes closed.

There are two other photographs, taken nine and twelve years ago, which do show him smiling. Mullenweg’s smile is not an affectation; it’s an authentic expression. Videos of him participating in interviews or speaking publicly consistently show him smiling. Mullenweg is correct to point out that Inc magazine made a deliberate choice not to publish an image of him smiling, which is his characteristic expression, as noted in the article itself.

Poorly Researched Article

Mullenweg’s critique of the article zeroes in on a series of false statements that are indicative of poor research, including repeatedly conflating a company’s earnings with its valuation.

One of the false claims asserts that Mullenweg coded WordPress in three “obsessive days” when the actual time period was four months. That might seem minor, but it isn’t, because it is evidence of what Mullenweg points out is poor research that could have been easily verified on Wikipedia.

His critique is thoroughly convincing and shows how he agreed to the interview with openness and the expectation of balanced reporting. His dismay at the results is palpably communicated in his blog post about it.

Nevertheless, he goes on to say that he supports journalism and places the blame on the article’s editor.

He writes:

“I know a lot of entrepreneurs follow me and I don’t want your takeaway to be “don’t talk to journalists” or “don’t engage with mainstream media.”

…this is a good example of where a decent journalist can’t overcome a crappy editor and quality control. I probably wouldn’t be excited to work with Inc Magazine again while Mike Hofman is in charge as editor-in-chief, he’s clearly overseeing a declining brand. But I will continue to engage with other media, and blog, and tweet, and tell my story directly.

When an editor wants to make you look good, they can! If they decide they want to drag you, they can too. Everything in my interactions with David and Inc made it seem this would be a positive piece, so be careful.

We’ll see if Inc Magazine has any journalistic integrity by their updates to the article.”

Rightfully Disappointed

Mullenweg researched the interviewer and verified that he was a competent and respectable writer. From Mullenweg’s point of view, the Inc magazine article was poorly researched and heavily slanted against him, what he termed a hit piece.

Read Mullenweg’s account of the interview:

Inc Hit Piece

Featured Image by Shutterstock/tomertu

Mullenweg Pauses WordPress Services – Hopes To Reopen Next Year via @sejournal, @martinibuster

Matt Mullenweg announced an abrupt pause in services offered by WordPress.org, affecting plugin submissions, reviews, theme submissions, and additions to the photo directory. He says he will keep providing these services to WP Engine, citing the recent court order against him and Automattic that compels him to offer “free labor and services.”

Pause For The Holidays

Mullenweg published a post on the official WordPress blog to announce a pause in free services offered by WordPress.org to give the “many tired volunteers around WordPress.org a break for the holidays.”

The pause affects:

  • New account registrations on WordPress.org
  • New plugin directory submissions
  • New plugin reviews
  • New theme directory submissions
  • New photo directory submissions

The pause doesn’t affect the ability to create new WordPress installs or the accounts on those self-hosted sites, which may sound contradictory.

Here’s what he wrote in his list of what services are paused:

“New account registrations on WordPress.org (clarifying so press doesn’t confuse this: people can still make their own WordPress installs and accounts)”

Mullenweg makes a point to note that the pause doesn’t affect WP Engine, stating that he’s legally required to keep providing “free labor and services” to WP Engine, and writing that if WP Engine requires those services, it can have its “high-priced attorneys” speak to his “high-priced attorneys” to gain access.

He then shared a cryptic message implying there was a chance that WordPress.org may not resume those services in 2025, saying that it hinges on his finding the “time, energy, and money” to undo the pause, resources he writes are currently being expended defending against WP Engine’s lawsuit against him and Automattic.

Mullenweg wrote:

“Right now much of the time I would spend making WordPress better is being taken up defending against WP Engine’s legal attacks. Their attacks are against Automattic, but also me individually as the owner of WordPress.org, which means if they win I can be personally liable for millions of dollars of damages.”

He signs off by inviting those who’d like to fund those attacks on him to sign up for WP Engine, and those who don’t to sign up for other web hosts, linking to both WP Engine and a WordPress.org page that offers promotions to induce WP Engine customers to switch away.

Read Mullenweg’s announcement here:

Holiday Break

Featured Image by Shutterstock/MPIX

Google Launches (Final?) Spam Update Of The Year via @sejournal, @MattGSouthern

Google announced the rollout of the December 2024 spam update.

The update, expected to be completed within a week, arrives amid ongoing industry discussions about the effectiveness of Google’s spam-fighting measures.

This December update caps off a year of spam-fighting measures, including the June Spam Update and the March Core Update, which targeted policy-violating websites and aimed to reduce “unhelpful” content by 40%.

It’s also worth mentioning that this update closely follows the December core update.

Looking Back At A Year Of Updates

This year saw an unprecedented frequency of major algorithm updates, with core updates in March, August, November, and December.

The August update, which took nearly three weeks to complete, targeted low-value SEO content while promoting high-quality material.

The December core update, launched on December 12, came unusually close to the November update, with Google explaining that different systems are often improved in parallel.

Policy Transformation

This year marked a shift in Google’s approach to spam detection and prevention with three major policy updates.

1. Site Reputation Abuse

Introduced in May 2024, this policy targets “parasite SEO” practices in which third-party content exploits an established domain’s authority.

This update mainly affected:

  • Major publishers hosting third-party product reviews
  • News sites with extensive coupon sections
  • Sports websites with AI-generated content

The policy change led to notable casualties, including several high-profile publishers receiving manual actions for hosting third-party content without sufficient oversight.

2. Expired Domain Abuse

Google’s enhanced focus on expired domain manipulation addressed:

  • Purchase of expired domains for backlink exploitation
  • Repurposing authoritative domains for unrelated content
  • Domain squatting for search ranking manipulation

3. Scaled Content Abuse

Previously known as “spammy auto-generated content,” this rebranded policy expanded to include:

  • AI-generated content at scale
  • Mass-produced content across multiple sites
  • Content translation manipulation
  • Automated content transformation techniques

See more: An In-Depth Look At Google Spam Policies Updates And What Changed

Spam-Specific Updates

June 2024 Spam Update

  • Week-long implementation period
  • Focused on policy-violating websites
  • Enhanced detection of automated content

November 2024 SRA Enforcement

  • Implementation of site reputation abuse penalties
  • Affected major publishers’ sponsored content strategies
  • Required significant content policy adjustments across news sites

Looking Ahead

With the December core update having completed its rollout and the new spam update now underway, prepare for another round of potential ranking fluctuations through the end of the year.

The spam update is expected to be completed next week, with progress tracked through Google’s Search Status Dashboard.


Featured Image: JHVEPhoto/Shutterstock

AI Crawlers Account For 28% Of Googlebot’s Traffic, Study Finds via @sejournal, @MattGSouthern

A report released by Vercel highlights the growing impact of AI bots in web crawling.

OpenAI’s GPTBot and Anthropic’s Claude generate nearly 1 billion requests monthly across Vercel’s network.

The data indicates that GPTBot made 569 million requests in the past month, while Claude accounted for 370 million.

Additionally, PerplexityBot contributed 24.4 million fetches, and AppleBot added 314 million requests.

Together, these AI crawlers represent approximately 28% of Googlebot’s total volume, which stands at 4.5 billion fetches.
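
As a rough sanity check, the 28% figure follows directly from the request counts above; the quick back-of-the-envelope calculation below is mine, not part of Vercel’s report.

```python
# Rough check of the ~28% figure using the monthly request counts cited above (in millions).
ai_crawler_requests = {
    "GPTBot": 569,
    "Claude": 370,
    "PerplexityBot": 24.4,
    "AppleBot": 314,
}
googlebot_requests = 4_500  # ~4.5 billion fetches, expressed in millions

total_ai = sum(ai_crawler_requests.values())        # ~1,277 million requests
share_of_googlebot = total_ai / googlebot_requests  # ~0.28

print(f"AI crawler total: {total_ai:.1f}M requests")
print(f"Share of Googlebot volume: {share_of_googlebot:.1%}")  # ~28.4%
```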

Here’s what this could mean for SEO.

Key Findings On AI Crawlers

The analysis looked at traffic patterns on Vercel’s network and various web architectures. It found some key features of AI crawlers:

  • Major AI crawlers do not render JavaScript, though they do pull JavaScript files.
  • AI crawlers are often inefficient, with ChatGPT and Claude spending over 34% of their requests on 404 pages.
  • The type of content these crawlers focus on varies. ChatGPT prioritizes HTML (57.7%), while Claude focuses more on images (35.17%).

Geographic Distribution

Unlike traditional search engines that operate from multiple regions, AI crawlers currently maintain a concentrated U.S. presence:

  • ChatGPT operates from Des Moines (Iowa) and Phoenix (Arizona)
  • Claude operates from Columbus (Ohio)

Web Almanac Correlation

These findings align with data shared in the Web Almanac’s SEO chapter, which also notes the growing presence of AI crawlers.

According to the report, websites now use robots.txt files to set rules for AI bots, telling them what they can or cannot crawl.

GPTBot is the most mentioned bot, appearing on 2.7% of mobile sites studied. The Common Crawl bot, often used to collect training data for language models, is also frequently noted.
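
For site owners who want to see how their own robots.txt rules treat these bots, here is a minimal sketch using Python’s standard-library robots.txt parser. The domain, path, and user-agent names are illustrative placeholders, not values taken from either report.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical example: check which AI crawlers a site's robots.txt allows.
SITE = "https://www.example.com"  # placeholder domain
AI_USER_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot"]  # illustrative names

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt file

for agent in AI_USER_AGENTS:
    allowed = parser.can_fetch(agent, f"{SITE}/blog/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'} for {SITE}/blog/")
```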

Both reports stress that website owners need to adjust to how AI crawlers behave.

3 Ways To Optimize For AI Crawlers

Based on recent data from Vercel and the Web Almanac, here are three ways to optimize for AI crawlers.

1. Server-Side Rendering

AI crawlers don’t execute JavaScript. This means any content that relies on client-side rendering might be invisible.

Recommended actions:

  • Implement server-side rendering for critical content (see the sketch after this list)
  • Ensure main content, meta information, and navigation structures are present in the initial HTML
  • Use static site generation or incremental static regeneration where possible
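
As an illustration of the server-side rendering point above, here is a minimal sketch using Flask, in which the main content is rendered into the initial HTML on the server instead of being injected by client-side JavaScript. The route, template, and article data are hypothetical.

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# Hypothetical article data; in practice this would come from a CMS or database.
ARTICLE = {
    "title": "Example Article",
    "body": "Full article text that should be readable without JavaScript.",
}

PAGE_TEMPLATE = """
<!doctype html>
<html>
  <head><title>{{ title }}</title></head>
  <body>
    <!-- The main content is present in the initial HTML, so crawlers
         that don't execute JavaScript can still read it. -->
    <article>
      <h1>{{ title }}</h1>
      <p>{{ body }}</p>
    </article>
  </body>
</html>
"""

@app.route("/article")
def article():
    # The server returns fully rendered HTML instead of an empty shell
    # that depends on client-side JavaScript to fetch and display content.
    return render_template_string(PAGE_TEMPLATE, **ARTICLE)

if __name__ == "__main__":
    app.run()
```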

2. Content Structure & Delivery

Vercel’s data shows distinct content type preferences among AI crawlers:

ChatGPT:

  • Prioritizes HTML content (57.70%)
  • Spends 11.50% of fetches on JavaScript files

Claude:

  • Focuses heavily on images (35.17%)
  • Dedicates 23.84% of fetches to JavaScript files

Optimization recommendations (a simple audit sketch follows this list):

  • Structure HTML content clearly and semantically
  • Optimize image delivery and metadata
  • Include descriptive alt text for images
  • Implement proper header hierarchy
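
As a rough way to check the image and heading recommendations above, the following sketch uses the requests and BeautifulSoup libraries to flag images that lack alt text and to print a page’s heading hierarchy. The URL is a placeholder.

```python
import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com/some-article"  # placeholder URL

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Flag images that are missing descriptive alt text.
missing_alt = [img.get("src") for img in soup.find_all("img") if not img.get("alt")]
print(f"Images missing alt text: {len(missing_alt)}")
for src in missing_alt:
    print(f"  {src}")

# Print the heading hierarchy to spot skipped levels (e.g., h1 jumping to h3).
for heading in soup.find_all(["h1", "h2", "h3", "h4"]):
    indent = "  " * (int(heading.name[1]) - 1)
    print(f"{indent}{heading.name}: {heading.get_text(strip=True)}")
```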

3. Technical Considerations

High 404 rates from AI crawlers mean you need to keep these technical considerations top of mind:

  • Maintain updated sitemaps
  • Implement proper redirect chains
  • Use consistent URL patterns
  • Regularly audit 404 errors (a small audit sketch follows this list)
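
To follow up on the 404 auditing point above, here is a minimal sketch that checks a list of URLs (for example, the URLs in your sitemap) and reports broken or redirected ones. The URLs shown are placeholders.

```python
import requests

# Placeholder URLs; in practice, pull these from your sitemap or server logs.
urls_to_check = [
    "https://www.example.com/",
    "https://www.example.com/old-page",
    "https://www.example.com/blog/post-1",
]

for url in urls_to_check:
    try:
        # Follow redirects so a chain that ends in a 404 is still caught.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code == 404:
            print(f"404 Not Found: {url}")
        elif response.history:
            print(f"Redirected ({len(response.history)} hop(s)): {url} -> {response.url}")
    except requests.RequestException as exc:
        print(f"Request failed for {url}: {exc}")
```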

Looking Ahead

For search marketers, the message is clear: AI chatbots are a new force in web crawling, and sites need to adapt their SEO accordingly.

Although AI bots may rely on cached or dated information now, their capacity to parse fresh content from across the web will grow.

You can help ensure your content is crawled and indexed with server-side rendering, clean URL structures, and updated sitemaps.


Featured Image: tete_escape/Shutterstock

Cut The Malarkey. Speaking Frankly About AI Search & SEO via @sejournal, @martinibuster

Search marketing is undergoing dramatic changes, with many debating whether SEO is on its way out as AI Search rises in popularity. What follows is a candid assessment of what is going on with SEO and search engines today.

An SEO School Shuts Down

An SEO school run by a group called Authority Hackers recently announced its closure, emphasizing that it’s not because SEO is dead but because of the collapse of the content site model. They cited three reasons for this situation. The following is not about the SEO school; that’s just a symptom of something important going on today.

1. Google Updates is one of the reasons cited for the decline of the content site model. Here’s the candid part: If the Google updates killed your publishing site, that’s kind of a red flag that there’s something about the SEO that needs examination.

Here’s the frank part: Google’s updates have generally crushed websites built on a workflow that begins with keyword research, followed by stealing content ideas from competitors and scraping Google’s SERPs for more keyword phrases. That’s not audience research, that’s search engine research. Search engine research results in Made For Search Engine websites. This doesn’t describe all websites that lost rankings, but it’s a common method of SEO that, in my opinion, seriously needs to be reassessed.

2. The second reason cited by the SEO school is the “AI content tsunami.” I’m not sure what that means because it can mean a lot of things. Is that AI content spam? Or is it a reference to AI content sites overwhelming the publisher who cranks out two articles a week?

Do I need to say out loud what content output implies about site authority?

3. The third reason for the decline of the content model is the dramatic changes to Search Engine Results Pages (SERPs). Now this, this is a valid reason, but not for the reasons most SEOs think.

The organic SERPs have, for the past 25 years, been dominated by the top three ranked positions, with about 20-30% of the traffic siphoned off to Google Ads for search topics that convert. That’s the status quo: Three sites are winning and everyone else is losing.

AI Overviews has not changed a thing. AIO doubled down on the status quo. According to BrightEdge research, the top ranked websites in AIO are largely the same as the organic top ranked websites. What that means is that three sites are still winning and everyone else is still losing.

The biggest change to the SERPs that most SEOs are missing is what I already mentioned, that made for search engine websites have been getting wiped out by Google updates.

The helpful content update (HCU) is the scapegoat, but that’s just ONE algorithm out of hundreds. There is literally no way for anyone to claim with 100% certainty that the HCU is the reason why any given site lost rankings. Google is a black box algorithm. A lot of people are claiming otherwise, but none of them can explain how they are able to pick out the effects of one algorithm out of hundreds.

The thing about being in SEO for 25 years is that people like me are accustomed to dramatic changes. Yes, the SERPs have changed dramatically. That’s how search engines have always done things.

If you’ve only been doing SEO for ten years, I can understand how the recent changes seem dramatic. But when you’ve been in it for as long as I have, dramatic changes are expected. That’s the status quo. Dramatic SERP changes are how it’s always been.

SEO Is Now AEO?

Someone started a discussion with two sentences that said AEO is the new SEO and that ChatGPT was quickly becoming the leading search engine, inspiring well over a hundred responses. The discussion is in a private Facebook group called AI/ChatGPT Prompts for Entrepreneurs.

AEO is a relatively new acronym meaning Answer Engine Optimization. It describes AI search optimization. AISEO would be a more precise acronym, but it sounds too close to E-I-E-I-O.

Is AEO really a thing? Consider this: All AI search engines use a search index and traditional search ranking algorithms. For goodness sakes, Perplexity AI uses a version of Google’s PageRank, one of the most traditional ranking algorithms of all time.
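
For anyone who has never looked under the hood, the original PageRank idea fits in a few lines of code. The sketch below is a generic textbook power-iteration version over a made-up link graph, not Google’s or Perplexity’s actual implementation.

```python
# Toy PageRank via power iteration over a tiny, made-up link graph.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

damping = 0.85
pages = list(links)
rank = {page: 1 / len(pages) for page in pages}

for _ in range(50):  # iterate until the scores stabilize
    new_rank = {}
    for page in pages:
        # Sum the rank passed along by every page that links to this one.
        incoming = sum(
            rank[other] / len(links[other])
            for other in pages
            if page in links[other]
        )
        new_rank[page] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print({page: round(score, 3) for page, score in rank.items()})
```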

People in that discussion generally agreed that AEO is not a thing, that AI Search Engines were not yet a major challenge to Google and that SEO is still a thing.

All is not upside down with the world because at least in that discussion the overwhelming sentiment is that AEO is not a thing. Many observed that ChatGPT uses Bing’s index, so if you’re doing “AEO” for ChatGPT you’re actually just doing SEO for Bing. Others expressed that the average person has no experience with ChatGPT and until it’s integrated into a major browser it’s going to remain a niche search engine.

There was one person insisting that Perplexity AI was designed as an AI Search Engine, completely misunderstanding that Perplexity AI uses a search index and identifies authoritative websites with an updated version of Google’s old PageRank algorithm.

AI has been a strong factor in Google’s search engine for at least 10 years. Longer if you consider that Google Brain began as a project in 2011.

  • AI in search is not new.
  • Search results summaries aren’t new either (Featured Snippets).
  • Google’s Information Gain patent for AI chatbots was filed in 2018.

AI in search feels new, but it’s not new. The biggest difference isn’t in the back end; it’s in the front end, and it’s changing how users interact with data. This is the big change that all SEOs should be paying close attention to.

Featured Image by Shutterstock/pathdoc

OpenAI Announces 1-800-ChatGPT via @sejournal, @martinibuster

OpenAI just rolled out voice access to ChatGPT by phone and text access through the WhatsApp messaging system. The new services allow users to talk to and message ChatGPT to get answers. The phone access method enables users with an unstable data connection, or none at all, to use ChatGPT from a telephone while on the go, even without a ChatGPT account.

Speak With ChatGPT By Phone

Speaking with ChatGPT only requires setting up ChatGPT as a contact using the 1-800-ChatGPT phone number, which in digits is 1-800-242-8478. Once it’s added to the phone’s contacts list, a user can call and speak with ChatGPT to get answers.

Screenshot of video presenters pointing downward to a banner that reads Call Toll Free 1-800-ChatGPT

The presenters phoned ChatGPT with an iPhone, an old flip phone, and a rotary dial telephone to demonstrate that it’s an ordinary phone call that reaches ChatGPT and returns answers. You can do it on the road or at home from a landline.

The functionality is currently only available in the United States and is limited to 15 minutes of free calling per month. However, you can also download the ChatGPT app and create an account to talk even longer.

Image of a man speaking with ChatGPT with an old fashioned rotary phone

An example phone call involved asking ChatGPT to explain Reinforcement Learning as if to a five year old.

ChatGPT spoke the following answer:

“Sure! Imagine you have a robot friend and you want to teach it to clean up your room you give it a treat every time it does a good job that’s reinforcement fine-tuning the robot learns to do better by getting rewards.”

ChatGPT On WhatsApp

OpenAI also announced a way to reach ChatGPT through WhatsApp, and it’s available to users anywhere in the world. The demonstration showed the presenters accessing 1-800-ChatGPT on WhatsApp through the mobile phone’s contacts list. But it can also be accessed by scanning the following QR code.

Screenshot Of ChatGPT On WhatsApp QR Code

The WhatsApp experience is currently limited to texting with ChatGPT, and users can access it without having an account. OpenAI is working on ways to authenticate WhatsApp access with a ChatGPT account and to enable searching with images.

Facts About New Access Methods

The new functionalities use the GPT-4o mini model. OpenAI engineers created these new features over just the past few weeks, which is pretty amazing.

Watch the announcement of the new ways to interact with ChatGPT:

1-800-ChatGPT

Marketing Trend: Consumers Prefer Relatability via @sejournal, @martinibuster

A new iStock 2025 Marketing Trends report finds declining consumer trust in social media and influencers, emphasizing the importance of relatability over perfection for marketers and businesses.

Trust For Marketing Success

The iStock report finds that 81% of consumers don’t trust content on social media. Nevertheless, they still turn to visual platforms like TikTok and Instagram Reels for discovery and inspiration. In terms of influence, 64% of consumers trust businesses over celebrities and influencers, particularly brands that align with their values (58%).

Authenticity And Real-User Content (RUC)

iStock’s data shows that consumer perception of influencer “realness” has declined, with 67% of people trusting traditional advertising over sponsored influencer posts. iStock recommends what it calls Real-User Content (RUC), images and videos that project realness. Video was also highlighted by iStock as a strong trend for marketers to consider as more consumers turn to video content for learning and inspiration.

iStock recommends that marketers focus on being “real, truthful, and original” as the key to building trust. While authenticity is important, iStock is emphasizing offering real stories and being relatable as opposed to content that reflects virtually unattainable perfection.

They write:

“This change is affecting how people interact with visual content, especially on social media. Despite people’s lack of trust, they still find these platforms valuable, 82% of users still go to places like TikTok, Instagram Reels, and YouTube Shorts for video content to learn something new or get inspiration. In other words, people want the benefits of social media, without the negative effects. This shift has also made video-driven social search more popular, where platforms focused on video are no longer just for scrolling —they’ve become places to search and discover. In 2025, to succeed, you need to speak directly to your audience, this approach will always be more effective than a flood of generic posts.”

The report recommends radical honesty: showing the company in ways that include imperfect moments. iStock’s 2025 Marketing Trends report outlines an approach to connecting with consumers that reflects the qualities of realness people are looking for in the content they consume.

Read iStock’s report:

Crack the Code on Trust: 2025 Marketing Insights for Small Businesses

Featured Image by Shutterstock/HAKINMHAN

Bing Search Updates: Faster, More Precise Results via @sejournal, @MattGSouthern

Microsoft has announced updates to Bing’s search infrastructure incorporating large language models (LLMs), small language models (SLMs), and new optimization techniques.

This update aims to improve performance and reduce costs in search result delivery.

In an announcement, the company states:

“At Bing, we are always pushing the boundaries of search technology. Leveraging both Large Language Models (LLMs) and Small Language Models (SLMs) marks a significant milestone in enhancing our search capabilities. While transformer models have served us well, the growing complexity of search queries necessitated more powerful models.”

Performance Gains

Using LLMs in search systems can create problems with speed and cost.

To solve these problems, Bing has trained SLMs, which it claims are 100 times faster than LLMs.

The announcement reads:

“LLMs can be expensive to serve and slow. To improve efficiency, we trained SLM models (~100x throughput improvement over LLM), which process and understand search queries more precisely.”

Bing also uses NVIDIA TensorRT-LLM to improve how well SLMs work.

TensorRT-LLM is a tool that helps reduce the time and cost of running large models on NVIDIA GPUs.

Impact On “Deep Search”

According to a technical report from Microsoft, integrating Nvidia’s TensorRT-LLM technology has enhanced the company’s “Deep Search” feature.

Deep Search leverages SLMs in real time to provide relevant web results.

Before optimization, Bing’s original transformer model had a 95th percentile latency of 4.76 seconds per batch (20 queries) and a throughput of 4.2 queries per second per instance.

With TensorRT-LLM, the latency was reduced to 3.03 seconds per batch, and throughput increased to 6.6 queries per second per instance.

This represents a 36% reduction in latency and a 57% decrease in operational costs.
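
Working those figures through confirms the percentages; the quick calculation below uses only the numbers cited above.

```python
# Latency and throughput figures reported for Bing's Deep Search optimization.
latency_before, latency_after = 4.76, 3.03      # seconds per 20-query batch
throughput_before, throughput_after = 4.2, 6.6  # queries per second per instance

latency_reduction = (latency_before - latency_after) / latency_before
throughput_gain = (throughput_after - throughput_before) / throughput_before

print(f"Latency reduction:   {latency_reduction:.0%}")  # ~36%
print(f"Throughput increase: {throughput_gain:.0%}")    # ~57%, which the report
# ties to the cited reduction in operational cost.
```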

The company states:

“… our product is built on the foundation of providing the best results, and we will not compromise on quality for speed. This is where TensorRT-LLM comes into play, reducing model inference time and, consequently, the end-to-end experience latency without sacrificing result quality.”

Benefits For Bing Users

This update brings several potential benefits to Bing users:

  • Faster search results with optimized inference and quicker response times
  • Improved accuracy through enhanced capabilities of SLM models, delivering more contextualized results
  • Cost efficiency, allowing Bing to invest in further innovations and improvements

Why Bing’s Move to LLM/SLM Models Matters

Bing’s switch to LLM/SLM models and TensorRT optimization could impact the future of search.

As users ask more complex questions, search engines need to better understand and deliver relevant results quickly. Bing aims to do that using smaller language models and advanced optimization techniques.

While we’ll have to wait and see the full impact, Bing’s move sets the stage for a new chapter in search.


Featured Image: mindea/Shutterstock