Marketing Trend: Consumers Prefer Relatability via @sejournal, @martinibuster

A new iStock 2025 Marketing Trends report finds declining consumer trust in social media and influencers, emphasizing the importance of relatability over perfection for marketers and businesses.

Trust For Marketing Success

The iStock report finds that 81% of consumers don’t trust content on social media. Nevertheless, they still trust visual platforms like TikTok and Instagram Reels for discovery and inspiration. In terms of influence, 64% of consumers trust businesses over celebrities and influencers, particularly brands that align with their values (58%).

Authenticity And Real-User Content (RUC)

iStock’s data shows that consumer perception of influencer “realness” has declined, with 67% of people trusting traditional advertising over sponsored influencer posts. iStock recommends what it calls Real-User Content (RUC): images and videos that project realness. Video was also highlighted by iStock as a strong trend for marketers to consider, as more consumers turn to video content for learning and inspiration.

iStock recommends that marketers focus on being “real, truthful, and original” as the key to building trust. While authenticity is important, iStock emphasizes real stories and relatability over content that reflects virtually unattainable perfection.

They write:

“This change is affecting how people interact with visual content, especially on social media. Despite people’s lack of trust, they still find these platforms valuable, 82% of users still go to places like TikTok, Instagram Reels, and YouTube Shorts for video content to learn something new or get inspiration. In other words, people want the benefits of social media, without the negative effects. This shift has also made video-driven social search more popular, where platforms focused on video are no longer just for scrolling —they’ve become places to search and discover. In 2025, to succeed, you need to speak directly to your audience, this approach will always be more effective than a flood of generic posts.”

The report recommends radical honesty: showing the company in ways that include imperfect moments. Overall, iStock’s 2025 Marketing Trends report outlines an approach to connecting with consumers that reflects the realness people are looking for in the content they consume.

Read iStock’s report:

Crack the Code on Trust: 2025 Marketing Insights for Small Businesses

Featured Image by Shutterstock/HAKINMHAN

Bing Search Updates: Faster, More Precise Results via @sejournal, @MattGSouthern

Microsoft has announced updates to Bing’s search infrastructure incorporating large language models (LLMs), small language models (SLMs), and new optimization techniques.

This update aims to improve performance and reduce costs in search result delivery.

In an announcement, the company states:

“At Bing, we are always pushing the boundaries of search technology. Leveraging both Large Language Models (LLMs) and Small Language Models (SLMs) marks a significant milestone in enhancing our search capabilities. While transformer models have served us well, the growing complexity of search queries necessitated more powerful models.”

Performance Gains

Using LLMs in search systems can create problems with speed and cost.

To solve these problems, Bing has trained SLMs, which it claims are 100 times faster than LLMs.

The announcement reads:

“LLMs can be expensive to serve and slow. To improve efficiency, we trained SLM models (~100x throughput improvement over LLM), which process and understand search queries more precisely.”

Bing also uses NVIDIA TensorRT-LLM to improve the inference performance of its SLMs.

TensorRT-LLM is a tool that helps reduce the time and cost of running large models on NVIDIA GPUs.

Impact On “Deep Search”

According to a technical report from Microsoft, integrating NVIDIA’s TensorRT-LLM technology has enhanced the company’s “Deep Search” feature.

Deep Search leverages SLMs in real time to provide relevant web results.

Before optimization, Bing’s original transformer model had a 95th percentile latency of 4.76 seconds per batch (20 queries) and a throughput of 4.2 queries per second per instance.

With TensorRT-LLM, the latency was reduced to 3.03 seconds per batch, and throughput increased to 6.6 queries per second per instance.

This represents a 36% reduction in latency and a 57% decrease in operational costs.
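
For readers who want to see how those percentages relate to the raw figures, here is a quick back-of-the-envelope check using only the numbers quoted above; the 36% figure follows from the latency drop, and per-instance throughput rises by roughly 57%:

```python
# Sanity check of the percentages using the batch figures quoted from Microsoft's report
baseline_latency = 4.76     # seconds per 20-query batch, original transformer model
optimized_latency = 3.03    # seconds per batch with TensorRT-LLM

baseline_throughput = 4.2   # queries per second per instance, before optimization
optimized_throughput = 6.6  # queries per second per instance, with TensorRT-LLM

latency_reduction = (baseline_latency - optimized_latency) / baseline_latency
throughput_gain = (optimized_throughput - baseline_throughput) / baseline_throughput

print(f"Latency reduction: {latency_reduction:.0%}")  # ~36%
print(f"Throughput gain:   {throughput_gain:.0%}")    # ~57%
```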

The company states:

“… our product is built on the foundation of providing the best results, and we will not compromise on quality for speed. This is where TensorRT-LLM comes into play, reducing model inference time and, consequently, the end-to-end experience latency without sacrificing result quality.”

Benefits For Bing Users

This update brings several potential benefits to Bing users:

  • Faster search results with optimized inference and quicker response times
  • Improved accuracy through enhanced SLM capabilities, delivering more contextualized results
  • Cost efficiency, allowing Bing to invest in further innovations and improvements

Why Bing’s Move to LLM/SLM Models Matters

Bing’s switch to LLM/SLM models and TensorRT optimization could impact the future of search.

As users ask more complex questions, search engines need to better understand and deliver relevant results quickly. Bing aims to do that using smaller language models and advanced optimization techniques.

While we’ll have to wait and see the full impact, Bing’s move sets the stage for a new chapter in search.


Featured Image: mindea/Shutterstock

Google Formalizes Decade-Old Faceted Navigation Guidelines via @sejournal, @MattGSouthern

Google has updated its guidelines on faceted navigation by turning an old blog post into an official help document.

What started as a blog post in 2014 is now official technical documentation.

This change reflects the complexity of ecommerce and content-heavy websites, as many sites adopt advanced filtering systems for larger catalogs.

Faceted Navigation Issues

Ever used filters on an e-commerce site to narrow down products by size, color, and price?

That’s faceted navigation – the system allowing users to refine search results using multiple filters simultaneously.

While this feature is vital for users, it can create challenges for search engines, prompting Google to release new official documentation on managing these systems.

Modern Challenges

The challenge with faceted navigation lies in the mathematics of combinations: each additional filter option multiplies the potential URLs a search engine might need to crawl.

For example, a simple product page with options for size (5 choices), color (10 choices), and price range (6 ranges) could generate 300 unique URLs – for just one product.
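
To see how quickly those combinations multiply, here is a small illustrative sketch (the filter values are hypothetical) that enumerates every filter combination for a single product page:

```python
from itertools import product

# Hypothetical facet values for a single product page
sizes = ["xs", "s", "m", "l", "xl"]                  # 5 choices
colors = ["red", "blue", "green", "black", "white",
          "grey", "navy", "pink", "brown", "beige"]  # 10 choices
prices = ["0-25", "25-50", "50-100", "100-200",
          "200-500", "500-plus"]                     # 6 ranges

# Every combination of the three filters is a distinct crawlable URL
urls = [
    f"/product?size={s}&color={c}&price={p}"
    for s, c, p in product(sizes, colors, prices)
]

print(len(urls))  # 5 * 10 * 6 = 300 URL variations for one product
```

Add a fourth facet with just four options and the count for that single product jumps to 1,200 URLs.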

According to Google Analyst Gary Illyes, this multiplication effect makes faceted navigation the leading cause of overcrawling issues reported by website owners.

The impact includes:

  • Wasting Server Resources: Many websites use too much computing power on unnecessary URL combinations.
  • Inefficient Crawl Budget: Crawlers may take longer to find important new content because they are busy with faceted navigation.
  • Weakening SEO Performance: Having several URLs for the same content can hurt a website’s SEO.

What’s Changed?

The new guidance is similar to the 2014 blog post, but it includes some important updates:

  1. Focus on Performance: Google now clearly warns about the costs of using computing resources.
  2. Clear Implementation Options: The documentation gives straightforward paths for different types of websites.
  3. Updated Technical Recommendations: Suggestions now account for single-page applications and modern SEO practices.

Implementation Guide

For SEO professionals managing sites with faceted navigation, Google now recommends a two-track approach (a brief illustrative sketch follows the lists below):

Non-Critical Facets:

  • Block via robots.txt
  • Use URL fragments (#)
  • Implement consistent rel="nofollow" attributes

Business-Critical Facets:

  • Maintain standardized parameter formats
  • Implement proper 404 handling
  • Use strategic canonical tags
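
As a rough illustration of the two tracks above (the parameter names and domain are hypothetical, not taken from Google’s documentation), non-critical facets can be kept out of the crawl with robots.txt rules, while a business-critical facet page can carry a canonical tag declaring its preferred URL:

```
# Hypothetical robots.txt rules blocking crawl of non-critical facet parameters
User-agent: *
Disallow: /*?*sort=
Disallow: /*?*instock=

# On a business-critical facet page such as /shoes?color=red, a self-referencing
# canonical tag (in the page's HTML <head>) declares the preferred URL form:
# <link rel="canonical" href="https://www.example.com/shoes?color=red">
```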

Looking Ahead

This documentation update suggests Google is preparing for increasingly complex website architectures.

SEO teams should evaluate their current faceted navigation against these guidelines to ensure optimal crawling efficiency and indexing performance.


Featured Image: Shutterstock/kenchiro168

Google Refreshes Generative AI Prohibited Use Policy via @sejournal, @MattGSouthern

Google has updated its Generative AI Prohibited Use Policy to clarify the proper use of its generative AI products and services.

The update simplifies the language, and lists prohibited behaviors with examples of unacceptable conduct.

Key Updates To Policy

The updated policy clarifies existing rules without adding new restrictions.

It specifically bans using Google’s AI tools to create or share non-consensual intimate images or to conduct security breaches through phishing or malware.

The policy states:

“We expect you to engage with [generative AI models] in a responsible, legal, and safe manner.”

Prohibited activities include dangerous, illegal, sexually explicit, violent, hateful, or deceptive actions, as well as content related to child exploitation, violent extremism, self-harm, harassment, and misinformation.

Prohibited Activities

The policy prohibits using Google’s generative AI for an expansive range of dangerous, illegal, and unethical activities:

  • Illegal Activities: Engaging in or facilitating child exploitation, violent extremism, terrorism, non-consensual intimate imagery, self-harm, or other illegal activities.
  • Security Violations: Compromising security through phishing, malware, spam, infrastructure abuse, or circumventing safety protections.
  • Explicit and Harmful Content: Generating sexually explicit content, hate speech, harassment, violence incitement, or other abusive content.
  • Deception and Misinformation: Impersonation without disclosure, misleading claims of expertise, misrepresenting content provenance, or spreading misinformation related to health, governance, and democratic processes.

Exceptions Allowed

New language in the policy carves out exceptions for some restricted activities in particular contexts.

Educational, documentary, scientific, artistic, and journalistic uses may be permitted, as well as other cases “where harms are outweighed by substantial benefits to the public.”

Why This Matters

The policy update addresses the rapid advancement of generative AI technologies that create realistic text, images, audio, and video.

This progress raises concerns about ethics, misuse, and societal impact.

Looking Ahead

Google’s updated policy is now in effect, and the old and new versions are publicly available.

Leading AI companies like OpenAI and Microsoft have released their own usage rules. However, awareness of these rules and consistent enforcement still need to improve.

As generative AI becomes more common, creating clear usage guidelines is essential to ensure responsible practices and reduce harm.


Featured Image: Algi Febri Sugita/Shutterstock

Google’s Ex-CEO on AI: What SEOs Should Pay Attention To via @sejournal, @martinibuster

Former Google CEO Eric Schmidt said that the trajectory of AI is both “enticing” and “frightening.” He emphasized that AI is not just an evolution of technology, it’s about shaping the future of humanity. His comments reflect how the highest levels of technology leaders think about AI and carry implications for how this will play out for SEO.

Tech Companies Shouldn’t Be Making The Decisions

Asked if the decisions about the future of technology should be left to people like him, Eric Schmidt responded no. He cited Henry Kissinger, who said ten years ago that people like Schmidt should not be making those decisions, and used the example of social media to explain why.

“Let’s look at social media. We’ve now arrived at a situation where we have these huge companies in which I was part of. And they all have this huge positive implication for entertainment and culture, but they have significant negative implications in terms of tribalism, misinformation, individual harm, especially against young people, and especially against young women.

None of us foresaw that. Maybe if we’d had some non-technical people doing this with us, we would have foreseen the impact on society. I don’t want us to make that mistake again with a much more powerful tool.”

AI Is Both Frightening & Enticing

Eric Schmidt has been an active participant in the development of computer technology since 1975. The awe he expresses for the moment we are in now is something that everyone at every level of search marketing, from publishing and SEO to advertising and ecommerce, should be aware of. The precipice we find ourselves at should not be underestimated; at this point it barely seems possible to overestimate it.

Given that Sundar Pichai, Google’s current CEO, has stated that search will change in profound ways in 2025, and given the revelation that Google Gemini 2.0 will play a role in powering AI search, Schmidt’s declarations about the mind-boggling scale of computing capabilities should be of high importance to search marketers, both for the capabilities they find enticing and for the frightening realities of what Google will be doing.

Schmidt observed:

“There are two really big things happening right now in our industry. One is the the development of what are called agents, where agents can do something. So you can say I want to build a house so you find the architect, go through the land use, buy the houses. Can all be done by computer not just by humans.

And then the other thing is the ability for the computer to write code. So if I say to you I wanted sort of study the audience for this show and I want you to figure out how to make a variant of my show for each and every person who’s watching it. The computer can do that. That’s how powerful the programming capabilities of AI are.

In my case, I’ve managed programmers my whole life and they typically don’t do what I want. You know, they do whatever they want.

With a computer, it’ll do exactly what you say. And the gains in computer programming from the AI systems are frightening, they’re both enticing because they will change the slope.
Right now, the slope of AI is like this…”

Screenshot Of Schmidt Illustrating The Slope Of AI

He continued his answer:

“…and when you have AI scientists, that is computers developing AI, the slope will go this… it will go wham! But that development puts an awful lot of power in the hands of an awful lot of people.”

Screenshot Of Eric Schmidt Illustrating The Future AI Slope

Embedding The Intrinsic Goodness Of Humanity In AI

The interview ended with a question and answer around the possibility of embedding positive human values and ethical principles into AI systems during their development.

There are some people who complain about the ethical guardrails placed on AI, claiming that the guardrails are based on political or ideological values, reflecting the tension between those who feel entitled to the freedom to use AI to whatever ends they desire and those who fear that AI may be used for evil purposes.

Eric Schmidt addresses this tension by saying that machines can be embedded with the best of human goodness.

The interviewer noted that Schmidt, in his book, expressed confidence that machines will reflect “the intrinsic goodness in humanity” and asked whether humanity can truly be considered inherently good, especially when some people clearly aren’t.

Schmidt acknowledged that there is a certain percentage of people who are evil. But he also expressed that in general people tend to be good and that humans can put ethical rules into AI machines.

He explained:

“The good news is the vast majority of humans on the planet are well meaning, they’re social creatures. They want themselves to do well and they want their neighbors and especially their tribe, to do well.

I see no reason to think that we can’t put those rules into the computers.

One of the tech companies started its training of its model by putting in the Constitution and the Constitution was embedded inside of the model of how you treat things.

Now, of course, we can disagree on what the Constitution is. But these systems are under our control.

There are humans who are making decisions to train them, and furthermore, the systems that you use, whether it’s ChatGPT or Gemini or or Claude or what have you, have all been carefully examined after they were produced to make sure they don’t have any really horrific rough edges.

So humans are directly involved in the creation of these models, and they have a responsibility to make sure that nothing horrendous occurs as a result of them.”

That statement seems to presume that people like him shouldn’t be making the decisions on their own but that decisions should be made in consultation with outsiders, as he said at the beginning of the interview. Nevertheless, the decisions are always made by corporations.

People Mean Well But Corporations Answer To Profits

The question that wasn’t asked is whether corporations, which with a few exceptions (like the outdoor clothing company Patagonia) generally aren’t motivated by “human goodness” or guided by ethics, can be trusted to imbue machines with human goodness.

Despite clickbait articles to the contrary, Google still publishes its “don’t be evil” motto on its Code of Conduct page; it has simply been moved to the bottom of the page. Nevertheless, Google’s corporate decisions, including those about search, are strongly based on profit.

On the issue of whether AI Search is strip mining Internet websites out of existence, Sundar Pichai, the current Google CEO, struggled to say what Google does to preserve the web ecosystem. That’s the outcome of a system that prioritizes profits.

Is that evil, or is it just the banality of a corporate system that prioritizes profit over everything else, leading to harmful outcomes? What does that say about the future of AI Search and the web ecosystem?

Screenshot of Google’s De-Prioritized Don’t Be Evil Motto

Watch The Interview With Eric Schmidt:

Featured Image by Shutterstock/AYO Production

ChatGPT Update: Free Web Search, New Voice & Maps Features via @sejournal, @MattGSouthern

OpenAI has updated ChatGPT to make web search available to all registered users. The update also includes voice search and maps integration.

With voice search, you can ask questions about current events and local information in a natural way. This feature works in multiple languages and allows for real-time queries.

Additionally, ChatGPT’s mobile apps now include maps, which can help you find businesses and restaurants near you.

Lastly, for those using ChatGPT as their default search provider, OpenAI has improved its handling of navigational queries.

Search Available For Free

OpenAI announced that the web search feature of ChatGPT, which was previously only available to Plus subscribers, is now accessible to all logged-in users worldwide.

This service can be accessed through chatgpt.com as well as the mobile and desktop applications.


Advanced Voice Search Integration

A key improvement with this update is advanced voice search.

This lets you find current web information through natural conversation.

The system can now handle complex questions, including travel planning and local events. It also supports multiple languages and provides real-time information.

In a video about the advanced voice mode, an OpenAI representative demonstrates how you can have natural conversations with ChatGPT to get information about events and activities.

For instance, when asked about festive activities in Zurich, Switzerland, for the week of December 23rd, 2024, ChatGPT provided details on Christmas markets, singing Christmas tree concerts, and Circus Kinelli.

The video also shows that ChatGPT can give specific information, like the days and hours of the Christkindlmarkt at Zurich’s main station.

It easily switches to answer questions about family-friendly events in New York City during the same week, mentioning the New York Botanical Garden’s Holiday Train Show and the Bank of America Winter Village at Bryant Park.

Navigational Searches

OpenAI has improved the user experience when using ChatGPT as the default search engine in web browsers.

In another video, representatives from OpenAI explained that the company has prioritized making it faster to navigate directly to websites from the browser’s address bar.

Now, by simply typing in keywords such as “Netflix” or “hotel booking sites,” users can quickly access the most relevant links without needing to sift through lengthy AI-generated responses.

Maps Addition

OpenAI has added maps to the ChatGPT mobile apps to help you find local restaurants and businesses.

This feature gives you up-to-date information, so you can easily search for and discuss options while you’re on the go.

In Summary

ChatGPT’s search features – previously available only to Plus subscribers – are now free for all users.

The update adds voice search and maps, plus better direct navigation to websites.

To use these tools on the web or mobile, you only need a ChatGPT account. Voice search works in multiple languages, and the maps feature helps with local searches.


Featured Image: JarTee/Shutterstock

Google Warns: Beware Of Fake Googlebot Traffic via @sejournal, @MattGSouthern

Google’s Developer Advocate, Martin Splitt, warns website owners to be cautious of traffic that appears to come from Googlebot. Many requests pretending to be Googlebot are actually from third-party scrapers.

He shared this in the latest episode of Google’s SEO Made Easy series, emphasizing that “not everyone who claims to be Googlebot actually is Googlebot.”

Why does this matter?

Fake crawlers can distort analytics, consume resources, and make it difficult to assess your site’s performance accurately.

Here’s how to distinguish between legitimate Googlebot traffic and fake crawler activity.

Googlebot Verification Methods

You can distinguish real Googlebot traffic from fake crawlers by looking at overall traffic patterns rather than unusual requests.

Real Googlebot traffic tends to have consistent request frequency, timing, and behavior.

If you suspect fake Googlebot activity, Splitt advises using the following Google tools to verify it:

URL Inspection Tool (Search Console)

  • Finding specific content in the rendered HTML confirms that Googlebot can successfully access the page.
  • Provides live testing capability to verify current access status.

Rich Results Test

  • Acts as an alternative verification method for Googlebot access
  • Shows how Googlebot renders the page
  • Can be used even without Search Console access

Crawl Stats Report

  • Shows detailed server response data specifically from verified Googlebot requests
  • Helps identify patterns in legitimate Googlebot behavior

There’s a key limitation worth noting: These tools verify what real Googlebot sees and does, but they don’t directly identify impersonators in your server logs.

To fully protect against fake Googlebots, you would need to do the following (a minimal verification sketch appears after this list):

  • Compare server logs against Google’s official IP ranges
  • Implement reverse DNS lookup verification
  • Use the tools above to establish baseline legitimate Googlebot behavior
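
As a rough sketch of what that verification could look like, the snippet below follows Google’s documented reverse-then-forward DNS method; the IP address in the example is only illustrative:

```python
import socket

def is_verified_googlebot(ip: str) -> bool:
    """Reverse DNS lookup, then forward-confirm the hostname, per Google's documented method."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse DNS: IP -> hostname
    except socket.herror:
        return False
    # Legitimate Googlebot hostnames end in googlebot.com or google.com
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        # Forward DNS: the hostname must resolve back to the original IP
        forward_ips = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except socket.gaierror:
        return False
    return ip in forward_ips

# Example: an address from Google's published crawler ranges should verify as True
print(is_verified_googlebot("66.249.66.1"))
```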

Monitoring Server Responses

Splitt also stressed the importance of monitoring server responses to crawl requests, particularly:

  • 500-series errors
  • Fetch errors
  • Timeouts
  • DNS problems

These issues can significantly impact crawling efficiency and search visibility for larger websites hosting millions of pages.

Splitt says:

“Pay attention to the responses your server gave to Googlebot, especially a high number of 500 responses, fetch errors, timeouts, DNS problems, and other things.”

He noted that while some errors are transient, site owners facing persistent issues “might want to investigate further.”

Splitt suggested using server log analysis to make a more sophisticated diagnosis, though he acknowledged that it’s “not a basic thing to do.”

However, he emphasized its value, noting that “looking at your web server logs… is a powerful way to get a better understanding of what’s happening on your server.”
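
As a starting point for that kind of log analysis, here is a minimal sketch that counts server errors returned to requests claiming to be Googlebot; it assumes an Apache/Nginx “combined” log format and a file named access.log, so adjust both for your setup:

```python
import re
from collections import Counter

# Matches the common Apache/Nginx "combined" log format:
# ip - - [timestamp] "request" status bytes "referrer" "user-agent"
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"')

status_counts = Counter()
with open("access.log") as log_file:
    for line in log_file:
        match = LOG_LINE.match(line)
        if not match:
            continue
        ip, status, user_agent = match.groups()
        # Count 5xx responses served to anything claiming to be Googlebot;
        # the captured IP can also be fed into the reverse DNS check shown earlier.
        if "Googlebot" in user_agent and status.startswith("5"):
            status_counts[status] += 1

print(status_counts)  # e.g. Counter({'500': 12, '503': 3})
```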

Potential Impact

Beyond security, fake Googlebot traffic can impact website performance and SEO efforts.

Splitt emphasized that website accessibility in a browser doesn’t guarantee Googlebot access, citing various potential barriers, including:

  • Robots.txt restrictions
  • Firewall configurations
  • Bot protection systems
  • Network routing issues

Looking Ahead

Fake Googlebot traffic can be annoying, but Splitt says you shouldn’t worry too much about rare cases.

If fake crawler activity becomes a problem or consumes too much server power, you can take steps like limiting the rate of requests, blocking specific IP addresses, or using better bot detection methods.

For more on this issue, see the full video below:


Featured Image: eamesBot/Shutterstock

Mullenweg Disgusted & Sickened As WP Engine Regains Access via @sejournal, @martinibuster

WP Engine regained control of their Advanced Custom Fields plugin and login access to WordPress.org. Matt Mullenweg responded by expressing that he is “disgusted and sickened.”

Mullenweg tweeted about how he felt about how things turned out:

“I’m disgusted and sickened by being legally forced to provide free labor and services to @wpengine, a dangerous precedent that should chill every open source maintainer. While I disagree with the court’s decision, I’ve fully complied with its order. You can see most changes on the site. They have access to ACF slug but haven’t changed it… must not have been the emergency they claimed.”

The response to Matt’s tweet was predictable.

One person reflected Matt’s words back at him:

I’m disgusted and sickened that you released software as GPL, made it intimately dependent on a private website+APIs you personally own and then you’re shocked when you learn you can’t discriminate against users

Another accused Mullenweg of tricking the WordPress community:

“And what about all of the free labor that you, @photomatt , tricked the WordPress community into providing to your personal .org website that the community believed was owned by the Foundation?”

Despite the compliance, Mullenweg pointed out that WP Engine had yet to change the plugin slug, questioning their claim of urgency. The ACF team subsequently reclaimed the plugin slug and tweeted an announcement about it.

On December 13, 2024, WP Engine’s official Advanced Custom Fields account confirmed on X (formerly Twitter) that they had regained access. The WordPress.org plugin directory now displays the original ACF plugin instead of Mullenweg’s forked version, Secure Custom Fields.

The ACF team tweeted:

“We’re pleased to share that our team has had account access restored on WordPress dot org along with control of the ACF plugin repo. This means all ACF users can rest assured that the ACF team you trust is once again maintaining the plugin. There’s no action required if you have installed ACF directly from the ACF website or you are an ACF PRO user.”

Members of the WordPress community congratulated WP Engine.

Some offered congratulations:

“Excellent news. Congratulations!”

Others expressed their happiness that ACF’s access was restored:

Happy for @wpengine. You have done a great job.

👏🏼 YES!!!!
https://x.com/CaroManelR/status/1867934316992610459

Another person tweeted:

NEVER trusting wordpess dot org again.

Origin Of Mullenweg – WP Engine Dispute

Matt Mullenweg claims that WP Engine does not contribute enough to the WordPress ecosystem. He has also raised concerns about WP Engine’s use of the word “WordPress” and has written about his years-long attempt to get WP Engine to pay a “fair share” back into the WordPress open source project. On September 20, 2024, Matt Mullenweg publicly denounced WP Engine at the United States WordCamp conference, after WP Engine declined to agree to his demands for $30 million.

WP Engine sued Automattic and Matt Mullenweg in federal court, obtaining a preliminary injunction that required Automattic and Mullenweg to restore WP Engine’s access to WordPress.org, the plugin repository, and logins, and to remove a WP Engine customer list from a website Mullenweg created to encourage customers to leave WP Engine.

Mullenweg’s History Of Disputes

There is some history of Mullenweg engaging in disputes related to GPL licensing of code and trademarks. In 2010, Mullenweg rightfully challenged Chris Pearson and his theme company over the software licensing of the Thesis theme. Chris Pearson himself has acknowledged that he was ignorant at the time about software licensing.

Mullenweg escalated his dispute with Pearson by offering Thesis customers any premium theme of their choice in exchange for abandoning their use of the Thesis theme. These disputes caused Pearson to lose a significant amount of business and to be perceived negatively in the WordPress community, which he described in a blog post:

“…I was woefully ignorant about software licensing, and I felt as though I was being backed into a corner and asked to accept something I didn’t fully understand. Instead of handling it in a measured, polite manner, I was a jerk.

I made a mistake, and I paid dearly for it. The WordPress community’s reaction towards me was incredibly negative, but on top of that, Matt did whatever he could to further damage what was left of my business. His most blatant effort in this regard was making a public offer to buy Thesis customers the premium, GPL-licensed Theme of their choice if they quit using Thesis.”

Three years later, Mullenweg purchased the Thesis.com domain name, which began another dispute with Pearson that Mullenweg also won. His motivation for going after the Thesis.com domain name was never fully acknowledged, but the WordPress community largely understood it as “retribution” against Pearson.

The comments in a WP Tavern report about Automattic were largely negative, with one person’s comment representative of the negative sentiment:

“I don’t think anyone is saying what Automattic did was illegal, they’re saying it was unethical.

It’s possible to be a jerk without breaking the law, but that doesn’t make it acceptable behavior.”

In 2016, Matt Mullenweg initiated a dispute with Wix in relation to GPL licensing. Wix’s CEO responded with his own blog post showing how Wix had contributed over 224 open source projects, writing:

“Yes, we did use the WordPress open source library for a minor part of the application (that is the concept of open source right?), and everything we improved there or modified, we submitted back as open source, see here in this link – you should check it out, pretty cool way of using it on mobile native. I really think you guys can use it with your app (and it is open source, so you are welcome to use it for free). And, by the way, the part that we used was in fact developed by another and modified by you.”

Wix eventually removed the disputed code from their mobile app.

Mullenweg Complies To Court Order… With Humor

The court’s ruling emphasizes the importance of adherence to legal agreements within the WordPress ecosystem. WP Engine’s victory may bolster its chances of prevailing in the ongoing federal lawsuit. Automattic’s response to the loss signals its intention to challenge the outcome at a full trial, stating:

“We look forward to prevailing at trial as we continue to protect the open-source ecosystem during full-fact discovery and a full review of the merits.”

Matt Mullenweg continues to provoke WP Engine, only this time using humor. Automattic removed a checkbox from the WordPress.org login page that previously required users to affirm that they are not associated with WP Engine. Today there’s a checkbox asking users to affirm that pineapple on pizza is delicious.

Screenshot of updated WordPress.org login page

Google Rolls Out December 2024 Core Update via @sejournal, @MattGSouthern

Google announced it’s rolling out the December core algorithm update, which the company expects to complete over the next two weeks.

The news comes just a week after Google finished rolling out the November core update.

Google’s Announcement

In a post on X, Google stated:

“The Dec. 2024 core update is rolling out, and we expect it will complete in two weeks.

If you’re wondering why there’s a core update this month after one last month, we have different core systems we’re always improving. This past blog post explains more,”

Google’s post included a link to a blog post from November 2023 titled “A Q&A on Google Search updates.”

The blog post provides context around the company’s cadence of algorithm updates.

Multiple Ranking Systems

According to the announcement, Google uses “multiple ranking systems that do different things” and is “always looking at ways to improve these systems to show better results.”

The company said it generally shares information about “notable” updates that it thinks might produce noticeable changes in search results.

Regarding the proximity of the November and December updates, Google explained that while it tries to separate notable updates, “it’s not always possible” given the large number of updates the company implements overall. The post stated:

“If we have updates that can improve Search, that have been developed over the course of several months, we release them when they’re ready.”

Advice For Websites

As with previous core updates, the December update’s specific changes are unknown. However, Google has consistently advised that the best way for creators to succeed through these updates is to remain focused on creating helpful, reliable, people-first content.

Site owners who notice changes in traffic following an update are advised to look closely at Google’s update-specific guidance, which can be found via the Google Search Status Dashboard. The dashboard also allows users to check the status of an update rollout and subscribe to an RSS feed for alerts.

Wrapping Up a Year of Algorithm Updates

The December core update caps off a busy year of algorithm changes for Google Search.

We will closely watch traffic patterns and search rankings to assess the impact as the December update rolls out over the coming weeks.

Search Engine Journal will continue to monitor the situation and provide updates as they become available.


Featured Image: Rohit-Tripathi/Shutterstock

Google Launches New ’24 Hours’ View In Search Console via @sejournal, @MattGSouthern

Google launches 24-hour data view in Search Console, offering near real-time insights for website performance monitoring.

  • Google Search Console now shows performance data from the last 24 hours with minimal delay.
  • The new feature includes hourly data breakdowns and works across Search, Discover, and News reports.
  • Google has cut data delay times by nearly half, making performance tracking more immediate.