WordPress Backup Plugin Vulnerability Affects 3+ Million Sites via @sejournal, @martinibuster

A high severity vulnerability in a popular WordPress backup plugin allows unauthenticated attackers to inject a PHP object. The vulnerability is rated 8.8 on a scale of 0.0 to 10.0.

UpdraftPlus: WP Backup & Migration Plugin

The vulnerability affects the popular Updraft Plus WordPress plugin, installed on over 3 million websites. Updraft Plus comes in free and paid versions that allow users to upload backups to cloud storage or to email the backup files. The plugin lets users back up a website manually or schedule automatic backups. It offers tremendous flexibility in what can be backed up, can make a huge difference when recovering from a catastrophic server issue, and is also useful for migrating to a different server altogether.

Wordfence explains the vulnerability:

“The UpdraftPlus: WP Backup & Migration Plugin plugin for WordPress is vulnerable to PHP Object Injection in all versions up to, and including, 1.24.11 via deserialization of untrusted input in the ‘recursive_unserialized_replace’ function. This makes it possible for unauthenticated attackers to inject a PHP Object.

No known POP chain is present in the vulnerable software. If a POP chain is present via an additional plugin or theme installed on the target system, it could allow the attacker to delete arbitrary files, retrieve sensitive data, or execute code. An administrator must perform a search and replace action to trigger the exploit.”

The Updraft Plus changelog seems to minimize the vulnerability: it doesn’t even call the update a security patch, instead labeling it a “tweak.”

From the official Updraft Plus WordPress plugin changelog:

“TWEAK: Complete the review and removal of calls to the unserialize() PHP function allowing class instantiation begun in 1.24.7. (The final removal involved a theoretical security defect, if your development site allowed an attacker to post content to it which you migrated to another site, and which contained customised code that could perform destructive actions which the attacker knew about, prior to you then cloning the site. The result of this removal is that some search-replaces, highly unlikely to be encountered in practice, will be skipped).”

Updraft Plus Vulnerability Patched

Users are advised to update their installations of Updraft Plus to the latest version, 1.24.12. All prior versions are vulnerable.

Read the Wordfence advisory:

UpdraftPlus: WP Backup & Migration Plugin <= 1.24.11 – Unauthenticated PHP Object Injection

Featured Image by Shutterstock/Tithi Luadthong

Google Shows How To Confirm Indexing Issues Due To JavaScript via @sejournal, @martinibuster

SearchNorwich recently published an excellent video featuring Google’s Martin Splitt discussing how to debug crawling and indexing issues related to JavaScript. He says that most of the time it isn’t JavaScript causing indexing issues; the actual cause is something else. Even if you don’t know how to code in JavaScript, the tips Martin shares will help anyone get a good start on debugging crawl issues originating on a website.

JavaScript Is Rarely The Cause Of SEO Issues

Martin’s SearchNorwich video was published a month ago. Just a few days ago, John Mueller advised that too much JavaScript can have a negative impact on SEO, which aligns with Martin’s assertion that JavaScript is rarely the reason for SEO issues; it’s either the misuse of JavaScript or something else entirely.

He explains that virtually all of the suspected JavaScript issues emailed to him turn out to be something else. He pins the blame on a flawed approach to debugging SEO issues. What he describes is confirmation bias: suspecting that something is the cause and then looking for clues to justify that suspicion. Confirmation bias is the tendency to interpret existing evidence, or to seek out new evidence, in ways that confirm existing beliefs while ignoring evidence that contradicts them.

Martin explained:

“…it seems to me, as someone on the Google side of things, that SEOs look for clues that allow them to blame things they’re seeing on JavaScript. Then they show up, or someone from their team shows up, in my inbox or on my social media and says, “We found a bug. It’s JavaScript. You say JavaScript works in Google Search, but we have a strong hint that it doesn’t, and you know it’s because of JavaScript.”

He goes on to say that out of the hundreds of times a year he’s approached with a diagnosis that JavaScript is to blame for an SEO problem, he has seen only one instance where an actual bug related to JavaScript was the cause. Just one.

He also says:

“People often claim, “You say it works if you use client-side rendering, but clearly, it is not working. It must be a JavaScript problem and maybe even a bug in Google.” Surprisingly, many of the people who end up in my inbox suspect it’s a Google bug. I find that interesting, especially when a small, niche website claims to be affected by a bug that doesn’t affect any other websites. Most of the time, it’s not us—it’s you.”

Splitt explains that when JavaScript is involved in a crawling or rendering issue, it’s most often not because JavaScript is to blame but rather because it’s being used incorrectly.

Finding Source Of Rendering Issues

Martin suggests debugging rendering issues by checking how Google “sees” the web page. Rendering, in the context of Googlebot crawling, is the process of downloading all the resources of a web page, such as fonts, JavaScript, CSS, and HTML, and then constructing a fully functional web page similar to what a human user would experience in a web browser.

Debugging how Google renders a page may show that the page renders fine, that certain parts don’t render or that the page cannot be indexed at all.

He recommends using the following tools for debugging possible JavaScript issues:

1. Google Search Console URL Inspection Tool

2. Google Rich Results Test

3. Chrome Dev Tools

Easy JavaScript Debugging

The first two tools let you submit a URL that Google crawls immediately, and both show you the rendered page: what the page looks like to Google for indexing purposes.

Martin explains the usefulness of the JavaScript console messages in Chrome Dev Tools:

“There’s also more info that gives you very helpful details about what happened in the JavaScript console messages and what happened in the network. If your content is there and it’s what you expect it to be, then it’s very likely not going to be JavaScript that is causing the problem. If people were doing just that, checking these basics, 90% of the people showing up in my inbox would not show up in my inbox. That’s what I do.”

He also explained that just because the JavaScript console flags an error doesn’t mean the problem is with the JavaScript itself. He gives the example of a JavaScript execution failure caused by an API endpoint blocked by robots.txt, which prevented the page from rendering.
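A hypothetical robots.txt along these lines would reproduce the failure mode Splitt describes: the JavaScript itself is fine, but the renderer can’t fetch the API the page depends on (the path shown is an illustrative assumption):

```
# Hypothetical robots.txt: disallowing the API path also blocks Google's
# renderer from fetching the data the page needs, so the page appears
# broken even though the JavaScript has no bugs.
User-agent: *
Disallow: /api/
```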

Why Do So Many SEOs Blame JavaScript?

Martin implies that not knowing how to debug JavaScript is why it has earned a reputation as a cause of crawling and indexing issues. I get it: I learned the basics of coding JavaScript by hand 25 years ago, I disliked it then, and I dislike it now. It’s never been my thing.

But Martin’s right that knowing a few tricks for debugging JavaScript will save a lot of wasted time chasing down the wrong problem.

Watch Martin Splitt’s presentation here:

Maybe It Isn’t JavaScript – Martin Splitt at SearchNorwich 18

Featured Image by Shutterstock/Artem Samokhvalov

OpenAI Blames Cloud Provider For ChatGPT Outage via @sejournal, @martinibuster

OpenAI published an incident report detailing the cause of last week’s ChatGPT outage and what they are doing to prevent a repeat. The outage began on December 26th, 2024 at 10:40 AM and was mostly resolved by 3:11 PM, except for ChatGPT, which was 100% recovered by 6:20 PM.

The following services were impacted:

  • ChatGPT
  • Sora video creation
  • APIs: agents, realtime speech, batch, and DALL-E

Cause of OpenAI Outage

The cause of the outage was a cloud provider data center failure that impacted OpenAI’s databases. While the databases are mirrored across regions, switching to a backup database required manual intervention on the part of the cloud provider to redirect operations to a backup data center in another region. That manual intervention is how the outage was fixed; the stated reason it took so long was the scale of the project.

A failover is an automated process for switching to a backup system in the event of a system failure. OpenAI announced that it is working on infrastructure changes to improve responses to future cloud database failures.

OpenAI explained:

“In the coming weeks, we will embark on a major infrastructure initiative to ensure our systems are resilient to an extended outage in any region of any of our cloud providers by adding a layer of indirection under our control in between our applications and our cloud databases. This will allow significantly faster failover.”

Significant ChatGPT Outage

OpenAI said the ChatGPT outage was due to a regional cloud provider database failure, but the effect was global, as evidenced by user reports on social media from across Europe and North America.

Screenshot of Google Trends graph showing largest ever spike in searches for query

Google Trends, which tracks search volume, indicates that this may have been the largest such event, with more people searching for information about it than for any previous outage.

Featured Image by Shutterstock/lilgrapher

Google Podcast Discusses SEO Expertise via @sejournal, @martinibuster

Google’s recent Search Off the Record podcast touched on the issue of SEO expertise and the disconnect between how SEOs think Google ranks websites and how Googlers understand it. The disparity is so great that Gary Illyes remarked that sometimes he doesn’t know what SEOs are talking about.

Googlers Question SEO Expertise

Martin Splitt discussed meeting Turkish publishers and SEOs of varying experience levels at a Google event in Turkey, where attendees complained of poor search results. It turned out that the problem wasn’t Google’s search results; it was an issue with how Turkish websites are created, which indirectly called into question the SEO expertise of Turkish-language publishers.

He said:

“And then eventually we worked out as a group as a whole, that there are a lot of problems with the way that content is created in Turkish language websites…”

Gary Illyes expanded on Martin’s comment about experience levels, saying that experience is subjective: some people who describe themselves as newbies are actually extremely knowledgeable about the fine details of indexing and crawling, while some SEO gurus ask questions that don’t make sense.

Gary shared:

“The thing you mentioned about experience, I came to realize the past few years that that’s a very subjective thing. Like, when you are asking people, ‘What’s your experience?’ And they are like, ‘Oh, I’m a guru,’ and then on the opposite end of the spectrum, like, ‘I’m a complete newbie.’

And then you start talking to them and the newbie knows way more about like HTTP, for example, than I do and crawling and indexing and whatever, like how it’s perceived externally.

And then you talk to the guru and the guru is like… the questions themselves don’t make sense. Like, you can’t interpret the question that they are asking.”

That part about the questions not making sense describes a disconnect between what SEOs and Googlers believe about SEO. Let’s face it, there’s a disconnect.

The Knowledge And Experience Gap

Sometimes there’s a gap separating how SEOs experience the ranking algorithm and how Googlers try to explain how it works. A classic example is the disconnect between the SEO belief in the concept of domain authority and Google’s denial that it exists. A few years ago, in a Google Search Central Hangout, a person told John Mueller that a core update eliminated the rankings of all of their keywords.

They asked,

“How could it be possible that our authority can drop more than 50% overnight? What actions could we take to increase our authority?”

Mueller answered:

“So in general, Google doesn’t evaluate a site’s authority. So it’s not something where we would give you a score on authority and say this is the general score for authority for your website. So that’s not something that we would be applying here.”

That belief in “domain authority” is one example out of many where what SEOs think they know about Google is completely disconnected from what Googlers know about how search rankings work.

What Do SEO Experts Really Know?

Martin Splitt continues the conversation, turning to proxies for judging the expertise of SEOs, such as how big the sites they manage are, but concludes that those proxy metrics don’t say much about SEO expertise either. Ultimately they conclude that they need to engage in deeper conversation with the search marketing and publishing community to identify whether there’s something Google could do better to explain what SEOs should be doing.

He explained:

“I mean, we try to gauge experience by asking them how many years have you been doing this kind of job and how many years have you been in this industry, and how many impressions do you manage a month, roughly? And these are proxy metrics. And as you say, it’s super subjective.”

He mentions the wide range of complex technical issues an SEO needs to understand, and John Mueller adds that even specialists in a specific SEO niche can have gaps in fundamental SEO concepts. The point of the conversation is to consider whether the root of the disconnect lies in Google’s documentation or simply in what the SEO experts don’t know.

John commented:

“It’s like someone could be like super focused on web workers or trying to get them indexed and at the same time, like, ‘How do I block a page from being indexed?’”

Martin agreed, saying:

“Yeah. And that’s probably why it is so subjective. And it’s super interesting, super interesting to see how they’re like, ‘Yeah, we got everything nailed down. We are running a tight ship here.’ And then you see, like some of the stuff that is discussed at large in all of the beginner documentation is being missed.

And that left me with a question. Is it that they are not aware that this documentation exists? Is it that they had a hard time fielding the amount of information we put out there? Or is it that they don’t know?”

Lizzi Sassman then asked:

“Did you get a sense, just in conversation with them, if they knew about the documentation or if there was like sort of a, I don’t know, a feeling or a vibe about like that the translation is bad or something like that.”

Martin answered:

“That’s exactly what I don’t know, because we were so busy during the event fielding all the conversations, like everyone wanted to talk to us. And that’s great. That’s fantastic. That’s why we are doing it.

But it doesn’t really give you the space to reflect on things on the spot. So I reflected, basically, on my flight back home, I was like, ‘Hm. I wonder. Dang. I should have asked these questions.’ But, you know, this means we have to go back and ask them again.”

What Is An SEO Expert?

SEO expertise is subjective. Anyone who insists that SEO is one thing is out of touch with the reality that there is no single definition of SEO. I disagree with many SEOs about what they think is good practice, and with more experience some of them eventually come around to agreeing with me. There are some SEOs whose experience is wildly different from mine, and I sit humbly and listen as they share what they know over dinner.

Many of us work from home, but we’re all members of the search marketing community. We should be able to listen to what others say about SEO, have polite disagreements about the “right way,” and expect that others will disagree in turn, without letting it polarize us; keep an open mind instead.

Google’s JavaScript Warning & How It Relates To AI Search via @sejournal, @MattGSouthern

A recent discussion among the Google Search Relations team highlights a challenge in web development: getting JavaScript to work well with modern search tools.

In Google’s latest Search Off The Record podcast, the team discussed the rising use of JavaScript, and the tendency to use it when it’s not required.

Martin Splitt, a Search Developer Advocate at Google, noted that JavaScript was created to help websites compete with mobile apps, bringing in features like push notifications and offline access.

However, the team cautioned that excitement around JavaScript functionality can lead to overuse.

While JavaScript is practical in many cases, it’s not the best choice for every part of a website.

The JavaScript Spectrum

Splitt described the current landscape as a spectrum between traditional websites and web applications.

He says:

“We’re in this weird state where websites can be just that – websites, basically pages and information that is presented on multiple pages and linked, but it can also be an application.”

He offered the following example of the JavaScript spectrum:

“You can do apartment viewings in the browser… it is a website because it presents information like the square footage, which floor is this on, what’s the address… but it’s also an application because you can use a 3D view to walk through the apartment.”

Why Does This Matter?

John Mueller, Google Search Advocate, noted a common tendency among developers to over-rely on JavaScript:

“There are lots of people that like these JavaScript frameworks, and they use them for things where JavaScript really makes sense, and then they’re like, ‘Why don’t I just use it for everything?’”

As I listened to the discussion, I was reminded of a study I covered weeks ago. According to the study, over-reliance on JavaScript can lead to potential issues for AI search engines.

Given the growing prominence of AI search crawlers, I thought it was important to highlight this conversation.

While traditional search engines typically support JavaScript well, its implementation demands greater consideration in the age of AI search.

The study finds AI bots make up an increasing percentage of search crawler traffic, but these crawlers can’t render JavaScript.

That means you could lose out on traffic from search engines like ChatGPT Search if you rely too much on JavaScript.

Things To Consider

The use of JavaScript and the limitations of AI crawlers present several important considerations:

  1. Server-Side Rendering: Since AI crawlers can’t execute client-side JavaScript, server-side rendering is essential for ensuring visibility.
  2. Content Accessibility: Major AI crawlers, such as GPTBot and Claude, have distinct preferences for content consumption. GPTBot prioritizes HTML content (57.7%), while Claude focuses more on images (35.17%).
  3. New Development Approach: These new constraints may require reevaluating the traditional “JavaScript-first” development strategy.

The Path Forward

As AI crawlers become more important for indexing websites, you need to balance modern features and accessibility for AI crawlers.

Here are some recommendations:

  • Use server-side rendering for key content.
  • Make sure to include core content in the initial HTML.
  • Apply progressive enhancement techniques.
  • Be cautious about when to use JavaScript.

To succeed, adapt your website for traditional search engines and AI crawlers while ensuring a good user experience.

Listen to the full podcast episode below:


Featured Image: Ground Picture/Shutterstock

Google Speculates If SEO ‘Is On A Dying Path’ via @sejournal, @martinibuster

Google’s latest Search Off the Record podcast discussed whether ‘SEO is on a dying path’ because of AI Search. Their assessment sought to explain that SEO remains unchanged by the introduction of AI Search, revealing a divide between their ‘nothing has changed’ outlook for SEO and the actual experiences of digital marketers and publishers.

Google Speculates If SEO Is On A Dying Path

At a certain point in the podcast, John Mueller turned the discussion to the impact of AI on SEO.

John asked:

“So do you think AI will replace SEO? Is SEO on a dying path?”

Gary Illyes expressed skepticism, asserting that SEOs have been predicting the decline of SEO for decades.

Gary expressed optimism that SEO is not dead, observing:

“I mean, SEO has been dying since 2001, so I’m not scared for it. Like, I’m not. Yeah. No. I’m pretty sure that, in 2025, the first article that comes out is going to be about how SEO is dying again.”

He’s right. Google began putting the screws to the popular SEO tactics of the day around 2004, gaining momentum in 2005 with things like statistical analysis.

It was a shock to SEOs when reciprocal links stopped working. Some refused to believe Google could suppress those tactics, speculating instead about a ‘Sandbox’ that arbitrarily kept sites from ranking. The point is, speculation has always been the fallback for SEOs who can’t explain what’s happening, fueling the decades-long fear that SEO is dying.

What the Googlers avoided discussing are the thousands of large and small publishers that have been wiped out over the last year.

More on that below.

RAG Is How SEOs Can Approach SEO For AI Search

Google’s Lizzi Sassman then asked how SEO is relevant in 2025, and after some off-topic banter John Mueller raised the topic of RAG, Retrieval Augmented Generation. RAG is a technique that helps keep answers generated by a large language model (LLM) up to date and grounded in facts. The system retrieves information from an external source, such as a search index and/or a knowledge graph, and the large language model then generates the answer from that retrieved information, hence the name retrieval augmented generation. The chatbot interface then presents the answer in natural language.

When Gary Illyes confessed he didn’t know how to explain it, Googler Martin Splitt stepped in with an analogy of documents (representing the search index or knowledge base), search and retrieval of information from those documents, and an output of the information “out of the bag.”

Martin offered this simplified analogy of RAG:

“Probably nowadays it’s much better and you can just show that, like here, you upload these five documents, and then based on those five documents, you get something out of the bag.”

Lizzi Sassman commented:

“Ah, okay. So this question is about how the thing knows its information and where it goes and gets the information.”

John Mueller picked up this thread of the discussion and began weaving a bigger concept: RAG is what ties SEO practices to AI search engines, because there is still a crawling, indexing, and ranking component to an AI search engine. He’s right; even an AI search engine like Perplexity AI uses an updated version of Google’s old PageRank algorithm.

Mueller explained:

“I found it useful when talking about things like AI in search results or combined with search results where SEOs, I feel initially, when they think about this topic, think, “Oh, this AI is this big magic box and nobody knows what is happening in there.

And, when you talk about kind of the retrieval augmented part, that’s basically what SEOs work on, like making content that’s crawlable and indexable for Search and that kind of flows into all of these AI overviews.

So I kind of found that angle as being something to show, especially to SEOs who are kind of afraid of AI and all of these things, that actually, these AI-powered search results are often a mix of the existing things that you’re already doing. And it’s not that it suddenly replaces crawling and indexing.”

Mueller is correct that the traditional process of crawling, indexing, and ranking still exists, keeping SEO relevant and necessary for ensuring websites are discoverable and optimized for search engines.

However, the Googlers avoided discussing the obvious situation today, which is the thousands of large and small publishers in the greater web ecosystem that have been wiped out by Google’s AI algorithms on the backend.

The Real Impacts Of AI On Search

What’s changed (and wasn’t addressed) is that the important part of AI in Search isn’t on the front end with AI Overviews. It’s on the back end, making determinations based on opaque signals of authority and topicality, with the somewhat ironic situation that an artificial intelligence is deciding whether content is made for search engines or humans.

Organic SERPs Are Explicitly Obsolete

The traditional ten blue links have been implicitly obsolete for about 15 years, but AI has made them explicitly obsolete.

Natural Language Search Queries

Search users now ask precise, conversational questions across several back-and-forth turns, a huge change in how search queries work. Bing claims this makes it easier to understand search queries and provide increasingly precise answers. That’s the part that unsettles SEOs and publishers because, let’s face it, a significant amount of content was created to rank in the keyword-based query paradigm, which is gradually disappearing as users shift to more complex queries. How content creators optimize for that is a big concern.

Backend AI Algorithms

The word “capricious” describes a tendency to make sudden, unexplainable changes in behavior. It’s not a quality publishers and SEOs want in a search engine. Yet capricious back-end algorithms that suddenly throttle traffic and then change their virtual minds months later are a reality.

Is Google Detached From Reality Of The Web Ecosystem?

AI-based algorithms that are still “improving” have unquestionably harmed a considerable segment of the web ecosystem. Immense amounts of publisher traffic have been wiped out since the increased integration of AI into Google’s backend, an issue the recent Google Search Off The Record avoided discussing.

Many hope Google will address this situation in 2025 with greater nuance than their CEO Sundar Pichai who struggled to articulate how Google supports the web ecosystem, seemingly detached from the plight of thousands of publishers.

Maybe the question isn’t whether SEO is on a dying path but whether publishing itself is in decline because of AI on both the back end and the front end of Google’s search box and Gemini apps.

Check out these related articles:

Google CEO’s 2025 AI Strategy Deemphasizes The Search Box

Google Gemini Deep Research May Erode Website Earnings

Google CEO: Search Will Change Profoundly In 2025

Featured Image by Shutterstock/Shutterstock AI Generator

Google AI Overviews Claims More Pixel Height in SERPs via @sejournal, @martinibuster

New data from BrightEdge reveals that Google’s AI Overviews is increasingly crowding out organic search results. If this trend continues, Google AI Overviews and advertisements could take up well over half of the available space in search results.

Organic Results Blocking Creeping Up

Google’s AI Overviews feature, launched in May 2024, has been a controversial feature among publishers and SEOs since day one. Many publishers resent that Google is using their content to create answers in the search results that discourage users from clicking through and reading more, thereby negatively influencing earnings.

Many publishers, including big brands, have shut down due to a combination of declining traffic from Google and algorithmic suppression of rankings. AI Overviews only added to publisher woes and has made Google increasingly unpopular with publishers.

Google AIO Taking Over Entire Screen

BrightEdge’s research shows that at launch in May 2024, AI Overviews took up to 600 pixels of screen space, crowding out the organic search results formerly known as the ten blue links. When advertising is factored in, there isn’t much space left for links to publisher sites.

By the end of summer the amount of space taken over by Google’s AIO increased to 800 pixels and continued to climb. At this pace BrightEdge predicts that Google could eventually reach 1,000 pixels of screen space. To put that in perspective, 600 pixels is considered “above the fold,” what users typically see without scrolling.

Graph showing growth of AIO pixel height

Percentage Of Queries Showing AIOs

The percentage of queries that display Google’s AI Overviews has also been creeping up. Health-related search queries have been trending higher than any other niche. B2B technology, ecommerce, and finance queries are also increasingly showing AI Overview search results.

Healthcare search queries initially triggered AIO around 70% of the time. Health-related queries now trigger AIO over 80% of the time.

B2B technology queries started out in May 2024 showing AIO results about 30% of the time. Now those same queries trigger AIO results almost 50% of the time.

Finance queries that trigger AI Overviews have grown from around 5% to 20%. BrightEdge data shows that Google AIO coverage is trending upward and is predicted to cover an increasing number of search queries across other topics, specifically travel, restaurants, and entertainment.

BrightEdge’s data shows:

“Finance shows most dramatic trajectory: starting at just 5.3% but projected to reach 15-20% by June 2025

-Healthcare led (67.5% in June)
-B2B Tech: 33.2% → 38.4%, projected 45-50%
-eCommerce: 26.9% → 35.1%, projected 40-45%
-Emerging sectors showing dramatic growth:

Entertainment (shows, events, venues): 0.3% → 5.2%
Travel (destinations, lodging, activities): 0.1% → 4.1%
Restaurants (dining, menus, reservations): ~0% → 6.0%”

BrightEdge explains that restaurant search query coverage started out small, focusing on long-tail queries like “restaurants with vegetarian food for groups,” but is now rolling out in greater volume, suggesting that Google is feeling more comfortable with its AIO results, which are expected to roll out across more search queries in 2025.

They explain:

“AIO’s evolved from basic definitions to understanding complex needs combining multiple requirements (location + features + context)

In 2025, expect AIO’s to handle even more sophisticated queries as they shift from informational to actionable responses.

-Healthcare stable at 65-70%
-B2B Tech/eCommerce will reach 40-50%
-Finance sector will surge from 5.3% to 25%
-Emerging sectors could see a 50-100x growth potential
-AIOs will evolve from informational to actionable (reservations, bookings, purchases)
-Feature complexity: 2.5x current levels”

The Takeaway

I asked BrightEdge for a comment about what they feel publishers should get ahead of for 2025.

Jim Yu, CEO of BrightEdge, responded:

“Publishers will need to adapt to the complexity of content creation and optimization while leaning into core technical SEO to guarantee their sites are seen and valued as authoritative sources.

Citations are a new form of ranking. As search and AI continue to converge, brands need to send the right signals to search and AI engines to help them decide if the content is helpful, unique, and informative. In a multi-modal world, this means schema tags about a publisher’s company, products, images, videos, overall site and content structure, reviews, and more!

In 2025, content, site structure, and authority will matter more than ever, and SEO has a huge role to play in that.

Key Questions marketers need to address in 2025

  • Is your content ready for 4-5 layered intents?
  • Can you match Google’s growing complexity?
  • Have you mapped your industry’s intent combinations?

Key Actions for 2025

The Pattern is clear: Simple answers → rich, context-aware responses!

  • Intent Monitoring: See which intents AIOs are serving for your space
  • Query Evolution: Identify what new keyword patterns are emerging that AIOs are serving
  • Citation Structure: Align content structure to the intents and queries AIOs focus on to ensure you are cited
  • Competitive Intelligence: Track which competitor content AIOs select and why

AIOs aren’t just displaying content differently – they’re fundamentally changing how users find and interact with information.

The takeaway from the data is that publishers are encouraged to create unambiguous content that directly addresses topics in order to rank for complex search queries. Keeping a careful eye on how AI Overviews are displayed and what kinds of content are cited and linked to is also encouraged.

Google’s CEO, Sundar Pichai, recently emphasized increasing the amount of coverage that AI assistants like Gemini handle, which implies that Google’s focus on AI, if successful, may begin to eat into the amount of traffic from the traditional search box. That’s a trend to be on the watch for and a wakeup call to get on top of creating content that resonates with today’s AI Search.

The source of AIO data is from the proprietary BrightEdge Generative Parser™ and DataCubeX, which regularly informs the BrightEdge guide to AIO.

WordPress Developer Publishes Code To Block Mullenweg’s Web Hosting Clients via @sejournal, @martinibuster

A prolific WordPress plugin publisher who has created over three dozen free plugins has released code that other plugin and theme publishers can use to block clients of Matt Mullenweg’s WordPress.com commercial web hosting platform from using them.

What The Plugin & Theme Code Does

The plugin was created so that other plugin and theme makers can prevent websites hosted on WordPress.com from activating or using them. The code detects whether it is running within the WordPress.com environment and, if it discovers that it is, the plugin displays a message to users advising them that the functionality is blocked. The developer who created the code explains exactly how it works and walks plugin and theme makers through it.

It does three main things:

  1. Environment Detection
  2. Plugin Deactivation
  3. Admin Context Only (deactivates it on the admin side)
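The three steps above can be sketched as a small WordPress plugin fragment. This is a minimal illustration, not DeVore’s actual code: the `myplugin_` function names are hypothetical, and the constants checked for environment detection are assumptions about WordPress.com’s hosting signals; only standard WordPress APIs (`add_action()`, `deactivate_plugins()`, `plugin_basename()`) are used.

```php
<?php
/**
 * Sketch of the three steps described above. Names prefixed
 * "myplugin_" are hypothetical, and the detection constants
 * (IS_ATOMIC, IS_WPCOM) are assumed WordPress.com signals;
 * the published code's actual checks may differ.
 */

// 1. Environment Detection: guess whether this site runs on
// WordPress.com hosting by looking for platform-specific constants.
function myplugin_is_wpcom_host() {
    return defined( 'IS_ATOMIC' ) || defined( 'IS_WPCOM' );
}

// 2. Plugin Deactivation and 3. Admin Context Only: hooking
// admin_init means this check runs only in wp-admin, where
// deactivate_plugins() is loaded; the front end is never touched.
function myplugin_block_wpcom() {
    if ( ! myplugin_is_wpcom_host() ) {
        return;
    }
    deactivate_plugins( plugin_basename( __FILE__ ) );
    add_action( 'admin_notices', function () {
        echo '<div class="notice notice-error"><p>';
        echo 'This plugin is not available on WordPress.com hosting.';
        echo '</p></div>';
    } );
}
add_action( 'admin_init', 'myplugin_block_wpcom' );
```

Hooking `admin_init` keeps the detection off the front end, which matches the “Admin Context Only” step described above.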

Reason For Creating The Code

Robert DeVore, the developer who created the code, explained in a tweet that it’s a way to flip the bird at Matt Mullenweg, a statement expressing disapproval of his actions, specifically the leadership “overreach.”

He wrote:

“Take a Stand for the Community
This script isn’t just about restricting your plugin.

It’s a statement against the centralization and overreach demonstrated by WordPress.com and Automattic’s (lack of) leadership.

WordPress® developers deserve a level playing field – free from monopolistic B.S. that stifles innovation and community growth.”

The code is available on his website here:

How to Stop Your Plugins & Themes from Being Used on WordPress.com Hosting

Featured Image by Shutterstock/Anatoliy Cherkas

Google CEO Describes A 2025 Beyond A Search Box via @sejournal, @martinibuster

Google’s Sundar Pichai outlined the 2025 strategy, emphasizing consumer-focused AI, rapid development of agentic apps, a Chrome AI prototype called Project Mariner, and upgrades to Gemini and Project Astra, signaling a shift toward AI apps as the user interface for search.

Although Pichai did not say Google is de-emphasizing the Google Search box, he did emphasize that 2025 will bring an increased focus on AI apps as the main point of contact between users and Google.

For example, Project Mariner is a Chrome AI extension that can do things like take a top ten restaurants list from TripAdvisor and drop it into Google Maps.

This focus on AI shows that Google is in transition toward AI-based user experiences that represent a broader interpretation of what Search means, a search experience that goes far beyond textual question-and-answering.

Google’s Future Hinges On AI

Google CEO Sundar Pichai outlined a vision for 2025 that emphasizes an urgency to go back to Google’s roots as a company that innovates quickly, what Pichai referred to as being “scrappy,” meaning tough and resourceful, able to accomplish a lot in a short amount of time (and with fewer resources). Most importantly, he emphasized solving real-world problems.

He also prioritizes “building big, new business” which could mean creating new business opportunities with AI, reflecting a strong focus on AI as the engine for innovation in 2025.

Gemini App

Pichai also cited the Gemini app as a central focus for 2025, commenting that Gemini is experiencing growth and that scaling broader adoption of it will be a focus in 2025. This aligns with the observation that Google is increasingly focusing on a Search-adjacent approach to consumer-focused AI products and services.

What this means for SEO is that we really need to start thinking in terms of a bigger picture of what Search means. Perhaps 2025 will be the year, more than 15 years after Google’s departure from the ten-blue-links paradigm, that the SEO community thinks more deeply about what search means when it’s multimodal.

Pichai was quoted as saying:

“With the Gemini app, there is strong momentum, particularly over the last few months… But we have some work to do in 2025 to close the gap and establish a leadership position there as well. …Scaling Gemini on the consumer side will be our biggest focus next year.”

AI Products Will “Evolve Massively”

The co-founder of Google DeepMind was quoted as saying that Google is going to “turbo charge” the Gemini app and that:

“…the products themselves are going to evolve massively over the next year or two.”

That means that the Gemini app is going to gain more functionalities in a bid to make it more ubiquitous as the interface between potential website visitors and Google Search, a significant departure from interfacing with the search box.

This is something that publishers and SEOs need to think hard about as we enter 2025. Google is focusing on increasing user adoption of the Gemini app. If that happens, more people will be interfacing with the app instead of the Google Search box.

Universal Assistant (Project Astra)

Another thing that the SEO industry seriously needs to consider is Google’s universal assistant, code-named Project Astra. The DeepMind co-founder is reported to have discussed the Universal Assistant, which is what Project Astra is referred to as.

Screenshot of DeepMind Project Astra web page showing how it is referred to as a Universal AI Assistant.

He’s quoted as saying that it can:

“…seamlessly operate over any domain, any modality or any device.”

What the word “domain” means is that it can function across any subject, answering questions about healthcare, directions, entertainment, or any other topic. The part about modality is a reference to text, voice, images, and video.

This is a serious situation for SEO. Google’s new Deep Research agentic search is an example of a disruptive technology that may have a negative impact on the web ecosystem.

One of the Google Deep Mind researchers cited as working on Project Astra is also listed as a co-inventor on a patent about controlling interactive AI agents through multi-modal inputs.

The patent is titled, Controlling Interactive Agents Using Multi-Modal Inputs. The description of the invention is:

“Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for controlling agents. In particular, an interactive agent can be controlled based on multi-modal inputs that include both an observation image and a natural language text sequence.”

That’s just one of dozens of researchers cited as having worked on Project Astra. Astra is another one of the projects that Google is working on that replaces the traditional search box as people’s point of contact for interacting with web data.

Takeaway About Google’s Plans For 2025

The takeaway from all this is that publishers and SEOs need to take a break from focusing solely on the search box and give some time to considering what’s going on in multimodal AI.  In 2025, AI is not just AI Overviews. AI is Gemini, it’s new features coming to Gemini and possibly the release of features developed from Project Astra, a multimodal universal agent. Agentic Search is already here in the form of Gemini Deep Research. All of these are a departure from the traditional search box as the point of contact between users, Google and websites.

Read the report on CNBC

Google CEO Pichai tells employees to gear up for big 2025: ‘The stakes are high’

Google Gemini Deep Research May Erode Website Earnings via @sejournal, @martinibuster

There’s a compelling theory floating around that Google’s AI agent, called Deep Research, could negatively impact affiliate sites. If true, not only would this impact affiliate site earnings, it could also decrease ad revenues and web traffic to informational sites, including the “lucky” sites that are linked to by Google’s AI research assistant.

Gemini Deep Research

Gemini Deep Research is a new tool available to premium subscribers of Gemini Advanced. Deep Research takes a user’s query, researches an answer on the web, and then generates a report. The research can be further refined to produce increasingly precise results.

Google rolled out Deep Research on December 11th, describing it as a time-saver that creates a research plan and, once the plan is approved, carries out the research.

Google explains:

“Deep Research uses AI to explore complex topics on your behalf and provide you with findings in a comprehensive, easy-to-read report, and is a first look at how Gemini is getting even better at tackling complex tasks to save you time.

Under your supervision, Deep Research does the hard work for you. After you enter your question, it creates a multi-step research plan for you to either revise or approve. Once you approve, it begins deeply analyzing relevant information from across the web on your behalf.”

Deep Research presents a report that features a summary and recommendations. If searching for a product, it summarizes the pros and cons with enough detail that a user won’t need to click a link to visit a site; they can go directly to a retailer and purchase the product. This eliminates the possibility of a visitor clicking an affiliate link on a review website and deprives that informational site of revenue.

According to an article by Marie Haynes on YouKnowAI, the thoroughness of the summary generated by Gemini Deep Research negates the need to visit a website, thereby depriving the site of affiliate link revenue.

YouKnowAI explains:

“…perhaps sites like foodnetwork.com will get clicks and subsequent affiliate sales. I’ve found in my own research so far that I’m not clicking on sites as I get what I need to know from the research and then go to official sites or perhaps Amazon, or stores near me to purchase.

…The obvious question here is what happens when sites like foodnetwork.com and seriouseats.com see a reduction in traffic?”

If it’s true that Gemini Deep Research users won’t need to visit sites to make up their minds then it’s possible that this new tool will also negatively affect web traffic and advertising revenue.

Is Google Out Of Touch With The Web Ecosystem?

In a recent interview, Google’s CEO, Sundar Pichai, insisted that Google cares about the web ecosystem. When asked how Google supports the web ecosystem he struggled to articulate an answer. After a long series of uhms and false starts he started talking about how Google’s own YouTube platform enables multinational media corporations to monetize their intellectual properties on YouTube.

He avoids mentioning websites, speaking in the abstract about the “ecosystem,” and then, when he runs out of things to say, changes course and begins speaking about how Google compensates copyright holders who sign up for YouTube’s Content ID program.

He answered:

“Look I… uh… It’s a… very important question… uhm… look I… I… think… I think more than any other company… look you know… we for a long time through… you know… be it in search making sure… while it’s often debated, we spend a lot of time thinking about the traffic we send to the ecosystem.

Even through the moment through the transition over the past couple of years. It’s an important priority for us.”

This Is Why Google CEO’s Explanation Falls Short

1. YouTube is not the web ecosystem, it’s Google’s own platform.

2. Multinational mega corporations are not web creators.

Pichai’s answer sent the unintended message that Google is completely out of touch with web creators. If the author of the article about Google Gemini’s Deep Research tool is correct, this is further proof that Google continues to focus on providing information to users at the expense of creators.

Is Gemini Deep Research Harvesting Data Without Giving Back?

There’s an old television episode of The Twilight Zone called To Serve Man that tells the story of a seemingly benevolent race of aliens who bring advanced technologies that allow humans to live in peace, with food security and prosperity for everyone. As evidence of their good intentions they give the world a book, written in an alien language, titled To Serve Man. The episode ends when government cryptographers translate the book and discover that it’s a cookbook, and that the aliens’ true intention is to farm humans as a food source.

Google’s mission statement promising “to organize the world’s information and make it universally accessible and useful” also seems like proof of their good intentions. However, the mission statement doesn’t explicitly say that Google will refer users to the sources of information. It only promises to organize and provide the information itself in a way that’s accessible and useful. While referring users to the creators of the information could be a part of making information accessible and useful, it’s not explicitly stated; it’s not even implied in the mission statement.

Is Google Gemini Deep Research further proof that Google is harvesting websites as an information source?

If you’re a creator, does it make you feel farmed?

Featured Image by Shutterstock/Nomad_Soul