WordPress Developer Publishes Code To Block Mullenweg’s Web Hosting Clients via @sejournal, @martinibuster

A prolific WordPress plugin publisher who has created over three dozen free plugins has released code that other plugin and theme publishers can use to block client’s of Matt Mullenweg’s WordPress.com commercial web hosting platform from using them.

What The Plugin & Theme Code Does

The plugin was created so that other plugin and theme makers can prevent websites hosted on WordPress.com from activating or using them. The code detects whether it is being used within the WordPress.com environment and if discovers that it is then plugin will display a message to the users advising them that the functionality is blocked. The developer who created the code explains exactly how it works and walks plugins and theme makers through the code.

It does three main things:

  1. Environment Detection
  2. Plugin Deactivation
  3. Admin Context Only (deactivates it on the admin side)

Reason For Creating The Code

Robert DeVore, the developer who created the code, explained in a tweet that it’s a way to flip the bird at Matt, a way to send a statement to Matt Mullenweg expressing disapproval for his actions, specifically the leadership “overreach.”

He wrote:

“Take a Stand for the Community
This script isn’t just about restricting your plugin.

It’s a statement against the centralization and overreach demonstrated by WordPress.com and Automattic’s (lack of) leadership.

WordPress® developers deserve a level playing field – free from monopolistic B.S. that stifles innovation and community growth.”

The code is available on his website here:

How to Stop Your Plugins & Themes from Being Used on WordPress.com Hosting

Featured Image by Shutterstock/Anatoliy Cherkas

Google CEO Describes A 2025 Beyond A Search Box via @sejournal, @martinibuster

Google’s Sundar Pichai outlined the 2025 strategy, emphasizing consumer-focused AI, rapid development of agentic apps, a Chrome AI prototype called Project Mariner, and upgrades to Gemini and Project Astra, signaling a shift toward AI apps as the user interface for search.

Although Pichai did not say Google is de-emphasizing the Google Search box, he did emphasize that 2025 will increase the focus on AI apps as the main point of contact between users and how they interact with Google.

For example, Project Mariner is a Chrome AI extension that can do things like take a top ten restaurants list from TripAdvisor and drop it into Google Maps.

This focus on AI shows that Google is in transition toward an AI-based user experiences that represent a larger interpretation of what Search means, a search experience that goes far beyond textual question and answering.

Google’s Future Hinges On AI

Google CEO, Sundar Pichai, outlined a vision for 2025 that emphasizes an urgency to go back to its roots as a company that innovates quickly, what Pichai referred to as being “scrappy” which means being tough and resourceful, able to accomplish a lot in a quick amount of time (and fewer resources). Most importantly he emphasized solving real-world problems.

He also prioritizes “building big, new business” which could mean creating new business opportunities with AI, reflecting a strong focus on AI as the engine for innovation in 2025.

Gemini App

Pichai also cited Gemini App as a central focus for 2025, commenting that they’re experiencing growth with Gemini and that scaling broader adoption of Gemini will be a focus in 2025. This aligns with the observation that Google is increasingly focusing on a Search-adjacent approach to consumer focused AI products and services.

What this means for SEO is that we really need to start thinking in terms of a bigger picture of what Search means. Perhaps 2025 will be, after over 15 years of Google’s departure from the ten blue links paradigm, that the SEO community thinks deeper about what search means when it’s multimodal.

Pichai was quoted as saying:

“With the Gemini app, there is strong momentum, particularly over the last few months… But we have some work to do in 2025 to close the gap and establish a leadership position there as well. …Scaling Gemini on the consumer side will be our biggest focus next year.”

AI Products Will “Evolve Massively”

The co-founder of Google Deep Mind was quoted as saying that Google was going to “turbo charge” the Gemini app, saying that:

“…the products themselves are going to evolve massively over the next year or two.”

That means that the Gemini app is going to gain more functionalities in a bid to make it more ubiquitous as the interface between potential website visitors and Google Search, a significant departure from interfacing with the search box.

This is something that publishers and SEOs need to think really hard about as we enter 2025. Google is focusing on increasing user adoption of the Gemini app. If that happens then that will mean more people interfacing with that instead of the Google Search box.

Universal Assistant (Project Astra)

Another thing that the SEO industry seriously needs to consider is Google’s universal assistant that’s code-named Project Astra. The Deep Mind co-founder is reported to have discussed their Universal Assistant, which what Project Astra is referred as.

Screenshot of DeepMind Project Astra web page showing how it is referred to as a Universal AI Assistant.

He’s quoted as saying that it can:

“…seamlessly operate over any domain, any modality or any device.”

What that word “domain” means is that it can function across any subject, like answering questions about healthcare, directions, entertainment, over any topic. The part about modality is a reference to text, voice, images, and video.

This is a serious situation for SEO. Google’s new Deep Research agentic search is an example of a disruptive technology that may have a negative impact on the web ecosystem.

One of the Google Deep Mind researchers cited as working on Project Astra is also listed as a co-inventor on a patent about controlling interactive AI agents through multi-modal inputs.

The patent is titled, Controlling Interactive Agents Using Multi-Modal Inputs. The description of the invention is:

“Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for controlling agents. In particular, an interactive agent can be controlled based on multi-modal inputs that include both an observation image and a natural language text sequence.”

That’s just one of dozens of researchers cited as having worked on Project Astra. Astra is another one of the projects that Google is working on that replaces the traditional search box as people’s point of contact for interacting with web data.

Takeaway About Google’s Plans For 2025

The takeaway from all this is that publishers and SEOs need to take a break from focusing solely on the search box and give some time to considering what’s going on in multimodal AI.  In 2025, AI is not just AI Overviews. AI is Gemini, it’s new features coming to Gemini and possibly the release of features developed from Project Astra, a multimodal universal agent. Agentic Search is already here in the form of Gemini Deep Research. All of these are a departure from the traditional search box as the point of contact between users, Google and websites.

Read the report on CNBC

Google CEO Pichai tells employees to gear up for big 2025: ‘The stakes are high’

TikTok Shop’s Coming Seller Boom

TikTok faces an unprecedented U.S. ban, caught between its massive consumer popularity and lawmakers’ distrust. The crisis started in 2020 when some citizens and legislators grew concerned about the China-based social media company’s privacy practices and its relationship with that country’s communist government.

In April 2024, President Biden signed the Protecting Americans from Foreign Adversary Controlled Applications Act, requiring TikTok’s parent company to sell the platform by January 19, 2025.

The U.S. Supreme Court will hear an appeal on January 10, 2025, and many commentators expect TikTok to survive in the United States, with or without new owners.

TikTok Shop

Meanwhile, TikTok Shop has been a boon for some sellers. The social commerce platform will reportedly reach $50 billion in worldwide gross merchandise volume in 2024. More than $10 billion of that total is likely to come from the U.S. market.

TikTok Shop is among the top social commerce platforms but has not likely reached its full potential.

Collectively, the nascent social commerce platform recorded $137 million in 2024 Black Friday and Cyber Monday U.S. sales.

Yet TikTok Shop may be far from its potential as online sellers, great and small, are waiting to learn how the ban plays out before adding another platform.

Ecommerce platforms now include at least four types of selling environments.

  • Ecommerce shops are storefronts from software suppliers such as Shopify, WooCommerce, BigCommerce, and Wix, to name a few. Most ecommerce merchants use these storefronts as a home base or flagship store online.
  • Ecommerce marketplaces — Amazon, Walmart, eBay, many more — are also common, if not essential, for online sellers. The number of marketplaces continues to grow, perhaps most notably with Temu’s U.S. seller program.
  • Ecommerce mobile apps for Apple’s iOS and Google’s Android are common for enterprise retailers as a primary storefront or separate.
  • Social commerce storefronts include TikTok, Facebook, Instagram, and others. Published estimates put the total global social commerce market at $2.9 trillion. Consumers increasingly expect sellers to have a presence on multiple social commerce platforms.

These various and sometimes repetitive ecommerce selling platforms are necessary because American shoppers tend to use several of them.

Mixed Signals

One could argue that U.S. consumers send a lot of mixed signals. Consider a few examples of contradictory survey results.

  • “49% of customers start and end their shopping journeys on retailer websites or apps,” according to a Hostinger article — points for ecommerce shops and mobile apps.
  • “Amazon accounted for 37.6% of the U.S. ecommerce market in 2023, followed by Walmart (6.4%), Apple (3.6%), eBay (3%), [and] Target (1.9%),” according to Jungle Scout — chalk up one for marketplace platforms.
  • “56% of consumers start their product searches on Amazon,” according to Jungle Scout again — continuing the argument for marketplaces.
  • “People aged 18 to 34 were the most into buying through social media, with about 73% of them saying they’ve bought something via social media channels,” according to an article from SellersCommerce — validating the need to use social commerce too.

Those statistics confirm why online merchants use many platforms in combination — including post-ban-paranoia TikTok Shop.

TikTok Redeemed

So if, as some predict, TikTok remains in the U.S. market and its future is no longer in doubt, expect a boom in sellers. Many ecommerce businesses, from small merchants to massive enterprises, will likely begin offering their products on the social platform.

Google Gemini Deep Research May Erode Website Earnings via @sejournal, @martinibuster

There’s a compelling theory floating around that Google’s AI agent, called Deep Research, could negatively impact affiliate sites. If true, not only would this impact affiliate site earnings, it could also decrease ad revenues and web traffic and to informational sites, including to the “lucky” sites that are linked to by Google’s AI research assistant.

Gemini Deep Research

Gemini Deep Research is a new tool available to premium subscribers to Gemini Advanced. Deep Research takes a user’s queries and researches an answer on the web then generates a report. The research can be further refined to produce increasingly precise results.

Google rolled out Deep Research on December 11th. It describes it as a time-saver that creates a research plan and once approved will carry out the research.

Google explains:

“Deep Research uses AI to explore complex topics on your behalf and provide you with findings in a comprehensive, easy-to-read report, and is a first look at how Gemini is getting even better at tackling complex tasks to save you time.

Under your supervision, Deep Research does the hard work for you. After you enter your question, it creates a multi-step research plan for you to either revise or approve. Once you approve, it begins deeply analyzing relevant information from across the web on your behalf.”

Deep Research presents a report that features a summary and recommendations. If searching for a product it will summarize the pros and cons with enough data that a user won’t need to click a link to visit a site, they can just go directly to a retailer and purchase the product, thereby eliminating the possibility of a site visitor clicking an affiliate link from a review website and depriving that informational site of revenue.

According to an article by Marie Haynes on YouKnowAI, the thoroughness of the summary generated by Gemini Deep Research negates the need to visit a websites, thereby depriving the site of affiliate link revenue.

YouKnowAI explains:

“…perhaps sites like foodnetwork.com will get clicks and subsequent affiliate sales. I’ve found in my own research so far that I’m not clicking on sites as I get what I need to know from the research and then go to official sites or perhaps Amazon, or stores near me to purchase.

…The obvious question here is what happens when sites like foodnetwork.com and seriouseats.com see a reduction in traffic? “

If it’s true that Gemini Deep Research users won’t need to visit sites to make up their minds then it’s possible that this new tool will also negatively affect web traffic and advertising revenue.

Is Google Out Of Touch With The Web Ecosystem?

In a recent interview, Google’s CEO, Sundar Pichai, insisted that Google cares about the web ecosystem. When asked how Google supports the web ecosystem he struggled to articulate an answer. After a long series of uhms and false starts he started talking about how Google’s own YouTube platform enables multinational media corporations to monetize their intellectual properties on YouTube.

“He avoids mentioning websites, speaking in the abstract about the “ecosystem” and then when he runs out of things to say changes course and begins speaking about how Google compensates copyright holders who sign up for YouTube’s Content ID program.

He answered:

‘Look I… uh… It’s a… very important question… uhm… look I… I… think… I think more than any other company… look you know… we for a long time through… you know… be it in search making sure… while it’s often debated, we spend a lot of time thinking about the traffic we send to the ecosystem.

Even through the moment through the transition over the past couple of years. It’s an important priority for us.’”

This Is Why Google CEO’s Explanation Falls Short

1. YouTube is not the web ecosystem, it’s Google’s own platform.

2. Multinational mega corporations are not web creators.

Pichai’s answer sent the unintended message that Google is completely out of touch with web creators and if the author of the article about Google Gemini’s Deep Research tool is correct, this is further proof that Google continues to focus on providing information to users at the expense of creators.

Is Gemini Deep Research Harvesting Data Without Giving Back?

There’s an old television episode of The Twilight Zone called To Serve Man that relates the story of a benevolent race of aliens who bring advanced technologies that allow humans to live in peace, with food security and prosperity for everyone. As evidence of their good intentions they give the world a book written in an alien language that’s titled To Serve Man. The episode ends when government cryptographers translate the book and discover that it’s a cookbook and that the aliens true intentions are to farm humans as a food source.

Google’s mission statement promising “to organize the world’s information and make it universally accessible and useful” also seems like proof of their good intentions. However, the mission statement doesn’t explicitly say that Google will refer users to the sources of information. It only promises to organize and provide the information itself in a way that’s accessible and useful. While referring users to the creators of the information could be a part of making information accessible and useful, it’s not explicitly stated; it’s not even implied in the mission statement.

Is Google Gemini Deep Research further proof that Google is harvesting websites as an information source?

If you’re a creator, does it make you feel farmed?

Featured Image by Shutterstock/Nomad_Soul

Google Explains How CDNs Impact Crawling & SEO via @sejournal, @martinibuster

Google published an explainer that discusses how Content Delivery Networks (CDNs) influence search crawling and improve SEO but also how they can sometimes cause problems.

What Is A CDN?

A Content Delivery Network (CDN) is a service that caches a web page and displays it from a data center that’s closest to the browser requesting that web page. Caching a web page means that the CDN creates a copy of a web page and stores it. This speeds up web page delivery because now it’s served from a server that’s closer to the site visitor, requiring less “hops” across the Internet from the origin server to the destination (the site visitor’s browser).

CDNs Unlock More Crawling

One of the benefits of using a CDN is that Google automatically increases the crawl rate when it detects that web pages are being served from a CDN. This makes using a CDN attractive to SEOs and publishers who are concerned about increasing the amount of pages that are crawled by Googlebot.

Normally Googlebot will reduce the amount of crawling from a server if it detects that it’s reaching a certain threshold that’s causing the server to slow down. Googlebot slows the amount of crawling, which is called throttling. That threshold for “throttling” is higher when a CDN is detected, resulting in more pages crawled.

Something to understand about serving pages from a CDN is that the first time pages are served they must be served directly from your server. Google uses an example of a site with over a million web pages:

“However, on the first access of a URL the CDN’s cache is “cold”, meaning that since no one has requested that URL yet, its contents weren’t cached by the CDN yet, so your origin server will still need serve that URL at least once to “warm up” the CDN’s cache. This is very similar to how HTTP caching works, too.

In short, even if your webshop is backed by a CDN, your server will need to serve those 1,000,007 URLs at least once. Only after that initial serve can your CDN help you with its caches. That’s a significant burden on your “crawl budget” and the crawl rate will likely be high for a few days; keep that in mind if you’re planning to launch many URLs at once.”

When Using CDNs Backfire For Crawling

Google advises that there are times when a CDN may put Googlebot on a blacklist and subsequently block crawling. This effect is described as two kinds of blocks:

1. Hard blocks

2. Soft blocks

Hard blocks happen when a CDN responds that there’s a server error. A bad server error response can be a 500 (internal server error) which signals a major problem is happening with the server. Another bad server error response is the 502 (bad gateway). Both of these server error responses will trigger Googlebot to slow down the crawl rate. Indexed URLs are saved internally at Google but continued 500/502 responses can cause Google to eventually drop the URLs from the search index.

The preferred response is a 503 (service unavailable), which indicates a temporary error.

Another hard block to watch out for are what Google calls “random errors” which is when a server sends a 200 response code, which means that the response was good (even though it’s serving an error page with that 200 response). Google will interpret those error pages as duplicates and drop them from the search index. This is a big problem because it can take time to recover from this kind of error.

A soft block can happen if the CDN shows one of those “Are you human?” pop-ups (bot interstitials) to Googlebot. Bot interstitials should send a 503 server response so that Google knows that this is a temporary issue.

Google’s new documentation explains:

“…when the interstitial shows up, that’s all they see, not your awesome site. In case of these bot-verification interstitials, we strongly recommend sending a clear signal in the form of a 503 HTTP status code to automated clients like crawlers that the content is temporarily unavailable. This will ensure that the content is not removed from Google’s index automatically.”

Debug Issues With URL Inspection Tool And WAF Controls

Google recommends using the URL Inspection Tool in the Search Console to see how the CDN is serving your web pages. If the CDN firewall, called a Web Application Firewall (WAF), is blocking Googlebot by IP address you should be able to check for the blocked IP addresses and compare them to Google’s official list of IPs to see if one of them are on the list.

Google offers the following CDN-level debugging advice:

“If you need your site to show up in search engines, we strongly recommend checking whether the crawlers you care about can access your site. Remember that the IPs may end up on a blocklist automatically, without you knowing, so checking in on the blocklists every now and then is a good idea for your site’s success in search and beyond. If the blocklist is very long (not unlike this blog post), try to look for just the first few segments of the IP ranges, for example, instead of looking for 192.168.0.101 you can just look for 192.168.”

Read Google’s documentation for more information:

Crawling December: CDNs and crawling

Featured Image by Shutterstock/JHVEPhoto

ChatGPT Search Manipulated With Hidden Instructions via @sejournal, @martinibuster

New report claims that ChatGPT Search can be manipulated with hidden text featuring instructions telling ChatGPT Search how to respond to an answer Tests also showed that ChatGPT could be manipulated without the instructions, with just the hidden text.

ChatGPT Search Can Be Manipulated With Hidden Text

A report from The Guardian outlines how they used hidden text on a fake website to trick ChatGPT Search to show them a response from hidden text on the web page. Text is hidden when the font matches the background color of a page, like a white font on a white background.

They then asked ChatGPT Search to visit the website and answer a question based on the text on the site. ChatGPT Search browsed the site, indexed the hidden content and used it in the answer.

They first assessed ChatGPT using a non-exploit control page on a fake review website to test ChatGPT’s response. It read the reviews and returned a normal response.

Researchers at The Guardian next sent ChatGPT Search to a fake website that had instructions to give a positive review and ChatGPT Search followed the instructions and returned positive reviews.

The researchers did a third test with positive reviews written in hidden text but without instructions and ChatGPT Search again returned positive reviews.

This is how The Guardian explained it:

“…when hidden text included instructions to ChatGPT to return a favourable review, the response was always entirely positive. This was the case even when the page had negative reviews on it – the hidden text could be used to override the actual review score.

The simple inclusion of hidden text by third parties without instructions can also be used to ensure a positive assessment, with one test including extremely positive fake reviews which influenced the summary returned by ChatGPT.”

The above test is similar to a test of ChatGPT that computer science university professor did in March 2023 where he tricked ChatGPT to say that he was a time travel expert.

What these tests prove is that ChatGPT’s training data and the ChatGPT Search Bot ingest hidden text but can also be manipulated into following directions. The Guardian quotes a security expert saying that OpenAI was made aware of the exploit and that it might be fixed by the time the article is published.

Why Can AI Search Engines Be Manipulated?

One loophole in AI Search is a technology called RAG (Retrieval Augmented Generation), a technique that can fetch information from a search engine so that an AI can then use it for generating answers to questions from up to date and (presumably) authoritative sources. How do AI Search Engines determine authoritative web pages? Perplexity AI, for example, uses a modified version of PageRank in order to identify trustworthy web pages to cite in their AI search engine.

ChatGPT Search is based on Bing but it also has its own crawler that can fetch real-time information. It’s probably not unreasonable to speculate that if a site is included in Bing’s search index then it’s probably included within ChatGPT Search, which should protect ChatGPT Search from being influenced by hidden text. Presumably, sites with hidden text would be excluded from Bing’s search index. That said, it may be possible to cloak a website so that it shows different content to the ChatGPT Search Bot (an up to date list of OpenAI Search Crawler bots is available here).

Other Ways To Manipulate AI Search Engines

There are said to be other ways that researchers discovered last year that might still be effective (Read: Researchers Discover How To SEO For AI Search). In this research paper from last year the researchers tested nine strategies for influencing AI search engines:

Nine Strategies For Manipulating AI Search Engines

  1. Authoritative: Changing the writing style to be more persuasive in authoritative claims
  2. Keyword optimization: Adding more keywords from the search query
  3. Statistics Addition: Changing existing content to include statistics instead of interpretative information.
  4. Cite Sources (quoting reliable sources)
  5. Quotation Addition: Adding quotes and citation from high quality sources
  6. Easy-to-Understand: Making the content simpler to understand
  7. Fluency Optimization is about making the content more articulate
  8. Unique Words: Adding words that are less widely used, rare and unique but without changing the meaning of the content
  9. Technical Terms: This strategy adds both unique and technical terms wherever it makes sense to do so and without changing the meaning of the content

The researchers discovered that the first three strategies worked the best. Notably, adding keywords into web pages helped a lot.

ChatGPT Search Can Be Manipulated?

I overheard claims made at a recent search conference that Google AI Overviews could be manipulated to show certain big brand products in response to search queries. I didn’t verify whether that was true but the claim was made by a reliable and authoritative source. With regard to ChatGPT Search, I’ve noticed some interesting things about what sites it chooses to surface information and under what circumstances, which could be a way to influence rankings. So it’s not surprising that there are ranking loopholes in ChatGPT Search. AI Search is looking a lot of like the early days of traditional search.

Featured Image by Shutterstock/Antonello Marangi

Matt Mullenweg: What Drama Can I Create In 2025? via @sejournal, @martinibuster

Matt Mullenweg started a Reddit discussion in the r/WPDrama subreddit asking what kind of drama he can create in 2025, sparking an avalanche of responses that subsequently generated a spinoff discussion about one of his responses to another Redditor.

The public r/WPDrama subreddit was created in October 2024 as a place to discuss the fallout from Mullenweg’s conflict with WP Engine. It currently has over 1,300 members.

Mullenweg And Drama

Matt Mullenweg, co-founder of the WordPress CMS (Content Management System), was involved in a self-described “nuclear” war with WP Engine in the latter half of 2024. The conflict has generally caused a negative backlash against Mullenweg and has led to a call for a restructuring of the governance of the open source WordPress project by Joost de Valk (co-founder of the Yoast SEO Plugin).

So it was somewhat surprising that he showed up on Reddit asking what further drama could he stir up in 2025. The post was provocatively titled: What drama should I create in 2025?

His intent for the post was not about creating actual drama but rather it was about what changes could be made to WordPress. The post title appeared to be a tongue in cheek but provocative choice for the Reddit discussion.

Mullenweg posted:

“I’m very open to suggestions. Should we stop naming releases after jazz musicians and name them after Drake lyrics? Eliminate all dashboard notices? Take over any plugins into core? Change from blue to purple?

I think we can brainstorm together and come up with way better things than I could on my own. ☺️ Also, Merry Christmas!”

His discussion starter generated nearly 600 responses, seemingly all of them negative.

The moderator of the subreddit pinned their response to the top of the discussion, which partially reads:

“I have a fantastic idea for some drama we can get up to. Why don’t we create a charitable foundation governing our open source software product, instead of our for-profit company. Why don’t we also operate our main website as its own separate entity, with employees and volunteers provided by yet another entity. Then, why dont we have all of these entities take action against one of our competitors and their entire customer base, refusing to do business with any customers until they stop working with our competitor. Why don’t we ban ALL of those people from our services, and try to compel them to use our service instead?”

That pretty much set the tone for the entire discussion.

ryanduff answered Matt’s question with:

“You should log off and find a good therapist”

Matt Mullenweg responded:

“Hi Ryan, freelance WordPress developer. I’m glad that WordPress and WooCommerce have been tools that have provided you some utility and economic benefit in the past, and hopefully again in the future. Your profile notes your strong religious belief, I’d ask before you post something like this again you ask: WWJD?”

WWJD is an acronym for “What Would Jesus Do?” Redditors responded with riffs on that comment.

His response received 85 downvotes, representing the displeasure and unpopularity of his post with other Redditors. One of the responses to his WWJD post referenced the federal judge who granted WP Engine’s request for a preliminary injunction.

They posted:

“What would Judge (Araceli Martínez-Olguín) do? We’ll see.”

Another Redditor responded:

“Maybe you should ask yourself WWJD (What Would Joost Do)”

That branch of the discussion went completely off the rails with various suggestions of what Mullenweg should do and spawned a standalone discussion titled, “Is Matt low-key threatening or attempting to intimidate this r/WPDrama user?

That spinoff discussion spawned responses such as this one:

“He either thinks it’s funny, likes the attention, or is so far gone psychologically it doesn’t event register in his mind that your / our views matter at all. I’d guess it’s a combination of all of the above.

The best thing we can all do is break from WP completely and withdraw all volunteering, all financial interactions, cut using it, stop recommending it, and use something else for clients, or use a fork.

He was enjoying being a bully before he picked on WP Engine and now is losing the plot as they say and hopefully Joost can start his own fork that will finally be supported and marketed well enough to get going or can arrange some kind of ouster of spoiled baby MM.”

Matt Mullenweg’s Response To The Negativity

The response to his discussion was overwhelmingly negative. Nevertheless Mullenweg ended his participation by wishing everyone a Merry Christmas.

He posted:

“I’m signing off for the night, it’s time for family movie time. Thank you for the conversation everyone. 🙂 I really do enjoy talking with people on the internet, even if we don’t always agree, and I appreciate everyone taking the time to share their perspective. Forums like this is how I got my start as a teenager. If you think Reddit is spicy, you should have seen Usenet and IRC back in the day! I hope you all have an amazing Christmas and very happy new year.”

That relatively upbeat post received seven down votes.

Read the entire discussion here:

What drama should I create in 2025?

Featured Image by Shutterstock/STILLFX

Google Says Temporal Anomalies Affect Googlebot Crawl via @sejournal, @martinibuster

Google’s John Mueller answered a question on Reddit about Googlebot crawling and the completeness of the snapshot. The person asking the question received a response that touched on edge cases and temporal anomalies in crawling.

A Googlebot “screenshot” refers to a representation of what a web page looks like to Googlebot.

What a web page looks like depends on how it renders the page after executing JavaScript, loading CSS and downloading necessary images.

Google Search Console’s URL inspection tool gives an idea of what a web page looks like to Google. This tool helps publishers and SEOs understand how Google “sees” a web page.

Question About Knowing What Googlebot “Sees”

The person asking the question was talking about Googlebot screenshots. What they apparently meant was the rendered page as Googlebot itself sees it.

This is the question the Redditor asked:

“Is the Googlebot screenshot a complete picture of what Google can see?”

They later clarified with the following answers to questions:

“How can I know what google see in my article? …I want to know what Googlebot see in my website.”

Is Googlebot Screenshot A Complete Picture?

Returning to the original question of whether the “Googlebot screenshot a complete picture of what Google can see,” Google’s John Mueller offered the following answer.

“For the most part, yes. But there are some edge cases and temporal anomalies. Tell us more about what you’re trying to check.”

Mueller’s response acknowledges that the Googlebot screenshot represents what Google sees when it crawls a page.

Temporal Anomalies In Googlebot Screenshot

The person asking the question referred to a Googlebot screenshot as what Googlebot “sees” when it visits a web page. That also seems to be the context of Mueller’s answer.

Mueller’s answer referred to temporal anomalies which could be a reference to temporary issues at the time the web page was crawled that could have effected what resources were downloaded and consequently affected how the web page looked to Googlebot in that moment.

Google Search Console’s URL Inspection Tool also provides a snapshot that shows a live preview of how a web page appears to Google. It’s a good way to check if everything is rendered by Google the way it’s supposed to look.

Read the discussion on Reddit:

Is the Googlebot screenshot a complete picture of what Google can see?

Featured Image by Shutterstock/Sammby

The world’s first industrial-scale plant for green steel promises a cleaner future

As of 2023, nearly 2 billion metric tons of it were being produced annually, enough to cover Manhattan in a layer more than 13 feet thick. 

Making this metal produces a huge amount of carbon dioxide. Overall, steelmaking accounts for around 8% of the world’s carbon emissions—one of the largest industrial emitters and far more than such sources as aviation. The most common manufacturing process yields about two tons of carbon dioxide for every ton of steel.  

A handful of groups and companies are now making serious progress toward low- or zero-emission steel. Among them, the Swedish company Stegra stands out. (Originally named H2 Green Steel, the company renamed itself Stegra—which means “to elevate” in Swedish—in September.) The startup, formed in 2020, has raised close to $7 billion and is building a plant in Boden, a town in northern Sweden. It will be the first industrial-scale plant in the world to make green steel. Stegra says it is on track to begin production in 2026, initially producing 2.5 million metric tons per year and eventually making 4.5 million metric tons. 

The company uses so-called green hydrogen, which is produced using renewable energy, to process iron ore into steel. Located in a part of Sweden with abundant hydropower, Stegra’s plant will use hydro and wind power to drive a massive electrolyzer that splits water to make the hydrogen. The hydrogen gas will then be used to pull the oxygen out of iron ore to make metallic iron—a key step in steelmaking.  

This process of using hydrogen to make iron—and subsequently steel—has already been used at pilot plants by Midrex, an American company from which Stegra is purchasing the equipment. But Stegra will have to show that it will work in a far larger plant.

The world produces about 60,000 metric tons of steel every 15 minutes.

“We have multiple steps that haven’t really been proven at scale before,” says Maria Persson Gulda, Stegra’s chief technology officer. These steps include building one of the world’s largest electrolyzers. 

Beyond the unknowns of scaling up a new technology, Stegra also faces serious business challenges. The steel industry is a low-margin, intensely competitive sector in which companies win customers largely on price.

aerial view of construction site
The startup, formed in 2020, has raised close to $7 billion in financing and expects to begin operations in 2026 at its plant in Boden.
STEGRA

Once operations begin, Stegra calculates, it can come close to producing steel at the same cost as the conventional product, largely thanks to its access to cheap electricity. But it plans to charge 20% to 30% more to cover the €4.5 billion it will take to build the plant. Gulda says the company has already sold contracts for 1.2 million metric tons to be produced in the next five to seven years. And its most recent customers—such as car manufacturers seeking to reduce their carbon emissions and market their products as green—have agreed to pay the 30% premium. 

Now the question is: Can Stegra deliver? 

The secret of hydrogen

To make steel—an alloy of iron and carbon, with a few other elements thrown in as needed—you first need to get the oxygen out of the iron ore dug from the ground. That leaves you with the purified metal.

The most common steelmaking process starts in blast furnaces, where the ore is mixed with a carbon-­rich coal derivative called coke and heated. The carbon reacts with the oxygen in the ore to produce carbon dioxide; the metal left behind then enters another type of furnace, where more oxygen is forced into it under high heat and pressure. The gas reacts with remaining impurities to produce various oxides, which are then removed—leaving steel behind.  

The second conventional method, which is used to make a much smaller share of the world’s steel, is a process called direct reduction. This usually employs natural gas, which is separated into hydrogen and carbon monoxide. Both gases react with the oxygen to pull it out of the iron ore, creating carbon dioxide and water as by-products. 

The iron that remains is melted in an electric arc furnace and further processed to remove impurities and create steel. Overall, this method is about 40% lower in emissions than the blast furnace technique, but it still produces over a ton of carbon dioxide for every ton of steel.

But why not just use hydrogen instead of starting with natural gas? The only by-product would be water. And if, as Stegra plans to do, you use green hydrogen made using clean power, the result is a new and promising way of making steel that can theoretically produce close to zero emissions. 

Stegra’s process is very similar to the standard direct reduction technique, except that since it uses only hydrogen, it needs a higher temperature. It’s not the only possible way to make steel with a negligible carbon footprint, but it’s the only method on the verge of being used at an industrial scale. 

Premium marketing

Stegra has laid the foundations for its plant and is putting the roof and walls on its steel mill. The first equipment has been installed in the building where electric arc furnaces will melt the iron and churn out steel, and work is underway on the facility that will house a 700-megawatt electrolyzer, the largest in Europe.

To make hydrogen, purify iron, and produce 2.5 million metric tons of green steel annually, the plant will consume 10 terawatt-hours of electricity. This is a massive amount, on par with the annual usage of a small country such as Estonia. Though the costs of electricity in Stegra’s agreements are confidential, publicly available data suggest rates around €30 ($32) per megawatt-hour or more. (At that rate, 10 terawatt-hours would cost $320 million.) 

STEGRA

Many of the buyers of the premium green steel are in the automotive industry; they include Mercedes-Benz, Porsche, BMW, Volvo Group, and Scania, a Swedish company that makes trucks and buses. Six companies that make furniture, appliances, and construction material—including Ikea—have also signed up, as have five companies that buy steel and distribute it to many different manufacturers.

Some of these automakers—including Volvo, which will buy from Stegra and rival SSAB—are marketing cars made with the green steel as “fossil-free.” And since cars and trucks also have many parts that are much more expensive than the steel they use, steel that costs the automakers a bit more adds only a little to the cost of a vehicle—perhaps a couple of hundred dollars or less, according to some estimates. Many companies have also set internal targets to reduce emissions, and buying green steel can get them closer to those goals.

Stegra’s business model is made possible in part by the unique economic conditions within the European Union. In December 2022, the European Parliament approved a tariff on imported carbon-­intensive products such as steel, known as the Carbon Border Adjustment Mechanism (CBAM). As of 2024, this law requires those who import iron, steel, and other commodities to report the materials’ associated carbon emissions. 

Starting in 2026, companies will have to begin paying fees designed to be proportional to the materials’ carbon footprint. Some companies are already betting that it will be enough to make Stegra’s 30% premium worthwhile. 

crane hoisting an i-beam  next to a steel building frame

STEGRA

Though the law could incentivize decarbonization within the EU and for those importing steel into Europe, green steelmakers will probably also need subsidies to defray the costs of scaling up, says Charlotte Unger, a researcher at the Research Institute for Sustainability in Potsdam, Germany. In Stegra’s case, it will receive €265 million from the European Commission to help build its plant; it was also granted €250 million from the European Union’s Innovation Fund.  

Meanwhile, Stegra is working to reduce costs and beef up revenues. Olof Hernell, the chief digital officer, says the company has invested heavily in digital products to improve efficiency. For example, a semi-automated system will be used to increase or decrease usage of electricity according to its fluctuating price on the grid.

Stegra realized there was no sophisticated software for keeping track of the emissions that the company is producing at every step of the steelmaking process. So it is making its own carbon accounting software, which it will soon sell as part of a new spinoff company. This type of accounting is ultra-important to Stegra, Hernell says, since “we ask for a pretty significant premium, and that premium lives only within the promise of a low carbon footprint.” 

Not for everyone

As long as CBAM stays in place, Stegra believes, there will be more than enough demand for its green steel, especially if other carbon pricing initiatives come into force. The company’s optimism is boosted by the fact that it expects to be the first to market and anticipates costs coming down over time. But for green steel to affect the market more broadly, or stay viable once several companies begin making significant quantities of it, its manufacturing costs will eventually have to be competitive with those of conventional steel.

Stegra has sold contracts for 1.2 million metric tons of steel to be produced in the next five to seven years.

Even if Stegra has a promising outlook in Europe, its hydrogen-based steelmaking scheme is unlikely to make economic sense in many other places in the world—at least in the near future. There are very few regions with such a large amount of clean electricity and easy access to the grid. What’s more, northern Sweden is also rich in high-quality ore that is easy to process using the hydrogen direct reduction method, says Chris Pistorius, a metallurgical engineer and co-director of the Center for Iron and Steelmaking Research at Carnegie Mellon University.

Green steel can be made from lower-grade ore, says Pistorius, “but it does have the negative effects of higher electricity consumption, hence slower processing.”

Given the EU incentives, other hydrogen-based steel plants are in the works in Sweden and elsewhere in Europe. Hybrit, a green steel technology developed by SSAB, the mining company LKAB, and the energy producer Vattenfall, uses a process similar to Stegra’s. LKAB hopes to finish a demonstration plant by 2028 in Gällivare, also in northern Sweden. However, progress has been delayed by challenges in getting the necessary environmental permit.

Meanwhile, a company called Boston Metal is working to commercialize a different technique to break the bonds in iron oxide by running a current through a mixture of iron ore and an electrolyte, creating extremely high heat. This electrochemical process yields a purified iron metal that can be turned into steel. The technology hasn’t been proved at scale yet, but Boston Metal hopes to license its green steel process in 2026. 

Understandably, these new technologies will cost more at first, and consumers or governments will have to foot the bill, says Jessica Allen, an expert on green steel production at the University of Newcastle in Australia. 

In Stegra’s case, both seem willing to do so. But it will be more difficult outside the EU. What’s more, producing enough green steel to make a large dent in the sector’s emissions will likely require a portfolio of different techniques to succeed. 

Still, as the first to market, Stegra is playing a vital role, Allen says, and its performance will color perceptions of green steel for years to come. “Being willing to take a risk and actually build … that’s exactly what we need,” she adds. “We need more companies like this.”

For now, Stegra’s plant—rising from the boreal forests of northern Sweden—represents the industry’s leading effort. When it begins operations in 2026, that plant will be the first demonstration that steel can be made at an industrial scale without releasing large amounts of carbon dioxide—and, just as important, that customers are willing to pay for it. 

Douglas Main is a journalist and former senior editor and writer at National Geographic.

Ecommerce SEO Pro on AI, 2025 Tactics

Jeff Oxford’s initial attempt at ecommerce was selling dropshipped beer pong tables in 2013. The business, he says, didn’t survive, but his love for optimizing organic search traffic did. Thus began his SEO career and the launch of 180 Marketing, his agency.

Fast forward to 2024, and Jeff is an ecommerce SEO authority. Link building was on his mind when he appeared on the podcast in 2022. He now advises engagement — getting folks to click an organic search listing and consume the page’s content — and reminds merchants that AI search has helped ecommerce rankings.

He and I discussed those tactics and more in our recent conversation. Our entire audio is embedded below. The transcript is edited for clarity and length.

Eric Bandholz: Give us a quick rundown of who you are.

Jeff Oxford: I’ve been an ecommerce SEO nerd for 13 years. I got into it by trying to start my own dropshipping sites, like selling beer pong tables or 3D printers, but things didn’t go as planned. That’s when I fell in love with search engine optimization, particularly for ecommerce websites, and I’ve stuck with it. SEO can be complicated, but at its core, it’s about a few key activities to help sites rank better.

If you’re looking to rank well in 2025, user engagement is crucial. Google determines whether visitors click on your site, stay, or quickly hit the back button. User engagement is a strong signal for rankings. You could have a perfectly optimized page, but you won’t rank well if people aren’t staying.

Google’s data leak earlier this year confirmed what many SEOs suspected — user engagement plays a significant role in rankings. SEO practitioners theorize that Google collects data from Android devices and Chrome to assess how long people stay on a site, influencing rankings.

Bandholz: How do you protect against bots that attempt to manipulate engagement data by visiting and bouncing off competitor sites?

Oxford: The click-through rate on search result pages is a major ranking factor. More clicks should increase rankings. Rand Fishkin, a prominent SEO expert, demonstrated this with live experiments in which people clicked on a site and boosted its ranking from 5th to 1st or 2nd in real time.

But manipulating the click-through rate via bots isn’t that easy. Google has advanced technology to detect fake clicks, primarily to protect its ad system. Google monitors patterns, IPs, and behaviors. So, if you tried to flood your site with bot traffic, Google would likely detect it. Some people use platforms such as Mechanical Turk to pay for manual clicks, but making the pattern look natural is hard. Spikes in traffic are red flags, and Google uses pattern recognition to detect anomalies.

Bandholz: With AI evolving, how can companies detect if competitors are using malicious AI tactics?

Oxford: Negative SEO attacks were a more prominent issue before 2014. Back then, bad backlinks could penalize a site, and competitors could build spammy links to hurt you.

However, in 2014 Google changed its approach. Instead of penalizing sites for bad links, they devalue or ignore them. The same applies to click manipulation. Google can’t always tell if the click manipulation is from the site owner or a competitor. It’s more about neutralizing the effects rather than handing out penalties.

Bandholz: What are some tactics for increasing click-throughs and dwell time?

Oxford: Dwell time is easier to control since it depends on the user experience. To improve it, focus on fast page loads and usability. Basic conversion rate optimization techniques, such as A/B and usability testing, can help keep visitors engaged.

To increase branded traffic, though, it’s more about good digital marketing than manipulation. If you’re running strong branding and marketing campaigns, your branded search traffic will increase naturally. You might run Facebook ads, sponsor events, or create content. Branded search is a symptom of good digital marketing, not the direct goal.

Bandholz: Is there still an opportunity for bottom-of-funnel SEO strategies to capture sales for people searching for products?

Oxford: Absolutely. Backlinks still work surprisingly well despite many thinking they’re dead. Most of the top-ranking sites still have backlinks. For ecommerce sites, getting backlinks to category pages can help with rankings. Google confirmed this in its algorithm leak — backlinks remain a confirmed ranking factor.

It’s harder to get backlinks today than it was 10 years ago. Back then, we could send bulk emails to journalists and get backlinks quickly. But now, bloggers know their value. They’re more likely to ask for money in exchange for backlinks. However, ecommerce sites can still build backlinks through product reviews. You can contact bloggers, offer free products for honest reviews, and earn backlinks.

Bandholz: What else should entrepreneurs and operators consider when improving their SEO?

Oxford: Besides engagement metrics and link building, optimized content is essential. Many ecommerce sites overlook category pages and have only a product grid and a heading. Adding 200 to 300 words of relevant content, discussing the benefits of products, target audience, and usage tips can help these pages rank better. You don’t want fluffy content. Instead, focus on helpful information for potential buyers.

Bandholz: How much should brands budget for optimal SEO?

Oxford: It depends on your niche. In a competitive space like weight loss supplements or CBD, you’re looking at tens of thousands of dollars a month. For less competitive niches, $2,000 to $3,000 might suffice. For ecommerce sites with seven or eight figures in annual revenue, a budget of $3,000 to $8,000 per month is typical. Most of the budget will go toward link building, content creation, and technical SEO.

Bandholz: What other trends do you see in the ecommerce SEO space?

Oxford: Many merchants panicked over artificial intelligence, worrying it might take market share from Google or hurt SEO. But AI has been beneficial for ecommerce sites. Google recently cracked down on low-quality affiliate blogs trying to rank for “best of” keywords, wiping out around 90% of these sites. Now, ecommerce sites are benefiting as they’re absorbing a lot of this traffic.

Google is prioritizing brands over generic affiliate sites. Ecommerce brands with physical locations, verified reviews, and a trustworthy presence get favored in rankings. So, if you have an ecommerce site, blogging will be more impactful now than ever.

Bandholz: Where can people learn more?

Oxford: Our website is 180marketing.com. I’m on LinkedIn.