New AI-Assisted Managed WordPress Hosting For Ecommerce via @sejournal, @martinibuster

Bluehost announced two competitively priced managed WordPress ecommerce hosting solutions that make it easy for content creators and ecommerce stores to get online with WordPress and start accepting orders.

Both plans feature AI site migration tools that help users switch web hosting providers, free content delivery networks to speed up web page downloads, AI-assisted site creation tools and NVMe (Non-Volatile Memory Express) solid state storage which provides faster speeds than traditional web hosting storage.

The new plans enable users to sell products with WooCommerce and even offer paid courses online, all within a managed WordPress hosting environment that’s optimized for WordPress websites.

According to Bluehost:

“Bluehost’s eCommerce Essentials equips content creators with an intuitive, all‑in‑one toolkit—complete with AI‑powered site building, seamless payment integrations, paid courses and memberships, social logins, email templates and SEO tools—to effortlessly engage audiences and turn their passion into profit.”

There are two plans, eCommerce Essentials and eCommerce Premium, with the premium version offering more ecommerce features built-in. Both plans are surprisingly affordable considering the many features offered.

Satish Hemachandran, Chief Product Officer at Bluehost commented:

“At Bluehost, we understand the unique needs of today’s content creators and entrepreneurs who are building personal brands or online stores and turning their passion into profit.

With Bluehost WordPress eCommerce hosting plans, creators get a streamlined platform to easily develop personalized commerce experiences. From launching a store to engaging an audience and monetizing content, our purpose-built tools are designed to simplify the process and support long-term growth. Our mission is to empower creators with the right resources to strengthen their brand, increase their income, and succeed in the digital economy.”

Read more about the new ecommerce WordPress hosting here:

WooCommerce Online Stores – The future of online selling is here.

The Trump administration has shut down more than 100 climate studies

The Trump administration has terminated National Science Foundation grants for more than 100 research projects related to climate change amid a widening campaign to slash federal funding for scientists and institutions studying the rising risks of a warming world.

The move will cut off what’s likely to amount to tens of millions of dollars for studies that were previously approved and, in most cases, already in the works. 

Affected projects include efforts to develop cleaner fuels, measure methane emissions, improve understanding of how heat waves and sea-level rise disproportionately harm marginalized groups, and help communities transition to sustainable energy, according to an MIT Technology Review review of a GrantWatch database—a volunteer-led effort to track federal cuts to research—and a list of terminated grants from the National Science Foundation (NSF) itself. 

The NSF is one of the largest sources of US funding for university research, so the cancellations will deliver a big blow to climate science and clean-energy development.

They come on top of the White House’s broader efforts to cut research funding and revenue for universities and significantly raise their taxes. The administration has also strived to slash staff and budgets at federal research agencies, halt efforts to assess the physical and financial risks of climate change, and shut down labs that have monitored and analyzed the levels of greenhouse gases in the air for decades.

“I don’t think it takes a lot of imagination to understand where this is going,” says Daniel Schrag, co-director of the science, technology, and public policy program at Harvard University, which has seen greater funding cuts than any other university amid an escalating legal conflict with the administration. “I believe the Trump administration intends to zero out funding for climate science altogether.”

The NSF says it’s terminating grants that aren’t aligned with the agency’s program goals, “including but not limited to those on diversity, equity, and inclusion (DEI), environmental justice, and misinformation/disinformation.”

Trump administration officials have argued that DEI considerations have contaminated US science, favoring certain groups over others and undermining the public’s trust in researchers.

“Political biases have displaced the vital search for truth,” Michael Kratsios, head of the White House Office of Science and Technology Policy, said to a group of NSF administrators and others last month, according to reporting in Science.

Science v. politics

But research projects that got caught in the administration’s anti-DEI filter aren’t the only casualties of the cuts. The NSF has also canceled funding for work that has little obvious connection to DEI ambitions, such as research on catalysts.

Many believe the administration’s broader motivation is to undermine the power of the university system and prevent research findings that cut against its politics.

Trump and his officials have repeatedly argued, in public statements and executive orders, that climate fears are overblown and that burdensome environmental regulations have undermined the nation’s energy security and economic growth.

“It certainly seems like a deliberate attempt to undo any science that contradicts the administration,” says Alexa Fredston, an assistant professor of ocean sciences at the University of California, Santa Cruz. 

On May 28, a group of states including California, New York, and Illinois sued the NSF, arguing that the cuts illegally violated diversity goals and funding priorities clearly established by Congress, which controls federal spending.

A group of universities also filed a lawsuit against the NSF over its earlier decision to reduce the indirect cost rate for research, which reimburses universities for overhead expenses associated with work carried out on campuses. The plaintiffs included the California Institute of Technology, Carnegie Mellon University, and the Massachusetts Institute of Technology, which has also lost a number of research grants.

(MIT Technology Review is owned by, but editorially independent from, MIT.)

The NSF declined to comment.

‘Theft from the American people’

GrantWatch is an effort among researchers at rOpenSci, Harvard, and other organizations to track terminations of grants issued by the National Institutes of Health (NIH) and NSF. It draws on voluntary submissions from scientists involved as well as public government information. 

A search of its database for the terms “climate change,” “clean energy,” “climate adaptation,” “environmental justice,” and “climate justice” showed that the NSF has canceled funds for 118 projects, which were supposed to receive more than $100 million in total. Searching for the word “climate” produces more than 300 research projects that were set to receive more than $230 million. (That word often indicates climate-change-related research, but in some abstracts it refers to the cultural climate.) 

Some share of those funds has already been issued to research groups. The NSF section of the database doesn’t include that “outlaid” figure, but it’s generally about half the amount of the original grants, according to Noam Ross, a computational researcher and executive director of rOpenSci, a nonprofit initiative that promotes open and reproducible science.

A search for “climate change” among the NIH projects produces another 22 studies that were terminated and were still owed nearly $50 million in grants. Many of those projects explored the mental or physical health effects of climate change and extreme weather events.

The NSF more recently released its own list of terminated projects, which mostly mirrors GrantWatch’s findings and confirms the specific terminations mentioned in this story.

“These grant terminations are theft from the American people,” Ross said in an email response. “By illegally ending this research the Trump administration is wasting taxpayer dollars, gutting US leadership in science, and telling the world that the US government breaks its promises.”

Harvard, the country’s oldest university, has been particularly hard hit.

In April, the university sued the Trump administration over cuts to its research funding and efforts to exert control over its admissions and governance policies. The White House, in turn, has moved to eliminate all federal funds for the university, including hundreds of NSF and NIH grants. 

Daniel Nocera, a professor at Harvard who has done pioneering work on so-called artificial photosynthesis, a pathway for producing clean fuels from sunlight, said in an email that all of his grants were terminated. 

“I have no research funds,” he added.

Another terminated grant involved a collaboration between Harvard and the NSF National Center for Atmospheric Research (NCAR), designed to update the atmospheric chemistry component of the Community Earth System Model, an open-source climate model widely used by scientists around the world.

The research was expected to “contribute to a better understanding of atmospheric chemistry in the climate system and to improve air quality predictions within the context of climate change,” according to the NSF abstract. 

“We completed most of the work and were able to bring it to a stopping point,” Daniel Jacob, a professor at Harvard listed as the principal investigator on the project, said in an email. “But it will affect the ability to study chemistry-climate interactions. And it is clearly not right to pull funding from an existing project.”

Plenty of the affected research projects do, in one way or another, grapple with issues of diversity, equity, and inclusion. But that’s because there is ample evidence that disadvantaged communities experience higher rates of illness from energy-sector pollution, will be harder hit by the escalating effects of extreme weather, and are underrepresented in scientific fields.

One of the largest terminations cut off about $4 million of remaining funds for the CLIMATE Justice Initiative, a fellowship program at the University of California, Irvine designed to recruit, train, and mentor a more diverse array of researchers in Earth sciences.

The NSF decision occurred halfway into the 5-year program, halting funds for a number of fellows who were in the midst of environmental justice research efforts with community partners in Southern California. Kathleen Johnson, a professor at UC Irvine and director of the initiative, says the university is striving to find ways to fund as many participants as possible for the remainder of their fellowships.

“We need people from all parts of society who are trained in geoscience and climate science to address all these global challenges that we are facing,” she says. “The people who will be best positioned to do this work …  are the people who understand the community’s needs and are able to therefore work to implement equitable solutions.”

“Diverse teams have been shown to do better science,” Johnson adds.

Numerous researchers whose grants were terminated didn’t respond to inquiries from MIT Technology Review or declined to comment, amid growing concerns that the Trump administration will punish scientists or institutions that criticize its policies.

Coming cuts

The termination of existing NSF and NIH grants is just the start of the administration’s plans to cut federal funding for climate and clean-energy research. 

The White House’s budget proposal for the coming fiscal year seeks to eliminate tens of billions of dollars in funding across federal agencies, specifically calling out “Green New Scam funds” at the Department of Energy; “low-priority climate monitoring satellites” at NASA; “climate-dominated research, data, and grant programs” at the National Oceanic and Atmospheric Administration; and “climate; clean energy; woke social, behavioral, and economic sciences” at the NSF.

The administration released a more detailed NSF budget proposal on May 30th, which called for a 60% reduction in research spending and nearly zeroed out the clean energy technology program. It also proposed cutting funds by 97% for the US Global Change Research Program, which produces regular assessments of climate risks; 80% for the Ocean Observatories Initiative, a global network of ocean sensors that monitor shifting marine conditions; and 40% for NCAR, the atmospheric research center.

If Congress approves budget reductions anywhere near the levels the administration has put forward, scientists fear, it could eliminate the resources necessary to carry on long-running climate observation of oceans, forests, and the atmosphere. 

The administration also reportedly plans to end the leases on dozens of NOAA facilities, including the Global Monitoring Laboratory in Hilo, Hawaii. The lab supports the work of the nearby Mauna Loa Observatory, which has tracked atmospheric carbon dioxide levels for decades.

Even short gaps in these time-series studies, which scientists around the world rely upon, would have an enduring impact on researchers’ ability to analyze and understand weather and climate trends.

“We won’t know where we’re going if we stop measuring what’s happening,” says Jane Long, formerly the associate director of energy and environment at Lawrence Livermore National Lab. “It’s devastating—there’s no two ways around it.” 

Stunting science 

Growing fears that public research funding will take an even larger hit in the coming fiscal year are forcing scientists to rethink their research plans—or to reconsider whether they want to stay in the field at all, numerous observers said.

“The amount of funding we’re talking about isn’t something a university can fill indefinitely, and it’s not something that private philanthropy can fill for very long,” says Michael Oppenheimer, a professor of geosciences and international affairs at Princeton University. “So what we’re talking about is potentially cataclysmic for climate science.”

“Basically it’s a shit show,” he adds, “and how bad a shit show it is will depend a lot on what happens in the courts and Congress over the next few months.”

One climate scientist, who declined to speak on the record out of concern that the administration might punish his institution, said the declining funding is forcing researchers to shrink their scientific ambitions down to a question of “What can I do with my laptop and existing data sets?”

“If your goal was to make the United States a second-class or third-class country when it comes to science and education, you would be doing exactly what the administration is doing,” the scientist said. “People are pretty depressed, upset, and afraid.”

Given the rising challenges, Harvard’s Schrag fears that the best young climate scientists will decide to shift their careers outside of the US, or move into high tech or other fields where they can make significantly more money.

“We might lose a generation of talent—and that’s not going to get fixed four years from now,” he says. “The irony is that Trump is attacking the institutions and foundation of US science that literally made America great.”

The Download: US climate studies are being shut down, and building cities from lava

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The Trump administration has shut down more than 100 climate studies

The Trump administration has terminated National Science Foundation grants for more than 100 research projects related to climate change, according to an MIT Technology Review analysis of a database that tracks such cuts.

The move will cut off what’s likely to amount to tens of millions of dollars for studies that were previously approved and, in most cases, already in the works. Many believe the administration’s broader motivation is to undermine the power of the university system and prevent research findings that cut against its politics. Read the full story.

—James Temple

This architect wants to build cities out of lava

Arnhildur Pálmadóttir is an architect with an extraordinary mission: to harness molten lava and build cities out of it.

Pálmadóttir believes the lava that flows from a single eruption could yield enough building material to lay the foundations of an entire city. She has been researching this possibility for more than five years as part of a project she calls Lavaforming. Together with her son and colleague Arnar Skarphéðinsson, she has identified three potential techniques that could change how future homes are designed and built from repurposed lava. Read the full story.

—Elissaveta M. Brandon

This story is from the most recent edition of our print magazine, which is all about how technology is changing creativity. Subscribe now to read it and to receive future print copies once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 America is failing to win the tech race against China
In fields as diverse as drones and energy. (WSJ $)
+ Humanoid robots are an area of particular interest. (Bloomberg $)
+ China has accused the US of violating the pair’s trade truce. (FT $)

2 Who is really in charge of DOGE?
According to a fired staffer, it wasn’t Elon Musk. (Wired $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

3 Brazilians will soon be able to sell their digital data
It’s the first time citizens will be able to monetize their digital footprint. (Rest of World)

4 The Trump administration’s anti-vaccine stance is stoking fear among scientists
It’s slashing funding for mRNA trials, and experts are afraid to speak out. (The Atlantic $)
+ This annual shot might protect against HIV infections. (MIT Technology Review)

5 Tech companies want us to spend longer talking to chatbots
Those conversations can easily veer into dangerous territory. (WP $)
+ How we use AI in the future is up to us. (New Yorker $)
+ This benchmark used Reddit’s AITA to test how much AI models suck up to us. (MIT Technology Review)

6 TikTok’s mental health videos are rife with misinformation
A lot of the advice is useless at best, and harmful at worst. (The Guardian)

7 Lawyers are hooked on ChatGPT
Even though it’s inherently unreliable. (The Verge)
+ Yet another lawyer has been found referencing nonexistent citations. (The Guardian)
+ How AI is introducing errors into courtrooms. (MIT Technology Review)

8 How chefs are using generative AI 👩‍🍳
They’re starting to experiment with using it to create innovative new dishes. (NYT $)
+ Watch this robot cook shrimp and clean autonomously. (MIT Technology Review)

9 The influencer suing her rival has dropped her lawsuit
The legal fight over ownership of a basic aesthetic has come to an end. (NBC News)

10 Roblox’s new game has sparked a digital fruit underground market
And players are already spending millions of dollars every week. (Bloomberg $)

Quote of the day

“We can’t substitute complex thinking with machines. AI can’t replace our curiosity, creativity or emotional intelligence.”

—Mateusz Demski, a journalist in Poland, tells the Guardian about how his radio station employer laid him off, only to later launch shows fronted by AI-generated presenters.

One more thing

​​Adventures in the genetic time machine

An ancient-DNA revolution is turning the high-speed equipment used to study the DNA of living things on to specimens from the past.

The technology is being used to create genetic maps of saber-toothed cats, cave bears, and thousands of ancient humans, including Vikings, Polynesian navigators, and numerous Neanderthals. The total number of ancient humans studied is more than 10,000 and rising fast.

The old genes have already revealed remarkable stories of human migrations around the globe.

But researchers are hoping ancient DNA will be more than a telescope on the past—they hope it will have concrete practical use in the present. Read the full story.

—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The ancient Persians managed to keep cool using an innovative breeze-catching technique that could still be useful today.
+ Knowledge is power—here’s a helpful list of hoaxes to be aware of.
+ Who said it: Homer Simpson or Pete Hegseth?
+ I had no idea London has so many cat statues.

The Post-Traffic SEO Shift

Google’s new AI Mode highlights the dramatic changes in organic search. AI answers often eliminate the need to click, and because those answers rarely link to the brands they mention, users wanting more details must search for the brand names separately.

The result is massive shifts in optimizing for search engines:

  • Traffic is no longer a key ecommerce performance indicator, as many shoppers will make purchase decisions without clicking.
  • Optimizing for brand-name search is increasingly important: because AI answers don’t usually link to the brands they mention, consumers often search the brand or product name directly.

Business owners are understandably concerned and unsure how to adjust SEO.

Here’s my overview.

Position Products

Generative AI platforms use external sources to recommend products and brands. Unless they encounter content describing the benefits of a product or company, the platforms are unlikely to include or recommend it.

Thus an AI-driven SEO strategy includes creating and marketing “brand knowledge content,” which explains:

  • The brand’s unique value proposition.
  • Differences from competitors (e.g., price, quality, service).
  • Target audience, including geographic focus.

The goal is to supply info to large language models about your business to increase its chances of being surfaced in AI answers for related consumer questions.

The example below is a chart from Google’s AI Mode comparing Zoho and HubSpot, two popular customer management platforms, in response to a query.

Table from Google’s AI Mode comparing Zoho CRM and HubSpot CRM across customization, integration, user interface, AI capabilities, and pricing: HubSpot is noted for ease of use and advanced features, while Zoho is highlighted for customization and affordability.

Brand Mentions, Backlinks

Brand mentions are as important as backlinks for genAI algorithms. ChatGPT, Gemini, and others rely on “similarity” and co-occurrence, i.e., where a brand name appears in a relevant context of a query.

Yet backlinks remain important for traditional organic search rankings, and genAI platforms also rely on those engines: Google, Bing, and others.
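The co-occurrence idea can be illustrated with a toy sketch: counting how often topic terms appear near a brand mention across a small corpus. This is purely illustrative — the brand name, documents, and window size below are made up, and real genAI systems rely on embeddings over enormous training corpora rather than windowed word counts.

```python
from collections import Counter

def co_occurrences(docs, brand, topic_terms, window=10):
    """Count how often topic terms appear within `window` words
    of a brand mention. A toy model of brand/topic co-occurrence."""
    counts = Counter()
    brand = brand.lower()
    topic_terms = {t.lower() for t in topic_terms}
    for doc in docs:
        words = doc.lower().split()
        for i, word in enumerate(words):
            if word == brand:
                lo, hi = max(0, i - window), i + window + 1
                for neighbor in words[lo:hi]:
                    if neighbor in topic_terms:
                        counts[neighbor] += 1
    return counts

# Hypothetical corpus mentioning a fictional "Acme" brand.
docs = [
    "Acme makes affordable CRM software for small retailers",
    "Reviewers praised Acme for CRM customization and pricing",
]
print(co_occurrences(docs, "acme", ["crm", "pricing", "customization"]))
```

The more often a brand name appears near the terms buyers actually use, the stronger the association a model can learn — which is why brand mentions in relevant contexts matter even without a link.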

Hence optimizing for AI search should include link building and brand marketing. The following tactics will help with both:

  • Co-citation link building, such as appearing or being linked in listicles alongside competitors.
  • Media outreach for generating links and mentions from reputable outlets.
  • Reddit community building: Participating in relevant subreddits or managing your own. Reddit can raise visibility with journalists, Google, and ChatGPT.

Solve Problems, not Keywords

Generative AI search engines use a so-called “query fan-out” technique. Google introduced the term, but other LLMs use similar methods.

This technique goes beyond direct answers. It includes related and follow-up concepts to provide a more detailed explanation and solve users’ problems more efficiently.

Keyword research remains essential for understanding shoppers’ journeys, but optimizing for those terms is more than including them in titles, headings, and body text. Think about the problems driving each keyword and address them with additional info on your page.

My GPT, “SEO: Search Query Analyzer,” can assist, as can ChatGPT and Gemini via this prompt:

My target keyword is [KEYWORD]. What follow-up questions and additional information would help my target audience searching on this query?

Google Search Console Fails To Report Half Of All Search Queries via @sejournal, @MattGSouthern

New research from ZipTie reveals an issue with Google Search Console.

The study indicates that approximately 50% of search queries driving traffic to websites never appear in GSC reports. This leaves marketers with incomplete data regarding their organic search performance.

The research was conducted by Tomasz Rudzki, co-founder of ZipTie. His tests show that Google Search Console consistently overlooks conversational searches. These are the natural language queries people use when interacting with voice assistants or AI chatbots.

Simple Tests Prove The Data Gap

Rudzki started with a basic experiment on his website.

For several days, he searched Google using the same conversational question from different devices and accounts. These searches directed traffic to his site, which he could verify through other analytics tools.

However, when he checked Google Search Console for these specific queries, he found nothing. “Zero. Nada. Null,” as Rudzki put it.

To confirm this wasn’t isolated to his site, Rudzki asked 10 other SEO professionals to try the same test. All received identical results: their conversational queries were nowhere to be found in GSC data, even though the searches generated real traffic.

Search Volume May Affect Query Reporting

The research suggests that Google Search Console uses a minimum search volume threshold before it begins tracking queries. A search term may need to reach a certain number of searches before it appears in reports.

According to tests conducted by Rudzki’s colleague Jakub Łanda, when queries finally become popular enough to track, historical data from before that point appears to vanish.

Consider how people might search for iPhone information:

  • “What are the pros and cons of the iPhone 16?”
  • “Should I buy the new iPhone or stick with Samsung?”
  • “Compare iPhone 16 with Samsung S25”

Each question may receive only 10–15 searches per month individually. Combined with dozens of similar phrasings, however, these variations could represent hundreds of searches about the same topic.

GSC often overlooks these low-volume variations, despite their significant combined impact.
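The arithmetic behind this gap can be sketched in a few lines. The volumes below match the iPhone examples above, but the reporting threshold is an assumption — Google has not documented the actual cutoff.

```python
# Assumed minimum monthly volume before GSC reports a query;
# the real threshold, if one exists, is undocumented.
REPORTING_THRESHOLD = 20

# Hypothetical monthly volumes for conversational variants of one topic.
variants = {
    "what are the pros and cons of the iphone 16": 14,
    "should i buy the new iphone or stick with samsung": 12,
    "compare iphone 16 with samsung s25": 10,
}

# Queries individually below the threshold are invisible in reports,
# even though their combined volume is substantial.
hidden = {q: v for q, v in variants.items() if v < REPORTING_THRESHOLD}
print(f"Individually hidden queries: {len(hidden)}")
print(f"Combined monthly searches GSC may not report: {sum(hidden.values())}")
```

Scale those three variants up to the dozens of phrasings a real topic attracts, and the unreported volume quickly exceeds that of many queries that do appear in reports.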

Google Shows AI Answers But Hides the Queries

Here’s the confusing part: Google clearly understands conversational queries. Rudzki analyzed 140,000 questions from People Also Asked data and found that Google shows AI Overviews for 80% of these conversational searches.

Rudzki observed:

“So it seems Google is ready to show the AI answer on conversational queries. Yet, it struggles to report conversational queries in one of the most important tools in SEO’s and marketer’s toolkits.”

Why This Matters

When half of your search data is missing, strategic decisions turn into guesswork.

Content teams create articles based on keyword tools instead of genuine user questions. SEO professionals optimize for visible queries while overlooking valuable conversational searches that often go unreported.

Performance analysis becomes unreliable when pages appear to underperform in GSC but draw significant unreported traffic. Teams also lose the ability to identify emerging trends ahead of their competitors, as new topics only become apparent after they reach high search volumes.

What’s The Solution?

Acknowledge that GSC only shows part of the picture and adjust your strategy accordingly.

Switch from the Query tab to the Pages tab to identify which content drives traffic, regardless of the specific search terms used. Focus on creating comprehensive content that fully answers questions rather than targeting individual keywords.
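One way to put a number on the gap is to compare page-level clicks against the sum of clicks attributed to reported queries for the same page. The figures below are hypothetical, and the page paths are invented for illustration; in practice you would export both views from GSC.

```python
# Hypothetical GSC exports: total clicks per page, and clicks
# attributed to the queries GSC actually reports for that page.
page_clicks = {"/iphone-16-review": 1200, "/crm-comparison": 800}
reported_query_clicks = {"/iphone-16-review": 650, "/crm-comparison": 430}

# The difference approximates traffic from unreported queries.
for page, total in page_clicks.items():
    reported = reported_query_clicks.get(page, 0)
    share_hidden = (total - reported) / total
    print(f"{page}: {share_hidden:.0%} of clicks from unreported queries")
```

If the hidden share is consistently large for a page, that page is likely attracting conversational or low-volume queries worth investigating through other research methods.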

Supplement GSC data with additional research methods to understand conversational search patterns. Consider how your users interact with an AI assistant, as that’s increasingly how they search.

What This Means for the Future

The gap between how people search and the tools that track their searches is widening. Voice search is gaining popularity, with approximately 20% of individuals worldwide using it on a regular basis. AI tools are training users to ask detailed, conversational questions.

Until Google addresses these reporting gaps, successful SEO strategies will require multiple data sources and approaches that account for the invisible half of search traffic, which drives real results yet remains hidden from view.

The complete research and instructions to replicate these tests can be found in ZipTie’s original report.


Featured Image: Roman Samborskyi/Shutterstock

WordPress Takes Steps To Integrate AI via @sejournal, @martinibuster

WordPress announced the formation of an AI Team that will focus on coordinating the development and integration of AI within the WordPress core. The team is to function similarly to the Performance Team, focusing on developing canonical plugins that users can install to test new functionality before a decision is made about whether or how to integrate new functionalities into the WordPress core itself.

The goal for the team is to create a strategic focus, speed the path from testing to deployment, and provide a centralized location for collaborating on ideas and projects.

The team will include two Google employees, Felix Arntz and Pascal Birchler. Arntz is a Senior Software Engineer at Google who contributes to the WordPress core and to other WordPress plugins and has worked as a lead for the Performance Team.

Pascal Birchler, a Developer Relations Engineer and WordPress core committer, recently led a project to integrate the Model Context Protocol (MCP) with WordPress via WP-CLI.

The WordPress announcement called it an important step:

“This is an exciting and important step in WordPress’s evolution. I look forward to seeing what we’ll create together and in the open.”

WordPress First Steps On Path Blazed By Competitors

The formation of an AI team is long overdue: even the new open-source Drupal CMS, designed to provide an easy-to-use interface for marketers and creators, has AI-powered features built in. Proprietary CMS provider Wix and shopping platform Shopify have both already integrated AI into their users’ workflows.

Read the official WordPress announcement:

Announcing the Formation of the WordPress AI Team

Featured Image by Shutterstock/Hananeko_Studio

WordPress Unpauses Development But Has It Run Out Of Time? via @sejournal, @martinibuster

Automattic announced that it is reversing its four-month pause in WordPress development and will return to focusing on the WordPress core, Gutenberg, and other projects. The pause in contributions came at a critical moment, as competitors outpaced WordPress in ease of use and technological innovation left the platform behind.

Did WordPress Need A Four-Month Pause?

Automattic’s return to normal levels of contribution was initially contingent on WP Engine withdrawing its lawsuit against Automattic and Mullenweg, with the announcement stating:

“We’re excited to return to active contributions to WordPress core, Gutenberg, Playground, Openverse, and WordPress.org when the legal attacks have stopped.”

WP Engine and Automattic are still locked in litigation, so what changed?

Automattic suggests that it has reconsidered its place as the future of content management:

“After pausing our contributions to regroup, rethink, and plan strategically, we’re ready to press play again and return fully to the WordPress project.

…We’ve learned a lot from this pause that we can bring back to the project, including a greater awareness of the many ways WordPress is used and how we can shape the future of the web alongside so many passionate contributors. We’re committed to helping it grow and thrive…”

Automattic’s announcement suggests that they realized moving forward with WordPress is important despite continued litigation.

But did Automattic really need a four-month pause to come to that realization?

Where Did The WordPress Money Go?

And it’s not like Automattic was hurting for money to throw at WordPress. Salesforce Ventures invested $300 million into Automattic in 2019, and an elated Mullenweg wrote that this would enable them to nearly double the pace of innovation for WordPress.com, their enterprise offering WordPress VIP, WooCommerce, and Jetpack, and to increase resources for WordPress.org and Gutenberg.

Mullenweg wrote:

“For Automattic, the funding will allow us to accelerate our roadmap (perhaps by double) and scale up our existing products—including WordPress.com, WordPress VIP, WooCommerce, Jetpack, and (in a few days when it closes) Tumblr. It will also allow us to increase investing our time and energy into the future of the open source WordPress and Gutenberg.”

In the years immediately following the $300 million investment, updates to WooCommerce increased by between 47.62% and 80.95%, and slightly more in 2024. Jetpack continued at an average release schedule of seven updates per year, although it shot up to 22 updates in 2024. The enterprise-level WordPress VIP premium service may have also benefited (changelog here).

Updates to the WordPress core remained fairly unchanged according to the official release announcements, and Gutenberg releases also followed a steady pace, with no significant increases.

List of number of WordPress release announcements per year:

  • 2019 – 29 announcements
  • 2020 – 28 announcements
  • 2021 – 26 announcements
  • 2022 – 27 announcements
  • 2023 – 26 announcements
  • 2024 – 30 announcements
  • 2025 – 9 announcements

All the millions of dollars invested in Automattic, along with any other income earned, had no apparent effect on the pace of innovation in the WordPress core.

Survival Of The Fittest CMS

A positive development from Automattic’s pause to rethink is the announcement of a new AI group, modeled after their Performance group and tasked with coordinating AI initiatives within WordPress core development. Like the Performance group, the AI group was formed only after competitors had pulled ahead, so WordPress is once again late in adapting to user needs and the fast pace of technology.

Matt Mullenweg struggled to answer where WordPress would be in five years when asked at the February 2025 WordCamp Asia event. He asked someone from Automattic to join him on stage to answer the question, but that other person also couldn’t answer because there was, in fact, no plan or idea other than the short-term roadmap focused on the immediate future.

Mullenweg explained the lack of a long-term vision as a strategic decision to remain adaptable to the fast pace of technology:

“Outside of Gutenberg, we haven’t had a roadmap that goes six months or a year, or a couple versions, because the world changes in ways you can’t predict.

But being responsive is, I think, really is how organisms survive.

You know, Darwin, said it’s not the fittest of the species that survives. It’s the one that’s most adaptable to change. I think that’s true for software as well.”

That’s a somewhat surprising statement, given that WordPress has a history of being years late to prioritizing website performance and AI integration. Divi, Elementor, Beaver Builder, and other WordPress editing environments had already cracked the code on democratizing web design in 2017 with block-based, point-and-click editors when WordPress began their effort to develop their own block-based editor.

Eight years later, Gutenberg is so difficult for many users that the official Classic Editor plugin has over ten million installations, and advanced web developers prefer other, more advanced web builders.

Takeaways:

  • Automattic’s Strategic Reversal
    Automattic reversed its pause on WordPress contributions despite unresolved litigation with WP Engine, perhaps signaling a change in internal priorities or external pressures.
  • Delayed Response to AI Trends
    A new AI group has been formed within WordPress core development, but this move comes years after competitors embraced AI—suggesting a reactive rather than proactive strategy.
  • Lack of Long-Term Vision
    WordPress leadership admits to having no roadmap beyond the short term, framing adaptability as a strength even as the platform lags in addressing user needs and keeping up with technological trends.
  • Minimal Impact from Major Investments
    Despite receiving hundreds of millions in funding, core WordPress and Gutenberg development showed no significant acceleration, raising questions about where investment actually went.
  • Usability and Competitive Lag
    Gutenberg arguably struggles with usability, as shown by the popularity of the Classic Editor plugin and user preference for third-party builders.
  • WordPress at a Competitive Disadvantage
    WordPress now finds itself needing to catch up in a CMS market that has evolved rapidly in both ease of use and innovation.

The bottom line is that the pace of development for the WordPress core and Gutenberg remained steady after the 2019 investment. Despite the millions of dollars Automattic received from companies like Newfold Digital, sponsored contributions, and volunteer contributions from individuals, development and innovation maintained the same follow-the-competitors-from-behind pace.

Automattic’s return to WordPress core development inadvertently calls attention to how far the platform has fallen behind competitors like Wix in usability and innovation, despite major investments and years of community support. For users and developers, this means that WordPress must now work to regain trust by proving it can adapt quickly and deliver the tools that modern site developers, businesses, and content creators actually need.

Automattic has a legitimate dispute with WP Engine, but the way it was approached became a major distraction that resulted in an arguably unnecessary four-month pause to WordPress development. The platform might have been in danger of losing relevance if not for the work of third-party innovators, and it still arguably lags behind competitors.

Yoast AI Optimize is now available for Classic Editor

We’re excited to announce that Yoast AI Optimize is now also available when using the Classic Editor in WordPress!

You’ve finished your copy, great! But those pesky Yoast SEO Analysis lights aren’t all green and you have to make manual changes. That’s where Yoast AI Optimize comes in. With Yoast SEO Premium, you can now get AI-powered suggestions right inside your Classic Editor to help fine-tune your content.

What is Yoast AI Optimize?

Yoast AI Optimize brings smart, targeted SEO support into your writing flow. It gives AI-powered suggestions for specific assessments in the Yoast SEO analysis, such as length, structure, and keyphrase distribution. You’ll see exactly where improvements can be made and get quick, editable suggestions to help you fix them. Apply or dismiss them as you see fit; the final decision always remains yours.

Benefits:

  • Get real-time AI suggestions that help improve SEO and readability
  • Edit suggestions to match your style and tone of voice
  • Apply or dismiss suggestions easily without breaking your writing flow
  • Use it in both the Classic and Block editors with Yoast SEO Premium
  • Supports optimization for:
    • Keyphrase in introduction
    • Keyphrase distribution
    • Keyphrase density
    • Sentence length
    • Paragraph length

Whether you’re using the Classic Editor or sticking with the Block Editor, Yoast AI Optimize helps you improve your SEO score faster, without losing the personal touch.

If you’re curious to know how we built this feature, check out our developer blog post with all the behind-the-scenes details.

Ready to optimize smarter?
Update to Yoast SEO Premium to try AI Optimize in the Classic Editor today!

How CMOs Can Use Conversion Tracking & Attribution For Smarter Paid Media Strategy via @sejournal, @MenachemAni

For chief marketing officers of retail brands and businesses, knowing which channels and campaigns deserve the marketing budget can directly impact the success and length of their tenure.

But in today’s omnichannel environment of walled gardens, customers engage with your campaigns (and other assets) multiple times before converting.

Since there is no perfect conversion tracking or attribution, you need a system to decide where to spend your money.

Too many marketers still rely on outdated or overly complex attribution models, incomplete data, or pure guesswork.

Common side effects include over-investing in either the upper or lower funnel, while underfunding channels and campaigns that balance demand generation and demand capture.

In this article, we’ll break down how CMOs and marketing leaders can use conversion tracking and attribution data to:

  • Understand true channel performance.
  • Make better budget decisions.
  • Improve full-funnel efficiency.

Conversion Tracking In Google Ads: Limitations & Blind Spots

Running a Google Ads or paid media campaign without native conversion tracking is asking for trouble.

Not only will your account operate with blinders on, preventing the system from finding improvements and patterns, but you also won’t have any in-platform metrics to measure your own database against.

I also see that some accounts can take several weeks for reporting data to be fully attributed, primarily because of the click-to-purchase duration.

Google may not be fully accurate with all metrics, but you want it to understand what actions are meaningful to your business.

Lead Generation

  • Online conversion actions: form fill, chat, phone call.
  • Offline conversion stages: qualified lead, converted lead.
  • Support tools: WhatConverts, HubSpot, or other CRM to track lead data + Zapier for connectivity.

With leads, there is a challenge in terms of reconciling what is recorded online and what happens outside of the Google ecosystem.

Google’s system knows it got you a certain number of form fills, chats, or calls. What it needs to know is how many of those were good-quality leads, and how many went on to become actual sales.

That leads you to create a “next step” in the process, such as qualified leads, and feed it back into Google. You can then bid against those conversions or use them as observations; either way, they register in the system as a positive funnel event.
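One common way to close that loop is an offline conversion upload: export qualified leads (with their stored GCLIDs) from the CRM and build an import file for Google Ads. The sketch below uses the column layout Google has documented for click-conversion imports, but verify it against the current documentation before relying on it; all lead data here is made up.

```python
import csv

# Hypothetical CRM export: leads that progressed to "qualified" after the
# initial form fill. GCLIDs, timestamps, and values are placeholders.
qualified_leads = [
    {"gclid": "Cj0KCQexample1", "time": "2024-05-01 14:32:00+00:00", "value": 150.0},
    {"gclid": "Cj0KCQexample2", "time": "2024-05-02 09:15:00+00:00", "value": 150.0},
]

# Build a click-conversion import file in the column layout Google Ads
# expects for offline conversion uploads (check current docs for changes).
with open("qualified_leads.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name",
                     "Conversion Time", "Conversion Value", "Conversion Currency"])
    for lead in qualified_leads:
        writer.writerow([lead["gclid"], "Qualified Lead",
                         lead["time"], lead["value"], "USD"])
```

The same structure works for later funnel stages (e.g., a “Converted Lead” conversion name), so Google can learn which clicks produce revenue rather than just form fills.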

Read more: Building A Lead Generation Plan

Ecommerce

  • Online conversion actions: purchase, add to cart.
  • Offline conversion stages: subscriber, repeat buyer.
  • Support tools: Shopify to track returns, exchanges, etc.

For ecommerce, it’s typically a lot simpler to track the right events, but it’s trickier to rate their value to the business.

Google can record purchase transactions as an event, but it lacks your backend data on which locations have the fewest returns or exchanges, which products lead to higher rates of subscription and repeat purchases, and what each product’s margin is.

If you’re using Shopify, its Google &amp; YouTube app does pretty much all the heavy lifting needed to link the two platforms and track ecommerce sales.
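To illustrate why that backend data matters, here is a minimal sketch (with invented product numbers) of how return rates and gross margin change the picture that a platform-reported ROAS paints:

```python
# Hypothetical backend data per product: ad-attributed revenue, ad spend,
# return rate, and gross margin. All numbers are illustrative only.
products = {
    "hoodie":  {"ad_revenue": 10000.0, "ad_spend": 2500.0, "return_rate": 0.05, "margin": 0.60},
    "sticker": {"ad_revenue":  4000.0, "ad_spend": 2000.0, "return_rate": 0.01, "margin": 0.30},
}

def margin_adjusted_roas(p):
    # Keep only revenue that survives returns, then apply gross margin,
    # so the ratio reflects profit contribution rather than raw sales.
    kept_revenue = p["ad_revenue"] * (1 - p["return_rate"])
    profit = kept_revenue * p["margin"]
    return profit / p["ad_spend"]

for name, p in products.items():
    platform_roas = p["ad_revenue"] / p["ad_spend"]
    print(f"{name}: platform ROAS {platform_roas:.2f}x, "
          f"margin-adjusted {margin_adjusted_roas(p):.2f}x")
```

A product that looks strong on platform ROAS can look much weaker once returns and margin are factored in, which is exactly the data Google lacks unless you feed it back.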

How To Use Performance Data To Fuel Better Marketing Investments

“I know which channels and campaigns are providing the best ROI” is verbal gold for a CMO.

Being able to quantify the impact of where they spend their marketing budget positions them to make smarter decisions and increase their value to the business that employs them.

Unfortunately, this is easier said than done. Here are some ways to think through the more common hurdles that get in your way as a marketing leader.

Thinking Through In-Platform Attribution

Once you set up tracking and make sure you’re getting good performance data in, you can use it to inform attribution and omnichannel strategy.

My methodology is different from how many marketers approach this. I’m of the mind that attribution is not something that can be fully solved, and over-relying on third-party tools will send you in the wrong direction because they all have different biases.

Certain tools can’t see the actual power of YouTube, for example.

One study by Haus showed that YouTube’s in-platform reporting captures roughly a third of the impact that Haus’s own measurement sees. Many third-party attribution tools also can’t see view-through or engagement data for YouTube, so they end up with a higher-than-ideal margin of error in their reporting.

Some other tools can see the click and view attribution for Meta, but only click attribution for Google. What I like to do is optimize each campaign in-platform based on that platform’s data.

Handling Conflicting Attribution Data

When we come across situations where different platforms show us conflicting attribution data, we use overall sales reports and tools like TripleWhale or Northbeam to help validate that data.

This helps us understand, directionally, how putting another 20% of our budget into a specific campaign type impacts overall revenue.

It’s really about looking at blended numbers – some people call it media efficiency rate (MER) or blended return on ad spend (ROAS) – to see how that data changes over time with different campaign and marketing changes.
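As a quick sketch of that blended math, with made-up monthly totals:

```python
# Illustrative monthly totals across all channels; replace with your own
# sales reports and platform spend exports.
total_revenue = 120000.0          # all sales, not just platform-attributed
spend_by_channel = {"google_ads": 18000.0, "meta": 12000.0, "tiktok": 5000.0}

total_spend = sum(spend_by_channel.values())

# MER (media efficiency rate), also called blended ROAS: total revenue
# divided by total ad spend, ignoring each platform's attribution claims.
mer = total_revenue / total_spend
print(f"Total spend: ${total_spend:,.0f}, MER: {mer:.2f}x")
```

Tracking how this single ratio moves month over month as you shift budget is what reveals whether a campaign change actually lifted revenue, regardless of which platform claims the credit.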

We use this to allocate budget according to what really moves the needle as far as revenue and profit are concerned. This is much better than just relying on what a platform tells you.

With lead generation, this is less of a problem because most lead form fills happen pretty quickly after the initial click.

If the user submits the form on the same page they landed on, you will very likely capture UTM and GCLID parameters.

For lead gen, we typically look to verify that the number of leads in the customer relationship management (CRM) is within 10% of what Google attributes to itself.
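A minimal sketch of both steps, capturing tracking parameters at form submission and running the 10% sanity check, with a hypothetical URL and lead counts:

```python
from urllib.parse import urlparse, parse_qs

def extract_tracking_params(landing_url):
    """Pull UTM and GCLID parameters from a landing page URL so they can
    be stored alongside the lead in the CRM."""
    params = parse_qs(urlparse(landing_url).query)
    keep = ("utm_source", "utm_medium", "utm_campaign", "gclid")
    return {k: params[k][0] for k in keep if k in params}

# Hypothetical landing page URL from a Google Ads click
url = "https://example.com/demo?utm_source=google&utm_medium=cpc&utm_campaign=brand&gclid=abc123"
print(extract_tracking_params(url))

def within_tolerance(crm_leads, google_leads, tolerance=0.10):
    """Check that the CRM lead count is within 10% of what Google attributes."""
    return abs(crm_leads - google_leads) / google_leads <= tolerance

print(within_tolerance(95, 100))   # True: 5% discrepancy
print(within_tolerance(80, 100))   # False: 20% discrepancy
```

If the check fails, that usually points to a tracking gap (redirects stripping parameters, form tools dropping hidden fields) rather than an attribution-model disagreement.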

Point Of Diminishing Returns: Why All Growth Stalls

One thing many people forget is that with visibility and success in digital advertising, you pay a price in terms of incremental headroom.

In other words, you have much more untapped opportunity at 30% impression share than you do at 85%. Getting from 30% to 85% is probably going to be much less expensive than going from 85% to 90%.
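The headroom arithmetic is straightforward to sketch; the eligible-impression figure below is hypothetical:

```python
def untapped_impressions(current_share, eligible_impressions):
    # Impressions still available to win at the current impression share.
    return eligible_impressions * (1 - current_share)

# Hypothetical market of 1,000,000 eligible impressions per month.
eligible = 1_000_000
for share in (0.30, 0.85):
    print(f"At {share:.0%} impression share: "
          f"{untapped_impressions(share, eligible):,.0f} impressions left")
```

At 30% share there are still 700,000 impressions to compete for; at 85%, only 150,000 remain, and winning them means outbidding the advertisers who currently hold them.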

If you look at Google Ads’ own attribution, there’s a finite amount of headroom with Search and Shopping.

Once you hit the top of that, it usually tapers off somewhere between 70-80%, and you’ve got to start finding other campaigns/platforms to start feeding the funnel. That could be other Google properties (like YouTube) or channels like Facebook, Instagram, or TikTok.

Fortunately, Google is now starting to roll out this data for Performance Max in addition to Search and Shopping. This means you can take advantage of benefits like finding new advertising opportunities while still applying the optimization tactics you’re used to.

The other thing that’s really important, especially for newer advertisers, is not to expect the same performance from every campaign type.

People who have been around the block in PPC know, for example, that a 5x ROAS on branded search is realistic, but for YouTube, it might be 1x or even less.

You need to be okay with that, as long as you can get all the numbers to line up in terms of your total costs versus total revenue and margin.

Good Strategy Is Always Built On Clear Business Goals

Conversion tracking and attribution are essential parts of the CMO toolkit, but they mean little without the skill and literacy to interpret performance data in the context of a business.

If we were to sum up the most important part of this thought process, it would be:

  • Native platform tracking is crucial, but it’s only one part of the puzzle. Feed meaningful business outcomes back to the ad platforms to improve performance over time.
  • No attribution model is perfect. Treat attribution as directional rather than as an absolute, and be cautious about over-relying on third-party insights as they have their own blind spots.
  • Use blended metrics and cross-platform validation to make strategic choices based on actual business needs and financial goals, not just the metrics that one channel reports.
  • Recognize diminishing returns as you scale inside one platform and diversify intelligently across multiple channels to maintain growth.

Ultimately, your ability to optimize campaigns hinges on a central, unbiased source of truth that isn’t influenced by the incentives of any single ad platform.

Google or Meta are businesses built to serve their own business objectives and those of their shareholders, which don’t always align with those of your business.

By owning your data and attribution strategy, you set your brand up to make smarter, more confident marketing investments instead of pinning all your hopes on a long shot that’s rarely (if ever) accurate.


Featured Image: voronaman/Shutterstock

How To Create An SEO Roadmap That Actually Drives Results via @sejournal, @coreydmorris

Many companies approach SEO reactively – chasing rankings, responding to algorithm updates, being distracted by AI, or focusing on quick wins – without a long-term plan.

But successful SEO strategies start with a structured roadmap that aligns with business objectives, technical priorities, and content planning.

Planning isn’t an exciting term, and many planning processes are never-ending, poorly defined, or difficult to translate into an impactful deliverable.

I get it. SEO is a discipline that requires a lot of iterative updates, testing, and learning, and there can be a seemingly infinite number of ways to stack your tactics to reach the same goal.

I will point out, though, that if you’ve ever been disillusioned with the results you received or the (lack of) return on your investment of time or resources, you may not have had a strong enough plan or a detailed enough roadmap guiding the process.

To help give you a better opportunity to reach goals and reduce future regrets, I’m going to walk through what should go into a strategic, results-driven SEO roadmap, including:

  • Aligning SEO with business objectives and outcomes.
  • Setting realistic SEO goals with clear key performance indicators (KPIs).
  • Prioritizing SEO tactics and tasks.
  • Bonus: Seeing it through to success.

Aligning SEO With Business Objectives And Outcomes

This isn’t a new topic, and it is also not one that is exclusive to just SEO as a digital marketing channel. However, it is critical.

Digital marketing doesn’t have to be an expensive line item. It can (and should) be an investment, and investments expect a return.

If you’re a CMO or in marketing leadership, you feel this expectation daily.

If you’re deep in the details, wearing a digital marketing or SEO specialist hat, you likely have recurring conversations with those who are trying to quantify your efforts further.

Your plan needs to have a clearly defined set of goals. SEO (like most marketing initiatives) can’t fix brand, product, customer service, or retention problems in a business.

Whether in a dedicated SEO role, a broader digital marketing one, or even in leadership, I can attest to how uncomfortable it can be when company politics, silos, and other barriers exist.

It is much easier to just do the things you can control and not wade into messes.

If you don’t have business outcomes defined and aligned with your SEO KPIs, though, at some point, someone is going to ask and want to connect the dots between the efforts and resources and how it impacts the bottom line of the organization.

I recommend connecting people at the highest levels possible, along with the broader business plans, metrics, and overall performance data, with those doing the SEO work, so all of it can be utilized in the roadmap.

That way, when down the road things are happening at a technical or detailed level and questions arise about the direction of the plan and resources, there’s a business-case foundation.

Setting Realistic SEO Goals With Clear KPIs

Sometimes goals are dictated to us, and in other cases, things are wide open, and we are asked to share what we think is reasonable in terms of conversions or KPIs.

If you are able to align with business goals, you should have a good measure and understanding of what SEO could and should drive toward.

However, that doesn’t mean that the work is done when it comes to translating that down further into SEO measures.

It is getting harder and harder to accurately project organic search traffic with the rise of zero-click searches on Google and reduced clicks from AI Overviews.

The days of broad strategies and focusing on a quantity of traffic and letting your site filter for quality are over.

I strongly recommend that SEO KPIs be focused on quality metrics.

Work backwards from the business outcomes, in alignment with your funnels and customer journey maps, to the first possible touchpoint from SEO.

By looking at all the branches and anticipated ways someone might enter from SEO, you can categorize them and come up with a quality-first approach to your KPIs and expectations when looking at your current performance data and third-party research data.

You might find that the volume and expectations for what SEO can drive need to be tempered at this point, and this is the time to do it before getting way down the road in investment.

Prioritizing SEO Tactics And Tasks

Your roadmap so far is pretty metric- and goal-heavy. That’s on purpose. However, the other big factor that I often see tank even the most data-driven approach to SEO is prioritization and resourcing.

Years ago, when I had a national restaurant chain as a client, we had an awesome strategy mapped out. We did a test with a few locations and saw massive success.

When the roadmap and plan for the next year were ready to roll out, we were blindsided by resource constraints. The problem wasn’t with investment in the SEO functions or content creation, or even with the budget for the dev team. It was a matter of priorities for the IT and dev teams.

We didn’t know that they were booked out for the next six months on a big in-store POS system initiative and wouldn’t be able to touch our plan or anything more than a triaged website emergency.

While I pivoted the plan to local search and getting them to the top of Google Maps, it was a big letdown for all of us invested in the full plan, as we didn’t account for this type of challenge.

SEO is more than just SEO. It requires a range of other skills and disciplines. Maybe you have someone who wears all the hats (or it is you), but regardless of the situation, you’ve got to plot out all of the tactics and resources needed.

You can’t get it all done at once, but in pacing the plan you also can’t allow things to get put on the shelf when other tasks are stealing attention.

Knowing the non-SEO factors that can have an impact on SEO is really important in crafting your plan.

If you’re struggling with the specific tactics that should go in your plan, or the cadence for them, find external checklists and support, but be careful not to rely solely on checklists as a substitute for your tailored strategy.

Bonus: Seeing It Through To Success

If you’re struggling with the planning process, a framework that I recommend is the START Planning Process (full disclosure: I created it).

It provides a five-step process for digital marketing planning and can be applied to a multi-channel approach or narrowed to just focus on SEO and help you get through the strategy, tactics, and rest of the steps needed to arrive at your ultimate plan and roadmap.

When you activate your plan and put the roadmap into place, there will be distractions. Internal distractions and disruptions will happen. External changes will impact your perfectly crafted plan.

When any of these things happen, they become what I like to call “trigger events.” They are opportunities to revisit your roadmap, see how they might change your priorities and focuses for SEO, and then get back to work.

Even if trigger events don’t happen, you want to make sure that your plan, resource scheduling, and tasks have built-in reflection points where you can take a step back, evaluate your plan, and recalibrate if necessary.

Wrap Up

SEO is hard work. It is changing with AI, and if we weren’t previously, we are definitely focused now on quality more than quantity.

Traffic sources are diversifying, and we are working hard to keep up with where things are going, balanced against what still works for our businesses and growth today. Whew.

I hate hearing that “SEO doesn’t work for my company,” or the other reasons it gets written off, when I see it working for competitors or others in the same industry.

Yes, there are some limited cases where that’s true and SEO isn’t worth pursuing.

In most others, though, so many symptoms of it not working are connected to a root cause of not having a roadmap or plan.

I’m a strong believer that the more we’re being distracted, the more, now more than ever, we need to be disciplined, documented, and working off of a solid foundation.



Featured Image: KT Stock photos/Shutterstock